[
{
"msg_contents": "Observe the following buildfarm failures:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2019-07-03%2013%3A33%3A59\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-04-30%2014%3A45%3A26\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-06-02%2015%3A15%3A26\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-06-05%2006%3A15%3A26\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-06-26%2002%3A00%3A26\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dory&dt=2019-07-03%2022%3A15%3A27\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2019-04-10%2011%3A00%3A09\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2019-04-14%2012%3A42%3A31\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2019-05-01%2011%3A00%3A08\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2019-06-29%2011%3A33%3A25\n\n(These represent all but a couple of the ECPG-Check failures in the last\n90 days, excluding a spate around 31-May that were caused by an\nill-considered code change.)\n\nIn every one of these cases, the visible symptom is that one of the ECPG\ntest programs produced precisely nothing on stdout. stderr seems okay\nthough. (Not all of these cases have nonempty expected stderr, but enough\ndo that we can say it's not a case of the program just failing entirely.)\n\nWe've been seeing this sort of thing for a *long* time, although my\nrecollection is that in the past it's almost always been the thread-thread\ncase that failed, so that I'd supposed that there was some problem with\nthat particular test. It's now clear that that's not true though, and\nthere's seemingly some generic issue with the tests or test\ninfrastructure.\n\nThe other thing that I thought was invariably true was that it was a\nWindows-only thing. 
But here we have conchuela failing in the exact\nsame way, and it is, um, not Windows.\n\nAnyone have a theory what might be going on here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Jul 2019 19:24:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Weird intermittent ECPG test failures"
}
]
[
{
"msg_contents": "In my case, I want to sleep 3 seconds in xxx.sql for pg_regress program.\nbut I don't want to run 'select pg_sleep(3)' . so it is possible for\npg_regress?\n\nin psql, I can run \\! sleep(3); exit;\n\nbut looks pg_regress doesn't support it.\n\nIn my case, I want to sleep 3 seconds in xxx.sql for pg_regress program. but I don't want to run 'select pg_sleep(3)' . so it is possible for pg_regress? in psql, I can run \\! sleep(3); exit;but looks pg_regress doesn't support it.",
"msg_date": "Thu, 4 Jul 2019 20:04:22 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "run os command in pg_regress?"
}
]
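(The `\! sleep(3)` escape in the question works because psql hands everything after `\!` to the shell; whether that mechanism fires under pg_regress is what the poster is asking. As a hedged sketch, not from the thread itself, this mimics that shell-out from Python and verifies the delay actually elapses; the 1-second value is arbitrary.)

```python
# Hedged sketch: emulate the kind of shell-out psql's \! performs,
# and measure that the requested delay really passes.
import subprocess
import time

def shell_sleep(seconds: int) -> float:
    """Run the shell's sleep command (as psql's \\! would) and
    return the wall-clock time that elapsed."""
    start = time.monotonic()
    subprocess.run(["sleep", str(seconds)], check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    print(shell_sleep(1))
```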
[
{
"msg_contents": "Hi Guys,\n\nCan you please help here?\n\nBelow reported issue in past about duplicate key entries for primary key.\nhttps://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com\n\nthe solution was provided in 9.3 version of postgres but it seems issue is still there in 9.5 version which I am running currently.\n\nCan you please let me know if this is also known in 9.5? any fix or Workaround please?\n\nWBR,\n-Pawan\n\n\n\n\n\n\n\n\n\n\n\nHi Guys,\n \nCan you please help here?\n \nBelow reported issue in past about duplicate key entries for primary key.\nhttps://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com\n \nthe solution was provided in 9.3 version of postgres but it seems issue is still there in 9.5 version which I am running currently.\n \nCan you please let me know if this is also known in 9.5? any fix or Workaround please?\n \nWBR,\n-Pawan",
"msg_date": "Thu, 4 Jul 2019 13:37:01 +0000",
"msg_from": "\"Kumar, Pawan (Nokia - IN/Bangalore)\" <pawan.kumar@nokia.com>",
"msg_from_op": true,
"msg_subject": "duplicate key entries for primary key -- need urgent help"
},
{
"msg_contents": "On Thu, Jul 04, 2019 at 01:37:01PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Hi Guys,\n>\n>Can you please help here?\n>\n>Below reported issue in past about duplicate key entries for primary key.\n>https://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com\n>\n>the solution was provided in 9.3 version of postgres but it seems issue is still there in 9.5 version which I am running currently.\n>\n>Can you please let me know if this is also known in 9.5? any fix or Workaround please?\n>\n\nWhich version are you running, exactly? Whih minor version?\n\nWhy do you think it's the issue you linked?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 4 Jul 2019 18:47:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: duplicate key entries for primary key -- need urgent help"
},
{
"msg_contents": "Thanks a lot Tomas for the reply.\n\nWhich version are you running, exactly? Whih minor version?\n[Pawan]: Its (PostgreSQL) 9.5.9\n\nsai=> select version();\n version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n(1 row)\n\nWhy do you think it's the issue you linked?\n\n[Pawan]: Because the thread which I shared also has problem statement like \"Duplicate entries of Primary key\" .\nIf this is also known to this version, I will be appreciating a lot if we have some Workaround or config change.\n\nIn our production: See below entries, proc_id is primary key and we can see duplicate entries. How it is possible?\n\nsai=> select ctid,proc_id from etl_status where proc_id='2993229';\n ctid | proc_id\n----------+---------\n (381,20) | 2993229\n (388,28) | 2993229\n(2 rows)\n\nAny idea, how it happened?\n\nI will waiting for your reply\n\nWBR,\n-Pawan\n\n-----Original Message-----\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com> \nSent: Thursday, July 04, 2019 10:18 PM\nTo: Kumar, Pawan (Nokia - IN/Bangalore) <pawan.kumar@nokia.com>\nCc: andres@2ndquadrant.com; andrew@dunslane.net; josh@agliodbs.com; pgsql-hackers@postgresql.org\nSubject: Re: duplicate key entries for primary key -- need urgent help\n\nOn Thu, Jul 04, 2019 at 01:37:01PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Hi Guys,\n>\n>Can you please help here?\n>\n>Below reported issue in past about duplicate key entries for primary key.\n>https://www.postgresql.org/message-id/534C8B33.9050807@pgexperts.com\n>\n>the solution was provided in 9.3 version of postgres but it seems issue is still there in 9.5 version which I am running currently.\n>\n>Can you please let me know if this is also known in 9.5? any fix or Workaround please?\n>\n\nWhich version are you running, exactly? 
Whih minor version?\n\nWhy do you think it's the issue you linked?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 4 Jul 2019 17:34:21 +0000",
"msg_from": "\"Kumar, Pawan (Nokia - IN/Bangalore)\" <pawan.kumar@nokia.com>",
"msg_from_op": true,
"msg_subject": "RE: duplicate key entries for primary key -- need urgent help"
},
{
"msg_contents": "On Thu, Jul 04, 2019 at 05:34:21PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Thanks a lot Tomas for the reply.\n>\n>Which version are you running, exactly? Whih minor version?\n>[Pawan]: Its (PostgreSQL) 9.5.9\n>\n\nYou're missing 2 years of bugfixes, some of which are addressing data\ncorruption issues and might have caused this.\n\n>sai=> select version();\n> version\n>----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>(1 row)\n>\n>Why do you think it's the issue you linked?\n>\n>[Pawan]: Because the thread which I shared also has problem statement like \"Duplicate entries of Primary key\" .\n>If this is also known to this version, I will be appreciating a lot if we have some Workaround or config change.\n>\n\nDuplicate entries are clearly some sort of data corruption, but that\nmight have happened in various ways - it does not mean it's the same\nissue. And yes, 9.5.9 has a fix for the issue in the thread you linked.\n\n>In our production: See below entries, proc_id is primary key and we can see duplicate entries. How it is possible?\n>\n>sai=> select ctid,proc_id from etl_status where proc_id='2993229';\n> ctid | proc_id\n>----------+---------\n> (381,20) | 2993229\n> (388,28) | 2993229\n>(2 rows)\n>\n>Any idea, how it happened?\n>\n\nNo, that's impossible to say without you doing some more investigation.\nWe need to know when those rows were created, on which version that\nhappened (the system might have been updated and the corruption predates\nmight have happened on the previous version), and so on. 
For example, if\nthe system crashed or had any significant issues, that might be related\nto data corruption issues.\n\nWe know nothing about your system, so you'll have to do a bit of\ninvestigation, look for suspicious things, etc.\n\nFWIW it might be a good idea to look for other cases of data corruption.\nBoth to know the extent of the problem, and to gain insight.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 4 Jul 2019 20:01:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: duplicate key entries for primary key -- need urgent help"
},
{
"msg_contents": "Thanks for reply.\n\nThis has happened very often and at different production system.\nThere is no version change. System running with same version since 1 year but duplicate key issue came quiet a time.\nAnd impact is big because of that and only way to fix is to delete the duplicate primary key.\nAny suggestions to check which logs? Any command to run to get more info during the issue?\n\nAny potential configuration to check?\nPlz suggest\n\nWbr,\nPk\n________________________________\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nSent: Thursday, July 4, 2019 11:31:48 PM\nTo: Kumar, Pawan (Nokia - IN/Bangalore)\nCc: andres@2ndquadrant.com; andrew@dunslane.net; josh@agliodbs.com; pgsql-hackers@postgresql.org\nSubject: Re: duplicate key entries for primary key -- need urgent help\n\nOn Thu, Jul 04, 2019 at 05:34:21PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Thanks a lot Tomas for the reply.\n>\n>Which version are you running, exactly? Whih minor version?\n>[Pawan]: Its (PostgreSQL) 9.5.9\n>\n\nYou're missing 2 years of bugfixes, some of which are addressing data\ncorruption issues and might have caused this.\n\n>sai=> select version();\n> version\n>----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>(1 row)\n>\n>Why do you think it's the issue you linked?\n>\n>[Pawan]: Because the thread which I shared also has problem statement like \"Duplicate entries of Primary key\" .\n>If this is also known to this version, I will be appreciating a lot if we have some Workaround or config change.\n>\n\nDuplicate entries are clearly some sort of data corruption, but that\nmight have happened in various ways - it does not mean it's the same\nissue. 
And yes, 9.5.9 has a fix for the issue in the thread you linked.\n\n>In our production: See below entries, proc_id is primary key and we can see duplicate entries. How it is possible?\n>\n>sai=> select ctid,proc_id from etl_status where proc_id='2993229';\n> ctid | proc_id\n>----------+---------\n> (381,20) | 2993229\n> (388,28) | 2993229\n>(2 rows)\n>\n>Any idea, how it happened?\n>\n\nNo, that's impossible to say without you doing some more investigation.\nWe need to know when those rows were created, on which version that\nhappened (the system might have been updated and the corruption predates\nmight have happened on the previous version), and so on. For example, if\nthe system crashed or had any significant issues, that might be related\nto data corruption issues.\n\nWe know nothing about your system, so you'll have to do a bit of\ninvestigation, look for suspicious things, etc.\n\nFWIW it might be a good idea to look for other cases of data corruption.\nBoth to know the extent of the problem, and to gain insight.\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n\n\nThanks for reply.\n\nThis has happened very often and at different production system.\nThere is no version change. System running with same version since 1 year but duplicate key issue came quiet a time.\nAnd impact is big because of that and only way to fix is to delete the duplicate primary key.\nAny suggestions to check which logs? 
Any command to run to get more info during the issue?\n\nAny potential configuration to check?\nPlz suggest\n\nWbr,\nPk\n\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nSent: Thursday, July 4, 2019 11:31:48 PM\nTo: Kumar, Pawan (Nokia - IN/Bangalore)\nCc: andres@2ndquadrant.com; andrew@dunslane.net; josh@agliodbs.com; pgsql-hackers@postgresql.org\nSubject: Re: duplicate key entries for primary key -- need urgent help\n \n\n\nOn Thu, Jul 04, 2019 at 05:34:21PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Thanks a lot Tomas for the reply.\n>\n>Which version are you running, exactly? Whih minor version?\n>[Pawan]: Its (PostgreSQL) 9.5.9\n>\n\nYou're missing 2 years of bugfixes, some of which are addressing data\ncorruption issues and might have caused this.\n\n>sai=> select version();\n> version\n>----------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\n>(1 row)\n>\n>Why do you think it's the issue you linked?\n>\n>[Pawan]: Because the thread which I shared also has problem statement like \"Duplicate entries of Primary key\" .\n>If this is also known to this version, I will be appreciating a lot if we have some Workaround or config change.\n>\n\nDuplicate entries are clearly some sort of data corruption, but that\nmight have happened in various ways - it does not mean it's the same\nissue. And yes, 9.5.9 has a fix for the issue in the thread you linked.\n\n>In our production: See below entries, proc_id is primary key and we can see duplicate entries. 
How it is possible?\n>\n>sai=> select ctid,proc_id from etl_status where proc_id='2993229';\n> ctid | proc_id\n>----------+---------\n> (381,20) | 2993229\n> (388,28) | 2993229\n>(2 rows)\n>\n>Any idea, how it happened?\n>\n\nNo, that's impossible to say without you doing some more investigation.\nWe need to know when those rows were created, on which version that\nhappened (the system might have been updated and the corruption predates\nmight have happened on the previous version), and so on. For example, if\nthe system crashed or had any significant issues, that might be related\nto data corruption issues.\n\nWe know nothing about your system, so you'll have to do a bit of\ninvestigation, look for suspicious things, etc.\n\nFWIW it might be a good idea to look for other cases of data corruption.\nBoth to know the extent of the problem, and to gain insight.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 4 Jul 2019 18:28:03 +0000",
"msg_from": "\"Kumar, Pawan (Nokia - IN/Bangalore)\" <pawan.kumar@nokia.com>",
"msg_from_op": true,
"msg_subject": "Re: duplicate key entries for primary key -- need urgent help"
},
{
"msg_contents": "On Thu, Jul 04, 2019 at 06:28:03PM +0000, Kumar, Pawan (Nokia - IN/Bangalore) wrote:\n>Thanks for reply.\n>\n>This has happened very often and at different production system.\n>There is no version change. System running with same version since 1 year but duplicate key issue came quiet a time.\n>And impact is big because of that and only way to fix is to delete the duplicate primary key.\n>Any suggestions to check which logs? Any command to run to get more info during the issue?\n>\n>Any potential configuration to check?\n>Plz suggest\n>\n\nWell, I've already pointed out you're missing 2 years worth of fixes, so\nupgrading to current minor version is the first thing I'd do. (I doubt\npeople will be rushing to help you in their free time when you're\nmissing two years of fixes, possibly causing this issue.)\n\nIf the issue happens even after upgrading, we'll need to see more details\nabout an actual case - commands creating/modifying the duplicate rows,\nor anything you can find. It's impossible to help you when we only know\nthere are duplicate values in a PK.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 4 Jul 2019 20:39:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: duplicate key entries for primary key -- need urgent help"
}
]
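(Tomas's closing advice is to scan for other cases of corruption; in SQL that is a `GROUP BY pk HAVING count(*) > 1` over each unique key. As a hedged illustration, not from the thread, here is the same scan in pure Python over `(ctid, pk)` pairs dumped from a table; the function name and sample rows are mine, with the duplicate `proc_id` taken from the thread.)

```python
# Hedged sketch: given (ctid, primary-key) pairs dumped from a table,
# report every key value that appears more than once, with its heap
# locations - the same check as GROUP BY pk HAVING count(*) > 1.
from collections import defaultdict

def find_duplicate_keys(rows):
    """Map each pk value that occurs more than once to its list of ctids."""
    seen = defaultdict(list)
    for ctid, pk in rows:
        seen[pk].append(ctid)
    return {pk: ctids for pk, ctids in seen.items() if len(ctids) > 1}

# The two corrupted rows from the thread, plus one healthy row.
rows = [("(381,20)", "2993229"), ("(388,28)", "2993229"), ("(4,1)", "42")]
```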
[
{
"msg_contents": "Paul Eggert, in https://mm.icann.org/pipermail/tz/2019-June/028172.html:\n> zic’s -p option was intended as a transition from historical\n> System V code that treated TZ=\"XXXnYYY\" as meaning US\n> daylight-saving rules in a time zone n hours west of UT,\n> with XXX abbreviating standard time and YYY abbreviating DST.\n> zic -p allows the tzdata installer to specify (say)\n> Europe/Brussels's rules instead of US rules. This behavior\n> is not well documented and often fails in practice; for example it\n> does not work with current glibc for contemporary timestamps, and\n> it does not work in tzdb itself for timestamps after 2037.\n> So, document it as being obsolete, with the intent that it\n> will be removed in a future version. This change does not\n> affect behavior of the default installation.\n\nAs he says, this doesn't work for post-2038 dates:\n\nregression=# set timezone = 'FOO5BAR';\nSET\nregression=# select now();\n now \n-------------------------------\n 2019-07-04 11:55:46.905382-04\n(1 row)\n\nregression=# select timeofday();\n timeofday \n-------------------------------------\n Thu Jul 04 11:56:14.102770 2019 BAR\n(1 row)\n\nregression=# select '2020-07-04'::timestamptz;\n timestamptz \n------------------------\n 2020-07-04 00:00:00-04\n(1 row)\n\nregression=# select '2040-07-04'::timestamptz;\n timestamptz \n------------------------\n 2040-07-04 00:00:00-05 <<-- should be -04\n(1 row)\n\nand this note makes it clear that the IANA crew aren't planning on fixing\nthat. 
It does work if you write a full POSIX-style DST specification:\n\nregression=# set timezone = 'FOO5BAR,M3.2.0,M11.1.0';\nSET\nregression=# select '2040-07-04'::timestamptz;\n timestamptz \n------------------------\n 2040-07-04 00:00:00-04\n(1 row)\n\nso I think what Eggert has in mind here is that they'll remove the\nTZDEFRULES-loading logic and always fall back to TZDEFRULESTRING when\npresented with a POSIX-style zone spec that lacks explicit transition\ndate rules.\n\nSo, what if anything should we do about this? We do document posixrules,\nvery explicitly, see datatype.sgml around line 2460:\n\n When a daylight-savings zone abbreviation is present,\n it is assumed to be used\n according to the same daylight-savings transition rules used in the\n IANA time zone database's <filename>posixrules</filename> entry.\n In a standard <productname>PostgreSQL</productname> installation,\n <filename>posixrules</filename> is the same as <literal>US/Eastern</literal>, so\n that POSIX-style time zone specifications follow USA daylight-savings\n rules. If needed, you can adjust this behavior by replacing the\n <filename>posixrules</filename> file.\n\nOne option is to do nothing until the IANA code actually changes,\nbut as 2038 gets closer, people are more likely to start noticing\nthat this \"feature\" doesn't work as one would expect.\n\nWe could get out front of the problem and remove the TZDEFRULES-loading\nlogic ourselves. That would be a bit of a maintenance hazard, but perhaps\nnot too awful, because we already deviate from the IANA code in that area\n(we have our own ideas about when/whether to try to load TZDEFRULES).\n\nI don't think we'd want to change this behavior in the back branches,\nbut it might be OK to do it as a HEAD change. I think I'd rather do\nit like that than be forced into playing catchup when the IANA code\ndoes change.\n\nA more aggressive idea would be to stop supporting POSIX-style timezone\nspecs altogether, but I'm not sure I like that answer. 
Even if we could\nget away with it from a users'-eye standpoint, I think we have some\ninternal dependencies on being able to use such specifications.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Jul 2019 12:38:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "More tzdb fun: POSIXRULES is being deprecated upstream"
},
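(Tom's psql session above can be mirrored outside Postgres with the C library's own TZ handling; the point is that a full POSIX spec with explicit transition rules keeps working past 2037 regardless of any posixrules file. This is a hedged sketch using Python's `time.tzset`, which is Unix-only and reflects the platform C library, so the bare-'FOO5BAR' half of Tom's demonstration is deliberately omitted because its result depends on whether a posixrules file is installed.)

```python
# Hedged sketch (glibc/Unix assumed): with explicit M-rules in TZ,
# a 2040 date resolves to DST, matching Tom's '2040-07-04 ... -04'.
import os
import time

def local_struct(tz: str, year: int, month: int, day: int):
    """struct_time for local noon on the given date under TZ=tz."""
    os.environ["TZ"] = tz
    time.tzset()
    t = time.mktime((year, month, day, 12, 0, 0, 0, 0, -1))
    return time.localtime(t)

lt = local_struct("FOO5BAR,M3.2.0,M11.1.0", 2040, 7, 4)
# July 4, 2040 lies between the M3.2.0 and M11.1.0 transitions,
# so DST applies: offset -04:00, abbreviation BAR.
```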
{
"msg_contents": "Last year I wrote:\n> Paul Eggert, in https://mm.icann.org/pipermail/tz/2019-June/028172.html:\n>> zic’s -p option was intended as a transition from historical\n>> System V code that treated TZ=\"XXXnYYY\" as meaning US\n>> daylight-saving rules in a time zone n hours west of UT,\n>> with XXX abbreviating standard time and YYY abbreviating DST.\n>> zic -p allows the tzdata installer to specify (say)\n>> Europe/Brussels's rules instead of US rules. This behavior\n>> is not well documented and often fails in practice; for example it\n>> does not work with current glibc for contemporary timestamps, and\n>> it does not work in tzdb itself for timestamps after 2037.\n>> So, document it as being obsolete, with the intent that it\n>> will be removed in a future version. This change does not\n>> affect behavior of the default installation.\n\nWell, we ignored this for a year, but it's about to become unavoidable:\nhttp://mm.icann.org/pipermail/tz/2020-June/029093.html\nIANA upstream is changing things so that by default there will not be\nany \"posixrules\" file in the tz database.\n\nThat wouldn't directly affect our builds, since we don't use their\nMakefile anyway. But it will affect installations that use\n--with-system-tzdata, which I believe is most vendor-packaged\nPostgres installations.\n\nIt's possible that most or even all tzdata packagers will ignore\nthe change and continue to ship a posixrules file, for backwards\ncompatibility's sake. But I doubt we should bet that way.\nglibc-based distros, in particular, would have little motivation to\ndo so. 
We should expect that, starting probably this fall, there\nwill be installations with no posixrules file.\n\nThe minimum thing that we have to do, I'd say, is to change the\ndocumentation to explain what happens if there's no posixrules file.\nHowever, in view of the fact that the posixrules feature doesn't work\npast 2037 and isn't going to be fixed, maybe we should just nuke it\nnow rather than waiting for our hand to be forced. I'm not sure that\nI've ever heard of anyone replacing the posixrules file anyway.\n(The fallback case is actually better in that it works for dates past\n2037; it's worse only in that you can't configure it.)\n\nI would definitely be in favor of \"nuke it now\" with respect to HEAD.\nIt's a bit more debatable for the back branches. However, all branches\nare going to be equally exposed to updated system tzdata trees, so\nwe've typically felt that changes in the tz-related code should be\nback-patched.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jun 2020 14:08:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "I wrote:\n> The minimum thing that we have to do, I'd say, is to change the\n> documentation to explain what happens if there's no posixrules file.\n\nHere's a proposed patch to do that. To explain this, we more or less\nhave to fully document the POSIX timezone string format (otherwise\nnobody's gonna understand what \"M3.2.0,M11.1.0\" means). That's something\nwe've glossed over for many years, and I still feel like it's not\nsomething to explain in-line in section 8.5.3, so I shoved all the gory\ndetails into a new section in Appendix B. To be clear, nothing here is\nnew behavior, it was just undocumented before.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 Jun 2020 00:26:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 12:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a proposed patch to do that. To explain this, we more or less\n> have to fully document the POSIX timezone string format (otherwise\n> nobody's gonna understand what \"M3.2.0,M11.1.0\" means). That's something\n> we've glossed over for many years, and I still feel like it's not\n> something to explain in-line in section 8.5.3, so I shoved all the gory\n> details into a new section in Appendix B. To be clear, nothing here is\n> new behavior, it was just undocumented before.\n\nI'm glad you are proposing to document this, because the set of people\nwho had no idea what \"M3.2.0,M11.1.0\" means definitely included me.\nIt's a little confusing, though, that you documented it as Mm.n.d but\nthen in the text the order of explanation is d then m then n. Maybe\nswitch the text around so the order matches, or even use something\nlike Mmonth.occurrence.day.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 12:43:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's a little confusing, though, that you documented it as Mm.n.d but\n> then in the text the order of explanation is d then m then n. Maybe\n> switch the text around so the order matches, or even use something\n> like Mmonth.occurrence.day.\n\nYeah, I struggled with that text for a bit. It doesn't seem to make sense\nto explain that n means the n'th occurrence of a particular d value before\nwe've explained what d is, so explaining the fields in their syntactic\norder seems like a loser. But we could describe m first without that\nproblem.\n\nNot sure about replacing the m/n/d notation --- that's straight out of\nPOSIX, so inventing our own terminology might just confuse people who\ndo know the spec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 13:05:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On Thu, Jun 18, 2020 at 1:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It's a little confusing, though, that you documented it as Mm.n.d but\n> > then in the text the order of explanation is d then m then n. Maybe\n> > switch the text around so the order matches, or even use something\n> > like Mmonth.occurrence.day.\n>\n> Yeah, I struggled with that text for a bit. It doesn't seem to make sense\n> to explain that n means the n'th occurrence of a particular d value before\n> we've explained what d is, so explaining the fields in their syntactic\n> order seems like a loser. But we could describe m first without that\n> problem.\n\nYou could consider something along the lines of:\n\nThis form specifies a transition that always happens during the same\nmonth and on the same day of the week. m identifies the month, from 1\nto 12. n specifies the n'th occurrence of the day number identified by\nd. n is a value between 1 and 4, or 5 meaning the last occurrence of\nthat weekday in the month (which could be the fourth or the fifth). d\nis a value between 0 and 6, with 0 indicating Sunday.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Jun 2020 14:15:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
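(Robert's prose description of the Mm.n.d rule can be checked mechanically. A hedged sketch, with the function name my own invention: it picks the n'th occurrence of weekday d, 0 = Sunday, in month m, treating n = 5 as "last occurrence", exactly as described above.)

```python
# Hedged sketch of POSIX 'Mm.n.d' date selection, per the description
# in the thread: m = month 1-12, n = n'th occurrence (5 = last),
# d = weekday with 0 = Sunday.
import calendar
import datetime

def posix_m_rule(year: int, m: int, n: int, d: int) -> datetime.date:
    """Date selected by an 'Mm.n.d' rule in the given year."""
    # Python's weekday() uses 0 = Monday; POSIX uses 0 = Sunday.
    target = (d - 1) % 7
    days = [day
            for day in range(1, calendar.monthrange(year, m)[1] + 1)
            if datetime.date(year, m, day).weekday() == target]
    return datetime.date(year, m, days[-1] if n == 5 else days[n - 1])
```

For example, the US rule "M3.2.0,M11.1.0" yields March 8 and November 1 in 2020, the actual 2020 US DST transition dates, and the n = 5 form gives the last Sunday of a month.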
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> You could consider something along the lines of:\n\n> This form specifies a transition that always happens during the same\n> month and on the same day of the week. m identifies the month, from 1\n> to 12. n specifies the n'th occurrence of the day number identified by\n> d. n is a value between 1 and 4, or 5 meaning the last occurrence of\n> that weekday in the month (which could be the fourth or the fifth). d\n> is a value between 0 and 6, with 0 indicating Sunday.\n\nAdopted with some minor tweaks. Thanks for the suggestion!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 16:28:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "I wrote:\n> ... We should expect that, starting probably this fall, there\n> will be installations with no posixrules file.\n\n> The minimum thing that we have to do, I'd say, is to change the\n> documentation to explain what happens if there's no posixrules file.\n> However, in view of the fact that the posixrules feature doesn't work\n> past 2037 and isn't going to be fixed, maybe we should just nuke it\n> now rather than waiting for our hand to be forced. I'm not sure that\n> I've ever heard of anyone replacing the posixrules file anyway.\n> (The fallback case is actually better in that it works for dates past\n> 2037; it's worse only in that you can't configure it.)\n\nI experimented with removing the posixrules support, and was quite glad\nI did, because guess what: our regression tests fall over. If we do\nnothing we can expect that they'll start failing on various random systems\ncome this fall.\n\nThe cause of the failure is that we set the timezone for all regression\ntests to just 'PST8PDT', which is exactly the underspecified POSIX syntax\nthat is affected by the posixrules feature. So, with the fallback\nrule of \"M3.2.0,M11.1.0\" (which corresponds to US law since 2007)\nwe get the wrong answers for some old test cases involving dates in 2005.\n\nI'm inclined to think that the simplest fix is to replace 'PST8PDT' with\n'America/Los_Angeles' as the standard zone setting for the regression\ntests. We definitely should be testing behavior with time-varying DST\nlaws, and we can no longer count on POSIX-style zone names to do that.\n\nAnother point, which I've not looked into yet, is that I'd always\nsupposed that PST8PDT and the other legacy US zone names would result\nin loading the zone files of those names, ie /usr/share/zoneinfo/PST8PDT\nand friends. This seems not to be happening though. Should we try\nto make it happen? 
It would probably result in fewer surprises once\nposixrules goes away, because our regression tests are likely not the\nonly users of these zone names.\n\n(I'd still be inclined to do the first thing though; it seems to me\nthat the historical behavior of 'America/Los_Angeles' is way more\nlikely to hold still than that of 'PST8PDT'. The IANA crew might\nnuke the latter zone entirely at some point, especially if the\nrepeated proposals to get rid of DST in the US ever get anywhere.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jun 2020 19:17:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On 2020-06-17 20:08, Tom Lane wrote:\n> I would definitely be in favor of \"nuke it now\" with respect to HEAD.\n> It's a bit more debatable for the back branches. However, all branches\n> are going to be equally exposed to updated system tzdata trees, so\n> we've typically felt that changes in the tz-related code should be\n> back-patched.\n\nIt seems sensible to me to remove it in master and possibly \nREL_13_STABLE, but leave it alone in the back branches.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jun 2020 10:09:46 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "I wrote:\n> I experimented with removing the posixrules support, and was quite glad\n> I did, because guess what: our regression tests fall over. If we do\n> nothing we can expect that they'll start failing on various random systems\n> come this fall.\n\nTo clarify, you can produce this failure without any code changes:\nbuild a standard installation (*not* using --with-system-tzdata),\nremove its .../share/timezone/posixrules file, and run \"make\ninstallcheck\". So builds that do use --with-system-tzdata will fail both\n\"make check\" and \"make installcheck\" if the platform's tzdata packager\ndecides to get rid of the posixrules file.\n\nHowever, on closer inspection, all the test cases that depend on 'PST8PDT'\nare fine, because we *do* pick up the zone file by that name. The cases\nthat fall over are a few in horology.sql that depend on\n\nSET TIME ZONE 'CST7CDT';\n\nThere is no such zone file, because that's a mistake: the central US\nzone is more properly rendered 'CST6CDT'. So this is indeed a bare\nPOSIX zone specification, and its behavior changes if there's no\nposixrules file to back-fill knowledge about pre-2007 DST laws.\n\nThese test cases originated in commit b2b6548c7. That was too long ago\nto be sure, but I suspect that the use of a bogus zone was just a thinko;\nthere's certainly nothing in the commit log or the text of the patch\nsuggesting that it was intentional. Still, it seems good to be testing\nour POSIX-zone-string code paths, so I'm inclined to leave it as CST7CDT\nbut remove the dependence on posixrules by adding an explicit transition\nrule.\n\nAlso, I notice a couple of related documentation issues:\n\n* The same commit added a documentation example that also cites CST7CDT.\nThat needs to be fixed to correspond to something that would actually\nbe used in the real world, probably America/Denver. 
Otherwise the\nexample will fail to work for some people.\n\n* We should add something to the new appendix about POSIX zone specs\npointing out that while EST5EDT, CST6CDT, MST7MDT, PST8PDT look like they\nare POSIX strings, they actually are captured by IANA zone files, so\nthat they produce valid historical US DST transitions even when a\nplain POSIX string wouldn't.\n\nI'm less excited than I was yesterday about removing the tests' dependency\non 'PST8PDT'. It remains possible that we might need to do that someday,\nbut I doubt it'd happen without plenty of warning.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 11:41:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-06-17 20:08, Tom Lane wrote:\n>> I would definitely be in favor of \"nuke it now\" with respect to HEAD.\n>> It's a bit more debatable for the back branches. However, all branches\n>> are going to be equally exposed to updated system tzdata trees, so\n>> we've typically felt that changes in the tz-related code should be\n>> back-patched.\n\n> It seems sensible to me to remove it in master and possibly \n> REL_13_STABLE, but leave it alone in the back branches.\n\nFor purposes of discussion, here's a patch that rips out posixrules\nsupport altogether. (Note that further code simplifications could\nbe made --- the \"load_ok\" variable is vestigial, for instance. This\nformulation is intended to minimize the diffs from upstream.)\n\nA less aggressive idea would be to leave the code alone and just change\nthe makefiles to not install a posixrules file in our own builds.\nThat'd leave the door open for somebody who really needed posixrules\nbehavior to get it back by just creating a posixrules file. I'm not\nsure this idea has much else to recommend it though.\n\nI'm honestly not sure what I think we should do exactly.\nThe main arguments in favor of the full-rip-out option seem to be\n\n(1) It'd ensure consistent behavior of POSIX zone specs across\nplatforms, whether or not --with-system-tzdata is used and whether\nor not the platform supplies a posixrules file.\n\n(2) We'll presumably be forced into the no-posixrules behavior at\nsome point, so forcing the issue lets us dictate the timing rather\nthan having it be dictated to us. If nothing else, that means we\ncan release-note the behavioral change in a timely fashion.\n\nPoint (2) seems like an argument for doing it only in master\n(possibly plus v13), but on the other hand I'm not convinced about\nhow much control we really have if we wait. 
What seems likely\nto happen is that posixrules files will disappear from platform\ntz databases over some hard-to-predict timespan. Even if no\nmajor platforms drop them immediately at the next IANA update,\nit seems quite likely that some/many will do so within the remaining\nsupport lifetime of v12. So even if we continue to support the feature,\nit's likely to vanish in practice at some uncertain point.\n\nGiven that the issue only affects people using nonstandard TimeZone\nsettings, it may be that we shouldn't agonize over it too much\neither way.\n\nAnyway, as I write this I'm kind of talking myself into the position\nthat we should indeed back-patch this. The apparent stability\nbenefits of not doing so may be illusory, and if we back-patch then\nat least we get to document that there's a change. But an argument\ncould be made in the other direction too.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 19 Jun 2020 15:27:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Anyway, as I write this I'm kind of talking myself into the position\n> that we should indeed back-patch this. The apparent stability\n> benefits of not doing so may be illusory, and if we back-patch then\n> at least we get to document that there's a change. But an argument\n> could be made in the other direction too.\n\nIt's really unclear to me why we should back-patch this into\nalready-released branches. I grant your point that perhaps few people\nwill notice, and also that this might happen at some point the change\nwill be forced upon us. Nonetheless, we bill our back-branches as\nbeing stable, which seems inconsistent with forcing a potentially\nbreaking change into them without a clear and pressing need. If you\ncommit this patch to master and v13, no already-release branches will\nbe affected immediately, and it's conceivable that some or even all of\nthe older branches will age out before the issue is forced. That would\nbe all to the good. And even if the issue is forced sooner rather than\nlater, how much do we really lose by waiting until we have that\nproblem in front of us?\n\nI'm not in a position to judge how much additional maintenance\noverhead would be imposed by not back-patching this at once, so if you\ntell me that it's an intolerable burden, I can't really argue with\nthat. But if it's possible to take a wait-and-see attitude for the\ntime being, so much the better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jun 2020 15:41:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's really unclear to me why we should back-patch this into\n> already-released branches. I grant your point that perhaps few people\n> will notice, and also that this might happen at some point the change\n> will be forced upon us. Nonetheless, we bill our back-branches as\n> being stable, which seems inconsistent with forcing a potentially\n> breaking change into them without a clear and pressing need. If you\n> commit this patch to master and v13, no already-release branches will\n> be affected immediately, and it's conceivable that some or even all of\n> the older branches will age out before the issue is forced. That would\n> be all to the good. And even if the issue is forced sooner rather than\n> later, how much do we really lose by waiting until we have that\n> problem in front of us?\n\n> I'm not in a position to judge how much additional maintenance\n> overhead would be imposed by not back-patching this at once, so if you\n> tell me that it's an intolerable burden, I can't really argue with\n> that. But if it's possible to take a wait-and-see attitude for the\n> time being, so much the better.\n\nThe code delta is small enough that I don't foresee any real maintenance\nproblem if we let the back branches differ from HEAD/v13 on this point.\nWhat I'm concerned about is that people depending on the existing\nbehavior are likely to wake up one fine morning and discover that it's\nbroken after a routine tzdata update. I think that it'd be a better\nuser experience for them to see a release-note entry in a PG update\nrelease explaining that this will break and here's what to do to fix it.\n\nYeah, we can do nothing in the back branches and hope that that doesn't\nhappen for the remaining lifespan of v12. 
But I wonder whether that\ndoesn't amount to sticking our heads in the sand.\n\nI suppose it'd be possible to have a release-note entry in the back\nbranches that isn't tied to any actual code change on our part, but just\nwarns that such a tzdata change might happen at some unpredictable future\ntime. That feels weird and squishy though; and people would likely have\nforgotten it by the time the change actually hits them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 15:55:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 3:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The code delta is small enough that I don't foresee any real maintenance\n> problem if we let the back branches differ from HEAD/v13 on this point.\n> What I'm concerned about is that people depending on the existing\n> behavior are likely to wake up one fine morning and discover that it's\n> broken after a routine tzdata update. I think that it'd be a better\n> user experience for them to see a release-note entry in a PG update\n> release explaining that this will break and here's what to do to fix it.\n\nI was assuming that if you did an update of the tzdata, you'd notice\nif posixrules had been nuked. I guess that wouldn't help people who\nare using the system tzdata, though. It might be nice to know what\nDebian, RHEL, etc. plan to do about this, but I'm not sure how\npractical it is to find out.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:02:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 19, 2020 at 3:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm concerned about is that people depending on the existing\n>> behavior are likely to wake up one fine morning and discover that it's\n>> broken after a routine tzdata update. I think that it'd be a better\n>> user experience for them to see a release-note entry in a PG update\n>> release explaining that this will break and here's what to do to fix it.\n\n> I was assuming that if you did an update of the tzdata, you'd notice\n> if posixrules had been nuked. I guess that wouldn't help people who\n> are using the system tzdata, though.\n\nYeah, exactly. We can control this easily enough for PG-supplied tzdata\ntrees, but I think a significant majority of actual users are using\n--with-system-tzdata builds, because we've been telling packagers to\ndo it that way for years. (Nor does changing that advice seem like\na smart move.)\n\n> It might be nice to know what\n> Debian, RHEL, etc. plan to do about this, but I'm not sure how\n> practical it is to find out.\n\nThere's probably no way to know until it happens :-(. We can hope\nthat they'll be conservative, but it's hard to be sure. It doesn't\nhelp that the bigger players rely on glibc: if I understand what\nEggert was saying, nuking posixrules would bring tzcode's behavior\ninto closer sync with what glibc does, so they might well feel it's\na desirable change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:22:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> It might be nice to know what\n>> Debian, RHEL, etc. plan to do about this, but I'm not sure how\n>> practical it is to find out.\n\n> There's probably no way to know until it happens :-(.\n\nOn the other hand, for the open-source players, it might be easier to\nguess. I took a look at the Fedora/RHEL tzdata specfile, and I see\nthat \"-p America/New_York\" is hard-wired into it:\n\nzic -y ./yearistype -d zoneinfo -L /dev/null -p America/New_York $FILES\n\nThis means that IANA's change of their sample Makefile will have no\ndirect impact, and things will only change if the Red Hat packager\nactively changes the specfile. It's still anyone's guess whether\nhe/she will do so, but the odds of a change seem a good bit lower\nthan if the IANA-supplied Makefile were being used directly.\n\nI'm less familiar with Debian so I won't venture to dig into their\npackage, but maybe somebody else would like to.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jun 2020 16:49:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It might be nice to know what\n> Debian, RHEL, etc. plan to do about this, but I'm not sure how\n> practical it is to find out.\n\nBy luck, we now have a moderately well-educated guess about that\nfrom Paul Eggert himself [1]:\n\n: Probably NetBSD will go first as they tend to buy these changes\n: quickly; maybe six months from now? Debian and RHEL probably a couple\n: of years. These are all just guesses.\n\nBased on that, I'd say that assuming v12 and earlier won't have to deal\nwith this issue does indeed amount to sticking our heads in the sand.\n\nI don't intend to do anything about this until this week's beta wrap\ncycle is complete, but I'm still leaning to the idea that we ought to\nback-patch something. Maybe the \"something\" could be less than a\nfull posixrules-ectomy, but I'm not really satisfied with any of the\nother alternatives I've thought about.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/9d8b5ec4-7094-04f6-d270-db0198d09bd1%40cs.ucla.edu\n\n\n",
"msg_date": "Mon, 22 Jun 2020 16:01:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "On 2020-06-19 21:55, Tom Lane wrote:\n> Yeah, we can do nothing in the back branches and hope that that doesn't\n> happen for the remaining lifespan of v12. But I wonder whether that\n> doesn't amount to sticking our heads in the sand.\n> \n> I suppose it'd be possible to have a release-note entry in the back\n> branches that isn't tied to any actual code change on our part, but just\n> warns that such a tzdata change might happen at some unpredictable future\n> time. That feels weird and squishy though; and people would likely have\n> forgotten it by the time the change actually hits them.\n\nIn my mind, this isn't really that different from other external \nlibraries making API changes. But we are not going to forcibly remove \nPython 2 support in PostgreSQL 9.6 just because it's no longer supported \nupstream. If Debian or RHEL $veryold want to keep maintaining Python 2, \nthey are free to do so, and users thereof are free to continue using it. \n Similarly, Debian or RHEL $veryold are surely not going to drop a \nwhole class of time zone codes from their stable distribution just \nbecause upstream is phasing it out.\n\nWhat you are saying is, instead of the OS dropping POSIXRULES support, \nit would be better if we dropped it first and release-noted that. \nHowever, I don't agree with the premise of that. OSes with long-term \nsupport aren't going to drop it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 25 Jun 2020 15:08:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> What you are saying is, instead of the OS dropping POSIXRULES support, \n> it would be better if we dropped it first and release-noted that. \n> However, I don't agree with the premise of that. OSes with long-term \n> support aren't going to drop it.\n\nYou might be right, or you might not. I think the tzdata distribution is\nin a weird gray area so far as long-term-support platforms are concerned:\nthey have to keep updating it, no matter how far it diverges from what\nthey originally shipped with. Maybe they will figure out that they're not\nrequired to drop POSIXRULES just because upstream did. Or maybe they will\ngo with the flow on that, figuring that it's not any worse than any\npolitically-driven time zone change.\n\nI wouldn't be a bit surprised if it ends up depending on whether the\nparticular distro is using IANA's makefile more or less verbatim.\nIn Red Hat's case I found that they'd have to take positive action to\ndrop POSIXRULES, so I'd agree that it won't happen there for a long time,\nand not in any existing RHEL release. In some other distros, it might\ntake explicit addition of a patch to keep from dropping POSIXRULES, in\nwhich case I think there'd be quite good odds that that won't happen\nand the changeover occurs with the next IANA zone updates.\n\nThe nasty thing about that scenario from our perspective is that it\nmeans that the same timezone spec means different things on different\nplatforms, even ones nominally using the same tzdata release. Do we\nwant to deal with that, or take pre-emptive action to prevent it?\n\n(You could argue that that hazard already exists for people who are\nintentionally using nonstandard posixrules files. But I think the\nset of such people can be counted without running out of fingers.\nIf there's some evidence to the contrary I'd like to see it.)\n\nI'm also worried about what the endgame looks like. 
It seems clear\nthat at some point IANA is going to remove their code's support for\nreading a posixrules file. Eggert hasn't tipped his hand as to when\nhe thinks that might happen, but I wouldn't care to bet that it's\nmore than five years away. I don't want to find ourselves in a\nsituation where we have to maintain code that upstream has nuked.\nIf they only do something comparable to the patch I posted, it\nwouldn't be so bad; but if they then undertake any significant\nfollow-on cleanup we'd be in a very bad place for tracking them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jun 2020 10:13:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
},
{
"msg_contents": "Seems like I'm not getting any traction in convincing people that\nback-patching this change is wise. To get this closed out before\nthe CF starts, I'm just going to put it into HEAD/v13 and call it\na day.\n\nI remain of the opinion that we'll probably regret not doing\nanything in the back branches, sometime in the next 4+ years.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jun 2020 11:02:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: More tzdb fun: POSIXRULES is being deprecated upstream"
}
] |
[
{
"msg_contents": "i am some pullzed by Snapbuild.c\nit seem some code can not reach for ever.\nSnapBuildCommitTxn\n{\n //can not reach there\nif (builder->state == SNAPBUILD_START ||\n(builder->state == SNAPBUILD_BUILDING_SNAPSHOT &&\nTransactionIdPrecedes(xid, SnapBuildNextPhaseAt(builder))))\n}\n\n\n DecodeXactOp {\nif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\nreturn; //returned \n }\n}\n| |\nifquant\n|\n|\nifquant@163.com\n|\n签名由网易邮箱大师定制\n\n\n\n\n\n\n\n\n\ni am some pullzed by Snapbuild.c\n it seem some code can not reach for ever.SnapBuildCommitTxn{ //can not reach there if (builder->state == SNAPBUILD_START || (builder->state == SNAPBUILD_BUILDING_SNAPSHOT && TransactionIdPrecedes(xid, SnapBuildNextPhaseAt(builder))))} DecodeXactOp { if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT) return; //returned }}\n\n\n\n\n\n\n\n\n\n\n\nifquant\n\n\n\n\nifquant@163.com\n\n\n\n\n\n\n\n签名由\n网易邮箱大师\n定制",
"msg_date": "Fri, 5 Jul 2019 02:13:50 +0800 (GMT+08:00)",
"msg_from": "ifquant <ifquant@163.com>",
"msg_from_op": true,
"msg_subject": "question about SnapBuild"
}
] |
[
{
"msg_contents": "Hello,\n\nThe first 2 lc_monetary and lc_numeric are useful if the client for some\nreason executes set lc_*. We don't get a report and in many cases can't\ncontinue to parse numerics or money.\nNow it it possible to get these at startup by issuing show or querying the\ncatalog, but it seems much cleaner to just send them.\n\nThe latter is important for similar reasons. JDBC caches prepared\nstatements internally and if the user changes the search path without using\nsetSchema or uses a function to change it then internally it would be\nnecessary to invalidate the cache. Currently if this occurs these\nstatements fail.\n\nThis seems like a rather innocuous change as the protocol is not changed,\nrather the amount of information returned on startup is increased\nmarginally.\n\nI've included the authors of the npgsql and the node drivers in the email\nfor their input.\n\nDave Cramer\n\nHello,The first 2 lc_monetary and lc_numeric are useful if the client for some reason executes set lc_*. We don't get a report and in many cases can't continue to parse numerics or money.Now it it possible to get these at startup by issuing show or querying the catalog, but it seems much cleaner to just send them.The latter is important for similar reasons. JDBC caches prepared statements internally and if the user changes the search path without using setSchema or uses a function to change it then internally it would be necessary to invalidate the cache. Currently if this occurs these statements fail.This seems like a rather innocuous change as the protocol is not changed, rather the amount of information returned on startup is increased marginally.I've included the authors of the npgsql and the node drivers in the email for their input.Dave Cramer",
"msg_date": "Thu, 4 Jul 2019 14:56:54 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal to add GUC_REPORT to lc_monetary, lc_numeric and search_path"
},
{
"msg_contents": "> The latter is important for similar reasons. JDBC caches prepared\nstatements internally and if the user changes the search path without using\nsetSchema or uses a function to change it then internally it would be\nnecessary to invalidate the cache. Currently if this occurs these\nstatements fail.\n\nWhile Npgsql specifically doesn't care about any locale/formatting (being a\nbinary-only driver), knowing about search_path changes would benefit Npgsql\nin the same way as it would JDBC.\n\n> This seems like a rather innocuous change as the protocol is not changed,\nrather the amount of information returned on startup is increased\nmarginally.\n\nAlthough adding these specific parameters are easy to add, we could also\nthink of a more generic way for clients to subscribe to parameter updates\n(am not sure if this was previously discussed - I cannot see anything\nobvious in the wiki TODO page). At its simplest, this could be a new\nparameter containing a comma-separated list of parameters for which\nasynchronous updates should be sent. This new parameter would default to\nthe current hard-coded list (as documented in\nhttps://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-ASYNC).\nUnless I'm mistaken, one issue (as in general with new parameters) is that\ndrivers wouldn't be able to send this new parameter in the startup package\nbecause they don't yet know whether they're talking to a PostgreSQL version\nwhich supports it.\n\n> The latter is important for similar reasons. JDBC caches prepared statements internally and if the user changes the search path without using setSchema or uses a function to change it then internally it would be necessary to invalidate the cache. 
Currently if this occurs these statements fail.While Npgsql specifically doesn't care about any locale/formatting (being a binary-only driver), knowing about search_path changes would benefit Npgsql in the same way as it would JDBC.> This seems like a rather innocuous change as the protocol is not changed, rather the amount of information returned on startup is increased marginally.Although adding these specific parameters are easy to add, we could also think of a more generic way for clients to subscribe to parameter updates (am not sure if this was previously discussed - I cannot see anything obvious in the wiki TODO page). At its simplest, this could be a new parameter containing a comma-separated list of parameters for which asynchronous updates should be sent. This new parameter would default to the current hard-coded list (as documented in https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-ASYNC). Unless I'm mistaken, one issue (as in general with new parameters) is that drivers wouldn't be able to send this new parameter in the startup package because they don't yet know whether they're talking to a PostgreSQL version which supports it.",
"msg_date": "Fri, 5 Jul 2019 14:04:59 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to add GUC_REPORT to lc_monetary,\n lc_numeric and search_path"
},
{
"msg_contents": "See attached patch.\n\nI think adding GUC_REPORT to search_path is probably the most important as\nthis is potentially a security issue. See joe conway's blog on security and\nsearch path here\nhttps://info.crunchydata.com/blog/postgresql-defaults-and-impact-on-security-part-2\n\nI also see there was a proposal to make reportable GUC's configurable here\nhttps://www.postgresql.org/message-id/CA+TgmobSXsy0KFR_vDQQOXJxQAFNESFXF_-dArNE+QHhqCwrAA@mail.gmail.com\n\nI don't really care which one gets implemented, although I think the latter\nmakes more sense.\n\nDave Cramer\n\n\nOn Fri, 5 Jul 2019 at 08:05, Shay Rojansky <roji@roji.org> wrote:\n\n> > The latter is important for similar reasons. JDBC caches prepared\n> statements internally and if the user changes the search path without using\n> setSchema or uses a function to change it then internally it would be\n> necessary to invalidate the cache. Currently if this occurs these\n> statements fail.\n>\n> While Npgsql specifically doesn't care about any locale/formatting (being\n> a binary-only driver), knowing about search_path changes would benefit\n> Npgsql in the same way as it would JDBC.\n>\n> > This seems like a rather innocuous change as the protocol is not\n> changed, rather the amount of information returned on startup is increased\n> marginally.\n>\n> Although adding these specific parameters are easy to add, we could also\n> think of a more generic way for clients to subscribe to parameter updates\n> (am not sure if this was previously discussed - I cannot see anything\n> obvious in the wiki TODO page). At its simplest, this could be a new\n> parameter containing a comma-separated list of parameters for which\n> asynchronous updates should be sent. 
This new parameter would default to\n> the current hard-coded list (as documented in\n> https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-ASYNC).\n> Unless I'm mistaken, one issue (as in general with new parameters) is that\n> drivers wouldn't be able to send this new parameter in the startup package\n> because they don't yet know whether they're talking to a PostgreSQL version\n> which supports it.\n>\n>",
"msg_date": "Tue, 9 Jul 2019 14:20:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to add GUC_REPORT to lc_monetary,\n lc_numeric and search_path"
}
] |
[
{
"msg_contents": "Hello open source administrators and mentors,\n\nThank you again for your patience. We received over 440 technical writer\napplications, so there is quite a lot of enthusiasm from the technical\nwriting community. Every single organization received multiple technical\nwriter project proposals!\n\nWe have been busy processing and reviewing the applications to make sure\nthey follow our guidelines and rules. Consequently, there were a few that\nwe had to remove. There are also some that don't violate any of our\nguidelines or rules, but they contain irrelevant content or perhaps the\napplicant may have misunderstood the purpose of Season of Docs. We have\ndecided not to censor those and are leaving it up to you to dismiss or\nreroute them as appropriate for your organization.\n\nAttached is a link to a Google Sheets export of the technical writer\napplications to your specific organization. We decided that a spreadsheet\nwould be the easiest format for you to manipulate and consume the form\ndata; however, if you would like us to generate a separate email message\nper project proposal, please contact us on the support list,\nseason-of-docs-support@googlegroups.com.\n\n Season of Docs - PostgreSQL (16)\n<https://docs.google.com/a/google.com/spreadsheets/d/1on6sSxX3XkVug2dSRUrxR7HXT_z-Tc5ZPBBxrfi7bvA/edit?usp=drive_web>\n\nWhen assessing a project proposal, org admins and mentors should assess the\nproposal based on the content of the fields in the form, without examining\nany linked docs. (It's fine to follow links for tech writing experience or\nresumes.) The reasons are (a) we must avoid any potential for people to add\nto their proposal after the deadline date has passed, and (b) we just\nensure all applicants are on equal footing. 
The tech writer guidelines\nstate clearly that the proposal description should be in the form itself:\nhttps://developers.google.com/season-of-docs/docs/tech-writer-application-hints#project_proposal\n\nLater, we will send out a form for you to enter in your organization's\ntechnical writer/project proposal choice and alternates in order of\npreference.\n\nIf you have any questions, contact us at\nseason-of-docs-support@googlegroups.com.\n\nCheers,\n\n\n--Andrew Chen and Sarah Maddox\n\n\nAndrew Chen | Program Manager, Open Source Strategy | chenopis@google.com\n | 650-495-4987\n",
"msg_date": "Thu, 4 Jul 2019 22:00:00 -0700",
"msg_from": "Andrew Chen <chenopis@google.com>",
"msg_from_op": true,
"msg_subject": "[Season of Docs] Technical writer applications and proposals for\n PostgreSQL"
}
] |
[
{
"msg_contents": "Hi,\n\nI realized that TransactionIdAbort is declared in the transam.h but\nthere is not its function body. As far as I found there are three\nsimilar functions in total by the following script.\n\nfor func in `git ls-files | egrep \"\\w+\\.h$\" | xargs cat | egrep\n\"extern \\w+ \\w+\\(.*\\);\" | sed -e \"s/.* \\(.*\\)(.*);/\\1(/g\"`\ndo\n if [ `git grep \"$func\" -- \"*.c\" | wc -l` -lt 1 ];then\n echo $func\n fi\ndone\n\nI think the following functions are mistakenly left in the header\nfile. So attached patch removes them.\n\ndsa_startup()\nTransactionIdAbort()\nrenameatt_type()\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Fri, 5 Jul 2019 13:51:32 +0800",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Declared but no defined functions"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> I think the following functions are mistakenly left in the header\n> file. So attached patch removes them.\n\n> dsa_startup()\n> TransactionIdAbort()\n> renameatt_type()\n\nAgreed, these are referenced nowhere. I pushed the patch.\n\n> I realized that TransactionIdAbort is declared in the transam.h but\n> there is not its function body. As far as I found there are three\n> similar functions in total by the following script.\n> for func in `git ls-files | egrep \"\\w+\\.h$\" | xargs cat | egrep\n> \"extern \\w+ \\w+\\(.*\\);\" | sed -e \"s/.* \\(.*\\)(.*);/\\1(/g\"`\n> do\n> if [ `git grep \"$func\" -- \"*.c\" | wc -l` -lt 1 ];then\n> echo $func\n> fi\n> done\n\nFWIW, that won't catch declarations that lack \"extern\", nor functions\nthat return pointer-to-something. (Omitting \"extern\" is something\nI consider bad style, but other people seem to be down with it.)\nMight be worth another pass to look harder?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 19:32:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Declared but no defined functions"
},
{
"msg_contents": "On Sat, Jul 6, 2019 at 7:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > I think the following functions are mistakenly left in the header\n> > file. So attached patch removes them.\n>\n> > dsa_startup()\n> > TransactionIdAbort()\n> > renameatt_type()\n>\n> Agreed, these are referenced nowhere. I pushed the patch.\n\nThanks.\n\n>\n> > I realized that TransactionIdAbort is declared in the transam.h but\n> > there is not its function body. As far as I found there are three\n> > similar functions in total by the following script.\n> > for func in `git ls-files | egrep \"\\w+\\.h$\" | xargs cat | egrep\n> > \"extern \\w+ \\w+\\(.*\\);\" | sed -e \"s/.* \\(.*\\)(.*);/\\1(/g\"`\n> > do\n> > if [ `git grep \"$func\" -- \"*.c\" | wc -l` -lt 1 ];then\n> > echo $func\n> > fi\n> > done\n>\n> FWIW, that won't catch declarations that lack \"extern\", nor functions\n> that return pointer-to-something. (Omitting \"extern\" is something\n> I consider bad style, but other people seem to be down with it.)\n> Might be worth another pass to look harder?\n>\n\nIndeed. I've tried to search again with the following script and got\nmore such functions.\n\nfor func in `git ls-files | egrep \"\\w+\\.h$\" | xargs cat | egrep -v\n\"(^typedef)|(DECLARE)|(BKI)\" | egrep \"^(extern )*[\\_0-9A-Za-z]+\n[\\_\\*0-9a-zA-Z]+ ?\\(.+\\);$\" | sed -e \"s/\\(^extern \\)*[\\_0-9A-Za-z]\\+\n\\([\\_0-9A-Za-z\\*]\\+\\) \\{0,1\\}(.*);$/\\2(/g\" | sed -e \"s/\\*//g\"`\ndo\n if [ \"`git grep \"$func\" -- \"*.c\" | wc -l`\" -lt 1 ];then\n echo $func\n fi\ndone\n\nAttached patch removes these functions.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Sun, 7 Jul 2019 07:31:12 +0800",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Declared but no defined functions"
},
{
"msg_contents": "On Sun, Jul 07, 2019 at 07:31:12AM +0800, Masahiko Sawada wrote:\n> Attached patch removes these functions.\n\nThanks, applied.\n--\nMichael",
"msg_date": "Sun, 7 Jul 2019 10:04:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Declared but no defined functions"
},
{
"msg_contents": "On Sun, Jul 7, 2019 at 10:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jul 07, 2019 at 07:31:12AM +0800, Masahiko Sawada wrote:\n> > Attached patch removes these functions.\n>\n> Thanks, applied.\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 8 Jul 2019 13:55:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Declared but no defined functions"
},
{
"msg_contents": "On Sat, Jul 6, 2019 at 4:32 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> Indeed. I've tried to search again with the following script and got\n> more such functions.\n>\n> for func in `git ls-files | egrep \"\\w+\\.h$\" | xargs cat | egrep -v\n> \"(^typedef)|(DECLARE)|(BKI)\" | egrep \"^(extern )*[\\_0-9A-Za-z]+\n> [\\_\\*0-9a-zA-Z]+ ?\\(.+\\);$\" | sed -e \"s/\\(^extern \\)*[\\_0-9A-Za-z]\\+\n> \\([\\_0-9A-Za-z\\*]\\+\\) \\{0,1\\}(.*);$/\\2(/g\" | sed -e \"s/\\*//g\"`\n> do\n> if [ \"`git grep \"$func\" -- \"*.c\" | wc -l`\" -lt 1 ];then\n> echo $func\n> fi\n> done\n>\n>\nDo we wish to make this a tool and have it in src/tools, either as part of\nfind_static tool after renaming that one to more generic name or\nindependent script.",
"msg_date": "Mon, 8 Jul 2019 11:57:18 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Declared but no defined functions"
},
{
"msg_contents": "Ashwin Agrawal <aagrawal@pivotal.io> writes:\n> Do we wish to make this a tool and have it in src/tools, either as part of\n> find_static tool after renaming that one to more generic name or\n> independent script.\n\nWell, the scripts described so far are little more than jury-rigged\nhacks, with lots of room for false positives *and* false negatives.\nI wouldn't want to institutionalize any of them as the right way to\ncheck for such problems. If somebody made the effort to create a\ntool that was actually trustworthy, perhaps that'd be a different\nstory.\n\n(Personally I was wondering whether pgindent could be hacked up to\nemit things it thought were declarations of function names. I'm\nnot sure that I'd trust that 100% either, but at least it would have\na better shot than the grep hacks we've discussed so far. Note in\nparticular that pgindent would see things inside #ifdef blocks,\nwhether or not your local build ever sees those declarations.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 15:38:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Declared but no defined functions"
}
] |
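As a hedged postscript to the thread above: the two corner cases Tom Lane flags (declarations lacking "extern", and functions returning pointer-to-something) can be exercised against a throwaway header/source pair. The patterns below are illustrative only, still miss styles such as `char* foo(...)` with the star attached to the type, and are no substitute for the trustworthy pgindent-based tool he suggests:

```shell
# Throwaway demo (hypothetical files, not the PostgreSQL sources): one
# defined function, one pointer-returning declaration, and one declaration
# without "extern" -- the latter two are what the original one-liner misses.
tmp=$(mktemp -d)
cat > "$tmp/demo.h" <<'EOF'
extern int defined_func(int x);
char *ptr_func(void);
void no_extern_func(int y);
EOF
cat > "$tmp/demo.c" <<'EOF'
int defined_func(int x) { return x; }
EOF

missing=""
for func in $(grep -hE '^(extern )?[_A-Za-z][_A-Za-z0-9]* ?\*?[_A-Za-z][_A-Za-z0-9]*\(.*\);' "$tmp"/*.h \
        | sed -E 's/^(extern )?[_A-Za-z][_A-Za-z0-9]* ?\*?([_A-Za-z][_A-Za-z0-9]*)\(.*/\2(/')
do
    # A declared name with no occurrence in any .c file is suspect.
    if [ "$(grep -c "$func" "$tmp"/*.c)" -lt 1 ]; then
        missing="$missing $func"
    fi
done
echo "declared but not defined:$missing"
rm -rf "$tmp"
```

On the demo files this should print `declared but not defined: ptr_func( no_extern_func(`, i.e. it catches both cases while skipping the defined function.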
[
{
"msg_contents": " > Regarding apply_scanjoin_target_to_paths in 0001 and 0007, it seems\n > like what happens is: we first build an Append path for the topmost\n > scan/join rel. That uses paths from the individual relations that\n > don't necessarily produce the final scan/join target. Then we mutate\n > those relations in place during partition-wise aggregate so that they\n > now do produce the final scan/join target and generate some more paths\n > using the results. So there's an ordering dependency, and the same\n > pathlist represents different things at different times. That is, I\n > suppose, not technically any worse than what we're doing for the\n > scan/join rel's pathlist in general, but here there's the additional\n > complexity that the paths get used both before and after being\n > mutated. The UPPERREL_TLIST proposal would clean this up, although I\n > realize that has unresolved issues.\n\nI am discouraged by this logic.\nNow I use set_rel_pathlist_hook to apply some optimizations to the \npartition scan paths, but apply_scanjoin_target_to_paths() deletes the \npathlist and throws those optimizations away.\nMaybe it is possible to introduce some flag that the hook can set to \nprevent the pathlist cleanup?\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 5 Jul 2019 11:32:39 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Partition-wise aggregation/grouping"
}
] |
[
{
"msg_contents": "Dear Hackers\n\nI am interested in implementing my own Domain Specific Language (DSL) using PostgreSQL internals. Originally, the plan was not to use PostgreSQL and I had developed a grammar and used ANTLRv4 for parser work and general early development.\n\nInitially, I was hoping for a scenario where I could have PostgreSQL's parser to change grammar (e.g. SET parser_language=SQL vs. SET parser_language=myDSL) in which case my ANTLRv4 project would override the PostgreSQL parser module. I guess another direction that my project could take is to extend PostgreSQL's SQL parser to factor in my DSL keywords and requirements.\n\nTo make matters more complicated, this version of ANTLR does not support code generation to C, but it does support generation to C++. Integrating the generated C++ code requires making it friendly to PostgreSQL e.g. using Plain Old Data Structures as described here https://www.postgresql.org/docs/9.0/extend-cpp.html, which seems to be suggesting to me that I may be using the wrong approach towards my goal.\n\nI would be grateful if anyone could provide any general advice or pointers regarding my approach, for example regarding development effort, so that the development with PostgreSQL internals can be smooth and of a high quality. Maybe somebody has come across another DSL attempt which used PostgreSQL and that I could follow as a reference?\n\nThanks in advance.\n\nBest,\nTom",
"msg_date": "Fri, 5 Jul 2019 07:55:15 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 07:55:15AM +0000, Tom Mercha wrote:\n>Dear Hackers\n>\n>I am interested in implementing my own Domain Specific Language (DSL)\n>using PostgreSQL internals. Originally, the plan was not to use\n>PostgreSQL and I had developed a grammar and used ANTLRv4 for parser\n>work and general early development.\n>\n>Initially, I was hoping for a scenario where I could have PostgreSQL's\n>parser to change grammar (e.g. SET parser_language=SQL vs. SET\n>parser_language=myDSL) in which case my ANTLRv4 project would override\n>the PostgreSQL parser module. I guess another direction that my project\n>could take is to extend PostgreSQL's SQL parser to factor in my DSL\n>keywords and requirements.\n>\n>To make matters more complicated, this version of ANTLR does not\n>support code generation to C, but it does support generation to C++.\n>Integrating the generated C++ code requires making it friendly to\n>PostgreSQL e.g. using Plain Old Data Structures as described here\n>https://www.postgresql.org/docs/9.0/extend-cpp.html, which seems to be\n>suggesting to me that I may be using the wrong approach towards my\n>goal.\n>\n>I would be grateful if anyone could provide any general advice or\n>pointers regarding my approach, for example regarding development\n>effort, so that the development with PostgreSQL internals can be smooth\n>and of a high quality. Maybe somebody has come across another DSL\n>attempt which used PostgreSQL and that I could follow as a reference?\n>\n\nI might be missing something, but it seems like you intend to replace\nthe SQL grammar we have with something else. It's not clear to me what\nwould be the point of doing that, and it definitely looks like a huge\namount of work - e.g. we don't have any support for switching between\ntwo distinct grammars the way you envision, and just that alone seems\nlike a multi-year project. And if you don't have that capability, all\nexternal tools kinda stop working. 
Good luck with running such database.\n\nWhat I'd look at first is implementing the grammar as a procedural\nlanguage (think PL/pgSQL, pl/perl etc.) implementing whatever you expect\nfrom your DSL. And it's not like you'd have to wrap everything in\nfunctions, because we have anonymous DO blocks. So you could do:\n\n DO LANGUAGE mydsl $$\n ... whatever my dsl allows ...\n $$; \n\nIt's still a fair amount of code to implement this (both the PL handler\nand the DSL implementation), but it's orders of magnitude simpler than\nwhat you described.\n\nSee https://www.postgresql.org/docs/current/plhandler.html for details\nabout how to write a language handler.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 5 Jul 2019 20:48:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "I might be missing something, but it seems like you intend to replace\nthe SQL grammar we have with something else. It's not clear to me what\nwould be the point of doing that, and it definitely looks like a huge\namount of work - e.g. we don't have any support for switching between\ntwo distinct grammars the way you envision, and just that alone seems\nlike a multi-year project. And if you don't have that capability, all\nexternal tools kinda stop working. Good luck with running such database.\nI was considering having two distinct grammars as an option - thanks for indicating the effort involved. At the end of the day I want both my DSL and the PostgreSQL grammars to coexist. Is extending PostgreSQL's grammar with my own through the PostgreSQL extension infrastructure worth consideration or is it also difficult to develop? Could you suggest any reference material on this topic?\n\nWhat I'd look at first is implementing the grammar as a procedural\nlanguage (think PL/pgSQL, pl/perl etc.) implementing whatever you expect\nfrom your DSL. And it's not like you'd have to wrap everything in\nfunctions, because we have anonymous DO blocks.\nThanks for pointing out this direction! I think I will indeed adopt this approach especially if directly extending PostgreSQL grammar would be difficult.\n\nRegards\nTom\n________________________________\nFrom: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nSent: 05 July 2019 20:48\nTo: Tom Mercha\nCc: pgsql-hackers@postgresql.org\nSubject: Re: Extending PostgreSQL with a Domain-Specific Language (DSL) - Development\n\nOn Fri, Jul 05, 2019 at 07:55:15AM +0000, Tom Mercha wrote:\n>Dear Hackers\n>\n>I am interested in implementing my own Domain Specific Language (DSL)\n>using PostgreSQL internals. 
Originally, the plan was not to use\n>PostgreSQL and I had developed a grammar and used ANTLRv4 for parser\n>work and general early development.\n>\n>Initially, I was hoping for a scenario where I could have PostgreSQL's\n>parser to change grammar (e.g. SET parser_language=SQL vs. SET\n>parser_language=myDSL) in which case my ANTLRv4 project would override\n>the PostgreSQL parser module. I guess another direction that my project\n>could take is to extend PostgreSQL's SQL parser to factor in my DSL\n>keywords and requirements.\n>\n>To make matters more complicated, this version of ANTLR does not\n>support code generation to C, but it does support generation to C++.\n>Integrating the generated C++ code requires making it friendly to\n>PostgreSQL e.g. using Plain Old Data Structures as described here\n>https://www.postgresql.org/docs/9.0/extend-cpp.html, which seems to be\n>suggesting to me that I may be using the wrong approach towards my\n>goal.\n>\n>I would be grateful if anyone could provide any general advice or\n>pointers regarding my approach, for example regarding development\n>effort, so that the development with PostgreSQL internals can be smooth\n>and of a high quality. Maybe somebody has come across another DSL\n>attempt which used PostgreSQL and that I could follow as a reference?\n>\n\nI might be missing something, but it seems like you intend to replace\nthe SQL grammar we have with something else. It's not clear to me what\nwould be the point of doing that, and it definitely looks like a huge\namount of work - e.g. we don't have any support for switching between\ntwo distinct grammars the way you envision, and just that alone seems\nlike a multi-year project. And if you don't have that capability, all\nexternal tools kinda stop working. Good luck with running such database.\n\nWhat I'd look at first is implementing the grammar as a procedural\nlanguage (think PL/pgSQL, pl/perl etc.) implementing whatever you expect\nfrom your DSL. 
And it's not like you'd have to wrap everything in\nfunctions, because we have anonymous DO blocks. So you could do:\n\n DO LANGUAGE mydsl $$\n ... whatever my dsl allows ...\n $$;\n\nIt's still a fair amount of code to implement this (both the PL handler\nand the DSL implementation), but it's orders of magnitude simpler than\nwhat you described.\n\nSee https://www.postgresql.org/docs/current/plhandler.html for details\nabout how to write a language handler.\n\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 5 Jul 2019 21:37:03 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "First of all, it's pretty difficult to follow the discussion when it's\nnot clear what's the original message and what's the response. E-mail\nclients generally indent the original message with '>' or someting like\nthat, but your client does not do that (which is pretty silly). And\ncopying the message at the top does not really help. Please do something\nabout that.\n\n\nOn Fri, Jul 05, 2019 at 09:37:03PM +0000, Tom Mercha wrote:\n>>I might be missing something, but it seems like you intend to replace\n>>the SQL grammar we have with something else. It's not clear to me what\n>>would be the point of doing that, and it definitely looks like a huge\n>>amount of work - e.g. we don't have any support for switching between\n>>two distinct grammars the way you envision, and just that alone seems\n>>like a multi-year project. And if you don't have that capability, all\n>>external tools kinda stop working. Good luck with running such database.\n>\n>I was considering having two distinct grammars as an option - thanks\n>for indicating the effort involved. At the end of the day I want both\n>my DSL and the PostgreSQL grammars to coexist. Is extending\n>PostgreSQL's grammar with my own through the PostgreSQL extension\n>infrastructure worth consideration or is it also difficult to develop?\n>Could you suggest any reference material on this topic?\n>\n\nWell, I'm not an expert in that area, but we currently don't have any\ninfrastructure to support that. It's a topic that was discussed in the\npast (perhaps you can find some references in the archives) and it\ngenerally boils down to:\n\n1) We're using bison as parser generator.\n2) Bison does not allow adding rules on the fly.\n\nSo you have to modify the in-core src/backend/parser/gram.y and rebuild\npostgres. 
See for example for an example of such discussion\n\nhttps://www.postgresql.org/message-id/flat/CABSN6VeeEhwb0HrjOCp9kHaWm0Ljbnko5y-0NKsT_%3D5i5C2jog%40mail.gmail.com\n\nWhen two of the smartest people on the list say it's a hard problem, it\nprobably is. Particularly for someone who does not know the internals.\n\n>>What I'd look at first is implementing the grammar as a procedural\n>>language (think PL/pgSQL, pl/perl etc.) implementing whatever you\n>>expect from your DSL. And it's not like you'd have to wrap everything\n>>in functions, because we have anonymous DO blocks.\n>\n>Thanks for pointing out this direction! I think I will indeed adopt\n>this approach especially if directly extending PostgreSQL grammar would\n>be difficult.\n\nWell, it's the only way to deal with it at the moment.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 6 Jul 2019 00:06:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On 06/07/2019 00:06, Tomas Vondra wrote:\r\n> First of all, it's pretty difficult to follow the discussion when it's\r\n> not clear what's the original message and what's the response. E-mail\r\n> clients generally indent the original message with '>' or someting like\r\n> that, but your client does not do that (which is pretty silly). And\r\n> copying the message at the top does not really help. Please do something\r\n> about that.\r\n\r\nI would like to apologise. I did not realize that my client was doing \r\nthat and now I have changed the client. I hope it's fine now.\r\n\r\n> \r\n> On Fri, Jul 05, 2019 at 09:37:03PM +0000, Tom Mercha wrote:\r\n>>> I might be missing something, but it seems like you intend to replace\r\n>>> the SQL grammar we have with something else. It's not clear to me what\r\n>>> would be the point of doing that, and it definitely looks like a huge\r\n>>> amount of work - e.g. we don't have any support for switching between\r\n>>> two distinct grammars the way you envision, and just that alone seems\r\n>>> like a multi-year project. And if you don't have that capability, all\r\n>>> external tools kinda stop working. Good luck with running such database.\r\n>>\r\n>> I was considering having two distinct grammars as an option - thanks\r\n>> for indicating the effort involved. At the end of the day I want both\r\n>> my DSL and the PostgreSQL grammars to coexist. Is extending\r\n>> PostgreSQL's grammar with my own through the PostgreSQL extension\r\n>> infrastructure worth consideration or is it also difficult to develop?\r\n>> Could you suggest any reference material on this topic?\r\n>>\r\n> \r\n> Well, I'm not an expert in that area, but we currently don't have any\r\n> infrastructure to support that. 
It's a topic that was discussed in the\r\n> past (perhaps you can find some references in the archives) and it\r\n> generally boils down to:\r\n> \r\n> 1) We're using bison as parser generator.\r\n> 2) Bison does not allow adding rules on the fly.\r\n> \r\n> So you have to modify the in-core src/backend/parser/gram.y and rebuild\r\n> postgres. See for example for an example of such discussion\r\n> \r\n> https://www.postgresql.org/message-id/flat/CABSN6VeeEhwb0HrjOCp9kHaWm0Ljbnko5y-0NKsT_%3D5i5C2jog%40mail.gmail.com \r\n> \r\n> \r\n> When two of the smartest people on the list say it's a hard problem, it\r\n> probably is. Particularly for someone who does not know the internals.\r\nYou are right. Thanks for bringing it to my attention!\r\n\r\nI didn't design my language for interaction with triggers and whatnot, \r\nbut I think that it would be very interesting to support those as well, \r\nso looking at CREATE LANGUAGE functionality is actually exciting and \r\nappropriate once I make some changes in design. Thanks again for this point!\r\n\r\nI hope this is not off topic but I was wondering if you know what are \r\nthe intrinsic differences between HANDLER and INLINE parameters of \r\nCREATE LANGUAGE? I know that they are functions which are invoked at \r\ndifferent instances of time (e.g. one is for handling anonymous code \r\nblocks), but at the end of the day they seem to have the same purpose?\r\n\r\n>>> What I'd look at first is implementing the grammar as a procedural\r\n>>> language (think PL/pgSQL, pl/perl etc.) implementing whatever you\r\n>>> expect from your DSL. And it's not like you'd have to wrap everything\r\n>>> in functions, because we have anonymous DO blocks.\r\n>>\r\n>> Thanks for pointing out this direction! I think I will indeed adopt\r\n>> this approach especially if directly extending PostgreSQL grammar would\r\n>> be difficult.\r\n> \r\n> Well, it's the only way to deal with it at the moment.\r\n> \r\n> \r\n> regards\r\n> \r\n",
"msg_date": "Sun, 7 Jul 2019 23:06:38 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On Sun, Jul 07, 2019 at 11:06:38PM +0000, Tom Mercha wrote:\n>On 06/07/2019 00:06, Tomas Vondra wrote:\n>> First of all, it's pretty difficult to follow the discussion when it's\n>> not clear what's the original message and what's the response. E-mail\n>> clients generally indent the original message with '>' or someting like\n>> that, but your client does not do that (which is pretty silly). And\n>> copying the message at the top does not really help. Please do something\n>> about that.\n>\n>I would like to apologise. I did not realize that my client was doing\n>that and now I have changed the client. I hope it's fine now.\n>\n\nThanks, seems fine now.\n\n>>\n>> On Fri, Jul 05, 2019 at 09:37:03PM +0000, Tom Mercha wrote:\n>>>> I might be missing something, but it seems like you intend to replace\n>>>> the SQL grammar we have with something else. It's not clear to me what\n>>>> would be the point of doing that, and it definitely looks like a huge\n>>>> amount of work - e.g. we don't have any support for switching between\n>>>> two distinct grammars the way you envision, and just that alone seems\n>>>> like a multi-year project. And if you don't have that capability, all\n>>>> external tools kinda stop working. Good luck with running such database.\n>>>\n>>> I was considering having two distinct grammars as an option - thanks\n>>> for indicating the effort involved. At the end of the day I want both\n>>> my DSL and the PostgreSQL grammars to coexist. Is extending\n>>> PostgreSQL's grammar with my own through the PostgreSQL extension\n>>> infrastructure worth consideration or is it also difficult to develop?\n>>> Could you suggest any reference material on this topic?\n>>>\n>>\n>> Well, I'm not an expert in that area, but we currently don't have any\n>> infrastructure to support that. 
It's a topic that was discussed in the\n>> past (perhaps you can find some references in the archives) and it\n>> generally boils down to:\n>>\n>> 1) We're using bison as parser generator.\n>> 2) Bison does not allow adding rules on the fly.\n>>\n>> So you have to modify the in-core src/backend/parser/gram.y and rebuild\n>> postgres. See for example for an example of such discussion\n>>\n>> https://www.postgresql.org/message-id/flat/CABSN6VeeEhwb0HrjOCp9kHaWm0Ljbnko5y-0NKsT_%3D5i5C2jog%40mail.gmail.com\n>>\n>>\n>> When two of the smartest people on the list say it's a hard problem, it\n>> probably is. Particularly for someone who does not know the internals.\n>You are right. Thanks for bringing it to my attention!\n>\n>I didn't design my language for interaction with triggers and whatnot,\n>but I think that it would be very interesting to support those as well,\n>so looking at CREATE LANGUAGE functionality is actually exciting and\n>appropriate once I make some changes in design. Thanks again for this point!\n>\n\n;-)\n\n>I hope this is not off topic but I was wondering if you know what are\n>the intrinsic differences between HANDLER and INLINE parameters of\n>CREATE LANGUAGE? I know that they are functions which are invoked at\n>different instances of time (e.g. one is for handling anonymous code\n>blocks), but at the end of the day they seem to have the same purpose?\n>\n\nI've never written any PL handler, so I don't know. All I know is this\nquote from the docs, right below the simple example of PL handler:\n\n Only a few thousand lines of code have to be added instead of the\n dots to complete the call handler.\n\nI suppose the best idea to start an implementation is to copy an\nexisting PL implementation, and modify that. That's usually much easier\nthan starting from scratch, because you have something that works. 
Not\nsure if PL/pgSQL is the right choice though, perhaps pick some other\nlanguage from https://wiki.postgresql.org/wiki/PL_Matrix\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 9 Jul 2019 23:22:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On 09/07/2019 23:22, Tomas Vondra wrote:\r\n> On Sun, Jul 07, 2019 at 11:06:38PM +0000, Tom Mercha wrote:\r\n>> On 06/07/2019 00:06, Tomas Vondra wrote:\r\n>>> First of all, it's pretty difficult to follow the discussion when it's\r\n>>> not clear what's the original message and what's the response. E-mail\r\n>>> clients generally indent the original message with '>' or someting like\r\n>>> that, but your client does not do that (which is pretty silly). And\r\n>>> copying the message at the top does not really help. Please do something\r\n>>> about that.\r\n>>\r\n>> I would like to apologise. I did not realize that my client was doing\r\n>> that and now I have changed the client. I hope it's fine now.\r\n>>\r\n> \r\n> Thanks, seems fine now.\r\n> \r\n>>>\r\n>>> On Fri, Jul 05, 2019 at 09:37:03PM +0000, Tom Mercha wrote:\r\n>>>>> I might be missing something, but it seems like you intend to replace\r\n>>>>> the SQL grammar we have with something else. It's not clear to me what\r\n>>>>> would be the point of doing that, and it definitely looks like a huge\r\n>>>>> amount of work - e.g. we don't have any support for switching between\r\n>>>>> two distinct grammars the way you envision, and just that alone seems\r\n>>>>> like a multi-year project. And if you don't have that capability, all\r\n>>>>> external tools kinda stop working. Good luck with running such \r\n>>>>> database.\r\n>>>>\r\n>>>> I was considering having two distinct grammars as an option - thanks\r\n>>>> for indicating the effort involved. At the end of the day I want both\r\n>>>> my DSL and the PostgreSQL grammars to coexist. Is extending\r\n>>>> PostgreSQL's grammar with my own through the PostgreSQL extension\r\n>>>> infrastructure worth consideration or is it also difficult to develop?\r\n>>>> Could you suggest any reference material on this topic?\r\n>>>>\r\n>>>\r\n>>> Well, I'm not an expert in that area, but we currently don't have any\r\n>>> infrastructure to support that. 
It's a topic that was discussed in the\r\n>>> past (perhaps you can find some references in the archives) and it\r\n>>> generally boils down to:\r\n>>>\r\n>>> 1) We're using bison as parser generator.\r\n>>> 2) Bison does not allow adding rules on the fly.\r\n>>>\r\n>>> So you have to modify the in-core src/backend/parser/gram.y and rebuild\r\n>>> postgres. See for example for an example of such discussion\r\n>>>\r\n>>> https://www.postgresql.org/message-id/flat/CABSN6VeeEhwb0HrjOCp9kHaWm0Ljbnko5y-0NKsT_%3D5i5C2jog%40mail.gmail.com \r\n>>>\r\n>>>\r\n>>>\r\n>>> When two of the smartest people on the list say it's a hard problem, it\r\n>>> probably is. Particularly for someone who does not know the internals.\r\n>> You are right. Thanks for bringing it to my attention!\r\n>>\r\n>> I didn't design my language for interaction with triggers and whatnot,\r\n>> but I think that it would be very interesting to support those as well,\r\n>> so looking at CREATE LANGUAGE functionality is actually exciting and\r\n>> appropriate once I make some changes in design. Thanks again for this \r\n>> point!\r\n>>\r\n> \r\n> ;-)\r\n> \r\n>> I hope this is not off topic but I was wondering if you know what are\r\n>> the intrinsic differences between HANDLER and INLINE parameters of\r\n>> CREATE LANGUAGE? I know that they are functions which are invoked at\r\n>> different instances of time (e.g. one is for handling anonymous code\r\n>> blocks), but at the end of the day they seem to have the same purpose?\r\n>>\r\n> \r\n> I've never written any PL handler, so I don't know. All I know is this\r\n> quote from the docs, right below the simple example of PL handler:\r\n> \r\n> Only a few thousand lines of code have to be added instead of the\r\n> dots to complete the call handler.\r\n> \r\n> I suppose the best idea to start an implementation is to copy an\r\n> existing PL implementation, and modify that. 
That's usually much easier\r\n> than starting from scratch, because you have something that works. Not\r\n> sure if PL/pgSQL is the right choice though, perhaps pick some other\r\n> language from https://wiki.postgresql.org/wiki/PL_Matrix\r\n> \r\n\r\nI understand that you never wrote any PL handler but was just thinking \r\nabout this functionality as a follow-up to our conversation. I was just \r\nwondering whether anonymous DO blocks *must* return void or not?\r\n\r\nThe docs for DO say it is a function returning void - \r\nhttps://www.postgresql.org/docs/current/sql-do.html\r\n\r\nBut the docs for CREATE LANGUAGE's INLINE HANDLER say 'typically return \r\nvoid' - https://www.postgresql.org/docs/current/sql-createlanguage.html\r\n\r\nIs the implication that we can make the DO block return something \r\nsomehow? I would be quite interested if there is a way of achieving this \r\nkind of functionality. My experiments using an SRF, which I have \r\nwritten, within an anonymous DO block just gives me an \"ERROR: \r\nset-valued function called in context that cannot accept a set\".\r\n\r\nAnyway maybe I'm going off on a tangent here... perhaps it is better to \r\nopen a new thread?\r\n\r\n> \r\n> regards\r\n> \r\n",
"msg_date": "Wed, 10 Jul 2019 00:23:22 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 5:23 PM Tom Mercha <mercha_t@hotmail.com> wrote:\n\n>\n> I understand that you never wrote any PL handler but was just thinking\n> about this functionality as a follow-up to our conversation. I was just\n> wondering whether anonymous DO blocks *must* return void or not?\n>\n> The docs for DO say it is a function returning void -\n> https://www.postgresql.org/docs/current/sql-do.html\n>\n> But the docs for CREATE LANGUAGE's INLINE HANDLER say 'typically return\n> void' - https://www.postgresql.org/docs/current/sql-createlanguage.html\n\n\nNo, the language cannot override the SQL execution environment's\nlimitations.\n\n\"The code block is treated as though it were the body of a function with no\nparameters, returning void. It is parsed and executed a single time.\"\n\nThe above applies regardless of the language the code block is written in.\n\nIt can, however, affect permanent session state (so, use tables).\n\nDavid J.\n",
"msg_date": "Tue, 9 Jul 2019 17:31:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On 10/07/2019 02:31, David G. Johnston wrote:\r\n> On Tue, Jul 9, 2019 at 5:23 PM Tom Mercha <mercha_t@hotmail.com> wrote:\r\n> \r\n>>\r\n>> I understand that you never wrote any PL handler but was just thinking\r\n>> about this functionality as a follow-up to our conversation. I was just\r\n>> wondering whether anonymous DO blocks *must* return void or not?\r\n>>\r\n>> The docs for DO say it is a function returning void -\r\n>> https://www.postgresql.org/docs/current/sql-do.html\r\n> \r\n> \r\n> \r\n>>\r\n> \r\n> But the docs for CREATE LANGUAGE's INLINE HANDLER say 'typically return\r\n>> void' - https://www.postgresql.org/docs/current/sql-createlanguage.html\r\n> \r\n> \r\n> No, the language cannot override the SQL execution environment's\r\n> limitations.\r\n> \r\n> \"The code block is treated as though it were the body of a function with no\r\n> parameters, returning void. It is parsed and executed a single time.\"\r\n> \r\n> The above applies regardless of the language the code block is written in.\r\n> \r\n> It can, however, affect permanent session state (so, use tables).\r\n> \r\n\r\nThank you very much for addressing the question.\r\n\r\nI am still a bit of a novice with PostgreSQL internals. Could you please \r\nprovide some more detail on your comment regarding affecting permanent \r\nsession state?\r\n\r\n> David J.\r\n> \r\n",
"msg_date": "Wed, 10 Jul 2019 00:43:56 +0000",
"msg_from": "Tom Mercha <mercha_t@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 5:43 PM Tom Mercha <mercha_t@hotmail.com> wrote:\n\n> I am still a bit of a novice with PostgreSQL internals. Could you please\n> provide some more detail on your comment regarding affecting permanent\n> session state?\n\n\nI was not referring to internals.\n\nBEGIN;\nCREATE TEMP TABLE tempdo (id int);\nDO $$\nBEGIN\nINSERT INTO tempdo VALUES (1);\nEND;\n$$;\nSELECT * FROM tempdo;\nROLLBACK;\n\nDavid J.\n",
"msg_date": "Tue, 9 Jul 2019 17:58:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending PostgreSQL with a Domain-Specific Language (DSL) -\n Development"
}
] |
[
{
"msg_contents": "Hello,\n\nI've noticed that renaming an indexed column produces inconsistencies in\nthe catalog. Namely, the attname of the attribute of the relation is\nproperly updated, whereas the attname of the attribute in the index is not,\nand keeps the old value.\n\nExample:\n\ntest # create table test (id int primary key);\nCREATE TABLE\ntest # alter table test rename id to idnew;\nALTER TABLE\ntest # select attrelid::regclass, attname from pg_attribute where attrelid\nin ('test'::regclass, 'test_pkey'::regclass) and attnum > 0;\n attrelid | attname\n-----------+---------\n test | idnew\n test_pkey | id\n\nWe ran into that while using wal2json, which uses the replication id index\nattnames to identify which columns are part of the primary key. If the\nprimary key column has been renamed, we end with no information about the\nidentity of the tuple being updated / deleted.\n\nI think this could be considered a bug in Postgres. If it isn't, what\nshould be the proper way to retrieve this information ?\n\n-- \n\n\n\n\nThis e-mail message and any attachments to it are intended only for the \nnamed recipients and may contain legally privileged and/or confidential \ninformation. If you are not one of the intended recipients, do not \nduplicate or forward this e-mail message.\n",
"msg_date": "Fri, 5 Jul 2019 12:37:25 +0200",
"msg_from": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com>",
"msg_from_op": true,
"msg_subject": "Inconsistency between attname of index and attname of relation"
},
{
"msg_contents": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com> writes:\n> I've noticed that renaming an indexed column produces inconsistencies in\n> the catalog. Namely, the attname of the attribute of the relation is\n> properly updated, whereas the attname of the attribute in the index is not,\n> and keeps the old value.\n\nIf memory serves, we used to try to rename index columns, and gave up\non that because it caused problems of its own. That's (one reason) why\nmodern versions of psql show a \"definition\" column in \\d of an index.\n\n> I think this could be considered a bug in Postgres.\n\nIt is not.\n\n> If it isn't, what\n> should be the proper way to retrieve this information ?\n\npsql uses pg_get_indexdef(), looks like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 10:22:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between attname of index and attname of relation"
},
{
"msg_contents": "Thank you for this quick answer, I'll report the bug to wal2json then.\n\nOn Fri, Jul 5, 2019 at 16:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ronan Dunklau <ronan_dunklau@ultimatesoftware.com> writes:\n> > I've noticed that renaming an indexed column produces inconsistencies in\n> > the catalog. Namely, the attname of the attribute of the relation is\n> > properly updated, whereas the attname of the attribute in the index is\n> not,\n> > and keeps the old value.\n>\n> If memory serves, we used to try to rename index columns, and gave up\n> on that because it caused problems of its own. That's (one reason) why\n> modern versions of psql show a \"definition\" column in \\d of an index.\n>\n> > I think this could be considered a bug in Postgres.\n>\n> It is not.\n>\n> > If it isn't, what\n> > should be the proper way to retrieve this information ?\n>\n> psql uses pg_get_indexdef(), looks like.\n>\n> regards, tom lane\n>\n\n-- \n\n\n\n\nThis e-mail message and any attachments to it are intended only for the \nnamed recipients and may contain legally privileged and/or confidential \ninformation. If you are not one of the intended recipients, do not \nduplicate or forward this e-mail message.\n",
"msg_date": "Fri, 5 Jul 2019 16:58:43 +0200",
"msg_from": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency between attname of index and attname of relation"
},
{
"msg_contents": "Em sex, 5 de jul de 2019 às 07:37, Ronan Dunklau\n<ronan_dunklau@ultimatesoftware.com> escreveu:\n\n> We ran into that while using wal2json, which uses the replication id index attnames to identify which columns are part of the primary key. If the primary key column has been renamed, we end with no information about the identity of the tuple being updated / deleted.\n>\nOuch. That's a wal2json bug. I saw that you already opened an issue.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Fri, 5 Jul 2019 12:19:24 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency between attname of index and attname of relation"
}
] |
[
{
"msg_contents": "Hello,\n\nI’ve been testing out the 12 (beta) support for generated columns, mainly in order to add support for them to the database synchronisation software I maintain (Kitchen Sync).\n\nSo far it looks good compared to the similar functionality on MariaDB and MySQL (apart from VIRTUAL support which I see was pulled out because it wasn’t ready).\n\nBut I can’t see a way to change the expression for the generated column after it’s been created initially. I was looking for something like the SET/DROP NOT NULL alter clauses, perhaps ALTER TABLE foo ALTER bar SET GENERATED ALWAYS AS (…) STORED, but that doesn’t work and I can’t see anything in the docs. (MySQL/MariaDB don’t have anything specific, but they have the generic MODIFY column syntax which does allow you to change stored column definitions.)\n\nI can remove and add back the column, but that has the undesirable effect of changing the column order and requires recreating all the indexes etc. which gets messy.\n\nAny thoughts? Will this be implemented later?\n\nCheers,\nWill\n\n",
"msg_date": "Fri, 5 Jul 2019 22:42:35 +1200",
"msg_from": "Will Bryant <will.bryant@gmail.com>",
"msg_from_op": true,
"msg_subject": "Changing GENERATED ALWAYS AS expression"
}
] |
[
{
"msg_contents": "I am importing recent changes into my reloption patch and came to a question \nI did not find an answer to...\n\nvacuum_index_cleanup option exists for both heap and toast relations.\n\nAs I understand from the documentation, index cleanup is about reporting to the \naccess method that some tuples in the table that were indexed are dead, and should \nbe cleaned.\n\nAnd as far as I get, we do not index any TOAST tuples directly. They are \nobtained by getting the relation tuple and then deTOASTing it.\n\nSo I do not understand why we need vacuum_index_cleanup for TOAST tables. \nMaybe we should remove it from there?\n\nOr if I am wrong, can you explain where it is needed?\n\n\n-- \nSoftware Developer: https://www.upwork.com/freelancers/~014a87e140ff02c0da\nBody-oriented Therapist: https://vk.com/nataraj_rebalancing (Russian)\n\n\n",
"msg_date": "Fri, 05 Jul 2019 15:28:26 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Why vacuum_index_cleanup is needed for TOAST relations?"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 03:28:26PM +0300, Nikolay Shaplov wrote:\n>I am importing recent changes into my reloption patch and came to a question,\n>I did not find an answer...\n>\n>vacuum_index_cleanup option exists for both heap and toast relations.\n>\n>As I understand from documentation index cleanup is about is about reporting\n>access method that some tuples in table that were indexed are dead, and should\n>be cleaned.\n>\n>And as far as I get, we do not index any TOAST tuples directly. They are\n>obtained by getting relation tuple, and then deTOAST it.\n>\n>So I do not understand why do we need vacuum_index_cleanup for TOAST tables.\n>May be we should remove it from there??\n>\n>Or if I am wrong, can you explain where it is needed?\n>\n\nI'm not sure I understand your question / suggestion correctly, but\neach TOAST table certainly has an index on (chunk_id, chunk_seq) - in\nfact it's a unique index backing a primary key.\n\nIt's not clear to me what you mean by \"index any TOAST tuples directly\"\nor \"getting relation tuple\", perhaps you could explain.\n\nIMHO it's correct to have vacuum_index_cleanup even for TOAST tables.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 5 Jul 2019 16:15:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why vacuum_index_cleanup is needed for TOAST relations?"
}
] |
[
{
"msg_contents": "One of the recent mcv commits introduced an unused variable warning.\n\nmcv.c: In function 'statext_mcv_serialize':\nmcv.c:914:7: warning: unused variable 'itemlen' [-Wunused-variable]\n int itemlen = ITEM_SIZE(dim);\n\nThe attached fixes it.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 5 Jul 2019 10:13:25 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "mcv compiler warning"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 10:13:25AM -0400, Jeff Janes wrote:\n>One of the recent mcv commits introduced an unused variable warning.\n>\n>mcv.c: In function 'statext_mcv_serialize':\n>mcv.c:914:7: warning: unused variable 'itemlen' [-Wunused-variable]\n> int itemlen = ITEM_SIZE(dim);\n>\n>The attached fixes it.\n>\n\nThanks.\n\nI think I'll just get rid of the variable entirely, and will just call\nthe macro from the assert directly. The variable used to be referenced\non multiple places, but that changed during the serialization code\nreworks.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 5 Jul 2019 17:06:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: mcv compiler warning"
}
] |
[
{
"msg_contents": "Folks,\n\nCorey Huinker put together the documentation for this proposed\nfeature. Does this seem like a reasonable way to do it?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Fri, 5 Jul 2019 18:32:04 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "SHOW CREATE"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 12:32 PM David Fetter <david@fetter.org> wrote:\n\n> Folks,\n>\n> Corey Huinker put together the documentation for this proposed\n> feature. Does this seem like a reasonable way to do it?\n>\n>\nIn doing that work, it became clear that the command was serving two\nmasters:\n1. A desire to see the underlying nuts and bolts of a given database object.\n2. A desire to essentially make the schema portion of pg_dump a server side\ncommand.\n\nTo that end, I see splitting this into two commands, SHOW CREATE and SHOW\nDUMP.\n\nSHOW DUMP would be the original command minus the object type and object name\nspecifier, and it would dump the entire current database as seen from the\ncurrent user (again, no data).\n\nSHOW CREATE would still have all the object_type parameters as before, but\nwould only dump the one specified object, plus any dependent objects\nspecified in the WITH options (comments, grants, indexes, constraints,\npartitions, all).\n\nPlease note that any talk of a server side DESCRIBE is separate from this.\nThat would be a series of commands that would have result sets tailored to\nthe object type, and each one would be an inherent compromise between\ncompleteness and readability.\n\nI'd like to hear what others have to say, and incorporate that feedback\ninto a follow up proposal.\n",
"msg_date": "Fri, 5 Jul 2019 13:14:33 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SHOW CREATE"
},
{
"msg_contents": "\n\n> On 2019-07-05, at 12:14, Corey Huinker <corey.huinker@gmail.com> wrote:\n> \n> In doing that work, it became clear that the command was serving two masters:\n> 1. A desire to see the underlying nuts and bolts of a given database object.\n> 2. A desire to essentially make the schema portion of pg_dump a server side command.\n> \n> To that end, I see splitting this into two commands, SHOW CREATE and SHOW DUMP.\n\nI like the idea of having these features available via SQL as opposed to separate tools. Is it necessary to have specific commands for them? It seems they would potentially be more useful as functions, where they'd be available for all of the programmatic features of the rest of SQL.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n\n\n",
"msg_date": "Sat, 13 Jul 2019 18:32:41 -0500",
"msg_from": "Michael Glaesemann <grzm@seespotcode.net>",
"msg_from_op": false,
"msg_subject": "Re: SHOW CREATE"
},
{
"msg_contents": "On Sat, Jul 13, 2019 at 06:32:41PM -0500, Michael Glaesemann wrote:\n> \n> \n> > On 2019–07–05, at 12:14, Corey Huinker <corey.huinker@gmail.com> wrote:\n> > \n> > In doing that work, it became clear that the command was serving two masters:\n> > 1. A desire to see the underlying nuts and bolts of a given database object.\n> > 2. A desire to essentially make the schema portion of pg_dump a server side command.\n> > \n> > To that end, I see splitting this into two commands, SHOW CREATE\n> > and SHOW DUMP.\n> \n> I like the idea of having these features available via SQL as\n> opposed to separate tools. Is it necessary to have specific commands\n> for them? It seems they would potentially more useful as functions,\n> where they'd be available for all of the programmatic features of\n> the rest of SQL.\n\nHaving commands for them would help meet people's expectations coming\nfrom other RDBMSs.\n\nOn the other hand, making functions could just be done in SQL, which\nmight hurry the process along.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 14 Jul 2019 02:34:40 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: SHOW CREATE"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 1:14 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> I'd like to hear what others have to say, and incorporate that feedback into a follow up proposal.\n\nI am unclear how this could be implemented without ending up with a\nton of extra code that has to be maintained. pg_dump is a client-side\ntool that does this; if we also have a server-side tool that does it,\nthen we have two things to maintain instead of one. I think that's\nprobably a non-trivial effort. I think you need to give some serious\nthought to how to minimize that effort, and how to write tests that\nwill catch future problems without requiring everybody who ever makes\na DDL change ever again to test it against this functionality\nspecifically.\n\nI would also like to complain that the original post of this thread\ngave so little context that, unless you opened the patch, you wouldn't\nhave any idea what the thread was about. Ideally, the topic of a\nthread should be evident from the subject line; where that is\nimpractical, it should be evident from the text of the first email; if\nyou have to open an attachment, that's not good. It may deprive people\nwho may have a strong opinion on the topic but limited time an\nopportunity to notice that a discussion on that topic is occurring.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 10:18:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SHOW CREATE"
}
] |
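The function-based approach Michael suggests can be sketched with the deparse machinery PostgreSQL already ships. The sketch below is hypothetical: core PostgreSQL has no `pg_get_tabledef()` (the existing helpers — `pg_get_viewdef`, `pg_get_indexdef`, `pg_get_functiondef`, `pg_get_constraintdef` — cover other object types but not whole tables), which is exactly the gap the thread is about. This minimal version assembles a bare column list from `pg_attribute` and ignores constraints, defaults, and storage options:

```sql
-- Hypothetical sketch only: show_create_table is not a core function.
-- It builds a stripped-down CREATE TABLE statement from the catalogs.
CREATE FUNCTION show_create_table(rel regclass) RETURNS text
LANGUAGE sql STABLE AS $$
  SELECT format('CREATE TABLE %s (%s);', rel,
                string_agg(format('%I %s', attname,
                                  format_type(atttypid, atttypmod)),
                           ', ' ORDER BY attnum))
  FROM pg_attribute
  WHERE attrelid = rel AND attnum > 0 AND NOT attisdropped;
$$;

-- Usage:
--   SELECT show_create_table('my_table');
```

Making such a function complete (constraints, inheritance, partitioning, ACLs) is the pg_dump-sized maintenance burden Robert raises upthread.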
[
{
"msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/11/ddl-partitioning.html\nDescription:\n\nIn the documentation for Postgres 11 table partitioning, there is no mention\nof the requirement that the Primary Key of a partitioned table must contain\nthe partition key.\r\nIn fact the documentation on primary keys is so light that I am not even\n100% sure the above is correct. If the following table is not possible in\nPostgres 11, the documentation should find some way to make that clear. \r\n\r\n-- Create partitioned table with partition key not in primary key \r\ncreate table events (\r\n id bigint not null default nextval('events_id_seq'),\r\n created_date timestamp not null,\r\n constraint events_pk primary key (id)\r\n) partition by range (created_date);\r\n-- end create table\r\n\r\nI believe this should be documented in section \"5.10.2.3. Limitations\"",
"msg_date": "Fri, 05 Jul 2019 21:20:07 +0000",
"msg_from": "PG Doc comments form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 09:20:07PM +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/11/ddl-partitioning.html\n> Description:\n> \n> In the documentation for Postgres 11 table partitioning, there is no mention\n> of the requirement that the Primary Key of a partitioned table must contain\n> the partition key.\n> In fact the documentation on primary keys is so light that I am not even\n> 100% sure the above is correct. If the following table is not possible in\n> Postgres 11, the documentation should find some way to make that clear. \n> \n> -- Create partitioned table with partition key not in primary key \n> create table events (\n> id bigint not null default nextval('events_id_seq'),\n> created_date timestamp not null,\n> constraint events_pk primary key (id)\n> ) partition by range (created_date);\n> -- end create table\n> \n> I believe this should be documented in section \"5.10.2.3. Limitations\"\n\nCan someone comment on this? CC to hackers.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 8 Jul 2019 22:37:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 10:37:37PM -0400, Bruce Momjian wrote:\n> On Fri, Jul 5, 2019 at 09:20:07PM +0000, PG Doc comments form wrote:\n>> In the documentation for Postgres 11 table partitioning, there is no mention\n>> of the requirement that the Primary Key of a partitioned table must contain\n>> the partition key.\n>> In fact the documentation on primary keys is so light that I am not even\n>> 100% sure the above is correct. If the following table is not possible in\n>> Postgres 11, the documentation should find some way to make that clear. \n>> \n>> I believe this should be documented in section \"5.10.2.3. Limitations\"\n> \n> Can someone comment on this? CC to hackers.\n\nYep, that's the case:\n=# CREATE TABLE parent_tab (id int, id2 int primary key)\n PARTITION BY RANGE (id);\nERROR: 0A000: insufficient columns in PRIMARY KEY constraint\ndefinition\nDETAIL: PRIMARY KEY constraint on table \"parent_tab\" lacks column\n\"id\" which is part of the partition key.\nLOCATION: DefineIndex, indexcmds.c:894\n\nI agree with the report here that adding one sentence to 5.10.2.3\nwhich is for the limitations of declarative partitioning would be a\ngood idea. We don't mention the limitation in CREATE TABLE either\n(which would be rather incorrect IMO).\n\nAttached is an idea of patch for the documentation, using this\nwording:\n+ <listitem>\n+ <para>\n+ When defining a primary key on a partitioned table, the primary\n+ key column must be included in the partition key.\n+ </para>\n+ </listitem>\nIf somebody has any better idea for that paragraph, please feel free.\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 11:58:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Attached is an idea of patch for the documentation, using this\n> wording:\n> + <listitem>\n> + <para>\n> + When defining a primary key on a partitioned table, the primary\n> + key column must be included in the partition key.\n> + </para>\n> + </listitem>\n\nIsn't it the other way around, that the partition key column(s) must be\nincluded in the primary key? Maybe I'm confused, but it seems like\nwe couldn't enforce PK uniqueness otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 23:10:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 7:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Attached is an idea of patch for the documentation, using this\n> wording:\n> + <listitem>\n> + <para>\n> + When defining a primary key on a partitioned table, the primary\n> + key column must be included in the partition key.\n> + </para>\n> + </listitem>\n> If somebody has any better idea for that paragraph, please feel free.\n>\n\nReads a bit backward. How about:\n\n\"As uniqueness can only be enforced within an individual partition when\ndefining a primary key on a partitioned table all columns present in the\npartition key must also exist in the primary key.\"\n\nDavid J.",
"msg_date": "Mon, 8 Jul 2019 20:12:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 8:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 08, 2019 at 10:37:37PM -0400, Bruce Momjian wrote:\n> > On Fri, Jul 5, 2019 at 09:20:07PM +0000, PG Doc comments form wrote:\n> >> In the documentation for Postgres 11 table partitioning, there is no\n> mention\n> >> of the requirement that the Primary Key of a partitioned table must\n> contain\n> >> the partition key.\n> >> In fact the documentation on primary keys is so light that I am not even\n> >> 100% sure the above is correct. If the following table is not possible\n> in\n> >> Postgres 11, the documentation should find some way to make that\n> clear.\n> >>\n> >> I believe this should be documented in section \"5.10.2.3. Limitations\"\n> >\n> > Can someone comment on this? CC to hackers.\n>\n> Yep, that's the case:\n> =# CREATE TABLE parent_tab (id int, id2 int primary key)\n> PARTITION BY RANGE (id);\n> ERROR: 0A000: insufficient columns in PRIMARY KEY constraint\n> definition\n> DETAIL: PRIMARY KEY constraint on table \"parent_tab\" lacks column\n> \"id\" which is part of the partition key.\n> LOCATION: DefineIndex, indexcmds.c:894\n>\nsame is valid for UNIQUE constraint also.\n\npostgres=# CREATE TABLE parent_tab (id int, id2 int unique)\n PARTITION BY RANGE (id);\nERROR: insufficient columns in UNIQUE constraint definition\nDETAIL: UNIQUE constraint on table \"parent_tab\" lacks column \"id\" which is\npart of the partition key.\n\n>\n> I agree with the report here that adding one sentence to 5.10.2.3\n> which is for the limitations of declarative partitioning would be a\n> good idea. We don't mention the limitation in CREATE TABLE either\n> (which would be rather incorrect IMO).\n>\n> Attached is an idea of patch for the documentation, using this\n> wording:\n> + <listitem>\n> + <para>\n> + When defining a primary key on a partitioned table, the primary\n> + key column must be included in the partition key.\n> + </para>\n> + </listitem>\n> If somebody has any better idea for that paragraph, please feel free.\n> --\n> Michael\n>",
"msg_date": "Tue, 9 Jul 2019 11:39:07 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 08:12:18PM -0700, David G. Johnston wrote:\n> Reads a bit backward. How about:\n> \n> \"As uniqueness can only be enforced within an individual partition when\n> defining a primary key on a partitioned table all columns present in the\n> partition key must also exist in the primary key.\"\n\nYes, I was not really inspired on this one.\n\nLooking closely at the code in DefineIndex() (and as Rajkumar has\nmentioned upthread for unique constraints) this can happen for primary\nkeys, unique constraints and exclusion constraints. So we had better\nmention all three of them. I am not sure that we need to be explicit\nabout the uniqueness part though, let's say the following:\n\"When defining a primary key, a unique constraint or an exclusion\nconstraint on a partitioned table, all the columns present in the\nconstraint definition must be included in the partition key.\"\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 15:34:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 11:10:51PM -0400, Tom Lane wrote:\n> Isn't it the other way around, that the partition key column(s) must\n> be\n> included in the primary key? Maybe I'm confused, but it seems like\n> we couldn't enforce PK uniqueness otherwise.\n\nYes you are right. The full column list of the partition key needs to\nbe included in the constraint, but that's not true the other way\naround.\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 15:49:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 03:34:48PM +0900, Michael Paquier wrote:\n> Looking closely at the code in DefineIndex() (and as Rajkumar has\n> mentioned upthread for unique constraints) this can happen for primary\n> keys, unique constraints and exclusion constraints. So we had better\n> mention all three of them. I am not sure that we need to be explicit\n> about the uniqueness part though, let's say the following:\n> \"When defining a primary key, a unique constraint or an exclusion\n> constraint on a partitioned table, all the columns present in the\n> constraint definition must be included in the partition key.\"\n\nLet's try again that (that's a long day..):\n\"When defining a primary key, a unique constraint or an exclusion\nconstraint on a partitioned table, all the columns present in the\npartition key must be included in the constraint definition.\"\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 15:51:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 11:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jul 08, 2019 at 08:12:18PM -0700, David G. Johnston wrote:\n> > Reads a bit backward. How about:\n> >\n> > \"As uniqueness can only be enforced within an individual partition when\n> > defining a primary key on a partitioned table all columns present in the\n> > partition key must also exist in the primary key.\"\n>\n> Yes, I was not really inspired on this one.\n>\n> Looking closely at the code in DefineIndex() (and as Rajkumar has\n> mentioned upthread for unique constraints) this can happen for primary\n> keys, unique constraints and exclusion constraints. So we had better\n> mention all three of them. I am not sure that we need to be explicit\n> about the uniqueness part though, let's say the following:\n> \"When defining a primary key, a unique constraint or an exclusion\n> constraint on a partitioned table, all the columns present in the\n> constraint definition must be included in the partition key.\"\n>\n\nThat isn't true, it needs to be reversed at least:\n\n\"Table-scoped constraints defined on a partitioned table - primary key,\nunique, and exclusion - must include the partition key columns because the\nenforcement of such constraints is performed independently on each\npartition.\"\n\nThe complaint here is the user puts a PK id column on their partitioned\ntable and wonders why they need the partition key columns to also be in the\nPK. The answer is the description provided above - with the reminder (or\ninitial cluing in depending) to the reader that this limitation exists\nbecause we do not implement global constraints/indexes but instead the\ndefinition on the partitioned table is simply copied to all of its\npartitions. For me this seems worthy of recapping at this location (I\nhaven't gone looking for a nice cross-reference link to put there).\n\nDavid J.",
"msg_date": "Mon, 8 Jul 2019 23:59:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "Sorry for jumping in late here.\n\nOn Tue, Jul 9, 2019 at 3:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 09, 2019 at 03:34:48PM +0900, Michael Paquier wrote:\n> > Looking closely at the code in DefineIndex() (and as Rajkumar has\n> > mentioned upthread for unique constraints) this can happen for primary\n> > keys, unique constraints and exclusion constraints. So we had better\n> > mention all three of them. I am not sure that we need to be explicit\n> > about the uniqueness part though, let's say the following:\n> > \"When defining a primary key, a unique constraint or an exclusion\n> > constraint on a partitioned table, all the columns present in the\n> > constraint definition must be included in the partition key.\"\n>\n> Let's try again that (that's a long day..):\n> \"When defining a primary key, a unique constraint or an exclusion\n> constraint on a partitioned table, all the columns present in the\n> partition key must be included in the constraint definition.\"\n\nAs mentioned in the docs, defining exclusion constraints on\npartitioned tables is not supported.\n\n-- on 13dev\ncreate table p (a int, exclude using gist (a with &&)) partition by list (a);\nERROR: exclusion constraints are not supported on partitioned tables\n\nRegarding primary key and unique constraints, how about writing it\nsuch that it's clear that there are limitations? Maybe like:\n\n\"While defining a primary key and unique constraints on partitioned\ntables is supported, the set of columns being constrained must include\nall of the partition key columns.\"\n\nMaybe, as David also says, it might be a good idea to mention the\nreason why. So maybe like:\n\n\"While defining a primary key and unique constraints on partitioned\ntables is supported, the set of columns being constrained must include\nall of the partition key columns. This limitation exists because\n<productname>PostgreSQL</productname> can ensure uniqueness only\nacross a given partition.\"\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/docs/12/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n5.11.2.3. Limitations\nThe following limitations apply to partitioned tables:\n* There is no way to create an exclusion constraint spanning all\npartitions; it is only possible to constrain each leaf partition\nindividually.\n\n\n",
"msg_date": "Tue, 9 Jul 2019 16:28:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On 2019-Jul-09, Amit Langote wrote:\n\n> As mentioned in the docs, defining exclusion constraints on\n> partitioned tables is not supported.\n\nRight.\n\n> \"While defining a primary key and unique constraints on partitioned\n> tables is supported, the set of columns being constrained must include\n> all of the partition key columns. This limitation exists because\n> <productname>PostgreSQL</productname> can ensure uniqueness only\n> across a given partition.\"\n\nI feel that PKs are mostly a special case of UNIQUE keys, so I tend to\nmention UNIQUE as the central element and let PKs fall out from that.\nThat's a mild personal preference only though. Anyway, based on your\nproposed wording, I wrote this:\n\n <listitem>\n <para>\n Unique constraints on partitioned tables (as well as primary keys)\n must constrain all the partition key columns. This limitation exists\n because <productname>PostgreSQL</productname> can only enforce\n uniqueness in each partition individually.\n </para>\n </listitem>\n\nI'm not really sure about the \"must constrain\" verbiage. Is that really\ncomprehensible? Also, I chose to place it just above the existing para\nthat mentions FK limitations, which reads:\n\n <listitem>\n <para>\n While primary keys are supported on partitioned tables, foreign\n keys referencing partitioned tables are not supported. (Foreign key\n references from a partitioned table to some other table are supported.)\n </para>\n\nYour proposed wording seemed to use too many of the same words, which\nprompted me to change a bit. Maybe I read too many novels and\ninsufficient technical literature.\n\nIn CREATE TABLE, we already have this:\n <para>\n When establishing a unique constraint for a multi-level partition\n hierarchy, all the columns in the partition key of the target\n partitioned table, as well as those of all its descendant partitioned\n tables, must be included in the constraint definition.\n </para>\n\nwhich may not be the pinnacle of clarity, but took some time to craft\nand I think is correct. Also it doesn't mention primary keys\nexplicitly; maybe we should patch it by adding \"(as well as a primary\nkey)\" right after \"a unique constraint\". Thoughts?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Jul 2019 18:53:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> That's a mild personal preference only though. Anyway, based on your\n> proposed wording, I wrote this:\n\n> <listitem>\n> <para>\n> Unique constraints on partitioned tables (as well as primary keys)\n> must constrain all the partition key columns. This limitation exists\n> because <productname>PostgreSQL</productname> can only enforce\n> uniqueness in each partition individually.\n> </para>\n> </listitem>\n\n> I'm not really sure about the \"must constrain\" verbiage. Is that really\n> comprehensible?\n\nI think \"must include\" might be better.\n\n> In CREATE TABLE, we already have this:\n> <para>\n> When establishing a unique constraint for a multi-level partition\n> hierarchy, all the columns in the partition key of the target\n> partitioned table, as well as those of all its descendant partitioned\n> tables, must be included in the constraint definition.\n> </para>\n\n> which may not be the pinnacle of clarity, but took some time to craft\n> and I think is correct. Also it doesn't mention primary keys\n> explicitly; maybe we should patch it by adding \"(as well as a primary\n> key)\" right after \"a unique constraint\". Thoughts?\n\nI'd leave that alone. I don't think the parenthetical comment about\nprimary keys in your new text is adding much either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jul 2019 18:59:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 7:53 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jul-09, Amit Langote wrote:\n> > \"While defining a primary key and unique constraints on partitioned\n> > tables is supported, the set of columns being constrained must include\n> > all of the partition key columns. This limitation exists because\n> > <productname>PostgreSQL</productname> can ensure uniqueness only\n> > across a given partition.\"\n>\n> I feel that PKs are mostly a special case of UNIQUE keys, so I tend to\n> mention UNIQUE as the central element and let PKs fall out from that.\n> That's a mild personal preference only though. Anyway, based on your\n> proposed wording, I wrote this:\n>\n> <listitem>\n> <para>\n> Unique constraints on partitioned tables (as well as primary keys)\n> must constrain all the partition key columns. This limitation exists\n> because <productname>PostgreSQL</productname> can only enforce\n> uniqueness in each partition individually.\n> </para>\n> </listitem>\n>\n> I'm not really sure about the \"must constrain\" verbiage. Is that really\n> comprehensible?\n\nLooks good after replacing \"must constraint\" by \"must include\" as\nsuggested by Tom.\n\n> Also, I chose to place it just above the existing para\n> that mentions FK limitations\n\nThis placement of the new text sounds good.\n\n> In CREATE TABLE, we already have this:\n> <para>\n> When establishing a unique constraint for a multi-level partition\n> hierarchy, all the columns in the partition key of the target\n> partitioned table, as well as those of all its descendant partitioned\n> tables, must be included in the constraint definition.\n> </para>\n>\n> which may not be the pinnacle of clarity, but took some time to craft\n> and I think is correct. Also it doesn't mention primary keys\n> explicitly; maybe we should patch it by adding \"(as well as a primary\n> key)\" right after \"a unique constraint\". Thoughts?\n\nWorks for me.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 10 Jul 2019 14:30:15 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 06:59:59PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> <listitem>\n>> <para>\n>> Unique constraints on partitioned tables (as well as primary keys)\n>> must constrain all the partition key columns. This limitation exists\n>> because <productname>PostgreSQL</productname> can only enforce\n>> uniqueness in each partition individually.\n>> </para>\n>> </listitem>\n> \n>> I'm not really sure about the \"must constrain\" verbiage. Is that really\n>> comprehensible?\n> \n> I think \"must include\" might be better.\n\n+1.\n\n>> which may not be the pinnacle of clarity, but took some time to craft\n>> and I think is correct. Also it doesn't mention primary keys\n>> explicitly; maybe we should patch it by adding \"(as well as a primary\n>> key)\" right after \"a unique constraint\". Thoughts?\n> \n> I'd leave that alone. I don't think the parenthetical comment about\n> primary keys in your new text is adding much either.\n\nAgreed with not bothering about this block and not adding the\nparenthetical comment.\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 16:13:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 11: Table Partitioning and Primary Keys"
}
] |
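The limitation discussed in this thread, and the reason behind it that David and Amit spell out, can be demonstrated with the reporter's own table. This is a sketch against PostgreSQL 11 or later, with the sequence default dropped for brevity; the first statement fails with the error quoted upthread:

```sql
-- Fails: the primary key does not include the partition key column,
-- so per-partition enforcement could not guarantee uniqueness.
CREATE TABLE events (
    id           bigint NOT NULL,
    created_date timestamp NOT NULL,
    CONSTRAINT events_pk PRIMARY KEY (id)
) PARTITION BY RANGE (created_date);
-- ERROR:  insufficient columns in PRIMARY KEY constraint definition

-- Works: all partition key columns are included in the constraint,
-- so each partition can enforce its share of the key locally.
CREATE TABLE events (
    id           bigint NOT NULL,
    created_date timestamp NOT NULL,
    CONSTRAINT events_pk PRIMARY KEY (id, created_date)
) PARTITION BY RANGE (created_date);

-- Caveat: this does not make id globally unique by itself; rows in
-- different partitions may share an id unless ids are drawn from a
-- single sequence or the application otherwise prevents reuse.
```

The caveat is the flip side of the rule: because there are no global indexes, the composite key is what each partition's local index can actually enforce.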
[
{
"msg_contents": "Greetings:\nI am not sure if this has been brought up before but Python 2 is EOL on Jan\n1 2020. After that time there will not be any security fixes or patches.\n\nhttps://python3statement.org/\n\nAccording to our most recent official documentation:\nhttps://www.postgresql.org/docs/11/plpython-python23.html\n\n*\" The default will probably be changed to Python 3 in a distant future\nrelease of PostgreSQL, depending on the progress of the migration to Python\n3 in the Python community.\"*\n\nI know we are late in the Postgresql 12 cycle but I think switching the\ndefault to Python 3 is warranted given:\n1. The serious nature of not having a default supported Python version soon\nafter the PostgreSQL 12 release\n2. The next opportunity to change the default will be late 2020\n\nIf we do not switch our default version and a vulnerability arises in\nPython 2 then we will end up either\n1. Telling our users to run the default PL/Python with a known security\nvulnerability\n2. The PostgreSQL community patching it's python\n\nI know there are implications for swapping the default version but I think\nthat is outweighed by the seriousness of the situation.\n\nThanks\nSteve",
"msg_date": "Sat, 6 Jul 2019 12:02:28 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": true,
"msg_subject": "Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "On 2019-07-06 21:02, Steven Pousty wrote:\n> /\" The default will probably be changed to Python 3 in a distant future\n> release of PostgreSQL, depending on the progress of the migration to\n> Python 3 in the Python community.\"/\n>\n> I know we are late in the Postgresql 12 cycle but I think switching the\n> default to Python 3 is warranted given:\n\n\"The default\" in the above statement refers to what version the language\nname \"plpythonu\" invokes. That is separate from whether an installer or\npackager chooses to install a Python-2-based version. I expect that as\noperating systems remove Python 2 from their systems, packagers will\nalso remove Python-2-based plpython packages. So there is no issue.\n\nWe could change plpythonu to be an alias for plpython3u instead of\nplpython2u, but that would be a PG13 (or later) change. I notice that\nPEP-394 now supports this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Jul 2019 00:28:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "Why would it be a 13 or later issue?\nI am specifically talking about changing the default.\n\nOn Sat, Jul 6, 2019, 6:28 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-07-06 21:02, Steven Pousty wrote:\n> > /\" The default will probably be changed to Python 3 in a distant future\n> > release of PostgreSQL, depending on the progress of the migration to\n> > Python 3 in the Python community.\"/\n> >\n> > I know we are late in the Postgresql 12 cycle but I think switching the\n> > default to Python 3 is warranted given:\n>\n> \"The default\" in the above statement refers to what version the language\n> name \"plpythonu\" invokes. That is separate from whether an installer or\n> packager chooses to install a Python-2-based version. I expect that as\n> operating systems remove Python 2 from their systems, packagers will\n> also remove Python-2-based plpython packages. So there is no issue.\n>\n> We could change plpythonu to be an alias for plpython3u instead of\n> plpython2u, but that would be a PG13 (or later) change. I notice that\n> PEP-394 now supports this.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nWhy would it be a 13 or later issue?I am specifically talking about changing the default. On Sat, Jul 6, 2019, 6:28 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-07-06 21:02, Steven Pousty wrote:\n> /\" The default will probably be changed to Python 3 in a distant future\n> release of PostgreSQL, depending on the progress of the migration to\n> Python 3 in the Python community.\"/\n>\n> I know we are late in the Postgresql 12 cycle but I think switching the\n> default to Python 3 is warranted given:\n\n\"The default\" in the above statement refers to what version the language\nname \"plpythonu\" invokes. That is separate from whether an installer or\npackager chooses to install a Python-2-based version. 
I expect that as\noperating systems remove Python 2 from their systems, packagers will\nalso remove Python-2-based plpython packages. So there is no issue.\n\nWe could change plpythonu to be an alias for plpython3u instead of\nplpython2u, but that would be a PG13 (or later) change. I notice that\nPEP-394 now supports this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 6 Jul 2019 18:34:56 -0400",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "On 2019-07-07 00:34, Steven Pousty wrote:\n> Why would it be a 13 or later issue?\n\nBecause PostgreSQL 12 is feature frozen and in beta, and this issue is\nnot a regression.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Jul 2019 16:37:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-07-07 00:34, Steven Pousty wrote:\n>> Why would it be a 13 or later issue?\n\n> Because PostgreSQL 12 is feature frozen and in beta, and this issue is\n> not a regression.\n\nMore to the point: it does not seem to me that we should change what\n\"plpythonu\" means until Python 2 is effectively extinct in the wild.\nWhich is surely some years away yet. If we change it sooner than\nthat, the number of people complaining that we broke perfectly good\ninstallations will vastly outweigh the number of people who are\nhappy because we saved them one keystroke per function definition.\n\nAs a possibly relevant comparison, I get the impression that most\npackagers of Python are removing the versionless \"python\" executable\nname and putting *nothing* in its place. You have to write python2\nor python3 nowadays. Individuals might still be setting up symlinks\nso that \"python\" does what they want, but it's not happening at the\npackaging/distro level.\n\n(This comparison suggests that maybe what we should be thinking\nabout is a way to make it easier to change what \"plpythonu\" means\nat the local-opt-in level.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Jul 2019 11:26:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "The point of the links I sent from the Python community is that they wanted\nPython extinct in the wild as of Jan 1 next year. They are never fixing it,\neven for a security vulnerability.\n\nIt seems to me we roll out breaking changes with major versions. So yes, if\nthe user chooses to upgrade to 12 and they haven't migrated their code to\nPython 2 it might not work.\n\nI don't have a good answer to no changes except regressions. I do hope,\ngiven how much our users expect us to be secure, that we weigh the\nconsequences of making our default Python a version which is dead to the\ncommunity a month or so after Postgresql 12s release. We can certainly take\nthe stance of leave the Python version be, but it seems that we should then\ncome up with a plan if there is a security vulnerability found in Python 2\nafter Jan 1st 2020.\n\nIf Python 2 wasn't our *default* choice then I would be much more\ncomfortable letting this just pass without mention.\n\nAll that aside, I think allowing the admin set the default version of\nplpythonu to be an excellent idea.\n\nThanks\nSteve\n\n\n\nOn Sun, Jul 7, 2019, 8:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > On 2019-07-07 00:34, Steven Pousty wrote:\n> >> Why would it be a 13 or later issue?\n>\n> > Because PostgreSQL 12 is feature frozen and in beta, and this issue is\n> > not a regression.\n>\n> More to the point: it does not seem to me that we should change what\n> \"plpythonu\" means until Python 2 is effectively extinct in the wild.\n> Which is surely some years away yet. 
If we change it sooner than\n> that, the number of people complaining that we broke perfectly good\n> installations will vastly outweigh the number of people who are\n> happy because we saved them one keystroke per function definition.\n>\n> As a possibly relevant comparison, I get the impression that most\n> packagers of Python are removing the versionless \"python\" executable\n> name and putting *nothing* in its place. You have to write python2\n> or python3 nowadays. Individuals might still be setting up symlinks\n> so that \"python\" does what they want, but it's not happening at the\n> packaging/distro level.\n>\n> (This comparison suggests that maybe what we should be thinking\n> about is a way to make it easier to change what \"plpythonu\" means\n> at the local-opt-in level.)\n>\n> regards, tom lane\n>",
"msg_date": "Sun, 7 Jul 2019 12:26:11 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "On 2019-07-07 21:26, Steven Pousty wrote:\n> The point of the links I sent from the Python community is that they\n> wanted Python extinct in the wild as of Jan 1 next year. They are never\n> fixing it, even for a security vulnerability.\n\nThe operating systems that most of our users are going to run PostgreSQL\n12 on will keep maintaining Python 2 for the foreseeable future.\n\nI'm not trying to dismiss the importance of managing the Python\ntransition. But this issue has been known for many years, and the\ncurrent setup is more or less in line with the wider world. For\nexample, the Debian release that came out over the weekend still ships\nwith /usr/bin/python being Python 2. So it is neither timely nor urgent\nto try to make some significant change about this in PostgreSQL 12 right\nnow. I would welcome patches for this for PostgreSQL 13.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jul 2019 13:07:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I'm not trying to dismiss the importance of managing the Python\n> transition. But this issue has been known for many years, and the\n> current setup is more or less in line with the wider world. For\n> example, the Debian release that came out over the weekend still ships\n> with /usr/bin/python being Python 2. So it is neither timely nor urgent\n> to try to make some significant change about this in PostgreSQL 12 right\n> now. I would welcome patches for this for PostgreSQL 13.\n\nI don't think it's been mentioned in this thread yet, but we *did*\nrecently install a configure-time preference for python3 over python2:\n\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nBranch: master Release: REL_12_BR [7291733ac] 2019-01-13 10:23:48 +0100\nBranch: REL_11_STABLE Release: REL_11_2 [3d498c65a] 2019-01-13 10:24:21 +0100\nBranch: REL_10_STABLE Release: REL_10_7 [cd1873160] 2019-01-13 10:25:23 +0100\n\n configure: Update python search order\n \n Some systems don't ship with \"python\" by default anymore, only\n \"python3\" or \"python2\" or some combination, so include those in the\n configure search.\n \n Discussion: https://www.postgresql.org/message-id/flat/1457.1543184081%40sss.pgh.pa.us#c9cc1199338fd6a257589c6dcea6cf8d\n\nconfigure's search order is now $PYTHON, python, python3, python2.\nI think it will be a very long time, if ever, before there would be\na reason to consider changing that. Both of the first two options\nrepresent following a clear user preference.\n\nSo the only thing that's really at stake is when/whether we can make\n\"plpythonu\" a synonym for \"plpython3u\" rather than \"plpython2u\".\nAs I said already, I think that's got to be a long way off, since the\nwhole problem here is that python3 isn't a drop-in replacement for\npython2. 
We're much more likely to break existing functions than do\nanything useful by forcibly switching the synonym.\n\nBut I could support having a way for individual installations to change\nwhat the synonym means locally. Perhaps we could think about how to do\nthat in conjunction with the project of getting rid of pg_pltemplate\nthat's been kicked around before [1][2][3].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/763f2fe4-743f-d530-8831-20811edd3d6a%402ndquadrant.com\n\n[2] https://www.postgresql.org/message-id/flat/7495.1524861244%40sss.pgh.pa.us\n\n[3] https://www.postgresql.org/message-id/flat/5351890.TdMePpdHBD%40nb.usersys.redhat.com\n\n\n",
"msg_date": "Mon, 08 Jul 2019 12:25:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
},
{
"msg_contents": "I wrote:\n> But I could support having a way for individual installations to change\n> what the synonym means locally. Perhaps we could think about how to do\n> that in conjunction with the project of getting rid of pg_pltemplate\n> that's been kicked around before [1][2][3].\n\n... actually, if we had that (i.e., languages fully defined by extensions\nwith no help from pg_pltemplate), wouldn't this be nearly trivial?\nI'm imagining two extensions, one that defines plpythonu to call the\npython2 code and one that defines it to call the python3 code, and you\ninstall whichever you want. They're separate from the extensions that\ndefine plpython2u and plpython3u, so mix and match as you wish.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 12:37:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Switching PL/Python to Python 3 by default in PostgreSQL 12"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next cluster of typos and inconsistencies in\nthe tree:\n5.1. datetkntbl -> datetktbl\n5.2. datminxmid -> datminmxid\n5.3. DatumGetP -> DatumGetPointer\n5.4. ECPG_COMPILE, DECPG_COMPILE -> remove (orphaned since 328d235)\n5.5. defer_cleanup_age -> vacuum_defer_cleanup_age\n5.6. descriptor_index -> remove (orphaned since 991b974)\n5.7. DestroyBuilder, InitBuilder, SetDoc -> DestroyOpaque, InitOpaque,\nSetDocument\n5.8. dictlexize -> thesaurus_lexize\n5.9. regression.diffsregression.planregress/inh -> regression.diffs\nplanregress/diffs.inh\n5.10. dllist -> dlist\n5.11. DocRepresentaion -> DocRepresentation\n5.12. dosplit -> remove (such function is not present since introduction\nin 9892ddf)\n5.13. DOWN_MEM_FENCE -> _DOWN_MEM_FENCE\n5.14. dp_pg_stop_backup -> do_pg_stop_backup\n5.15. DropRelFileNodeAllBuffers -> DropRelFileNodesAllBuffers\n5.16. dshash_release -> dshash_release_lock\n5.17. EACCESS -> remove (not used since introduction in 12c94238)\n5.18. ECPGcheck_PQresult -> ecpg_check_PQresult (renamed in 7793c6ec,\nreferenced code moved in ecpg_execute() in 61bee9f7)\n5.19. ecpg_compatlib -> libecpg_compat\n5.20. ECPGerrmsg -> remove (not used since introduction in 244d2d67)\n5.21. ecpg_free_auto_mem -> remove (not used since introduction in 7793c6ec)\n5.22. ecpggetdescp -> remove (not used since introduction in 90326c01)\n5.23. endBlk -> numBlks\n5.24. endMemb, startMemb, endOff, startOff -> endTruncMemb,\nstartTruncMemb, endTruncOff, startTruncOff\n5.25. EndPlan -> ExecEndPlan\n5.26. EndResult -> ExprEndResult\n5.27. equivalentOpersAfterPromotion -> remove (irrelevant since\n8536c962, but the whole comments is too old to be informational too)\n5.28. es_jit_combined_instr -> es_jit_worker_instr (renamed in c03c1449)\n5.29. ExclusiveRowLock -> RowExclusiveLock\n5.30. exdended -> extended (user-visible, I would fix it in\nREL_12_STABLE too)\n5.31. ExecBitmapHeapNext -> BitmapHeapNext\n5.32. 
ExecBuildProjectInfo -> ExecBuildProjectionInfo\n5.33. ExecDirection -> remove (this variable is not present since PG95-1_01)\n5.34. ExecEndRecursiveUnionScan -> ExecEndRecursiveUnion\n5.35. ExecGrantStmt -> ExecuteGrantStmt\n5.36. ExecInitRecursiveUnionScan -> ExecInitRecursiveUnion\n5.37. ExecSeqNext -> SeqNext\n5.38. exec_statement_return -> exec_stmt_return\n5.39. exec_subplan_get_plan -> remove (not used since 1cc29fe7)\n5.40. ExecSubqueryNext -> SubqueryNext\n5.41. ExecValuesNext -> ValuesNext\n5.42. existing_oid -> existing_relation_id\n5.43. exit_fatal -> fatal\n5.44. expectedTLIs -> expectedTLEs\n5.45. ExprEvalExpr -> ExecEvalExpr\n5.46. exprhasexecparam -> remove (orphaned since 6630ccad)\n5.47. ExprReadyExpr -> ExecReadyExpr\n5.48. EXTENSION_REALLY_RETURN_NULL -> remove (the behaviour changed with\na7124870)\n\nThere are some other ancient comments like spotted in 5.27, e.g. for\ntextcat(), text_substr() in varlena.c... It seems that they serve more \nhistoric than informational purposes today.\n\nBest regards,\nAlexander",
"msg_date": "Sun, 7 Jul 2019 08:03:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for HEAD (take 5)"
},
{
"msg_contents": "On Sun, Jul 07, 2019 at 08:03:01AM +0300, Alexander Lakhin wrote:\n> 5.8. dictlexize -> thesaurus_lexize\n\nThere could be other dictionaries.\n\n> 5.9. regression.diffsregression.planregress/inh -> regression.diffs\n> planregress/diffs.inh\n\nI am wondering if we should not just nuke that... For now I have\nincluded your change as the mistake is obvious, but I am starting a\nnew thread. The history around this script does not play in favor of\nit:\ncommit: 2fc80e8e8304913c8dd1090bb2976632c0f4a8c3\nauthor: Bruce Momjian <bruce@momjian.us>\ndate: Wed, 12 Feb 2014 17:29:19 -0500\nRename 'gmake' to 'make' in docs and recommended commands\n\nThis simplifies the docs and makes it easier to cut/paste command\nlines.\n\ncommit: c77e2e42fb4cf5c90a7562b9df289165ff164df1\nauthor: Tom Lane <tgl@sss.pgh.pa.us>\ndate: Mon, 18 Dec 2000 02:45:47 +0000\nTweak regressplans.sh to use any already-set PGOPTIONS.\n\nAnd looking closer it seems that there are other issues linked to\nit...\n\n> 5.27. equivalentOpersAfterPromotion -> remove (irrelevant since\n> 8536c962, but the whole comments is too old to be informational too)\n\nIt seems to me that this could be much more reworked. So discarded\nfor now.\n\n> 5.29. ExclusiveRowLock -> RowExclusiveLock\n\nGrammar mistake here.\n\n> 5.31. ExecBitmapHeapNext -> BitmapHeapNext\n\nWell, BitmapHeapRecheck is not listed in the interface routines\neither.. \n\n> 5.37. ExecSeqNext -> SeqNext\n> 5.40. ExecSubqueryNext -> SubqueryNext\n> 5.41. ExecValuesNext -> ValuesNext\n\nHere as well these sets are incomplete. Instead for those series I'd\nlike to think that it would be better to do a larger cleanup and just\nremove all these in the executor \"INTERFACE ROUTINES\". Your proposed\npatches don't make things better either, as the interfaces are listed\nin alphabetical order.\n\n> 5.39. exec_subplan_get_plan -> remove (not used since 1cc29fe7)\n\nThis could be used by extensions. 
So let's not remove it.\n\nAnd committed most of the rest. Thanks.\n--\nMichael",
"msg_date": "Mon, 8 Jul 2019 13:14:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 5)"
}
] |
[
{
"msg_contents": "Hello,\n\nJust a bit of background - I currently work as a full-time db developer,\nmostly with Ms Sql server but I like Postgres a lot, especially because I\nreally program in sql all the time and type system / plpgsql language of\nPostgres seems to me more suitable for actual programming then t-sql.\n\nHere's the problem - current structure of the language doesn't allow to\ndecompose the code well and split calculations and data into different\nmodules.\n\nFor example. Suppose I have a table employee and I have a function like\nthis (I'll skip definition of return types for the sake of simplicity):\n\ncreate function departments_salary ()\nreturns table (...)\nas\nreturn $$\n select department, sum(salary) as salary from employee group by\ndepartment;\n$$;\n\nso that's fine, but what if I want to run this function on filtered\nemployee? I can adjust the function of course, but it implies I can predict\nall possible filters I'm going to need in the future.\nAnd logically, function itself doesn't have to be run on employee table,\nanything with department and salary columns will fit.\nSo it'd be nice to be able to define the function like this:\n\ncreate function departments_salary(_employee query)\nreturns table (...)\nas\nreturn $$\n select department, sum(salary) as salary from _employee group by\ndepartment;\n$$;\n\nand then call it like this:\n\ndeclare _employee query;\n...\n_poor_employee = (select salary, department from employee where salary <\n1000);\nselect * from departments_salary( _poor_employee);\n\nAnd just to be clear, the query is not really invoked until the last line,\nso re-assigning _employee variable is more like building query expression.\n\nAs far as I understand the closest way to do this is to put the data into\ntemporary table and use this temporary table inside of the function. 
It's\nnot exactly the same of course, cause in case of temporary tables data\nshould be transferred to temporary table, while it will might be filtered\nlater. So it's something like array vs generator in python, or List vs\nIQueryable in C#.\n\nAdding this functionality will allow much better decomposition of the\nprogram's logic.\nWhat do you think about the idea itself? If you think the idea is worthy,\nis it even possible to implement it?\n\nRegards,\nRoman Pekar",
"msg_date": "Sun, 7 Jul 2019 14:54:26 +0200",
"msg_from": "Roman Pekar <roma.pekar@gmail.com>",
"msg_from_op": true,
"msg_subject": "(select query)/relation as first class citizen"
},
{
"msg_contents": "Hi\n\nne 7. 7. 2019 v 14:54 odesílatel Roman Pekar <roma.pekar@gmail.com> napsal:\n\n> Hello,\n>\n> Just a bit of background - I currently work as a full-time db developer,\n> mostly with Ms Sql server but I like Postgres a lot, especially because I\n> really program in sql all the time and type system / plpgsql language of\n> Postgres seems to me more suitable for actual programming then t-sql.\n>\n> Here's the problem - current structure of the language doesn't allow to\n> decompose the code well and split calculations and data into different\n> modules.\n>\n> For example. Suppose I have a table employee and I have a function like\n> this (I'll skip definition of return types for the sake of simplicity):\n>\n> create function departments_salary ()\n> returns table (...)\n> as\n> return $$\n> select department, sum(salary) as salary from employee group by\n> department;\n> $$;\n>\n> so that's fine, but what if I want to run this function on filtered\n> employee? I can adjust the function of course, but it implies I can predict\n> all possible filters I'm going to need in the future.\n> And logically, function itself doesn't have to be run on employee table,\n> anything with department and salary columns will fit.\n> So it'd be nice to be able to define the function like this:\n>\n> create function departments_salary(_employee query)\n> returns table (...)\n> as\n> return $$\n> select department, sum(salary) as salary from _employee group by\n> department;\n> $$;\n>\n> and then call it like this:\n>\n> declare _employee query;\n> ...\n> _poor_employee = (select salary, department from employee where salary <\n> 1000);\n> select * from departments_salary( _poor_employee);\n>\n> And just to be clear, the query is not really invoked until the last line,\n> so re-assigning _employee variable is more like building query expression.\n>\n> As far as I understand the closest way to do this is to put the data into\n> temporary table and use this temporary 
table inside of the function. It's\n> not exactly the same of course, cause in case of temporary tables data\n> should be transferred to temporary table, while it will might be filtered\n> later. So it's something like array vs generator in python, or List vs\n> IQueryable in C#.\n>\n> Adding this functionality will allow much better decomposition of the\n> program's logic.\n> What do you think about the idea itself? If you think the idea is worthy,\n> is it even possible to implement it?\n>\n\nIf we talk about plpgsql, then I afraid so this idea can disallow plan\ncaching - or significantly increase the cost of plan cache.\n\nThere are two possibilities of implementation - a) query like cursor -\nunfortunately it effectively disables any optimization and it carry ORM\nperformance to procedures. This usage is known performance antipattern, b)\nquery like view - it should not to have a performance problems with late\noptimization, but I am not sure about possibility to reuse execution plans.\n\nCurrently PLpgSQL is compromise between performance and dynamic (PLpgSQL is\nreally static language). Your proposal increase much more dynamic behave,\nbut performance can be much more worse.\n\nMore - with this behave, there is not possible to do static check - so you\nhave to find bugs only at runtime. I afraid about performance of this\nsolution.\n\nRegards\n\nPavel\n\n\n\n> Regards,\n> Roman Pekar\n>\n>",
"msg_date": "Sun, 7 Jul 2019 15:38:51 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
{
"msg_contents": "Hi,\n\nYes, I'm thinking about 'query like a view', 'query like a cursor' is\nprobably possible even now in ms sql server (not sure about postgresql),\nbut it requires this paradygm shift from set-based thinking to row-by-row\nthinking which I'd not want to do.\n\nI completely agree with your points of plan caching and static checks. With\nstatic checks, though it might be possible to do if the query would be\ndefined as typed, so all the types of the columns is known in advance.\nIn certain cases having possibility of much better decomposition is might\nbe more important than having cached plan. Not sure how often these cases\nappear in general, but personally for me it'd be awesome to have this\npossibility.\n\nRegards,\nRoman Pekar\n\nOn Sun, 7 Jul 2019 at 15:39, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> ne 7. 7. 2019 v 14:54 odesílatel Roman Pekar <roma.pekar@gmail.com>\n> napsal:\n>\n>> Hello,\n>>\n>> Just a bit of background - I currently work as a full-time db developer,\n>> mostly with Ms Sql server but I like Postgres a lot, especially because I\n>> really program in sql all the time and type system / plpgsql language of\n>> Postgres seems to me more suitable for actual programming then t-sql.\n>>\n>> Here's the problem - current structure of the language doesn't allow to\n>> decompose the code well and split calculations and data into different\n>> modules.\n>>\n>> For example. Suppose I have a table employee and I have a function like\n>> this (I'll skip definition of return types for the sake of simplicity):\n>>\n>> create function departments_salary ()\n>> returns table (...)\n>> as\n>> return $$\n>> select department, sum(salary) as salary from employee group by\n>> department;\n>> $$;\n>>\n>> so that's fine, but what if I want to run this function on filtered\n>> employee? 
I can adjust the function of course, but it implies I can predict\n>> all possible filters I'm going to need in the future.\n>> And logically, function itself doesn't have to be run on employee table,\n>> anything with department and salary columns will fit.\n>> So it'd be nice to be able to define the function like this:\n>>\n>> create function departments_salary(_employee query)\n>> returns table (...)\n>> as\n>> return $$\n>> select department, sum(salary) as salary from _employee group by\n>> department;\n>> $$;\n>>\n>> and then call it like this:\n>>\n>> declare _employee query;\n>> ...\n>> _poor_employee = (select salary, department from employee where salary <\n>> 1000);\n>> select * from departments_salary( _poor_employee);\n>>\n>> And just to be clear, the query is not really invoked until the last\n>> line, so re-assigning _employee variable is more like building query\n>> expression.\n>>\n>> As far as I understand the closest way to do this is to put the data into\n>> temporary table and use this temporary table inside of the function. It's\n>> not exactly the same of course, cause in case of temporary tables data\n>> should be transferred to temporary table, while it will might be filtered\n>> later. So it's something like array vs generator in python, or List vs\n>> IQueryable in C#.\n>>\n>> Adding this functionality will allow much better decomposition of the\n>> program's logic.\n>> What do you think about the idea itself? If you think the idea is worthy,\n>> is it even possible to implement it?\n>>\n>\n> If we talk about plpgsql, then I afraid so this idea can disallow plan\n> caching - or significantly increase the cost of plan cache.\n>\n> There are two possibilities of implementation - a) query like cursor -\n> unfortunately it effectively disables any optimization and it carry ORM\n> performance to procedures. 
This usage is known performance antipattern, b)\n> query like view - it should not to have a performance problems with late\n> optimization, but I am not sure about possibility to reuse execution plans.\n>\n> Currently PLpgSQL is compromise between performance and dynamic (PLpgSQL\n> is really static language). Your proposal increase much more dynamic\n> behave, but performance can be much more worse.\n>\n> More - with this behave, there is not possible to do static check - so you\n> have to find bugs only at runtime. I afraid about performance of this\n> solution.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Regards,\n>> Roman Pekar\n>>\n>>\n>>",
"msg_date": "Sun, 7 Jul 2019 16:22:23 +0200",
"msg_from": "Roman Pekar <roma.pekar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
{
"msg_contents": "Hi,\n\nwhat do you think about this idea in general? If you don't have to think\nabout implementation for now? From my point of view writing Sql queries is\nvery close to how functional language work if you treat \"select\" queries as\nfunctions without side-effects, and having query being first-class-citizen\ncould move this even further.\n\nRegards,\nRoman\n\nOn Sun, 7 Jul 2019 at 16:22, Roman Pekar <roma.pekar@gmail.com> wrote:\n\n> Hi,\n>\n> Yes, I'm thinking about 'query like a view', 'query like a cursor' is\n> probably possible even now in ms sql server (not sure about postgresql),\n> but it requires this paradygm shift from set-based thinking to row-by-row\n> thinking which I'd not want to do.\n>\n> I completely agree with your points of plan caching and static checks.\n> With static checks, though it might be possible to do if the query would be\n> defined as typed, so all the types of the columns is known in advance.\n> In certain cases having possibility of much better decomposition is might\n> be more important than having cached plan. Not sure how often these cases\n> appear in general, but personally for me it'd be awesome to have this\n> possibility.\n>\n> Regards,\n> Roman Pekar\n>\n> On Sun, 7 Jul 2019 at 15:39, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> ne 7. 7. 2019 v 14:54 odesílatel Roman Pekar <roma.pekar@gmail.com>\n>> napsal:\n>>\n>>> Hello,\n>>>\n>>> Just a bit of background - I currently work as a full-time db developer,\n>>> mostly with Ms Sql server but I like Postgres a lot, especially because I\n>>> really program in sql all the time and type system / plpgsql language of\n>>> Postgres seems to me more suitable for actual programming then t-sql.\n>>>\n>>> Here's the problem - current structure of the language doesn't allow to\n>>> decompose the code well and split calculations and data into different\n>>> modules.\n>>>\n>>> For example. 
Suppose I have a table employee and I have a function like\n>>> this (I'll skip definition of return types for the sake of simplicity):\n>>>\n>>> create function departments_salary ()\n>>> returns table (...)\n>>> as\n>>> return $$\n>>> select department, sum(salary) as salary from employee group by\n>>> department;\n>>> $$;\n>>>\n>>> so that's fine, but what if I want to run this function on filtered\n>>> employee? I can adjust the function of course, but it implies I can predict\n>>> all possible filters I'm going to need in the future.\n>>> And logically, function itself doesn't have to be run on employee table,\n>>> anything with department and salary columns will fit.\n>>> So it'd be nice to be able to define the function like this:\n>>>\n>>> create function departments_salary(_employee query)\n>>> returns table (...)\n>>> as\n>>> return $$\n>>> select department, sum(salary) as salary from _employee group by\n>>> department;\n>>> $$;\n>>>\n>>> and then call it like this:\n>>>\n>>> declare _employee query;\n>>> ...\n>>> _poor_employee = (select salary, department from employee where salary <\n>>> 1000);\n>>> select * from departments_salary( _poor_employee);\n>>>\n>>> And just to be clear, the query is not really invoked until the last\n>>> line, so re-assigning _employee variable is more like building query\n>>> expression.\n>>>\n>>> As far as I understand the closest way to do this is to put the data\n>>> into temporary table and use this temporary table inside of the function.\n>>> It's not exactly the same of course, cause in case of temporary tables data\n>>> should be transferred to temporary table, while it will might be filtered\n>>> later. So it's something like array vs generator in python, or List vs\n>>> IQueryable in C#.\n>>>\n>>> Adding this functionality will allow much better decomposition of the\n>>> program's logic.\n>>> What do you think about the idea itself? 
If you think the idea is\n>>> worthy, is it even possible to implement it?\n>>>\n>>\n>> If we talk about plpgsql, then I afraid so this idea can disallow plan\n>> caching - or significantly increase the cost of plan cache.\n>>\n>> There are two possibilities of implementation - a) query like cursor -\n>> unfortunately it effectively disables any optimization and it carry ORM\n>> performance to procedures. This usage is known performance antipattern, b)\n>> query like view - it should not to have a performance problems with late\n>> optimization, but I am not sure about possibility to reuse execution plans.\n>>\n>> Currently PLpgSQL is compromise between performance and dynamic (PLpgSQL\n>> is really static language). Your proposal increase much more dynamic\n>> behave, but performance can be much more worse.\n>>\n>> More - with this behave, there is not possible to do static check - so\n>> you have to find bugs only at runtime. I afraid about performance of this\n>> solution.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>> Regards,\n>>> Roman Pekar\n>>>\n>>>\n>>>",
"msg_date": "Mon, 8 Jul 2019 09:33:22 +0200",
"msg_from": "Roman Pekar <roma.pekar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
{
"msg_contents": "Hi\n\npo 8. 7. 2019 v 9:33 odesílatel Roman Pekar <roma.pekar@gmail.com> napsal:\n\n> Hi,\n>\n> what do you think about this idea in general? If you don't have to think\n> about implementation for now? From my point of view writing Sql queries is\n> very close to how functional language work if you treat \"select\" queries as\n> functions without side-effects, and having query being first-class-citizen\n> could move this even further.\n>\n\nfirst - please, don't send top posts.\n\nsecond - my opinion is not clear. I can imagine benefits - on second hand,\nthe usage is relative too close to one antipattern - only one query wrapped\nby functions. I see your proposal as little bit more dynamic (with little\nbit different syntax) views.\n\nWith my experience I really afraid about it - it can be very effective\n(from developer perspective) and very slow (from customer perspective).\nThis is example of tool that looks nice on paper, but can be very badly\nused.\n\nMaybe I am not the best man for this topic - I like some functional\nprogramming concepts, but I use it locally - your proposal moves SQL to\nsome unexplored areas - and I think so it can be interesting as real\nresearch topic, but not today Postgres's theme.\n\nThe basic question is why extend SQL and don't use some native functional\nlanguage. Postgres should to implement ANSI SQL - and there is not a space\nfor big experiments. 
I am sceptic about it - relational databases are\nstatic, SQL is static language, so it is hard to implement some dynamic\nsystem over it - SQL language is language over relation algebra - it is not\nfunctional language, I afraid so introduction another concept to this do\nmore bad than good.\n\nRegards\n\nPavel\n\n\n\n\n> Regards,\n> Roman\n>\n> On Sun, 7 Jul 2019 at 16:22, Roman Pekar <roma.pekar@gmail.com> wrote:\n>\n>> Hi,\n>>\n>> Yes, I'm thinking about 'query like a view', 'query like a cursor' is\n>> probably possible even now in ms sql server (not sure about postgresql),\n>> but it requires this paradygm shift from set-based thinking to row-by-row\n>> thinking which I'd not want to do.\n>>\n>> I completely agree with your points of plan caching and static checks.\n>> With static checks, though it might be possible to do if the query would be\n>> defined as typed, so all the types of the columns is known in advance.\n>> In certain cases having possibility of much better decomposition is might\n>> be more important than having cached plan. Not sure how often these cases\n>> appear in general, but personally for me it'd be awesome to have this\n>> possibility.\n>>\n>> Regards,\n>> Roman Pekar\n>>\n>> On Sun, 7 Jul 2019 at 15:39, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> ne 7. 7. 2019 v 14:54 odesílatel Roman Pekar <roma.pekar@gmail.com>\n>>> napsal:\n>>>\n>>>> Hello,\n>>>>\n>>>> Just a bit of background - I currently work as a full-time db\n>>>> developer, mostly with Ms Sql server but I like Postgres a lot, especially\n>>>> because I really program in sql all the time and type system / plpgsql\n>>>> language of Postgres seems to me more suitable for actual programming then\n>>>> t-sql.\n>>>>\n>>>> Here's the problem - current structure of the language doesn't allow to\n>>>> decompose the code well and split calculations and data into different\n>>>> modules.\n>>>>\n>>>> For example. 
Suppose I have a table employee and I have a function like\n>>>> this (I'll skip definition of return types for the sake of simplicity):\n>>>>\n>>>> create function departments_salary ()\n>>>> returns table (...)\n>>>> as\n>>>> return $$\n>>>> select department, sum(salary) as salary from employee group by\n>>>> department;\n>>>> $$;\n>>>>\n>>>> so that's fine, but what if I want to run this function on filtered\n>>>> employee? I can adjust the function of course, but it implies I can predict\n>>>> all possible filters I'm going to need in the future.\n>>>> And logically, function itself doesn't have to be run on employee\n>>>> table, anything with department and salary columns will fit.\n>>>> So it'd be nice to be able to define the function like this:\n>>>>\n>>>> create function departments_salary(_employee query)\n>>>> returns table (...)\n>>>> as\n>>>> return $$\n>>>> select department, sum(salary) as salary from _employee group by\n>>>> department;\n>>>> $$;\n>>>>\n>>>> and then call it like this:\n>>>>\n>>>> declare _employee query;\n>>>> ...\n>>>> _poor_employee = (select salary, department from employee where salary\n>>>> < 1000);\n>>>> select * from departments_salary( _poor_employee);\n>>>>\n>>>> And just to be clear, the query is not really invoked until the last\n>>>> line, so re-assigning _employee variable is more like building query\n>>>> expression.\n>>>>\n>>>> As far as I understand the closest way to do this is to put the data\n>>>> into temporary table and use this temporary table inside of the function.\n>>>> It's not exactly the same of course, cause in case of temporary tables data\n>>>> should be transferred to temporary table, while it will might be filtered\n>>>> later. So it's something like array vs generator in python, or List vs\n>>>> IQueryable in C#.\n>>>>\n>>>> Adding this functionality will allow much better decomposition of the\n>>>> program's logic.\n>>>> What do you think about the idea itself? 
If you think the idea is\n>>>> worthy, is it even possible to implement it?\n>>>>\n>>>\n>>> If we talk about plpgsql, then I afraid so this idea can disallow plan\n>>> caching - or significantly increase the cost of plan cache.\n>>>\n>>> There are two possibilities of implementation - a) query like cursor -\n>>> unfortunately it effectively disables any optimization and it carry ORM\n>>> performance to procedures. This usage is known performance antipattern, b)\n>>> query like view - it should not to have a performance problems with late\n>>> optimization, but I am not sure about possibility to reuse execution plans.\n>>>\n>>> Currently PLpgSQL is compromise between performance and dynamic (PLpgSQL\n>>> is really static language). Your proposal increase much more dynamic\n>>> behave, but performance can be much more worse.\n>>>\n>>> More - with this behave, there is not possible to do static check - so\n>>> you have to find bugs only at runtime. I afraid about performance of this\n>>> solution.\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>\n>>>> Regards,\n>>>> Roman Pekar\n>>>>\n>>>>\n>>>>",
"msg_date": "Mon, 8 Jul 2019 11:19:07 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
{
"msg_contents": "Hi Roman, Pavel,\n\nI was interested in this post, as it’s a topic I’ve stumbled upon in the past. \n\nThere are two topics at play here:\n\n1. The ability to flexibly craft queries from procedural language functions\n\n2. Support for pipelined access to SETOF/TABLEs from procedural language functions\n\nPostgres has parameterised functions that can return a query and optimise it in context of the overall statement, but only for the SQL language. Absence of support for other languages is a curious gap.\n\nAnd it leads to fact that, even in presence of only static parameters, only one “shape” of query can ever be returned. (No IF/ELSE IF logic can alter the query, unless you dive down the rat hole of encoding it into the query itself.) So that is another gap.\n\nPostgres has some relevant first class types: TABLE/SETOF and REFCURSOR. TABLE/SETOF are output only, materialised always and optimiser fences. Current syntax supports pipelined output (via RETURN NEXT), and docs call out the fact that it might in future not be materialised. I suspect an executor change is needed to support it, as well as plpgsql change. \n\nTheir output-only nature is an odd gap. REFCURSOR is not materialised, and is also input-capable. If SETOF/TABLE were made both, then there would be a curious type system duplication. \n\nHowever REFCURSOR is pretty awkward to use from SQL. The fact you can’t cast or convert it to a SETOF/TABLE and SELECT FROM a REFCURSOR in native SQL is weird, and a gap, IMHO.\n\nOn the input aide, REFCURSOR is neat. Despite the above limitation, it can become bound to a query before being OPENed for execution and fetching. If only the optimiser could “see” that pre-OPENed state, as with parameterised views, then, in principle, there would be nothing stopping some other outer function consuming it, SELECTing FROM it, and perhaps even returning a new query, and then the optimiser would be able to see and optimise the final global statement. 
Okay: this is a biggie, but it’s still a gap, in my view. \n\nSo my view is that Postgres already has types that are close to what is asked for. It also has tools that look ripe to be plumbed together. Problem is, when they are combined, they don’t fit well, and when they are made to fit, the fence, materialisation always and curious output-only nature leads developers to create un-performant messes. :-)\n\nI think some of this could be fixed quite easily. The executor already (obviously) can pipeline. PLs can’t today save and restore their context to support pipelining, but it is not impossible. REFCURSOR can’t be cast to a TABLE/SETOF, nor meaningfully be SELECTed FROM, but that can’t be too hard either.\n\nExposing the pre-OPENed query for optimisation is another thing. But here again, I see it as a challenge of mental gymnastics rather than actually hard in terms of code factoring — much of what is needed is surely already there in the way of VIEW rewriting. \n\nRegarding demand for the #2 feature set, this somewhat dated thread is suggestive of a niche use case: https://www.postgresql.org/message-id/flat/005701c6dc2c%2449011fc0%240a00a8c0%40trivadis.com.\n\nd.\n\n> On 8 Jul 2019, at 10:19, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> Hi\n> \n> po 8. 7. 2019 v 9:33 odesílatel Roman Pekar <roma.pekar@gmail.com> napsal:\n>> Hi,\n>> \n>> what do you think about this idea in general? If you don't have to think about implementation for now? From my point of view writing Sql queries is very close to how functional language work if you treat \"select\" queries as functions without side-effects, and having query being first-class-citizen could move this even further.\n> \n> first - please, don't send top posts.\n> \n> second - my opinion is not clear. I can imagine benefits - on second hand, the usage is relative too close to one antipattern - only one query wrapped by functions. 
I see your proposal as little bit more dynamic (with little bit different syntax) views. \n> \n> With my experience I really afraid about it - it can be very effective (from developer perspective) and very slow (from customer perspective). This is example of tool that looks nice on paper, but can be very badly used.\n> \n> Maybe I am not the best man for this topic - I like some functional programming concepts, but I use it locally - your proposal moves SQL to some unexplored areas - and I think so it can be interesting as real research topic, but not today Postgres's theme.\n> \n> The basic question is why extend SQL and don't use some native functional language. Postgres should to implement ANSI SQL - and there is not a space for big experiments. I am sceptic about it - relational databases are static, SQL is static language, so it is hard to implement some dynamic system over it - SQL language is language over relation algebra - it is not functional language, I afraid so introduction another concept to this do more bad than good.\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n>> \n>> Regards,\n>> Roman\n>> \n>>> On Sun, 7 Jul 2019 at 16:22, Roman Pekar <roma.pekar@gmail.com> wrote:\n>>> Hi,\n>>> \n>>> Yes, I'm thinking about 'query like a view', 'query like a cursor' is probably possible even now in ms sql server (not sure about postgresql), but it requires this paradygm shift from set-based thinking to row-by-row thinking which I'd not want to do.\n>>> \n>>> I completely agree with your points of plan caching and static checks. With static checks, though it might be possible to do if the query would be defined as typed, so all the types of the columns is known in advance.\n>>> In certain cases having possibility of much better decomposition is might be more important than having cached plan. 
Not sure how often these cases appear in general, but personally for me it'd be awesome to have this possibility.\n>>> \n>>> Regards,\n>>> Roman Pekar\n>>> \n>>>> On Sun, 7 Jul 2019 at 15:39, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>>>> Hi\n>>>> \n>>>> ne 7. 7. 2019 v 14:54 odesílatel Roman Pekar <roma.pekar@gmail.com> napsal:\n>>>>> Hello,\n>>>>> \n>>>>> Just a bit of background - I currently work as a full-time db developer, mostly with Ms Sql server but I like Postgres a lot, especially because I really program in sql all the time and type system / plpgsql language of Postgres seems to me more suitable for actual programming then t-sql.\n>>>>> \n>>>>> Here's the problem - current structure of the language doesn't allow to decompose the code well and split calculations and data into different modules.\n>>>>> \n>>>>> For example. Suppose I have a table employee and I have a function like this (I'll skip definition of return types for the sake of simplicity):\n>>>>> \n>>>>> create function departments_salary ()\n>>>>> returns table (...)\n>>>>> as\n>>>>> return $$\n>>>>> select department, sum(salary) as salary from employee group by department;\n>>>>> $$;\n>>>>> \n>>>>> so that's fine, but what if I want to run this function on filtered employee? 
I can adjust the function of course, but it implies I can predict all possible filters I'm going to need in the future.\n>>>>> And logically, function itself doesn't have to be run on employee table, anything with department and salary columns will fit.\n>>>>> So it'd be nice to be able to define the function like this:\n>>>>> \n>>>>> create function departments_salary(_employee query)\n>>>>> returns table (...)\n>>>>> as\n>>>>> return $$\n>>>>> select department, sum(salary) as salary from _employee group by department;\n>>>>> $$;\n>>>>> \n>>>>> and then call it like this:\n>>>>> \n>>>>> declare _employee query;\n>>>>> ...\n>>>>> _poor_employee = (select salary, department from employee where salary < 1000); \n>>>>> select * from departments_salary( _poor_employee);\n>>>>> \n>>>>> And just to be clear, the query is not really invoked until the last line, so re-assigning _employee variable is more like building query expression.\n>>>>> \n>>>>> As far as I understand the closest way to do this is to put the data into temporary table and use this temporary table inside of the function. It's not exactly the same of course, cause in case of temporary tables data should be transferred to temporary table, while it will might be filtered later. So it's something like array vs generator in python, or List vs IQueryable in C#.\n>>>>> \n>>>>> Adding this functionality will allow much better decomposition of the program's logic.\n>>>>> What do you think about the idea itself? If you think the idea is worthy, is it even possible to implement it?\n>>>> \n>>>> \n>>>> If we talk about plpgsql, then I afraid so this idea can disallow plan caching - or significantly increase the cost of plan cache. \n>>>> \n>>>> There are two possibilities of implementation - a) query like cursor - unfortunately it effectively disables any optimization and it carry ORM performance to procedures. 
This usage is known performance antipattern, b) query like view - it should not to have a performance problems with late optimization, but I am not sure about possibility to reuse execution plans. \n>>>> \n>>>> Currently PLpgSQL is compromise between performance and dynamic (PLpgSQL is really static language). Your proposal increase much more dynamic behave, but performance can be much more worse.\n>>>> \n>>>> More - with this behave, there is not possible to do static check - so you have to find bugs only at runtime. I afraid about performance of this solution.\n>>>> \n>>>> Regards\n>>>> \n>>>> Pavel\n>>>> \n>>>> \n>>>>> \n>>>>> Regards,\n>>>>> Roman Pekar",
"msg_date": "Wed, 10 Jul 2019 18:01:56 +0100",
"msg_from": "Dent John <denty@qqdd.eu>",
"msg_from_op": false,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
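{
"msg_contents": "[Editor's note] The materialised-versus-pipelined distinction above maps directly onto Roman's own \"array vs generator in python\" analogy from earlier in the thread. A minimal Python sketch of that cost difference (illustrative only; the function names are invented and this is not Postgres code):

```python
import itertools

def transform(row):
    # Stand-in for whatever per-row work a set-returning function does.
    return row * 2

def materialised_result(rows):
    # Like plpgsql's RETURN QUERY today: the whole result set is built
    # before the caller sees the first row.
    return [transform(r) for r in rows]

def pipelined_result(rows):
    # Like fetching from a REFCURSOR: rows are produced on demand, so a
    # consumer that stops early never pays for the remaining rows.
    for r in rows:
        yield transform(r)

source = range(1_000_000)

# A LIMIT-like consumer of the pipelined form touches only three rows:
first_three = list(itertools.islice(pipelined_result(source), 3))
print(first_three)  # → [0, 2, 4]
```

The materialised form would have built all million rows up front; avoiding that cost is exactly why the thread cares about letting the executor pipeline through PL functions."
},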
{
"msg_contents": "Hi, John,\n\nI think you've outlined the problem and possible solutions quite well. It's\ngreat to see that the goal might be not that far from implementing.\n\nHi, John,I think you've outlined the problem and possible solutions quite well. It's great to see that the goal might be not that far from implementing.",
"msg_date": "Mon, 19 Aug 2019 16:16:18 +0200",
"msg_from": "Roman Pekar <roma.pekar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: (select query)/relation as first class citizen"
},
{
"msg_contents": "> On 19 Aug 2019, at 15:16, Roman Pekar <roma.pekar@gmail.com> wrote:\n> \n> Hi, John,\n> \n> I think you've outlined the problem and possible solutions quite well. It's great to see that the goal might be not that far from implementing.\n> \n\nThanks for the prompt, Roman. I meant to have a bit of a play, and your message reminded me.\n\nI was intrigued by the gap in how REFCURSOR is exposed. I’ve made two very rough patches to illustrate a possible solution. Down the line, I am wondering if there is appetite to receive these into core code.\n\nFirst is a variant of UNNEST that accepts a REFCURSOR, allowing the results to be processed in a normal query, such as SELECT. To illustrate how it works, consider the following:\n\npostgres=# create or replace function test (src_tab text) returns refcursor immutable language plpgsql as $$ begin return refcursor_from_query ('select * from ' || src_tab); end; $$;\nCREATE FUNCTION\npostgres=# select key, count (value), min (value), max (value) from unnest (array['test1', 'test2', 'test3']) tab, lateral unnest (test (tab.tab)) as (key text, value numeric) group by key;\n key | count | min | max \n--------+-------+-----+-----\n ITEM_A | 100 | 0 | 99\n ITEM_C | 50 | -50 | -1\n ITEM_B | 200 | 0 | 199\n(3 rows)\n\npostgres=# explain select key, count (value), min (value), max (value) from unnest (array['test1', 'test2', 'test3']) tab, lateral unnest (test (tab.tab)) as (key text, value numeric) group by key;\npsql: WARNING: cache reference leak: cache pg_proc (43), tuple 11/9 has count 1\n QUERY PLAN \n----------------------------------------------------------------------------\n HashAggregate (cost=9.29..10.29 rows=100 width=104)\n Group Key: unnest.key\n -> Nested Loop (cost=0.26..6.29 rows=300 width=64)\n -> Function Scan on unnest tab (cost=0.00..0.03 rows=3 width=32)\n -> Function Scan on unnest (cost=0.25..1.25 rows=100 width=64)\n(5 rows)\n\nThe example requires the following setup:\n\npostgres=# create 
table test1 (key text, value numeric);\nCREATE TABLE\npostgres=# insert into test1 select 'ITEM_A', generate_series (0, 99);\nINSERT 0 100\npostgres=# create table test2 (key text, value numeric);\nCREATE TABLE\npostgres=# insert into test2 select 'ITEM_B', generate_series (0, 199);\nINSERT 0 200\npostgres=# create table test3 (key text, value numeric);\nCREATE TABLE\npostgres=# insert into test1 select 'ITEM_C', generate_series (-50, -1);\nINSERT 0 50\npostgres=# create or replace function refcursor_from_query (qry text) returns refcursor immutable language plpgsql as $$ declare cur refcursor; begin open cur for execute qry; return cur; end; $$;\nCREATE FUNCTION\n\nObviously this kind of construction is open to a wide variety of attacks, and a more realistic example would need to defend against inappropriate input.\n\nMy code is really really rough, and also yields a WARNING about cache leaks — which obviously needs fixing — but demonstrates the point. This variant is provided in unnest-refcursor.patch.\n\nThe example is pretty contrived, but I think there is general utility in having a way of processing output from REFCURSORs. Arguably, as I mentioned in my previous comments, this overlaps plpgsql’s RETURN QUERY capability. However, unlike RETURN QUERY, the result set is not materialised before returning.\n\nIt is also interesting that a RECORD-returning UNNEST requires the row type to be declared explicitly (hence the as (key text, value numeric) clause). This seems a less than ideal syntax, but I’m not sure there is much alternative.\n\nSecond is another variant of UNNEST which attempts to retrieve the query that the REFCURSOR is OPEN’ed for, and inlines the SQL text into the query being planned. 
(Exactly as may be done with the text of certain SQL language FUNCTIONs.)\n\nAgain, an example probably best illustrates what is going on:\n\npostgres=# explain select key, count (value), min (value), max (value) from unnest (test ('test1')) as (key text, value numeric) where value > 50 group by key;\n QUERY PLAN \n-------------------------------------------------------------\n HashAggregate (cost=3.37..3.39 rows=2 width=79)\n Group Key: test1.key\n -> Seq Scan on test1 (cost=0.00..2.88 rows=49 width=11)\n Filter: (value > '50'::numeric)\n(4 rows)\n\npostgres=# select key, count (value), min (value), max (value) from unnest (test ('test1')) as (key text, value numeric) where value > 50 group by key; \n key | count | min | max \n--------+-------+-----+-----\n ITEM_A | 49 | 51 | 99\n(1 row)\n\nThere are two interesting things going on here. First is that the query returned by test() is rewritten and consumed as if it were verbatim in the query text. Second is that the outer filter (value > 50) can now be pushed down by the planner, potentially yielding a much more efficient plan.\n\nThis variant is provided in unnest-rewrite-refcursor.patch.\n\nMy code here is even rougher than the first. I stopped short of creating a new ’state’ for REFCURSOR that is not yet ‘OPEN’ but allows the query text to be yielded. This ultimately turned out quite non-trivial for a POC. (It does, though, seem ultimately feasible.) The consequence is that the REFCURSOR query is planned before being returned, only for that plan to be ignored and the query text consumed into the outer plan, thus being re-planned. In any significant use, this would be a huge inefficiency, and a shortcoming that would need to be addressed.\n\nBoth patches should apply against 12beta2. unnest-refcursor should be applied first, as the second depends upon its foundation.\n\nI’m wondering what you think of the concept, Roman and Pavel?\n\ndenty.",
"msg_date": "Fri, 23 Aug 2019 09:52:43 +0100",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": false,
"msg_subject": "Re: (select query)/relation as first class citizen"
}
] |
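[Editor's note] Roman's core proposal in the thread above is that assigning a query to a variable builds an expression, and nothing executes until the value is finally consumed. A toy Python model of that deferred composition (all names here are invented; this is an analogy for the semantics being asked for, not a sketch of a Postgres implementation):

```python
class Query:
    """A toy first-class query value: composing returns a new Query and
    scans no rows; only run() actually executes."""

    def __init__(self, source, predicate=None):
        self.source = source
        self.predicate = predicate

    def where(self, predicate):
        # Composition builds a new predicate without touching any rows,
        # analogous to re-assigning the proposed `query`-typed variable.
        old = self.predicate
        if old is None:
            combined = predicate
        else:
            combined = lambda row: old(row) and predicate(row)
        return Query(self.source, combined)

    def run(self):
        # Only here is the "plan" actually executed.
        if self.predicate is None:
            return list(self.source)
        return [row for row in self.source if self.predicate(row)]

employee = [
    {"department": "sales", "salary": 500},
    {"department": "eng", "salary": 2000},
]

# Mirrors: _poor_employee = (select ... from employee where salary < 1000);
poor_employee = Query(employee).where(lambda r: r["salary"] < 1000)

# Nothing has executed yet; rows are produced only on the final consumption:
rows = poor_employee.run()
print(rows)
```

An outer consumer such as the thread's departments_salary() would receive the unevaluated Query and decide when to run it, which is the List-vs-IQueryable distinction Roman draws.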
[
{
"msg_contents": "(Moved from pgsql-bugs thread at [1])\n\nConsider\n\nregression=# create domain d1 as int;\nCREATE DOMAIN\nregression=# create table t1 (f1 d1) partition by range(f1);\nCREATE TABLE\nregression=# alter table t1 drop column f1;\nERROR: cannot drop column named in partition key\n\nSo far so good, but that defense has more holes than a hunk of\nSwiss cheese:\n\nregression=# drop domain d1 cascade;\npsql: NOTICE: drop cascades to column f1 of table t1\nDROP DOMAIN\n\nOf course, the table is now utterly broken, e.g.\n\nregression=# \\d t1\npsql: ERROR: cache lookup failed for type 0\n\n(More-likely variants of this include dropping an extension that\ndefines the type of a partitioning column, or dropping the schema\ncontaining such a type.)\n\nThe fix I was speculating about in the pgsql-bugs thread was to add\nexplicit pg_depend entries making the table's partitioning columns\ninternally dependent on the whole table (or maybe the other way around;\nhaven't experimented). That fix has a couple of problems though:\n\n1. In the example, \"drop domain d1 cascade\" would automatically\ncascade to the whole partitioned table, including child partitions\nof course. This might leave a user sad, if a few terabytes of\nvaluable data went away; though one could argue that they'd better\nhave paid more attention to what the cascade cascaded to.\n\n2. It doesn't fix anything for pre-existing tables in pre-v12 branches.\n\n\nI thought of a different possible approach, which is to move the\n\"cannot drop column named in partition key\" error check from\nATExecDropColumn(), where it is now, to RemoveAttributeById().\nThat would be back-patchable, but the implication would be that\ndropping anything that a partitioning column depends on would be\nimpossible, even with CASCADE; you'd have to manually drop the\npartitioned table first. 
Good for data safety, but a horrible\nviolation of expectations, and likely of the SQL spec as well.\nI'm not sure we could avoid order-of-traversal problems, either.\n\n\nIdeally, perhaps, a DROP CASCADE like this would not cascade to\nthe whole table but only to the table's partitioned-ness property,\nleaving you with a non-partitioned table with most of its data\nintact. It would take a lot of work to make that happen though,\nand it certainly wouldn't be back-patchable, and I'm not really\nsure it's worth it.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2Bu7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 07 Jul 2019 15:11:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (Moved from pgsql-bugs thread at [1])\n\nThanks.\n\n> Consider\n>\n> regression=# create domain d1 as int;\n> CREATE DOMAIN\n> regression=# create table t1 (f1 d1) partition by range(f1);\n> CREATE TABLE\n> regression=# alter table t1 drop column f1;\n> ERROR: cannot drop column named in partition key\n>\n> So far so good, but that defense has more holes than a hunk of\n> Swiss cheese:\n\nIndeed.\n\n> regression=# drop domain d1 cascade;\n> psql: NOTICE: drop cascades to column f1 of table t1\n> DROP DOMAIN\n>\n> Of course, the table is now utterly broken, e.g.\n>\n> regression=# \\d t1\n> psql: ERROR: cache lookup failed for type 0\n\nOops.\n\n> (More-likely variants of this include dropping an extension that\n> defines the type of a partitioning column, or dropping the schema\n> containing such a type.)\n\nYeah. Actually, it's embarrassingly easy to fall through the holes.\n\ncreate type mytype as (a int);\ncreate table mytyptab (a mytype) partition by list (a);\ndrop type mytype cascade;\nNOTICE: drop cascades to column a of table mytyptab\nDROP TYPE\nselect * from mytyptab;\nERROR: cache lookup failed for type 0\nLINE 1: select * from mytyptab;\n ^\ndrop table mytyptab;\nERROR: cache lookup failed for type 0\n\n> The fix I was speculating about in the pgsql-bugs thread was to add\n> explicit pg_depend entries making the table's partitioning columns\n> internally dependent on the whole table (or maybe the other way around;\n> haven't experimented). That fix has a couple of problems though:\n>\n> 1. In the example, \"drop domain d1 cascade\" would automatically\n> cascade to the whole partitioned table, including child partitions\n> of course. This might leave a user sad, if a few terabytes of\n> valuable data went away; though one could argue that they'd better\n> have paid more attention to what the cascade cascaded to.\n>\n> 2. 
It doesn't fix anything for pre-existing tables in pre-v12 branches.\n>\n>\n> I thought of a different possible approach, which is to move the\n> \"cannot drop column named in partition key\" error check from\n> ATExecDropColumn(), where it is now, to RemoveAttributeById().\n> That would be back-patchable, but the implication would be that\n> dropping anything that a partitioning column depends on would be\n> impossible, even with CASCADE; you'd have to manually drop the\n> partitioned table first. Good for data safety, but a horrible\n> violation of expectations, and likely of the SQL spec as well.\n\nI prefer this second solution as it works for both preexisting and new\ntables, although I also agree that it is not user-friendly. Would it\nhelp to document that one would be unable to drop anything that a\npartitioning column directly and indirectly depends on (type, domain,\nschema, extension, etc.)?\n\n> I'm not sure we could avoid order-of-traversal problems, either.\n>\n> Ideally, perhaps, a DROP CASCADE like this would not cascade to\n> the whole table but only to the table's partitioned-ness property,\n> leaving you with a non-partitioned table with most of its data\n> intact.\n\nYeah, it would've been nice if the partitioned-ness property of table\ncould be deleted independently of the table.\n\n> It would take a lot of work to make that happen though,\n> and it certainly wouldn't be back-patchable, and I'm not really\n> sure it's worth it.\n\nAgreed that this sounds maybe more like a new feature.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:58:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On 2019-Jul-07, Tom Lane wrote:\n\n> Ideally, perhaps, a DROP CASCADE like this would not cascade to\n> the whole table but only to the table's partitioned-ness property,\n> leaving you with a non-partitioned table with most of its data\n> intact. It would take a lot of work to make that happen though,\n> and it certainly wouldn't be back-patchable, and I'm not really\n> sure it's worth it.\n\nMaybe we can add dependencies to rows of the pg_partitioned_table\nrelation, with the semantics of \"depends on the partitioned-ness of the\ntable\"?\n\nThat said, I'm not sure I see the use case for an ALTER TABLE .. DROP\nCOLUMN command that turns a partitioned table (with existing partitions\ncontaining data) into one non-partitioned table with all data minus the\npartitioning column(s).\n\nThis seems vaguely related to the issue of dropping foreign keys; see\nhttps://postgr.es/m/20190329152239.GA29258@alvherre.pgsql wherein I\nsettled with a non-ideal solution to the problem of being unable to\ndepend on something that did not cause the entire table to be dropped\nin certain cases.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jul 2019 10:31:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> That said, I'm not sure I see the use case for an ALTER TABLE .. DROP\n> COLUMN command that turns a partitioned table (with existing partitions\n> containing data) into one non-partitioned table with all data minus the\n> partitioning column(s).\n\nYeah, it'd be a lot of work for a dubious goal.\n\n> This seems vaguely related to the issue of dropping foreign keys; see\n> https://postgr.es/m/20190329152239.GA29258@alvherre.pgsql wherein I\n> settled with a non-ideal solution to the problem of being unable to\n> depend on something that did not cause the entire table to be dropped\n> in certain cases.\n\nThat's an interesting analogy. Re-reading that thread, what I said\nin <29497.1554217629@sss.pgh.pa.us> seems pretty apropos to the\ncurrent problem:\n\n>> FWIW, I think that the dependency mechanism is designed around the idea\n>> that whether it's okay to drop a *database object* depends only on what\n>> other *database objects* rely on it, and that you can always make a DROP\n>> valid by dropping all the dependent objects. That's not an unreasonable\n>> design assumption, considering that the SQL standard embeds the same\n>> assumption in its RESTRICT/CASCADE syntax.\n\nI think that is probably a fatal objection to my idea of putting an error\ncheck into RemoveAttributeById(). As an example, consider the possibility\nthat somebody makes a temporary type and then makes a permanent table with\na partitioning column of that type. 
What shall we do at session exit?\nFailing to remove the temp type is not an acceptable choice, because that\nleaves us with a permanently broken temp schema (compare bug #15631 [1]).\n\nAlso, I don't believe we can make that work without order-of-operations\nproblems in cases comparable to the original bug in this thread [2].\nOne or the other order of the object OIDs is going to lead to the column\nbeing visited for deletion before the whole table is, and again rejecting\nthe column deletion is not going to be an acceptable behavior.\n\nSo I think we're probably stuck with the approach of adding new internal\ndependencies. If we go that route, then our options for the released\nbranches are (1) do nothing, or (2) back-patch the code that adds such\ndependencies, but without a catversion bump. That would mean that only\ntables created after the next minor releases would have protection against\nthis problem. That's not ideal but maybe it's okay, considering that we\nhaven't seen actual field reports of trouble of this kind.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/15631-188663b383e1e697%40postgresql.org\n\n[2] https://www.postgresql.org/message-id/flat/CA%2Bu7OA4JKCPFrdrAbOs7XBiCyD61XJxeNav4LefkSmBLQ-Vobg%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 08 Jul 2019 10:58:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 10:32 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> That said, I'm not sure I see the use case for an ALTER TABLE .. DROP\n> COLUMN command that turns a partitioned table (with existing partitions\n> containing data) into one non-partitioned table with all data minus the\n> partitioning column(s).\n\nI think it would be useful to have \"ALTER TABLE blah NOT PARTITIONED\" but I\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:02:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 11:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jul 8, 2019 at 10:32 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > That said, I'm not sure I see the use case for an ALTER TABLE .. DROP\n> > COLUMN command that turns a partitioned table (with existing partitions\n> > containing data) into one non-partitioned table with all data minus the\n> > partitioning column(s).\n>\n> I think it would be useful to have \"ALTER TABLE blah NOT PARTITIONED\" but I\n\n...hit send too soon, and also, I don't think anyone will be very\nhappy if they get that behavior as a side effect of a DROP statement,\nmostly because it could take an extremely long time to execute.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:03:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jul 8, 2019 at 11:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Mon, Jul 8, 2019 at 10:32 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>>> That said, I'm not sure I see the use case for an ALTER TABLE .. DROP\n>>> COLUMN command that turns a partitioned table (with existing partitions\n>>> containing data) into one non-partitioned table with all data minus the\n>>> partitioning column(s).\n\n>> I think it would be useful to have \"ALTER TABLE blah NOT PARTITIONED\" but I\n\n> ...hit send too soon, and also, I don't think anyone will be very\n> happy if they get that behavior as a side effect of a DROP statement,\n> mostly because it could take an extremely long time to execute.\n\nFWIW, I was imagining the action as being (1) detach all the child\npartitions, (2) make parent into a non-partitioned table, (3)\ndrop the target column in each of these now-independent tables.\nNo data movement. Other than the need to acquire locks on all\nthe tables, it shouldn't be particularly slow.\n\nBut I'm still not volunteering to write it, because I'm not sure\nanyone would want such a behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 11:07:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I was imagining the action as being (1) detach all the child\n> partitions, (2) make parent into a non-partitioned table, (3)\n> drop the target column in each of these now-independent tables.\n> No data movement. Other than the need to acquire locks on all\n> the tables, it shouldn't be particularly slow.\n\nI see. I think that would be reasonable, but like you say, it's not\nclear that it's really what users would prefer. You can think of a\npartitioned table as a first-class object and the partitions as\nsubordinate implementation details; or you can think of the partitions\nas the first-class objects and the partitioned table as the\nsecond-rate glue that holds them together. It seems like users prefer\nthe former view.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jul 2019 13:18:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 10:58:56AM -0400, Tom Lane wrote:\n>Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> That said, I'm not sure I see the use case for an ALTER TABLE .. DROP\n>> COLUMN command that turns a partitioned table (with existing partitions\n>> containing data) into one non-partitioned table with all data minus the\n>> partitioning column(s).\n>\n>Yeah, it'd be a lot of work for a dubious goal.\n>\n>> This seems vaguely related to the issue of dropping foreign keys; see\n>> https://postgr.es/m/20190329152239.GA29258@alvherre.pgsql wherein I\n>> settled with a non-ideal solution to the problem of being unable to\n>> depend on something that did not cause the entire table to be dropped\n>> in certain cases.\n>\n>That's an interesting analogy. Re-reading that thread, what I said\n>in <29497.1554217629@sss.pgh.pa.us> seems pretty apropos to the\n>current problem:\n>\n>>> FWIW, I think that the dependency mechanism is designed around the idea\n>>> that whether it's okay to drop a *database object* depends only on what\n>>> other *database objects* rely on it, and that you can always make a DROP\n>>> valid by dropping all the dependent objects. That's not an unreasonable\n>>> design assumption, considering that the SQL standard embeds the same\n>>> assumption in its RESTRICT/CASCADE syntax.\n>\n>I think that is probably a fatal objection to my idea of putting an error\n>check into RemoveAttributeById(). As an example, consider the possibility\n>that somebody makes a temporary type and then makes a permanent table with\n>a partitioning column of that type. 
What shall we do at session exit?\n>Failing to remove the temp type is not an acceptable choice, because that\n>leaves us with a permanently broken temp schema (compare bug #15631 [1]).\n>\n>Also, I don't believe we can make that work without order-of-operations\n>problems in cases comparable to the original bug in this thread [2].\n>One or the other order of the object OIDs is going to lead to the column\n>being visited for deletion before the whole table is, and again rejecting\n>the column deletion is not going to be an acceptable behavior.\n>\n>So I think we're probably stuck with the approach of adding new internal\n>dependencies. If we go that route, then our options for the released\n>branches are (1) do nothing, or (2) back-patch the code that adds such\n>dependencies, but without a catversion bump. That would mean that only\n>tables created after the next minor releases would have protection against\n>this problem. That's not ideal but maybe it's okay, considering that we\n>haven't seen actual field reports of trouble of this kind.\n>\n\nCouldn't we also write a function that adds those dependencies for\nexisting objects, and request users to run it after the update?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 9 Jul 2019 21:18:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Mon, Jul 08, 2019 at 10:58:56AM -0400, Tom Lane wrote:\n>> So I think we're probably stuck with the approach of adding new internal\n>> dependencies. If we go that route, then our options for the released\n>> branches are (1) do nothing, or (2) back-patch the code that adds such\n>> dependencies, but without a catversion bump. That would mean that only\n>> tables created after the next minor releases would have protection against\n>> this problem. That's not ideal but maybe it's okay, considering that we\n>> haven't seen actual field reports of trouble of this kind.\n\n> Couldn't we also write a function that adds those dependencies for\n> existing objects, and request users to run it after the update?\n\nMaybe. I'm not volunteering to write such a thing.\n\nBTW, it looks like somebody actually did think about this problem with\nrespect to external dependencies of partition expressions:\n\nregression=# create function myabs(int) returns int language internal as 'int4abs' immutable strict parallel safe;\nCREATE FUNCTION\nregression=# create table foo (f1 int) partition by range (myabs(f1));\nCREATE TABLE\nregression=# drop function myabs(int);\nERROR: cannot drop function myabs(integer) because other objects depend on it\nDETAIL: table foo depends on function myabs(integer)\nHINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nUnfortunately, there's still no dependency on the column f1 in this\nscenario. That means any function that wants to reconstruct the\ncorrect dependencies would need a way to scan the partition expressions\nfor Vars. Not much fun from plpgsql, for sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jul 2019 16:39:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "I wrote:\n> So I think we're probably stuck with the approach of adding new internal\n> dependencies. If we go that route, then our options for the released\n> branches are (1) do nothing, or (2) back-patch the code that adds such\n> dependencies, but without a catversion bump. That would mean that only\n> tables created after the next minor releases would have protection against\n> this problem. That's not ideal but maybe it's okay, considering that we\n> haven't seen actual field reports of trouble of this kind.\n\nHere's a proposed patch for that. It's mostly pretty straightforward,\nexcept I had to add some recursion defenses in findDependentObjects that\nweren't there before. But those seem like a good idea anyway to prevent\ninfinite recursion in case of bogus entries in pg_depend.\n\nI also took the liberty of improving some related error messages that\nI thought were unnecessarily vague and not up to project standards.\n\nPer above, I'm envisioning applying this to HEAD and v12 with a catversion\nbump, and to v11 and v10 with no catversion bump.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 19 Jul 2019 14:14:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "I wrote:\n> Here's a proposed patch for that. It's mostly pretty straightforward,\n> except I had to add some recursion defenses in findDependentObjects that\n> weren't there before. But those seem like a good idea anyway to prevent\n> infinite recursion in case of bogus entries in pg_depend.\n> Per above, I'm envisioning applying this to HEAD and v12 with a catversion\n> bump, and to v11 and v10 with no catversion bump.\n\nPushed. Back-patching turned up one thing I hadn't expected: pre-v12\npg_dump bleated about circular dependencies. It turned out that Peter\nhad already installed a hack in pg_dump to suppress that complaint in\nconnection with generated columns, so I improved the comment and\nback-patched that too.\n\nI nearly missed the need for that because of all the noise that\ncheck-world emits in pre-v12 branches. We'd discussed back-patching\neb9812f27 at the time, and I think now it's tested enough that doing\nso is low risk (or at least, lower risk than the risk of not seeing\na failure). So I think I'll go do that now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:02:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "On 2019-Jul-22, Tom Lane wrote:\n\n> I nearly missed the need for that because of all the noise that\n> check-world emits in pre-v12 branches. We'd discussed back-patching\n> eb9812f27 at the time, and I think now it's tested enough that doing\n> so is low risk (or at least, lower risk than the risk of not seeing\n> a failure). So I think I'll go do that now.\n\nI'd like that, as it bites me too, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 15:34:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "Thanks a lot for the fix!\n\nBest,\nManuel\n\nOn Mon, Jul 22, 2019 at 9:35 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jul-22, Tom Lane wrote:\n>\n> > I nearly missed the need for that because of all the noise that\n> > check-world emits in pre-v12 branches. We'd discussed back-patching\n> > eb9812f27 at the time, and I think now it's tested enough that doing\n> > so is low risk (or at least, lower risk than the risk of not seeing\n> > a failure). So I think I'll go do that now.\n>\n> I'd like that, as it bites me too, thanks.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 22:53:20 +0200",
"msg_from": "Manuel Rigger <rigger.manuel@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-22, Tom Lane wrote:\n>> I nearly missed the need for that because of all the noise that\n>> check-world emits in pre-v12 branches. We'd discussed back-patching\n>> eb9812f27 at the time, and I think now it's tested enough that doing\n>> so is low risk (or at least, lower risk than the risk of not seeing\n>> a failure). So I think I'll go do that now.\n\n> I'd like that, as it bites me too, thanks.\n\nDone. The approach \"make check-world >/dev/null\" now emits the\nsame amount of noise on all branches, ie just\n\nNOTICE: database \"regression\" does not exist, skipping\n\n\nThe amount of parallelism you can apply is still pretty\nbranch-dependent, unfortunately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 17:17:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Broken defenses against dropping a partitioning column"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI see there is no test case for sslinfo. I have added a test case for it in\nmy project.\nDo you mind if I apply this test case to postgresql?\n\nBest regards,\nHao Wu",
"msg_date": "Mon, 8 Jul 2019 10:59:07 +0800",
"msg_from": "Hao Wu <hawu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Add test case for sslinfo"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 2:59 PM Hao Wu <hawu@pivotal.io> wrote:\n> I see there is no test case for sslinfo. I have added a test case for it in my project.\n\nHi Hao Wu,\n\nThanks! I see that you created a CF entry\nhttps://commitfest.postgresql.org/24/2203/. While I was scanning\nthrough the current CF looking for trouble, this one popped in front\nof my eyes, so here's some quick feedback even though it's in the next\nCF:\n\n+#!/bin/bash\n\nI don't think we can require that script interpreter.\n\nThis failed[1] with permissions errors:\n\n+cp: cannot create regular file '/server.crt': Permission denied\n\nIt looks like that's because the script assumes that PGDATA is set.\n\nI wonder if we want to include more SSL certificates, or if we want to\nuse the same set of fixed certificates (currently under\nsrc/test/ssl/ssl) for all tests like this. I don't have a strong\nopinion on that, but I wanted to mention that policy decision. (There\nis also a test somewhere that creates a new one on the fly.)\n\n[1] https://travis-ci.org/postgresql-cfbot/postgresql/builds/555576601\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 16:05:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add test case for sslinfo"
},
{
"msg_contents": "Hi Thomas,\n\nThank you for your quick response! I work on greenplum, and I didn't see\nthis folder(src/test/ssl/ssl) before.\nI will add more certificates to test and resend again.\n\nDo you have any suggestion about the missing PGDATA? Since the test needs\nto configure postgresql.conf, maybe there are other ways to determine this\nenvironment.\n\nThank you very much!\n\n\nOn Mon, Jul 8, 2019 at 12:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Jul 8, 2019 at 2:59 PM Hao Wu <hawu@pivotal.io> wrote:\n> > I see there is no test case for sslinfo. I have added a test case for it\n> in my project.\n>\n> Hi Hao Wu,\n>\n> Thanks! I see that you created a CF entry\n> https://commitfest.postgresql.org/24/2203/. While I was scanning\n> through the current CF looking for trouble, this one popped in front\n> of my eyes, so here's some quick feedback even though it's in the next\n> CF:\n>\n> +#!/bin/bash\n>\n> I don't think we can require that script interpreter.\n>\n> This failed[1] with permissions errors:\n>\n> +cp: cannot create regular file '/server.crt': Permission denied\n>\n> It looks like that's because the script assumes that PGDATA is set.\n>\n> I wonder if we want to include more SSL certificates, or if we want to\n> use the same set of fixed certificates (currently under\n> src/test/ssl/ssl) for all tests like this. I don't have a strong\n> opinion on that, but I wanted to mention that policy decision. 
(There\n> is also a test somewhere that creates a new one on the fly.)\n>\n> [1]\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/555576601\n>\n> --\n> Thomas Munro\n>\n> https://enterprisedb.com\n",
"msg_date": "Mon, 8 Jul 2019 14:11:34 +0800",
"msg_from": "Hao Wu <hawu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Add test case for sslinfo"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 02:11:34PM +0800, Hao Wu wrote:\n> Thank you for your quick response! I work on greenplum, and I didn't see\n> this folder(src/test/ssl/ssl) before.\n> I will add more certificates to test and resend again.\n\nNot having duplicates would be nice.\n\n> Do you have any suggestion about the missing PGDATA? Since the test needs\n> to configure postgresql.conf, maybe there are other ways to determine this\n> environment.\n\n+REGRESS = sslinfo\n+REGRESS_OPT = --temp-config=$(top_srcdir)/contrib/sslinfo/sslinfo.conf\n\nWhen it comes to custom configuration files in the regression tests,\nyou should always have NO_INSTALLCHECK = 1 in the Makefile because\nthere is no guarantee that the running server will have the\nconfiguration you want when running an installcheck.\n\n+echo \"preparing CRTs and KEYs\"\n+cp -f data/root.crt $PGDATA/\n+cp -f data/server.crt $PGDATA/\n+cp -f data/server.key $PGDATA/\n+chmod 400 $PGDATA/server.key\n+chmod 644 $PGDATA/server.crt\n+chmod 644 $PGDATA/root.crt\nUsing a TAP test here would be more adapted. Another idea would be to\nadd that directly into src/test/ssl/ and enforce the installation\nwith EXTRA_INSTALL when running the tests.\n\n+-- start_ignore\n+\\! bash config.bash clean\n+\\! pg_ctl restart 2>&1 >/dev/null\n+-- end_ignore\nPlease, no...\n--\nMichael",
"msg_date": "Mon, 8 Jul 2019 17:18:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add test case for sslinfo"
},
{
"msg_contents": "On 2019-07-08 10:18, Michael Paquier wrote:\n> On Mon, Jul 08, 2019 at 02:11:34PM +0800, Hao Wu wrote:\n>> Thank you for your quick response! I work on greenplum, and I didn't see\n>> this folder(src/test/ssl/ssl) before.\n>> I will add more certificates to test and resend again.\n> \n> Not having duplicates would be nice.\n\nI think sslinfo should be tested as an extension of src/test/ssl/\ninstead of its own test suite. There are too many complications that we\nwould otherwise have to solve again.\n\nYou might want to review commit f60a0e96778854ed0b7fd4737488ba88022e47bd\nand how it adds test cases. You can't just hardcode a specific output\nsince different installations might report TLS 1.2 vs 1.3, different\nciphers etc.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 16 Aug 2019 15:50:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add test case for sslinfo"
},
{
"msg_contents": "Hao Wu,\n\nAre you submitting an updated version of this patch soon?\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 14:07:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add test case for sslinfo"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have begun playing with regressplans.sh which enforces various\ncombinations of \"-f s|i|n|m|h\" when running the regression tests, and\nI have noticed that -fh can cause the server to become stuck in the\ntest join_hash.sql with this query (not sure which portion of the SET\nLOCAL parameters are involved):\nselect count(*) from simple r join extremely_skewed s using (id);\n\nThis does not happen with REL_10_STABLE where the test executes\nimmediately, so we visibly have an issue caused by v11 here.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Mon, 8 Jul 2019 14:52:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 5:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I have begun playing with regressplans.sh which enforces various\n> combinations of \"-f s|i|n|m|h\" when running the regression tests, and\n> I have noticed that -fh can cause the server to become stuck in the\n> test join_hash.sql with this query (not sure which portion of the SET\n> LOCAL parameters are involved) :\n> select count(*) from simple r join extremely_skewed s using (id);\n>\n> This does not happen with REL_10_STABLE where the test executes\n> immediately, so we has visibly an issue caused by v11 here.\n\nIf you don't allow hash joins it makes this plan:\n\n Aggregate\n   ->  Nested Loop\n         Join Filter: (r.id = s.id)\n         ->  Seq Scan on simple r\n         ->  Materialize\n               ->  Seq Scan on extremely_skewed s\n\n\"simple\" has 20k rows and \"extremely_skewed\" has 20k rows but the\nplanner thinks it only has 2. So this is going to take O(n^2) time and n\nis 20k. Not sure what to do about that. Maybe \"join_hash\" should be\nskipped for the -h (= no hash joins please) case?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 18:49:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 06:49:44PM +1200, Thomas Munro wrote:\n> \"simple\" has 20k rows and \"extremely_skewed\" has 20k rows but the\n> planner thinks it only has 2. So this going to take O(n^2) time and n\n> is 20k. Not sure what to do about that. Maybe \"join_hash\" should be\n> skipped for the -h (= no hash joins please) case?\n\nAh, thanks. Yes that's going to take a while :)\n\nWell, another thing I'd like to think about is if there is any point\nto keep regressplans.sh and the relevant options in postgres at this\nstage. From the top of the file one can read that:\n# This script runs the Postgres regression tests with all useful combinations\n# of the backend options that disable various query plan types. If the\n# results are not all the same, it may indicate a bug in a particular\n# plan type, or perhaps just a regression test whose results aren't fully\n# determinate (eg, due to lack of an ORDER BY keyword).\n\nHowever if you run any option with make check, then in all runs there\nare tests failing. We can improve the situation for some of them with\nORDER BY queries by looking at the query outputs, but some EXPLAIN\nqueries are sensitive to that, and the history around regressplans.sh\ndoes not play in favor of it (some people really use it?). If you\nlook at the latest commits for it, it has not been really touched in\n19 years.\n\nSo I would be rather in favor in nuking it.\n--\nMichael",
"msg_date": "Mon, 8 Jul 2019 16:40:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have begun playing with regressplans.sh which enforces various\n> combinations of \"-f s|i|n|m|h\" when running the regression tests, and\n> I have noticed that -fh can cause the server to become stuck in the\n> test join_hash.sql with this query (not sure which portion of the SET\n> LOCAL parameters are involved) :\n> select count(*) from simple r join extremely_skewed s using (id);\n\n> This does not happen with REL_10_STABLE where the test executes\n> immediately, so we has visibly an issue caused by v11 here.\n\nYeah, these test cases were added by fa330f9ad in v11.\n\nWhat it looks like to me is that some of these test cases force\n\"enable_mergejoin = off\", so if you also have enable_hashjoin off then\nyou are going to get a nestloop plan, and it's hardly surprising that\nthat takes an unreasonable amount of time on the rather large test\ntables used in these tests.\n\nGiven the purposes of this test, I think it'd be reasonable to force\nboth enable_hashjoin = on and enable_mergejoin = off at the very top\nof the join_hash script, or the corresponding place in join.sql in\nv11. Thomas, was there a specific reason for forcing enable_mergejoin\n= off for only some of these tests?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 10:19:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Well, another thing I'd like to think about is if there is any point\n> to keep regressplans.sh and the relevant options in postgres at this\n> stage. From the top of the file one can read that:\n\nThe point of regressplans.sh is to see if anything goes seriously\nwrong when forcing non-default plan choices --- seriously wrong being\ndefined as crashes or semantically wrong answers. It's not expected\nthat the regression tests will automatically pass when you do that,\nbecause of their dependencies on output row ordering, not to mention\nall the EXPLAINs. I'm not for removing it --- the fact that its\nresults require manual evaluation doesn't make it useless.\n\nHaving said that, join_hash.sql in particular seems to have zero\nvalue if it's not testing hash joins, so I think it'd be reasonable\nfor it to override a global enable_hashjoin = off setting. None of\nthe other regression test scripts seem to take nearly as much of a\nperformance hit from globally forcing poor plans.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 15:21:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Mon, Jul 08, 2019 at 03:21:41PM -0400, Tom Lane wrote:\n> Having said that, join_hash.sql in particular seems to have zero\n> value if it's not testing hash joins, so I think it'd be reasonable\n> for it to override a global enable_hashjoin = off setting. None of\n> the other regression test scripts seem to take nearly as much of a\n> performance hit from globally forcing poor plans.\n\nI am a bit confused here. Don't you mean to have enable_hashjoin =\n*on* at the top of hash_join.sql instead like in the attached?\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 10:31:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 2:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Given the purposes of this test, I think it'd be reasonable to force\n> both enable_hashjoin = on and enable_mergejoin = off at the very top\n> of the join_hash script, or the corresponding place in join.sql in\n> v11. Thomas, was there a specific reason for forcing enable_mergejoin\n> = off for only some of these tests?\n\nBased on a suggestion from Andres (if I recall correctly), I wrapped\neach individual test in savepoint/rollback, and then set just the GUCs\nneeded to get the plan shape and execution code path I wanted to\nexercise, and I guess I found that I only needed to disable merge\njoins for some of them. The idea was that the individual tests could\nbe understood independently.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 13:49:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jul 08, 2019 at 03:21:41PM -0400, Tom Lane wrote:\n>> Having said that, join_hash.sql in particular seems to have zero\n>> value if it's not testing hash joins, so I think it'd be reasonable\n>> for it to override a global enable_hashjoin = off setting. None of\n>> the other regression test scripts seem to take nearly as much of a\n>> performance hit from globally forcing poor plans.\n\n> I am a bit confused here. Don't you mean to have enable_hashjoin =\n> *on* at the top of hash_join.sql instead like in the attached?\n\nRight, overriding any enable_hashjoin = off that might've come from\nPGOPTIONS or wherever.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 22:20:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Jul 9, 2019 at 2:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Given the purposes of this test, I think it'd be reasonable to force\n>> both enable_hashjoin = on and enable_mergejoin = off at the very top\n>> of the join_hash script, or the corresponding place in join.sql in\n>> v11. Thomas, was there a specific reason for forcing enable_mergejoin\n>> = off for only some of these tests?\n\n> Based on a suggestion from Andres (if I recall correctly), I wrapped\n> each individual test in savepoint/rollback, and then set just the GUCs\n> needed to get the plan shape and execution code path I wanted to\n> exercise, and I guess I found that I only needed to disable merge\n> joins for some of them. The idea was that the individual tests could\n> be understood independently.\n\nBut per this discussion, they can only be \"understood independently\"\nif you make some assumptions about the prevailing values of the\nplanner GUCs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 22:22:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Based on a suggestion from Andres (if I recall correctly), I wrapped\n> > each individual test in savepoint/rollback, and then set just the GUCs\n> > needed to get the plan shape and execution code path I wanted to\n> > exercise, and I guess I found that I only needed to disable merge\n> > joins for some of them. The idea was that the individual tests could\n> > be understood independently.\n>\n> But per this discussion, they can only be \"understood independently\"\n> if you make some assumptions about the prevailing values of the\n> planner GUCs.\n\nYeah. I had obviously never noticed that test script. +1 for just\nenabling hash joins the top of join_hash.sql in 12+, and the\nequivalent section in 11's join.sql (which is luckily at the end of\nthe file).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 14:30:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 02:30:51PM +1200, Thomas Munro wrote:\n> Yeah. I had obviously never noticed that test script. +1 for just\n> enabling hash joins the top of join_hash.sql in 12+, and the\n> equivalent section in 11's join.sql (which is luckily at the end of\n> the file).\n\nRight, I did not pay much attention to REL_11_STABLE. In this case\nthe test begins around line 2030 and reaches the bottom of the file.\nI would actually add a RESET at the bottom of it to avoid any tests to\nbe impacted, as usually bug-fix tests are just appended. Thomas,\nperhaps you would prefer fixing it yourself? Or should I?\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 11:45:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jul 09, 2019 at 02:30:51PM +1200, Thomas Munro wrote:\n>> Yeah. I had obviously never noticed that test script. +1 for just\n>> enabling hash joins the top of join_hash.sql in 12+, and the\n>> equivalent section in 11's join.sql (which is luckily at the end of\n>> the file).\n\n> Right, I did not pay much attention to REL_11_STABLE. In this case\n> the test begins around line 2030 and reaches the bottom of the file.\n> I would actually add a RESET at the bottom of it to avoid any tests to\n> be impacted, as usually bug-fix tests are just appended.\n\nAgreed that the scope should be limited. But in 12/HEAD, I think the\nrelevant tests are all wrapped into one transaction block, so that\nusing SET LOCAL would be enough. Not sure if 11 is the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 22:51:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 2:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Jul 09, 2019 at 02:30:51PM +1200, Thomas Munro wrote:\n> > Yeah. I had obviously never noticed that test script. +1 for just\n> > enabling hash joins the top of join_hash.sql in 12+, and the\n> > equivalent section in 11's join.sql (which is luckily at the end of\n> > the file).\n>\n> Right, I did not pay much attention to REL_11_STABLE. In this case\n> the test begins around line 2030 and reaches the bottom of the file.\n> I would actually add a RESET at the bottom of it to avoid any tests to\n> be impacted, as usually bug-fix tests are just appended. Thomas,\n> perhaps you would prefer fixing it yourself? Or should I?\n\nIt's my mistake. I'll fix it later today.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 15:04:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 03:04:27PM +1200, Thomas Munro wrote:\n> It's my mistake. I'll fix it later today.\n\nThanks!\n--\nMichael",
"msg_date": "Tue, 9 Jul 2019 12:26:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 12:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The point of regressplans.sh is to see if anything goes seriously\n> wrong when forcing non-default plan choices --- seriously wrong being\n> defined as crashes or semantically wrong answers. It's not expected\n> that the regression tests will automatically pass when you do that,\n> because of their dependencies on output row ordering, not to mention\n> all the EXPLAINs. I'm not for removing it --- the fact that its\n> results require manual evaluation doesn't make it useless.\n>\n>\nIt might be worth post-processing results files to ignore row ordering\nin some cases to allow for easier comparison. Has this been proposed\nin the past?\n\n-- \nMelanie Plageman\n\nOn Mon, Jul 8, 2019 at 12:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nThe point of regressplans.sh is to see if anything goes seriously\nwrong when forcing non-default plan choices --- seriously wrong being\ndefined as crashes or semantically wrong answers. It's not expected\nthat the regression tests will automatically pass when you do that,\nbecause of their dependencies on output row ordering, not to mention\nall the EXPLAINs. I'm not for removing it --- the fact that its\nresults require manual evaluation doesn't make it useless.\n\nIt might be worth post-processing results files to ignore row orderingin some cases to allow for easier comparison. Has this been proposedin the past?-- Melanie Plageman",
"msg_date": "Tue, 9 Jul 2019 11:54:29 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 11:54:29AM -0700, Melanie Plageman wrote:\n> It might be worth post-processing results files to ignore row ordering\n> in some cases to allow for easier comparison. Has this been proposed\n> in the past?\n\nNot that I recall.\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 13:41:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 10:12 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 09, 2019 at 11:54:29AM -0700, Melanie Plageman wrote:\n> > It might be worth post-processing results files to ignore row ordering\n> > in some cases to allow for easier comparison. Has this been proposed\n> > in the past?\n>\n> Not that I recall.\n>\n\nIt would be good if we can come up with something like that. It will\nbe helpful for zheap, where in some cases we get different row\nordering due to in-place updates. As of now, we try to add Order By\nor do some extra magic to get consistent row ordering.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2019 12:51:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:\n> It would be good if we can come up with something like that. It will\n> be helpful for zheap, where in some cases we get different row\n> ordering due to in-place updates. As of now, we try to add Order By\n> or do some extra magic to get consistent row ordering.\n\nThat was an issue for me as well when working with Postgres-XC when\nthe row ordering was not guaranteed depending on the number of nodes\n(speaking of which Greenplum has the same issues, no?). Adding ORDER\nBY clauses to a set of tests may make sense, but then this may impact\nthe plans generated for some of them..\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 16:40:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 12:40 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:\n> > It would be good if we can come up with something like that. It will\n> > be helpful for zheap, where in some cases we get different row\n> > ordering due to in-place updates. As of now, we try to add Order By\n> > or do some extra magic to get consistent row ordering.\n>\n> That was an issue for me as well when working with Postgres-XC when\n> the row ordering was not guaranteed depending on the number of nodes\n> (speaking of which Greenplum has the same issues, no?). Adding ORDER\n> BY clauses to a set of tests may make sense, but then this may impact\n> the plans generated for some of them..\n> --\n> Michael\n>\n\nWe have a tool that does this. gpdiff [1] is used for results\npost-processing\nand it uses a perl module called atmsort [2] to deal with the specific\nORDER BY\ncase discussed here.\n\n[1]\nhttps://github.com/greenplum-db/gpdb/blob/master/src/test/regress/gpdiff.pl\n[2]\nhttps://github.com/greenplum-db/gpdb/blob/master/src/test/regress/atmsort.pl\n\n-- \nMelanie Plageman\n\nOn Wed, Jul 10, 2019 at 12:40 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:\n> It would be good if we can come up with something like that. It will\n> be helpful for zheap, where in some cases we get different row\n> ordering due to in-place updates. As of now, we try to add Order By\n> or do some extra magic to get consistent row ordering.\n\nThat was an issue for me as well when working with Postgres-XC when\nthe row ordering was not guaranteed depending on the number of nodes\n(speaking of which Greenplum has the same issues, no?). Adding ORDER\nBY clauses to a set of tests may make sense, but then this may impact\nthe plans generated for some of them..\n--\nMichael\nWe have a tool that does this. 
gpdiff [1] is used for results post-processingand it uses a perl module called atmsort [2] to deal with the specific ORDER BYcase discussed here.[1] https://github.com/greenplum-db/gpdb/blob/master/src/test/regress/gpdiff.pl[2] https://github.com/greenplum-db/gpdb/blob/master/src/test/regress/atmsort.pl-- Melanie Plageman",
"msg_date": "Wed, 10 Jul 2019 06:09:33 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:\n>> It would be good if we can come up with something like that. It will\n>> be helpful for zheap, where in some cases we get different row\n>> ordering due to in-place updates. As of now, we try to add Order By\n>> or do some extra magic to get consistent row ordering.\n\n> That was an issue for me as well when working with Postgres-XC when\n> the row ordering was not guaranteed depending on the number of nodes\n> (speaking of which Greenplum has the same issues, no?). Adding ORDER\n> BY clauses to a set of tests may make sense, but then this may impact\n> the plans generated for some of them..\n\nYeah, I do not want to get into a situation where we can't test\nqueries that lack ORDER BY. Also, the fact that tableam X doesn't\nreproduce heap's row ordering is not a good reason to relax the\nstrength of the tests for heap. So I'm wondering about some\npostprocessing that we could optionally apply. Perhaps the tools\nMelanie mentions could help.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 09:45:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
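{
"msg_contents": "[Editor's note] The optional post-processing Tom is wondering about here is the approach taken by the Greenplum tools Melanie mentioned upthread: normalize each result set so that two outputs differing only in row order compare equal, for queries that carry no ORDER BY. The real atmsort is a Perl module with considerably more logic; the following is only a minimal Python sketch of the core idea, using a made-up psql-style result block:

```python
def normalize_result(block: str) -> str:
    """Sort the data rows of a psql-style result block so that two
    results differing only in row order compare equal.  The header,
    the '----' separator and the '(N rows)' footer stay in place;
    only the data rows in between are sorted."""
    lines = block.strip("\n").split("\n")
    header, sep, *rest = lines
    footer = rest.pop() if rest and rest[-1].startswith("(") else None
    out = [header, sep, *sorted(rest)]
    if footer is not None:
        out.append(footer)
    return "\n".join(out)

# Two captures of the same unordered query, rows in different order.
a = " id | val\n----+-----\n  1 | foo\n  2 | bar\n(2 rows)"
b = " id | val\n----+-----\n  2 | bar\n  1 | foo\n(2 rows)"
print(normalize_result(a) == normalize_result(b))  # True
```

A filter like this would be run over both the results and expected files before diffing; queries with an explicit ORDER BY (where row order is part of the expected behavior) would be left untouched, which is the distinction atmsort draws."
},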
{
"msg_contents": "On Wed, Jul 10, 2019 at 6:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:\n> >> It would be good if we can come up with something like that. It will\n> >> be helpful for zheap, where in some cases we get different row\n> >> ordering due to in-place updates. As of now, we try to add Order By\n> >> or do some extra magic to get consistent row ordering.\n>\n> > That was an issue for me as well when working with Postgres-XC when\n> > the row ordering was not guaranteed depending on the number of nodes\n> > (speaking of which Greenplum has the same issues, no?). Adding ORDER\n> > BY clauses to a set of tests may make sense, but then this may impact\n> > the plans generated for some of them..\n>\n> Yeah, I do not want to get into a situation where we can't test\n> queries that lack ORDER BY. Also, the fact that tableam X doesn't\n> reproduce heap's row ordering is not a good reason to relax the\n> strength of the tests for heap. So I'm wondering about some\n> postprocessing that we could optionally apply. Perhaps the tools\n> Melanie mentions could help.\n>\n\nSurprisingly, I have been working from a couple of days to use those\nPerl tools from Greenplum for Zedstore. As for Zedstore plans differ\nfor many regress tests because relation size not being the same as\nheap and all. Also, for similar reasons, row orders change as\nwell. So, to effectively use the test untouched to validate Zedstore\nand yes was thinking will help Zheap testing as well. I also tested\nthe same for regressplans.sh and it will lift a lot of manual burden\nof investigating the results. As one can specify to completely ignore\nexplain plan outputs from the comparison between results and\nexpected. 
Will post patch for the tool, once I get in little decent\nshape.\n\nOn Wed, Jul 10, 2019 at 6:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Michael Paquier <michael@paquier.xyz> writes:> On Wed, Jul 10, 2019 at 12:51:28PM +0530, Amit Kapila wrote:>> It would be good if we can come up with something like that. It will>> be helpful for zheap, where in some cases we get different row>> ordering due to in-place updates. As of now, we try to add Order By>> or do some extra magic to get consistent row ordering.\n> That was an issue for me as well when working with Postgres-XC when> the row ordering was not guaranteed depending on the number of nodes> (speaking of which Greenplum has the same issues, no?). Adding ORDER> BY clauses to a set of tests may make sense, but then this may impact> the plans generated for some of them..\nYeah, I do not want to get into a situation where we can't testqueries that lack ORDER BY. Also, the fact that tableam X doesn'treproduce heap's row ordering is not a good reason to relax thestrength of the tests for heap. So I'm wondering about somepostprocessing that we could optionally apply. Perhaps the toolsMelanie mentions could help.Surprisingly, I have been working from a couple of days to use thosePerl tools from Greenplum for Zedstore. As for Zedstore plans differfor many regress tests because relation size not being the same asheap and all. Also, for similar reasons, row orders change aswell. So, to effectively use the test untouched to validate Zedstoreand yes was thinking will help Zheap testing as well. I also testedthe same for regressplans.sh and it will lift a lot of manual burdenof investigating the results. As one can specify to completely ignoreexplain plan outputs from the comparison between results andexpected. Will post patch for the tool, once I get in little decentshape.",
"msg_date": "Wed, 10 Jul 2019 09:11:41 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 09:11:41AM -0700, Ashwin Agrawal wrote:\n> Will post patch for the tool, once I get in little decent shape.\n\nThat would be nice! We may be able to get something into v13 this way\nthen.\n--\nMichael",
"msg_date": "Thu, 11 Jul 2019 08:52:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PGOPTIONS=\"-fh\" make check gets stuck since Postgres 11"
}
] |
[
{
"msg_contents": "My patch for using heap_multi_insert in the catalog [1] failed the logical\ndecoding part of test/recovery [2].\n\nThe assertion it failed on seems to not take multi inserts into the catalog\ninto consideration, while the main logic does. This assertion hasn't tripped\nsince there are no multi inserts into the catalog, but if we introduce them it\nwill so I’m raising it in a separate thread as it is sort of unrelated from the\npatch in question.\n\nThe attached patch fixes my test failure and makes sense to me, but this code\nis far from my neck of the tree, so I’m really not sure this is the best way to\nexpress the assertion.\n\ncheers ./daniel\n\n[1] https://commitfest.postgresql.org/23/2125/\n[2] https://postgr.es/m/CA+hUKGLg1vFiXnkxjp_bea5+VP8D=vHRwSdvj7Rbikr_u4xFbg@mail.gmail.com",
"msg_date": "Mon, 8 Jul 2019 21:42:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Assertion for logically decoding multi inserts into the catalog"
},
{
"msg_contents": "On 08/07/2019 22:42, Daniel Gustafsson wrote:\n> My patch for using heap_multi_insert in the catalog [1] failed the logical\n> decoding part of test/recovery [2].\n> \n> The assertion it failed on seems to not take multi inserts into the catalog\n> into consideration, while the main logic does. This assertion hasn't tripped\n> since there are no multi inserts into the catalog, but if we introduce them it\n> will so I’m raising it in a separate thread as it is sort of unrelated from the\n> patch in question.\n> \n> The attached patch fixes my test failure and makes sense to me, but this code\n> is far from my neck of the tree, so I’m really not sure this is the best way to\n> express the assertion.\n>\n> --- a/src/backend/replication/logical/decode.c\n> +++ b/src/backend/replication/logical/decode.c\n> @@ -974,7 +974,8 @@ DecodeMultiInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> \t\tReorderBufferQueueChange(ctx->reorder, XLogRecGetXid(r),\n> \t\t\t\t\t\t\t\t buf->origptr, change);\n> \t}\n> -\tAssert(data == tupledata + tuplelen);\n> +\tAssert(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE &&\n> +\t\t data == tupledata + tuplelen);\n> }\n> \n> /*\n\nThis patch makes the assertion more strict than it was before. I don't \nsee how it could possibly make a regression failure go away. On the \ncontrary. So, huh?\n\n- Heikki\n\n\n",
"msg_date": "Wed, 31 Jul 2019 20:20:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Assertion for logically decoding multi inserts into the catalog"
},
{
"msg_contents": "> On 31 Jul 2019, at 19:20, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> This patch makes the assertion more strict than it was before. I don't see how it could possibly make a regression failure go away. On the contrary. So, huh?\n\nYeah, this is clearly fat-fingered, the intent is to only run the Assert in\ncase XLH_INSERT_CONTAINS_NEW_TUPLE is set in xlrec->flags, as it only applies\nunder that condition. The attached is tested in both in the multi-insert patch\nand on HEAD, but I wish I could figure out a better way to express this Assert.\n\ncheers ./daniel",
"msg_date": "Tue, 6 Aug 2019 00:52:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Assertion for logically decoding multi inserts into the catalog"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 12:52:09AM +0200, Daniel Gustafsson wrote:\n> Yeah, this is clearly fat-fingered, the intent is to only run the Assert in\n> case XLH_INSERT_CONTAINS_NEW_TUPLE is set in xlrec->flags, as it only applies\n> under that condition. The attached is tested in both in the multi-insert patch\n> and on HEAD, but I wish I could figure out a better way to express this Assert.\n\n- Assert(data == tupledata + tuplelen);\n+ Assert(data == tupledata + tuplelen ||\n+ ~(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE));\nI find this way to formulate the assertion a bit confusing, as what\nyou want is basically to make sure that XLH_INSERT_CONTAINS_NEW_TUPLE\nis not set in the context of catalogs. So you could just use that\ninstead:\n(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE) == 0\n\nAnyway, if you make a parallel with heap_multi_insert() and the way\neach xl_multi_insert_tuple is built, I think that the error does not\ncome from this assertion, but with the way the data length is computed\nin DecodeMultiInsert as a move to the next chunk of tuple data is only\ndone if XLH_INSERT_CONTAINS_NEW_TUPLE is set. So, in my opinion,\nsomething to fix here is to make sure that we compute the correct\nlength even if XLH_INSERT_CONTAINS_NEW_TUPLE is *not* set, and then\nmake sure at the end that the tuple length matches to the end.\n\nThis way, we also make sure that we never finish on a state where\nthe block data associated to the multi-insert record is NULL but\nbecause of a mistake there is some tuple data detected, or that the\ntuple data set has a final length which matches the expected outcome.\nAnd actually, it seems to me that this happens in your original patch\nto open access to multi-insert for catalogs, because for some reason\nXLogRecGetBlockData() returns NULL with a non-zero tuplelen in\nDecodeMultiInsert(). I can see that with the TAP test\n010_logical_decoding_timelines.pl\n\nAttached is a patch for that. Thoughts? \n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 12:36:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assertion for logically decoding multi inserts into the catalog"
},
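{
"msg_contents": "[Editor's note] The invariant Michael is describing can be seen in miniature with a toy decoder: after walking every length-prefixed tuple chunk, the read cursor must land exactly at the end of the tuple data, and that check only holds if the cursor is advanced for every tuple regardless of flags. This is plain Python with an invented chunk format, not PostgreSQL's actual xl_multi_insert_tuple layout:

```python
import struct

def encode_tuples(tuples):
    """Serialize tuples as length-prefixed chunks (toy format)."""
    data = b""
    for t in tuples:
        data += struct.pack("<I", len(t)) + t
    return data

def decode_tuples(data):
    """Walk the chunks, mimicking DecodeMultiInsert's final check
    that the cursor ends exactly at tupledata + tuplelen."""
    cursor, out = 0, []
    while cursor < len(data):
        (n,) = struct.unpack_from("<I", data, cursor)
        cursor += 4 + n
        out.append(data[cursor - n:cursor])
    assert cursor == len(data), "tuple data length mismatch"
    return out

payload = encode_tuples([b"row-1", b"row-two", b""])
print(decode_tuples(payload))  # [b'row-1', b'row-two', b'']
```

Skipping the cursor advance for some tuples (the analogue of only moving to the next chunk when XLH_INSERT_CONTAINS_NEW_TUPLE is set) is what makes the final assertion trip, which is the failure mode Michael's patch guards against."
},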
{
"msg_contents": "> On 6 Aug 2019, at 05:36, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Aug 06, 2019 at 12:52:09AM +0200, Daniel Gustafsson wrote:\n>> Yeah, this is clearly fat-fingered, the intent is to only run the Assert in\n>> case XLH_INSERT_CONTAINS_NEW_TUPLE is set in xlrec->flags, as it only applies\n>> under that condition. The attached is tested in both in the multi-insert patch\n>> and on HEAD, but I wish I could figure out a better way to express this Assert.\n> \n> - Assert(data == tupledata + tuplelen);\n> + Assert(data == tupledata + tuplelen ||\n> + ~(xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE));\n> I find this way to formulate the assertion a bit confusing, as what\n> you want is basically to make sure that XLH_INSERT_CONTAINS_NEW_TUPLE\n> is not set in the context of catalogs. So you could just use that\n> instead:\n> (xlrec->flags & XLH_INSERT_CONTAINS_NEW_TUPLE) == 0\n> \n> Anyway, if you make a parallel with heap_multi_insert() and the way\n> each xl_multi_insert_tuple is built, I think that the error does not\n> come from this assertion, but with the way the data length is computed\n> in DecodeMultiInsert as a move to the next chunk of tuple data is only\n> done if XLH_INSERT_CONTAINS_NEW_TUPLE is set. 
So, in my opinion,\n> something to fix here is to make sure that we compute the correct\n> length even if XLH_INSERT_CONTAINS_NEW_TUPLE is *not* set, and then\n> make sure at the end that the tuple length matches to the end.\n> \n> This way, we also make sure that we never finish on a state where\n> the block data associated to the multi-insert record is NULL but\n> because of a mistake there is some tuple data detected, or that the\n> tuple data set has a final length which matches the expected outcome.\n> And actually, it seems to me that this happens in your original patch\n> to open access to multi-insert for catalogs, because for some reason\n> XLogRecGetBlockData() returns NULL with a non-zero tuplelen in\n> DecodeMultiInsert(). I can see that with the TAP test\n> 010_logical_decoding_timelines.pl\n> \n> Attached is a patch for that. Thoughts? \n\nThanks, this is a much better approach and it passes tests for me. +1 on this\nversion (regardless of outcome of the other patch as this is separate).\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 6 Aug 2019 15:08:48 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Assertion for logically decoding multi inserts into the catalog"
},
{
"msg_contents": "On Tue, Aug 06, 2019 at 03:08:48PM +0200, Daniel Gustafsson wrote:\n> Thanks, this is a much better approach and it passes tests for me. +1 on this\n> version (regardless of outcome of the other patch as this is separate).\n\nI had an extra lookup at this stuff this morning, and applied the\npatch. Please note that I have kept the assertion on tupledata which\ncannot be NULL and added a comment about that because it is not\npossible to finish yet in a state where we do not have tuple data in\nthis context, but it actually could be the case if we begin to use\nmulti-inserts with system catalogs, so the assertion is here to make\nfuture patch authors careful about that. We could in this case bypass\nDecodeMultiInsert() if tupledata is NULL and assert that\nXLH_INSERT_CONTAINS_NEW_TUPLE should not be set, or we could just\nbypass the logic if XLH_INSERT_CONTAINS_NEW_TUPLE is not set at all.\nLet's sort that out in your other patch.\n--\nMichael",
"msg_date": "Wed, 7 Aug 2019 10:37:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assertion for logically decoding multi inserts into the catalog"
}
] |
[
{
"msg_contents": "I have some specific questions about pg_xact_commit_timestamp, and am\nhoping that this is the right place to ask. I read a lot of the commentary\nabout the original patch, and the contributors seem to be here. If I'm\nasking in the wrong place, just let me know.\n\nI'm working on a design for a concurrency-safe incremental aggregate rollup\nsystem,and pg_xact_commit_timestamp sounds perfect. But I've found very\nlittle commentary on it generally, and couldn't figure out how it works in\ndetail from the source code.\n\nHopefully, someone knows the answers to a few questions:\n\n* Is it possible for pg_xact_commit_timestamp to produce times out of\norder? What I'm after is a way to identify records that have been chagned\nsince a specific time so that I can get any later changes for processing. I\ndon't need them in commit order, so overlapping timestamps aren't a\nproblem.\n\n* How many bytes are added to each row in the final implementation? The\ndiscussions I saw seemed to be ranging from 12-24 bytes. There was\ndiscussion of adding in extra bytes for \"just in case.\" This is pre 9.5, so\na world ago.\n\n* Are the timestamps indexed internally? With a B-tree? I ask for\ncapacity-planning reasons.\n\n* I've seen on StackOverflow and the design discussions that the timestamps\nare not kept indefinitely, but can't find the details on exactly how long\nthey are stored.\n\n* Any rules of thumb on the performance impact of enabling\npg_xact_commit_timestamp? I don't need the data on all tables but, where I\ndo, it sounds like it might work perfectly.\n\nMany thanks for any assistance!\n\nI have some specific questions about pg_xact_commit_timestamp, and am hoping that this is the right place to ask. I read a lot of the commentary about the original patch, and the contributors seem to be here. 
If I'm asking in the wrong place, just let me know.I'm working on a design for a concurrency-safe incremental aggregate rollup system,and pg_xact_commit_timestamp sounds perfect. But I've found very little commentary on it generally, and couldn't figure out how it works in detail from the source code.Hopefully, someone knows the answers to a few questions:* Is it possible for pg_xact_commit_timestamp to produce times out of order? What I'm after is a way to identify records that have been chagned since a specific time so that I can get any later changes for processing. I don't need them in commit order, so overlapping timestamps aren't a problem. * How many bytes are added to each row in the final implementation? The discussions I saw seemed to be ranging from 12-24 bytes. There was discussion of adding in extra bytes for \"just in case.\" This is pre 9.5, so a world ago.* Are the timestamps indexed internally? With a B-tree? I ask for capacity-planning reasons.* I've seen on StackOverflow and the design discussions that the timestamps are not kept indefinitely, but can't find the details on exactly how long they are stored.* Any rules of thumb on the performance impact of enabling pg_xact_commit_timestamp? I don't need the data on all tables but, where I do, it sounds like it might work perfectly.Many thanks for any assistance!",
"msg_date": "Tue, 9 Jul 2019 08:22:14 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Detailed questions about pg_xact_commit_timestamp"
},
{
"msg_contents": "Hi,\n\nOn 7/9/19 12:22 AM, Morris de Oryx wrote:\n> I have some specific questions about pg_xact_commit_timestamp, and am hoping\n> that this is the right place to ask. I read a lot of the commentary about the\n> original patch, and the contributors seem to be here. If I'm asking in the wrong\n> place, just let me know.\n> \n> I'm working on a design for a concurrency-safe incremental aggregate rollup\n> system,and pg_xact_commit_timestamp sounds perfect. But I've found very little\n> commentary on it generally, and couldn't figure out how it works in detail from\n> the source code.\n> \n> Hopefully, someone knows the answers to a few questions:\n> \n> * Is it possible for pg_xact_commit_timestamp to produce times out of order?\n> What I'm after is a way to identify records that have been chagned since a\n> specific time so that I can get any later changes for processing. I don't need\n> them in commit order, so overlapping timestamps aren't a problem. \n\nI think yes. For example, you can have a session \"A\" xid 34386826 that commit\nafter session \"B\" xid 34386827:\npostgres=# select pg_xact_commit_timestamp('34386827'::xid);\n pg_xact_commit_timestamp\n-------------------------------\n 2019-07-11 09:32:29.806183+00\n(1 row)\n\npostgres=# select pg_xact_commit_timestamp('34386826'::xid);\n pg_xact_commit_timestamp\n------------------------------\n 2019-07-11 09:32:38.99444+00\n(1 row)\n\n\n> \n> * How many bytes are added to each row in the final implementation? The\n> discussions I saw seemed to be ranging from 12-24 bytes. There was discussion of\n> adding in extra bytes for \"just in case.\" This is pre 9.5, so a world ago.\n\nsrc/backend/access/transam/commit_ts.c says 8+4 bytes per xact.\n\nNote it is not per row but per xact: We only have to store the timestamp for one\nxid.\n\n> \n> * Are the timestamps indexed internally? With a B-tree? 
I ask for\n> capacity-planning reasons.\n\nI think no.\n\n> \n> * I've seen on StackOverflow and the design discussions that the timestamps are\n> not kept indefinitely, but can't find the details on exactly how long they are\n> stored.\n> \n\nYes, timestamps are stored in the pg_commit_ts directory. Old timestamps are removed\nafter freeze, as explained in\nhttps://www.postgresql.org/docs/current/routine-vacuuming.html:\n\n> The sole disadvantage of increasing autovacuum_freeze_max_age (and\nvacuum_freeze_table_age along with it) is that the pg_xact and pg_commit_ts\nsubdirectories of the database cluster will take more space, because it must\nstore the commit status and (if track_commit_timestamp is enabled) timestamp of\nall transactions back to the autovacuum_freeze_max_age horizon. The commit\nstatus uses two bits per transaction, so if autovacuum_freeze_max_age is set to\nits maximum allowed value of two billion, pg_xact can be expected to grow to\nabout half a gigabyte and pg_commit_ts to about 20GB. If this is trivial\ncompared to your total database size, setting autovacuum_freeze_max_age to its\nmaximum allowed value is recommended. Otherwise, set it depending on what you\nare willing to allow for pg_xact and pg_commit_ts storage. (The default, 200\nmillion transactions, translates to about 50MB of pg_xact storage and about 2GB\nof pg_commit_ts storage.)\n\n> * Any rules of thumb on the performance impact of enabling\n> pg_xact_commit_timestamp? I don't need the data on all tables but, where I do,\n> it sounds like it might work perfectly.\n> \n> Many thanks for any assistance!\n\nI didn't notice any performance impact, but I didn't do any extensive testing.\n\n\n\nRegards,",
"msg_date": "Thu, 11 Jul 2019 11:48:11 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: Detailed questions about pg_xact_commit_timestamp"
},
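The capacity figures quoted above from the routine-vacuuming documentation can be sanity-checked with quick arithmetic. A sketch follows; the per-transaction costs here are assumptions inferred from that quote (2 bits per xid for pg_xact, roughly 10 bytes per xid for pg_commit_ts), not values read out of the server source — the 8+4 figure mentioned from commit_ts.c would give slightly larger results:

```python
# Back-of-the-envelope check of the pg_xact / pg_commit_ts sizes quoted
# from the routine-vacuuming documentation. The per-transaction costs
# are assumptions inferred from that quote, not measured values.

XACT_BITS_PER_XID = 2           # commit status: two bits per transaction
COMMIT_TS_BYTES_PER_XID = 10    # timestamp plus origin info, approximately

def storage_gb(xid_horizon):
    """Approximate on-disk sizes in GB for a given freeze horizon."""
    pg_xact = xid_horizon * XACT_BITS_PER_XID / 8 / 1024**3
    pg_commit_ts = xid_horizon * COMMIT_TS_BYTES_PER_XID / 1024**3
    return pg_xact, pg_commit_ts

# Maximum autovacuum_freeze_max_age (~2 billion): ~0.5 GB and ~20 GB
print(storage_gb(2_000_000_000))
# Default horizon (200 million): ~50 MB and ~2 GB
print(storage_gb(200_000_000))
```

The two calls reproduce the documentation's "about half a gigabyte / about 20GB" and "about 50MB / about 2GB" figures.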
{
"msg_contents": "Adrien, thanks very much for answering my question. Just a couple of\nfollow-up points, if you don't mind.\n\nIn our answer, you offer an example of pg_xact_commit_timestamp showing\nout-of-sequence commit times:\n\nSession xid pg_xact_commit_timestamp\nA 34386826 2019-07-11 09:32:38.994440+00 Started earlier,\ncommitted later\nB 34386827 2019-07-11 09:32:29.806183+00\n\nI may not have asked my question clearly, or I may not understand the\nanswer properly. Or both ;-) If I understand it correctly, an xid is\nassigned when a transaction starts. One transaction might take a second,\nanother might take ten minutes. So, the xid sequence doesn't imply anything\nat all about commit sequence. What I'm trying to figure out is if it is\npossible for the commit timestamps to somehow be out of order. What I'm\nlooking for is a way of finding changes committed since a specific moment.\nWhen the transaction started doesn't matter in my case.\n\nIs pg_xact_commit_timestamp suitable for this? I'm getting the impression\nthat it isn't. But I don't understand quite how. And if it isn't suited to\nthis purpose, does anyone know what pg_xact_commit_timestamp is for? What\nI'm after is something like a \"xcommitserial\" that increases reliably, and\nmonotonically on transaction commit. That's how I'm hoping that\npg_xact_commit_timestamp functions.\n\nThanks also for making me understand that pg_xact_commit_timestamp applies\nto a *transaction*, not to each row. That makes it a lot lighter in the\ndatabase. I was thinking 12 bytes+ per row, which is completely off for my\ncase. (I tend to insert thousands of rows in a transaction.)\n\n> Yes timestamp are stored in pg_commit_ts directory. Old timestamp are\nremoved after freeze has explained in\n> https://www.postgresql.org/docs/current/routine-vacuuming.html\n\nThanks for the answer, and for kindly pointing me to the right section of\nthe documentation. It's easy to get impatient with new(er) users. 
I'm _not_\nlazy about reading manuls and researching but, well, the Postgres\ndocumentation is over 3,000 pages long when you download it. So, I may have\nmissed a detail or two.... If I read that correctly, the ~4 billion number\nrange is made into an endless circle by keeping ~2 billions numbers in the\npast, and 2 billion in the future. If that's right, I'm never going to be\nso out of data that the ~2 billion number window is too small.\n\nAdrien, thanks very much for answering my question. Just a couple of follow-up points, if you don't mind.In our answer, you offer an example of pg_xact_commit_timestamp showing out-of-sequence commit times:Session xid pg_xact_commit_timestampA 34386826 2019-07-11 09:32:38.994440+00 Started earlier, committed laterB 34386827 2019-07-11 09:32:29.806183+00I may not have asked my question clearly, or I may not understand the answer properly. Or both ;-) If I understand it correctly, an xid is assigned when a transaction starts. One transaction might take a second, another might take ten minutes. So, the xid sequence doesn't imply anything at all about commit sequence. What I'm trying to figure out is if it is possible for the commit timestamps to somehow be out of order. What I'm looking for is a way of finding changes committed since a specific moment. When the transaction started doesn't matter in my case. Is pg_xact_commit_timestamp suitable for this? I'm getting the impression that it isn't. But I don't understand quite how. And if it isn't suited to this purpose, does anyone know what pg_xact_commit_timestamp is for? What I'm after is something like a \"xcommitserial\" that increases reliably, and monotonically on transaction commit. That's how I'm hoping that pg_xact_commit_timestamp functions. Thanks also for making me understand that pg_xact_commit_timestamp applies to a *transaction*, not to each row. That makes it a lot lighter in the database. I was thinking 12 bytes+ per row, which is completely off for my case. 
(I tend to insert thousands of rows in a transaction.)> Yes timestamp are stored in pg_commit_ts directory. Old timestamp are removed after freeze has explained in> https://www.postgresql.org/docs/current/routine-vacuuming.htmlThanks for the answer, and for kindly pointing me to the right section of the documentation. It's easy to get impatient with new(er) users. I'm _not_ lazy about reading manuls and researching but, well, the Postgres documentation is over 3,000 pages long when you download it. So, I may have missed a detail or two.... If I read that correctly, the ~4 billion number range is made into an endless circle by keeping ~2 billions numbers in the past, and 2 billion in the future. If that's right, I'm never going to be so out of data that the ~2 billion number window is too small.",
"msg_date": "Fri, 12 Jul 2019 22:50:21 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Detailed questions about pg_xact_commit_timestamp"
},
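The "endless circle" described above is how PostgreSQL compares 32-bit xids: two xids are ordered by their signed difference modulo 2^32, so each xid sees roughly 2 billion xids behind it and 2 billion ahead. A minimal sketch of that comparison rule, modeled on the server's TransactionIdPrecedes (a simplified illustration, not the actual C code — the real function also special-cases permanent xids below FirstNormalTransactionId):

```python
# Sketch of circular 32-bit xid comparison, modeled on the server's
# TransactionIdPrecedes logic: the signed difference modulo 2**32 decides
# which xid is "in the past". Simplified illustration only.

def xid_precedes(a: int, b: int) -> bool:
    """True if xid a is logically earlier than xid b."""
    diff = (a - b) & 0xFFFFFFFF        # wrap the difference to 32 bits
    if diff & 0x80000000:              # reinterpret as signed 32-bit
        diff -= 1 << 32
    return diff < 0

print(xid_precedes(100, 200))          # plainly earlier: True
# After wraparound, a huge xid sits a few steps "behind" a tiny one:
print(xid_precedes(4_294_967_290, 3))  # True
print(xid_precedes(3, 4_294_967_290))  # False
```

This is why the window works both ways: an xid is "in the past" of another whenever the wrapped difference is negative, regardless of raw numeric order.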
{
"msg_contents": "On 7/12/19 2:50 PM, Morris de Oryx wrote:\n> Adrien, thanks very much for answering my question. Just a couple of follow-up\n> points, if you don't mind.\n> \n> In our answer, you offer an example of pg_xact_commit_timestamp showing\n> out-of-sequence commit times:\n> \n> Session xid pg_xact_commit_timestamp\n> A 34386826 2019-07-11 09:32:38.994440+00 Started earlier,\n> committed later\n> B 34386827 2019-07-11 09:32:29.806183+00\n> \n> I may not have asked my question clearly, or I may not understand the answer\n> properly. Or both ;-) If I understand it correctly, an xid is assigned when a\n> transaction starts.\n\nIt is a little bit more complicated :) When a transaction start, a *virtual* xid\nis assigned. It is when the transaction change the state of the database, an xid\nis assigned:\n> Throughout running a transaction, a server process holds an exclusive lock on the transaction's virtual transaction ID. If a permanent ID is assigned to the transaction (which normally happens only if the transaction changes the state of the database), it also holds an exclusive lock on the transaction's permanent transaction ID until it ends.\n\nhttps://www.postgresql.org/docs/current/view-pg-locks.html\n\n(It shouldn't change anything for you)\n\n\n> One transaction might take a second, another might take ten\n> minutes. So, the xid sequence doesn't imply anything at all about commit\n> sequence. What I'm trying to figure out is if it is possible for the commit\n> timestamps to somehow be out of order. \n\nI am sorry but I don't understand what you mean by \"commit timestamps to somehow\nbe out of order\"?\n\n> What I'm looking for is a way of finding\n> changes committed since a specific moment. When the transaction started doesn't\n> matter in my case. 
\n\n\nYes, the commit timestamp is the time when the transaction is committed:\npostgres=# begin;\nBEGIN\npostgres=# select now();\n now\n------------------------------\n 2019-07-16 08:46:59.64712+00\n(1 row)\n\npostgres=# select txid_current();\n txid_current\n--------------\n 34386830\n(1 row)\n\npostgres=# commit;\nCOMMIT\npostgres=# select pg_xact_commit_timestamp('34386830'::xid);\n pg_xact_commit_timestamp\n-------------------------------\n 2019-07-16 08:47:30.238746+00\n(1 row)\n\n\n> \n> Is pg_xact_commit_timestamp suitable for this? I'm getting the impression that\n> it isn't. But I don't understand quite how. And if it isn't suited to this\n> purpose, does anyone know what pg_xact_commit_timestamp is for? What I'm after\n> is something like a \"xcommitserial\" that increases reliably, and monotonically\n> on transaction commit. That's how I'm hoping that pg_xact_commit_timestamp\n> functions. \n\nI don't think so. pg_xact_commit_timestamp returns the timestamp. If you want\nsome kind of ordering you have to fetch all commit timestamps (with their\nrespective xid) and order them.\n\nYou can also implement this tracking yourself with triggers which insert a\nrow containing the xid and timestamp in a tracking table. You can add an index on\nthe timestamp column. With this approach you don't have to worry about vacuum freeze,\nwhich removes old timestamps. As you add more writes, it could be more expensive\nthan track_commit_timestamp.\n\n> \n> Thanks also for making me understand that pg_xact_commit_timestamp applies to a\n> *transaction*, not to each row. That makes it a lot lighter in the database. I\n> was thinking 12 bytes+ per row, which is completely off for my case. (I tend to\n> insert thousands of rows in a transaction.)\n> \n>> Yes timestamp are stored in pg_commit_ts directory. 
Old timestamp are removed\nafter freeze has explained in\n>> https://www.postgresql.org/docs/current/routine-vacuuming.html\n> \n> Thanks for the answer, and for kindly pointing me to the right section of the\n> documentation. It's easy to get impatient with new(er) users. I'm _not_ lazy\n> about reading manuls and researching but, well, the Postgres documentation is\n> over 3,000 pages long when you download it. So, I may have missed a detail or\n> two.... If I read that correctly, the ~4 billion number range is made into an\n> endless circle by keeping ~2 billions numbers in the past, and 2 billion in the\n> future. If that's right, I'm never going to be so out of data that the ~2\n> billion number window is too small.\n> \n\nYes, it is a circular counter because xids are stored in 32 bits. However you have\nto keep in mind that vacuum freezes old visible rows (the default is 200 million\ntransactions) and you lose the timestamp information.\n\nSawada-san made a good presentation on freezing:\nhttps://www.slideshare.net/masahikosawada98/introduction-vauum-freezing-xid-wraparound\n\nYou can also look at this website:\nhttp://www.interdb.jp/pg/pgsql05.html#_5.1.\nhttp://www.interdb.jp/pg/pgsql06.html#_6.3.\n\nRegards,\n\n-- \nAdrien",
"msg_date": "Tue, 16 Jul 2019 11:03:36 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: Detailed questions about pg_xact_commit_timestamp"
},
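Adrien's suggestion — fetch the commit timestamps with their xids and order them yourself — can be sketched without a server. The (xid, timestamp) pairs below reuse the example values from the thread; the code just shows that commit order falls out of sorting by timestamp rather than by xid, and that "changes since T" becomes a filter over the same data (an illustration of the idea, not a claim about server behavior):

```python
# Sketch of Adrien's suggestion: recover commit order by sorting fetched
# (xid, commit_ts) pairs by timestamp, not by xid. The pairs reuse the
# timestamps quoted in the thread.
from datetime import datetime, timezone

def ts(s):
    return datetime.fromisoformat(s).replace(tzinfo=timezone.utc)

commits = [
    (34386826, ts("2019-07-11 09:32:38.994440")),  # started first, committed later
    (34386827, ts("2019-07-11 09:32:29.806183")),
    (34386830, ts("2019-07-16 08:47:30.238746")),
]

# Commit order: sort by timestamp, with xid only as a tie-breaker.
by_commit_time = sorted(commits, key=lambda c: (c[1], c[0]))
print([xid for xid, _ in by_commit_time])        # [34386827, 34386826, 34386830]

# "Changes committed since T" is then a simple filter over the same pairs:
since = ts("2019-07-11 09:32:30")
print([xid for xid, t in commits if t > since])  # [34386826, 34386830]
```

Note that xid 34386826 sorts after 34386827 despite being numerically smaller, which is exactly the out-of-order situation discussed above.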
{
"msg_contents": "Adrien, thanks a lot for taking the time to try and explain all of these\ndetails to me. I'm looking at incremental rollups, and thinking through\nvarious alternative designs. It sounds like pg_xact_commit_timestamp just\nisn't the right tool for my purposes, so I'll go in another direction.\n\nAll the same, I've learned a _lot_ of important points about Postgres from\ntrying to sort all of this out. Your messages have been a real help.\n\n\nOn Tue, Jul 16, 2019 at 7:03 PM Adrien Nayrat <adrien.nayrat@anayrat.info>\nwrote:\n\n> On 7/12/19 2:50 PM, Morris de Oryx wrote:\n> > Adrien, thanks very much for answering my question. Just a couple of\n> follow-up\n> > points, if you don't mind.\n> >\n> > In our answer, you offer an example of pg_xact_commit_timestamp showing\n> > out-of-sequence commit times:\n> >\n> > Session xid pg_xact_commit_timestamp\n> > A 34386826 2019-07-11 09:32:38.994440+00 Started earlier,\n> > committed later\n> > B 34386827 2019-07-11 09:32:29.806183+00\n> >\n> > I may not have asked my question clearly, or I may not understand the\n> answer\n> > properly. Or both ;-) If I understand it correctly, an xid is assigned\n> when a\n> > transaction starts.\n>\n> It is a little bit more complicated :) When a transaction start, a\n> *virtual* xid\n> is assigned. It is when the transaction change the state of the database,\n> an xid\n> is assigned:\n> > Throughout running a transaction, a server process holds an exclusive\n> lock on the transaction's virtual transaction ID. If a permanent ID is\n> assigned to the transaction (which normally happens only if the transaction\n> changes the state of the database), it also holds an exclusive lock on the\n> transaction's permanent transaction ID until it ends.\n>\n> https://www.postgresql.org/docs/current/view-pg-locks.html\n>\n> (It shouldn't change anything for you)\n>\n>\n> > One transaction might take a second, another might take ten\n> > minutes. 
So, the xid sequence doesn't imply anything at all about commit\n> > sequence. What I'm trying to figure out is if it is possible for the\n> commit\n> > timestamps to somehow be out of order.\n>\n> I am sorry but I don't understand what you mean by \"commit timestamps to\n> somehow\n> be out of order\"?\n>\n> > What I'm looking for is a way of finding\n> > changes committed since a specific moment. When the transaction started\n> doesn't\n> > matter in my case.\n>\n>\n> Yes, the commit timestamp is the time when the transaction is committed :\n> postgres=# begin;\n> BEGIN\n> postgres=# select now();\n> now\n> ------------------------------\n> 2019-07-16 08:46:59.64712+00\n> (1 row)\n>\n> postgres=# select txid_current();\n> txid_current\n> --------------\n> 34386830\n> (1 row)\n>\n> postgres=# commit;\n> COMMIT\n> postgres=# select pg_xact_commit_timestamp('34386830'::xid);\n> pg_xact_commit_timestamp\n> -------------------------------\n> 2019-07-16 08:47:30.238746+00\n> (1 row)\n>\n>\n> >\n> > Is pg_xact_commit_timestamp suitable for this? I'm getting the\n> impression that\n> > it isn't. But I don't understand quite how. And if it isn't suited to\n> this\n> > purpose, does anyone know what pg_xact_commit_timestamp is for? What I'm\n> after\n> > is something like a \"xcommitserial\" that increases reliably, and\n> monotonically\n> > on transaction commit. That's how I'm hoping that\n> pg_xact_commit_timestamp\n> > functions.\n>\n> I don't think so. pg_xact_commit_timestamp returns the timestamp. If you\n> want\n> some kind of ordering you have to fetch all commit timestamps (with their\n> respective xid) and order them.\n>\n> You also can implement this tracking by yourself with triggers which\n> insert a\n> row containing xid and timestamp in a tracking table. You can add an index\n> on\n> timestamp column. With this approach you don't have to worry about vacuum\n> freeze\n> which remove old timestamps. 
As you add more write, it could be more\n> expensive\n> than track_commit_timestamp.\n>\n> >\n> > Thanks also for making me understand that pg_xact_commit_timestamp\n> applies to a\n> > *transaction*, not to each row. That makes it a lot lighter in the\n> database. I\n> > was thinking 12 bytes+ per row, which is completely off for my case. (I\n> tend to\n> > insert thousands of rows in a transaction.)\n> >\n> >> Yes timestamp are stored in pg_commit_ts directory. Old timestamp are\n> removed\n> > after freeze has explained in\n> >> https://www.postgresql.org/docs/current/routine-vacuuming.html\n> >\n> > Thanks for the answer, and for kindly pointing me to the right section\n> of the\n> > documentation. It's easy to get impatient with new(er) users. I'm _not_\n> lazy\n> > about reading manuls and researching but, well, the Postgres\n> documentation is\n> over 3,000 pages long when you download it. So, I may have missed a\n> detail or\n> two.... If I read that correctly, the ~4 billion number range is made\n> into an\n> endless circle by keeping ~2 billions numbers in the past, and 2 billion\n> in the\n> future. If that's right, I'm never going to be so out of data that the ~2\n> > billion number window is too small.\n> >\n>\n> Yes it is a circular counter because xid are stored on 32 bits. However\n> you have\n> to keep in mind that vacuum freeze old visible rows (default is 200\n> millions\n> transactions) and you lose timestamp information.\n>\n> Sawada-san made a good presentation on freezing:\n>\n> https://www.slideshare.net/masahikosawada98/introduction-vauum-freezing-xid-wraparound\n>\n> You can also look at this website:\n> http://www.interdb.jp/pg/pgsql05.html#_5.1.\n> http://www.interdb.jp/pg/pgsql06.html#_6.3.\n>\n> Regards,\n>\n> --\n> Adrien",
"msg_date": "Wed, 17 Jul 2019 07:56:22 +1000",
"msg_from": "Morris de Oryx <morrisdeoryx@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Detailed questions about pg_xact_commit_timestamp"
}
] |
[
{
"msg_contents": "The following patch fixes propagation of arguments to the trigger\nfunction to child partitions both when initially creating the trigger\nand when adding new partitions to a partitioned table.\n\nThe included regression test should demonstrate the problem, for clarity\nrepeated in slightly more readable form here:\n\nbb=> create table parted_trig (a int) partition by list (a);\nCREATE TABLE\nbb=> create table parted_trig1 partition of parted_trig for values in (1);\nCREATE TABLE\nbb=> create or replace function trigger_notice() returns trigger as $$\nbb$> declare\nbb$> arg1 text = TG_ARGV[0];\nbb$> arg2 integer = TG_ARGV[1];\nbb$> begin\nbb$> raise notice 'trigger % on % % % for % args % %', TG_NAME, TG_TABLE_NAME, TG_WHEN, TG_OP, TG_LEVEL, arg1, arg2;\nbb$> return null;\nbb$> end;\nbb$> $$ language plpgsql;\nCREATE FUNCTION\nbb=> create trigger aaa after insert on parted_trig for each row execute procedure trigger_notice('text', 1);\nCREATE TRIGGER\nbb=> \\d parted_trig\n Tabelle �public.parted_trig�\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert \n--------+---------+--------------+---------------+-------------\n a | integer | | | \nPartitionsschl�ssel: LIST (a)\nTrigger:\n aaa AFTER INSERT ON parted_trig FOR EACH ROW EXECUTE PROCEDURE trigger_notice('text', '1')\nAnzahl Partitionen: 1 (Mit \\d+ alle anzeigen.)\n\nbb=> \\d parted_trig1\n Tabelle �public.parted_trig1�\n Spalte | Typ | Sortierfolge | NULL erlaubt? | Vorgabewert \n--------+---------+--------------+---------------+-------------\n a | integer | | | \nPartition von: parted_trig FOR VALUES IN (1)\nTrigger:\n aaa AFTER INSERT ON parted_trig1 FOR EACH ROW EXECUTE PROCEDURE trigger_notice()\n\nFixed:\n\nbb=> \\d parted_trig1\n Tabelle �public.parted_trig1�\n Spalte | Typ | Sortierfolge | NULL erlaubt? 
| Vorgabewert \n--------+---------+--------------+---------------+-------------\n a | integer | | | \nPartition von: parted_trig FOR VALUES IN (1)\nTrigger:\n aaa AFTER INSERT ON parted_trig1 FOR EACH ROW EXECUTE PROCEDURE trigger_notice('text', '1')\n\nPatch is against 11.4, but applies to master with minor offset.\n\nAll regression test pass.",
"msg_date": "Tue, 9 Jul 2019 15:00:27 +0200",
"msg_from": "Patrick McHardy <kaber@trash.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix trigger argument propagation to child partitions"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 03:00:27PM +0200, Patrick McHardy wrote:\n>The following patch fixes propagation of arguments to the trigger\n>function to child partitions both when initially creating the trigger\n>and when adding new partitions to a partitioned table.\n>\n\nThanks for the report and bugfix. It seeem the parameters in row triggers\non partitioned tables never worked :-( For a moment I was wondering why it\nshows on 11 and not 10 (based on the assumption you'd send a patch against\n10 if it was affected), but 10 actually did not support row triggers on\npartitioned tables.\n\nThe fix seems OK to me, although I see we're parsing tgargs in ruleutils.c\nand that version (around line ~1050) uses fastgetattr instead of\nheap_getattr, and checks the isnull parameter after the call. I guess we\nshould do the same thing here.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 9 Jul 2019 18:59:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix trigger argument propagation to child partitions"
},
{
"msg_contents": "On 2019-Jul-09, Tomas Vondra wrote:\n\n> On Tue, Jul 09, 2019 at 03:00:27PM +0200, Patrick McHardy wrote:\n> > The following patch fixes propagation of arguments to the trigger\n> > function to child partitions both when initially creating the trigger\n> > and when adding new partitions to a partitioned table.\n> \n> Thanks for the report and bugfix. It seeem the parameters in row triggers\n> on partitioned tables never worked :-( For a moment I was wondering why it\n> shows on 11 and not 10 (based on the assumption you'd send a patch against\n> 10 if it was affected), but 10 actually did not support row triggers on\n> partitioned tables.\n\nRight ...\n\n> The fix seems OK to me, although I see we're parsing tgargs in ruleutils.c\n> and that version (around line ~1050) uses fastgetattr instead of\n> heap_getattr, and checks the isnull parameter after the call. I guess we\n> should do the same thing here.\n\nYeah, absolutely. The attached v2 is basically Patrick's patch with\nvery minor style changes. I'll get this pushed as soon as the tests\nfinish running.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 9 Jul 2019 16:51:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix trigger argument propagation to child partitions"
}
] |
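For context on what the thread above is fixing: the bug was that child-partition triggers were created without the arguments stored for the parent. In the pg_trigger catalog, a trigger's arguments travel in tgargs, a single bytea of NUL-terminated strings (with tgnargs holding the count), and each argument is stored as text, which is why the \d output shows trigger_notice('text', '1') even though 1 was typed as an integer. A hedged sketch of that packing convention (an illustration in Python of the catalog format, not the server's C code or the patch itself):

```python
# Sketch of the pg_trigger.tgargs convention: trigger arguments are
# packed into one bytea as NUL-terminated strings. Illustration only.

def pack_tgargs(args):
    """Encode a list of text arguments the way tgargs stores them."""
    return b"".join(a.encode() + b"\x00" for a in args)

def unpack_tgargs(raw):
    """Split a tgargs bytea back into its argument strings."""
    # Drop the empty chunk after the final NUL; empty-string args survive.
    return [p.decode() for p in raw.split(b"\x00")[:-1]]

raw = pack_tgargs(["text", "1"])
print(raw)                 # b'text\x001\x00'
print(unpack_tgargs(raw))  # ['text', '1']
```

The fix, in effect, had to copy this value (and tgnargs) from the parent trigger when cloning it onto a partition instead of creating the child trigger with no arguments.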
[
{
"msg_contents": "I have been wanting to contribute to the Postgres project for a while \nnow, and I wanted to get some suggestions about the IDE and other tools \nthat others are using (preferably, somewhat modern tools).\n\nCan anyone share what IDE they are using and if they have any other tips \non setting up a development environment etc.?\n\nI use both Windows and Linux so either OS is fine.\n\nThank you,\n\nIgal Sapir\nLucee Core Developer\nLucee.org <http://lucee.org/>\n\n\n\n\n\n\n\nI have been wanting to contribute to the Postgres project for a\n while now, and I wanted to get some suggestions about the IDE and\n other tools that others are using (preferably, somewhat modern\n tools).\nCan anyone share what IDE they are using and if they have any\n other tips on setting up a development environment etc.?\nI use both Windows and Linux so either OS is fine.\nThank you,\n\n\nIgal Sapir\n \n Lucee Core Developer\n \nLucee.org",
"msg_date": "Tue, 9 Jul 2019 09:36:11 -0700",
"msg_from": "\"Igal @ Lucee.org\" <igal@lucee.org>",
"msg_from_op": true,
"msg_subject": "Development Environment"
}
] |
[
{
"msg_contents": "Hello,\n\nSeveral times on master[1] beginning with an initial occurrence 36\ndays ago, and every time on REL_12_STABLE[2], but not on older\nbranches, build farm animal coypu has failed in the regression tests\nwith the error given in the subject. How can there be too many if\nthere are only 20 in a parallel group?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=coypu&br=HEAD\n[2] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=coypu&br=REL_12_STABLE\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2019 12:46:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "coypu: \"FATAL: sorry, too many clients already\""
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Several times on master[1] beginning with an initial occurrence 36\n> days ago, and every time on REL_12_STABLE[2], but not on older\n> branches, build farm animal coypu has failed in the regression tests\n> with the error given in the subject. How can there be too many if\n> there are only 20 in a parallel group?\n\nWell, (a) according to the initdb steps in successful runs, coypu\nis running with very resource-starved settings, because initdb picks\n\nselecting default max_connections ... 20\nselecting default shared_buffers ... 128MB\n\nmeaning there is exactly no headroom above the 20 parallel tests\nthat we sometimes try to launch. (Digging in the buildfarm logs\nshows that it used to pick 30, so something's changed there.)\n\n(b) coypu is running with force_parallel_mode = regress, which\ncomes close to doubling its requirement for backend processes.\n(It didn't use to do that, either.)\n\n(c) session disconnect is asynchronous, so previous test backends might\nstill be cleaning up when new ones try to launch.\n\nThe wonder is not that coypu sometimes fails but that it ever succeeds.\n\nI don't see a really good reason to be using force_parallel_mode on\nsuch a low-end box, and would recommend taking that out in any case.\nIf the box's SysV IPC limits can't be raised, it might be a good idea\nto restrict the maximum test parallelism. For instance, I run\nprairiedog with\n\n 'build_env' => {\n 'MAX_CONNECTIONS' => '3',\n },\n\nbecause I determined a long time ago that it got through the parallel\ntests the fastest that way. (Perhaps this setting is no longer optimal,\nbut the exact value is not very relevant here.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jul 2019 22:09:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coypu: \"FATAL: sorry, too many clients already\""
},
{
"msg_contents": "\n\n> On 10 Jul 2019, at 04:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I don't see a really good reason to be using force_parallel_mode on\n> such a low-end box, and would recommend taking that out in any case.\n> If the box's SysV IPC limits can't be raised, it might be a good idea\n> to restrict the maximum test parallelism. For instance, I run\n> prairiedog with\n> \n> 'build_env' => {\n> 'MAX_CONNECTIONS' => '3',\n> },\n> \n\nHi,\n\nThe difference of behavior might be explained by an upgraded OS and upgraded buildfarm script.\nI’ll implement the suggested setting as soon as I can powercycle the machine, which I cannot contact right now.\n\nRegards,\n\nRémi Zara\n\n\n\n",
"msg_date": "Wed, 10 Jul 2019 22:08:04 +0200",
"msg_from": "=?utf-8?Q?R=C3=A9mi_Zara?= <remi_zara@mac.com>",
"msg_from_op": false,
"msg_subject": "Re: coypu: \"FATAL: sorry, too many clients already\""
}
] |
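The resource arithmetic Tom lays out above (20 parallel tests, a near-doubling of backend processes under force_parallel_mode = regress, and a default max_connections of 20) can be sketched as a back-of-the-envelope check. This is a hypothetical simplification: the "2 backends per test" factor is an approximation of one leader plus roughly one parallel worker, and the real worker count depends on the plans actually chosen.

```python
# Rough sketch of the headroom math from the thread above; the factor of
# 2 under force_parallel_mode is an approximation, not exact accounting.
def backends_needed(parallel_tests, force_parallel_mode=False):
    per_test = 2 if force_parallel_mode else 1
    return parallel_tests * per_test

max_connections = 20   # what initdb now picks on the resource-starved box

# Without force_parallel_mode the 20-way group just barely fits...
print(backends_needed(20) <= max_connections)        # True
# ...but with it, the group overshoots even before counting backends from
# asynchronous disconnects that are still cleaning up.
print(backends_needed(20, True) > max_connections)   # True
```

Which is why, as Tom notes, the wonder is that the animal ever succeeds: there is exactly zero headroom even in the best case.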
[
{
"msg_contents": "Hi\n\nHere:\n\n https://www.postgresql.org/docs/12/view-pg-roles.html\n\nwe state:\n\n \"This view explicitly exposes the OID column of the underlying table,\n since that is needed to do joins to other catalogs.\"\n\nI think it's superfluous to mention this now OIDs are exposed by default;\nattached patch (for REL_12_STABLE and HEAD) simply removes this sentence.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 10 Jul 2019 14:35:56 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doc: minor update for description of \"pg_roles\" view"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 02:35:56PM +0900, Ian Barwick wrote:\n> Hi\n> \n> Here:\n> \n> https://www.postgresql.org/docs/12/view-pg-roles.html\n> \n> we state:\n> \n> \"This view explicitly exposes the OID column of the underlying table,\n> since that is needed to do joins to other catalogs.\"\n> \n> I think it's superfluous to mention this now OIDs are exposed by default;\n> attached patch (for REL_12_STABLE and HEAD) simply removes this sentence.\n\nPatch applied through PG 12. Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 10 Jul 2019 14:24:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: minor update for description of \"pg_roles\" view"
},
{
"msg_contents": "On 7/11/19 3:24 AM, Bruce Momjian wrote:\n> On Wed, Jul 10, 2019 at 02:35:56PM +0900, Ian Barwick wrote:\n>> Hi\n>>\n>> Here:\n>>\n>> https://www.postgresql.org/docs/12/view-pg-roles.html\n>>\n>> we state:\n>>\n>> \"This view explicitly exposes the OID column of the underlying table,\n>> since that is needed to do joins to other catalogs.\"\n>>\n>> I think it's superfluous to mention this now OIDs are exposed by default;\n>> attached patch (for REL_12_STABLE and HEAD) simply removes this sentence.\n> \n> Patch applied though PG 12. Thanks.\n\nThanks!\n\nRegards\n\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Jul 2019 09:05:46 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: minor update for description of \"pg_roles\" view"
}
] |
[
{
"msg_contents": "I found the following make's behavior is annoying (at dab81b9953).\n\nmake distclean\n./configure ....\nmake all\n<succeeds>\nmake -j4 clean all\nrelpath.c:21:10: fatal error: catalog/pg_tablespace_d.h: No such file or directory\n #include \"catalog/pg_tablespace_d.h\"\n\n(-j is needed; this happens for me with -j2)\n\nJust fixing the Makefile for it reveals the next complainer.\n\nI'm not sure that it's the right thing, but make got quiet by\nmoving include to the top of SUBDIRS in src/Makefile.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 10 Jul 2019 14:51:18 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "make clean removes excesively"
},
{
"msg_contents": "Sorry, the subject of the previous mail was wrong. I resend it\nwith the correct subject.\n\n==== \nI found the following make's behavior is annoying (at dab81b9953).\n\nmake distclean\n./configure ....\nmake all\n<succeeds>\nmake -j4 clean all\nrelpath.c:21:10: fatal error: catalog/pg_tablespace_d.h: No such file or directory\n #include \"catalog/pg_tablespace_d.h\"\n\n(-j is needed; this happens for me with -j2)\n\nJust fixing the Makefile for it reveals the next complainer.\n\nI'm not sure that it's the right thing, but make got quiet by\nmoving include to the top of SUBDIRS in src/Makefile.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 10 Jul 2019 16:17:35 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "make -jn fails by requiring not-yet-generated include files."
}
] |
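The failure mode above — a parallel `make` descending into a subdirectory (here src/common, for relpath.c) before the generated catalog headers exist — is the classic generated-header race, usually fixed with an order-only prerequisite rather than by reordering SUBDIRS. A hypothetical sketch of that pattern; the target names below are illustrative, not the actual src/Makefile rules:

```make
SUBDIRS = common port timezone backend ...

# Order-only prerequisite (everything after the "|"): each subdirectory
# build waits until the generated headers exist, but the headers'
# timestamps never force a rebuild of the subdirectory itself.
$(SUBDIRS:%=%-recurse): | submake-generated-headers

submake-generated-headers:
	$(MAKE) -C backend/catalog generated-headers
```

With this shape, `make -j4 clean all` serializes only the header generation, keeping the rest of the tree fully parallel — unlike the SUBDIRS reordering workaround, which still relies on make's (unordered under -j) directory traversal.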
[
{
"msg_contents": "How is this intended to work?\n\npg_checksums enumerate the files. What if there are files there from a\ndifferent tableam? Isn't pg_checksums just going to badly fail then, since\nit assumes everything is heap?\n\nAlso, do we allow AMs that don't support checksumming data? Do we have any\nchecks for tables created with such AMs in a system that has checksums\nenabled?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 10 Jul 2019 11:42:34 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "pg_checksums (or checksums in general) vs tableam"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 11:42:34AM +0200, Magnus Hagander wrote:\n> pg_checksums enumerate the files. What if there are files there from a\n> different tableam? Isn't pg_checksums just going to badly fail then, since\n> it assumes everything is heap?\n> \n> Also, do we allow AMs that don't support checksumming data? Do we have any\n> checks for tables created with such AMs in a system that has checksums\n> enabled?\n\nTable AMs going through shared buffers and smgr.c, like zedstore,\nshare the same page header, meaning that the on-disk file is the same\nas heap, and that checksums are compiled similarly to heap.\npg_checksums is not going to complain on those ones and would work\njust fine.\n\nTable AMs using their own storage layer (which would most likely use\ntheir own checksum method normally?) would be ignored by pg_checksums\nif the file names don't match what smgr uses, but it could result in\nfailures if they use on-disk file names which match.\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 22:05:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksums (or checksums in general) vs tableam"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 3:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 10, 2019 at 11:42:34AM +0200, Magnus Hagander wrote:\n> > pg_checksums enumerate the files. What if there are files there from a\n> > different tableam? Isn't pg_checksums just going to badly fail then,\n> since\n> > it assumes everything is heap?\n> >\n> > Also, do we allow AMs that don't support checksumming data? Do we have\n> any\n> > checks for tables created with such AMs in a system that has checksums\n> > enabled?\n>\n> Table AMs going through shared buffers and smgr.c, like zedstore,\n> share the same page header, meaning that the on-disk file is the same\n> as heap, and that checksums are compiled similarly to heap.\n> pg_checksums is not going to complain on those ones and would work\n> just fine.\n\n\n> Table AMs using their own storage layer (which would most likely use\n> their own checksum method normally?) would be ignored by pg_checksums\n> if the file names don't match what smgr uses, but it could result in\n> failures if they use on-disk file names which match.\n>\n\nThat would be fine, if we actually knew. Should we (or have we already?)\ndefined a rule that they are not allowed to use the same naming standard\nunless they have the same type of header?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 10 Jul 2019 18:12:18 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_checksums (or checksums in general) vs tableam"
},
{
"msg_contents": "Hi,\n\nOn July 10, 2019 9:12:18 AM PDT, Magnus Hagander <magnus@hagander.net> wrote:\n>On Wed, Jul 10, 2019 at 3:05 PM Michael Paquier <michael@paquier.xyz>\n>wrote:\n>\n>> On Wed, Jul 10, 2019 at 11:42:34AM +0200, Magnus Hagander wrote:\n>> > pg_checksums enumerate the files. What if there are files there\n>from a\n>> > different tableam? Isn't pg_checksums just going to badly fail\n>then,\n>> since\n>> > it assumes everything is heap?\n>> >\n>> > Also, do we allow AMs that don't support checksumming data? Do we\n>have\n>> any\n>> > checks for tables created with such AMs in a system that has\n>checksums\n>> > enabled?\n>>\n>> Table AMs going through shared buffers and smgr.c, like zedstore,\n>> share the same page header, meaning that the on-disk file is the same\n>> as heap, and that checksums are compiled similarly to heap.\n>> pg_checksums is not going to complain on those ones and would work\n>> just fine.\n>\n>\n>> Table AMs using their own storage layer (which would most likely use\n>> their own checksum method normally?) would be ignored by pg_checksums\n>> if the file names don't match what smgr uses, but it could result in\n>> failures if they use on-disk file names which match.\n>>\n>\n>That would be fine, if we actually knew. Should we (or have we\n>already?)\n>defined a rule that they are not allowed to use the same naming\n>standard\n>unless they have the same type of header?\n\nNo, don't think we have already. There's the related problem of what to include in base backups, too.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 10 Jul 2019 09:19:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksums (or checksums in general) vs tableam"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 09:19:03AM -0700, Andres Freund wrote:\n> On July 10, 2019 9:12:18 AM PDT, Magnus Hagander <magnus@hagander.net> wrote:\n>> That would be fine, if we actually knew. Should we (or have we already?)\n>> defined a rule that they are not allowed to use the same naming standard\n>> unless they have the same type of header?\n> \n> No, don't think we have already. There's the related problem of\n> what to include in base backups, too.\n\nYes. This one needs a careful design and I am not sure exactly what\nthat would be. At least one new callback would be needed, called from\nbasebackup.c to decide if a given file should be backed up or not\nbased on a path. But then how do you make sure that a path applies to\none table AM or another, by using a regex given by all table AMs to\nsee if there is a match? How do we handle conflicts? I am not sure\neither that it is a good design to restrict table AMs to have a given\nformat for paths as that actually limits the possibilities when it\ncomes to split across data across multiple files for attributes and/or\ntablespaces. (I am a pessimistic guy by nature.)\n--\nMichael",
"msg_date": "Thu, 11 Jul 2019 09:29:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksums (or checksums in general) vs tableam"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 2:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jul 10, 2019 at 09:19:03AM -0700, Andres Freund wrote:\n> > On July 10, 2019 9:12:18 AM PDT, Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> That would be fine, if we actually knew. Should we (or have we already?)\n> >> defined a rule that they are not allowed to use the same naming standard\n> >> unless they have the same type of header?\n> >\n> > No, don't think we have already. There's the related problem of\n> > what to include in base backups, too.\n>\n> Yes. This one needs a careful design and I am not sure exactly what\n> that would be. At least one new callback would be needed, called from\n> basebackup.c to decide if a given file should be backed up or not\n> based on a path.\n\n\nThat wouldn't be at all enough, of course. We have to think of everybody\nwho uses the pg_start_backup/pg_stop_backup functions (including the\ndeprecated versions we don't want to get rid of :P). So whatever it is it\nhas to be externally reachable. And just calling something before you start\nyour backup won't be enough, as there can be files showing up during the\nbackup etc.\n\nHaving a strict naming standard would help a lot with that, then you'd just\nneed the metadata. For example, one could say that each non-default storage\nengine has to put all their files in a subdirectory, and inside that\nsubdirectory they can name them whatever they want. If we do that, then all\na backup tool would need to know about is all the possible subdirectories\nin the current installation (and *that* doesn't change frequently).\n\n\n\n> But then how do you make sure that a path applies to\n> one table AM or another, by using a regex given by all table AMs to\n> see if there is a match? How do we handle conflicts? 
I am not sure\n> either that it is a good design to restrict table AMs to have a given\n> format for paths as that actually limits the possibilities when it\n> comes to split across data across multiple files for attributes and/or\n> tablespaces. (I am a pessimistic guy by nature.)\n>\n\nAs long as the restriction contains enough wildcards, it should hopefully\nbe enough :) E.g. data/base/1234/zheap/whatever.they.like.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 11 Jul 2019 11:17:02 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_checksums (or checksums in general) vs tableam"
}
] |
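Magnus's suggested rule — a non-default table AM keeps its files in a per-AM subdirectory such as data/base/1234/zheap/whatever.they.like — would let tools like pg_checksums and base-backup clients decide what to treat as standard-page-format purely from the path. A minimal sketch of that hypothetical convention; the pattern and function name below are illustrative, not from any actual patch:

```python
import re

# Files directly under base/<dboid>/ follow the smgr naming scheme
# (<relfilenode>, optional _fsm/_vm/_init fork suffix, optional .N
# segment) and share the standard page header, so a checksum tool may
# verify them.  Anything in a per-AM subdirectory is skipped.
SMGR_FILE = re.compile(r"^base/\d+/\d+(_(fsm|vm|init))?(\.\d+)?$")

def uses_standard_page_format(relpath):
    """Return True if a checksum tool may assume the standard page header."""
    return SMGR_FILE.match(relpath) is not None
```

For example, `uses_standard_page_format("base/1234/zheap/whatever.they.like")` returns False, so the file would be neither checksummed as heap nor mistaken for an orphaned relation — which is exactly the ambiguity the thread says is unresolved today.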
[
{
"msg_contents": "The current HEAD typedefs list available from\nhttps://buildfarm.postgresql.org/cgi-bin/typedefs.pl\nhas the following interesting additions compared to where\nthings were on July 1:\n\n\t2\n\tECPGt_bytea\n\tconnection_name\n\tin_addr\n\tpg_fprintf\n\tsend_appname\n\nThe \"2\" in particular is causing seriously bad pgindent results for me.\nBut as far as I can tell, none of these have any justification being\nmarked as a typedef.\n\ncalliphoridae seems to be contributing the \"2\" and \"pg_fprintf\".\nI didn't track down the rest (but calliphoridae is not to blame).\n\nWas there any change in calliphoridae's toolchain this month?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 12:57:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-10 12:57:08 -0400, Tom Lane wrote:\n> The current HEAD typedefs list available from\n> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl\n> has the following interesting additions compared to where\n> things were on July 1:\n> \n> \t2\n> \tECPGt_bytea\n> \tconnection_name\n> \tin_addr\n> \tpg_fprintf\n> \tsend_appname\n\nHuh.\n\n\n> The \"2\" in particular is causing seriously bad pgindent results for\n> me.\n\nI haven't run pgindent, but I certainly can imagine...\n\n\n> But as far as I can tell, none of these have any justification being\n> marked as a typedef.\n> \n> calliphoridae seems to be contributing the \"2\" and \"pg_fprintf\".\n> I didn't track down the rest (but calliphoridae is not to blame).\n\n> Was there any change in calliphoridae's toolchain this month?\n\nHm, it has gotten gcc-9 installed recently, but calliphoridae isn't\nusing that. So it's probably not the compiler side. But I also see a\nbinutils upgrade:\n\n2019-07-08 06:22:48 upgrade binutils-multiarch:amd64 2.31.1-16 2.32.51.20190707-1\n\nand corresponding upgrades for all the arch specific packages. I suspect\nit might be that.\n\nI can't immediately reproduce that locally though, using the same\nversion of binutils. It's somewhat annoying that the buildfarm uses a\ndifferent form of computing the typedefs than src/tools/find_typedef ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Jul 2019 10:34:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-10 12:57:08 -0400, Tom Lane wrote:\n>> Was there any change in calliphoridae's toolchain this month?\n\n> Hm, it has gotten gcc-9 installed recently, but calliphoridae isn't\n> using that. So it's probably not the compiler side. But I also see a\n> binutils upgrade:\n> 2019-07-08 06:22:48 upgrade binutils-multiarch:amd64 2.31.1-16 2.32.51.20190707-1\n> and corresponding upgrades forall the arch specific packages. I suspect\n> it might be that.\n\nYeah, a plausible theory is that the output format changed enough\nto confuse our typedef-symbol-scraping code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 13:39:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "\n\nOn 7/10/19 1:34 PM, Andres Freund wrote:\n>\n> Hm, it has gotten gcc-9 installed recently, but calliphoridae isn't\n> using that. So it's probably not the compiler side. But I also see a\n> binutils upgrade:\n>\n> 2019-07-08 06:22:48 upgrade binutils-multiarch:amd64 2.31.1-16 2.32.51.20190707-1\n>\n> and corresponding upgrades forall the arch specific packages. I suspect\n> it might be that.\n>\n> I can't immediately reproduce that locally though, using the same\n> version of binutils. It's somewhat annoying that the buildfarm uses a\n> different form of computing the typedefs than src/tools/find_typedef ...\n>\n\n\nThat script is notably non-portable, and hasn't seen any significant\nupdate in a decade.\n\n\nIf you want to run something like the buildfarm code, see\n<https://adpgtech.blogspot.com/2015/05/running-pgindent-on-non-core-code-or.html>\nfor some clues\n\n\nI ran the client on a new Fedora 30 and it didn't produce the error.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n",
"msg_date": "Wed, 10 Jul 2019 16:40:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-10 16:40:20 -0400, Andrew Dunstan wrote:\n> On 7/10/19 1:34 PM, Andres Freund wrote:\n> >\n> > Hm, it has gotten gcc-9 installed recently, but calliphoridae isn't\n> > using that. So it's probably not the compiler side. But I also see a\n> > binutils upgrade:\n> >\n> > 2019-07-08 06:22:48 upgrade binutils-multiarch:amd64 2.31.1-16 2.32.51.20190707-1\n> >\n> > and corresponding upgrades forall the arch specific packages. I suspect\n> > it might be that.\n> >\n> > I can't immediately reproduce that locally though, using the same\n> > version of binutils. It's somewhat annoying that the buildfarm uses a\n> > different form of computing the typedefs than src/tools/find_typedef ...\n\n> That script is notably non-portable, and hasn't seen any significant\n> update in a decade.\n\nI think that's kinda what I'm complaining about... It seems like a bad\nidea to have this in the buildfarm code, rather than our code. IMO the\nbuildfarm code should invoke an updated src/tools/find_typedef - that\nway people at least can create typedefs manually locally.\n\nNot yet sure what's actually going on, but there's something odd with\ndebug information afaict:\n\nobjdump -w spits out warnings for a few files, all static libraries:\n\n../install/lib/libpgcommon.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 3411 unused bytes at the end of section .debug_loc\n\n../install/lib/libecpg.a\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 8450 unused bytes at the end of section .debug_loc\n\n../install/lib/libpgcommon_shlib.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 3411 unused bytes at the end 
of section .debug_loc\n\n../install/lib/libpgfeutils.a\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 3090 unused bytes at the end of section .debug_loc\n\n../install/lib/libpgport.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 38 unused bytes at the end of section .debug_loc\n\n../install/lib/libpgtypes.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 13199 unused bytes at the end of section .debug_loc\n\n../install/lib/libecpg_compat.a\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 15001 unused bytes at the end of section .debug_loc\n\n../install/lib/libpgport_shlib.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 38 unused bytes at the end of section .debug_loc\n\n../install/lib/libpq.a\nobjdump: Warning: Location list starting at offset 0x0 is not terminated.\nobjdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\nobjdump: Warning: There are 5528 unused bytes at the end of section .debug_loc\n\nNot sure if that's related - I don't get that locally (newer compiler,\ndifferent compiler options, same binutils).\n\nInterestingly pg_fprintf is only generated by pg_fprintf, as far as I\ncan tell, and the 1/2 typedefs are from libpq. 
The relevant entries look\nlike:\n\n <1><4b>: Abbrev Number: 4 (DW_TAG_typedef)\n <4c> DW_AT_name : (indirect string, offset: 0x0): USE_SSE42_CRC32C_WITH_RUNTIME_CHECK 1\n <50> DW_AT_decl_file : 2\n <51> DW_AT_decl_line : 216\n <52> DW_AT_decl_column : 23\n <53> DW_AT_type : <0x57>\n\nSo I suspect it might be the corruption/misparsing of objdump that's to\nblame. Kinda looks like there's an issue with the dwarf stringtable, and\nthus the wrong strings (debug information for macros in this case) is\nbeing picked up.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Jul 2019 17:24:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "\nOn 7/10/19 8:24 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-07-10 16:40:20 -0400, Andrew Dunstan wrote:\n>> On 7/10/19 1:34 PM, Andres Freund wrote:\n>>> Hm, it has gotten gcc-9 installed recently, but calliphoridae isn't\n>>> using that. So it's probably not the compiler side. But I also see a\n>>> binutils upgrade:\n>>>\n>>> 2019-07-08 06:22:48 upgrade binutils-multiarch:amd64 2.31.1-16 2.32.51.20190707-1\n>>>\n>>> and corresponding upgrades forall the arch specific packages. I suspect\n>>> it might be that.\n>>>\n>>> I can't immediately reproduce that locally though, using the same\n>>> version of binutils. It's somewhat annoying that the buildfarm uses a\n>>> different form of computing the typedefs than src/tools/find_typedef ...\n>> That script is notably non-portable, and hasn't seen any significant\n>> update in a decade.\n> I think that's kinda what I'm complaining about... It seems like a bad\n> idea to have this in the buildfarm code, rather than our code. IMO the\n> buildfarm code should invoke an updated src/tools/find_typedef - that\n> way people at least can create typedefs manually locally.\n\n\n\n\nOK, I don't have a problem with that. I think the sane way to go would\nbe to replace it with what the buildfarm is using, more or less.\n\n\n\n>\n> Not yet sure what's actually going on, but there's something odd with\n> debug information afaict:\n>\n> objdump -w spits out warnings for a few files, all static libraries:\n\n\n\nI assume you mean -W rather than -w\n\n\n>\n> ../install/lib/libpgcommon.a\n> objdump: Warning: Location list starting at offset 0x0 is not terminated.\n> objdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\n> objdump: Warning: There are 3411 unused bytes at the end of section .debug_loc\n>\n\n\nThat seems new. I didn't get this in my attempt to recreate the setup of\nyour animal. 
That was on debian/buster which has binutils 2.31.1-16\n\n\n\n\n> Not sure if that's related - I don't get that locally (newer compiler,\n> different compiler options, same binutils).\n>\n> Interestingly pg_fprintf is only generated by pg_fprintf, as far as I\n> can tell, and the 1/2 typedefs are from libpq. The relevant entries look\n> like:\n>\n> <1><4b>: Abbrev Number: 4 (DW_TAG_typedef)\n> <4c> DW_AT_name : (indirect string, offset: 0x0): USE_SSE42_CRC32C_WITH_RUNTIME_CHECK 1\n> <50> DW_AT_decl_file : 2\n> <51> DW_AT_decl_line : 216\n> <52> DW_AT_decl_column : 23\n> <53> DW_AT_type : <0x57>\n>\n> So I suspect it might be the corruption/misparsing of objdump that's to\n> blame. Kinda looks like there's an issue with the dwarf stringtable, and\n> thus the wrong strings (debug information for macros in this case) is\n> being picked up.\n>\n\nThis looks like a bug in the version of objdump unless I'm reading it\nwrong. Why would this be tagged as a typedef?\n\n\nI would tentatively suggest that you revert to the previous version of\nbinutils if possible, and consider reporting a bug in the bleeding edge\nof objdump.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 11:54:50 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 7/10/19 8:24 PM, Andres Freund wrote:\n>> I think that's kinda what I'm complaining about... It seems like a bad\n>> idea to have this in the buildfarm code, rather than our code. IMO the\n>> buildfarm code should invoke an updated src/tools/find_typedef - that\n>> way people at least can create typedefs manually locally.\n\n> OK, I don't have a problem with that. I think the sane way to go would\n> be to replace it with what the buildfarm is using, more or less.\n\n+1 for the idea --- I've not studied the code, but surely the buildfarm's\nversion of this code is more bulletproof.\n\n> This looks like a bug in the version of objdump unless I'm reading it\n> wrong. Why would this be tagged as a typedef?\n\nMaybe. We still need to explain the other non-typedef symbols that have\njust appeared and are not being reported by calliphoridae.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Jul 2019 12:50:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "\nOn 7/11/19 12:50 PM, Tom Lane wrote:\n>\n>> This looks like a bug in the version of objdump unless I'm reading it\n>> wrong. Why would this be tagged as a typedef?\n> Maybe. We still need to explain the other non-typedef symbols that have\n> just appeared and are not being reported by calliphoridae.\n>\n\n\nThe others come from komodoensis (also Andres' machine)\n\n\npgbfprod=> select l.sysname, l.snapshot, l.branch from build_status_log\nl where snapshot > now() - interval '12 days' and log_stage =\n'typedefs.log' and log_text ~\n'ECPGt_bytea|in_addr|connection_name|send_appname';\n sysname | snapshot | branch\n-------------+---------------------+--------\n komodoensis | 2019-07-08 23:34:01 | HEAD\n komodoensis | 2019-07-10 02:38:01 | HEAD\n(2 rows)\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 14:54:42 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-10 17:24:41 -0700, Andres Freund wrote:\n> Not yet sure what's actually going on, but there's something odd with\n> debug information afaict:\n> \n> objdump -W spits out warnings for a few files, all static libraries:\n> \n> ../install/lib/libpgcommon.a\n> objdump: Warning: Location list starting at offset 0x0 is not terminated.\n> objdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\n> objdump: Warning: There are 3411 unused bytes at the end of section .debug_loc\n\n...\n\n\nInterestingly, for the same files, readelf spits out correct\ndata. E.g. here's a short excerpt from libpq.a:\n\nobjdump -W src/interfaces/libpq/libpq.a\n...\n <0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)\n <c> DW_AT_producer : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n <10> DW_AT_language : 12 (ANSI C99)\n <11> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n <15> DW_AT_comp_dir : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n...\n <1><31>: Abbrev Number: 2 (DW_TAG_typedef)\n <32> DW_AT_name : Oid\n <36> DW_AT_decl_file : 30\n <37> DW_AT_decl_line : 31\n <38> DW_AT_decl_column : 22\n <39> DW_AT_type : <0x3d>\n <1><3d>: Abbrev Number: 3 (DW_TAG_base_type)\n <3e> DW_AT_byte_size : 4\n <3f> DW_AT_encoding : 7 (unsigned)\n <40> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n <1><44>: Abbrev Number: 3 (DW_TAG_base_type)\n <45> DW_AT_byte_size : 8\n <46> DW_AT_encoding : 5 (signed)\n <47> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n <1><4b>: Abbrev Number: 4 (DW_TAG_typedef)\n <4c> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n <50> DW_AT_decl_file : 2\n <51> DW_AT_decl_line : 216\n <52> DW_AT_decl_column : 23\n <53> DW_AT_type : <0x57>\n...\n\nreadelf --debug-dump src/interfaces/libpq/libpq.a\n...\n <0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)\n <c> DW_AT_producer : (indirect string, offset: 0x29268): GNU 
C17 8.3.0 -march=skylake -mmmx -mno-3dnow -msse -msse2 -m\n <10> DW_AT_language : 12 (ANSI C99)\n <11> DW_AT_name : (indirect string, offset: 0x28ef3): /home/andres/src/postgresql/src/interfaces/libpq/fe-auth.c\n <15> DW_AT_comp_dir : (indirect string, offset: 0xf800): /home/andres/build/postgres/dev-assert/vpath/src/interfaces/l\n...\n <1><31>: Abbrev Number: 2 (DW_TAG_typedef)\n <32> DW_AT_name : Oid\n <36> DW_AT_decl_file : 30\n <37> DW_AT_decl_line : 31\n <38> DW_AT_decl_column : 22\n <39> DW_AT_type : <0x3d>\n <1><3d>: Abbrev Number: 3 (DW_TAG_base_type)\n <3e> DW_AT_byte_size : 4\n <3f> DW_AT_encoding : 7 (unsigned)\n <40> DW_AT_name : (indirect string, offset: 0x4f12f): unsigned int\n <1><44>: Abbrev Number: 3 (DW_TAG_base_type)\n <45> DW_AT_byte_size : 8\n <46> DW_AT_encoding : 5 (signed)\n <47> DW_AT_name : (indirect string, offset: 0x57deb): long int\n <1><4b>: Abbrev Number: 4 (DW_TAG_typedef)\n <4c> DW_AT_name : (indirect string, offset: 0x26129): size_t\n <50> DW_AT_decl_file : 2\n <51> DW_AT_decl_line : 216\n <52> DW_AT_decl_column : 23\n <53> DW_AT_type : <0x57>\n...\n\nso it seems that objdump mis-parses all indirect strings - which IIRC is\nsomething like a pointer into a \"global\" string table, assuming the\noffset to be 0. That then just returns the first table entry.\n\nIt doesn't happen with a slightly older version of binutils. Bisecting\nnow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:35:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 19:35:39 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-07-10 17:24:41 -0700, Andres Freund wrote:\n> > Not yet sure what's actually going on, but there's something odd with\n> > debug information afaict:\n> > \n> > objdump -W spits out warnings for a few files, all static libraries:\n> > \n> > ../install/lib/libpgcommon.a\n> > objdump: Warning: Location list starting at offset 0x0 is not terminated.\n> > objdump: Warning: Hole and overlap detection requires adjacent view lists and loclists.\n> > objdump: Warning: There are 3411 unused bytes at the end of section .debug_loc\n> \n> ...\n> \n> \n> Interestingly, for the same files, readelf spits out correct\n> data. E.g. here's a short excerpt from libpq.a:\n> \n> objdump -W src/interfaces/libpq/libpq.a\n> ...\n> <0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)\n> <c> DW_AT_producer : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> <10> DW_AT_language : 12 (ANSI C99)\n> <11> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> <15> DW_AT_comp_dir : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> ...\n> <1><31>: Abbrev Number: 2 (DW_TAG_typedef)\n> <32> DW_AT_name : Oid\n> <36> DW_AT_decl_file : 30\n> <37> DW_AT_decl_line : 31\n> <38> DW_AT_decl_column : 22\n> <39> DW_AT_type : <0x3d>\n> <1><3d>: Abbrev Number: 3 (DW_TAG_base_type)\n> <3e> DW_AT_byte_size : 4\n> <3f> DW_AT_encoding : 7 (unsigned)\n> <40> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> <1><44>: Abbrev Number: 3 (DW_TAG_base_type)\n> <45> DW_AT_byte_size : 8\n> <46> DW_AT_encoding : 5 (signed)\n> <47> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> <1><4b>: Abbrev Number: 4 (DW_TAG_typedef)\n> <4c> DW_AT_name : (indirect string, offset: 0x0): SYNC_FILE_RANGE_WRITE 2\n> <50> DW_AT_decl_file : 2\n> <51> DW_AT_decl_line : 216\n> <52> DW_AT_decl_column : 23\n> <53> DW_AT_type : <0x57>\n\n> so it seems that objdump mis-parses all 
indirect strings - which IIRC is\n> something like a pointer into a \"global\" string table, assuming the\n> offset to be 0. That then just returns the first table entry.\n> \n> It doesn't happen with a slightly older version of binutils. Bisecting\n> now.\n\nIt's\n\ncommit 39f0547e554df96608dd041d2a7b3c72882fd515 (HEAD, refs/bisect/bad)\nAuthor: Nick Clifton <nickc@redhat.com>\nDate: 2019-02-25 12:15:41 +0000\n\n Extend objdump's --dwarf=follow-links option so that separate debug info files will also be affected by other dump function, and symbol tables from separate debug info files will be used when disassembling the main file.\n \n * objdump.c (sym_ok): New function.\n (find_symbol_for_address): Use new function.\n (disassemble_section): Compare sections by name, not pointer.\n (dump_dwarf): Move code to initialise byte_get pointer and iterate\n over separate debug files from here to ...\n (dump_bfd): ... here. Add parameter indicating that a separate\n debug info file is being dumped. For main file, pull in the\n symbol tables from all separate debug info files.\n (display_object): Update call to dump_bfd.\n * doc/binutils.texi: Document extened behaviour of the\n --dwarf=follow-links option.\n * NEWS: Mention this new feature.\n * testsuite/binutils-all/objdump.WK2: Update expected output.\n * testsuite/binutils-all/objdump.exp (test_follow_debuglink): Add\n options and dump file parameters.\n Add extra test.\n * testsuite/binutils-all/objdump.WK3: New file.\n * testsuite/binutils-all/readelf.exp: Change expected output for\n readelf -wKis test.\n * testsuite/binutils-all/readelf.wKis: New file.\n\nluckily that allowed me to find a workaround too. 
If objdump -W's k and K\nparameters (or --dwarf=links,=follow-links) aren't enabled, the dump\ncomes out correctly:\n\nobjdump -WlLiaprmfFsoRtUuTgAckK /tmp/test.o|grep -A5 '(DW_TAG_compile_unit)'\n <0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)\n <c> DW_AT_producer : (indirect string, offset: 0x0): /home/andres\n <10> DW_AT_language : 12 (ANSI C99)\n <11> DW_AT_name : (indirect string, offset: 0x0): /home/andres\n <15> DW_AT_comp_dir : (indirect string, offset: 0x0): /home/andres\n <19> DW_AT_low_pc : 0x0\n\nobjdump -WlLiaprmfFsoRtUuTgAc /tmp/test.o|grep -A5 '(DW_TAG_compile_unit)'\n <0><b>: Abbrev Number: 1 (DW_TAG_compile_unit)\n <c> DW_AT_producer : (indirect string, offset: 0x2a): GNU C17 9.1.0 -mtune=generic -march=x86-64 -ggdb -fasynchronous-\n <10> DW_AT_language : 12 (ANSI C99)\n <11> DW_AT_name : (indirect string, offset: 0x14): /tmp/test.c\n <15> DW_AT_comp_dir : (indirect string, offset: 0x0): /home/andres\n <19> DW_AT_low_pc : 0x0\n\n(lLiaprmfFsoRtUuTgAckK just is all sub-parts of -W that my objdump knows\nabout)\n\nlooking at the .debug_str section (with -Ws):\n\n 0x00000000 2f686f6d 652f616e 64726573 00646f75 /home/andres.dou\n 0x00000010 626c6500 2f746d70 2f746573 742e6300 ble./tmp/test.c.\n 0x00000020 72657431 00726574 3200474e 55204331 ret1.ret2.GNU C1\n 0x00000030 3720392e 312e3020 2d6d7475 6e653d67 7 9.1.0 -mtune=g\n 0x00000040 656e6572 6963202d 6d617263 683d7838 eneric -march=x8\n 0x00000050 362d3634 202d6767 6462202d 66617379 6-64 -ggdb -fasy\n 0x00000060 6e636872 6f6e6f75 732d756e 77696e64 nchronous-unwind\n 0x00000070 2d746162 6c657300 666f6f62 61723100 -tables.foobar1.\n 0x00000080 666f6f62 61723200 foobar2.\n\nit makes sense why in this case all strings are /home/andres, and also\nwhy the precise symbols output into the typedef list appears to be\npretty random - it's just the first string, as the offsets are\nmis-computed. 
And not all strings are put into the string table, which\nexplains why the output still has some contents left.\n\nIt turns out that -Wi is actually all we need - so I'll probably patch\nmy animals to use that for now, until the bug is fixed.\n\nIt might actually be sensible to always do that - it's a lot cheaper\nthat way:\n\n$ time objdump -WlLiaprmfFsoRtUuTgAc src/interfaces/libpq/libpq.a|wc\n 747866 5190832 48917526\n\nreal\t0m0.827s\nuser\t0m1.040s\nsys\t0m0.074s\n\n$ time objdump -Wi src/interfaces/libpq/libpq.a|wc\n 78703 378433 3594563\n\nreal\t0m0.075s\nuser\t0m0.076s\nsys\t0m0.025s\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 22:21:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "\nOn 7/17/19 1:21 AM, Andres Freund wrote:\n\n[nice forensics]\n\n>\n> It turns out that -Wi is actually all we need - so I'll probably patch\n> my animals to use that for now, until the bug is fixed.\n>\n> It might actually be sensible to always do that - it's a lot cheaper\n> that way:\n>\n> $ time objdump -WlLiaprmfFsoRtUuTgAc src/interfaces/libpq/libpq.a|wc\n> 747866 5190832 48917526\n>\n> real\t0m0.827s\n> user\t0m1.040s\n> sys\t0m0.074s\n>\n> $ time objdump -Wi src/interfaces/libpq/libpq.a|wc\n> 78703 378433 3594563\n>\n> real\t0m0.075s\n> user\t0m0.076s\n> sys\t0m0.025s\n>\n\n\nWFM, I'll put that in the next buildfarm client release, unless we get \na core script to use instead in the meantime.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n",
"msg_date": "Wed, 17 Jul 2019 08:08:15 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 22:21:34 -0700, Andres Freund wrote:\n> It turns out that -Wi is actually all we need - so I'll probably patch\n> my animals to use that for now\n\ndid that now.\n\n\n> until the bug is fixed.\n\nBug report: https://sourceware.org/bugzilla/show_bug.cgi?id=24818\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 10:33:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 10:33:02 -0700, Andres Freund wrote:\n> On 2019-07-16 22:21:34 -0700, Andres Freund wrote:\n> > It turns out that -Wi is actually all we need - so I'll probably patch\n> > my animals to use that for now\n> \n> did that now.\n\nLooks like that made the generated typedef lists sane. Any residual\ncomplaints?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 19:37:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks like that made the generated typedef lists sane. Any residual\n> complaints?\n\nBefore ack'ing this, I've been waiting around for komodoensis to update\nits typedefs list, in the hopes that the last couple of bogus entries\nwould go away. It still hasn't, and I just realized from looking at its\nconfig that you have it set to do so at most twice a week:\n\n 'optional_steps' => {\n 'find_typedefs' => {\n 'branches' => [\n 'HEAD'\n ],\n 'min_hours_since' => 25,\n 'dow' => [\n 1,\n 4\n ]\n }\n },\n\nThat seems a bit, er, miserly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 11:27:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "I wrote:\n> ... I just realized from looking at its\n> config that you have it set to do so at most twice a week:\n\n> 'dow' => [\n> 1,\n> 4\n> ]\n\nI was still confused, seeing that today is Thursday, as to why\nkomodoensis didn't update its typedefs list in the run it just\nfinished. Looking at the buildfarm script (in sub\ncheck_optional_step), it seems the \"dow\" filter is implemented\nlike this:\n\n\treturn\n\t if (exists $oconf->{dow}\n\t\t&& grep { $_ eq $wday } @{ $oconf->{dow} });\n\nI'm the world's worst Perl programmer, but isn't that backwards?\nIt seems like it will return undef if today matches any entry\nof the dow list, making dow a blacklist of weekdays *not* to run\nthe step on. That's not what I would have expected it to mean,\nalthough build-farm.conf.sample is surely unclear on the point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 11:42:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Gum\n\nOn 2019-07-18 11:27:49 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Looks like that made the generated typedef lists sane. Any residual\n> > complaints?\n> \n> Before ack'ing this, I've been waiting around for komodoensis to update\n> its typedefs list, in the hopes that the last couple of bogus entries\n> would go away. It still hasn't, and I just realized from looking at its\n> config that you have it set to do so at most twice a week:\n\nI've changed that yesterday to be just min_hours_since=48. I can easily set it\nto more frequent, especially after using -Wi makes the whole step a lot\nfaster - I just thought that swamping the buildfarm with the additional\ndata would be unnecessary? But perhaps it doesn't matter in comparison\nto the normal stage logs.\n\nBoosted to min_hours_since=24 for now.\n\n\n> 'optional_steps' => {\n> 'find_typedefs' => {\n> 'branches' => [\n> 'HEAD'\n> ],\n> 'min_hours_since' => 25,\n> 'dow' => [\n> 1,\n> 4\n> ]\n> }\n> },\n> \n> That seems a bit, er, miserly.\n\nComes from the default suggestion in build-farm.conf.sample:\n # find_typedefs => { branches => ['HEAD'], dow => [1,4],\n # min_hours_since => 25 },\n\n\nI'm happy to boost that to something a lot more aggressive.\n\n\nAt least the REL_11_STABLE logs are from after the change to use -Wi\ninstead of -W, and therefore ought to be correct.\n\nbf@andres-postgres-edb-buildfarm-v1:~/src/pgbuildfarm-client-stock$ TZ=UTC ls -l run_build.pl\n-rwxr-xr-x 1 bf bf 63701 Jul 17 17:16 run_build.pl\n\nkomodoensis 2019-07-17 03:04:01 HEAD 3308\nkomodoensis 2019-07-17 17:32:02 REL_12_STABLE 3296\nkomodoensis 2019-07-17 23:18:24 REL_11_STABLE 3204\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Jul 2019 08:56:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "> On 18 Jul 2019, at 17:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \treturn\n> \t if (exists $oconf->{dow}\n> \t\t&& grep { $_ eq $wday } @{ $oconf->{dow} });\n> \n> I'm the world's worst Perl programmer, but isn't that backwards?\n> It seems like it will return undef if today matches any entry\n> of the dow list, making dow a blacklist of weekdays *not* to run\n> the step on.\n\nAs it’s in a scalar context, grep will return the number of times for which the\ncondition is true against the list entries, so if $oconf->{dow} doesn’t have\nduplicates it will return 1 if $wday is found on the list. This will in turn\nmake the condition true as the exists expression must be true for grep to at\nall execute.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 18 Jul 2019 18:17:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-18 11:27:49 -0400, Tom Lane wrote:\n>> Before ack'ing this, I've been waiting around for komodoensis to update\n>> its typedefs list, in the hopes that the last couple of bogus entries\n>> would go away. It still hasn't, and I just realized from looking at its\n>> config that you have it set to do so at most twice a week:\n\n> Boosted to min_hours_since=24 for now.\n\nOK, I see it's updated, and we now have a sane typedefs list again:\n\n$ diff -u src/tools/pgindent/typedefs.list new\n--- src/tools/pgindent/typedefs.list 2019-07-02 10:32:57.078918638 -0400\n+++ new/typedefs.list 2019-07-18 12:45:22.533451991 -0400\n@@ -674,6 +674,12 @@\n FmgrBuiltin\n FmgrHookEventType\n FmgrInfo\n+ForBothCellState\n+ForBothState\n+ForEachState\n+ForFiveState\n+ForFourState\n+ForThreeState\n ForeignDataWrapper\n ForeignKeyCacheInfo\n ForeignKeyOptInfo\n@@ -2981,7 +2987,7 @@\n leaf_item\n line_t\n lineno_t\n-list_qsort_comparator\n+list_sort_comparator\n locale_t\n locate_agg_of_level_context\n locate_var_of_level_context\n@@ -3150,6 +3156,7 @@\n pullup_replace_vars_context\n pushdown_safety_info\n qsort_arg_comparator\n+qsort_comparator\n query_pathkeys_callback\n radius_attribute\n radius_packet\n\nAll of those changes are correct, so we're good. Thanks!\n\n(I still think the buildfarm's dow filter is wrong though)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 12:48:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
},
{
"msg_contents": "\nOn 7/18/19 12:17 PM, Daniel Gustafsson wrote:\n>> On 18 Jul 2019, at 17:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \treturn\n>> \t if (exists $oconf->{dow}\n>> \t\t&& grep { $_ eq $wday } @{ $oconf->{dow} });\n>>\n>> I'm the world's worst Perl programmer, but isn't that backwards?\n>> It seems like it will return undef if today matches any entry\n>> of the dow list, making dow a blacklist of weekdays *not* to run\n>> the step on.\n> As it’s in a scalar context, grep will return the number of times for which the\n> condition is true against the list entries, so if $oconf->{dow} doesn’t have\n> duplicates it will return 1 if $wday is found on the list. This will in turn\n> make the condition true as the exists expression must be true for grep to at\n> all execute.\n>\n\n\nYes, it's a bug. Will fix.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 12:55:43 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm's typedefs list has gone completely nutso"
}
] |
[
{
"msg_contents": "Hi,\n\nrunning sqlsmith on the regression database of REL_12_STABLE at\nff597b656f yielded a crash in mcv_get_match_bitmap. I can reproduce it\nwith the following query on the regression database:\n\n select filler1 from mcv_lists where a is not null and (select 42) <= c;\n\nBacktrace below.\n\nregards,\nAndreas\n\nProgram received signal SIGSEGV, Segmentation fault.\npg_detoast_datum (datum=0x0) at fmgr.c:1741\n(gdb) bt\n#0 pg_detoast_datum (datum=0x0) at fmgr.c:1741\n#1 0x000055b2bbeb2656 in numeric_le (fcinfo=0x7ffceeb2cb90) at numeric.c:2139\n#2 0x000055b2bbf3cdca in FunctionCall2Coll (flinfo=flinfo@entry=0x7ffceeb2cc30, collation=collation@entry=100,\n arg1=<optimized out>, arg2=<optimized out>) at fmgr.c:1162\n#3 0x000055b2bbdd7aec in mcv_get_match_bitmap (root=0x55b2bd2acff0, clauses=<optimized out>, keys=0x55b2bd2c4e38,\n mcvlist=0x55b2bd2c44e0, is_or=false) at mcv.c:1638\n#4 0x000055b2bbdda581 in mcv_clauselist_selectivity (root=root@entry=0x55b2bd2acff0, stat=stat@entry=0x55b2bd2c4e00,\n clauses=clauses@entry=0x55b2bd2c5298, varRelid=varRelid@entry=0, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0,\n rel=0x55b2bd2c4158, basesel=0x7ffceeb2cd70, totalsel=0x7ffceeb2cd78) at mcv.c:1876\n#5 0x000055b2bbdd6064 in statext_mcv_clauselist_selectivity (estimatedclauses=0x7ffceeb2cde8, rel=0x55b2bd2c4158,\n sjinfo=<optimized out>, jointype=<optimized out>, varRelid=<optimized out>, clauses=0x55b2bd2c4e00, root=<optimized out>)\n at extended_stats.c:1146\n#6 statext_clauselist_selectivity (root=root@entry=0x55b2bd2acff0, clauses=clauses@entry=0x55b2bd2c5010,\n varRelid=varRelid@entry=0, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0, rel=0x55b2bd2c4158,\n estimatedclauses=0x7ffceeb2cde8) at extended_stats.c:1177\n#7 0x000055b2bbd27372 in clauselist_selectivity (root=root@entry=0x55b2bd2acff0, clauses=0x55b2bd2c5010,\n varRelid=varRelid@entry=0, jointype=jointype@entry=JOIN_INNER, sjinfo=sjinfo@entry=0x0) at 
clausesel.c:94\n#8 0x000055b2bbd2d788 in set_baserel_size_estimates (root=root@entry=0x55b2bd2acff0, rel=rel@entry=0x55b2bd2c4158)\n at costsize.c:4411\n#9 0x000055b2bbd24658 in set_plain_rel_size (rte=0x55b2bd20cf00, rel=0x55b2bd2c4158, root=0x55b2bd2acff0) at allpaths.c:583\n#10 set_rel_size (root=root@entry=0x55b2bd2acff0, rel=rel@entry=0x55b2bd2c4158, rti=rti@entry=1, rte=rte@entry=0x55b2bd20cf00)\n at allpaths.c:412\n#11 0x000055b2bbd264a0 in set_base_rel_sizes (root=<optimized out>) at allpaths.c:323\n#12 make_one_rel (root=root@entry=0x55b2bd2acff0, joinlist=joinlist@entry=0x55b2bd2c49c0) at allpaths.c:185\n#13 0x000055b2bbd482f8 in query_planner (root=root@entry=0x55b2bd2acff0,\n qp_callback=qp_callback@entry=0x55b2bbd48ed0 <standard_qp_callback>, qp_extra=qp_extra@entry=0x7ffceeb2d070) at planmain.c:271\n#14 0x000055b2bbd4cb32 in grouping_planner (root=<optimized out>, inheritance_update=false, tuple_fraction=<optimized out>)\n at planner.c:2048\n#15 0x000055b2bbd4f900 in subquery_planner (glob=glob@entry=0x55b2bd2b1c88, parse=parse@entry=0x55b2bd20cd88,\n parent_root=parent_root@entry=0x0, hasRecursion=hasRecursion@entry=false, tuple_fraction=tuple_fraction@entry=0)\n at planner.c:1012\n#16 0x000055b2bbd509c6 in standard_planner (parse=0x55b2bd20cd88, cursorOptions=256, boundParams=<optimized out>) at planner.c:406\n#17 0x000055b2bbe13b89 in pg_plan_query (querytree=querytree@entry=0x55b2bd20cd88, cursorOptions=cursorOptions@entry=256,\n boundParams=boundParams@entry=0x0) at postgres.c:878\n[...]\n\n\n",
"msg_date": "Wed, 10 Jul 2019 22:37:51 +0200",
"msg_from": "Andreas Seltenreich <seltenreich@gmx.de>",
"msg_from_op": true,
"msg_subject": "[sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Andreas Seltenreich <seltenreich@gmx.de> writes:\n> running sqlsmith on the regression database of REL_12_STABLE at\n> ff597b656f yielded a crash in mcv_get_match_bitmap. I can reproduce it\n> with the following query on the regression database:\n> select filler1 from mcv_lists where a is not null and (select 42) <= c;\n\nSeems to be the same problem I just complained of in the other\nthread: mcv_get_match_bitmap has an untenable assumption that\n\"is a pseudoconstant\" means \"is a Const\".\n\nI notice it's also assuming that the Const must be non-null.\nIt's not really easy to crash it that way, because if you just\nwrite \"null <= c\" that'll get reduced to constant-NULL earlier.\nBut I bet there's a way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 16:57:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 04:57:54PM -0400, Tom Lane wrote:\n>Andreas Seltenreich <seltenreich@gmx.de> writes:\n>> running sqlsmith on the regression database of REL_12_STABLE at\n>> ff597b656f yielded a crash in mcv_get_match_bitmap. I can reproduce it\n>> with the following query on the regression database:\n>> select filler1 from mcv_lists where a is not null and (select 42) <= c;\n>\n>Seems to be the same problem I just complained of in the other\n>thread: mcv_get_match_bitmap has an untenable assumption that\n>\"is a pseudoconstant\" means \"is a Const\".\n>\n\nYeah, that's a bug. Will fix (not sure how yet).\n\nBTW which other thread? I don't see any other threads mentioning this\nfunction.\n\n>I notice it's also assuming that the Const must be non-null.\n>It's not really easy to crash it that way, because if you just\n>write \"null <= c\" that'll get reduced to constant-NULL earlier.\n>But I bet there's a way.\n>\n\nHmmm, I did not think of that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 10 Jul 2019 23:26:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> BTW which other thread? I don't see any other threads mentioning this\n> function.\n\nhttps://www.postgresql.org/message-id/flat/CA%2Bu7OA65%2BjEFb_TyV5g%2BKq%2BonyJ2skMOPzgTgFH%2BqgLwszRqvw%40mail.gmail.com\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 17:31:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Yeah, that's a bug. Will fix (not sure how yet).\n\nYou could do worse than replace this:\n\n ok = (NumRelids(clause) == 1) &&\n (is_pseudo_constant_clause(lsecond(expr->args)) ||\n (varonleft = false,\n is_pseudo_constant_clause(linitial(expr->args))));\n\nwith something like\n\n\tif (IsA(linitial(expr->args), Var) &&\n\t IsA(lsecond(expr->args), Const))\n\t ok = true, varonleft = true;\n\telse if (IsA(linitial(expr->args), Const) &&\n\t IsA(lsecond(expr->args), Var))\n\t ok = true, varonleft = false;\n\nOr possibly get rid of varonleft as such, and merge extraction of the\n\"var\" and \"cst\" variables into this test.\n\nBTW, I bet passing a unary-argument OpExpr also makes this code\nunhappy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 17:45:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 05:45:24PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Yeah, that's a bug. Will fix (not sure how yet).\n>\n>You could do worse than replace this:\n>\n> ok = (NumRelids(clause) == 1) &&\n> (is_pseudo_constant_clause(lsecond(expr->args)) ||\n> (varonleft = false,\n> is_pseudo_constant_clause(linitial(expr->args))));\n>\n>with something like\n>\n>\tif (IsA(linitial(expr->args), Var) &&\n>\t IsA(lsecond(expr->args), Const))\n>\t ok = true, varonleft = true;\n>\telse if (IsA(linitial(expr->args), Const) &&\n>\t IsA(lsecond(expr->args), Var))\n>\t ok = true, varonleft = false;\n>\n>Or possibly get rid of varonleft as such, and merge extraction of the\n>\"var\" and \"cst\" variables into this test.\n>\n\nOK, thanks for the suggestion.\n\nI probably also need to look at the \"is compatible\" test in\nextended_stats.c which also looks at the clauses. It may not crash as it\ndoes not attempt to extract the const values etc. but it likely needs to\nbe in sync with this part.\n\n>BTW, I bet passing a unary-argument OpExpr also makes this code\n>unhappy.\n\nWhooops :-(\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 10 Jul 2019 23:59:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Oh ... while we're piling on here, it just sunk into me that\nmcv_get_match_bitmap is deciding what the semantics of an operator\nare by seeing what it's using for a selectivity estimator.\nThat is just absolutely, completely wrong. For starters, it\nmeans that the whole mechanism fails for any operator that wants\nto use a specialized estimator --- hardly an unreasonable thing\nto do. For another, it's going to be pretty unreliable for\nextensions, because I do not think they're all careful about using\nthe right estimator --- a lot of 'em probably still haven't adapted\nto the introduction of separate <= / >= estimators, for instance.\n\nThe right way to determine operator semantics is to look to see\nwhether they are in a btree opclass. That's what the rest of the\nplanner does, and there is no good reason for the mcv code to\ndo it some other way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 18:48:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 11:26:09PM +0200, Tomas Vondra wrote:\n> Yeah, that's a bug. Will fix (not sure how yet).\n\nPlease note that I have added an open item for it.\n--\nMichael",
"msg_date": "Thu, 11 Jul 2019 09:55:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 09:55:01AM +0900, Michael Paquier wrote:\n>On Wed, Jul 10, 2019 at 11:26:09PM +0200, Tomas Vondra wrote:\n>> Yeah, that's a bug. Will fix (not sure how yet).\n>\n>Please note that I have added an open item for it.\n\nThanks.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 16:59:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 06:48:16PM -0400, Tom Lane wrote:\n>Oh ... while we're piling on here, it just sunk into me that\n>mcv_get_match_bitmap is deciding what the semantics of an operator\n>are by seeing what it's using for a selectivity estimator.\n>That is just absolutely, completely wrong. For starters, it\n>means that the whole mechanism fails for any operator that wants\n>to use a specialized estimator --- hardly an unreasonable thing\n>to do. For another, it's going to be pretty unreliable for\n>extensions, because I do not think they're all careful about using\n>the right estimator --- a lot of 'em probably still haven't adapted\n>to the introduction of separate <= / >= estimators, for instance.\n>\n>The right way to determine operator semantics is to look to see\n>whether they are in a btree opclass. That's what the rest of the\n>planner does, and there is no good reason for the mcv code to\n>do it some other way.\n>\n\nHmmm, but that will mean we're unable to estimate operators that are not\npart of a btree opclass. Which is a bit annoying, because that would also\naffect equalities (and thus functional dependencies), in which case the\ntype may easily have just hash opclass or something.\n\nBut maybe I just don't understand how the btree opclass detection works\nand it'd be fine for sensibly defined operators. Can you point me to code\ndoing this elsewhere in the planner?\n\nWe may need to do something about code in pg10, because functional\ndependencies this too (although just with F_EQSEL). But maybe we should\nleave pg10 alone, we got exactly 0 reports about it until now anyway.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 11 Jul 2019 17:08:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 05:08:22PM +0200, Tomas Vondra wrote:\n>On Wed, Jul 10, 2019 at 06:48:16PM -0400, Tom Lane wrote:\n>>Oh ... while we're piling on here, it just sunk into me that\n>>mcv_get_match_bitmap is deciding what the semantics of an operator\n>>are by seeing what it's using for a selectivity estimator.\n>>That is just absolutely, completely wrong. For starters, it\n>>means that the whole mechanism fails for any operator that wants\n>>to use a specialized estimator --- hardly an unreasonable thing\n>>to do. For another, it's going to be pretty unreliable for\n>>extensions, because I do not think they're all careful about using\n>>the right estimator --- a lot of 'em probably still haven't adapted\n>>to the introduction of separate <= / >= estimators, for instance.\n>>\n>>The right way to determine operator semantics is to look to see\n>>whether they are in a btree opclass. That's what the rest of the\n>>planner does, and there is no good reason for the mcv code to\n>>do it some other way.\n>>\n>\n>Hmmm, but that will mean we're unable to estimate operators that are not\n>part of a btree opclass. Which is a bit annoying, because that would also\n>affect equalities (and thus functional dependencies), in which case the\n>type may easily have just hash opclass or something.\n>\n>But maybe I just don't understand how the btree opclass detection works\n>and it'd be fine for sensibly defined operators. Can you point me to code\n>doing this elsewhere in the planner?\n>\n>We may need to do something about code in pg10, because functional\n>dependencies this too (although just with F_EQSEL). 
But maybe we should\n>leave pg10 alone, we got exactly 0 reports about it until now anyway.\n>\n\nHere are WIP patches addressing two of the issues:\n\n1) determining operator semantics by matching them to btree opclasses\n\n2) deconstructing OpExpr into Var/Const nodes\n\nI'd appreciate some feedback particularly on (1).\n\nFor the two other issues mentioned in this thread:\n\na) I don't think unary-argument OpExpr are an issue, because this is\nchecked when determining which clauses are compatible (and the code only\nallows the case with 2 arguments).\n\nb) Const with constisnull=true - I'm not yet sure what to do about this.\nThe easiest fix would be to deem those clauses incompatible, but that\nseems a bit too harsh. The right thing is probably passing the NULL to\nthe operator proc (but that means we can't use FunctionCall).\n\nNow, looking at this code, I wonder if using DEFAULT_COLLATION_OID when\ncalling the operator is the right thing. We're using type->typcollation\nwhen building the stats, so maybe we should do the same thing here.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 13 Jul 2019 02:11:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Thu, Jul 11, 2019 at 05:08:22PM +0200, Tomas Vondra wrote:\n>> On Wed, Jul 10, 2019 at 06:48:16PM -0400, Tom Lane wrote:\n>>> The right way to determine operator semantics is to look to see\n>>> whether they are in a btree opclass. That's what the rest of the\n>>> planner does, and there is no good reason for the mcv code to\n>>> do it some other way.\n\n>> Hmmm, but that will mean we're unable to estimate operators that are not\n>> part of a btree opclass. Which is a bit annoying, because that would also\n>> affect equalities (and thus functional dependencies), in which case the\n>> type may easily have just hash opclass or something.\n\nAfter thinking about this more, I may have been analogizing to the wrong\ncode. It's necessary to use opclass properties when we're reasoning about\noperators in a way that *must* be correct, for instance to conclude that\na partition can be pruned from a query. But this code is just doing\nselectivity estimation, so the correctness standards are a lot lower.\nIn particular I see that the long-established range-query-detection\ncode in clauselist_selectivity is looking for operators that have\nF_SCALARLTSEL etc. 
as restriction estimators (in fact I'm guessing you\nlifted parts of the mcv code from that, cause it looks pretty similar).\nSo if we've gotten away with that so far over there, there's probably\nno reason not to do likewise here.\n\nI am a little troubled by the point I made about operators possibly\nwanting to have a more-specific estimator than scalarltsel, but that\nseems like an issue to solve some other time; and if we do change that\nlogic then clauselist_selectivity needs to change too.\n\n> Here are WIP patches addressing two of the issues:\n\n> 1) determining operator semantics by matching them to btree opclasses\n\nPer above, I'm sort of inclined to drop this, unless you feel better\nabout doing it like this than the existing way.\n\n> 2) deconstructing OpExpr into Var/Const nodes\n\ndeconstruct_opexpr is still failing to verify that the Var is a Var.\nI'd try something like\n\n\tleftop = linitial(expr->args);\n\twhile (IsA(leftop, RelabelType))\n\t leftop = ((RelabelType *) leftop)->arg;\n\t// and similarly for rightop\n\tif (IsA(leftop, Var) && IsA(rightop, Const))\n\t // return appropriate results\n\telse if (IsA(leftop, Const) && IsA(rightop, Var))\n\t // return appropriate results\n\telse\n\t // fail\n\nAlso, I think deconstruct_opexpr is far too generic a name for what\nthis is actually doing. It'd be okay as a static function name\nperhaps, but not if you're going to expose it globally.\n\n> a) I don't think unary-argument OpExpr are an issue, because this is\n> checked when determining which clauses are compatible (and the code only\n> allows the case with 2 arguments).\n\nOK.\n\n> b) Const with constisnull=true - I'm not yet sure what to do about this.\n> The easiest fix would be to deem those clauses incompatible, but that\n> seems a bit too harsh. 
The right thing is probably passing the NULL to\n> the operator proc (but that means we can't use FunctionCall).\n\nNo, because most of the functions in question are strict and will just\ncrash on null inputs. Perhaps you could just deem that cases involving\na null Const don't match what you're looking for.\n\n> Now, looking at this code, I wonder if using DEFAULT_COLLATION_OID when\n> calling the operator is the right thing. We're using type->typcollation\n> when building the stats, so maybe we should do the same thing here.\n\nYeah, I was wondering that too. But really you should be using the\ncolumn's collation not the type's default collation. See commit\n5e0928005.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jul 2019 11:39:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Sat, Jul 13, 2019 at 11:39:55AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Thu, Jul 11, 2019 at 05:08:22PM +0200, Tomas Vondra wrote:\n>>> On Wed, Jul 10, 2019 at 06:48:16PM -0400, Tom Lane wrote:\n>>>> The right way to determine operator semantics is to look to see\n>>>> whether they are in a btree opclass. That's what the rest of the\n>>>> planner does, and there is no good reason for the mcv code to\n>>>> do it some other way.\n>\n>>> Hmmm, but that will mean we're unable to estimate operators that are not\n>>> part of a btree opclass. Which is a bit annoying, because that would also\n>>> affect equalities (and thus functional dependencies), in which case the\n>>> type may easily have just hash opclass or something.\n>\n>After thinking about this more, I may have been analogizing to the wrong\n>code. It's necessary to use opclass properties when we're reasoning about\n>operators in a way that *must* be correct, for instance to conclude that\n>a partition can be pruned from a query. But this code is just doing\n>selectivity estimation, so the correctness standards are a lot lower.\n>In particular I see that the long-established range-query-detection\n>code in clauselist_selectivity is looking for operators that have\n>F_SCALARLTSEL etc. 
as restriction estimators (in fact I'm guessing you\n>lifted parts of the mcv code from that, cause it looks pretty similar).\n>So if we've gotten away with that so far over there, there's probably\n>no reason not to do likewise here.\n>\n>I am a little troubled by the point I made about operators possibly\n>wanting to have a more-specific estimator than scalarltsel, but that\n>seems like an issue to solve some other time; and if we do change that\n>logic then clauselist_selectivity needs to change too.\n>\n>> Here are WIP patches addressing two of the issues:\n>\n>> 1) determining operator semantics by matching them to btree opclasses\n>\n>Per above, I'm sort of inclined to drop this, unless you feel better\n>about doing it like this than the existing way.\n>\n\nOK. TBH I don't have a very strong opinion on this - I always disliked\nhow we rely on the estimator OIDs in this code, and relying on btree\nopclasses seems somewhat more principled. But I'm not sure I understand\nall the implications of such change (and I have some concerns about it\ntoo, per my last message), so I'd revisit that in PG13.\n\n>> 2) deconstructing OpExpr into Var/Const nodes\n>\n>deconstruct_opexpr is still failing to verify that the Var is a Var.\n>I'd try something like\n>\n>\tleftop = linitial(expr->args);\n>\twhile (IsA(leftop, RelabelType))\n>\t leftop = ((RelabelType *) leftop)->arg;\n>\t// and similarly for rightop\n>\tif (IsA(leftop, Var) && IsA(rightop, Const))\n>\t // return appropriate results\n>\telse if (IsA(leftop, Const) && IsA(rightop, Var))\n>\t // return appropriate results\n>\telse\n>\t // fail\n>\n\nAh, right. The RelabelType might be on top of something that's not Var.\n\n>Also, I think deconstruct_opexpr is far too generic a name for what\n>this is actually doing. It'd be okay as a static function name\n>perhaps, but not if you're going to expose it globally.\n>\n\nI agree. 
I can't quite make it static, because it's used from multiple\nplaces, but I'll move it to extended_stats_internal.h (and I'll see if I\ncan think of a better name too).\n\n>> a) I don't think unary-argument OpExpr are an issue, because this is\n>> checked when determining which clauses are compatible (and the code only\n>> allows the case with 2 arguments).\n>\n>OK.\n>\n>> b) Const with constisnull=true - I'm not yet sure what to do about this.\n>> The easiest fix would be to deem those clauses incompatible, but that\n>> seems a bit too harsh. The right thing is probably passing the NULL to\n>> the operator proc (but that means we can't use FunctionCall).\n>\n>No, because most of the functions in question are strict and will just\n>crash on null inputs. Perhaps you could just deem that cases involving\n>a null Const don't match what you're looking for.\n>\n\nMakes sense, I'll do that.\n\n>> Now, looking at this code, I wonder if using DEFAULT_COLLATION_OID when\n>> calling the operator is the right thing. We're using type->typcollation\n>> when building the stats, so maybe we should do the same thing here.\n>\n>Yeah, I was wondering that too. But really you should be using the\n>column's collation not the type's default collation. See commit\n>5e0928005.\n>\n\nOK, thanks.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Jul 2019 22:43:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "OK, attached is a sequence of WIP fixes for the issues discussed here.\n\n\n1) using column-specific collations (instead of type/default ones)\n\nThe collations patch is pretty simple, but I'm not sure it actually does\nthe right thing, particularly during estimation where it uses collation\nfrom the Var node (varcollid). But looking at 5e0928005, this should use\nthe same collation as when building the extended statistics (which we\nget from the per-column stats, as stored in pg_statistic.stacoll#).\n\nBut we don't actually store collations for extended statistics, so we\ncan either modify pg_statistic_ext_data and store it there, or lookup\nthe per-column statistic info during estimation, and use that. I kinda\nthink the first option is the right one, but that'd mean yet another\ncatversion bump.\n\nOTOH 5e0928005 actually did modify the extended statistics (mvdistinct\nand dependencies) to use type->typcollation during building, so maybe we\nwant to use the default type collation for some reason?\n\n\n2) proper extraction of Var/Const from opclauses\n\nThis is the primary issue discussed in this thread - I've renamed the\nfunction to examine_opclause_expression() so that it kinda resembles\nexamine_variable() and I've moved it to the \"internal\" header file. We\nstill need it from two places so it can't be static, but hopefully this\nnaming is acceptable.\n\n\n3) handling of NULL values (Const and MCV items)\n\nAside from the issue that Const may represent NULL, I've realized the\ncode might do the wrong thing for NULL in the MCV item itself. It did\ntreat it as mismatch and update the bitmap, but it might have invoke the\noperator procedure anyway (depending on whether it's AND/OR clause,\nwhat's the current value in the bitmap, etc.). 
This patch should fix\nboth issues by treating them as mismatch, and skipping the proc call.\n\n\n4) refactoring of the bitmap updates\n\nThis is not a bug per se, but while working on (3) I've realized the\ncode updating the bitmap is quite repetitive and it does stuff like\n\n if (is_or)\n bitmap[i] = Max(bitmap[i], match)\n else\n bitmap[i] = Min(bitmap[i], match)\n\nover and over on various places. This moves this into a separate static\nfunction, which I think makes it way more readable. Also, it replaces\nthe Min/Max with a plain boolean operators (the patch originally used\nthree states, not true/false, hence the Min/Max).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 15 Jul 2019 03:34:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Hi,\n\nI've pushed the fixes listed in the previous message, with the exception\nof the collation part, because I had some doubts about that.\n\n1) handling of NULL values in Cons / MCV items\n\nThe handling of NULL elements was actually a bit more broken, because it\nalso was not quite correct for NULL values in the MCV items. The code\ntreated this as a mismatch, but then skipped the rest of the evaluation\nonly for AND-clauses (because then 'false' is final). But for OR-clauses\nit happily proceeded to call the proc, etc. It was not hard to cause a\ncrash with statistics on varlena columns.\n\nI've fixed this and added a simple regression test to check this. It\nhowever shows the stats_ext suite needs some improvements - until now it\nonly had AND-clauses. Now it has one simple OR-clause test, but it needs\nmore of that - and perhaps some combinations mixing AND/OR. I've tried\nadding an copy of each existing query, with AND replaced by OR, and that\nworks fine (no crashes, estimates seem OK). But it's quite heavy-handed\nway to create regression tests, so I'll look into this in PG13 cycle.\n\n\n2) collations\n\nNow, for the collation part - after some more thought and looking at code\nI think the fix I shared before is OK. It uses per-column collations\nconsistently both when building the stats and estimating clauses, which\nmakes it consistent. I was not quite sure if using Var->varcollid is the\nsame thing as pg_attribute.attcollation, but it seems it is - at least for\nVars pointing to regular columns (which for extended stats should always\nbe the case).\n\nAnd we reset stats whenever the attribute type changes (which includes\nchange of collation for the column), so I think it's OK. 
To be precise, we\nonly reset MCV list in that case - we keep mvdistinct and dependencies,\nbut that's probably OK because those don't store values and we won't\nrun any functions on them.\n\nSo I think the attached patch is OK, but I'd welcome some feedback.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 18 Jul 2019 13:20:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I've pushed the fixes listed in the previous message, with the exception\n> of the collation part, because I had some doubts about that.\n\nSorry for being slow on this.\n\n> Now, for the collation part - after some more thought and looking at code\n> I think the fix I shared before is OK. It uses per-column collations\n> consistently both when building the stats and estimating clauses, which\n> makes it consistent. I was not quite sure if using Var->varcollid is the\n> same thing as pg_attribute.attcollation, but it seems it is - at least for\n> Vars pointing to regular columns (which for extended stats should always\n> be the case).\n\nI think you are right, but it could use some comments in the code.\nThe important point here is that if we coerce a Var's collation to\nsomething else, that will be represented as a separate CollateExpr\n(which will be a RelabelType by the time it gets here, I believe).\nWe don't just replace varcollid, the way eval_const_expressions will\ndo to a Const.\n\n\nWhile I'm looking at the code --- I don't find this at all convincing:\n\n /*\n * We don't care about isgt in equality, because\n * it does not matter whether it's (var op const)\n * or (const op var).\n */\n match = DatumGetBool(FunctionCall2Coll(&opproc,\n DEFAULT_COLLATION_OID,\n cst->constvalue,\n item->values[idx]));\n\nIt *will* matter if the operator is cross-type. I think there is no\ngood reason to have different code paths for the equality and inequality\ncases --- just look at isgt and swap or don't swap the arguments.\n\nBTW, \"isgt\" seems like a completely misleading name for that variable.\nAFAICS, what that is actually telling is whether the var is on the left\nor right side of the OpExpr. 
Everywhere else in the planner with a\nneed for that uses \"bool varonleft\", and I think you should do likewise\nhere (though note that that isgt, as coded, is more like \"varonright\").\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 11:16:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 11:16:08AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I've pushed the fixes listed in the previous message, with the exception\n>> of the collation part, because I had some doubts about that.\n>\n>Sorry for being slow on this.\n>\n>> Now, for the collation part - after some more thought and looking at code\n>> I think the fix I shared before is OK. It uses per-column collations\n>> consistently both when building the stats and estimating clauses, which\n>> makes it consistent. I was not quite sure if using Var->varcollid is the\n>> same thing as pg_attribute.attcollation, but it seems it is - at least for\n>> Vars pointing to regular columns (which for extended stats should always\n>> be the case).\n>\n>I think you are right, but it could use some comments in the code.\n>The important point here is that if we coerce a Var's collation to\n>something else, that will be represented as a separate CollateExpr\n>(which will be a RelabelType by the time it gets here, I believe).\n>We don't just replace varcollid, the way eval_const_expressions will\n>do to a Const.\n>\n\nOK, thanks. I've added a comment about that into mcv_get_match_bitmap (not\nall the details about RelabelType, because that gets stripped while\nexamining the opexpr, but generally about the collations).\n\n>\n>While I'm looking at the code --- I don't find this at all convincing:\n>\n> /*\n> * We don't care about isgt in equality, because\n> * it does not matter whether it's (var op const)\n> * or (const op var).\n> */\n> match = DatumGetBool(FunctionCall2Coll(&opproc,\n> DEFAULT_COLLATION_OID,\n> cst->constvalue,\n> item->values[idx]));\n>\n>It *will* matter if the operator is cross-type. 
I think there is no\n>good reason to have different code paths for the equality and inequality\n>cases --- just look at isgt and swap or don't swap the arguments.\n>\n>BTW, \"isgt\" seems like a completely misleading name for that variable.\n>AFAICS, what that is actually telling is whether the var is on the left\n>or right side of the OpExpr. Everywhere else in the planner with a\n>need for that uses \"bool varonleft\", and I think you should do likewise\n>here (though note that that isgt, as coded, is more like \"varonright\").\n>\n\nYes, you're right in both cases. I've fixed the first issue with equality\nby simply using the same code as for all operators (which means we don't\nneed to check the oprrest at all in this code, making it simpler).\n\nAnd I've reworked the examine_opclause_expression() to use varonleft\nnaming - I agree it's a much better name. This was one of the remnants of\nthe code I initially copied from somewhere in selfuncs.c and massaged\nit until it did what I needed.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 19 Jul 2019 18:46:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> [ mcv fixes ]\n\nThese patches look OK to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 14:37:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 02:37:19PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> [ mcv fixes ]\n>\n>These patches look OK to me.\n>\n\nOK, thanks. Pushed.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 20 Jul 2019 16:38:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [sqlsmith] Crash in mcv_get_match_bitmap"
}
]
[
{
"msg_contents": "Hi,\n\nWhile working on adding a new log_destination I noticed that the\nsyslogger piping would need to be updated. At the moment both ends\nonly handle stderr/csvlog as the pipe message header has a char\n\"is_last\" that is either t/f (stderr last, stderr partial) or T/F\n(csvlog last, csvlog partial). Couple approaches came to mind:\n\n1. Use additional pairs of chars for each additional destination (e.g.\nx/X, y/Y, ...) and mimic the logic of csvlog.\n2. Repurpose the char \"is_last\" as a bitmap of the log destination\nwith the highest order bit indicating whether it's the last chunk.\n3. Add a separate field \"dest\" for the log destination and leave\n\"is_last\" as a t/f indicating whether it's the last chunk.\n\nAttached are patches for each approach (fun exercise!). Also attached\nis a basic TAP test to invoke the csvlog destination. It's a clone of\npg_ctl log rotation test that looks for .csv logs. If there's interest\nin the test I was thinking of expanding it a bit to include \"big\"\noutput that would span multiple messages to test the partial/combining\npath. My thoughts on the approaches:\n\n#1 doesn't change the header types or size but seems ugly as it leads\nto new pairs of constants and logic in multiple places. In particular,\nboth send and receive ends have to encode and decode the destination.\n#2 is cleaner as there's a logical separation of the dest fields and\nno need for new constant pairs when adding new destinations. Would\nalso need to ensure new LOG_DESTINATION_xyz constants do not use that\nlast bit (there's already four now so room for three more).\n#3 leads to the cleanest code though you lose 4-bytes of max data size\nper chunk.\n\nWhich would be preferable? I'd like to validate the approach as the\nnew log destination would be built atop it. I leaning toward #3 though\nif the 4-byte loss is a deal breaker then #2.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Wed, 10 Jul 2019 17:05:09 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": true,
"msg_subject": "Refactoring syslogger piping to simplify adding new log destinations"
},
{
"msg_contents": "Hi Sehrope,\n\nOn 2019-Jul-10, Sehrope Sarkuni wrote:\n\n> While working on adding a new log_destination I noticed that the\n> syslogger piping would need to be updated. At the moment both ends\n> only handle stderr/csvlog as the pipe message header has a char\n> \"is_last\" that is either t/f (stderr last, stderr partial) or T/F\n> (csvlog last, csvlog partial).\n\nI spent some time a couple of weeks ago looking at profiles of the\nsyslogger code, and my impression is that the current design of using a\npipe to move data from one process to another may benefit from a\ncomplete rewrite; it seems that the kernel overhead of going over the\npipe is significant. (The test case was a couple dozen processes all\ngenerating a thousand of couple-hundred-KB log lines. In perf, the pipe\nread/write takes up 99% of the CPU time).\n\nMaybe we can use something like a shared memory queue, working in a\nsimilar way to wal_buffers -- where backends send over the shm queue to\nsyslogger, and syslogger writes in order to the actual log file. Or\nmaybe something completely different; I didn't actually prototype\nanything, just observed the disaster.\n\nI'm not opposed to your patches, just trying to whet your appetite for\nsomething bigger in the vicinity.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jul 2019 18:41:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring syslogger piping to simplify adding new log\n destinations"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Maybe we can use something like a shared memory queue, working in a\n> similar way to wal_buffers -- where backends send over the shm queue to\n> syslogger, and syslogger writes in order to the actual log file.\n\nNo way that's going to be acceptable for postmaster output.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 18:50:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring syslogger piping to simplify adding new log\n destinations"
},
{
"msg_contents": "On 2019-Jul-10, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Maybe we can use something like a shared memory queue, working in a\n> > similar way to wal_buffers -- where backends send over the shm queue to\n> > syslogger, and syslogger writes in order to the actual log file.\n> \n> No way that's going to be acceptable for postmaster output.\n\nWell, we can use both mechanisms simultaneously. Postmaster doesn't emit\nall that much output anyway, so I don't think that's a concern. And\nactually, we still need the pipes from the backend for the odd cases\nwhere third party code writes to stderr, no?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jul 2019 18:54:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring syslogger piping to simplify adding new log\n destinations"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-10, Tom Lane wrote:\n>> No way that's going to be acceptable for postmaster output.\n\n> Well, we can use both mechanisms simultaneously. Postmaster doesn't emit\n> all that much output anyway, so I don't think that's a concern. And\n> actually, we still need the pipes from the backend for the odd cases\n> where third party code writes to stderr, no?\n\nYeah, if you don't want to give up capturing random stderr output (and you\nshouldn't), that's another issue. But as you say, maybe we could have both\nmechanisms. There'd be a synchronization problem for pipe vs queue output\nfrom the same process, but maybe that will be tolerable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 18:59:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring syslogger piping to simplify adding new log\n destinations"
}
] |
[
{
"msg_contents": "Hello\n\nI noticed few warnings from my compiler (gcc version 8.3.0 (Debian 8.3.0-6)) during make check-world:\n\narray.pgc: In function ‘main’:\narray.pgc:41:16: warning: ‘%d’ directive writing between 1 and 11 bytes into a region of size 10 [-Wformat-overflow=]\n sprintf(str, \"2000-1-1 0%d:00:00\", j);\n ^~~~~~~~~~~~~~~~~~~~\narray.pgc:41:16: note: directive argument in the range [-2147483648, 9]\narray.pgc:41:3: note: ‘sprintf’ output between 18 and 28 bytes into a destination of size 20\n sprintf(str, \"2000-1-1 0%d:00:00\", j);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\narray.pgc:43:16: warning: ‘sprintf’ may write a terminating nul past the end of the destination [-Wformat-overflow=]\n sprintf(str, \"2000-1-1%d\\n\", j);\n ^~~~~~~~~~~~~~\narray.pgc:43:3: note: ‘sprintf’ output between 11 and 21 bytes into a destination of size 20\n sprintf(str, \"2000-1-1%d\\n\", j);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThey coming from src/interfaces/ecpg tests ( ./src/interfaces/ecpg/test/sql/array.pgc ).\nSeems this code is 4 year old but I did not found discussion related to such compiler warnings. Is this expected?\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 11 Jul 2019 15:21:15 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "complier warnings from ecpg tests"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 03:21:15PM +0300, Sergei Kornilov wrote:\n> I noticed few warnings from my compiler (gcc version 8.3.0 (Debian\n> 8.3.0-6)) during make check-world:\n>\n> They coming from src/interfaces/ecpg tests (\n> ./src/interfaces/ecpg/test/sql/array.pgc ).\n> Seems this code is 4 year old but I did not found discussion related\n> to such compiler warnings. Is this expected?\n\nAre you using -Wformat-overflow? At which level?\n--\nMichael",
"msg_date": "Thu, 11 Jul 2019 22:40:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: complier warnings from ecpg tests"
},
{
"msg_contents": "Hi\n\n> Are you using -Wformat-overflow? At which level?\n\nI use: ./configure --prefix=somepath --enable-cassert --enable-debug CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\" --enable-tap-tests\nNo other explicit options.\n\npg_config reports:\n\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -ggdb -Og -g3 -fno-omit-frame-pointer\nCFLAGS_SL = -fPIC\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 11 Jul 2019 16:57:08 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: complier warnings from ecpg tests"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 04:57:08PM +0300, Sergei Kornilov wrote:\n> I use: ./configure --prefix=somepath --enable-cassert --enable-debug\n> CFLAGS=\"-ggdb -Og -g3 -fno-omit-frame-pointer\" --enable-tap-tests\n> No other explicit options.\n\nThanks for the set of flags. So this comes from the use of -Og, and\nthe rest of the tree does not complain. The issue is that gcc\ncomplains about the buffer not being large enough, but %d only uses up\nto 2 characters so there is no overflow. In order to fix the issue it\nis fine enough to increase the buffer size to 28 bytes, so I would\nrecommend to just do that. This is similar to the business done in\n3a4b891.\n--\nMichael",
"msg_date": "Thu, 1 Aug 2019 15:14:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: complier warnings from ecpg tests"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 03:14:06PM +0900, Michael Paquier wrote:\n> Thanks for the set of flags. So this comes from the use of -Og, and\n> the rest of the tree does not complain. The issue is that gcc\n> complains about the buffer not being large enough, but %d only uses up\n> to 2 characters so there is no overflow. In order to fix the issue it\n> is fine enough to increase the buffer size to 28 bytes, so I would\n> recommend to just do that. This is similar to the business done in\n> 3a4b891.\n\nAnd fixed with a9f301d.\n--\nMichael",
"msg_date": "Fri, 2 Aug 2019 09:55:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: complier warnings from ecpg tests"
},
{
"msg_contents": "Hi\n\n>> Thanks for the set of flags. So this comes from the use of -Og, and\n>> the rest of the tree does not complain. The issue is that gcc\n>> complains about the buffer not being large enough, but %d only uses up\n>> to 2 characters so there is no overflow. In order to fix the issue it\n>> is fine enough to increase the buffer size to 28 bytes, so I would\n>> recommend to just do that. This is similar to the business done in\n>> 3a4b891.\n>\n> And fixed with a9f301d.\n\nThank you! My compiler is now quiet\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 02 Aug 2019 10:33:31 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: complier warnings from ecpg tests"
}
] |
[
{
"msg_contents": "Hi,\n\nI had some spare time tonight so I started a prototype to allow\nfiltering the indexes that are processed using the REINDEX command, as\nPeter suggested in the parallel reindexdb thread [1].\n\nI didn't want to spend too much time enjoying bison and adding new\nunreserved keywords, so for now I just implemented this syntax to\nstart a discussion for this feature in the next commitfest:\n\nREINDEX ( FILTER = COLLATION ) [...]\n\nThe FILTER clause can be used multiple times, each one is OR-ed with\nthe ReindexStmt's option, so we could easily add a LIBC, ICU and other\nfilters, also making COLLATION (or more realistically a better new\nkeyword) an alias for (LIBC | ICU) for instance.\n\nThe filtering is done at table level (with and without the\nconcurrently option), so SCHEMA, DATABASE and SYSTEM automatically\nbenefit from it. If this clause is used with a REINDEX INDEX, the\nstatement errors out, as I don't see a valid use case for providing a\nsingle index name and asking to possibly filter it at the same time.\n\nUnder the hood, the filtering is for now done in a single function by\nappending elements, not removing them. An empty oid list is created,\nall indexes belonging to the underlying relation are processed by the\nspecific filter(s), and any index that fails to be discarded by at\nleast one filter, even partially, is added to the final list.\n\nI also added some minimal documentation and regression tests. I'll\nadd this patch to the next commitfest.\n\n[1] https://www.postgresql.org/message-id/7140716c-679e-a0b9-a273-b201329d8891%402ndquadrant.com",
"msg_date": "Thu, 11 Jul 2019 23:14:20 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "REINDEX filtering in the backend"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 11:14:20PM +0200, Julien Rouhaud wrote:\n> I didn't want to spend too much time enjoying bison and adding new\n> unreserved keywords, so for now I just implemented this syntax to\n> start a discussion for this feature in the next commitfest:\n> \n> REINDEX ( FILTER = COLLATION ) [...]\n> \n> The FILTER clause can be used multiple times, each one is OR-ed with\n> the ReindexStmt's option, so we could easily add a LIBC, ICU and other\n> filters, also making COLLATION (or more realistically a better new\n> keyword) an alias for (LIBC | ICU) for instance.\n\nI would prefer keeping the interface simple with only COLLATION, so as\nonly collation sensitive indexes should be updated, including icu and\nlibc ones. Or would it be useful to have the filtering for both as\nlibicu can break things similarly to glibc in an update still a\nbreakage on one or the other would not happen at the same time? I\ndon't know enough of libicu regarding that, eager to learn. In which\ncase, wouldn't it be better to support that from the start?\n\n> The filtering is done at table level (with and without the\n> concurrently option), so SCHEMA, DATABASE and SYSTEM automatically\n> benefit from it. If this clause is used with a REINDEX INDEX, the\n> statement errors out, as I don't see a valid use case for providing a\n> single index name and asking to possibly filter it at the same time.\n\nSupporting that case would not be that much amount of work, no?\n\n> I also added some minimal documentation and regression tests. I'll\n> add this patch to the next commitfest.\n> \n> [1] https://www.postgresql.org/message-id/7140716c-679e-a0b9-a273-b201329d8891%402ndquadrant.com\n\n+ if ((stmt->options & REINDEXOPT_ALL_FILTERS) != 0)\n+ elog(ERROR, \"FILTER clause is not compatible with REINDEX INDEX\");\n[...]\n+ discard indexes whose ordering does not depend on a collation. 
Note that\n+ the FILTER option is not compatible with <command>REINDEX\n+ SCHEMA</command>.\n\nWhy do you have both limitations? I think that it would be nice to be\nable to do both, generating an error for REINDEX INDEX if the index\nspecified is not compatible, and a LOG if the index is not filtered\nout when a list is processed. Please note that elog() cannot be used\nfor user-facing failures, only for internal ones.\n\n+REINDEX (VERBOSE, FILTER = COLLATION) TABLE reindex_verbose;\n+-- One column, not depending on a collation\nIn order to make sure that a reindex has been done for a given entry\nwith the filtering, an idea is to save the relfilenode before the\nREINDEX and check it afterwards. That would be nice to make sure that\nonly the wanted indexes are processed, but it is not possible to be\nsure of that based on your tests, and some tests should be done on\nrelations which have collation-sensitive as well as\ncollation-insensitive indexes.\n\n+ index = index_open(indexOid, AccessShareLock);\n+ numAtts = index->rd_index->indnatts;\n+ index_close(index, AccessShareLock);\nWouldn't it be better to close that after doing the scan?\n\nNit: I am pretty sure that this should be indented.\n\nCould you add tests with REINDEX CONCURRENTLY?\n--\nMichael",
"msg_date": "Wed, 28 Aug 2019 14:02:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Wed, Aug 28, 2019 at 02:02:08PM +0900, Michael Paquier wrote:\n> + index = index_open(indexOid, AccessShareLock);\n> + numAtts = index->rd_index->indnatts;\n> + index_close(index, AccessShareLock);\n> Wouldn't it be better to close that after doing the scan?\n> \n> Nit: I am pretty sure that this should be indented.\n> \n> Could you add tests with REINDEX CONCURRENTLY?\n\nBonus: support for reindexdb should be added. Let's not forget about\nit.\n--\nMichael",
"msg_date": "Wed, 28 Aug 2019 14:09:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Wed, Aug 28, 2019 at 7:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 11, 2019 at 11:14:20PM +0200, Julien Rouhaud wrote:\n> > I didn't want to spend too much time enjoying bison and adding new\n> > unreserved keywords, so for now I just implemented this syntax to\n> > start a discussion for this feature in the next commitfest:\n> >\n> > REINDEX ( FILTER = COLLATION ) [...]\n> >\n> > The FILTER clause can be used multiple times, each one is OR-ed with\n> > the ReindexStmt's option, so we could easily add a LIBC, ICU and other\n> > filters, also making COLLATION (or more realistically a better new\n> > keyword) an alias for (LIBC | ICU) for instance.\n>\n> I would prefer keeping the interface simple with only COLLATION, so as\n> only collation sensitive indexes should be updated, including icu and\n> libc ones. Or would it be useful to have the filtering for both as\n> libicu can break things similarly to glibc in an update still a\n> breakage on one or the other would not happen at the same time? I\n> don't know enough of libicu regarding that, eager to learn. In which\n> case, wouldn't it be better to support that from the start?\n\nI'm not sure either. Another thing would be to add extra syntax to be\nable to discard even more indexes. For instance we could store the\nversion of the underlying lib used when the index is (re)created, and\ndo something like\nREINDEX (FILTER = LIBC!=2.28) or REINDEX (FILTER = LIBC==2.27) or\nsomething similar.\n\n> > The filtering is done at table level (with and without the\n> > concurrently option), so SCHEMA, DATABASE and SYSTEM automatically\n> > benefit from it. If this clause is used with a REINDEX INDEX, the\n> > statement errors out, as I don't see a valid use case for providing a\n> > single index name and asking to possibly filter it at the same time.\n>\n> Supporting that case would not be that much amount of work, no?\n\nProbably not, but I'm dubious about the use case. 
Adding the lib\nversion in the catalog would be more useful for people who want to\nwrite their own rules to reindex a specific set of indexes.\n\n> > I also added some minimal documentation and regression tests. I'll\n> > add this patch to the next commitfest.\n> >\n> > [1] https://www.postgresql.org/message-id/7140716c-679e-a0b9-a273-b201329d8891%402ndquadrant.com\n>\n> + if ((stmt->options & REINDEXOPT_ALL_FILTERS) != 0)\n> + elog(ERROR, \"FILTER clause is not compatible with REINDEX INDEX\");\n> [...]\n> + discard indexes whose ordering does not depend on a collation. Note that\n> + the FILTER option is not compatible with <command>REINDEX\n> + SCHEMA</command>.\n>\n> Why do you have both limitations?\n\nThat's actually a typo; the documentation should have specified that\nit's not compatible with REINDEX INDEX, not REINDEX SCHEMA. I'll fix.\n\n> I think that it would be nice to be\n> able to do both, generating an error for REINDEX INDEX if the index\n> specified is not compatible, and a LOG if the index is not filtered\n> out when a list is processed.\n\nDo you mean having an error if the index does not contain any\ncollation-based type? Also, REINDEX only accepts a single name, so\nthere shouldn't be any list processing for REINDEX INDEX? I'm not\nreally in favour of adding extra code for the filtering when the user asks for\na specific index name to be reindexed.\n\n> Please note that elog() cannot be used\n> for user-facing failures, only for internal ones.\n\nIndeed, I'll change it to an ereport with ERRCODE_FEATURE_NOT_SUPPORTED.\n\n>\n> +REINDEX (VERBOSE, FILTER = COLLATION) TABLE reindex_verbose;\n> +-- One column, not depending on a collation\n> In order to make sure that a reindex has been done for a given entry\n> with the filtering, an idea is to save the relfilenode before the\n> REINDEX and check it afterwards. 
That would be nice to make sure that\n> only the wanted indexes are processed, but it is not possible to be\n> sure of that based on your tests, and some tests should be done on\n> relations which have collation-sensitive as well as\n> collation-insensitive indexes.\n\nThat's what I did when I first submitted the feature in reindexdb. I\ndidn't use them because it means switching to TAP tests. I can drop\nthe simple regression test (especially since I now realize that one is\nquite broken) and use the TAP one instead, or should I keep both?\n\n>\n> + index = index_open(indexOid, AccessShareLock);\n> + numAtts = index->rd_index->indnatts;\n> + index_close(index, AccessShareLock);\n> Wouldn't it be better to close that after doing the scan?\n\nYes indeed.\n\n> Could you add tests with REINDEX CONCURRENTLY?\n\nSure!\n\n> Bonus: support for reindexdb should be added. Let's not forget about\n> it.\n\nYep. That was a first prototype to see if this approach is ok. I'll\nadd more tests, run pgindent, and add reindexdb support if this approach is\nsensible.\n\n\n",
"msg_date": "Wed, 28 Aug 2019 10:22:07 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Wed, Aug 28, 2019 at 10:22:07AM +0200, Julien Rouhaud wrote:\n>>> The filtering is done at table level (with and without the\n>>> concurrently option), so SCHEMA, DATABASE and SYSTEM automatically\n>>> benefit from it. If this clause is used with a REINDEX INDEX, the\n>>> statement errors out, as I don't see a valid use case for providing a\n>>> single index name and asking to possibly filter it at the same time.\n>>\n>> Supporting that case would not be that much amount of work, no?\n> \n> Probably not, but I'm dubious about the use case. Adding the lib\n> version in the catalog would be more useful for people who want to\n> write their own rules to reindex specific set of indexes.\n\nHearing from others here would be helpful. My take is that having a\nsimple option doing the filtering, without the need to store anything\nin the catalogs, would be helpful enough for users mixing both index\ntypes on a single table. Others may not agree.\n\n>> I think that it would be nice to be\n>> able to do both, generating an error for REINDEX INDEX if the index\n>> specified is not compatible, and a LOG if the index is not filtered\n>> out when a list is processed.\n> \n> Do you mean having an error if the index does not contain any\n> collation based type? Also, REINDEX only accept a single name, so\n> there shouldn't be any list processing for REINDEX INDEX? I'm not\n> really in favour of adding extra code the filtering when user asks for\n> a specific index name to be reindexed.\n\nI was referring to adding an error if trying to reindex an index with\nthe filtering enabled. That's useful to inform the user that what he\nintends to do is not compatible with the options provided.\n\n> That's what I did when I first submitted the feature in reindexdb. I\n> didn't use them because it means switching to TAP tests. 
I can drop\n> the simple regression test (especially since I now realize than one is\n> quite broken) and use the TAP one if, or should I keep both?\n\nThere is no need for TAP I think. You could for example store the\nrelid and its relfilenode in a temporary table before running the\nreindex, run the REINDEX and then compare with what pg_class stores.\nAnd that's much cheaper than setting up a new instance for a TAP test.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2019 09:09:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Thu, Aug 29, 2019 at 2:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 28, 2019 at 10:22:07AM +0200, Julien Rouhaud wrote:\n> >>> The filtering is done at table level (with and without the\n> >>> concurrently option), so SCHEMA, DATABASE and SYSTEM automatically\n> >>> benefit from it. If this clause is used with a REINDEX INDEX, the\n> >>> statement errors out, as I don't see a valid use case for providing a\n> >>> single index name and asking to possibly filter it at the same time.\n> >>\n> >> Supporting that case would not be that much amount of work, no?\n> >\n> > Probably not, but I'm dubious about the use case. Adding the lib\n> > version in the catalog would be more useful for people who want to\n> > write their own rules to reindex specific set of indexes.\n>\n> Hearing from others here would be helpful. My take is that having a\n> simple option doing the filtering, without the need to store anything\n> in the catalogs, would be helpful enough for users mixing both index\n> types on a single table. Others may not agree.\n\nThat was already suggested by Thomas and seconded by Peter E., see\nhttps://www.postgresql.org/message-id/2b1504ac-3d6c-11ec-e1ce-3daf132b3d37%402ndquadrant.com.\n\nI personally think that it's a sensible approach, and I'll be happy to\npropose a patch for that too if no one worked on this yet.\n\n> >> I think that it would be nice to be\n> >> able to do both, generating an error for REINDEX INDEX if the index\n> >> specified is not compatible, and a LOG if the index is not filtered\n> >> out when a list is processed.\n> >\n> > Do you mean having an error if the index does not contain any\n> > collation based type? Also, REINDEX only accept a single name, so\n> > there shouldn't be any list processing for REINDEX INDEX? 
I'm not\n> > really in favour of adding extra code the filtering when user asks for\n> > a specific index name to be reindexed.\n>\n> I was referring to adding an error if trying to reindex an index with\n> the filtering enabled. That's useful to inform the user that what he\n> intends to do is not compatible with the options provided.\n\nOk, I can add it if needed.\n\n> > That's what I did when I first submitted the feature in reindexdb. I\n> > didn't use them because it means switching to TAP tests. I can drop\n> > the simple regression test (especially since I now realize than one is\n> > quite broken) and use the TAP one if, or should I keep both?\n>\n> There is no need for TAP I think. You could for example store the\n> relid and its relfilenode in a temporary table before running the\n> reindex, run the REINDEX and then compare with what pg_class stores.\n> And that's much cheaper than setting a new instance for a TAP test.\n\nOh indeed, good point! I'll work on better tests using this approach then.\n\n\n",
"msg_date": "Thu, 29 Aug 2019 10:52:55 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Thu, Aug 29, 2019 at 10:52:55AM +0200, Julien Rouhaud wrote:\n> That was already suggested by Thomas and seconded by Peter E., see\n> https://www.postgresql.org/message-id/2b1504ac-3d6c-11ec-e1ce-3daf132b3d37%402ndquadrant.com.\n> \n> I personally think that it's a sensible approach, and I'll be happy to\n> propose a patch for that too if no one worked on this yet.\n\nThat may be interesting to sort out first then because we'd likely\nwant to know what is first in the catalogs before designing the\nfiltering processing looking at it, no?\n--\nMichael",
"msg_date": "Fri, 30 Aug 2019 09:10:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On 2019-08-30 02:10, Michael Paquier wrote:\n> On Thu, Aug 29, 2019 at 10:52:55AM +0200, Julien Rouhaud wrote:\n>> That was already suggested by Thomas and seconded by Peter E., see\n>> https://www.postgresql.org/message-id/2b1504ac-3d6c-11ec-e1ce-3daf132b3d37%402ndquadrant.com.\n>>\n>> I personally think that it's a sensible approach, and I'll be happy to\n>> propose a patch for that too if no one worked on this yet.\n> \n> That may be interesting to sort out first then because we'd likely\n> want to know what is first in the catalogs before designing the\n> filtering processing looking at it, no?\n\nRight. We should aim to get per-object collation version tracking done.\n And then we might want to have a REINDEX variant that processes exactly\nthose indexes with an out-of-date version number -- and then updates\nthat version number once the reindexing is done. I think that project\nis achievable for PG13.\n\nI think we can keep this present patch in our back pocket for, say, the\nlast commit fest if we don't make sufficient progress on those other\nthings. Right now, it's hard to review because the target is moving,\nand it's unclear what guidance to give users.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 4 Sep 2019 13:54:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
},
{
"msg_contents": "On Wed, Sep 04, 2019 at 01:54:53PM +0200, Peter Eisentraut wrote:\n> Right. We should aim to get per-object collation version tracking done.\n> And then we might want to have a REINDEX variant that processes exactly\n> those indexes with an out-of-date version number -- and then updates\n> that version number once the reindexing is done. I think that project\n> is achievable for PG13.\n> \n> I think we can keep this present patch in our back pocket for, say, the\n> last commit fest if we don't make sufficient progress on those other\n> things. Right now, it's hard to review because the target is moving,\n> and it's unclear what guidance to give users.\n\nOkay, agreed. I have marked the patch as returned with feedback\nthen.\n--\nMichael",
"msg_date": "Fri, 6 Sep 2019 13:56:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX filtering in the backend"
}
] |
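Michael's suggested relfilenode check can be sketched directly in SQL. Everything below is illustrative: the table and index names are invented, and `REINDEX (FILTER = COLLATION)` is the syntax proposed by the patch in this thread, so it only runs on a build with the patch applied. The idea relies on the fact that a successful REINDEX assigns the index a new relfilenode, which the final query inspects:

```sql
-- Hypothetical schema: one collation-insensitive (int) and one
-- collation-sensitive (text) index on the same table.
CREATE TABLE reindex_filter_tab (id int, val text);
CREATE INDEX reindex_filter_int_idx ON reindex_filter_tab (id);
CREATE INDEX reindex_filter_text_idx ON reindex_filter_tab (val);

-- Snapshot the relfilenodes before the REINDEX.
CREATE TEMP TABLE old_filenodes AS
    SELECT oid, relfilenode FROM pg_class
    WHERE relname LIKE 'reindex\_filter\_%\_idx';

-- Patch-only syntax: skip indexes whose ordering does not
-- depend on a collation.
REINDEX (FILTER = COLLATION) TABLE reindex_filter_tab;

-- Only the text index should show a changed relfilenode.
SELECT c.relname, c.relfilenode <> o.relfilenode AS was_reindexed
FROM pg_class c
JOIN old_filenodes o ON o.oid = c.oid
ORDER BY c.relname;
```

As Michael notes, this kind of check fits in a plain regression test and is much cheaper than spinning up a separate instance for a TAP test.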
[
{
"msg_contents": "Hello hackers,\n\nHere is a small patch extracted from the undo log patch set that I'd\nlike to discuss separately and commit soon. I'm pretty sure that\nzheap, zedstore and anyone else developing new AMs based on 64 bit\nxids needs this, but there are no plans to extend WAL records to\nactually carry 64 bit xids yet, and I want to discourage people from\nmaking generic xid expanding functions that don't have any\ninterlocking, as I mentioned recently[1].\n\nIt's defined in xlogreader.c, because that's where such things\nnaturally live, but it can only work during replay for now, so I\nwrapped it in a FRONTEND invisibility cloak. That means that\nfront-end tools (pg_waldump.c) can't use it and will have to continue\nshow 32 bit xids for now.\n\nBetter ideas?\n\n[1] https://www.postgresql.org/message-id/CA+hUKGJPuKR7i7UvmXRXhjhdW=3v1-nSO3aFn4XDLdkBJru15g@mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Fri, 12 Jul 2019 13:25:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "XLogRecGetFullXid()"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 1:25 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here is a small patch extracted from the undo log patch set that I'd\n> like to discuss separately and commit soon. [...]\n\nPushed.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 17:22:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: XLogRecGetFullXid()"
}
] |
[
{
"msg_contents": "Hello.\n\nIn src/tools/msvc/config_default.pl, peremeter \"perl\" requires a\npath string, not a bool differently from that of configure\nscript. --with-python has the same characteristics and the\ncomment is suggesting that.\n\nWe need to fix --with-perl and --with-uuid the same way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 12 Jul 2019 12:15:29 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Comment fix of config_default.pl"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 12:15:29PM +0900, Kyotaro Horiguchi wrote:\n> In src/tools/msvc/config_default.pl, parameter \"perl\" requires a\n> path string, not a bool differently from that of configure\n> script. --with-python has the same characteristics and the\n> comment is suggesting that.\n> \n> We need to fix --with-perl and --with-uuid the same way.\n>\n> +\tuuid => undef, # --with-ossp-uuid=<path>\n\n--with-ossp-uuid is an obsolete spelling. Wouldn't it be better to\nreplace it with --with-uuid=<path>? That would be a bit inconsistent\nwith configure which can only take a set of hardcoded names, still\nthere is little point in keeping an option which would get removed\nsooner than later?\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 13:01:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix of config_default.pl"
},
{
"msg_contents": "Thanks.\n\nAt Fri, 12 Jul 2019 13:01:13 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190712040113.GD2149@paquier.xyz>\n> On Fri, Jul 12, 2019 at 12:15:29PM +0900, Kyotaro Horiguchi wrote:\n> > In src/tools/msvc/config_default.pl, parameter \"perl\" requires a\n> > path string, not a bool differently from that of configure\n> > script. --with-python has the same characteristics and the\n> > comment is suggesting that.\n> > \n> > We need to fix --with-perl and --with-uuid the same way.\n> >\n> > +\tuuid => undef, # --with-ossp-uuid=<path>\n> \n> --with-ossp-uuid is an obsolete spelling. Wouldn't it be better to\n> replace it with --with-uuid=<path>? That would be a bit inconsistent\n\nOops! Right. My eyes slipped over the difference..\n\n> with configure which can only take a set of hardcoded names, still\n> there is little point in keeping an option which would get removed\n> sooner than later?\n\nAgreed. Attached the fixed version.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 12 Jul 2019 15:34:11 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Comment fix of config_default.pl"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 03:34:11PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 12 Jul 2019 13:01:13 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190712040113.GD2149@paquier.xyz>\n>> --with-ossp-uuid is an obsolete spelling. Wouldn't it be better to\n>> replace it with --with-uuid=<path>? That would be a bit inconsistent\n> \n> Oops! Right. My eyes slipped over the difference..\n\nI would also patch GetFakeConfigure in Solution.pm (no need to send a\nnew patch), and I thought that you'd actually do the change. What do\nyou think?\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 17:01:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix of config_default.pl"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 05:01:41PM +0900, Michael Paquier wrote:\n> I would also patch GetFakeConfigure in Solution.pm (no need to send a\n> new patch), and I thought that you'd actually do the change. What do\n> you think?\n\nOK, applied as I have been able to look at it again, and after fixing\nthe portion for GetFakeConfigure. Thanks! \n--\nMichael",
"msg_date": "Sat, 13 Jul 2019 16:53:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comment fix of config_default.pl"
},
{
"msg_contents": "At Sat, 13 Jul 2019 16:53:45 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190713075345.GC2137@paquier.xyz>\n> On Fri, Jul 12, 2019 at 05:01:41PM +0900, Michael Paquier wrote:\n> > I would also patch GetFakeConfigure in Solution.pm (no need to send a\n> > new patch), and I thought that you'd actually do the change. What do\n> > you think?\n> \n> OK, applied as I have been able to look at it again, and after fixing\n> the portion for GetFakeConfigure. Thanks! \n\nThanks for committing and your additional part.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Jul 2019 18:16:40 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Comment fix of config_default.pl"
}
] |
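The shape of the fix discussed in this thread can be illustrated with an abridged sketch of the config_default.pl options hash. This is not the verbatim file (the real hash contains many more entries, elided here); the point is that these parameters take a path, or undef, rather than a boolean, and their comments should say so, with the obsolete --with-ossp-uuid spelling replaced:

```perl
# Abridged sketch of src/tools/msvc/config_default.pl after the fix:
# each entry is either undef or the installation path of the dependency.
our $config = {
    # ... other options elided ...
    perl   => undef,    # --with-perl=<path>
    python => undef,    # --with-python=<path>
    uuid   => undef,    # --with-uuid=<path>
};
```

As Michael points out downthread, GetFakeConfigure in Solution.pm builds a configure-like string from these same keys, so it had to be adjusted in the same commit.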
[
{
"msg_contents": "Hi,\n\nThe 2019b DST update [1] disables DST for Brazil. This would take effect\nstarting November 2019. The last DST update in Postgres was 2019a in v11.3\n(since this update came in after the recent-most Postgres release).\n\nSince a ~3 month release cycle may be too close for some users, are there\nany plans for an early 11.5 (or are such occurrences not a candidate for an\nearly release)?\n\nReference:\na) https://mm.icann.org/pipermail/tz-announce/2019-July/000056.html\n-\nrobins\n\nHi,The 2019b DST update [1] disables DST for Brazil. This would take effect starting November 2019. The last DST update in Postgres was 2019a in v11.3 (since this update came in after the recent-most Postgres release).Since a ~3 month release cycle may be too close for some users, are there any plans for an early 11.5 (or are such occurrences not a candidate for an early release)? Reference:a) https://mm.icann.org/pipermail/tz-announce/2019-July/000056.html-robins",
"msg_date": "Fri, 12 Jul 2019 13:42:59 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Brazil disables DST - 2019b update"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 01:42:59PM +1000, Robins Tharakan wrote:\n> The 2019b DST update [1] disables DST for Brazil. This would take effect\n> starting November 2019. The last DST update in Postgres was 2019a in v11.3\n> (since this update came in after the recent-most Postgres release).\n> \n> Since a ~3 month release cycle may be too close for some users, are there\n> any plans for an early 11.5 (or are such occurrences not a candidate for an\n> early release)?\n> \n> Reference:\n> a) https://mm.icann.org/pipermail/tz-announce/2019-July/000056.html\n\nSo 2019b has been released on the 1st of July. Usually tzdata updates\nhappen just before a minor release, so this would get pulled in at the\nbeginning of August (https://www.postgresql.org/developer/roadmap/).\nTom, I guess that would be again the intention here?\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 13:04:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "On Fri, 12 Jul 2019 at 14:04, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Jul 12, 2019 at 01:42:59PM +1000, Robins Tharakan wrote:\n> So 2019b has been released on the 1st of July. Usually tzdata updates\n> happen just before a minor release, so this would get pulled in at the\n> beginning of August (https://www.postgresql.org/developer/roadmap/).\n> Tom, I guess that would be again the intention here?\n> --\n> Michael\n>\n\nAn August release does give a little more comfort. (I was expecting that\nthe August\ndate would get pushed out since 11.4 was an emergency release at the end of\nJune).\n\n-\nrobins",
"msg_date": "Fri, 12 Jul 2019 14:52:53 +1000",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "Robins Tharakan <tharakan@gmail.com> writes:\n> The 2019b DST update [1] disables DST for Brazil. This would take effect\n> starting November 2019. The last DST update in Postgres was 2019a in v11.3\n> (since this update came in after the recent-most Postgres release).\n\nYeah. I intend to install 2019b (or later?) before our next minor\nreleases.\n\n> Since a ~3 month release cycle may be too close for some users, are there\n> any plans for an early 11.5 (or are such occurrences not a candidate for an\n> early release)?\n\nWe do not consider tzdb updates to be a release-forcing issue.\nThe fact that we ship tzdb at all is just a courtesy to PG users who\nare on platforms that lack a more direct way to get tzdb updates.\nThe usual recommendation on well-maintained production systems is to\nconfigure PG with --with-system-tzdata, then rely on your platform\nvendor for timely updates of that data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jul 2019 10:14:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 4:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robins Tharakan <tharakan@gmail.com> writes:\n> > The 2019b DST update [1] disables DST for Brazil. This would take effect\n> > starting November 2019. The last DST update in Postgres was 2019a in\n> v11.3\n> > (since this update came in after the recent-most Postgres release).\n>\n> Yeah. I intend to install 2019b (or later?) before our next minor\n> releases.\n>\n> > Since a ~3 month release cycle may be too close for some users, are there\n> > any plans for an early 11.5 (or are such occurrences not a candidate for\n> an\n> > early release)?\n>\n> We do not consider tzdb updates to be a release-forcing issue.\n> The fact that we ship tzdb at all is just a courtesy to PG users who\n> are on platforms that lack a more direct way to get tzdb updates.\n> The usual recommendation on well-maintained production systems is to\n> configure PG with --with-system-tzdata, then rely on your platform\n> vendor for timely updates of that data.\n>\n\nIt should be noted that this is not true on Windows -- on Windows we cannot\nuse the system timezone functionality, and rely entirely on the files we\nship as part of our release.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 12 Jul 2019 16:22:58 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Fri, Jul 12, 2019 at 4:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The usual recommendation on well-maintained production systems is to\n>> configure PG with --with-system-tzdata, then rely on your platform\n>> vendor for timely updates of that data.\n\n> It should be noted that this is not true on Windows -- on Windows we cannot\n> use the system timezone functionality, and rely entirely on the files we\n> ship as part of our release.\n\nIMO this is one of many reasons why Windows isn't a great choice of\nplatform for production use of Postgres ;-).\n\nI hear that Microsoft is going to start embedding some flavor of\nLinux in Windows, which presumably would extend to having a copy of\n/usr/share/zoneinfo somewhere. It'll be interesting to see how that\nworks and whether they'll maintain it well enough that it'd be a\nplausible tzdata reference.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jul 2019 10:33:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 02:52:53PM +1000, Robins Tharakan wrote:\n>On Fri, 12 Jul 2019 at 14:04, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Fri, Jul 12, 2019 at 01:42:59PM +1000, Robins Tharakan wrote:\n>> So 2019b has been released on the 1st of July. Usually tzdata updates\n>> happen just before a minor release, so this would get pulled in at the\n>> beginning of August (https://www.postgresql.org/developer/roadmap/).\n>> Tom, I guess that would be again the intention here?\n>> --\n>> Michael\n>>\n>\n>An August release does give a little more comfort. (I was expecting that\n>the August\n>date would get pushed out since 11.4 was an emergency release at the end of\n>June).\n>\n\nI think the plan is still to do the August release. One of the fixes in\nthe out-of-cycle release actually introduced a new regression, but we've\ndecided not to do another one exactly because there's a next minor release\nscheduled in ~three weeks anyway.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 17:08:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Jul 12, 2019 at 02:52:53PM +1000, Robins Tharakan wrote:\n>> An August release does give a little more comfort. (I was expecting that\n>> the August\n>> date would get pushed out since 11.4 was an emergency release at the end of\n>> June).\n\n> I think the plan is still to do the August release.\n\nYes, the August releases will happen on the usual schedule.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jul 2019 11:31:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Brazil disables DST - 2019b update"
}
] |
[
{
"msg_contents": "I am trying to build Postgres in an IDE (CLion on Ubuntu), and I'm getting\nthe following error message:\n\nIn file included from /workspace/src/postgres/src/include/c.h:61:0,\n from /workspace/src/postgres/src/include/postgres.h:46,\n from /workspace/src/postgres/contrib/bloom/blcost.c:13:\n/workspace/src/postgres/src/include/common/string.h:13:8: error: unknown\ntype name ‘bool’\n extern bool pg_str_endswith(const char *str, const char *end);\n\nCLion created a CMakeLists.txt file with the following at the top:\n\ncmake_minimum_required(VERSION 3.14)\nproject(postgres)\nset(CMAKE_CXX_STANDARD 14)\n\nAnd my compiler version is: gcc version 7.4.0 (Ubuntu\n7.4.0-1ubuntu1~18.04.1)\n\nAny thoughts? (disclaimer: I have much more experience with Java than C)\n\nThanks,\n\nIgal",
"msg_date": "Thu, 11 Jul 2019 22:21:06 -0700",
"msg_from": "Igal Sapir <igal@lucee.org>",
"msg_from_op": true,
"msg_subject": "Unknown type name bool"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 10:21:06PM -0700, Igal Sapir wrote:\n> Any thoughts? (disclaimer: I have much more experience with Java than C)\n\nWe don't support cmake directly. Here is the documentation about how\nto build the beast:\nhttps://www.postgresql.org/docs/current/install-procedure.html\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 14:27:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unknown type name bool"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 10:27 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Jul 11, 2019 at 10:21:06PM -0700, Igal Sapir wrote:\n> > Any thoughts? (disclaimer: I have much more experience with Java than C)\n>\n> We don't support cmake directly. Here is the documentation about how\n> to build the beast:\n> https://www.postgresql.org/docs/current/install-procedure.html\n\n\nThank you, Michael, but my goal is not to just build from source, but to\nrun Postgres in an IDE. I tried CLion because it's modern and cross\nplatform, but I am open to other IDEs.\n\nWhat IDEs do Postgres hackers use (other than vi with gcc)? Is there any\ndocumentation or posts on how to set up the project in an IDE?\n\nThanks,\n\nIgal",
"msg_date": "Thu, 11 Jul 2019 22:54:53 -0700",
"msg_from": "Igal Sapir <igal@lucee.org>",
"msg_from_op": true,
"msg_subject": "Re: Unknown type name bool"
},
{
"msg_contents": "On 12/07/2019 17:54, Igal Sapir wrote:\n> On Thu, Jul 11, 2019 at 10:27 PM Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> wrote:\n>\n> On Thu, Jul 11, 2019 at 10:21:06PM -0700, Igal Sapir wrote:\n> > Any thoughts? (disclaimer: I have much more experience with\n> Java than C)\n>\n> We don't support cmake directly. Here is the documentation about how\n> to build the beast:\n> https://www.postgresql.org/docs/current/install-procedure.html\n>\n>\n> Thank you, Michael, but my goal is not to just build from source, but \n> to run Postgres in an IDE. I tried CLion because it's modern and \n> cross platform, but I am open to other IDEs.\n>\n> What IDEs do Postgres hackers use (other than vi with gcc)? Is there \n> any documentation or posts on how to set up the project in an IDE?\n>\n> Thanks,\n>\n> Igal\n>\nI'm not a pg hacker.\n\nHowever, I'd use Eclipse -- but I don't do much programming these days.\n\nReal Programmers use emacs. I used emacs very successfully for \nprogramming in C over twenty years ago. If you're willing to put in the \neffort, emacs is worth it.\n\nBoth emacs & Eclipse have integrated debuggers. As I suspects all \nmodern IDE's do. :-)\n\nI wouldn't use vi.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 18:13:26 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Unknown type name bool"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 10:54:53PM -0700, Igal Sapir wrote:\n> Thank you, Michael, but my goal is not to just build from source, but to\n> run Postgres in an IDE. I tried CLion because it's modern and cross\n> platform, but I am open to other IDEs.\n>\n> What IDEs do Postgres hackers use (other than vi with gcc)?\n\nA set of N people would likely result in more than (N+1) different\napproaches when it comes to that. The environment is old school here\nas I just have a set of terminals coupled with emacs as editor and\ngcc/clang, but you have a large set of editors at your disposal (nano,\nvi, etc.).\n\n> Is there any documentation or posts on how to set up the project in an IDE?\n\nIt depends on what you are actually trying to do and how you want to\nease your development experience. I have little experience with CLion\nor such kind of tools, some with Eclipse, but I find that kind of\ncumbersome as well when it comes to C.\n--\nMichael",
"msg_date": "Fri, 12 Jul 2019 15:14:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unknown type name bool"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 11:14 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Jul 11, 2019 at 10:54:53PM -0700, Igal Sapir wrote:\n> > Thank you, Michael, but my goal is not to just build from source, but to\n> > run Postgres in an IDE. I tried CLion because it's modern and cross\n> > platform, but I am open to other IDEs.\n> >\n> > What IDEs do Postgres hackers use (other than vi with gcc)?\n>\n> A set of N people would likely result in more than (N+1) different\n> approaches when it comes to that. The environment is old school here\n> as I just have a set of terminals coupled with emacs as editor and\n> gcc/clang, but you have a large set of editors at your disposal (nano,\n> vi, etc.).\n>\n\nI'd actually be happy with (N+1) different approaches. It will allow me,\nand others like, to choose the one that works for us best.\n\n\n> > Is there any documentation or posts on how to set up the project in an\n> IDE?\n>\n> It depends on what you are actually trying to do and how you want to\n> ease your development experience. I have little experience with CLion\n> or such kind of tools, some with Eclipse, but I find that kind of\n> cumbersome as well when it comes to C.\n>\n\nAt the moment I am trying to run psql in a debugger with breakpoints. I\nhave spent many hours troubleshooting a `\\copy` from a large CSV that kept\nfailing, until I realized that there was a null character in the middle of\na quoted string. I'd be happy to submit a patch that at least warns of\nsuch issues when they happen.\n\nThanks,\n\nIgal",
"msg_date": "Thu, 11 Jul 2019 23:23:11 -0700",
"msg_from": "Igal Sapir <igal@lucee.org>",
"msg_from_op": true,
"msg_subject": "Re: Unknown type name bool"
}
] |
[
{
"msg_contents": "Hello.\n\nAs mentioned in the following message:\n\nhttps://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n\nMutable function are allowed in check constraint expressions but\nit is not right. The attached is a proposed fix for it including\nregression test.\n\nOther \"constraints vs xxxx\" checks do not seem to be exercised\nbut it would be another issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 12 Jul 2019 15:44:58 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Check-out mutable functions in check constraints"
},
{
"msg_contents": "Hi\n\npá 12. 7. 2019 v 8:45 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> Hello.\n>\n> As mentioned in the following message:\n>\n>\n> https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n>\n> Mutable function are allowed in check constraint expressions but\n> it is not right. The attached is a proposed fix for it including\n> regression test.\n>\n> Other \"constraints vs xxxx\" checks do not seem to be exercised\n> but it would be another issue.\n>\n\nI think so this feature (although is correct) can breaks almost all\napplications - it is 20 year late.\n\nRegards\n\nPavel\n\n\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Fri, 12 Jul 2019 08:55:20 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 08:55:20AM +0200, Pavel Stehule wrote:\n>Hi\n>\n>pá 12. 7. 2019 v 8:45 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n>napsal:\n>\n>> Hello.\n>>\n>> As mentioned in the following message:\n>>\n>>\n>> https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n>>\n>> Mutable function are allowed in check constraint expressions but\n>> it is not right. The attached is a proposed fix for it including\n>> regression test.\n>>\n>> Other \"constraints vs xxxx\" checks do not seem to be exercised\n>> but it would be another issue.\n>>\n>\n>I think so this feature (although is correct) can breaks almost all\n>applications - it is 20 year late.\n>\n\nI'm not sure it actually breaks such applications.\n\nLet's assume you have a mutable function (i.e. it may change return value\neven with the same parameters) and you use it in a CHECK constraint. Then\nI'm pretty sure your application is already broken in various ways and you\njust don't know it (sometimes it's subtle, sometimes less so).\n\nIf you have a function that actually is immutable and it's just not marked\naccordingly, then that only requires a single DDL to fix that during\nupgrade. I don't think that's a massive issue.\n\nThat being said, I don't know whether fixing this is worth the hassle.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 13:11:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 03:44:58PM +0900, Kyotaro Horiguchi wrote:\n>Hello.\n>\n>As mentioned in the following message:\n>\n>https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n>\n>Mutable function are allowed in check constraint expressions but\n>it is not right. The attached is a proposed fix for it including\n>regression test.\n>\n\nI think the comment in parse_expr.c is wrong:\n\n /*\n * All SQL value functions are stable so we reject them in check\n * constraint expressions.\n */\n if (pstate->p_expr_kind == EXPR_KIND_CHECK_CONSTRAINT)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n errmsg(\"mutable functions are not allowed in check constraints\")));\n\nAt first it claims SQL value functions are stable, but then rejects them\nwith a message that they're mutable.\n\nAlso, the other places use \"cannot ...\" messages:\n\n case EXPR_KIND_COLUMN_DEFAULT:\n err = _(\"cannot use column reference in DEFAULT expression\");\n break;\n\nso maybe these new checks should use the same style.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 13:14:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "pá 12. 7. 2019 v 13:11 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\nnapsal:\n\n> On Fri, Jul 12, 2019 at 08:55:20AM +0200, Pavel Stehule wrote:\n> >Hi\n> >\n> >pá 12. 7. 2019 v 8:45 odesílatel Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com>\n> >napsal:\n> >\n> >> Hello.\n> >>\n> >> As mentioned in the following message:\n> >>\n> >>\n> >>\n> https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n> >>\n> >> Mutable function are allowed in check constraint expressions but\n> >> it is not right. The attached is a proposed fix for it including\n> >> regression test.\n> >>\n> >> Other \"constraints vs xxxx\" checks do not seem to be exercised\n> >> but it would be another issue.\n> >>\n> >\n> >I think so this feature (although is correct) can breaks almost all\n> >applications - it is 20 year late.\n> >\n>\n> I'm not sure it actually breaks such appliations.\n>\n> Let's assume you have a mutable function (i.e. it may change return value\n> even with the same parameters) and you use it in a CHECK constraint. Then\n> I'm pretty sure your application is already broken in various ways and you\n> just don't know it (sometimes it subtle, sometimes less so).\n>\n\nYears ago SQL functions was used for checks instead triggers - I am not\nsure if this pattern was in documentation or not, but surely there was not\nany warning against it.\n\nYou can see some documents with examples\n\nCREATE OR REPLACE FUNCTION check_func(int)\nRETURNS boolean AS $$\nSELECT 1 FROM tab WHERE id = $1;\n$$ LANGUAGE sql;\n\nCREATE TABLE foo( ... id CHECK(check_func(id)));\n\n\n\n\n\n> If you have a function that actually is immutable and it's just not marked\n> accordingly, then that only requires a single DDL to fix that during\n> upgrade. 
I don't think that's a massive issue.\n>\n\nThese functions are stable, and this patch try to prohibit it.\n\nRegards\n\nPavel\n\n>\n> That being said, I don't know whether fixing this is worth the hassle.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>",
"msg_date": "Fri, 12 Jul 2019 14:00:25 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 02:00:25PM +0200, Pavel Stehule wrote:\n>pá 12. 7. 2019 v 13:11 odesílatel Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>napsal:\n>\n>> On Fri, Jul 12, 2019 at 08:55:20AM +0200, Pavel Stehule wrote:\n>> >Hi\n>> >\n>> >pá 12. 7. 2019 v 8:45 odesílatel Kyotaro Horiguchi <\n>> horikyota.ntt@gmail.com>\n>> >napsal:\n>> >\n>> >> Hello.\n>> >>\n>> >> As mentioned in the following message:\n>> >>\n>> >>\n>> >>\n>> https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n>> >>\n>> >> Mutable function are allowed in check constraint expressions but\n>> >> it is not right. The attached is a proposed fix for it including\n>> >> regression test.\n>> >>\n>> >> Other \"constraints vs xxxx\" checks do not seem to be exercised\n>> >> but it would be another issue.\n>> >>\n>> >\n>> >I think so this feature (although is correct) can breaks almost all\n>> >applications - it is 20 year late.\n>> >\n>>\n>> I'm not sure it actually breaks such applications.\n>>\n>> Let's assume you have a mutable function (i.e. it may change return value\n>> even with the same parameters) and you use it in a CHECK constraint. Then\n>> I'm pretty sure your application is already broken in various ways and you\n>> just don't know it (sometimes it's subtle, sometimes less so).\n>>\n>\n>Years ago SQL functions was used for checks instead triggers - I am not\n>sure if this pattern was in documentation or not, but surely there was not\n>any warning against it.\n>\n>You can see some documents with examples\n>\n>CREATE OR REPLACE FUNCTION check_func(int)\n>RETURNS boolean AS $$\n>SELECT 1 FROM tab WHERE id = $1;\n>$$ LANGUAGE sql;\n>\n>CREATE TABLE foo( ... id CHECK(check_func(id)));\n>\n\nConsidering this does not work (e.g. 
because in READ COMMITTED mode you\nwon't see the effects of uncommitted DELETE), I'd say this is a quite\nnice example of the breakage I mentioned before.\n\nYou might add locking and make it somewhat safer, but there will still\nbe plenty of holes (e.g. because you won't see new but not yet\ncommitted records). But it can cause issues e.g. with pg_dump [1].\n\nSo IMHO this is more an argument for adding the proposed check ...\n\n>\n>\n>> If you have a function that actually is immutable and it's just not marked\n>> accordingly, then that only requires a single DDL to fix that during\n>> upgrade. I don't think that's a massive issue.\n>>\n>\n>These functions are stable, and this patch try to prohibit it.\n>\n\nYes, and the question is whether this is the right thing to do (I think\nit probably is).\n\nOTOH, even if we prohibit mutable functions in check constraints, people\ncan still create triggers doing those checks (and shoot themselves in\nthe foot that way).\n\n\n[1] https://www.postgresql.org/message-id/6753.1452274727%40sss.pgh.pa.us\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Jul 2019 01:26:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Jul 12, 2019 at 02:00:25PM +0200, Pavel Stehule wrote:\n>>> Mutable function are allowed in check constraint expressions but\n>>> it is not right. The attached is a proposed fix for it including\n>>> regression test.\n\n> Yes, and the question is whether this is the right thing to do (I think\n> it probably is).\n\nI'm pretty sure this change has been proposed before, and rejected before.\nHas anybody excavated in the archives for prior discussions?\n\n> OTOH, even if we prohibit mutable functions in check constraints, people\n> can still create triggers doing those checks (and shoot themselves in\n> the foot that way).\n\nThere are, and always will be, lots of ways to shoot yourself in the foot.\nIn the case at hand, I fear we might just encourage people to mark\nfunctions as immutable when they really aren't --- which will make their\nproblems *worse* not better, because now other uses besides check\nconstraints will also be at risk of misoptimization.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jul 2019 19:59:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "On Fri, Jul 12, 2019 at 07:59:13PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Fri, Jul 12, 2019 at 02:00:25PM +0200, Pavel Stehule wrote:\n>>>> Mutable function are allowed in check constraint expressions but\n>>>> it is not right. The attached is a proposed fix for it including\n>>>> regression test.\n>\n>> Yes, and the question is whether this is the right thing to do (I think\n>> it probably is).\n>\n>I'm pretty sure this change has been proposed before, and rejected before.\n>Has anybody excavated in the archives for prior discussions?\n>\n\nYes, I've done some quick searches like \"volatile constraint\" and so on.\nThere are a couple of relevant discussions:\n\n2004: https://www.postgresql.org/message-id/flat/0C3A1AEC-6BE4-11D8-9224-000A95C88220%40myrealbox.com\n\n2010: https://www.postgresql.org/message-id/flat/12849.1277918175%40sss.pgh.pa.us#736c8ef9d7810c0bb85f495490fd40f5\n\nBut I don't think the conclusions are particularly clear.\n\nIn the first thread you seem to agree with requiring immutable functions\nfor check constraints (and triggers for one-time checks). 
The second\nthread ended up discussing some new related stuff in SQL standard.\n\nThere may be other threads and I just haven't found them, of course.\n\n>> OTOH, even if we prohibit mutable functions in check constraints, people\n>> can still create triggers doing those checks (and shoot themselves in\n>> the foot that way).\n>\n>There are, and always will be, lots of ways to shoot yourself in the foot.\n>In the case at hand, I fear we might just encourage people to mark\n>functions as immutable when they really aren't --- which will make their\n>problems *worse* not better, because now other uses besides check\n>constraints will also be at risk of misoptimization.\n>\n\nTrue.\n\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Jul 2019 03:58:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Jul 12, 2019 at 07:59:13PM -0400, Tom Lane wrote:\n>> I'm pretty sure this change has been proposed before, and rejected before.\n>> Has anybody excavated in the archives for prior discussions?\n\n> Yes, I've done some quick searches like \"volatile constraint\" and so on.\n> There are a couple of relevant discussions:\n> 2004: https://www.postgresql.org/message-id/flat/0C3A1AEC-6BE4-11D8-9224-000A95C88220%40myrealbox.com\n> 2010: https://www.postgresql.org/message-id/flat/12849.1277918175%40sss.pgh.pa.us#736c8ef9d7810c0bb85f495490fd40f5\n> But I don't think the conclusions are particularly clear.\n> In the first thread you seem to agree with requiring immutable functions\n> for check constraints (and triggers for one-time checks). The second\n> thread ended up discussing some new related stuff in SQL standard.\n\nWell, I think that second thread is very relevant here, because\nit correctly points out that we are *required by spec* to allow\ncheck constraints of the form CHECK(datecol <= CURRENT_DATE) and\nrelated tests. See the stuff about \"retrospectively deterministic\"\npredicates in SQL:2003 or later.\n\nI suppose you could imagine writing some messy logic that allowed the\nspecific cases called out by the spec but not any other non-immutable\nfunction calls. But that just leaves us with an inconsistent\nrestriction. If the spec is allowing this because it can be seen\nto be safe, why should we not allow other cases that the user has\ntaken the trouble to prove to themselves are safe? (If their proof is\nwrong, well, it wouldn't be the first bug in anyone's SQL application.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jul 2019 11:17:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "On Sat, Jul 13, 2019 at 11:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If the spec is allowing this because it can be seen\n> to be safe, why should we not allow other cases that the user has\n> taken the trouble to prove to themselves are safe? (If their proof is\n> wrong, well, it wouldn't be the first bug in anyone's SQL application.)\n\nWell said.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 10:20:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "Mmm.\n\n# I eventually found messages sent to me stashed in unexpected\n# place. I felt I was in a void space for these days.. That's\n# silly!\n\nThank you for the comment.\n\n# Putting aside the applicability(?) of this check..\n\nAt Fri, 12 Jul 2019 13:14:57 +0200, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote in <20190712111457.ekkcgx5mpkxl2ooh@development>\n> On Fri, Jul 12, 2019 at 03:44:58PM +0900, Kyotaro Horiguchi wrote:\n> >Hello.\n> >\n> >As mentioned in the following message:\n> >\n> >https://www.postgresql.org/message-id/20190712.150527.145133646.horikyota.ntt%40gmail.com\n> >\n> >Mutable function are allowed in check constraint expressions but\n> >it is not right. The attached is a proposed fix for it including\n> >regression test.\n> >\n> \n> I think the comment in parse_expr.c is wrong:\n> \n> /*\n> * All SQL value functions are stable so we reject them in check\n> * constraint expressions.\n> */\n> if (pstate->p_expr_kind == EXPR_KIND_CHECK_CONSTRAINT)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n> errmsg(\"mutable functions are not allowed in check\n> constraints\")));\n> \n> At first it claims SQL value functions are stable, but then rejects\n> them\n> with a message that they're mutable.\n\nIsn't Stable mutable? By definition stable functions can return\ndifferent values with the same input. But the message may be\nsomewhat confusing for unaccustomed users.\n\n> Also, the other places use \"cannot ...\" messages:\n> \n> case EXPR_KIND_COLUMN_DEFAULT:\n> err = _(\"cannot use column reference in DEFAULT expression\");\n> break;\n> \n> so maybe these new checks should use the same style.\n\nIt is following existing messages like the following:\n\nparse_func.c:2497\n| case EXPR_KIND_CHECK_CONSTRAINT:\n| case EXPR_KIND_DOMAIN_CHECK:\n| err = _(\"set-returning functions are not allowed in check constraints\");\n\nShould we unify them? 
\"are not allowed in\" is used in\nparse_func.c and parse_agg.c, and \"cannot use\" is used in\nparse_expr.c for the same instruction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Jul 2019 17:52:34 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check-out mutable functions in check constraints"
},
{
"msg_contents": "Hello, Thanks all!\n\nAt Sat, 13 Jul 2019 11:17:32 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in <18372.1563031052@sss.pgh.pa.us>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Fri, Jul 12, 2019 at 07:59:13PM -0400, Tom Lane wrote:\n> >> I'm pretty sure this change has been proposed before, and rejected before.\n> >> Has anybody excavated in the archives for prior discussions?\n> \n> > Yes, I've done some quick searches like \"volatile constraint\" and so on.\n> > There are a couple of relevant discussions:\n> > 2004: https://www.postgresql.org/message-id/flat/0C3A1AEC-6BE4-11D8-9224-000A95C88220%40myrealbox.com\n> > 2010: https://www.postgresql.org/message-id/flat/12849.1277918175%40sss.pgh.pa.us#736c8ef9d7810c0bb85f495490fd40f5\n> > But I don't think the conclusions are particularly clear.\n> > In the first thread you seem to agree with requiring immutable functions\n> > for check constraints (and triggers for one-time checks). The second\n> > thread ended up discussing some new related stuff in SQL standard.\n> \n> Well, I think that second thread is very relevant here, because\n> it correctly points out that we are *required by spec* to allow\n> check constraints of the form CHECK(datecol <= CURRENT_DATE) and\n> related tests. See the stuff about \"retrospectively deterministic\"\n> predicates in SQL:2003 or later.\n> \n> I suppose you could imagine writing some messy logic that allowed the\n> specific cases called out by the spec but not any other non-immutable\n> function calls. But that just leaves us with an inconsistent\n> restriction. If the spec is allowing this because it can be seen\n> to be safe, why should we not allow other cases that the user has\n> taken the trouble to prove to themselves are safe? 
(If their proof is\n> wrong, well, it wouldn't be the first bug in anyone's SQL application.)\n> \n> \t\t\tregards, tom lane\n\nIf we have a CURRENT_DATE() that always returns a UTC timestamp\n(or something like that), then CURRENT_DATE()::text gives a local\nrepresentation. We may have constraints using CURRENT_DATE()\nsince it is truly immutable. I think the spec can be interpreted\nthat way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 16 Jul 2019 18:15:22 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check-out mutable functions in check constraints"
}
] |
[
{
"msg_contents": "Hello PostgreSQL-development,\n\nsomething's stopping the planner from being able to deduce that (t.o).id is safe to push through a GROUP BY ocd.o\n\n SELECT * FROM (\n SELECT\n sum( t.group_suma ) OVER( PARTITION BY t.id ) AS total_suma,\n-- sum( t.group_suma ) OVER( PARTITION BY (t.o).id ) AS total_suma, -- For any WHERE this takes 2700ms\n *\n FROM (\n SELECT\n sum( ocd.item_cost ) AS group_cost,\n sum( ocd.item_suma ) AS group_suma,\n max( (ocd.ic).consumed ) AS consumed,\n (ocd.ic).consumed_period,\n ocd.o,\n (ocd.o).id\n FROM order_cost_details( tstzrange( '2019-04-01', '2019-05-01' ) ) ocd\n GROUP BY ocd.o, (ocd.o).id, (ocd.ic).consumed_period\n ) t\n ) t\n WHERE t.id = 6154 AND t.consumed_period @> '2019-04-01'::timestamptz -- This takes 2ms\n-- WHERE (t.o).id = 6154 AND t.consumed_period @> '2019-04-01'::timestamptz -- This takes 2700ms\n\n\nMore info is here: https://stackoverflow.com/q/57003113/4632019\n\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Fri, 12 Jul 2019 13:04:27 +0300",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Request for improvement: Allow to push (t.o).id via GROUP BY ocd.o"
},
{
"msg_contents": "Hello\n\nto my mind it may be done, because `id` is the primary key of the `o` table\n\nFriday, July 12, 2019, 1:04:27 PM, you wrote:\n\n> Hello PostgreSQL-development,\n\n> something's stopping the planner from being able to deduce that\n> (t.o).id is safe to push through a GROUP BY ocd.o\n\n> SELECT * FROM (\n> SELECT\n> sum( t.group_suma ) OVER( PARTITION BY t.id ) AS total_suma,\n> -- sum( t.group_suma ) OVER( PARTITION\n> BY (t.o).id ) AS total_suma, -- For any WHERE this takes 2700ms\n> *\n> FROM (\n> SELECT\n> sum( ocd.item_cost ) AS group_cost,\n> sum( ocd.item_suma ) AS group_suma,\n> max( (ocd.ic).consumed ) AS consumed,\n> (ocd.ic).consumed_period,\n> ocd.o,\n> (ocd.o).id\n> FROM order_cost_details( tstzrange(\n> '2019-04-01', '2019-05-01' ) ) ocd\n> GROUP BY ocd.o, (ocd.o).id, (ocd.ic).consumed_period\n> ) t\n> ) t\n> WHERE t.id = 6154 AND t.consumed_period @>\n> '2019-04-01'::timestamptz -- This takes 2ms\n> -- WHERE (t.o).id = 6154 AND t.consumed_period @>\n> '2019-04-01'::timestamptz -- This takes 2700ms\n\n\n> More info is here: https://stackoverflow.com/q/57003113/4632019\n\n\n\n\n-- \nBest regards,\nEugen Konkov\n\n\n",
"msg_date": "Fri, 12 Jul 2019 13:32:49 +0300",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Request for improvement: Allow to push (t.o).id via GROUP BY\n ocd.o"
},
{
"msg_contents": "And, probably, next query belongs to same issue:\n\nSELECT\n--next_ots.group_cost AS next_cost,\n(SELECT next_ots FROM order_total_suma( next_range ) next_ots \nWHERE next_ots.order_id = ots.order_id AND next_ots.consumed_period @> (ots.o).billed_to\n) AS next_suma, -- << this takes 111ms only\nots.* FROM (\n SELECT \n tstzrange(\n NULLIF( (ots.o).billed_to, 'infinity' ),\n NULLIF( (ots.o).billed_to +p.interval, 'infinity' )\n ) as next_range,\n ots.*\n FROM order_total_suma() ots\n LEFT JOIN period p ON p.id = (ots.o).period_id\n) ots\n--LEFT JOIN order_total_suma( next_range ) next_ots ON next_ots.order_id = 6154 --<< this is fine\n-- AND next_ots.consumed_period @> (ots.o).billed_to \n--LEFT JOIN order_total_suma( next_range ) next_ots ON next_ots.order_id = ots.order_id --<< this takes 11500ms\n-- AND next_ots.consumed_period @> (ots.o).billed_to \nWHERE ots.order_id IN ( 6154, 10805 )\n\n\nid is not pushed for LEFT JOIN\n\n\nI have attached plans:\n\n\n\n-- \nBest regards,\nEugen Konkov",
"msg_date": "Fri, 12 Jul 2019 15:27:46 +0300",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: Request for improvement: Allow to push (t.o).id via GROUP BY\n ocd.o"
}
] |
[
{
"msg_contents": "Hello,\n pg_dump creates plain ALTER TABLE statements even if the table is a foreign table, which for someone reading the dump is confusing.\n This also made a difference when applying the dump if there is any plugin installed that hooks on ProcessUtility, because the plugin could react differently to ALTER TABLE than to ALTER FOREIGN TABLE. Opinions?\n\n An unrelated question: if I apply pgindent to a file (in this case pg_dump.c) and get a bunch of changes on the indentation that are not related to my patch, which is the accepted policy? A different patch first with only the indentation? Maybe, am I using pgindent wrong?\n\nCheers\nLuis M Carril",
"msg_date": "Fri, 12 Jul 2019 12:02:37 +0000",
"msg_from": "Luis Carril <luis.carril@swarm64.com>",
"msg_from_op": true,
"msg_subject": "Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On 2019-Jul-12, Luis Carril wrote:\n\n> Hello,\n> pg_dump creates plain ALTER TABLE statements even if the table is a foreign table, which for someone reading the dump is confusing.\n> This also made a difference when applying the dump if there is any plugin installed that hooks on ProcessUtility, because the plugin could react differently to ALTER TABLE than to ALTER FOREIGN TABLE. Opinions?\n\nI think such a hook would be bogus, because it would miss anything done\nby a user manually.\n\nI don't disagree with adding FOREIGN, though.\n\nYour patch is failing the pg_dump TAP tests. Please use\nconfigure --enable-tap-tests, fix the problems, then resubmit.\n\n> An unrelated question: if I apply pgindent to a file (in this case pg_dump.c) and get a bunch of changes on the indentation that are not related to my patch, which is the accepted policy? A different patch first with only the indentation? Maybe, am I using pgindent wrong?\n\nWe don't typically accept pgindent-only changes at random points in\nthe devel cycle.\n\nI would suggest to run pgindent over the file and \"git add -p\" only the\nchanges that are relevant to your patch, discard the rest.\n(Alternative: run pgindent, commit that, then apply your patch, pgindent\nagain and \"git commit --amend\", then \"git rebase -i\" and discard the\nfirst pgindent commit. Your patch ends up pgindent-correct without\ndisturbing the rest of the file/tree).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:04:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "I don't disagree with adding FOREIGN, though.\n\nYour patch is failing the pg_dump TAP tests. Please use\nconfigure --enable-tap-tests, fix the problems, then resubmit.\n\nFixed, I've attached a new version.\n\n\nCheers\nLuis M Carril",
"msg_date": "Thu, 26 Sep 2019 13:47:28 +0000",
"msg_from": "Luis Carril <luis.carril@swarm64.com>",
"msg_from_op": true,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 26, 2019 at 01:47:28PM +0000, Luis Carril wrote:\n>\n>I don't disagree with adding FOREIGN, though.\n>\n>Your patch is failing the pg_dump TAP tests. Please use\n>configure --enable-tap-tests, fix the problems, then resubmit.\n>\n>Fixed, I've attached a new version.\n>\n\nThis seems like a fairly small and non-controversial patch (I agree with\nAlvaro that having the optional FOREIGN seems won't hurt). So barring\nobjections I'll polish it a bit and push sometime next week.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 11 Jan 2020 02:52:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 7:17 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>\n>\n> I don't disagree with adding FOREIGN, though.\n>\n> Your patch is failing the pg_dump TAP tests. Please use\n> configure --enable-tap-tests, fix the problems, then resubmit.\n>\n> Fixed, I've attached a new version.\n\nWill it be possible to add a test case for this, can we validate by\nadding one test?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jan 2020 17:43:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Thu, Sep 26, 2019 at 7:17 PM Luis Carril <luis.carril@swarm64.com> wrote:\n>>> Your patch is failing the pg_dump TAP tests. Please use\n>>> configure --enable-tap-tests, fix the problems, then resubmit.\n\n>> Fixed, I've attached a new version.\n\n> Will it be possible to add a test case for this, can we validate by\n> adding one test?\n\nIsn't the change in the TAP test output sufficient?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jan 2020 09:22:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On 2020-Jan-11, Tomas Vondra wrote:\n\n> Hi,\n> \n> On Thu, Sep 26, 2019 at 01:47:28PM +0000, Luis Carril wrote:\n> > \n> > I don't disagree with adding FOREIGN, though.\n> > \n> > Your patch is failing the pg_dump TAP tests. Please use\n> > configure --enable-tap-tests, fix the problems, then resubmit.\n> > \n> > Fixed, I've attached a new version.\n> \n> This seems like a fairly small and non-controversial patch (I agree with\n> Alvaro that having the optional FOREIGN seems won't hurt). So barring\n> objections I'll polish it a bit and push sometime next week.\n\nIf we're messing with that code, we may as well reduce cognitive load a\nlittle bit and unify all those multiple consecutive appendStringInfo\ncalls into one. (My guess is that this was previously not possible\nbecause there were multiple fmtId() calls in the argument list, but\nthat's no longer the case.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 13 Jan 2020 12:36:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 7:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Thu, Sep 26, 2019 at 7:17 PM Luis Carril <luis.carril@swarm64.com> wrote:\n> >>> Your patch is failing the pg_dump TAP tests. Please use\n> >>> configure --enable-tap-tests, fix the problems, then resubmit.\n>\n> >> Fixed, I've attached a new version.\n>\n> > Will it be possible to add a test case for this, can we validate by\n> > adding one test?\n>\n> Isn't the change in the TAP test output sufficient?\n>\n\nI could not see any expected file output changes in the patch. Should\nwe modify the existing test to validate this.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Jan 2020 06:02:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On 2020-Jan-14, vignesh C wrote:\n\n> On Mon, Jan 13, 2020 at 7:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > vignesh C <vignesh21@gmail.com> writes:\n> > > On Thu, Sep 26, 2019 at 7:17 PM Luis Carril <luis.carril@swarm64.com> wrote:\n> > >>> Your patch is failing the pg_dump TAP tests. Please use\n> > >>> configure --enable-tap-tests, fix the problems, then resubmit.\n> >\n> > >> Fixed, I've attached a new version.\n> >\n> > > Will it be possible to add a test case for this, can we validate by\n> > > adding one test?\n> >\n> > Isn't the change in the TAP test output sufficient?\n> \n> I could not see any expected file output changes in the patch. Should\n> we modify the existing test to validate this.\n\nYeah, I think there should be at least one regexp in t/002_pg_dump.pl to\nverify ALTER FOREIGN TABLE is being produced.\n\nI wonder if Tom is thinking about Luis' other pg_dump patch for foreign\ntables, which includes some changes to src/test/modules/test_pg_dump.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 19:23:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On Mon, Jan 13, 2020 at 7:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Isn't the change in the TAP test output sufficient?\n\n> Yeah, I think there should be at least one regexp in t/002_pg_dump.pl to\n> verify ALTER FOREIGN TABLE is being produced.\n> I wonder if Tom is thinking about Luis' other pg_dump patch for foreign\n> tables, which includes some changes to src/test/modules/test_pg_dump.\n\nNo, I was just reacting to the comment that the TAP test was failing,\nand assuming that that meant the patch had already changed the expected\noutput. Looking at the patch now, I suppose that just means it had\nincautiously changed whitespace or something for the non-foreign case.\n\nI can't get terribly excited about persuading that test to cover this\ntrivial little bit of logic, but if you are, I won't stand in the way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:42:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On 2020-Jan-14, Tom Lane wrote:\n\n> I can't get terribly excited about persuading that test to cover this\n> trivial little bit of logic, but if you are, I won't stand in the way.\n\nHmm, that's a good point actually: the patch changed several places to\ninject the FOREIGN keyword, so in order to cover them all it would need\nseveral additional regexps, not just one. I'm not sure that\n002_pg_dump.pl is prepared to do that without unsightly contortions.\n\nAnyway, other than that minor omission the patch seemed good to me, so I\ndon't oppose Tomas pushing the version I posted yesterday. Or I can, if\nhe prefers that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 20:04:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "> On 15 Jan 2020, at 00:04, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Jan-14, Tom Lane wrote:\n> \n>> I can't get terribly excited about persuading that test to cover this\n>> trivial little bit of logic, but if you are, I won't stand in the way.\n> \n> Hmm, that's a good point actually: the patch changed several places to\n> inject the FOREIGN keyword, so in order to cover them all it would need\n> several additional regexps, not just one. I'm not sure that\n> 002_pg_dump.pl is prepared to do that without unsightly contortions.\n\nI agree that it doesn't seem worth holding up this patch for that, even though\nit would be nice if we do add a test at some point.\n\n> Anyway, other than that minor omission the patch seemed good to me, so I\n> don't oppose Tomas pushing the version I posted yesterday. Or I can, if\n> he prefers that.\n\nThis patch still applies with some offsets and a bit of fuzz, and looking over\nthe patch I agree with Alvaro.\n\nMoving this patch to Ready for Committer.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 19 Mar 2020 14:29:42 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
},
{
"msg_contents": "On 2020-Mar-19, Daniel Gustafsson wrote:\n\n> Moving this patch to Ready for Committer.\n\nThanks, pushed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Mar 2020 17:35:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add FOREIGN to ALTER TABLE in pg_dump"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nHere's a set of ideas that I think could get rid of wraparound freezes\nfrom the traditional heap, using undo logs technology. They're\ninspired by things that Heikki has said over various threads, adapted\nto our proposed undo infrastructure.\n\n1. Don't freeze committed xids by brute force search. Keep using the\nsame tuple header as today, but add a pair of 64 bit FullTransactionIds\nto the page header (low fxid, high fxid) so that xids are not\nambiguous even after they wrap around. If you ever find yourself\nupdating high fxid to a value that is too far ahead of low fxid, you\nneed to do a micro-freeze of the page, but you were already writing to\nthe page so that's cool.\n\n2. Get rid of aborted xids eagerly, instead of relying on brute force\nscans to move the horizon. Remove the xid references at rollback time\nwith the undo machinery we've built for zheap. While zheap uses undo\nrecords to rollback the effects of a transaction (reversing in-place\nupdates etc), these would be very simple undo records that simply\nremove item pointers relating to aborted transactions, so their xids\nvanish from the heap. Now the horizon for oldest aborted xid that you\ncan find anywhere in the system is oldest-xid-having-undo, which is\ntracked by the undo machinery. You don't need to keep more clog than\nthat AFAIK, other than to support the txid_status() function.\n\n3. Don't freeze multixacts by brute force search. Instead, invent 64\nbit multixacts and track (low fmxid, high fmxid) and do micro-freezing\non the page when the range would be too wide, as we did in point 1 for\nxids.\n\n4. Get rid of multixacts eagerly. Move the contents of\npg_multixact/members into undo logs, using the new UNDO_SHARED records\nthat we invented at PGCon[1] for essentially the same purpose in\nzheap. 
This is storage that is automatically cleaned up by a \"discard\nworker\" when every member of a set of xids is no longer running (and\nit's a bit like the \"TED\" storage that Heikki talked about a few years\nback[2]). Keep pg_multixact/offsets, but change it to contain undo\nrecord pointers that point to UNDO_SHARED records holding the members.\nIt is a map of multixact ID -> undo holding the members, and it needs\nto exist only to preserve the 32 bit size of multixact IDs; it'd be\nnicer to use the undo rec ptr directly, but the goal in this thought\nexperiment is to make minimal format changes to kill freezing (if you\nwant more drastic changes, see zheap). Now you just have to figure\nout how to trim pg_multixact/offsets, and I think that could be done\nperiodically by testing the oldest multixact it holds: has the undo\nrecord it points to been discarded? If so we can trim this multixact.\n\nFinding room for 4 64 bit values on the page header is of course\ntricky and incompatible with pg_upgrade, and hard to support\nincrementally. I also don't know exactly at which point you'd\nconsider high fxid in visibility computations, considering that in\nplaces where you have a tuple pointer, you can't easily find the high\nfxid you need. One cute but scary idea is that when you're scanning\nthe heap you'd non-durably clobber xmin and xmax with\nFrozenTransactionId if appropriate.\n\n[1] https://www.postgresql.org/message-id/CA+hUKGKni7EEU4FT71vZCCwPeaGb2PQOeKOFjQJavKnD577UMQ@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/55511D1F.7050902%40iki.fi\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jul 2019 12:33:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some thoughts on heaps and freezing"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next batch of typos and inconsistencies in\nthe tree:\n6.1. FADVISE_WILLNEED -> POSIX_FADV_WILLNEED\n6.2. failOK -> missing_ok\n6.3. failOnerror -> failOnSignal\n6.4. fakebits -> remove (irrelevant since introduction in 945543d9)\n6.5. FastPathGetLockEntry -> FastPathGetRelationLockEntry\n6.6. FAST_PATH_HASH_BUCKETS -> FAST_PATH_STRONG_LOCK_HASH_PARTITIONS\n6.7. FastPathTransferLocks -> FastPathTransferRelationLocks\n6.8. GenericOptionFlags -> remove (unused since 090173a3)\n6.9. fetch_data -> fetched_data\n6.10. fildes -> fd\n6.11. filedescriptors -> file descriptors\n6.12. fillatt -> remove (orphaned since 8609d4ab)\n6.13. finalfunction -> finalfn\n6.14. flail -> fail\n6.15. FlushBuffers -> FlushBuffer & rephrase a comment (incorrectly\nupdated in 6f5c38dc)\n6.16. flush_context -> wb_context\n6.17. followon -> follow-on\n6.18. force_quotes -> remove (orphaned since e18d900d)\n6.19. formatstring -> format-string\n6.20. formarray, formfloat -> remove (orphaned since a237dd2b)\n6.21. found_row_type -> found_whole_row\n6.22. freeScanStack -> remove a comment (irrelevant since 2a636834)\n6.23. free_segment_counter -> freed_segment_counter\n6.24. FreeSpace Map -> FreeSpaceMap\n6.25. user friendly-operand -> user-friendly operand\n6.26. frozenids -> frozenxids\n6.27. fsm_internal.h -> fsm_internals.h\n6.28. fsm_size_to_avail_cat -> fsm_space_avail_to_cat\n6.29. full_result -> full_results\n6.30. FULL_SIZ -> remove (orphaned since 65b731bd)\n6.31. funxtions -> functions\n6.32. generate_nonunion_plan, generate_union_plan ->\ngenerate_nonunion_paths, generate_union_paths\n6.33. getaddinfo -> getaddrinfo\n6.34. get_expr, get_indexdef, get_ruledef, get_viewdef, get_triggerdef,\nget_userbyid -> pg_get_*\n6.35. GetHashPageStatis -> GetHashPageStats\n6.36. GetNumShmemAttachedBgworkers -> remove (orphaned since 6bc8ef0b)\n6.37. get_one_range_partition_bound_string -> get_range_partbound_string\n6.38. 
getPartitions -> remove a comment (irrelevant since 44c52881)\n6.39. GetRecordedFreePage -> GetRecordedFreeSpace\n6.40. get_special_varno -> resolve_special_varno\n6.41. gig -> GB\n6.42. GinITupIsCompressed -> GinItupIsCompressed\n6.43. GinPostingListSegmentMaxSize-bytes -> GinPostingListSegmentMaxSize\nbytes\n6.44. gistfindCorrectParent -> gistFindCorrectParent\n6.45. gistinserthere -> gistinserttuple\n6.46. GISTstate -> giststate\n6.47. GlobalSerializableXmin, SerializableGlobalXmin -> SxactGlobalXmin\n6.48. Greenwish -> Greenwich\n6.49. groupClauseVars -> groupClauseCommonVars\n\nAs a side note, while looking at dt_common.c (fixing 6.47), I've got a\nfeeling that the datetktbl is largely outdated and thus mostly unuseful\n(e.g. USSR doesn't exist for almost 30 years).\n\nBest regards,\nAlexander",
"msg_date": "Sun, 14 Jul 2019 08:24:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for HEAD (take 6)"
},
{
"msg_contents": "On Sun, Jul 14, 2019 at 08:24:01AM +0300, Alexander Lakhin wrote:\n> 6.10. fildes -> fd\n\nNot sure that this one was worth bothering.\n\nAnd the rest looks correct after review, so applied! Thanks!\n\n> As a side note, while looking at dt_common.c (fixing 6.47), I've got a\n> feeling that the datetktbl is largely outdated and thus mostly unuseful\n> (e.g. USSR doesn't exist for almost 30 years).\n\nPerhaps this could be discussed in its own thread?\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 13:24:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 6)"
}
] |
[
{
"msg_contents": "Hello,\n\nIn the category \"doing more tricks with our existing btrees\", which\nincludes all that difficult stuff like skip scans and incremental\nsort, here's an easier planner-only one: if you have a unique index\non (a) possibly \"including\" (b) and you have a pathkey (a, b), you can\nuse an index [only] scan. That is, if the index is unique, and you\nwant exactly one extra column in index order, then you don't need any\nextra sorting to get (a, b) in order. (If the index is not unique, or\nthere is more than one extra trailing column in the pathkey, you need\nthe incremental sort patch[1] to use this index). This was brought to\nmy attention by a guru from a different RDBMS complaining about stupid\nstuff that PostgreSQL does and I was triggered to write this message\nas a kind of TODO note...\n\n[1] https://commitfest.postgresql.org/24/1124/\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 12:58:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using unique btree indexes for pathkeys with one extra column"
},
{
"msg_contents": "On Mon, 15 Jul 2019 at 12:59, Thomas Munro <thomas.munro@gmail.com> wrote:\n> In the category \"doing more tricks with our existing btrees\", which\n> includes all that difficult stuff like skip scans and incremental\n> sort, here's an easier planner-only one: if you have a unique index\n> on (a) possibly \"including\" (b) and you have a pathkey (a, b), you can\n> use an index [only] scan. That is, if the index is unique, and you\n> want exactly one extra column in index order, then you don't need any\n> extra sorting to get (a, b) in order. (If the index is not unique, or\n> there is more than one extra trailing column in the pathkey, you need\n> the incremental sort patch[1] to use this index). This was brought to\n> my attention by a guru from a different RDBMS complaining about stupid\n> stuff that PostgreSQL does and I was triggered to write this message\n> as a kind of TODO note...\n>\n> [1] https://commitfest.postgresql.org/24/1124/\n\nThis is one of the problems I've wanted to solve in the various times\nI've mentioned the word \"UniqueKeys\" on this mailing list.\n\nProbably my most detailed explanation is in\nhttps://www.postgresql.org/message-id/CAKJS1f86FgODuUnHiQ25RKeuES4qTqeNxm1QbqJWrBoZxVGLiQ%40mail.gmail.com\n\nWithout detecting the UniqueKeys through joins, the optimisation\nyou mention is limited to just single rel queries, since a join may\nduplicate the \"a\" column and make it so the sort on \"b\" is no longer\nredundant. In my view, limiting this to just single relation queries\nis just too restrictive to bother writing any code for, so I think to\ndo as you mention we need the full-blown thing I mention in the link\nabove, i.e. tagging a list of UniqueKeys onto RelOptInfo and checking\nwhich ones are still applicable after joins and tagging those onto\njoin RelOptInfos too. PathKey redundancy could then take into account\nthat list of UniqueKeys at the RelOptInfo level. 
At the top-level plan,\nyou can do smarts for ORDER BY / GROUP BY / DISTINCT.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jul 2019 13:46:08 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Using unique btree indexes for pathkeys with one extra column"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> In the category \"doing more tricks with our existing btrees\", which\n> includes all that difficult stuff like skip scans and incremental\n> sort, here's an easier planner-only one: if you have a unique index\n> on (a) possibly \"including\" (b) and you have a pathkey (a, b), you can\n> use an index [only] scan. That is, if the index is unique, and you\n> want exactly one extra column in index order, then you don't need any\n> extra sorting to get (a, b) in order. (If the index is not unique, or\n> there is more than one extra trailing column in the pathkey, you need\n> the incremental sort patch[1] to use this index).\n\nSeems like you also have to insist that a is NOT NULL.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 19:52:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using unique btree indexes for pathkeys with one extra column"
}
] |
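An aside on the thread above: the reason the trailing sort becomes redundant can be shown with a tiny sketch. Assuming column "a" is unique and NOT NULL (per Tom's caveat), ordering rows by "a" alone already fixes the (a, b) order. The rows below are made up for illustration; this is a model of the argument, not PostgreSQL code:

```python
# Made-up (a, b) rows; the "a" values are unique, standing in for a
# unique, NOT NULL indexed column.
rows = [(3, "x"), (1, "z"), (2, "y")]

# Sorting by "a" alone gives the same order as sorting by ("a", "b"):
# the unique first column fully determines the row order, so an index
# ordered on "a" satisfies the pathkey (a, b) without an extra sort.
assert sorted(rows, key=lambda r: r[0]) == sorted(rows, key=lambda r: (r[0], r[1]))

# With a duplicated "a" the equivalence breaks, which is why the index
# must be unique for the optimization to apply.
dup = [(1, "z"), (1, "a"), (2, "y")]
assert sorted(dup, key=lambda r: r[0]) != sorted(dup, key=lambda r: (r[0], r[1]))
```

This is also why David's point about joins matters: a join can duplicate "a" values in the joined relation, putting us in the `dup` case even though the base index is unique.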
[
{
"msg_contents": "Hi\n\nI noticed the documentation for pg_hba.conf:\n\n https://www.postgresql.org/docs/current/auth-pg-hba-conf.html\n\nsays:\n\n you will need to signal the postmaster (using pg_ctl reload or kill -HUP) to\n make it re-read the file.\n\nIt would be useful to mention pg_reload_conf() as another option here, as done\nelsewhere in the docs.\n\nPatch with suggested change attached.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 15 Jul 2019 12:47:18 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doc: mention pg_reload_conf() in pg_hba.conf documentation"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 12:47:18PM +0900, Ian Barwick wrote:\n> Hi\n> \n> I noticed the documentation for pg_hba.conf:\n> \n> https://www.postgresql.org/docs/current/auth-pg-hba-conf.html\n> \n> says:\n> \n> you will need to signal the postmaster (using pg_ctl reload or kill -HUP) to\n> make it re-read the file.\n> \n> It would be useful to mention pg_reload_conf() as another option here, as done\n> elsewhere in the docs.\n> \n> Patch with suggested change attached.\n\nOh, good point. Not sure how we missed that, but I had to fix a mention\nin pg_hba.conf a while ago too. Also, there were two mentions in that\nfile, so I fixed them both with the attached patch. Backpatched to 9.4.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +",
"msg_date": "Mon, 15 Jul 2019 21:09:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: mention pg_reload_conf() in pg_hba.conf documentation"
},
{
"msg_contents": "On 7/16/19 10:09 AM, Bruce Momjian wrote:\n> On Mon, Jul 15, 2019 at 12:47:18PM +0900, Ian Barwick wrote:\n>> Hi\n>>\n>> I noticed the documentation for pg_hba.conf:\n>>\n>> https://www.postgresql.org/docs/current/auth-pg-hba-conf.html\n>>\n>> says:\n>>\n>> you will need to signal the postmaster (using pg_ctl reload or kill -HUP) to\n>> make it re-read the file.\n>>\n>> It would be useful to mention pg_reload_conf() as another option here, as done\n>> elsewhere in the docs.\n>>\n>> Patch with suggested change attached.\n> \n> Oh, good point. Not sure how we missed that, but I had to fix a mention\n> in pg_hba.conf a while ago too. Also, there were two mentions in that\n> file, so I fixed them both with the attached patch. Backpatched to 9.4.\n\nThanks!\n\nI only noticed it because I was writing up some basic instructions for someone\nnot very familiar with Pg, and cross-referencing to the documentation.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jul 2019 13:17:01 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: mention pg_reload_conf() in pg_hba.conf documentation"
}
] |
[
{
"msg_contents": "On Fri, Jul 12, 2019 at 1:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jul 11, 2019 at 09:44:07AM -0400, Tom Lane wrote:\n> > I thought we *did* have an agreement, to wit using\n> >\n> > Discussion: https://postgr.es/m/<message-id>\n> >\n> > to link to relevant mail thread(s). Some people use more tags\n> > but that seems inessential to me.\n>\n> Hehe. I actually was thinking about advocating for having more of\n> them in the commit logs. I'll just start a new thread about what I\n> had in mind. Perhaps that will lead us nowhere, but let's see.\n\n[Moving to -hackers]\n\nHere are the tags that people have used in the past year, in commit messages:\n\n 763 Author\n 9 Authors\n 144 Backpatch-through\n 55 Backpatch\n 14 Bug\n 14 Co-authored-by\n 27 Diagnosed-By\n1593 Discussion\n 42 Doc\n 284 Reported-By\n 5 Review\n 8 Reviewed by\n 456 Reviewed-By\n 7 Security\n 9 Tested-By\n\nOther things I've noticed:\n\n* a few people list authors and reviewers in prose in a fairly\nmechanical paragraph\n* some people put back-patch and bug number information in prose\n* a few people list authors and reviewers with full email addresses\n* some people repeat tags for multiple values, others make comma separated lists\n* some people break long lines of meta-data with newlines\n* authors \"X and Y\" may be an alternative to \"X, Y\", or imply greater\ncollaboration\n\nThe counts above were produced by case-insensitively sorting and\ncounting unique stuff that precedes a colon, and then throwing out\nthose used fewer than three times (these are false matches and typos),\nand then throwing out a couple of obvious false matches by hand.\nStarting from here:\n\ngit log --since 2018-07-14 | \\\n grep -E '^ *[A-Z].*: ' | \\\n sort -i | \\\n sed 's/:.*//' | \\\n uniq -ic | \\\n grep -v -E '^ *[12] '\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 16:42:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "A little report on informal commit tag usage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here are the tags that people have used in the past year, in commit messages:\n\n> 763 Author\n> 9 Authors\n> 144 Backpatch-through\n> 55 Backpatch\n> 14 Bug\n> 14 Co-authored-by\n> 27 Diagnosed-By\n> 1593 Discussion\n> 42 Doc\n> 284 Reported-By\n> 5 Review\n> 8 Reviewed by\n> 456 Reviewed-By\n> 7 Security\n> 9 Tested-By\n\nOne small comment on that --- I'm not sure what you meant to count\nin respect to the \"Doc\" item, but I believe there's a fairly widespread\nconvention to write \"doc:\" or some variant in the initial summary line\nof commits that touch only documentation. The point here is to let\nrelease-note writers quickly ignore such commits, since we never list\nthem as release note items. Bruce and I, being the usual suspects for\nrelease-note writing, are pretty religious about this but other people\ndo it too. I see a lot more than 42 such commit messages in the past\nyear, so not sure what you were counting?\n\nAnyway, that's not a \"tag\" in the sense I understand you to be using\n(otherwise the entries would look something like \"Doc: yes\" and be at\nthe end, which is unhelpful for the purpose). But it's a related sort\nof commit-message convention.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 01:12:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > 42 Doc\n\n> [...] I see a lot more than 42 such commit messages in the past\n> year, so not sure what you were counting?\n\nI would have tried to exclude the first line messages if I'd thought\nof that. But anyway, the reason for the low Doc number is case\nsensitivity. I ran that on a Mac and its lame collation support failed\nme in the \"sort\" step (also -i didn't do what I wanted, but that\nwasn't the issue). Trying again on FreeBSD box and explicitly setting\nLANG for the benefit of anyone else wanting to run this (see end), and\nthen removing a few obvious false matches, I now get similar numbers\nin most fields but a higher \"doc\" number:\n\n 767 Author\n 9 Authors\n 144 Backpatch-through\n 55 Backpatch\n 14 Bug\n 14 Co-authored-by\n 27 Diagnosed-by\n1599 Discussion\n 119 doc\n 36 docs\n 284 Reported-by\n 5 Review\n 8 Reviewed by\n 460 Reviewed-by\n 7 Security\n 9 Tested-by\n\ngit log --since 2018-07-14 | \\\n grep -E '^ +[a-zA-Z].*: ' | \\\n LANG=en_US.UTF-8 sort | \\\n sed 's/:.*//' | \\\n LANG=en_US.UTF-8 uniq -ic | \\\n grep -v -E '^ *[12] '\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 17:49:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 05:49:26PM +1200, Thomas Munro wrote:\n> I would have tried to exclude the first line messages if I'd thought\n> of that. But anyway, the reason for the low Doc number is case\n> sensitivity. I ran that on a Mac and its lame collation support failed\n> me in the \"sort\" step (also -i didn't do what I wanted, but that\n> wasn't the issue). Trying again on FreeBSD box and explicitly setting\n> LANG for the benefit of anyone else wanting to run this (see end), and\n> then removing a few obvious false matches, I now get similar numbers\n> in most fields but a higher \"doc\" number:\n> \n> 767 Author\n> 9 Authors\n> 144 Backpatch-through\n> 55 Backpatch\n> 14 Bug\n> 14 Co-authored-by\n> 27 Diagnosed-by\n> 1599 Discussion\n> 119 doc\n> 36 docs\n> 284 Reported-by\n> 5 Review\n> 8 Reviewed by\n> 460 Reviewed-by\n> 7 Security\n> 9 Tested-by\n\nThanks for those numbers. I am wondering if we could do a bit of\nconsolidation here and write a page about this stuff on the wiki.\nGetting the \"Discussion\" field most of the time is really cool.\n\nI think that we could get some improvements on the following things.\nHere is a set of ideas:\n- Avoid \"Authors\" and replace it with \"Author\" even if there are\nmultiple authors.\n- Avoid having multiple entries for each one of them? For example we\nhave a couple of commits listing more than one \"Reviewed-by\" field,\neach with a single name.\n- Most commit entries do not use the email address with the name of\nthe author, reviewer, tester or reporter. 
Perhaps we should give\nup on that?\n- Keep \"Backpatch-through\", not \"Backpatch\".\n- Keep \"Reviewed-by\", not \"Reviewed by\" nor \"Review\".\n\n\"Security\" is a special case, we append it to all the CVE-related\ncommits.\n\nThat is mainly a matter of taste, but I tend to prefer the following\nformat, usually preserving this order:\n- Diagnosed-by\n- Author\n- Reviewed-by\n- Discussion\n- Backpatch-through\n- I tend to have only one \"Reviewed-by\" entry with a list of names,\nsame for \"Author\" and \"Reported-by\".\n- Only names, no emails.\n\nAs mentioned on different threads, \"Discussion\" is the only one we had\nstrong agreement on. Could it be possible to consider things like\nAuthor, Reported-by, Reviewed-by or Backpatch-through for example and\nextend to that? The first three are useful for parsing the\ncommit logs. The fourth one is handy so that there is no need to look\nat a full log tree with git log --graph or such, which is something I\ndo from time to time to figure out down to where a fix has been applied (I\ntend to avoid git_changelog).\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 16:43:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> As mentioned on different threads, \"Discussion\" is the only one we had\n> a strong agreement with. Could it be possible to consider things like\n> Author, Reported-by, Reviewed-by or Backpatch-through for example and\n> extend to that? The first three ones are useful for parsing the\n> commit logs. The fourth one is handy so as there is no need to look\n> at a full log tree with git log --graph or such, which is something I\n> do from time to time to guess down to where a fix has been applied (I\n> tend to avoid git_changelog).\n\nFWIW, I'm one of the people who prefer prose for this. The backpatching\nbit is a good example of why, because my log messages typically don't\njust say \"backpatch to 9.6\" but something about why that was the cutoff.\nFor instance in 0ec3e13c6,\n\n Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists\n further back, but before 9.6 the code looks very different and it\n doesn't actually know whether the \"var\" name matches anything,\n so I desisted from trying to fix it.\n\nI am in favor of trying to consistently mention that a patch is being\nback-patched, rather than expecting people to rely on git metadata\nto find that out. But I don't see that a rigid \"Backpatch\" tag format\nmakes anything easier there. If you need to know that mechanically,\ngit_changelog is way more reliable.\n\nI'm also skeptical of the argument that machine-parseable Reported-by\nand so forth are useful to anybody. Who'd use them, and for what?\nAlso, it's not always clear how to apply such a format to a real\nsituation --- eg, what do you do if the reporter is also the patch\nauthor, or a co-author? I'm not excited about redundantly entering\nsomebody's name several times.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 10:33:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "> On 16 Jul 2019, at 16:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Michael Paquier <michael@paquier.xyz> writes:\n>> As mentioned on different threads, \"Discussion\" is the only one we had\n>> a strong agreement with. Could it be possible to consider things like\n>> Author, Reported-by, Reviewed-by or Backpatch-through for example and\n>> extend to that? The first three ones are useful for parsing the\n>> commit logs. The fourth one is handy so as there is no need to look\n>> at a full log tree with git log --graph or such, which is something I\n>> do from time to time to guess down to where a fix has been applied (I\n>> tend to avoid git_changelog).\n> \n> FWIW, I'm one of the people who prefer prose for this. The backpatching\n> bit is a good example of why, because my log messages typically don't\n> just say \"backpatch to 9.6\" but something about why that was the cutoff.\n\nWearing my $work-hat where I regularly perform interesting merges of postgres\nreleases as an upstream, these detailed commit messages are very valuable and\nmuch appreciated. The wealth of (human readable) information stored in the\ncommit logs makes tracking postgres as an upstream quite a lot easier.\n\n> I'm also skeptical of the argument that machine-parseable Reported-by\n> and so forth are useful to anybody. Who'd use them, and for what?\n\nThe green gamification dot on people’s Github profiles might light up if the\nmachine readable format with email address was used (and the user has that\nspecific email connected to their Github account unless it’s a primary email).\nLooking at commit 1c9bb02d8ec1d5b1b319e4fed70439a403c245b1 I can see that for\nAugust 2018 Amit’s Github profile lists “Created 1 commit in 1 repository\npostgres/postgres 1 commit”, which is likely from this commit message being\nparsed in the mirror.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 16 Jul 2019 20:48:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 10:33:06 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > As mentioned on different threads, \"Discussion\" is the only one we had\n> > a strong agreement with. Could it be possible to consider things like\n> > Author, Reported-by, Reviewed-by or Backpatch-through for example and\n> > extend to that? The first three ones are useful for parsing the\n> > commit logs. The fourth one is handy so as there is no need to look\n> > at a full log tree with git log --graph or such, which is something I\n> > do from time to time to guess down to where a fix has been applied (I\n> > tend to avoid git_changelog).\n> \n> FWIW, I'm one of the people who prefer prose for this. The backpatching\n> bit is a good example of why, because my log messages typically don't\n> just say \"backpatch to 9.6\" but something about why that was the cutoff.\n\nThey don't preclude each other though. E.g. it'd be sensible to have both\n\n> Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists\n> further back, but before 9.6 the code looks very different and it\n> doesn't actually know whether the \"var\" name matches anything,\n> so I desisted from trying to fix it.\n\nand \"Backpatch: 9.6-\" or such.\n\n\n> I am in favor of trying to consistently mention that a patch is being\n> back-patched, rather than expecting people to rely on git metadata\n> to find that out. But I don't see that a rigid \"Backpatch\" tag format\n> makes anything easier there. If you need to know that mechanically,\n> git_changelog is way more reliable.\n\nI find it useful to have a quick place to scan in a commit message. It's\na lot quicker to focus on the last few lines with tags, and see a\n'Backpatch: 9.6-' than to parse a potentially long commit message. If\nI'm then still interested in the commit, I'll then read the commit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 16:22:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> They don't preclude each other though. E.g. it'd be sensible to have both\n\n>> Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists\n>> further back, but before 9.6 the code looks very different and it\n>> doesn't actually know whether the \"var\" name matches anything,\n>> so I desisted from trying to fix it.\n\n> and \"Backpatch: 9.6-\" or such.\n\nI've wondered for some time what you think the \"-\" means in this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:26:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 19:26:59 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > They don't preclude each other though. E.g. it'd be sensible to have both\n> \n> >> Per gripe from Ken Tanzer. Back-patch to 9.6. The issue exists\n> >> further back, but before 9.6 the code looks very different and it\n> >> doesn't actually know whether the \"var\" name matches anything,\n> >> so I desisted from trying to fix it.\n> \n> > and \"Backpatch: 9.6-\" or such.\n> \n> I've wondered for some time what you think the \"-\" means in this.\n\nUp to master. Occasionally there's bugs that only need to be fixed in\nsome back branches etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 16:33:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "On 2019-Jul-16, Daniel Gustafsson wrote:\n\n> The green gamification dot on people’s Github profiles might light up if the\n> machine readable format with email address was used (and the user has that\n> specific email connected to their Github account unless it’s a primary email).\n> Looking at commit 1c9bb02d8ec1d5b1b319e4fed70439a403c245b1 I can see that for\n> August 2018 Amit’s Github profile lists “Created 1 commit in 1 repository\n> postgres/postgres 1 commit”, which is likely from this commit message being\n> parsed in the mirror.\n\nI specifically use \"co-authored-by\" (and scanning the grep results, I'm\nthe only person doing it) because github recognizes it in this way.\nHowever I only feel entitled to use it when the patch has been developed\nby me plus some other person(s), which has a bit of a contradictory result:\nwhen I don't touch some submitted patch, I use \"Author\" since I (the\ncommitter) am not a co-author. That means github attributes such\npatches solely to me :-(\n\nI realize now, however, that in order for this to work I have to include\nthe email address, not just the name. I failed to do that at least\nonce.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jul 2019 22:45:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 04:33:07PM -0700, Andres Freund wrote:\n> On 2019-07-16 19:26:59 -0400, Tom Lane wrote:\n>> I've wondered for some time what you think the \"-\" means in this.\n> \n> Up to master. Occasionally there's bugs that only need to be fixed in\n> some back branches etc.\n\nIs \"-\" most common to define a range of branches? I would have\nimagined that \"~\" makes more sense here.\n--\nMichael",
"msg_date": "Wed, 17 Jul 2019 13:39:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A little report on informal commit tag usage"
}
] |
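The shell pipeline in this thread is sensitive to the platform's collation, which is exactly what bit Thomas on the Mac. For reference, the same case-insensitive tag counting can be sketched portably in a few lines of Python; the `log_text` sample below is made up and stands in for `git log` output:

```python
import collections
import re

# Made-up commit trailers standing in for `git log` output.
log_text = """\
    Author: A. Hacker
    Reviewed-by: B. Reviewer
    reviewed-by: C. Reviewer
    Backpatch-through: 9.6
    Discussion: https://postgr.es/m/example
"""

counts = collections.Counter()
for line in log_text.splitlines():
    # Match "Tag: value" lines, like the grep step in the thread.
    m = re.match(r'\s*([A-Za-z][\w -]*):\s', line)
    if m:
        # Fold case in Python instead of relying on sort(1)/uniq(1) -i,
        # so the result does not depend on the platform's collation.
        counts[m.group(1).lower()] += 1

assert counts['reviewed-by'] == 2
assert counts['author'] == 1
```

As in the original pipeline, this still picks up false matches from prose containing a colon, so a manual cleanup pass over rare entries remains necessary.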
[
{
"msg_contents": "\nHello pgdevs,\n\nsorry if this has been already discussed, but G did not yield anything \nconvincing about that.\n\nWhile looking at HASH partitioning and creating a few ones, it occurred to \nme that while RANGE and LIST partitions cannot be guessed easily, it would \nbe easy to derive a HASH partitioned table for a fixed MODULUS, e.g. with\n\n CREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n -- or some other syntax\n\nPostgres could derive statically the 10 subtables, e.g. named foo_$0$ to \nfoo_$1$.\n\nThat would not be a replacement for the feature where one may do something \nfunny and doubtful like (MODULUS 2 REMAINDER 0, MODULUS 4 REMAINDER 1, \nMODULUS 4 REMAINDER 3).\n\nThe same declarative approach could eventually be considered for RANGE \nwith a fixed partition duration and starting and ending points.\n\nThis would be a relief on the longer path of dynamically creating \npartitions, but with lower costs than a dynamic approach.\n\nThe ALTER thing would be a little pain.\n\nThoughts?\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 15 Jul 2019 07:29:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 1:29 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Hello pgdevs,\n>\n> sorry if this has been already discussed, but G did not yield anything\n> convincing about that.\n>\n> While looking at HASH partitioning and creating a few ones, it occured to\n> me that while RANGE and LIST partitions cannot be guessed easily, it would\n> be easy to derive HASH partitioned table for a fixed MODULUS, e.g. with\n>\n> CREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n> -- or some other syntax\n>\n> Postgres could derive statically the 10 subtables, eg named foo_$0$ to\n> foo_$1$.\n>\n> That would not be a replacement for the feature where one may do something\n> funny and doubtful like (MODULUS 2 REMAINDER 0, MODULUS 4 REMAINDER 1,\n> MODULUS 4 REMAINDER 3).\n>\n> The same declarative approach could eventually be considered for RANGE\n> with a fixed partition duration and starting and ending points.\n>\n> This would be a relief on the longer path of dynamically creating\n> partitions, but with lower costs than a dynamic approach.\n\nYeah, I think something like this would be reasonable, but I think\nthat the best syntax is not really clear. We might want to look at\nhow other systems handle this.\n\nI don't much like AUTOMATIC. It doesn't read like SQL's usual\npseudo-English. WITH would be better, but doesn't work because of\ngrammar conflicts. We need something that will let you specify just a\nmodulus for hash partitions, a start, end, and interval for range\npartitions, and a list of bounds for list partitions. If we're\nwilling to create a new keyword, we could make PARTITIONS a keyword.\nThen:\n\nPARTITION BY HASH (whatever) PARTITIONS 8\nPARTITION BY RANGE (whatever) PARTITIONS FROM 'some value' TO 'some\nlater value' ADD 'some delta'\nPARTITION BY LIST (whatever) PARTITIONS ('bound', 'other bound',\n('multiple', 'bounds', 'same', 'partition'))\n\nThat looks fairly clean. 
The method used to generate the names of the\nbacking tables would need some thought.\n\n> The ALTER thing would be a little pain.\n\nWhy would we need to do anything about ALTER? I'd view this as a\nconvenience way to set up a bunch of initial partitions, nothing more.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 10:53:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 10:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jul 15, 2019 at 1:29 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Hello pgdevs,\n> >\n> > sorry if this has been already discussed, but G did not yield anything\n> > convincing about that.\n> >\n> > While looking at HASH partitioning and creating a few ones, it occured to\n> > me that while RANGE and LIST partitions cannot be guessed easily, it would\n> > be easy to derive HASH partitioned table for a fixed MODULUS, e.g. with\n> >\n> > CREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n> > -- or some other syntax\n> >\n> > Postgres could derive statically the 10 subtables, eg named foo_$0$ to\n> > foo_$1$.\n> >\n> > That would not be a replacement for the feature where one may do something\n> > funny and doubtful like (MODULUS 2 REMAINDER 0, MODULUS 4 REMAINDER 1,\n> > MODULUS 4 REMAINDER 3).\n> >\n> > The same declarative approach could eventually be considered for RANGE\n> > with a fixed partition duration and starting and ending points.\n> >\n> > This would be a relief on the longer path of dynamically creating\n> > partitions, but with lower costs than a dynamic approach.\n>\n> Yeah, I think something like this would be reasonable, but I think\n> that the best syntax is not really clear. We might want to look at\n> how other systems handle this.\n\nGreenplum has a syntax that covers some cases but not the hash case.\n\nFor range based partitions we have:\n\nCREATE TABLE sales (id int, date date, amt decimal(10,2))\nDISTRIBUTED BY (id)\nPARTITION BY RANGE (date)\n( START (date '2016-01-01') INCLUSIVE\n END (date '2017-01-01') EXCLUSIVE\n EVERY (INTERVAL '1 day') );\n\nThis is equivalent to the below so you can also declare and name each\npartition individually. 
For example:\n\nCREATE TABLE sales (id int, date date, amt decimal(10,2))\nDISTRIBUTED BY (id)\nPARTITION BY RANGE (date)\n( PARTITION Jan16 START (date '2016-01-01') INCLUSIVE ,\n PARTITION Feb16 START (date '2016-02-01') INCLUSIVE ,\n PARTITION Mar16 START (date '2016-03-01') INCLUSIVE ,\n PARTITION Apr16 START (date '2016-04-01') INCLUSIVE ,\n PARTITION May16 START (date '2016-05-01') INCLUSIVE ,\n PARTITION Jun16 START (date '2016-06-01') INCLUSIVE ,\n PARTITION Jul16 START (date '2016-07-01') INCLUSIVE ,\n PARTITION Aug16 START (date '2016-08-01') INCLUSIVE ,\n PARTITION Sep16 START (date '2016-09-01') INCLUSIVE ,\n PARTITION Oct16 START (date '2016-10-01') INCLUSIVE ,\n PARTITION Nov16 START (date '2016-11-01') INCLUSIVE ,\n PARTITION Dec16 START (date '2016-12-01') INCLUSIVE\n END (date '2017-01-01') EXCLUSIVE );\n\nYou can do similar things with numeric\n\nCREATE TABLE rank (id int, rank int, year int, gender\nchar(1), count int)\nDISTRIBUTED BY (id)\nPARTITION BY RANGE (year)\n( START (2006) END (2016) EVERY (1),\n DEFAULT PARTITION extra );\n\nENUM\n\nCREATE TABLE rank (id int, rank int, year int, gender\nchar(1), count int )\nDISTRIBUTED BY (id)\nPARTITION BY LIST (gender)\n( PARTITION girls VALUES ('F'),\n PARTITION boys VALUES ('M'),\n DEFAULT PARTITION other );\n\nAlso it supports multilevel partitioning using a PARTITION TEMPLATE\nand SUBPARTITION TEMPLATE. 
The partitioning template ensures that the\nstructure at every level is the same.\n\nCREATE TABLE p3_sales (id int, year int, month int, day int,\nregion text)\nDISTRIBUTED BY (id)\nPARTITION BY RANGE (year)\n SUBPARTITION BY RANGE (month)\n SUBPARTITION TEMPLATE (\n START (1) END (13) EVERY (1),\n DEFAULT SUBPARTITION other_months )\n SUBPARTITION BY LIST (region)\n SUBPARTITION TEMPLATE (\n SUBPARTITION usa VALUES ('usa'),\n SUBPARTITION europe VALUES ('europe'),\n SUBPARTITION asia VALUES ('asia'),\n DEFAULT SUBPARTITION other_regions )\n( START (2002) END (2012) EVERY (1),\n DEFAULT PARTITION outlying_years );\n\n-- Rob\n\n\n",
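Stock PostgreSQL (10 and later) has no EVERY clause, but the same expansion can be performed mechanically outside the server by emitting one `CREATE TABLE ... PARTITION OF` statement per interval step. A minimal sketch of what the monthly variant of the example above would generate (the helper and table names are illustrative only, not part of any proposal):

```python
from datetime import date

def monthly_partition_ddl(parent, start, end):
    """Emit one CREATE TABLE ... PARTITION OF statement per month in
    [start, end), mirroring Greenplum's EVERY (INTERVAL '1 month')."""
    ddl = []
    y, m = start.year, start.month
    while date(y, m, 1) < end:
        ny, nm = (y + 1, 1) if m == 12 else (y, m + 1)
        lo, hi = date(y, m, 1), date(ny, nm, 1)
        ddl.append(
            f"CREATE TABLE {parent}_{lo:%Y_%m} PARTITION OF {parent} "
            f"FOR VALUES FROM ('{lo}') TO ('{hi}');"
        )
        y, m = ny, nm
    return ddl

# The twelve children of the named-partition example above:
stmts = monthly_partition_ddl("sales", date(2016, 1, 1), date(2017, 1, 1))
```

The upper bound of each child is the lower bound of the next, matching the INCLUSIVE start / EXCLUSIVE end semantics of native range partition bounds.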
"msg_date": "Mon, 15 Jul 2019 15:50:03 -0400",
"msg_from": "Robert Eckhardt <reckhardt@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "Hello Robert and Robert,\n\n>>\n>> CREATE TABLE foo(...) PARTITION BY HASH AUTOMATIC (MODULUS 10);\n>> -- or some other syntax\n>>\n>> This would be a relief on the longer path of dynamically creating\n>> partitions, but with lower costs than a dynamic approach.\n>\n> Yeah, I think something like this would be reasonable, but I think\n> that the best syntax is not really clear. We might want to look at\n> how other systems handle this.\n\n> I don't much like AUTOMATIC. It doesn't read like SQL's usual\n> pseudo-English.\n\nMy English is kind-of broken. The intention is to differentiate the 3\ncases with some syntax to say very clearly whether:\n\n - no partitions are created immediately (current case)\n but will have to be created manually later\n\n - static partitions are created automatically, based on provided\n parameters\n\n - dynamic partitions will be created later, when needed, based\n on provided parameters again.\n\nEven if all that is not implemented immediately.\n\n> We need something that will let you specify just a modulus for hash \n> partitions, a start, end, and interval for range partitions, and a list \n> of bounds for list partitions. If we're willing to create a new \n> keyword, we could make PARTITIONS a keyword. Then:\n>\n> PARTITION BY HASH (whatever) PARTITIONS 8\n\nI think that it should reuse already existing keywords, i.e. MODULUS \nshould appear somewhere.\n\nMaybe:\n\n ... PARTITION BY HASH (whatever)\n [ CREATE [IMMEDIATE | DEFERRED] PARTITIONS (MODULUS 8) |\n NOCREATE or maybe NO CREATE ];\n\nThis way the 3 cases are syntactically covered. 
Then they just need to be \nimplemented:-) The IMMEDIATE case for HASH is pretty straightforward.\n\n> PARTITION BY RANGE (whatever) PARTITIONS FROM 'some value' TO 'some\n> later value' ADD 'some delta'\n\nRobert Eckhardt's \"greenplum\" syntax for ranges looks okay as well, and \ncovers some corner cases (default, included/excluded bound...).\n\n> PARTITION BY LIST (whatever) PARTITIONS ('bound', 'other bound',\n> ('multiple', 'bounds', 'same', 'partition'))\n\nPossibly.\n\n> That looks fairly clean. The method used to generate the names of the\n> backing tables would need some thought.\n\nPg has a history of doing simple things, eg $ stuff on constraints, _pk \nfor primary keys... I would not look too far.\n\n>> The ALTER thing would be a little pain.\n>\n> Why would we need to do anything about ALTER? I'd view this as a\n> convenience way to set up a bunch of initial partitions, nothing more.\n\nI'm naïve: I'd like that the user could change their mind about a given \nparameter and change it with ALTER:-)\n\n-- \nFabien.",
"msg_date": "Mon, 15 Jul 2019 23:51:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "Hello Robert & Robert,\n\n> - no partitions are created immediately (current case)\n> but will have to be created manually later\n>\n> - static partitions are created automatically, based on provided\n> parameters\n>\n> - dynamic partitions will be created later, when needed, based\n> on provided parameters again.\n>\n> Even if all that is not implemented immediately.\n>\n>> We need something that will let you specify just a modulus for hash \n>> partitions, a start, end, and interval for range partitions, and a list of \n>> bounds for list partitions. If we're willing to create a new keyword, we \n>> could make PARTITIONS a keyword. Then:\n>> \n>> PARTITION BY HASH (whatever) PARTITIONS 8\n>\n> I think that it should reuse already existing keywords, i.e. MODULUS should \n> appear somewhere.\n>\n> Maybe:\n>\n> ... PARTITION BY HASH (whatever)\n> [ CREATE [IMMEDIATE | DEFERRED] PARTITIONS (MODULUS 8) |\n> NOCREATE or maybe NO CREATE ];\n\nI have given a small go at the parser part of that.\n\nThere are 3 types of partitions with 3 dedicated syntax structures to \nhandle their associated parameters (WITH …, FROM … TO …, IN …). ISTM that \nit is a \"looks good from far away\" idea, but when trying to extend that it \nis starting to be a pain. If a 4th partition type is added, should it be \nyet another syntax? So I'm looking for a generic and extensible syntax \nthat could accommodate all cases for automatic creation of partitions.\n\nSecond problem, adding a \"CREATE\" after \"PARTITION BY … (…)\" creates \nshift-reduce conflicts with potential other CREATE TABLE option \nspecification syntax. Not sure which one, but anyway. 
So the current \ngeneric syntax I'm considering is using \"DO\" as a trigger to start the \noptional automatic partition creation stuff:\n\n CREATE TABLE Stuff (...)\n PARTITION BY [HASH | RANGE | LIST] (…)\n DO NONE -- this is the default\n DO [IMMEDIATE|DEFERRED] USING (…)\n\nWhere the USING part would be generic keyword value pairs, eg:\n\nFor HASH: (MODULUS 8) and/or (NPARTS 10)\n\nFor RANGE: (START '1970-01-01', STOP '2020-01-01', INCREMENT '1 year')\n and/or (START 1970, STOP 2020, NPARTS 50)\n\nAnd possibly for LIST: (IN (…), IN (…), …), or possibly some other \nkeyword.\n\nThe \"DEFERRED\" could be used as an open syntax for dynamic partitioning, \nif later someone would feel like doing it.\n\nISTM that \"USING\" is better than \"WITH\" because WITH is already used \nspecifically for HASH and other optional stuff in CREATE TABLE.\n\nThe text constant would be interpreted depending on the partitioning \nexpression/column type.\n\nAny opinion about the overall approach?\n\n-- \nFabien.",
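Whichever surface syntax wins, the IMMEDIATE hash case reduces to a purely mechanical expansion. A sketch of the children a hypothetical DO IMMEDIATE USING (MODULUS 8) could produce, spelled with the partition-bound syntax PostgreSQL 11 already accepts for manually created partitions (the DO clause itself is only a proposal, and the child-naming scheme is invented for illustration):

```python
def hash_partition_ddl(parent, modulus):
    """Expand a hash MODULUS spec into one child per remainder, using
    the FOR VALUES WITH (MODULUS, REMAINDER) bound syntax that exists
    in PostgreSQL 11 for manually declared hash partitions."""
    return [
        f"CREATE TABLE {parent}_p{r} PARTITION OF {parent} "
        f"FOR VALUES WITH (MODULUS {modulus}, REMAINDER {r});"
        for r in range(modulus)
    ]

stmts = hash_partition_ddl("stuff", 8)
```

Exactly one child per remainder in 0..modulus-1 gives a complete, non-overlapping set of hash partitions.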
"msg_date": "Sun, 18 Aug 2019 11:33:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "On Sun, 18 Aug 2019 at 11:33, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Robert & Robert,\n>\n> > - no partitions are created immediately (current case)\n> > but will have to be created manually later\n> >\n> > - static partitions are created automatically, based on provided\n> > parameters\n> >\n> > - dynamic partitions will be created later, when needed, based\n> > on provided parameters again.\n> >\n> > Even if all that is not implemented immediately.\n> >\n> >> We need something that will let you specify just a modulus for hash\n> >> partitions, a start, end, and interval for range partitions, and a list\n> of\n> >> bounds for list partitions. If we're willing to create a new keyword,\n> we\n> >> could make PARTITIONS a keyword. Then:\n> >>\n> >> PARTITION BY HASH (whatever) PARTITIONS 8\n> >\n> > I think that it should reuse already existing keywords, i.e. MODULUS\n> should\n> > appear somewhere.\n> >\n> > Maybe:\n> >\n> > ... PARTITION BY HASH (whatever)\n> > [ CREATE [IMMEDIATE | DEFERRED] PARTITIONS (MODULUS 8) |\n> > NOCREATE or maybe NO CREATE ];\n>\n> I have given a small go at the parser part of that.\n>\n> There are 3 types of partitions with 3 dedicated syntax structures to\n> handle their associated parameters (WITH …, FROM … TO …, IN …). ISTM that\n> it is a \"looks good from far away\" idea, but when trying to extend that it\n> is starting to be a pain. If a 4th partition type is added, should it be\n> yet another syntax? So I'm looking for an generic and extensible syntax\n> that could accomodate all cases for automatic creation of partitions.\n>\n> Second problem, adding a \"CREATE\" after \"PARTITION BY … (…)\" create\n> shift-reduce conflicts with potential other CREATE TABLE option\n> specification syntax. Not sure which one, but anyway. 
So the current\n> generic syntax I'm considering is using \"DO\" as a trigger to start the\n> optional automatic partition creation stuff:\n>\n> CREATE TABLE Stuff (...)\n> PARTITION BY [HASH | RANGE | LIST] (…)\n> DO NONE -- this is the default\n> DO [IMMEDIATE|DEFERRED] USING (…)\n>\n> Where the USING part would be generic keword value pairs, eg:\n>\n> For HASH: (MODULUS 8) and/or (NPARTS 10)\n>\n> For RANGE: (START '1970-01-01', STOP '2020-01-01', INCREMENT '1 year')\n> and/or (START 1970, STOP 2020, NPARTS 50)\n>\n> And possibly for LIST: (IN (…), IN (…), …), or possibly some other\n> keyword.\n>\n> The \"DEFERRED\" could be used as an open syntax for dynamic partitioning,\n> if later someone would feel like doing it.\n>\nISTM that \"USING\" is better than \"WITH\" because WITH is already used\n> specifically for HASH and other optional stuff in CREATE TABLE.\n>\n> The text constant would be interpreted depending on the partitioning\n> expression/column type.\n>\n> Any opinion about the overall approach?\n>\n>\n> I happen to start a similar discussion [1] being unaware of this one and\nthere Ashutosh Sharma talked about interval partitioning in Oracle. Looking\nclosely it looks like we can have this automatic partitioning more\nconvenient by having something similar. Basically, it is creating\npartitions on demand or lazy partitioning. To explain a bit more, let's\ntake range partition for example, first parent table is created and it's\ninterval and start and end values are specified and it creates only the\nparent table just like it works today. Now, if there comes a insertion\nthat does not belong to the existing (or any, in the case of first\ninsertion) partition(s), then the corresponding partition is created, I\nthink it is extensible to other partitioning schemes as well. 
Also it is\nlikely to have a positive impact on the queries, because there will be\nrequired partitions only and would not require to educate planner/executor\nabout many empty partitions.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20190820205005.GA25823%40alvherre.pgsql#c67245b98e2cfc9c3bd261f134d05368\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Mon, 26 Aug 2019 11:31:28 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "Hello Rafia,\n\n>> CREATE TABLE Stuff (...)\n>> PARTITION BY [HASH | RANGE | LIST] (…)\n>> DO NONE -- this is the default\n>> DO [IMMEDIATE|DEFERRED] USING (…)\n>>\n>> Where the USING part would be generic keword value pairs, eg:\n>>\n>> For HASH: (MODULUS 8) and/or (NPARTS 10)\n>>\n>> For RANGE: (START '1970-01-01', STOP '2020-01-01', INCREMENT '1 year')\n>> and/or (START 1970, STOP 2020, NPARTS 50)\n>>\n>> And possibly for LIST: (IN (…), IN (…), …), or possibly some other\n>> keyword.\n>>\n>> The \"DEFERRED\" could be used as an open syntax for dynamic partitioning,\n>> if later someone would feel like doing it.\n>>\n> ISTM that \"USING\" is better than \"WITH\" because WITH is already used\n>> specifically for HASH and other optional stuff in CREATE TABLE.\n>>\n>> The text constant would be interpreted depending on the partitioning\n>> expression/column type.\n>>\n>> Any opinion about the overall approach?\n\n> I happen to start a similar discussion [1] being unaware of this one \n> and there Ashutosh Sharma talked about interval partitioning in Oracle. \n> Looking\n> closely it looks like we can have this automatic partitioning more\n> convenient by having something similar. 
Basically, it is creating\n> partitions on demand or lazy partitioning.\n\nYep, the \"what\" of dynamic partitioning is more or less straightforward, \nalong the line you are describing.\n\nFor me there are really two questions:\n\n - having an extendable syntax, hence the mail I sent, which would cover\n both automatic static & dynamic partitioning and their parameters,\n given that we already have manual static, automatic static should\n be pretty easy.\n\n - implementing the stuff, with limited performance impact if possible\n for the dynamic case, which is non trivial.\n\n> To explain a bit more, let's take range partition for example, first \n> parent table is created and it's interval and start and end values are \n> specified and it creates only the parent table just like it works today.\n\n> Now, if there comes a insertion that does not belong to the existing (or \n> any, in the case of first insertion) partition(s), then the \n> corresponding partition is created,\n\nYep. Now, you also have to deal with race condition issues, i.e. two \nparallel sessions inserting tuples that must create the same partition, and \nprobably you would like to avoid a deadlock.\n\n> I think it is extensible to other partitioning schemes as well. Also it \n> is likely to have a positive impact on the queries, because there will \n> be required partitions only and would not require to educate \n> planner/executor about many empty partitions.\n\nYep, but it creates other problems to solve…\n\n-- \nFabien.",
"msg_date": "Mon, 26 Aug 2019 19:46:04 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "On Mon, 26 Aug 2019 at 19:46, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Rafia,\n>\n> >> CREATE TABLE Stuff (...)\n> >> PARTITION BY [HASH | RANGE | LIST] (…)\n> >> DO NONE -- this is the default\n> >> DO [IMMEDIATE|DEFERRED] USING (…)\n> >>\n> >> Where the USING part would be generic keword value pairs, eg:\n> >>\n> >> For HASH: (MODULUS 8) and/or (NPARTS 10)\n> >>\n> >> For RANGE: (START '1970-01-01', STOP '2020-01-01', INCREMENT '1 year')\n> >> and/or (START 1970, STOP 2020, NPARTS 50)\n> >>\n> >> And possibly for LIST: (IN (…), IN (…), …), or possibly some other\n> >> keyword.\n> >>\n> >> The \"DEFERRED\" could be used as an open syntax for dynamic partitioning,\n> >> if later someone would feel like doing it.\n> >>\n> > ISTM that \"USING\" is better than \"WITH\" because WITH is already used\n> >> specifically for HASH and other optional stuff in CREATE TABLE.\n> >>\n> >> The text constant would be interpreted depending on the partitioning\n> >> expression/column type.\n> >>\n> >> Any opinion about the overall approach?\n>\n> > I happen to start a similar discussion [1] being unaware of this one\n> > and there Ashutosh Sharma talked about interval partitioning in Oracle.\n> > Looking\n> > closely it looks like we can have this automatic partitioning more\n> > convenient by having something similar. 
Basically, it is creating\n> > partitions on demand or lazy partitioning.\n>\n> Yep, the \"what\" of dynamic partitioning is more or less straightforward,\n> along the line you are describing.\n>\n> For me there are really two questions:\n>\n> - having a extendable syntax, hence the mail I sent, which would cover\n> both automatic static & dynamic partitioning and their parameters,\n> given that we already have manual static, automatic static should\n> be pretty easy.\n>\n> - implementing the stuff, with limited performance impact if possible\n> for the dynamic case, which is non trivial.\n>\n> > To explain a bit more, let's take range partition for example, first\n> > parent table is created and it's interval and start and end values are\n> > specified and it creates only the parent table just like it works today.\n>\n> > Now, if there comes a insertion that does not belong to the existing (or\n> > any, in the case of first insertion) partition(s), then the\n> > corresponding partition is created,\n>\n> Yep. Now, you also have to deal with race conditions issues, i.e. two\n> parallel session inserting tuples that must create the same partition, and\n> probably you would like to avoid a deadlock.\n>\n> Hmmm, that shouldn't be very hard. Postgres handles many such things and I\nthink mostly by a mutex guarded shared memory structure. E.g. we can have a\nshared memory structure associated with the parent table holding the\ninformation of all the available partitions, and keep this structure\nguarded by mutex. Anytime a new partition has to be created the relevant\ninformation is first entered in this structure before actually creating it.\n\n> I think it is extensible to other partitioning schemes as well. 
Also it\n> > is likely to have a positive impact on the queries, because there will\n> > be required partitions only and would not require to educate\n> > planner/executor about many empty partitions.\n>\n> Yep, but it creates other problems to solve…\n>\n> Isn't it always the case. :)\n\n-- \nRegards,\nRafia Sabih",
"msg_date": "Tue, 27 Aug 2019 10:36:06 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
},
{
"msg_contents": "Hello Fabien, Rafia,\n\nThanks for starting this discussion.\n\nOn Tue, Aug 27, 2019 at 5:36 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> On Mon, 26 Aug 2019 at 19:46, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> > I happen to start a similar discussion [1] being unaware of this one\n>> > and there Ashutosh Sharma talked about interval partitioning in Oracle.\n>> > Looking\n>> > closely it looks like we can have this automatic partitioning more\n>> > convenient by having something similar. Basically, it is creating\n>> > partitions on demand or lazy partitioning.\n>>\n>> Yep, the \"what\" of dynamic partitioning is more or less straightforward,\n>> along the line you are describing.\n>>\n>> For me there are really two questions:\n>>\n>> - having a extendable syntax, hence the mail I sent, which would cover\n>> both automatic static & dynamic partitioning and their parameters,\n>> given that we already have manual static, automatic static should\n>> be pretty easy.\n>>\n>> - implementing the stuff, with limited performance impact if possible\n>> for the dynamic case, which is non trivial.\n>>\n>> > To explain a bit more, let's take range partition for example, first\n>> > parent table is created and it's interval and start and end values are\n>> > specified and it creates only the parent table just like it works today.\n>>\n>> > Now, if there comes a insertion that does not belong to the existing (or\n>> > any, in the case of first insertion) partition(s), then the\n>> > corresponding partition is created,\n>>\n>> Yep. Now, you also have to deal with race conditions issues, i.e. two\n>> parallel session inserting tuples that must create the same partition, and\n>> probably you would like to avoid a deadlock.\n>>\n> Hmmm, that shouldn't be very hard. Postgres handles many such things and I think mostly by a mutex guarded shared memory structure. E.g. 
we can have a shared memory structure associated with the parent table holding the information of all the available partitions, and keep this structure guarded by mutex. Anytime a new partition has to be created the relevant information is first entered in this structure before actually creating it.\n\nI like Fabien's approach of focusing on automatic creation of\npartitions only \"statically\" at first, deferring any complex matters\nof the \"dynamic\" counterpart to a later date. One advantage is that\nwe get to focus on the details of the UI for this feature, which has\ncomplexities of its own. Speaking of which, how about the following\nvariant of the syntax that Fabien proposed earlier:\n\nCREATE TABLE ... PARTITION BY partition_method (list_of_columns)\npartition_auto_create_clause\n\nwhere partition_auto_create_clause is:\n\nPARTITIONS { IMMEDIATE | DEFERRED } USING (partition_descriptor)\n\nwhere partition_descriptor is:\n\nMODULUS integer | FROM (range_start) END (range_end) INTERVAL\n(range_step) | list_values\n\nwhere range_start/end/step is:\n\n(expr [,...])\n\nand list_values is:\n\n(expr [,...]) [, ....]\n\nNote that list_values contains one parenthesized list per partition.\nThis is slightly different from what Robert suggested upthread in that\neven a single value needs parentheses.\n\nAutomatic creation of multi-column range partitions seems a bit tricky\nas thinking about a multi-column \"interval\" is tricky.\n\nNeedless to say, PARTITIONS DEFERRED will cause an unsupported feature\nerror in the first cut.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 18 Sep 2019 15:11:11 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating partitions automatically at least on HASH?"
}
]
[
{
"msg_contents": "Hi,\n\ncommit a31ad27fc5d introduced the required_relids field. By default, it \nlinks to clause_relids.\nThis works well as long as we do not modify clause_relids or required_relids.\nBut in the case of modification, such initialization requires us to \nremember that this field is shared, and we need to do bms_copy() before \nmaking any changes (see [1] for example).\nAlso, we make some changes to the RestrictInfo fields (see patch [2]) \nwhile removing unneeded self joins.\nI propose a safer way to initialize required_relids (see \npatch in attachment).\n\n[1] commit 4e97631e6a9, analyzejoins.c, lines 434,435:\nrinfo->required_relids = bms_copy(rinfo->required_relids);\nrinfo->required_relids = bms_del_member(rinfo->required_relids, relid);\n[2] https://commitfest.postgresql.org/23/1712/\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company",
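The hazard being reported is plain aliasing: two fields of the struct referencing one palloc'd Bitmapset, so an in-place change through one field is observed (or, after repalloc(), a dangling pointer is left) through the other. A toy model of the same shape, with a Python set standing in for Bitmapset and set(...) for bms_copy():

```python
class RestrictInfo:
    """Toy model of the C struct: by default both fields reference
    the very same set object, as the shared initialization does."""
    def __init__(self, clause_relids):
        self.clause_relids = clause_relids
        self.required_relids = clause_relids   # shared, not copied

shared = RestrictInfo({1, 2, 3})
shared.required_relids.discard(3)              # in-place change...
leaked = shared.clause_relids                  # ...is visible here too

copied = RestrictInfo({1, 2, 3})
copied.required_relids = set(copied.required_relids)  # bms_copy() analogue
copied.required_relids.discard(3)              # clause_relids untouched
```

The copy-before-modify line is the Python analogue of the bms_copy() call cited in [1]; without it, the "unmodified" field silently changes along with the modified one.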
"msg_date": "Mon, 15 Jul 2019 11:12:35 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Insecure initialization of required_relids field"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> commit a31ad27fc5d introduced required_relids field. By default, it \n> links to the clause_relids.\n> It works good while we do not modify clause_relids or required_relids.\n> But in the case of modification such initialization demands us to \n> remember, that this field is shared. And we need to do bms_copy() before \n> making any changes (see [1] for example).\n> Also, we make some changes of the RestrictInfo fields (see patch [2]) \n> during removing of unneeded self joins.\n> I propose to do more secure initialization way of required_relids (see \n> patch in attachment).\n\nThis seems fairly expensive (which is why it wasn't done like that\nto start with) and you've pointed to no specific bug that it fixes.\nSeeing that (a) the original commit is 14 years old, and (b) changing\neither of these fields after-the-fact is at most a very niche usage,\nI don't think we really have a problem here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 09:48:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Insecure initialization of required_relids field"
},
{
"msg_contents": "\n\nOn 15/07/2019 18:48, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> commit a31ad27fc5d introduced required_relids field. By default, it\n>> links to the clause_relids.\n>> It works good while we do not modify clause_relids or required_relids.\n>> But in the case of modification such initialization demands us to\n>> remember, that this field is shared. And we need to do bms_copy() before\n>> making any changes (see [1] for example).\n>> Also, we make some changes of the RestrictInfo fields (see patch [2])\n>> during removing of unneeded self joins.\n>> I propose to do more secure initialization way of required_relids (see\n>> patch in attachment).\n> \n> This seems fairly expensive (which is why it wasn't done like that\n> to start with) and you've pointed to no specific bug that it fixes.\n> Seeing that (a) the original commit is 14 years old, and (b) changing\n> either of these fields after-the-fact is at most a very niche usage,\n> I don't think we really have a problem here.\nIn the patch 'Removing unneeded self joins' [1] we modify both \nclause_relids and required_relids. Valgrind detected a problem: during \nan in-place change of required_relids, repalloc() was executed; in that \ncase, clause_relids ends up pointing to a freed memory block.\nGiven your answer, do you recommend making the bms_copy() \ncall before changing either of the clause_relids and required_relids fields?\n\n[1] https://commitfest.postgresql.org/23/1712/\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 22:08:41 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Insecure initialization of required_relids field"
}
] |
[
{
"msg_contents": "Hi,\n\nthe tip in the \"Adding a column\" section is not true anymore since PostgreSQL 11:\n\nhttps://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN\n\nAttached a patch proposal for this.\n\nRegards\nDaniel",
"msg_date": "Mon, 15 Jul 2019 11:01:00 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Documentation fix for adding a column with a default value"
},
{
"msg_contents": ">______________________________________\n>From: Daniel Westermann (DWE)\n>Sent: Monday, July 15, 2019 13:01\n>To: pgsql-hackers@postgresql.org\n>Subject: Documentation fix for adding a column with a default value\n>\n>Hi,\n>\n>the tip in the \"Adding a column\" section is not true anymore since PostgreSQL 11:\n>\n>https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN<https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN>\n>\n>Attached a patch proposal for this.\n\nSeems the first mail didn't make it ...\n\nRegards\nDaniel",
"msg_date": "Wed, 17 Jul 2019 06:42:13 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Fw: Documentation fix for adding a column with a default value"
},
{
"msg_contents": "On Wed, 17 Jul 2019 at 15:42, Daniel Westermann (DWE) <\ndaniel.westermann@dbi-services.com> wrote:\n\n> >______________________________________\n> >From: Daniel Westermann (DWE)\n> >Sent: Monday, July 15, 2019 13:01\n> >To: pgsql-hackers@postgresql.org\n> >Subject: Documentation fix for adding a column with a default value\n> >\n> >Hi,\n> >\n> >the tip in the \"Adding a column\" section is not true anymore since\n> PostgreSQL 11:\n> >\n>\n> >https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN\n> <https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN>\n> >\n> >Attached a patch proposal for this.\n>\n> Seems the first mail didn't make it ...\n>\n\nActually it did, I was about to reply to it :)\n\nThe suggested change pares down the \"Tip\" to more of a brief \"Note\", which\nIMHO is a bit\nterse for that section of the documentation (which has more of a tutorial\ncharacter),\nand the contents of the original tip basically still apply for volatile\ndefault values\nanyway.\n\nI've attached another suggestion for rewording this which should also make\nthe\nmechanics of the operation a little clearer.\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 17 Jul 2019 15:54:13 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fw: Documentation fix for adding a column with a default value"
},
{
"msg_contents": ">> Seems the first mail didn't make it ...\n\n>Actually it did, I was about to reply to it :)\n>\n>The suggested change pares down the \"Tip\" to more of a brief \"Note\", which IMHO is a bit\n>terse for that section of the documentation (which has more of a tutorial character),\n>and the contents of the original tip basically still apply for volatile default values\n>anyway.\n>\n>I've attached another suggestion for rewording this which should also make the\n>mechanics of the operation a little clearer.\n\nThank you, that better explains it. Looks good to me.\n\nRegards\nDaniel",
"msg_date": "Wed, 17 Jul 2019 09:08:36 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Documentation fix for adding a column with a default value"
},
{
"msg_contents": ">>The suggested change pares down the \"Tip\" to more of a brief \"Note\", which IMHO is a bit\n>>terse for that section of the documentation (which has more of a tutorial character),\n>>and the contents of the original tip basically still apply for volatile default values\n>>anyway.\n>>\n>>I've attached another suggestion for rewording this which should also make the\n>>mechanics of the operation a little clearer.\n\n>Thank you, that better explains it. Looks good to me.\n\nShouldn't we add that to the current commit fest?\n\nRegards\nDaniel\n\n\n\n",
"msg_date": "Thu, 18 Jul 2019 15:46:00 +0000",
"msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>",
"msg_from_op": true,
"msg_subject": "Re: Fw: Documentation fix for adding a column with a default value"
},
{
"msg_contents": "> On 18 Jul 2019, at 17:46, Daniel Westermann (DWE) <daniel.westermann@dbi-services.com> wrote:\n> \n>>> The suggested change pares down the \"Tip\" to more of a brief \"Note\", which IMHO is a bit\n>>> terse for that section of the documentation (which has more of a tutorial character),\n>>> and the contents of the original tip basically still apply for volatile default values\n>>> anyway.\n>>> \n>>> I've attached another suggestion for rewording this which should also make the\n>>> mechanics of the operation a little clearer.\n> \n>> Thank you, that better explains it. Looks good to me.\n> \n> Shouldn't we add that to the current commit fest?\n\nThe current commitfest is closed for new additions, but please add it to the\nnext one (2019-09) and it will be picked up then.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 18 Jul 2019 17:51:26 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Documentation fix for adding a column with a default value"
},
{
"msg_contents": "On 7/19/19 12:51 AM, Daniel Gustafsson wrote:\n>> On 18 Jul 2019, at 17:46, Daniel Westermann (DWE) <daniel.westermann@dbi-services.com> wrote:\n>>\n>>>> The suggested change pares down the \"Tip\" to more of a brief \"Note\", which IMHO is a bit\n>>>> terse for that section of the documentation (which has more of a tutorial character),\n>>>> and the contents of the original tip basically still apply for volatile default values\n>>>> anyway.\n>>>>\n>>>> I've attached another suggestion for rewording this which should also make the\n>>>> mechanics of the operation a little clearer.\n>>\n>>> Thank you, that better explains it. Looks good to me.\n>>\n>> Shouldn't we add that to the current commit fest?\n> \n> The current commitfest is closed for new additions, but please add it to the\n> next one (2019-09) and it will be picked up then.\n\nTo me it looks like a minor documentation correction to fix an omission\nfrom a patch already in PostgreSQL.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jul 2019 09:04:03 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation fix for adding a column with a default value"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 09:04:03AM +0900, Ian Barwick wrote:\n> To me it looks like a minor documentation correction to fix an omission\n> from a patch already in PostgreSQL.\n\nI think that it is better to register it in the commit fest anyway so\nas we don't lose track of it. Things tend to get lost easily as this\nlist has a lot of traffic.\n\nI have been looking at the original patch from Daniel and got\nsurprised by the simple removal of the paragraph as this applies to\n16828d5c where using volatile defaults still require a table rewrite.\nWell, this just comes back to the point raised by Ian upthread ;p\n\nExcept for a couple of misplaced and missing markups and one typo, the\nnew paragraph looked fine, so committed down to v11 after fixing the\nwhole.\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 11:46:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Documentation fix for adding a column with a default value"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI have been having a question about this with no answer from various sources\n. As known after dropping a column using 'alter table', table is not\nrewritten and vacuum full does not remove them also (still see the dropped\ncolumn in pg_attribute).\n\nPG document says:\n\nhttps://www.postgresql.org/docs/current/sql-altertable.html\n\n\"To force immediate reclamation of space occupied by a dropped column, you\ncan execute one of the forms of ALTER TABLE that performs a rewrite of the\nwhole table. This results in reconstructing each row with the dropped\ncolumn replaced by a null value.\"\n\nThis seems to a bit vague for users (how to rewrite but keep the table\ndefinition) and it seems to still keep the dropped columns (though with\nnull). Isn't it better to leave the functionality to command like 'vacuum\nfull' to completely remove the dropped columns (i.e. no dropped columns in\npg_attributes and no null values for dropped columns for a table)?\n\nThanks.",
"msg_date": "Mon, 15 Jul 2019 23:42:11 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "How to reclaim the space of dropped columns of a table?"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 8:42 AM Paul Guo <pguo@pivotal.io> wrote:\n\n> This seems to a bit vague for users (how to rewrite but keep the table\n> definition) and it seems to still keep the dropped columns (though with\n> null). Isn't it better to leave the functionality to command like 'vacuum\n> full' to completely remove the dropped columns (i.e. no dropped columns in\n> pg_attributes and no null values for dropped columns for a table)?\n>\n\nProbably. But it doesn't seem worth the effort to accomplish. The amount\nof data involved (and VACUUM FULL does perform the table rewrite described)\nto represent the missing column is minimal.\n\nDavid J.",
"msg_date": "Mon, 15 Jul 2019 08:55:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to reclaim the space of dropped columns of a table?"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Jul 15, 2019 at 8:42 AM Paul Guo <pguo@pivotal.io> wrote:\n>> This seems to a bit vague for users (how to rewrite but keep the table\n>> definition) and it seems to still keep the dropped columns (though with\n>> null). Isn't it better to leave the functionality to command like 'vacuum\n>> full' to completely remove the dropped columns (i.e. no dropped columns in\n>> pg_attributes and no null values for dropped columns for a table)?\n\n> Probably. But it doesn't seem worth the effort to accomplish. The amount\n> of data involved (and VACUUM FULL does perform the table rewrite described)\n> to represent the missing column is minimal.\n\nCompletely removing a column is pretty impractical, because that would\nrequire renumbering subsequent columns, which would have potential impacts\nthroughout the system catalogs (for example, in views referencing this\ntable, or foreign key info for other tables referencing this one).\n\nThere's been repeated discussion about separating the concepts of\na column's (a) permanent identifier for catalog purposes, (b)\nphysical position in table rows, and (c) logical position as\nreflected in \"SELECT *\" ordering. If we had that, this sort of\nthing would be much more practical. But making that happen is a\nlarge and very bug-prone task, so it hasn't been done (yet).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 12:52:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to reclaim the space of dropped columns of a table?"
}
] |
[
{
"msg_contents": "Attached patch slightly simplifies nbtsort.c by making it use\nPageIndexTupleOverwrite() to overwrite the last right non-pivot tuple\nwith the new high key (pivot tuple). PageIndexTupleOverwrite() is\ndesigned so that code like this doesn't need to delete and re-insert\nto replace an existing tuple.\n\nThis slightly simplifies the code, and also makes it marginally\nfaster. I'll add this to the 2019-09 CF.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 15 Jul 2019 15:12:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Use PageIndexTupleOverwrite() within nbtsort.c"
},
{
"msg_contents": "16.07.2019 1:12, Peter Geoghegan wrote:\n> Attached patch slightly simplifies nbtsort.c by making it use\n> PageIndexTupleOverwrite() to overwrite the last right non-pivot tuple\n> with the new high key (pivot tuple). PageIndexTupleOverwrite() is\n> designed so that code like this doesn't need to delete and re-insert\n> to replace an existing tuple.\n>\n> This slightly simplifies the code, and also makes it marginally\n> faster. I'll add this to the 2019-09 CF.\n\nI'm okay with this patch.\n\nShould we also update similar code in _bt_mark_page_halfdead()?\nI attached a new version of the patch with this change.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 6 Aug 2019 18:30:08 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Use PageIndexTupleOverwrite() within nbtsort.c"
},
{
"msg_contents": "On Tue, Aug 6, 2019 at 8:30 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> Should we also update similar code in _bt_mark_page_halfdead()?\n> I attached a new version of the patch with this change.\n\nPushed.\n\nAt first I thought that there might be a problem with doing the same\nthing within _bt_mark_page_halfdead(), because we still won't use\nPageIndexTupleOverwrite() in the corresponding recovery routine -- in\ntheory, that could break WAL consistency checking because the redo\nroutine works by reconstructing a half-deleted leaf page from scratch,\nresulting in a logically equivalent though physically different page\n(even after masking within btree_mask()). However, I eventually\ndecided that you had it right. Your _bt_mark_page_halfdead() change is\nclearer overall and doesn't break WAL consistency checking in\npractice, for reasons that are no less obvious than before.\n\nThanks!\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Aug 2019 12:00:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Use PageIndexTupleOverwrite() within nbtsort.c"
}
] |
[
{
"msg_contents": "Attached patch slightly simplifies _bt_getstackbuf() by making it\naccept a child BlockNumber argument, rather than requiring that\ncallers store the child block number in the parent stack item's\nbts_btentry field. We can remove the bts_btentry field from the\nBTStackData struct, because we know where we ended up when we split a\npage and need to relocate parent to insert new downlink -- it's only\ntruly necessary to remember what pivot tuple/downlink we followed to\narrive at the page being split. There is no point in remembering the\nchild block number during our initial descent of a B-Tree, since it's\nnever actually used at a later point, and can go stale immediately\nafter the buffer lock on parent is released. Besides,\n_bt_getstackbuf() callers can even redefine the definition of child to\nbe child's right sibling after the descent is over. For example, this\nhappens when we move right, or when we step right during unique index\ninsertion.\n\nThis slightly simplifies the code. Our stack is inherently\napproximate, because we might have to move right for a number of\nreasons.\n\nI'll add the patch to the 2019-09 CF.\n-- \nPeter Geoghegan",
"msg_date": "Mon, 15 Jul 2019 16:16:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Removing unneeded downlink field from nbtree stack struct"
},
{
"msg_contents": "16.07.2019 2:16, Peter Geoghegan wrote:\n> Attached patch slightly simplifies _bt_getstackbuf() by making it\n> accept a child BlockNumber argument, rather than requiring that\n> callers store the child block number in the parent stack item's\n> bts_btentry field. We can remove the bts_btentry field from the\n> BTStackData struct, because we know where we ended up when we split a\n> page and need to relocate parent to insert new downlink -- it's only\n> truly necessary to remember what pivot tuple/downlink we followed to\n> arrive at the page being split. There is no point in remembering the\n> child block number during our initial descent of a B-Tree, since it's\n> never actually used at a later point, and can go stale immediately\n> after the buffer lock on parent is released. Besides,\n> _bt_getstackbuf() callers can even redefine the definition of child to\n> be child's right sibling after the descent is over. For example, this\n> happens when we move right, or when we step right during unique index\n> insertion.\n>\n> This slightly simplifies the code. Our stack is inherently\n> approximate, because we might have to move right for a number of\n> reasons.\n>\n> I'll add the patch to the 2019-09 CF.\n\n\nThe refactoring is clear, so I set Ready for committer status.\nI have just a couple of notes about comments:\n\n1) I think that it's worth to add explanation of the case when we use \nright sibling to this comment:\n+ * stack to work back up to the parent page. We use the \nchild block\n+ * number (or possibly the block number of a page to its \nright)\n\n2) It took me quite some time to understand why does page deletion case \ndoesn't need a lock.\nI propose to add something like \"For more see comments for \n_bt_lock_branch_parent()\" to this line:\n\nPage deletion caller\n+ * can get away with a lock on leaf level page when \nlocating topparent\n+ * downlink, though.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 12 Aug 2019 19:42:55 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Removing unneeded downlink field from nbtree stack struct"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 9:43 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> The refactoring is clear, so I set Ready for committer status.\n> I have just a couple of notes about comments:\n>\n> 1) I think that it's worth to add explanation of the case when we use\n> right sibling to this comment:\n> + * stack to work back up to the parent page. We use the\n> child block\n> + * number (or possibly the block number of a page to its\n> right)\n\nThat appears over _bt_getstackbuf().\n\n> 2) It took me quite some time to understand why does page deletion case\n> doesn't need a lock.\n> I propose to add something like \"For more see comments for\n> _bt_lock_branch_parent()\" to this line:\n\nI ended up removing the reference to page deletion here (actually, I\nremoved the general discussion about the need to keep the child page\nlocked). This seemed like something that was really up to the callers.\n\nPushed a version with that change. Thanks for the review!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 14 Aug 2019 11:33:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Removing unneeded downlink field from nbtree stack struct"
}
] |
[
{
"msg_contents": "Greetings Hackers.\n\nWe have a reproduceable case of $subject that issues a backtrace such as\nseen below.\n\nThe query that I'd prefer to sanitize before sending is <30 lines of at\na glance, not terribly complex logic.\n\nIt nonetheless dies hard after a few seconds of running and as expected,\nresults in an automatic all-backend restart.\n\nPlease advise on how to proceed. Thanks!\n\nbt\n#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n<about 30 lines omitted>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Mon, 15 Jul 2019 18:48:05 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>Greetings Hackers.\n>\n>We have a reproduceable case of $subject that issues a backtrace such as\n>seen below.\n>\n>The query that I'd prefer to sanitize before sending is <30 lines of at\n>a glance, not terribly complex logic.\n>\n>It nonetheless dies hard after a few seconds of running and as expected,\n>results in an automatic all-backend restart.\n>\n>Please advise on how to proceed. Thanks!\n>\n>bt\n>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n><about 30 lines omitted>\n>\n\nHmmm, that means it's crashing here:\n\n if (scan->rs_parallel != NULL)\n scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n else\n scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n\nBut clearly, scan is valid (otherwise it'd crash on the if condition),\nand scan->rs_parallel must me non-NULL. Which probably means the pointer\nis (no longer) valid.\n\nCould it be that the rs_parallel DSM disappears on rescan, or something\nlike that?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 16 Jul 2019 02:15:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n\n> On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>\n>>Greetings Hackers.\n>>\n>>We have a reproduceable case of $subject that issues a backtrace such as\n>>seen below.\n>>\n>>The query that I'd prefer to sanitize before sending is <30 lines of at\n>>a glance, not terribly complex logic.\n>>\n>>It nonetheless dies hard after a few seconds of running and as expected,\n>>results in an automatic all-backend restart.\n>>\n>>Please advise on how to proceed. Thanks!\n>>\n>>bt\n>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>>#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n>><about 30 lines omitted>\n>>\n>\n> Hmmm, that means it's crashing here:\n>\n> if (scan->rs_parallel != NULL)\n> scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n> else\n> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n>\n> But clearly, scan is valid (otherwise it'd crash on the if condition),\n> and scan->rs_parallel must me non-NULL. Which probably means the pointer\n> is (no longer) valid.\n>\n> Could it be that the rs_parallel DSM disappears on rescan, or something\n> like that?\n\nNo clue but something I just tried was to disable parallelism by setting\nmax_parallel_workers_per_gather to 0 and however the query has not\nfinished after a few minutes, there is no crash.\n\nPlease advise.\n\nThx\n\n>\n>\n> regards\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Mon, 15 Jul 2019 19:22:55 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 07:22:55PM -0500, Jerry Sievers wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>\n>> On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>>\n>>>Greetings Hackers.\n>>>\n>>>We have a reproduceable case of $subject that issues a backtrace such as\n>>>seen below.\n>>>\n>>>The query that I'd prefer to sanitize before sending is <30 lines of at\n>>>a glance, not terribly complex logic.\n>>>\n>>>It nonetheless dies hard after a few seconds of running and as expected,\n>>>results in an automatic all-backend restart.\n>>>\n>>>Please advise on how to proceed. Thanks!\n>>>\n>>>bt\n>>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n>>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>>>#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>>>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n>>><about 30 lines omitted>\n>>>\n>>\n>> Hmmm, that means it's crashing here:\n>>\n>> if (scan->rs_parallel != NULL)\n>> scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n>> else\n>> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n>>\n>> But clearly, scan is valid (otherwise it'd crash on the if condition),\n>> and scan->rs_parallel must me non-NULL. Which probably means the pointer\n>> is (no longer) valid.\n>>\n>> Could it be that the rs_parallel DSM disappears on rescan, or something\n>> like that?\n>\n>No clue but something I just tried was to disable parallelism by setting\n>max_parallel_workers_per_gather to 0 and however the query has not\n>finished after a few minutes, there is no crash.\n>\n\nThat might be a hint my rough analysis was somewhat correct. The\nquestion is whether the non-parallel plan does the same thing. Maybe it\npicks a plan that does not require rescans, or something like that.\n\n>Please advise.\n>\n\nIt would be useful to see (a) exacution plan of the query, (b) full\nbacktrace and (c) a bit of context for the place where it crashed.\n\nSomething like (in gdb):\n\n bt full\n list\n p *scan\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 16 Jul 2019 02:34:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n\n> On Mon, Jul 15, 2019 at 07:22:55PM -0500, Jerry Sievers wrote:\n>\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>\n>>> On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>>>\n>>>>Greetings Hackers.\n>>>>\n>>>>We have a reproduceable case of $subject that issues a backtrace such as\n>>>>seen below.\n>>>>\n>>>>The query that I'd prefer to sanitize before sending is <30 lines of at\n>>>>a glance, not terribly complex logic.\n>>>>\n>>>>It nonetheless dies hard after a few seconds of running and as expected,\n>>>>results in an automatic all-backend restart.\n>>>>\n>>>>Please advise on how to proceed. Thanks!\n>>>>\n>>>>bt\n>>>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n>>>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>>>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>>>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>>>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>>>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>>>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>>>>#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>>>>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n>>>><about 30 lines omitted>\n>>>>\n>>>\n>>> Hmmm, that means it's crashing here:\n>>>\n>>> if (scan->rs_parallel != NULL)\n>>> scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n>>> else\n>>> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n>>>\n>>> But clearly, scan is valid (otherwise it'd crash on the if condition),\n>>> and scan->rs_parallel must me non-NULL. Which probably means the pointer\n>>> is (no longer) valid.\n>>>\n>>> Could it be that the rs_parallel DSM disappears on rescan, or something\n>>> like that?\n>>\n>>No clue but something I just tried was to disable parallelism by setting\n>>max_parallel_workers_per_gather to 0 and however the query has not\n>>finished after a few minutes, there is no crash.\n>>\n>\n> That might be a hint my rough analysis was somewhat correct. The\n> question is whether the non-parallel plan does the same thing. 
Maybe it\n> picks a plan that does not require rescans, or something like that.\n>\n>>Please advise.\n>>\n>\n> It would be useful to see (a) exacution plan of the query, (b) full\n> backtrace and (c) a bit of context for the place where it crashed.\n>\n> Something like (in gdb):\n>\n> bt full\n> list\n> p *scan\n\nThe p *scan did nothing unless I ran it first however my gdb $foo isn't\nstrong presently.\n\nI'll need to sanitize the explain output but can do so ASAP and send it\nalong.\n\nThx!\n\n\n$ gdb /usr/lib/postgresql/9.6/bin/postgres core\nGNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1\nCopyright (C) 2016 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\"...\nReading symbols from /usr/lib/postgresql/9.6/bin/postgres...Reading symbols from /usr/lib/debug/.build-id/04/6f55a5ce6ce05064edfc8feee61c6cb039d296.debug...done.\ndone.\n[New LWP 31654]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: foo_eis_segfault: jsievers staging 10.220.22.26(57948) SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n233\t/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c: No such file or directory.\n(gdb) p *scan\n$1 = {rs_rd = 0x7fa6c6935a08, rs_snapshot = 0x55d7a7c2e630, rs_nkeys = 0, rs_key = 0x0, rs_bitmapscan = 0 '\\000', rs_samplescan = 0 '\\000', rs_pageatatime = 1 '\\001', \n rs_allow_strat = 1 '\\001', rs_allow_sync = 1 '\\001', rs_temp_snap = 1 '\\001', rs_nblocks = 198714, rs_startblock = 1920300133, rs_numblocks = 4294967295, rs_strategy = 0x55d7a7daa6a0, \n rs_syncscan = 1 '\\001', rs_inited = 0 '\\000', rs_ctup = {t_len = 114, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 62879}, ip_posid = 77}, t_tableOid = 994804890, t_data = 0x0}, \n rs_cblock = 4294967295, rs_cbuf = 0, rs_parallel = 0x7fa673a54108, rs_cindex = 76, rs_ntuples = 77, rs_vistuples = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, \n 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, \n 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 28255, 25711, 24421, 25705, 12576, 8247, 29754, 29281, 25959, 27764, 29545, 8308, 31528, 16724, 18258, 21573, 20037, 21076, \n 8281, 25914, 28792, 8306, 22139, 21057, 14880, 24950, 28274, 8303, 12337, 14880, 24950, 24946, 29812, 28526, 13088, 14880, 24950, 29810, 28793, 8293, 13106, 14880, 24950, 29810, 28793, \n 28525, 8292, 12589, 14880, 24950, 25458, 27759, 26988, 8292, 8240, 30266, 29281, 25964, 25974, 29548, 28789, 12320, 14880, 24950, 28274, 28527, 25708, 12576, 8240, 30266, 29281, 24943, \n 29812, 28526, 13088, 14880, 28524, 24931, 26996, 28271, 13344, 13110, 8317, 29242, 29541, 28526, 12576, 14880, 25970, 28275, 28001, 8293, 15932, 14880, 25970, 29555, 29295, 26484, \n 28530, 28789, 25970, 8294, 8240, 29242, 29541, 29295, 26473, 25204, 8300, 8240, 29242, 29541, 
29295, 26473, 28515, 8300, 8240, 29242, 29541...}}\n(gdb) bt full\n#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n allow_strat = <optimized out>\n allow_sync = <optimized out>\n#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n __func__ = \"heap_rescan\"\n#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n scan = <optimized out>\n#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n __func__ = \"ExecReScan\"\n#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\nNo locals.\n#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n __func__ = \"ExecReScan\"\n#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\nNo locals.\n#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n __func__ = \"ExecReScan\"\n#8 0x000055d7a7433ce7 in ExecProcNode (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:381\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#9 0x000055d7a7452989 in ExecSort (node=node@entry=0x55d7a7d83ea0) at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSort.c:103\n plannode = <optimized out>\n outerNode = 0x55d7a7d84110\n tupDesc = <optimized out>\n estate = 0x55d7a7d5fee8\n dir = ForwardScanDirection\n tuplesortstate = 0x55d7a7dd2448\n slot = <optimized out>\n#10 0x000055d7a7433de8 in ExecProcNode (node=0x55d7a7d83ea0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:495\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#11 0x000055d7a743ffe9 in fetch_input_tuple (aggstate=aggstate@entry=0x55d7a7d83528) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:598\n slot = <optimized out>\n#12 0x000055d7a7441bb3 in agg_retrieve_direct (aggstate=0x55d7a7d83528) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:2078\n econtext = 0x55d7a7d838b0\n pergroup = 0x55d7a7d8e758\n firstSlot = 0x55d7a7d83960\n numGroupingSets = 1\n node = 0x7fa6c68a5da8\n tmpcontext = 0x55d7a7d83750\n peragg = 0x55d7a7d8d6b8\n outerslot = <optimized out>\n nextSetSize = <optimized out>\n result = <optimized out>\n hasGroupingSets = 0 '\\000'\n currentSet = <optimized out>\n numReset = 1\n i = <optimized out>\n---Type <return> to continue, or q <return> to quit---\n#13 ExecAgg (node=node@entry=0x55d7a7d83528) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:1903\n result = <optimized out>\n#14 0x000055d7a7433dc8 in ExecProcNode (node=node@entry=0x55d7a7d83528) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:503\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#15 0x000055d7a744af74 in ExecLimit (node=node@entry=0x55d7a7d83288) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:91\n direction = ForwardScanDirection\n slot = <optimized out>\n outerPlan = 0x55d7a7d83528\n 
__func__ = \"ExecLimit\"\n#16 0x000055d7a7433d28 in ExecProcNode (node=node@entry=0x55d7a7d83288) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:531\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#17 0x000055d7a744ff69 in ExecNestLoop (node=node@entry=0x55d7a7d60cd0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeNestloop.c:174\n nl = 0x7fa6c68a6048\n innerPlan = 0x55d7a7d83288\n outerPlan = 0x55d7a7d610c0\n outerTupleSlot = <optimized out>\n innerTupleSlot = <optimized out>\n joinqual = 0x0\n otherqual = 0x0\n econtext = 0x55d7a7d60de0\n lc = <optimized out>\n#18 0x000055d7a7433e28 in ExecProcNode (node=node@entry=0x55d7a7d60cd0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:476\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#19 0x000055d7a7452989 in ExecSort (node=node@entry=0x55d7a7d60a60) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSort.c:103\n plannode = <optimized out>\n outerNode = 0x55d7a7d60cd0\n tupDesc = <optimized out>\n estate = 0x55d7a7d5fee8\n dir = ForwardScanDirection\n tuplesortstate = 0x55d7a7d98398\n slot = <optimized out>\n#20 0x000055d7a7433de8 in ExecProcNode (node=0x55d7a7d60a60) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:495\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#21 0x000055d7a743ffe9 in fetch_input_tuple (aggstate=aggstate@entry=0x55d7a7d60088) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:598\n slot = <optimized out>\n#22 0x000055d7a7441bb3 in agg_retrieve_direct (aggstate=0x55d7a7d60088) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:2078\n econtext = 0x55d7a7d60440\n pergroup = 0x55d7a7d91230\n firstSlot = 0x55d7a7d604f0\n numGroupingSets = 1\n node = 
0x7fa6c68a6328\n tmpcontext = 0x55d7a7d602b0\n peragg = 0x55d7a7d90190\n outerslot = <optimized out>\n nextSetSize = <optimized out>\n---Type <return> to continue, or q <return> to quit---\n result = <optimized out>\n hasGroupingSets = 0 '\\000'\n currentSet = <optimized out>\n numReset = 1\n i = <optimized out>\n#23 ExecAgg (node=node@entry=0x55d7a7d60088) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:1903\n result = <optimized out>\n#24 0x000055d7a7433dc8 in ExecProcNode (node=node@entry=0x55d7a7d60088) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:503\n result = <optimized out>\n __func__ = \"ExecProcNode\"\n#25 0x000055d7a742ff2e in ExecutePlan (dest=0x7fa673a96308, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, \n planstate=0x55d7a7d60088, estate=0x55d7a7d5fee8) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execMain.c:1567\n slot = <optimized out>\n current_tuple_count = 0\n#26 standard_ExecutorRun (queryDesc=0x55d7a7d54718, direction=<optimized out>, count=0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execMain.c:339\n estate = 0x55d7a7d5fee8\n operation = CMD_SELECT\n dest = 0x7fa673a96308\n sendTuples = <optimized out>\n#27 0x00007fa6c7027515 in explain_ExecutorRun (queryDesc=0x55d7a7d54718, direction=ForwardScanDirection, count=0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../contrib/auto_explain/auto_explain.c:281\n save_exception_stack = 0x7fff4aeeaa80\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {94384722106264, 8229023444991490729, 0, 94384722102040, 0, 1, 8229023444890827433, 8250672449167702697}, __mask_was_saved = 0, __saved_mask = {\n __val = {94384721739856, 140734450543072, 94384714940022, 140354273004312, 140354273004312, 140734450543104, 94384714691234, 
2, 2, 140734450543200, 94384711690034, 2, \n 3462443396, 8388608, 3547611511646930944, 140734450543200}}}}\n#28 0x00007fa6c6e1fdb0 in pgss_ExecutorRun (queryDesc=0x55d7a7d54718, direction=ForwardScanDirection, count=0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../contrib/pg_stat_statements/pg_stat_statements.c:875\n save_exception_stack = 0x7fff4aeeac20\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {94384722106264, 8229023444960033449, 0, 94384722102040, 0, 1, 8229023444993587881, 8250670555334589097}, __mask_was_saved = 0, __saved_mask = {\n __val = {4294967296, 140354272256808, 94384714928429, 16, 94384719269552, 24, 94384720895528, 94384722102040, 0, 140734450543408, 94384714928429, 94384722106264, \n 94384720895528, 140734450543440, 94384714994982, 94384722106264}}}}\n#29 0x000055d7a7553167 in PortalRunSelect (portal=portal@entry=0x55d7a7d55798, forward=forward@entry=1 '\\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x7fa673a96308)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/pquery.c:948\n queryDesc = 0x55d7a7d54718\n direction = <optimized out>\n nprocessed = <optimized out>\n __func__ = \"PortalRunSelect\"\n#30 0x000055d7a75547a0 in PortalRun (portal=portal@entry=0x55d7a7d55798, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\\001', dest=dest@entry=0x7fa673a96308, \n altdest=altdest@entry=0x7fa673a96308, completionTag=completionTag@entry=0x7fff4aeeb050 \"\") at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/pquery.c:789\n save_exception_stack = 0x7fff4aeeaf00\n save_context_stack = 0x0\n local_sigjmp_buf = {{__jmpbuf = {94384721085312, 8229023445033433769, 94384722106264, 140352881779464, 94384721085584, 2, 8229023444955839145, 2765920793019169449}, \n __mask_was_saved = 0, __saved_mask = {__val = {0, 12099560782865280144, 0, 8, 8, 140734450544226, 1, 88, 94384722106264, 94384715935530, 94384721085584, 
140734450543840, \n 94384714930017, 2, 94384722106264, 140734450543872}}}}\n result = <optimized out>\n nprocessed = <optimized out>\n saveTopTransactionResourceOwner = 0x55d7a7c118e8\n---Type <return> to continue, or q <return> to quit---\n saveTopTransactionContext = 0x55d7a7c10eb8\n saveActivePortal = 0x0\n saveResourceOwner = 0x55d7a7c118e8\n savePortalContext = 0x0\n saveMemoryContext = 0x55d7a7c10eb8\n __func__ = \"PortalRun\"\n#31 0x000055d7a75512d6 in exec_simple_query (\n query_string=0x55d7a7ce6b38 \"select v.account_id, COUNT(cnt.clicks), te.description,\\nl.product_id\\nfrom nbox_nc_ah.tracking_events te\\njoin nbox_nc_ah.page_views pv on pv.page_view_id = te.page_view_id\\njoin nbox_nc_ah.visits v on v\"...) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/postgres.c:1109\n parsetree = 0x55d7a7c5c380\n portal = 0x55d7a7d55798\n snapshot_set = <optimized out>\n commandTag = <optimized out>\n completionTag = \"\\000\\370\\253\\247\\327U\\000\\000\\240F\\252\\247\\327U\\000\\000\\200\\260\\356J\\377\\177\\000\\000\\215\\326g\\247\\327U\\000\\000\\300\\260\\356J\\377\\177\\000\\000d\\261\\356J\\377\\177\\000\\000\\240\\260\\356J\\377\\177\\000\\000v\\031F\\247\\327U\\000\"\n querytree_list = <optimized out>\n plantree_list = 0x7fa673a962d8\n receiver = 0x7fa673a96308\n format = 0\n dest = DestRemote\n parsetree_list = 0x55d7a7c5c4b0\n save_log_statement_stats = 0 '\\000'\n was_logged = 0 '\\000'\n msec_str = \"\\020\\261\\356J\\377\\177\\000\\000(\\002\", '\\000' <repeats 14 times>, \"\\340?\\256\\247\\327U\\000\"\n parsetree_item = 0x55d7a7c5c490\n isTopLevel = 1 '\\001'\n#32 PostgresMain (argc=<optimized out>, argv=argv@entry=0x55d7a7c56830, dbname=0x55d7a7c11b88 \"staging\", username=<optimized out>)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/postgres.c:4101\n query_string = 0x55d7a7ce6b38 \"select v.account_id, COUNT(cnt.clicks), te.description,\\nl.product_id\\nfrom 
nbox_nc_ah.tracking_events te\\njoin nbox_nc_ah.page_views pv on pv.page_view_id = te.page_view_id\\njoin nbox_nc_ah.visits v on v\"...\n firstchar = -1479190632\n input_message = {\n data = 0x55d7a7ce6b38 \"select v.account_id, COUNT(cnt.clicks), te.description,\\nl.product_id\\nfrom nbox_nc_ah.tracking_events te\\njoin nbox_nc_ah.page_views pv on pv.page_view_id = te.page_view_id\\njoin nbox_nc_ah.visits v on v\"..., len = 1042, maxlen = 2048, cursor = 1042}\n local_sigjmp_buf = {{__jmpbuf = {140734450544288, 8229023445169748649, 94384721061936, 1, 94384721061720, 94384721052928, 8229023445035530921, 2765920790734322345}, \n __mask_was_saved = 1, __saved_mask = {__val = {0, 94386201296895, 94384713689589, 18446603339259007057, 140354407146656, 0, 1305670059009, 32, 4, 489626271867, 0, 0, \n 532575944823, 140734450544608, 0, 140734450544704}}}}\n send_ready_for_query = 0 '\\000'\n disable_idle_in_transaction_timeout = <optimized out>\n __func__ = \"PostgresMain\"\n#33 0x000055d7a72c6a1b in BackendRun (port=0x55d7a7c54500) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:4339\n ac = 1\n secs = 616545808\n usecs = 503344\n i = 1\n av = 0x55d7a7c56830\n maxac = <optimized out>\n#34 BackendStartup (port=0x55d7a7c54500) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:4013\n bn = <optimized out>\n---Type <return> to continue, or q <return> to quit---\n pid = <optimized out>\n#35 ServerLoop () at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:1722\n rmask = {fds_bits = {16, 0 <repeats 15 times>}}\n selres = <optimized out>\n now = <optimized out>\n readmask = {fds_bits = {48, 0 <repeats 15 times>}}\n last_lockfile_recheck_time = 1563230588\n last_touch_time = 1563230588\n __func__ = \"ServerLoop\"\n#36 0x000055d7a74ed281 in PostmasterMain (argc=13, argv=<optimized out>) at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:1330\n opt = <optimized out>\n status = <optimized out>\n userDoption = <optimized out>\n listen_addr_saved = 1 '\\001'\n i = <optimized out>\n output_config_variable = <optimized out>\n __func__ = \"PostmasterMain\"\n#37 0x000055d7a72c7bf1 in main (argc=13, argv=0x55d7a7c0f840) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/main/main.c:228\nNo locals.\n(gdb) list\n228\tin /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c\n(gdb) \n\n>\n>\n>\n> regards\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Mon, 15 Jul 2019 20:20:00 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Jul 15, 2019 at 08:20:00PM -0500, Jerry Sievers wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>\n>> On Mon, Jul 15, 2019 at 07:22:55PM -0500, Jerry Sievers wrote:\n>>\n>>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>\n>>>> On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>>>>\n>>>>>Greetings Hackers.\n>>>>>\n>>>>>We have a reproduceable case of $subject that issues a backtrace such as\n>>>>>seen below.\n>>>>>\n>>>>>The query that I'd prefer to sanitize before sending is <30 lines of at\n>>>>>a glance, not terribly complex logic.\n>>>>>\n>>>>>It nonetheless dies hard after a few seconds of running and as expected,\n>>>>>results in an automatic all-backend restart.\n>>>>>\n>>>>>Please advise on how to proceed. Thanks!\n>>>>>\n>>>>>bt\n>>>>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n>>>>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>>>>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>>>>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>>>>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>>>>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>>>>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>>>>>#6 0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>>>>>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n>>>>><about 30 lines omitted>\n>>>>>\n>>>>\n>>>> Hmmm, that means it's crashing here:\n>>>>\n>>>> if (scan->rs_parallel != NULL)\n>>>> scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n>>>> else\n>>>> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n>>>>\n>>>> But clearly, scan is valid (otherwise it'd crash on the if condition),\n>>>> and scan->rs_parallel must me non-NULL. Which probably means the pointer\n>>>> is (no longer) valid.\n>>>>\n>>>> Could it be that the rs_parallel DSM disappears on rescan, or something\n>>>> like that?\n>>>\n>>>No clue but something I just tried was to disable parallelism by setting\n>>>max_parallel_workers_per_gather to 0 and however the query has not\n>>>finished after a few minutes, there is no crash.\n>>>\n>>\n>> That might be a hint my rough analysis was somewhat correct. The\n>> question is whether the non-parallel plan does the same thing. Maybe it\n>> picks a plan that does not require rescans, or something like that.\n>>\n>>>Please advise.\n>>>\n>>\n>> It would be useful to see (a) exacution plan of the query, (b) full\n>> backtrace and (c) a bit of context for the place where it crashed.\n>>\n>> Something like (in gdb):\n>>\n>> bt full\n>> list\n>> p *scan\n>\n>The p *scan did nothing unless I ran it first however my gdb $foo isn't\n>strong presently.\n\nHmm, the rs_parallel pointer looks sane (it's not obvious garbage). Can\nyou try this?\n\n p *scan->rs_parallel\n\nAnother question - are you sure this is not an OOM issue? That might\nsometimes look like SIGSEGV due to overcommit. 
What's the memory\nconsumption / is there anything in dmesg?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 16 Jul 2019 10:22:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 8:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Mon, Jul 15, 2019 at 08:20:00PM -0500, Jerry Sievers wrote:\n> >>>>>We have a reproduceable case of $subject that issues a backtrace such as\n> >>>>>seen below.\n\n> >>>>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n> >>>>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n> >>>>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n> >>>>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n> >>>>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n> >>>>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n> >>>>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n\nHere's a query that rescans a gather node repeatedly on 9.6 in case it\nhelps someone build a repro, but it works fine here.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 16 Jul 2019 22:42:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n\n> On Mon, Jul 15, 2019 at 08:20:00PM -0500, Jerry Sievers wrote:\n>\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>\n>>> On Mon, Jul 15, 2019 at 07:22:55PM -0500, Jerry Sievers wrote:\n>>>\n>>>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>>\n>>>>> On Mon, Jul 15, 2019 at 06:48:05PM -0500, Jerry Sievers wrote:\n>>>>>\n>>>>>>Greetings Hackers.\n>>>>>>\n>>>>>>We have a reproduceable case of $subject that issues a backtrace such as\n>>>>>>seen below.\n>>>>>>\n>>>>>>The query that I'd prefer to sanitize before sending is <30 lines of at\n>>>>>>a glance, not terribly complex logic.\n>>>>>>\n>>>>>>It nonetheless dies hard after a few seconds of running and as expected,\n>>>>>>results in an automatic all-backend restart.\n>>>>>>\n>>>>>>Please advise on how to proceed. Thanks!\n>>>>>>\n>>>>>>bt\n>>>>>>#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n>>>>>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n>>>>>>#1 0x000055d7a72fa8d0 in heap_rescan (scan=0x55d7a7daa0b0, key=key@entry=0x0) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1529\n>>>>>>#2 0x000055d7a7451fef in ExecReScanSeqScan (node=node@entry=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:280\n>>>>>>#3 0x000055d7a742d36e in ExecReScan (node=0x55d7a7d85100) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:158\n>>>>>>#4 0x000055d7a7445d38 in ExecReScanGather (node=node@entry=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:475\n>>>>>>#5 0x000055d7a742d255 in ExecReScan (node=0x55d7a7d84d30) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:166\n>>>>>>#6 
0x000055d7a7448673 in ExecReScanHashJoin (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeHashjoin.c:1019\n>>>>>>#7 0x000055d7a742d29e in ExecReScan (node=node@entry=0x55d7a7d84110) at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execAmi.c:226\n>>>>>><about 30 lines omitted>\n>>>>>>\n>>>>>\n>>>>> Hmmm, that means it's crashing here:\n>>>>>\n>>>>> if (scan->rs_parallel != NULL)\n>>>>> scan->rs_nblocks = scan->rs_parallel->phs_nblocks; <--- here\n>>>>> else\n>>>>> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_rd);\n>>>>>\n>>>>> But clearly, scan is valid (otherwise it'd crash on the if condition),\n>>>>> and scan->rs_parallel must me non-NULL. Which probably means the pointer\n>>>>> is (no longer) valid.\n>>>>>\n>>>>> Could it be that the rs_parallel DSM disappears on rescan, or something\n>>>>> like that?\n>>>>\n>>>>No clue but something I just tried was to disable parallelism by setting\n>>>>max_parallel_workers_per_gather to 0 and however the query has not\n>>>>finished after a few minutes, there is no crash.\n>>>>\n>>>\n>>> That might be a hint my rough analysis was somewhat correct. The\n>>> question is whether the non-parallel plan does the same thing. Maybe it\n>>> picks a plan that does not require rescans, or something like that.\n>>>\n>>>>Please advise.\n>>>>\n>>>\n>>> It would be useful to see (a) exacution plan of the query, (b) full\n>>> backtrace and (c) a bit of context for the place where it crashed.\n>>>\n>>> Something like (in gdb):\n>>>\n>>> bt full\n>>> list\n>>> p *scan\n>>\n>>The p *scan did nothing unless I ran it first however my gdb $foo isn't\n>>strong presently.\n>\n> Hmm, the rs_parallel pointer looks sane (it's not obvious garbage). 
Can\n> you try this?\n>\n> p *scan->rs_parallel\n\n\n$ gdb /usr/lib/postgresql/9.6/bin/postgres core\nGNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1\nCopyright (C) 2016 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\"...\nReading symbols from /usr/lib/postgresql/9.6/bin/postgres...Reading symbols from /usr/lib/debug/.build-id/04/6f55a5ce6ce05064edfc8feee61c6cb039d296.debug...done.\ndone.\n[New LWP 31654]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: foo_eis_segfault: jsievers staging 10.220.22.26(57948) SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 initscan (scan=scan@entry=0x55d7a7daa0b0, key=0x0, keep_startblock=keep_startblock@entry=1 '\\001')\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:233\n233\t/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c: No such file or directory.\n(gdb) p *scan->rs_parallel\nCannot access memory at address 0x7fa673a54108\n(gdb) \n\n>\n> Another question - are you sure this is not an OOM issue? That might\n> sometimes look like SIGSEGV due to overcommit. 
What's the memory\n> consumption / is there anything in dmesg?\n\nBelow is all I got after a prior dmesg -c...\n\ndmesg -c\n[5441294.442062] postgres[12033]: segfault at 7f3d011d2110 ip 000055666def9a31 sp 00007ffc37be9a70 error 4 in postgres[55666de23000+653000]\n\nThanks!\n\n>\n> regards\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 18:05:44 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 11:06 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n> (gdb) p *scan->rs_parallel\n> Cannot access memory at address 0x7fa673a54108\n\nSo I guess one question is: was it a valid address that's been\nunexpectedly unmapped, or is the pointer corrupted? Any chance you\ncan strace the backend and pull out the map, unmap calls?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:11:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 11:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> map, unmap\n\nmmap, munmap\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:13:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jul 17, 2019 at 11:06 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n>\n>> (gdb) p *scan->rs_parallel\n>> Cannot access memory at address 0x7fa673a54108\n>\n> So I guess one question is: was it a valid address that's been\n> unexpectedly unmapped, or is the pointer corrupted? Any chance you\n> can strace the backend and pull out the map, unmap calls?\n\nI'll dig further.\n\nHere is a sanitized look at the query and explain plan...\n\nThe segfault happens $immediately upon issuance of the query.\n\n\n\n\n\nbegin;\n\n-- This setting makes the segfault go away\n--set local max_parallel_workers_per_gather to 0;\n\nexplain\nselect v.account_id, COUNT(cnt.clicks), te.description,\nl.product_id\nfrom thing3.thing10 te\njoin thing3.thing9 pv on pv.page_view_id = te.page_view_id\njoin thing3.thing11 v on v.visit_id = pv.visit_id\nleft join thing6.thing12 l on v.account_id=l.account_id\n left join lateral (\n select MAX(v.visit_id)\n ,COUNT(*) as clicks\n from thing3.thing10 te\n join thing3.thing9 pv on pv.page_view_id =\nte.page_view_id\n join thing3.thing11 v on v.visit_id = pv.visit_id\n where te.description in ('thing7',\n'thing8')\n and v.account_id=l.account_id\n GROUP BY v.account_id, v.visit_id\n order by v.account_id, v.visit_id desc\n limit 1\n )cnt on true\nwhere (te.description in ('thing4',\n'thing5')\n or te.description like'%auto%')\n and te.created_at > '2019-06-24 00:00:00'\n--and l.loan_status_id in (5,6)\ngroup by v.account_id, te.description,\nl.product_id;\n\nabort;\nBEGIN\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=12300178.71..12300179.79 rows=48 width=44)\n Group Key: v.account_id, te.description, l.product_id\n -> Sort 
(cost=12300178.71..12300178.83 rows=48 width=44)\n Sort Key: v.account_id, te.description, l.product_id\n -> Nested Loop Left Join (cost=251621.81..12300177.37 rows=48 width=44)\n -> Gather (cost=1001.55..270403.27 rows=48 width=40)\n Workers Planned: 3\n -> Nested Loop Left Join (cost=1.56..269398.47 rows=15 width=40)\n -> Nested Loop (cost=1.13..269391.71 rows=14 width=32)\n -> Nested Loop (cost=0.57..269368.66 rows=39 width=32)\n -> Parallel Seq Scan on thing10 te (cost=0.00..269228.36 rows=39 width=32)\n Filter: ((created_at > '2019-06-24 00:00:00'::timestamp without time zone) AND (((description)::text = ANY ('{thing4,thing5}'::text[])) OR ((description)::text ~~ '%auto%'::text)))\n -> Index Scan using page_views_pkey on thing9 pv (cost=0.57..3.59 rows=1 width=8)\n Index Cond: (page_view_id = te.page_view_id)\n -> Index Scan using visits_pkey on thing11 v (cost=0.56..0.58 rows=1 width=8)\n Index Cond: (visit_id = pv.visit_id)\n -> Index Scan using index_loans_on_account_id on thing12 l (cost=0.42..0.46 rows=2 width=8)\n Index Cond: (v.account_id = account_id)\n -> Limit (cost=250620.25..250620.27 rows=1 width=20)\n -> GroupAggregate (cost=250620.25..250620.27 rows=1 width=20)\n Group Key: v_1.visit_id\n -> Sort (cost=250620.25..250620.26 rows=1 width=8)\n Sort Key: v_1.visit_id DESC\n -> Hash Join (cost=1154.34..250620.24 rows=1 width=8)\n Hash Cond: (te_1.page_view_id = pv_1.page_view_id)\n -> Gather (cost=1000.00..250452.00 rows=3706 width=4)\n Workers Planned: 3\n -> Parallel Seq Scan on thing10 te_1 (cost=0.00..249081.40 rows=1195 width=4)\n Filter: ((description)::text = ANY ('{thing7,thing8}'::text[]))\n -> Hash (cost=152.85..152.85 rows=119 width=12)\n -> Nested Loop (cost=1.01..152.85 rows=119 width=12)\n -> Index Scan using index_visits_on_account_id on thing11 v_1 (cost=0.43..15.63 rows=18 width=8)\n Index Cond: (account_id = l.account_id)\n -> Index Scan using index_pv_on_visit on thing9 pv_1 (cost=0.57..7.55 rows=7 width=8)\n Index Cond: (visit_id 
= v_1.visit_id)\n(35 rows)\n\nROLLBACK\n\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 18:33:36 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jul 17, 2019 at 11:06 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n>\n>> (gdb) p *scan->rs_parallel\n>> Cannot access memory at address 0x7fa673a54108\n>\n> So I guess one question is: was it a valid address that's been\n> unexpectedly unmapped, or is the pointer corrupted? Any chance you\n> can strace the backend and pull out the map, unmap calls?\n\nThere were about 60k lines from strace including these few...\n\n\nmmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3d0127a000\nmmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3d01239000\nmmap(NULL, 287624, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011f2000\nmmap(NULL, 262504, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011b1000\nmunmap(0x7f3d011b1000, 262504) = 0\n\nThx\n\n\n\n\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 18:42:03 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
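The munmap of the second DSM segment visible in the strace output above is exactly the suspected failure mode: a pointer into a shared memory mapping that has since been torn down. A minimal Python analogue (purely illustrative, with made-up names — Python's mmap raises ValueError where a C dereference would SIGSEGV):

```python
import mmap

# Anonymous shared mapping standing in for a parallel-query DSM segment;
# "phs\x00" is a made-up stand-in for the parallel scan state stored there.
seg = mmap.mmap(-1, 4096)
seg[:4] = b"phs\x00"
assert seg[:4] == b"phs\x00"   # while mapped, access works

seg.close()                    # analogue of the munmap() seen in the strace
try:
    _ = seg[:4]                # analogue of dereferencing scan->rs_parallel
except ValueError as err:
    print("access after unmap:", err)
```

The backend has no such guard rail: once the segment is unmapped, `scan->rs_parallel` is a dangling pointer and the next dereference faults, matching the "Cannot access memory" gdb output above.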
{
"msg_contents": "On Wed, Jul 17, 2019 at 11:33 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n> -> Nested Loop Left Join (cost=251621.81..12300177.37 rows=48 width=44)\n> -> Gather (cost=1001.55..270403.27 rows=48 width=40)\n\n> -> Limit (cost=250620.25..250620.27 rows=1 width=20)\n\n> -> Gather (cost=1000.00..250452.00 rows=3706 width=4)\n\nOne observation is that it's a rescan a bit like the one in the\nunsuccessful repro attempt I posted, but it has *two* Gather nodes in\nit (and thus two parallel query DSM segments), and only one of them\nshould be rescanned, and from the backtrace we see that it is indeed\nthe expected one, the one under the Limit operator. Neither of them\nshould be getting unmapped in the leader though and AFAIK nothing\nhappening in the workers could cause this effect, the leader would\nhave to explicitly unmap the thing AFAIK.\n\nOn Wed, Jul 17, 2019 at 11:42 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n> mmap(NULL, 287624, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011f2000\n> mmap(NULL, 262504, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011b1000\n> munmap(0x7f3d011b1000, 262504) = 0\n\nOk, there go our two parallel query DSM segments, and there it is\nbeing unmapped. Hmm. Any chance you could attach a debugger, and\n\"break munmap\", \"cont\", and then show us the backtrace \"bt\" when that\nis reached?\n\n\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:49:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jul 17, 2019 at 11:33 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n>\n>> -> Nested Loop Left Join (cost=251621.81..12300177.37 rows=48 width=44)\n>> -> Gather (cost=1001.55..270403.27 rows=48 width=40)\n>\n>> -> Limit (cost=250620.25..250620.27 rows=1 width=20)\n>\n>> -> Gather (cost=1000.00..250452.00 rows=3706 width=4)\n>\n> One observation is that it's a rescan a bit like the one in the\n> unsuccessful repro attempt I posted, but it has *two* Gather nodes in\n> it (and thus two parallel query DSM segments), and only one of them\n> should be rescanned, and from the backtrace we see that it is indeed\n> the expected one, the one under the Limit operator. Neither of them\n> should be getting unmapped in the leader though and AFAIK nothing\n> happening in the workers could cause this effect, the leader would\n> have to explicitly unmap the thing AFAIK.\n>\n> On Wed, Jul 17, 2019 at 11:42 AM Jerry Sievers <gsievers19@comcast.net> wrote:\n>> mmap(NULL, 287624, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011f2000\n>> mmap(NULL, 262504, PROT_READ|PROT_WRITE, MAP_SHARED, 124, 0) = 0x7f3d011b1000\n>> munmap(0x7f3d011b1000, 262504) = 0\n>\n> Ok, there go our two parallel query DSM segments, and there it is\n> being unmapped. Hmm. Any chance you could attach a debugger, and\n> \"break munmap\", \"cont\", and then show us the backtrace \"bt\" when that\n> is reached?\n\ngdb /usr/lib/postgresql/9.6/bin/postgres 21640\nGNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1\nCopyright (C) 2016 Free Software Foundation, Inc.\nLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. 
Type \"show copying\"\nand \"show warranty\" for details.\nThis GDB was configured as \"x86_64-linux-gnu\".\nType \"show configuration\" for configuration details.\nFor bug reporting instructions, please see:\n<http://www.gnu.org/software/gdb/bugs/>.\nFind the GDB manual and other documentation resources online at:\n<http://www.gnu.org/software/gdb/documentation/>.\nFor help, type \"help\".\nType \"apropos word\" to search for commands related to \"word\"...\nReading symbols from /usr/lib/postgresql/9.6/bin/postgres...Reading symbols from /usr/lib/debug/.build-id/04/6f55a5ce6ce05064edfc8feee61c6cb039d296.debug...done.\ndone.\nAttaching to program: /usr/lib/postgresql/9.6/bin/postgres, process 21640\nReading symbols from /usr/lib/x86_64-linux-gnu/libxml2.so.2...Reading symbols from /usr/lib/debug/.build-id/d3/57ce1dba1fab803eddf48922123ffd0a303676.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libpam.so.0...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libssl.so.1.0.0...Reading symbols from /usr/lib/debug/.build-id/ff/69ea60ebe05f2dd689d2b26fc85a73e5fbc3a0.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libcrypto.so.1.0.0...Reading symbols from /usr/lib/debug/.build-id/15/ffeb43278726b025f020862bf51302822a40ec.debug...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/librt.so.1...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/librt-2.23.so...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libdl.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libdl-2.23.so...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libm.so.6...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libm-2.23.so...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2...Reading symbols from 
/usr/lib/debug/.build-id/38/90d33727391e4a85dc0f819ab0aa29bb5dfc86.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libsystemd.so.0...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libc.so.6...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libc-2.23.so...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libicuuc.so.55...Reading symbols from /usr/lib/debug/.build-id/46/3d8b610702d64ae0803c7dfcaa02cfb4c6477b.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libz.so.1...Reading symbols from /usr/lib/debug/.build-id/8d/9bd4ce26e45ef16075c67d5f5eeafd8b562832.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/liblzma.so.5...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libaudit.so.1...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libkrb5.so.3...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libk5crypto.so.3...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libcom_err.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libcom_err.so.2.1...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libkrb5support.so.0...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...Reading symbols from /usr/lib/debug/.build-id/b1/7c21299099640a6d863e423d99265824e7bb16.debug...done.\ndone.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nReading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/ld-2.23.so...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2...Reading symbols from /usr/lib/debug/.build-id/8e/613d0b8d8e3537785637424782be8502ababd2.debug...done.\ndone.\nReading symbols from 
/lib/x86_64-linux-gnu/libresolv.so.2...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libresolv-2.23.so...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libsasl2.so.2...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libgssapi.so.3...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libgnutls.so.30...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libselinux.so.1...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libgcrypt.so.20...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libicudata.so.55...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libstdc++.so.6...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libgcc_s.so.1...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libgcc_s.so.1...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libkeyutils.so.1...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libheimntlm.so.0...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libkrb5.so.26...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libasn1.so.8...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libhcrypto.so.4...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libroken.so.18...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libp11-kit.so.0...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libidn.so.11...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libtasn1.so.6...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libnettle.so.6...(no debugging symbols 
found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libhogweed.so.4...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libgmp.so.10...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libpcre.so.3...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libpcre.so.3.13.2...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libgpg-error.so.0...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libwind.so.0...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libheimbase.so.1...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libhx509.so.5...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libsqlite3.so.0...Reading symbols from /usr/lib/debug/.build-id/3b/0454e57467057071f7ad49651e0fa7b01cf5c7.debug...done.\ndone.\nReading symbols from /lib/x86_64-linux-gnu/libcrypt.so.1...Reading symbols from /usr/lib/debug//lib/x86_64-linux-gnu/libcrypt-2.23.so...done.\ndone.\nReading symbols from /usr/lib/x86_64-linux-gnu/libffi.so.6...Reading symbols from /usr/lib/debug/.build-id/9d/9c958f1f4894afef6aecd90d1c430ea29ac34f.debug...done.\ndone.\nReading symbols from /usr/lib/postgresql/9.6/lib/auto_explain.so...Reading symbols from /usr/lib/debug/.build-id/94/ab76178c50b0e098f2bd0f3501d9cb6562c743.debug...done.\ndone.\nReading symbols from /usr/lib/postgresql/9.6/lib/pg_stat_statements.so...Reading symbols from /usr/lib/debug/.build-id/cf/f288800c22fd97059aaf8e425ae17e29fb88fb.debug...done.\ndone.\nReading symbols from /usr/lib/postgresql/9.6/lib/pglogical.so...(no debugging symbols found)...done.\nReading symbols from /usr/lib/x86_64-linux-gnu/libpq.so.5...(no debugging symbols found)...done.\nReading symbols from /lib/x86_64-linux-gnu/libnss_files.so.2...Reading symbols from 
/usr/lib/debug//lib/x86_64-linux-gnu/libnss_files-2.23.so...done.\ndone.\n0x00007f3d093379f3 in __epoll_wait_nocancel () at ../sysdeps/unix/syscall-template.S:84\n84\t../sysdeps/unix/syscall-template.S: No such file or directory.\n(gdb) break munmap\nBreakpoint 1 at 0x7f3d09331740: file ../sysdeps/unix/syscall-template.S, line 84.\n(gdb) cont\nContinuing.\n\nProgram received signal SIGUSR1, User defined signal 1.\nhash_search_with_hash_value (hashp=0x5566701baa68, keyPtr=keyPtr@entry=0x7ffc37be9790, \n hashvalue=hashvalue@entry=1634369601, action=action@entry=HASH_FIND, \n foundPtr=foundPtr@entry=0x0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/utils/hash/dynahash.c:959\n959\t/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/utils/hash/dynahash.c: No such file or directory.\n(gdb) bt\n#0 hash_search_with_hash_value (hashp=0x5566701baa68, keyPtr=keyPtr@entry=0x7ffc37be9790, \n hashvalue=hashvalue@entry=1634369601, action=action@entry=HASH_FIND, \n foundPtr=foundPtr@entry=0x0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/utils/hash/dynahash.c:959\n#1 0x000055666e1224ca in BufTableLookup (tagPtr=tagPtr@entry=0x7ffc37be9790, \n hashcode=hashcode@entry=1634369601)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/buffer/buf_table.c:96\n#2 0x000055666e12527d in BufferAlloc (foundPtr=0x7ffc37be978b \"\", strategy=0x556670360418, \n blockNum=53, forkNum=MAIN_FORKNUM, relpersistence=112 'p', smgr=0x5566702a5990)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/buffer/bufmgr.c:1013\n#3 ReadBuffer_common (smgr=0x5566702a5990, relpersistence=<optimized out>, \n forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=53, mode=RBM_NORMAL, \n strategy=0x556670360418, hit=0x7ffc37be9837 \"\")\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/buffer/bufmgr.c:745\n#4 0x000055666e125b15 in 
ReadBufferExtended (reln=0x7f3d015e2670, \n forkNum=forkNum@entry=MAIN_FORKNUM, blockNum=blockNum@entry=53, \n mode=mode@entry=RBM_NORMAL, strategy=<optimized out>)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/buffer/bufmgr.c:664\n#5 0x000055666defc036 in heapgetpage (scan=scan@entry=0x5566703484f8, page=page@entry=53)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:375\n#6 0x000055666defd5c2 in heapgettup_pagemode (key=0x0, nkeys=0, dir=ForwardScanDirection, \n scan=0x5566703484f8)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1036\n#7 heap_getnext (scan=scan@entry=0x5566703484f8, \n direction=direction@entry=ForwardScanDirection)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/heap/heapam.c:1787\n#8 0x000055666e053e21 in SeqNext (node=node@entry=0x556670328c48)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:80\n#9 0x000055666e03d711 in ExecScanFetch (recheckMtd=0x55666e053de0 <SeqRecheck>, \n accessMtd=0x55666e053df0 <SeqNext>, node=0x556670328c48)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execScan.c:95\n#10 ExecScan (node=node@entry=0x556670328c48, \n accessMtd=accessMtd@entry=0x55666e053df0 <SeqNext>, \n recheckMtd=recheckMtd@entry=0x55666e053de0 <SeqRecheck>)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execScan.c:180\n#11 0x000055666e053ea8 in ExecSeqScan (node=node@entry=0x556670328c48)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSeqscan.c:127\n---Type <return> to continue, or q <return> to quit---\n\n\n>\n>\n>\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:05:39 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 12:05 PM Jerry Sievers <gsievers19@comcast.net> wrote:\n> Program received signal SIGUSR1, User defined signal 1.\n\nOh, we need to ignore those pesky signals with \"handle SIGUSR1 noprint nostop\".\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:07:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jul 17, 2019 at 12:05 PM Jerry Sievers <gsievers19@comcast.net> wrote:\n>\n>> Program received signal SIGUSR1, User defined signal 1.\n>\n> Oh, we need to ignore those pesky signals with \"handle SIGUSR1 noprint nostop\".\n\nIs this the right sequencing?\n\n1. Start client and get backend pid\n2. GDB; handle SIGUSR1, break, cont\n3. Run query\n4. bt\n\nThanks\n\nDon't think I am doing this correctly. Please advise.\n\nhandle SIGUSR1 noprint nostop\nSignal Stop\tPrint\tPass to program\tDescription\nSIGUSR1 No\tNo\tYes\t\tUser defined signal 1\n(gdb) break munmap\nBreakpoint 1 at 0x7f3d09331740: file ../sysdeps/unix/syscall-template.S, line 84.\n(gdb) cont\nContinuing.\n\nBreakpoint 1, munmap () at ../sysdeps/unix/syscall-template.S:84\n84\t../sysdeps/unix/syscall-template.S: No such file or directory.\n(gdb) bt\n#0 munmap () at ../sysdeps/unix/syscall-template.S:84\n#1 0x000055666e12d7f4 in dsm_impl_posix (impl_private=0x22, elevel=19, \n mapped_size=0x556670205890, mapped_address=0x556670205888, request_size=0, \n handle=<optimized out>, op=DSM_OP_DETACH)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/ipc/dsm_impl.c:259\n#2 dsm_impl_op (op=op@entry=DSM_OP_DETACH, handle=<optimized out>, \n request_size=request_size@entry=0, impl_private=impl_private@entry=0x556670205880, \n mapped_address=mapped_address@entry=0x556670205888, \n mapped_size=mapped_size@entry=0x556670205890, elevel=19)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/ipc/dsm_impl.c:176\n#3 0x000055666e12efb1 in dsm_detach (seg=0x556670205860)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/storage/ipc/dsm.c:738\n#4 0x000055666df31369 in DestroyParallelContext (pcxt=0x556670219b68)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/access/transam/parallel.c:750\n#5 0x000055666e0357bb in 
ExecParallelCleanup (pei=0x7f3d012218b0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execParallel.c:575\n#6 0x000055666e047ca2 in ExecShutdownGather (node=node@entry=0x55667033bed0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeGather.c:443\n#7 0x000055666e0359f5 in ExecShutdownNode (node=0x55667033bed0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:820\n#8 0x000055666e0777e1 in planstate_tree_walker (planstate=0x55667033b2b0, \n walker=0x55666e0359a0 <ExecShutdownNode>, context=0x0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/nodes/nodeFuncs.c:3636\n#9 0x000055666e0777e1 in planstate_tree_walker (planstate=0x55667033b040, \n walker=0x55666e0359a0 <ExecShutdownNode>, context=0x0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/nodes/nodeFuncs.c:3636\n#10 0x000055666e0777e1 in planstate_tree_walker (planstate=planstate@entry=0x55667033a6c8, \n walker=walker@entry=0x55666e0359a0 <ExecShutdownNode>, context=context@entry=0x0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/nodes/nodeFuncs.c:3636\n#11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n#12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n#13 0x000055666e035d28 in ExecProcNode (node=node@entry=0x55667033a428)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:531\n#14 0x000055666e051f69 in ExecNestLoop (node=node@entry=0x55667031c660)\n---Type <return> to continue, or q <return> to quit--- at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeNestloop.c:174\n#15 0x000055666e035e28 in ExecProcNode (node=node@entry=0x55667031c660)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:476\n#16 0x000055666e054989 in ExecSort (node=node@entry=0x55667031c3f0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeSort.c:103\n#17 0x000055666e035de8 in ExecProcNode (node=0x55667031c3f0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:495\n#18 0x000055666e041fe9 in fetch_input_tuple (aggstate=aggstate@entry=0x55667031ba18)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:598\n#19 0x000055666e043bb3 in agg_retrieve_direct (aggstate=0x55667031ba18)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:2078\n#20 ExecAgg (node=node@entry=0x55667031ba18)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeAgg.c:1903\n#21 0x000055666e035dc8 in ExecProcNode (node=node@entry=0x55667031ba18)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:503\n#22 0x000055666e031f2e in ExecutePlan (dest=0x7f3d01277aa8, direction=<optimized out>, \n numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, \n use_parallel_mode=<optimized out>, planstate=0x55667031ba18, estate=0x55667031b878)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execMain.c:1567\n#23 standard_ExecutorRun (queryDesc=0x556670320a78, direction=<optimized out>, count=0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execMain.c:339\n#24 0x00007f3d01cd0515 in explain_ExecutorRun (queryDesc=0x556670320a78, \n direction=ForwardScanDirection, count=0)\n at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../contrib/auto_explain/auto_explain.c:281\n#25 0x00007f3d01ac8db0 in pgss_ExecutorRun (queryDesc=0x556670320a78, \n direction=ForwardScanDirection, count=0)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../contrib/pg_stat_statements/pg_stat_statements.c:875\n#26 0x000055666e155167 in PortalRunSelect (portal=portal@entry=0x5566701d6df8, \n forward=forward@entry=1 '\\001', count=0, count@entry=9223372036854775807, \n dest=dest@entry=0x7f3d01277aa8)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/pquery.c:948\n#27 0x000055666e1567a0 in PortalRun (portal=portal@entry=0x5566701d6df8, \n count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\\001', \n dest=dest@entry=0x7f3d01277aa8, altdest=altdest@entry=0x7f3d01277aa8, \n completionTag=completionTag@entry=0x7ffc37bea670 \"\")\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/pquery.c:---Type <return> to continue, or q <return> to quit---\n789\n#28 0x000055666e1532d6 in exec_simple_query (\n query_string=0x5566702a4c68 \"select v.account_id, COUNT(cnt.clicks), te.description,\\nl.product_id\\nfrom nbox_nc_ah.tracking_events te\\njoin nbox_nc_ah.page_views pv on pv.page_view_id = te.page_view_id\\njoin nbox_nc_ah.visits v on v\"...)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/postgres.c:1109\n#29 PostgresMain (argc=<optimized out>, argv=argv@entry=0x556670204630, \n dbname=0x5566701bab88 \"staging\", username=<optimized out>)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/tcop/postgres.c:4101\n#30 0x000055666dec8a1b in BackendRun (port=0x5566701fd500)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:4339\n#31 BackendStartup (port=0x5566701fd500)\n at 
/build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:4013\n#32 ServerLoop ()\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:1722\n#33 0x000055666e0ef281 in PostmasterMain (argc=13, argv=<optimized out>)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/postmaster/postmaster.c:1330\n#34 0x000055666dec9bf1 in main (argc=13, argv=0x5566701b8840)\n at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/main/main.c:228\n(gdb) \n(gdb) quit\nA debugging session is active.\n\n\tInferior 1 [process 32291] will be detached.\n\nQuit anyway? (y or n) y\nDetaching from program: /usr/lib/postgresql/9.6/bin/postgres, process 32291\nroot@pgdev01:/home/jsievers# \n\n\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:26:20 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 12:26 PM Jerry Sievers <gsievers19@comcast.net> wrote:\n> Is this the right sequencing?\n>\n> 1. Start client and get backend pid\n> 2. GDB; handle SIGUSR1, break, cont\n> 3. Run query\n> 4. bt\n\nPerfect, thanks. I think I just spotted something:\n\n> #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n> #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n\nhttps://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n\nLimit thinks it's OK to \"shut down\" the subtree, but if you shut down a\nGather node you can't rescan it later because it destroys its shared\nmemory. Oops. Not sure what to do about that yet.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:44:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n> > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n> > #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n> > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n>\n> https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n>\n> Limit thinks it's OK to \"shut down\" the subtree, but if you shut down a\n> Gather node you can't rescan it later because it destroys its shared\n> memory. Oops. Not sure what to do about that yet.\n\nCCing Amit and Robert, authors of commits 19df1702 and 69de1718.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:57:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n\n> On Wed, Jul 17, 2019 at 12:26 PM Jerry Sievers <gsievers19@comcast.net> wrote:\n>\n>> Is this the right sequencing?\n>>\n>> 1. Start client and get backend pid\n>> 2. GDB; handle SIGUSR1, break, cont\n>> 3. Run query\n>> 4. bt\n>\n> Perfect, thanks. I think I just spotted something:\n\nDig that! Great big thanks to you and Tomas, et al for jumping on this.\n\nPlease let know if there's anything else I can submit that would be\nhelpful.\n\n\n>\n>> #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n>> #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n>> at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n>\n> https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n>\n> Limit thinks it's OK to \"shut down\" the subtree, but if you shut down a\n> Gather node you can't rescan it later because it destroys its shared\n> memory. Oops. Not sure what to do about that yet.\n>\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n>\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: postgres.consulting@comcast.net\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:57:50 -0500",
"msg_from": "Jerry Sievers <gsievers19@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 12:57 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jul 17, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n> > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n> > > #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n> > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n> >\n> > https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n> >\n> > Limit thinks it's OK to \"shut down\" the subtree, but if you shut down a\n> > Gather node you can't rescan it later because it destroys its shared\n> > memory. Oops. Not sure what to do about that yet.\n>\n> CCing Amit and Robert, authors of commits 19df1702 and 69de1718.\n\nHere's a repro (I'm sure you can find a shorter one, this one's hacked\nup from join_hash.sql, basically just adding LIMIT):\n\ncreate table join_foo as select generate_series(1, 3000) as id,\n'xxxxx'::text as t;\nalter table join_foo set (parallel_workers = 0);\ncreate table join_bar as select generate_series(0, 10000) as id,\n'xxxxx'::text as t;\nalter table join_bar set (parallel_workers = 2);\n\nset parallel_setup_cost = 0;\nset parallel_tuple_cost = 0;\nset max_parallel_workers_per_gather = 2;\nset enable_material = off;\nset enable_mergejoin = off;\nset work_mem = '1GB';\n\nselect count(*) from join_foo\n left join (select b1.id, b1.t from join_bar b1 join join_bar b2\nusing (id) limit 1000) ss\n on join_foo.id < ss.id + 1 and join_foo.id > ss.id - 1;\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:19:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 6:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 17, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n> > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n> > > #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n> > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n> >\n> > https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n> >\n> > Limit thinks it's OK to \"shut down\" the subtree, but if you shut down a\n> > Gather node you can't rescan it later because it destroys its shared\n> > memory. Oops. Not sure what to do about that yet.\n>\n\nYeah, that is a problem. Actually, what we need here is to\nwait-for-workers-to-finish and collect all the instrumentation\ninformation. We don't need to destroy the shared memory at this\nstage, but we don't have a special purpose function which can just\nallow us to collect stats. One idea could be that we create a special\npurpose function, which sounds like a recipe for code duplication;\nanother could be to somehow pass the information through\nExecShutdownNode to Gather/GatherMerge so that they don't destroy shared\nmemory. Immediately, I can't think of better ideas, but it is\npossible that there is some better way to deal with this.\n\n> CCing Amit and Robert, authors of commits 19df1702 and 69de1718.\n>\n\nThanks for diagnosing the issue.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:10:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 4:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 17, 2019 at 6:28 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Wed, Jul 17, 2019 at 12:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > #11 0x000055666e0359df in ExecShutdownNode (node=node@entry=0x55667033a6c8)\n> > > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/execProcnode.c:830\n> > > > #12 0x000055666e04d0ff in ExecLimit (node=node@entry=0x55667033a428)\n> > > > at /build/postgresql-9.6-5O8OLM/postgresql-9.6-9.6.14/build/../src/backend/executor/nodeLimit.c:139\n> > >\n> > > https://github.com/postgres/postgres/blob/REL9_6_STABLE/src/backend/executor/nodeLimit.c#L139\n> > >\n> > > Limit thinks it's OK to \"shut down\" the subtree, but if you shut down a\n> > > Gather node you can't rescan it later because it destroys its shared\n> > > memory. Oops. Not sure what to do about that yet.\n> >\n>\n> Yeah, that is a problem. Actually, what we need here is to\n> wait-for-workers-to-finish and collect all the instrumentation\n> information. We don't need to destroy the shared memory at this\n> stage, but we don't have a special purpose function which can just\n> allow us to collect stats. One idea could be that we create a special\n> purpose function which sounds like a recipe of code duplication,\n> another could be that somehow pass the information through\n> ExecShutdownNode to Gather/GatherMerge that they don't destroy shared\n> memory. Immediately, I can't think of better ideas, but it is\n> possible that there is some better way to deal with this.\n>\n\nI am not able to come up with anything better. Robert, Thomas, do you\nsee any problem with this idea or do you have any better ideas to fix\nthis issue?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jul 2019 12:10:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 6:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jul 17, 2019 at 4:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah, that is a problem. Actually, what we need here is to\n> > wait-for-workers-to-finish and collect all the instrumentation\n> > information. We don't need to destroy the shared memory at this\n> > stage, but we don't have a special purpose function which can just\n> > allow us to collect stats. One idea could be that we create a special\n> > purpose function which sounds like a recipe of code duplication,\n> > another could be that somehow pass the information through\n> > ExecShutdownNode to Gather/GatherMerge that they don't destroy shared\n> > memory. Immediately, I can't think of better ideas, but it is\n> > possible that there is some better way to deal with this.\n>\n> I am not able to come up with anything better. Robert, Thomas, do you\n> see any problem with this idea or do you have any better ideas to fix\n> this issue?\n\nHmm, so something like a new argument \"bool final\" added to the\nExecXXXShutdown() functions, which receives false in this case to tell\nit that there could be a rescan so keep the parallel context around.\nOr alternatively a separate function with another end-of-scan type of\nname that I'm having trouble inventing, which is basically the same\nbut a bigger patch. If you add a new argument you might in theory\nwant to pass that on to the ShutdownForeignScan and ShutdownCustomScan\ncallbacks, but we obviously can't change those APIs in the back\nbranches. If you add a new function instead you might theoretically\nwant to add it to those APIs too, which you also can't really do in\nthe back branches either (well even if you could, existing extensions\nwon't register anything). 
I think the new argument version is\nprobably better because I suspect only Gather would really ever have\nany reason to treat the two cases differently, and all existing cases\nin or out of core would just keep doing what they're doing. So I\nthink adding \"bool final\" (or better name) would probably work out OK,\nand I don't have a better idea.\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Jul 2019 21:40:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm, so something like a new argument \"bool final\" added to the\n> ExecXXXShutdown() functions, which receives false in this case to tell\n> it that there could be a rescan so keep the parallel context around.\n\nI think this is going in the wrong direction. Nodes should *always*\nassume that a rescan is possible until ExecEndNode is called. See the\ncommentary about EXEC_FLAG_REWIND in executor.h:\n\n * REWIND indicates that the plan node should try to efficiently support\n * rescans without parameter changes. (Nodes must support ExecReScan calls\n * in any case, but if this flag was not given, they are at liberty to do it\n * through complete recalculation. Note that a parameter change forces a\n * full recalculation in any case.)\n\nIf nodeLimit is doing something that's incompatible with that, it's\nnodeLimit's fault; and similarly for the parallel machinery.\n\nIf you want to do otherwise, you are going to be inventing a whole\nbunch of complicated and doubtless-initially-buggy control logic\nto pass down information about whether a rescan might be possible.\nThat doesn't sound like a recipe for a back-patchable fix. Perhaps\nwe could consider redesigning the rules around REWIND in a future\nversion, but that's not where to focus the bug fix effort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 09:45:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 7:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Hmm, so something like a new argument \"bool final\" added to the\n> > ExecXXXShutdown() functions, which receives false in this case to tell\n> > it that there could be a rescan so keep the parallel context around.\n>\n> I think this is going in the wrong direction. Nodes should *always*\n> assume that a rescan is possible until ExecEndNode is called.\n>\n\nI am thinking that why not we remove the part of destroying the\nparallel context (and shared memory) from ExecShutdownGather (and\nExecShutdownGatherMerge) and then do it at the time of ExecEndGather\n(and ExecEndGatherMerge)? This should fix the bug in hand and seems\nto be more consistent with our overall design principles. I have not\ntried to code it to see if there are any other repercussions of the\nsame but seems worth investigating. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 19 Jul 2019 08:30:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Jul 18, 2019 at 7:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Hmm, so something like a new argument \"bool final\" added to the\n> > > ExecXXXShutdown() functions, which receives false in this case to tell\n> > > it that there could be a rescan so keep the parallel context around.\n> >\n> > I think this is going in the wrong direction. Nodes should *always*\n> > assume that a rescan is possible until ExecEndNode is called.\n>\n> I am thinking that why not we remove the part of destroying the\n> parallel context (and shared memory) from ExecShutdownGather (and\n> ExecShutdownGatherMerge) and then do it at the time of ExecEndGather\n> (and ExecEndGatherMerge)? This should fix the bug in hand and seems\n> to be more consistent with our overall design principles. I have not\n> tried to code it to see if there are any other repercussions of the\n> same but seems worth investigating. What do you think?\n\nI tried moving ExecParallelCleanup() into ExecEndGather(). The first\nproblem is that ExecutePlan() wraps execution in\nEnterParallelMode()/ExitParallelMode(), but ExitParallelMode() fails\nan assertion that no parallel context is active because\nExecEndGather() hasn't run yet. The enclosing\nExecutorStart()/ExecutorEnd() calls are further down the call stack,\nin ProcessQuery(). So some more restructuring might be needed to exit\nparallel mode later, but then I feel like you might be getting way out\nof back-patchable territory, especially if it involves moving code to\nthe other side of the executor hook boundary. 
Is there an easier way?\n\nAnother idea from the band-aid-solutions-that-are-easy-to-back-patch\ndepartment: in ExecutePlan() where we call ExecShutdownNode(), we\ncould write EXEC_FLAG_DONE into estate->es_top_eflags, and then have\nExecGatherShutdown() only run ExecParallelCleanup() if it sees that\nflag. That's not beautiful, but it's less churn than the 'bool final'\nargument we discussed before, and could be removed in master when we\nhave a better way.\n\nStepping back a bit, it seems like we need two separate tree-walking\ncalls: one to free resources not needed anymore by the current rescan\n(workers), and another to free resources not needed ever again\n(parallel context). That could be spelled ExecShutdownNode(false) and\nExecShutdownNode(true), or controlled with the EXEC_FLAG_DONE kluge,\nor a new additional ExecSomethingSomethingNode() function, or as you\nsay, perhaps the second thing could be incorporated into\nExecEndNode(). I suspect that the Shutdown callbacks for Hash, Hash\nJoin, Custom Scan and Foreign Scan might not be needed anymore if we\ncould keep the parallel context around until after we run\nExecEndNode().\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2019 15:40:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 9:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jul 19, 2019 at 3:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I am thinking that why not we remove the part of destroying the\n> > parallel context (and shared memory) from ExecShutdownGather (and\n> > ExecShutdownGatherMerge) and then do it at the time of ExecEndGather\n> > (and ExecEndGatherMerge)? This should fix the bug in hand and seems\n> > to be more consistent with our overall design principles. I have not\n> > tried to code it to see if there are any other repercussions of the\n> > same but seems worth investigating. What do you think?\n>\n> I tried moving ExecParallelCleanup() into ExecEndGather(). The first\n> problem is that ExecutePlan() wraps execution in\n> EnterParallelMode()/ExitParallelMode(), but ExitParallelMode() fails\n> an assertion that no parallel context is active because\n> ExecEndGather() hasn't run yet. The enclosing\n> ExecutorStart()/ExecutorEnd() calls are further down the call stack,\n> in ProcessQuery(). So some more restructuring might be needed to exit\n> parallel mode later, but then I feel like you might be getting way out\n> of back-patchable territory, especially if it involves moving code to\n> the other side of the executor hook boundary. Is there an easier way?\n>\n\nIf we have to follow the solution on these lines, then I don't see an\neasier way. One idea could be that we relax the assert in\nExitParallelMode so that it doesn't expect parallel context to be gone\nby that time, but not sure if that is a good idea because it is used\nin some other places as well. 
I feel in general it is a good\nassertion that before we leave parallel mode, the parallel context\nshould be gone as that ensures we won't do any parallel activity after\nthat.\n\n> Another idea from the band-aid-solutions-that-are-easy-to-back-patch\n> department: in ExecutePlan() where we call ExecShutdownNode(), we\n> could write EXEC_FLAG_DONE into estate->es_top_eflags, and then have\n> ExecGatherShutdown() only run ExecParallelCleanup() if it sees that\n> flag. That's not beautiful, but it's less churn that the 'bool final'\n> argument we discussed before, and could be removed in master when we\n> have a better way.\n>\n\nRight, that will be lesser code churn and it can also work. However,\none thing that needs some thought is till now es_top_eflags is only\nset in ExecutorStart and same is mentioned in comments where it is\ndeclared and it seems we are going to change that with this idea. How\nabout having a separate function ExecBlahShutdown which will clean up\nresources as parallel context and can be called only from ExecutePlan\nwhere we are calling ExecShutdownNode? I think both these and the\nother solution we have discussed are on similar lines and another idea\ncould be to relax the assert which again is not a superb idea.\n\n> Stepping back a bit, it seems like we need two separate tree-walking\n> calls: one to free resources not needed anymore by the current rescan\n> (workers), and another to free resources not needed ever again\n> (parallel context). That could be spelled ExecShutdownNode(false) and\n> ExecShutdownNode(true), or controlled with the EXEC_FLAG_DONE kluge,\n> or a new additional ExecSomethingSomethingNode() function, or as you\n> say, perhaps the second thing could be incorporated into\n> ExecEndNode(). 
I suspect that the Shutdown callbacks for Hash, Hash\n> Join, Custom Scan and Foreign Scan might not be needed anymore if we\n> could keep the parallel context around until after the run\n> ExecEndNode().\n>\n\nI think we need those to collect instrumentation information. I guess\nthat has to be done before we call InstrStopNode, otherwise, we might\nmiss some instrumentation information.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:28:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 23, 2019 at 9:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n>\n> > Another idea from the band-aid-solutions-that-are-easy-to-back-patch\n> > department: in ExecutePlan() where we call ExecShutdownNode(), we\n> > could write EXEC_FLAG_DONE into estate->es_top_eflags, and then have\n> > ExecGatherShutdown() only run ExecParallelCleanup() if it sees that\n> > flag. That's not beautiful, but it's less churn that the 'bool final'\n> > argument we discussed before, and could be removed in master when we\n> > have a better way.\n> >\n>\n> Right, that will be lesser code churn and it can also work. However,\n> one thing that needs some thought is till now es_top_eflags is only\n> set in ExecutorStart and same is mentioned in comments where it is\n> declared and it seems we are going to change that with this idea. How\n> about having a separate function ExecBlahShutdown which will clean up\n> resources as parallel context and can be called only from ExecutePlan\n> where we are calling ExecShutdownNode? I think both these and the\n> other solution we have discussed are on similar lines and another idea\n> could be to relax the assert which again is not a superb idea.\n>\n\nIt seems we don't have a clear preference for any particular solution\namong these and neither there appears to be any better idea. I guess\nwe can wait for a few days to see if Robert has any views on this,\notherwise, pick one of the above and move ahead.\n\nRobert, let us know if you have any preference or better idea to fix\nthis problem?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2019 09:43:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jul 23, 2019 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Right, that will be lesser code churn and it can also work. However,\n> > one thing that needs some thought is till now es_top_eflags is only\n> > set in ExecutorStart and same is mentioned in comments where it is\n> > declared and it seems we are going to change that with this idea. How\n> > about having a separate function ExecBlahShutdown which will clean up\n> > resources as parallel context and can be called only from ExecutePlan\n> > where we are calling ExecShutdownNode? I think both these and the\n> > other solution we have discussed are on similar lines and another idea\n> > could be to relax the assert which again is not a superb idea.\n>\n> It seems we don't have a clear preference for any particular solution\n> among these and neither there appears to be any better idea. I guess\n> we can wait for a few days to see if Robert has any views on this,\n> otherwise, pick one of the above and move ahead.\n\nI take the EXEC_FLAG_DONE idea back. It's ugly and too hard to verify\nthat every appropriate path sets it, and a flag that means the\nopposite would be even more of a kluge, and generally I think I was\nlooking at this too myopically: I was looking for a way to shut down\nprocesses ASAP without giving up the shared memory we'll need for\nrescanning, but what I should have been looking at is the reason you\ndid that in the first place: to get the instrumentation data. Can you\nexplain why it's necessary to do that explicitly for Limit? Wouldn't\nthe right place to collect instrumentation be at the end of execution\nwhen Shutdown will run in all cases anyway (and possibly also during\nExecParallelReinitialize() or something like that if it's being\nclobbered by rescans, I didn't check)? 
What's special about Limit?\n\nToday while poking at this and trying to answer those questions for\nmyself, I realised that the repro I posted earlier[1] crashes exactly\nas Jerry reported on REL9_6_STABLE, but in later release branches it\nruns to completion. That's because the crashing code was removed in\ncommit 41b0dd98 \"Separate reinitialization of shared parallel-scan\nstate from ExecReScan.\".\n\nSo newer branches get past that problem, but they all spit out tons of\neach of these three warnings:\n\nWARNING: buffer refcount leak: [172] (rel=base/12558/16390,\nblockNum=5, flags=0x93800000, refcount=1 2998)\n...\nWARNING: relcache reference leak: relation \"join_bar\" not closed\n...\nWARNING: Snapshot reference leak: Snapshot 0x7ff20383bfb0 still referenced\n...\n\nOops. I don't know exactly why yet, but the problem goes away if you\njust comment out the offending ExecShutdownNode() call in nodeLimit.c.\nI tried to understand whether the buffer stats were wrong with that\ncode commented out (Adrien Nayrat's original complaint[2]), but I ran\nout of time for debugging adventures today.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJyqDp9FZSHLTjiNMcz-c6%3DRdStB%2BUjVZsR8wfHnJXy8Q%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/86137f17-1dfb-42f9-7421-82fd786b04a1%40anayrat.info\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Jul 2019 14:59:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 8:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jul 26, 2019 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Jul 23, 2019 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Right, that will be lesser code churn and it can also work. However,\n> > > one thing that needs some thought is till now es_top_eflags is only\n> > > set in ExecutorStart and same is mentioned in comments where it is\n> > > declared and it seems we are going to change that with this idea. How\n> > > about having a separate function ExecBlahShutdown which will clean up\n> > > resources as parallel context and can be called only from ExecutePlan\n> > > where we are calling ExecShutdownNode? I think both these and the\n> > > other solution we have discussed are on similar lines and another idea\n> > > could be to relax the assert which again is not a superb idea.\n> >\n> > It seems we don't have a clear preference for any particular solution\n> > among these and neither there appears to be any better idea. I guess\n> > we can wait for a few days to see if Robert has any views on this,\n> > otherwise, pick one of the above and move ahead.\n>\n> I take the EXEC_FLAG_DONE idea back. It's ugly and too hard to verify\n> that every appropriate path sets it, and a flag that means the\n> opposite would be even more of a kluge, and generally I think I was\n> looking at this too myopically: I was looking for a way to shut down\n> processes ASAP without giving up the shared memory we'll need for\n> rescanning, but what I should have been looking at is the reason you\n> did that in the first place: to get the instrumentation data. Can you\n> explain why it's necessary to do that explicitly for Limit? 
Wouldn't\n> the right place to collect instrumentation be at the end of execution\n> when Shutdown will run in all cases anyway (and possibly also during\n> ExecParallelReinitialize() or something like that if it's being\n> clobbered by rescans, I didn't check)? What's special about Limit?\n>\n\nI think here you are missing the point that to collect the\ninstrumentation information one also need to use InstrStartNode and\nInstrStopNode. So, for the Limit node, the InstrStopNode would be\nalready done by the time we call shutdown of workers at the end of\nexecution. To know a bit more details, see [1][2][3].\n\n> Today while poking at this and trying to answer those questions for\n> myself, I realised that the repro I posted earlier[1] crashes exactly\n> as Jerry reported on REL9_6_STABLE, but in later release branches it\n> runs to completion. That's because the crashing code was removed in\n> commit 41b0dd98 \"Separate reinitialization of shared parallel-scan\n> state from ExecReScan.\".\n>\n> So newer branches get past that problem, but they all spit out tons of\n> each of these three warnings:\n>\n> WARNING: buffer refcount leak: [172] (rel=base/12558/16390,\n> blockNum=5, flags=0x93800000, refcount=1 2998)\n> ...\n> WARNING: relcache reference leak: relation \"join_bar\" not closed\n> ...\n> WARNING: Snapshot reference leak: Snapshot 0x7ff20383bfb0 still referenced\n> ...\n>\n> Oops.\n>\n\nThis is exactly due to the same problem that before rescans, we have\ndestroyed the shared memory. 
If you do the earlier trick of not\ncleaning up shared memory till ExecEndNode, then you won't see this\nproblem.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KZEbYKj9HHP-6WqqjAXuoB%2BWJu-w1s9uovj%3DeeBxC48Q%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CA%2BTgmoY3kcTcc5bFCZeY5NMFna-xaMPuTHA-z-z2Bmfg%2Bdb-XQ%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAA4eK1L0KAZWgnRJz%3DVNVpyS3FFbVh8E5egyziaR0E10bC204Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Jul 2019 11:28:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think this is going in the wrong direction. Nodes should *always*\n> assume that a rescan is possible until ExecEndNode is called.\n> If you want to do otherwise, you are going to be inventing a whole\n> bunch of complicated and doubtless-initially-buggy control logic\n> to pass down information about whether a rescan might be possible.\n> That doesn't sound like a recipe for a back-patchable fix. Perhaps\n> we could consider redesigning the rules around REWIND in a future\n> version, but that's not where to focus the bug fix effort.\n\nSo, if I can summarize how we got here, as best I understand it:\n\n0. The historic behavior of the executor is to assume it's OK to leak\nresources for the lifetime of the query. Nodes that are executed to\ncompletion generally do some cleanup, but we feel free (as under\nLimit) to just stop executing a node without giving it any hint that\nit should release resources. So a Sort may hold onto a terabyte of\nmemory and an index scan may keep holding a pin even after there's no\ntheoretical way of ever needing those resources again, and we just\ndon't care.\n\n1. Parallel query made that perhaps-already-shaky assumption a lot\nmore problematic. Partly that's because workers are a more scarce\nand considerably heavier resource than anything else, and moreover act\nas a container for anything else, so whatever you were leaking before,\nyou can now leak N times more of it, plus N processes, until the end\nof the query. However, there's a correctness reason too, which is that\nwhen a node has a copy in the leader and a copy in every worker, each\ncopy has its own instrumentation data (startup time, run time, nloops,\netc) and we can only fold all that together once the node is done\nexecuting, because it's really hard to add up a bunch of numbers\nbefore the numbers are done changing. 
We could've made the\ninstrumentation shared throughout, but if we had, we could have\ncontention for updating the instrumentation data, which seems like\nit'd be bad.\n\n2. To fix that correctness problem, we decided to try to shut down the\nnode under a limit node when we're done with it (commit\n85c9d3475e4f680dbca7c04fe096af018f3b8760). At a certain level, this\nlooks fundamentally necessary to me. If you're going to have N\nseparate copies of the instrumentation, and you want to add them up\nwhen you're done, then you have to decide to be done at some point;\notherwise you don't know when to add them up, and maybe won't add them\nup at all, and then you'll be sad. This does not mean that the exact\ntiming couldn't be changed somehow, but if you want a correct\nimplementation, you have to shut down Limit's sub-node after you're\ndone executing it (so that you can get the instrumentation data from\nthe workers after it's final) and before you start destroying DSM\nsegments and stuff (so that you get the instrumentation data from the\nworkers before it vanishes).\n\n3. The aforementioned commit turned out to be buggy in at least two\nways, precisely because it didn't do a good enough job predicting when\nthe Limit needed to be shut down. First, there was commit\n2cd0acfdade82f3cab362fd9129d453f81cc2745, where we missed the fact\nthat you could hit the Limit and then back up. Second, there's the\npresent issue, where the Limit gets rescanned.\n\nSo, given all that, if we want to adopt Tom's position that we should\nalways cater to a possible rescan, then we're going to have to rethink\nthe way that instrumentation data gets consolidated from workers into\nthe leader in such a way that we can consolidate multiple times\nwithout ending up with the wrong answer. 
The other option is to do\nwhat I understand Amit and Thomas to be proposing, which is to do a\nbetter job identifying the case where we're \"done for good\" and can\ntrigger the shutdown fearlessly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 14:35:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 18, 2019 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think this is going in the wrong direction. Nodes should *always*\n> > assume that a rescan is possible until ExecEndNode is called.\n> > If you want to do otherwise, you are going to be inventing a whole\n> > bunch of complicated and doubtless-initially-buggy control logic\n> > to pass down information about whether a rescan might be possible.\n> > That doesn't sound like a recipe for a back-patchable fix. Perhaps\n> > we could consider redesigning the rules around REWIND in a future\n> > version, but that's not where to focus the bug fix effort.\n>\n> So, if I can summarize how we got here, as best I understand it:\n>\n\nThanks for the summarization. This looks mostly correct to me.\n\n> 0. The historic behavior of the executor is to assume it's OK to leak\n> resources for the lifetime of the query. Nodes that are executed to\n> completion generally do some cleanup, but we feel free (as under\n> Limit) to just stop executing a node without giving it any hint that\n> it should release resources. So a Sort may hold onto a terabyte of\n> memory and an index scan may keep holding a pin even after there's no\n> theoretical way of ever needing those resources again, and we just\n> don't care.\n>\n> 1. Parallel query made that perhaps-already-shaky assumption a lot\n> more problematic. Partly that's because workers are a a more scarce\n> and considerably heavier resource than anything else, and moreover act\n> as a container for anything else, so whatever you were leaking before,\n> you can now leak N times more of it, plus N processes, until the end\n> of the query. 
However, there's a correctness reason too, which is that\n> when a node has a copy in the leader and a copy in every worker, each\n> copy has its own instrumentation data (startup time, run time, nloops,\n> etc) and we can only fold all that together once the node is done\n> executing, because it's really hard to add up a bunch of numbers\n> before the numbers are done changing. We could've made the\n> instrumentation shared throughout, but if we had, we could have\n> contention for updating the instrumentation data, which seems like\n> it'd be bad.\n>\n> 2. To fix that correctness problem, we decided to try to shut down the\n> node under a limit node when we're done with it (commit\n> 85c9d3475e4f680dbca7c04fe096af018f3b8760). At a certain level, this\n> looks fundamentally necessary to me. If you're going to have N\n> separate copies of the instrumentation, and you want to add them up\n> when you're done, then you have to decide to be done at some point;\n> otherwise you don't know when to add them up, and maybe won't add them\n> up at all, and then you'll be sad. This does not mean that the exact\n> timing couldn't be changed somehow, but if you want a correct\n> implementation, you have to shut down Limit's sub-node after you're\n> done executing it (so that you can get the instrumentation data from\n> the workers after it's final) and before you start destroying DSM\n> segments and stuff (so that you get the instrumentation data from the\n> workers before it vanishes).\n>\n> 3. The aforementioned commit turned out to be buggy in at least to two\n> ways, precisely because it didn't do a good enough job predicting when\n> the Limit needed to be shut down. 
First, there was commit\n> 2cd0acfdade82f3cab362fd9129d453f81cc2745, where we missed the fact\n> that you could hit the Limit and then back up.\n>\n\nWe have not missed it; rather, we decided to do it separately because it\nappears to impact some different cases as well [1][2].\n\n> Second, there's the\n> present issue, where the Limit gets rescanned.\n>\n> So, given all that, if we want to adopt Tom's position that we should\n> always cater to a possible rescan, then we're going to have to rethink\n> the way that instrumentation data gets consolidated from workers into\n> the leader in such a way that we can consolidate multiple times\n> without ending up with the wrong answer.\n>\n\nThe other idea we had discussed which comes closer to adopting Tom's\nposition was that during ExecShutdownNode, we just destroy parallel\nworkers, collect instrumentation data and don't destroy the parallel\ncontext. The parallel context could be destroyed in ExecEndNode\n(ExecEndGather(Merge)) code path. The problem with this idea is that\nExitParallelMode doesn't expect parallel context to be active. Now,\nwe can either change the location of Exit/EnterParallelMode or relax\nthat restriction. As mentioned above, that restriction appears good to\nme, so I am not in favor of changing it unless we have some other\nsolid way to install it. I am not sure if this idea is better than\nother approaches we are discussing.\n\n> The other option is to do\n> what I understand Amit and Thomas to be proposing, which is to do a\n> better job identifying the case where we're \"done for good\" and can\n> trigger the shutdown fearlessly.\n>\n\nYes, this sounds like a safe fix for back-branches. 
We might want to go with\nthis for back-branches and then see if we can come up with a better\nway to fix it for HEAD.\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmoYAxqmE13UOOSU%3DmE-hBGnTfYakb3dOoOJ_043Oc%3D6Xug%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1KwCx9qQk%3DKo4LFTwoYg9B8TSccPAc%3DEoJR88rQpCYVdA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2019 09:37:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Jul 31, 2019 at 12:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> The other option is to do\n>> what I understand Amit and Thomas to be proposing, which is to do a\n>> better job identifying the case where we're \"done for good\" and can\n>> trigger the shutdown fearlessly.\n\n> Yes, this sounds safe fix for back-branches.\n\nActually, my point was exactly that I *didn't* think that would be a\nsafe fix for the back branches --- at least, not unless you're okay\nwith a very conservative and hence resource-leaky method for deciding\nwhen it's safe to shut down sub-nodes.\n\nWe could do something involving (probably) adding new eflags bits to\npass this sort of info down to child plan nodes. But that's going\nto require design and coding, and it will not be backwards compatible.\nAt least not from the point of view of any extension that's doing\nanything in that area.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2019 00:30:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 12:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Jul 18, 2019 at 9:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I think this is going in the wrong direction. Nodes should *always*\n> > > assume that a rescan is possible until ExecEndNode is called.\n> > > If you want to do otherwise, you are going to be inventing a whole\n> > > bunch of complicated and doubtless-initially-buggy control logic\n> > > to pass down information about whether a rescan might be possible.\n> > > That doesn't sound like a recipe for a back-patchable fix. Perhaps\n> > > we could consider redesigning the rules around REWIND in a future\n> > > version, but that's not where to focus the bug fix effort.\n> >\n> > So, if I can summarize how we got here, as best I understand it:\n> >\n>\n> Thanks for the summarization. This looks mostly correct to me.\n>\n> > 0. The historic behavior of the executor is to assume it's OK to leak\n> > resources for the lifetime of the query. Nodes that are executed to\n> > completion generally do some cleanup, but we feel free (as under\n> > Limit) to just stop executing a node without giving it any hint that\n> > it should release resources. So a Sort may hold onto a terabyte of\n> > memory and an index scan may keep holding a pin even after there's no\n> > theoretical way of ever needing those resources again, and we just\n> > don't care.\n> >\n> > 1. Parallel query made that perhaps-already-shaky assumption a lot\n> > more problematic. Partly that's because workers are a a more scarce\n> > and considerably heavier resource than anything else, and moreover act\n> > as a container for anything else, so whatever you were leaking before,\n> > you can now leak N times more of it, plus N processes, until the end\n> > of the query. 
However, there's a correctness reason too, which is that\n> > when a node has a copy in the leader and a copy in every worker, each\n> > copy has its own instrumentation data (startup time, run time, nloops,\n> > etc) and we can only fold all that together once the node is done\n> > executing, because it's really hard to add up a bunch of numbers\n> > before the numbers are done changing. We could've made the\n> > instrumentation shared throughout, but if we had, we could have\n> > contention for updating the instrumentation data, which seems like\n> > it'd be bad.\n> >\n> > 2. To fix that correctness problem, we decided to try to shut down the\n> > node under a limit node when we're done with it (commit\n> > 85c9d3475e4f680dbca7c04fe096af018f3b8760). At a certain level, this\n> > looks fundamentally necessary to me. If you're going to have N\n> > separate copies of the instrumentation, and you want to add them up\n> > when you're done, then you have to decide to be done at some point;\n> > otherwise you don't know when to add them up, and maybe won't add them\n> > up at all, and then you'll be sad. This does not mean that the exact\n> > timing couldn't be changed somehow, but if you want a correct\n> > implementation, you have to shut down Limit's sub-node after you're\n> > done executing it (so that you can get the instrumentation data from\n> > the workers after it's final) and before you start destroying DSM\n> > segments and stuff (so that you get the instrumentation data from the\n> > workers before it vanishes).\n> >\n> > 3. The aforementioned commit turned out to be buggy in at least to two\n> > ways, precisely because it didn't do a good enough job predicting when\n> > the Limit needed to be shut down. 
First, there was commit\n> > 2cd0acfdade82f3cab362fd9129d453f81cc2745, where we missed the fact\n> > that you could hit the Limit and then back up.\n> >\n>\n> We have not missed it; rather, we decided to do it separately because it\n> appears to impact some different cases as well [1][2].\n>\n> > Second, there's the\n> > present issue, where the Limit gets rescanned.\n> >\n> > So, given all that, if we want to adopt Tom's position that we should\n> > always cater to a possible rescan, then we're going to have to rethink\n> > the way that instrumentation data gets consolidated from workers into\n> > the leader in such a way that we can consolidate multiple times\n> > without ending up with the wrong answer.\n> >\n>\n> The other idea we had discussed which comes closer to adopting Tom's\n> position was that during ExecShutdownNode, we just destroy parallel\n> workers, collect instrumentation data and don't destroy the parallel\n> context. The parallel context could be destroyed in ExecEndNode\n> (ExecEndGather(Merge)) code path. The problem with this idea is that\n> ExitParallelMode doesn't expect parallel context to be active. Now,\n> we can either change the location of Exit/EnterParallelMode or relax\n> that restriction. As mentioned above that restriction appears good to\n> me, so I am not in favor of changing it unless we have some other\n> solid way to install it. I am not sure if this idea is better than\n> other approaches we are discussing.\n>\n>\nI have made a patch based on the above lines.\nI have tested the scenarios which Thomas had shared in the earlier\nmail and a few more tests based on Thomas's tests.\nI'm not sure if we will be going ahead with this solution or not.\nLet me know your opinion on the same.\nIf you feel this approach is ok, we can add a few of these tests to the pg tests.\n\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 7 Aug 2019 15:15:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 3:15 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 31, 2019 at 12:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > The other idea we had discussed which comes closer to adopting Tom's\n> > position was that during ExecShutdownNode, we just destroy parallel\n> > workers, collect instrumentation data and don't destroy the parallel\n> > context. The parallel context could be destroyed in ExecEndNode\n> > (ExecEndGather(Merge)) code path. The problem with this idea is that\n> > ExitParallelMode doesn't expect parallel context to be active. Now,\n> > we can either change the location of Exit/EnterParallelMode or relax\n> > that restriction. As mentioned above that restriction appears good to\n> > me, so I am not in favor of changing it unless we have some other\n> > solid way to install it. I am not sure if this idea is better than\n> > other approaches we are discussing.\n> >\n> >\n> I have made a patch based on the above lines.\n> I have tested the scenarios which Thomas had shared in the earlier\n> mail and few more tests based on Thomas's tests.\n> I'm not sure if we will be going ahead with this solution or not.\n> Let me know your opinion on the same.\n> If you feel this approach is ok, we can add few of this tests into pg tests.\n>\n\nThis patch is on the lines of what I had in mind, but I see some\nproblems in this which are explained below. The other approach to fix\nthis was to move Enter/ExitParallelMode to the outer layer. For ex.,\ncan we enter in parallel mode during InitPlan and exit from it during\nExecEndPlan? 
That might not be good to backpatch, but it might turn\nout to be more robust than the current approach.\n\nA few comments on your patch:\n1.\n@@ -569,13 +569,6 @@ ExecParallelCleanup(ParallelExecutorInfo *pei)\n if (pei->instrumentation)\n ExecParallelRetrieveInstrumentation(pei->planstate,\n pei->instrumentation);\n-\n- if (pei->pcxt != NULL)\n- {\n- DestroyParallelContext(pei->pcxt);\n- pei->pcxt = NULL;\n- }\n- pfree(pei);\n }\n\nHere, you have just removed the freeing of the parallel context, but I think we\ncan't detach from the parallel context area here either; otherwise, it\nwill create similar problems in other cases. Note that we create the\narea only in ExecInitParallelPlan and just reuse it in\nExecParallelReinitialize. So, if we allow getting it destroyed in\nExecParallelCleanup (which is called via ExecShutdownNode), we won't\nhave access to it in the rescan code path. It is better if we have a test\nfor the same as well. I think we should only retrieve the\ninstrumentation information here. Also, if we do that, then we might\nalso want to change the function name and the comments atop this function.\n\n2.\nExecEndGather(GatherState *node)\n {\n+ ParallelExecutorInfo *pei = node->pei;\n ExecShutdownGather(node);\n+\n+ if (pei != NULL)\n+ {\n+ if (pei->pcxt != NULL)\n+ {\n+ DestroyParallelContext(pei->pcxt);\n+ pei->pcxt = NULL;\n+ }\n+\n+ pfree(pei);\n+ node->pei = NULL;\n+ }\n\nI feel that it is better to move the collection of instrumentation\ninformation from ExecParallelCleanup to a separate function and then\nuse ExecParallelCleanup here.\n\n3.\nextern bool IsInParallelMode(void);\n+extern bool getParallelModeLevel(void);\n\nTo be consistent, it is better to name the function GetParallelModeLevel.\n\n4.\n@@ -1461,6 +1461,8 @@ ExecEndPlan(PlanState *planstate, EState *estate)\n ExecEndNode(subplanstate);\n }\n\n+ if (estate->es_use_parallel_mode)\n+ Assert (getParallelModeLevel() > 0 || !ParallelContextActive());\n\nAdd some comments here to explain this Assert. 
I am not sure if\nthis is correct because it won't fail even if the parallel mode is\nnon-zero and there is no parallel\ncontext. At this stage, we must have exited the parallel mode.\n\n5.\nexplain analyze\n select count(*) from join_foo\n left join (select b1.id from join_bar b1\n limit 1000) ss\n\nAll the tests in your test file use left join to reproduce the issue,\nbut I think it should be reproducible with an inner join as well. This\ncomment does not mean that your test case is wrong, but I want to see if we\ncan simplify it further.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Aug 2019 09:34:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 5:45 AM vignesh C <vignesh21@gmail.com> wrote:\n> I have made a patch based on the above lines.\n> I have tested the scenarios which Thomas had shared in the earlier\n> mail and a few more tests based on Thomas's tests.\n> I'm not sure if we will be going ahead with this solution or not.\n> Let me know your opinion on the same.\n> If you feel this approach is ok, we can add a few of these tests to the pg tests.\n\nI think this patch is bizarre:\n\n- It introduces a new function called getParallelModeLevel(), which is\nrandomly capitalized differently from the other functions that do\nsimilar things, and then uses it to do the same thing that could have\nbeen done with the existing function IsInParallelMode().\n- It contains an \"if\" statement whose only content is an Assert().\nDon't write if (a) Assert(b); write Assert(!a || b).\n- It contains zero lines of comment changes, which is obviously not\nenough for a patch that proposes to fix a very thorny issue. This\nfailure has two parts. First, it adds no new comments to explain the\nbug being fixed or the theory of operation of the new code. Second, it\ndoes not even bother updating existing comments that are falsified by\nthe patch, such as the function header comments for\nExecParallelCleanup and ExecShutdownGather.\n- It changes what ExecParallelCleanup does while adjusting only one of\nthe two callers to match the behavior change. nodeGatherMerge.c\nmanages to be completely untouched by this patch. 
If you change what a\nfunction does, you really need to grep for all the calls to that\nfunction and adjust all callers to match the new set of expectations.\n\nIt's a little hard to get past all of those issues and look at what\nthe patch actually does, but I'm going to try: the theory of operation\nof the patch seems to be that we can skip destroying the parallel\ncontext when performing ExecParallelCleanup and in fact when exiting\nparallel mode, and then when we get to executor end time the context\nwill still be there and we can fish the instrumentation out of it. But\nthis seems problematic for several reasons. For one thing, as Amit\nalready observed, the code currently contains an assertion which\nensures that a ParallelContext can't outlive the time spent in parallel\nmode, and it doesn't seem desirable to relax that assertion (this\npatch removes it).\n\nBut beyond that, the issue here is that the Limit node is shutting\ndown the Gather node too early, and the right fix must be to stop\ndoing that, not to change the definition of what it means to shut down\na node, as this patch does. So maybe a possible approach here - which\nI think is more or less what Tom is proposing - is:\n\n1. Remove the code from ExecLimit() that calls ExecShutdownNode().\n2. Adjust ExecutePlan() so that it ensures that ExecShutdownNode()\ngets called at the very end of execution, at least when execute_once\nis set, before exiting parallel mode.\n3. Figure out, possibly at a later time or only in HEAD, how to make\nthe early call to ExecShutdownNode() in ExecLimit(), and then put it\nback. 
I think we could do this by passing down some information\nindicating which nodes are potentially rescanned by other nodes higher\nup in the tree; there's the separate question of whether rescans can\nhappen due to cursor operations, but the execute_once stuff can handle\nthat aspect of it, I think.\n\nI'm not quite sure that approach is altogether correct so I'd\nappreciate some analysis on that point.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Aug 2019 08:59:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Sat, Aug 10, 2019 at 12:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But beyond that, the issue here is that the Limit node is shutting\n> down the Gather node too early, and the right fix must be to stop\n> doing that, not to change the definition of what it means to shut down\n> a node, as this patch does. So maybe a possible approach here - which\n> I think is more or less what Tom is proposing - is:\n>\n> 1. Remove the code from ExecLimit() that calls ExecShutdownNode().\n> 2. Adjust ExecutePlan() so that it ensures that ExecuteShutdownNode()\n> gets called at the very end of execution, at least when execute_once\n> is set, before exiting parallel mode.\n> 3. Figure out, possibly at a later time or only in HEAD, how to make\n> the early call to ExecLimit() in ExecShutdownNode(), and then put it\n> back. I think we could do this by passing down some information\n> indicating which nodes are potentially rescanned by other nodes higher\n> up in the tree; there's the separate question of whether rescans can\n> happen due to cursor operations, but the execute_once stuff can handle\n> that aspect of it, I think.\n>\n> I'm not quite sure that approach is altogether correct so I'd\n> appreciate some analysis on that point.\n\nI'm not sure exactly what we should do yet, but one thought I wanted\nto resurrect from older discussions is that we now think it was a\nmistake to give every Gather node its own DSM segment, having seen\nqueries in the wild where that decision interacted badly with large\nnumber of partitions. In 13 we should try to figure out how to have a\nsingle DSM segment allocated for all Gather[Merge] nodes in the tree\n(and remove the embarrassing band-aid hack in commit fd7c0fa7).\nThat's possibly relevant because it means we'd have a ParallelContext\nor some new overarching object that has a lifetime that is longer than\nthe individual Gather nodes' processes and instrumentation data. 
I'm\nnot saying we need to discuss any details of this other concern now,\nI'm just wondering out loud if the whole problem in this thread goes\naway automatically when we fix it.\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Aug 2019 13:04:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On 2019-Aug-12, Thomas Munro wrote:\n\n> That's possibly relevant because it means we'd have a ParallelContext\n> or some new overarching object that has a lifetime that is longer than\n> the individual Gather nodes' processes and instrumentation data. I'm\n> not saying we need to discuss any details of this other concern now,\n> I'm just wondering out loud if the whole problem in this thread goes\n> away automatically when we fix it.\n\nHow likely is it that we would ever be able to release memory from a\nSort (or, say, a hashjoin hash table) when it's done being read, but\nbefore completing the whole plan? As I understand, right now we hold\nonto a lot of memory after such plans have been fully read, for no good\nreason other than executor being unaware of this. This might not be\ndirectly related to the problem at hand, since it's not just parallel\nplans that are affected.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:07:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 3:07 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> How likely is it that we would ever be able to release memory from a\n> Sort (or, say, a hashjoin hash table) when it's done being read, but\n> before completing the whole plan? As I understand, right now we hold\n> onto a lot of memory after such plans have been fully read, for no good\n> reason other than executor being unaware of this. This might not be\n> directly related to the problem at hand, since it's not just parallel\n> plans that are affected.\n\nBeing able to do that sort of thing was one of my goals in designing\nthe ExecShutdownNode stuff. Unfortunately, it's clear from this bug\nreport that it's still a few bricks short of a load, and Tom doesn't\nseem real optimistic about how easy it will be to buy those bricks at\ndiscount prices. But I hope we persist in trying to get there, because\nI don't like the idea of saying that we'll never be smart enough to\nknow we're done with any part of the plan until we're definitely done\nwith the whole thing. I think that's leaving too much money on the\ntable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Aug 2019 17:42:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 7:07 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Aug-12, Thomas Munro wrote:\n> > That's possibly relevant because it means we'd have a ParallelContext\n> > or some new overarching object that has a lifetime that is longer than\n> > the individual Gather nodes' processes and instrumentation data. I'm\n> > not saying we need to discuss any details of this other concern now,\n> > I'm just wondering out loud if the whole problem in this thread goes\n> > away automatically when we fix it.\n>\n> How likely is it that we would ever be able to release memory from a\n> Sort (or, say, a hashjoin hash table) when it's done being read, but\n> before completing the whole plan? As I understand, right now we hold\n> onto a lot of memory after such plans have been fully read, for no good\n> reason other than executor being unaware of this. This might not be\n> directly related to the problem at hand, since it's not just parallel\n> plans that are affected.\n\nRight, AIUI we hold onto that memory because it's a nice optimisation\nto be able to rescan the sorted data or reuse the hash table (single\nbatch, non-parallel hash joins only for now). We have no\ndisincentive, because our memory model doesn't care about the total\npeak memory usage (ie all nodes). Some other RDBMSs do care about\nthat, and somehow consider the peak memory usage (that is, considering\nearly memory release) when comparing join orders.\n\nHowever, I think it's independent of the DSM lifetime question,\nbecause the main Parallel Context DSM segment is really small, it has\na small fixed header and then a small object per parallel-aware node,\nand isn't used for holding the hash table for Parallel Hash and\nprobably wouldn't be used for a future hypothetical Parallel Sort (if\nit's implemented the way I imagine at least). 
It contains a DSA area,\nwhich creates more DSM segments as it needs them, and nodes can opt to\nfree DSA memory sooner, which will likely result in those extra DSM\nsegments being freed; you can see that happening in Parallel Hash\nwhich in fact does give back memory quite eagerly. (I'm the first to\nadmit that it's weird that DSM segments can hold DSA areas and DSA\nareas are made up of DSM segments; that falls out of the choice to use\nDSM segments both for storage and as a lifetime management system for\nshared resources, and I wouldn't be surprised if we reconsider that as\nwe get more experience and ideas.)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2019 09:47:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Being able to do that sort of thing was one of my goals in designing\n> the ExecShutdownNode stuff. Unfortunately, it's clear from this bug\n> report that it's still a few bricks short of a load, and Tom doesn't\n> seem real optimistic about how easy it will be to buy those bricks at\n> discount prices. But I hope we persist in trying to get there, because\n> I don't like the idea of saying that we'll never be smart enough to\n> know we're done with any part of the plan until we're definitely done\n> with the whole thing. I think that's leaving too much money on the\n> table.\n\nTo clarify my position --- I think it's definitely possible to improve\nthe situation a great deal. We \"just\" have to pass down more information\nabout whether rescans are possible. What I don't believe is that that\nleads to a bug fix that would be sane to back-patch as far as 9.6.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2019 17:48:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 5:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> To clarify my position --- I think it's definitely possible to improve\n> the situation a great deal. We \"just\" have to pass down more information\n> about whether rescans are possible. What I don't believe is that that\n> leads to a bug fix that would be sane to back-patch as far as 9.6.\n\nSounds like a fair opinion. I'm not sure how complicated the fix\nwould be so I don't know whether I agree with your opinion, but you\nusually have a fairly good intuition for such things, so...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Aug 2019 19:58:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 3:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Being able to do that sort of thing was one of my goals in designing\n> > the ExecShutdownNode stuff. Unfortunately, it's clear from this bug\n> > report that it's still a few bricks short of a load, and Tom doesn't\n> > seem real optimistic about how easy it will be to buy those bricks at\n> > discount prices. But I hope we persist in trying to get there, because\n> > I don't like the idea of saying that we'll never be smart enough to\n> > know we're done with any part of the plan until we're definitely done\n> > with the whole thing. I think that's leaving too much money on the\n> > table.\n>\n> To clarify my position --- I think it's definitely possible to improve\n> the situation a great deal. We \"just\" have to pass down more information\n> about whether rescans are possible.\n>\n\nRight, you have speculated above that it is possible via adding some\neflag bits. Can you please describe a bit more about that idea, so\nthat somebody else can try to write a patch? I think if someone other\nthan you try to write a patch without having some sort of upfront\ndesign, it might lead to a lot of re-work. It would be great if you\nhave an interest in doing the leg work which can then be extended to\nfix the issue in the parallel query, but if not at least let us know\nthe idea you have in mind in a bit more detail.\n\n> What I don't believe is that that\n> leads to a bug fix that would be sane to back-patch as far as 9.6.\n>\n\nFair enough. In such a situation, we can plan to revert the earlier\nfix for Limit node and tell people that the same will be fixed in\nPG-13.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2019 09:10:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Aug 13, 2019 at 3:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> To clarify my position --- I think it's definitely possible to improve\n>> the situation a great deal. We \"just\" have to pass down more information\n>> about whether rescans are possible.\n\n> Right, you have speculated above that it is possible via adding some\n> eflag bits. Can you please describe a bit more about that idea, so\n> that somebody else can try to write a patch?\n\nWell, there are two components to solving this problem:\n\n1. What are we going to do about the executor's external API?\n\nRight now, callers of ExecutorStart don't have to say whether they\nmight call ExecutorRewind. We need some way for callers to make a\nbinding promise that they won't do any such thing. Perhaps we just\nwant to invent another flag that's like EXEC_FLAG_BACKWARD, but it's\nnot clear how it should interact with the existing \"soft\" REWIND\nflag. Nor do I know how far up the call stack will we have to make\nchanges to make it possible to promise as much as we can -- for\ninstance, will we have to adapt the SPI interfaces?\n\n2. What happens inside ExecutorStart in response to such promises?\n\nI imagine that we translate them into additional eflags bits that\nget passed down to node init functions, possibly with modification\n(e.g., nodeNestloop.c would have to revoke the no-rescans promise\nto its inner input). You'd need to work out what is the most\nconvenient set of conventions (positive or negative sense of the\nflag bits, etc), and go through all the non-leaf node types to\ndetermine what they can pass down.\n\n(BTW, unless I'm missing something, there's not currently any\nenforcement of EXEC_FLAG_BACKWARD, ie a caller can fail to pass\nthat and then try to back up anyway. 
We probably want to improve\nthat situation, and also enforce this new flag about\nExecutorRewind.)\n\nThe reason I'm dubious about back-patching this is that each\nof these things seems likely to affect external code. Point 1\ncould affect external callers of the executor, and point 2 is\nlikely to have consequences for FDWs and custom-scan providers.\nMaybe we can set things up so that everything defaults in a\nsafe direction for unchanged code, but I don't want to contort\nthe design just to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Aug 2019 11:58:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 9:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Tue, Aug 13, 2019 at 3:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> To clarify my position --- I think it's definitely possible to improve\n> >> the situation a great deal. We \"just\" have to pass down more information\n> >> about whether rescans are possible.\n>\n> > Right, you have speculated above that it is possible via adding some\n> > eflag bits. Can you please describe a bit more about that idea, so\n> > that somebody else can try to write a patch?\n>\n> Well, there are two components to solving this problem:\n>\n> 1. What are we going to do about the executor's external API?\n>\n> Right now, callers of ExecutorStart don't have to say whether they\n> might call ExecutorRewind. We need some way for callers to make a\n> binding promise that they won't do any such thing. Perhaps we just\n> want to invent another flag that's like EXEC_FLAG_BACKWARD, but it's\n> not clear how it should interact with the existing \"soft\" REWIND\n> flag.\n\nYeah making it interact with REWIND will be a bit of challenge as I\nthink to some extent the REWIND flag also indicates the same. Do I\nunderstand correctly that, we have some form of rule such that if\nEXEC_FLAG_REWIND is set or node's chgParam is NULL, then we can expect\nthat node can support rescan? If it is true, then maybe we need to\ndesign this new flag in such a way that it covers existing cases of\nREWIND as well.\n\nAnother point which I am wondering is why can't we use the existing\nREWIND flag to solve the current issue, basically if we have access to\nthat information in nodeLimit.c (ExecLimit), then can't we just pass\ndown that to ExecShutdownNode? 
I guess the problem could be that if\nLimitNode doesn't support REWIND, but one of the nodes beneath it\nsupports that same, then we won't be able to rely on the information\npassed to ExecShutdownNode.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Aug 2019 10:12:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Another point which I am wondering is why can't we use the existing\n> REWIND flag to solve the current issue, basically if we have access to\n> that information in nodeLimit.c (ExecLimit), then can't we just pass\n> down that to ExecShutdownNode?\n\nThe existing REWIND flag tells subnodes whether they should *optimize*\nfor getting rewound or not. I don't recall right now (well past\nmidnight) why that seemed like a useful definition, but if you grep for\nplaces that are paying attention to that flag, I'm sure you'll find out.\n\nWe probably don't want to give up that distinction --- if it had been\nequally good to define the flag as a hard yes-or-no, I'm sure we would\nhave taken that definition, because it's simpler.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 00:52:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 6:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2019 at 5:45 AM vignesh C <vignesh21@gmail.com> wrote:\n> > I have made a patch based on the above lines.\n> > I have tested the scenarios which Thomas had shared in the earlier\n> > mail and few more tests based on Thomas's tests.\n> > I'm not sure if we will be going ahead with this solution or not.\n> > Let me know your opinion on the same.\n> > If you feel this approach is ok, we can add few of this tests into pg tests.\n>\n> I think this patch is bizarre:\n>\n> - It introduces a new function called getParallelModeLevel(), which is\n> randomly capitalized different from the other functions that do\n> similar things, and then uses it to do the same thing that could have\n> been done with the existing function IsInParallelMode().\n> - It contains an \"if\" statement whose only content is an Assert().\n> Don't write if (a) Assert(b); write Assert(!a || b).\n> - It contains zero lines of comment changes, which is obviously not\n> enough for a patch that proposes to fix a very thorny issue. This\n> failure has two parts. First, it adds no new comments to explain the\n> bug being fixed or the theory of operation of the new code. Second, it\n> does not even bother updating existing comments that are falsified by\n> the patch, such as the function header comments for\n> ExecParallelCleanup and ExecShutdownGather.\n> - It changes what ExecParallelCleanup does while adjusting only one of\n> the two callers to match the behavior change. nodeGatherMerge.c\n> manages to be completed untouched by this patch. 
If you change what a\n> function does, you really need to grep for all the calls to that\n> function and adjust all callers to match the new set of expectations.\n>\n> It's a little hard to get past all of those issues and look at what\n> the patch actually does, but I'm going to try: the theory of operation\n> of the patch seems to be that we can skip destroying the parallel\n> context when performing ExecParallelCleanup and in fact when exiting\n> parallel mode, and then when we get to executor end time the context\n> will still be there and we can fish the instrumentation out of it. But\n> this seems problematic for several reasons. For one thing, as Amit\n> already observed, the code currently contains an assertion which\n> ensure that a ParallelContext can't outlive the time spent in parallel\n> mode, and it doesn't seem desirable to relax that assertion (this\n> patch removes it).\n>\n> But beyond that, the issue here is that the Limit node is shutting\n> down the Gather node too early, and the right fix must be to stop\n> doing that, not to change the definition of what it means to shut down\n> a node, as this patch does. So maybe a possible approach here - which\n> I think is more or less what Tom is proposing - is:\n>\n> 1. Remove the code from ExecLimit() that calls ExecShutdownNode().\n>\n\nAttached patch does that. I have also added one test as a separate\npatch so that later if we introduce shutting down resources in Limit\nnode, we don't break anything. As of now, I have kept it separate for\neasy verification, but if we decide to go with this approach and test\nappears fine, we can merge it along with the fix.\n\n> 2. Adjust ExecutePlan() so that it ensures that ExecuteShutdownNode()\n> gets called at the very end of execution, at least when execute_once\n> is set, before exiting parallel mode.\n>\n\nI am not sure if I completely understand this point. 
AFAICS, the\nExecuteShutdownNode is called whenever we are done getting the tuples.\nOne place where it is not there in that function is when we assume\ndestination is closed, basically below code:\nExecutePlan()\n{\n..\nif (!dest->receiveSlot(slot, dest))\nbreak;\n..\n}\n\nDo you expect this case to be also dealt or you have something else in\nmind? The other possibility could be that we move the shutdown of the\nnode at the end of the function when we exit parallel mode but doing\nthat lead to some regression failure on my machine. I will\ninvestigate the same.\n\n> 3. Figure out, possibly at a later time or only in HEAD, how to make\n> the early call to ExecLimit() in ExecShutdownNode(), and then put it\n> back.\n\nOkay, Tom has suggested a design to address this, but that will be\nfor HEAD only. To be clear, I am not planning to spend time on that\nat this moment, but OTOH, I want the bug reported in this thread to be\nclosed, so for now, we need to proceed with some minimum fix as\nmentioned by you in above two points. If someone else can write a\npatch, I can help in the review of same.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 2 Sep 2019 16:51:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Sep 2, 2019 at 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 9, 2019 at 6:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> >\n> > But beyond that, the issue here is that the Limit node is shutting\n> > down the Gather node too early, and the right fix must be to stop\n> > doing that, not to change the definition of what it means to shut down\n> > a node, as this patch does. So maybe a possible approach here - which\n> > I think is more or less what Tom is proposing - is:\n> >\n> > 1. Remove the code from ExecLimit() that calls ExecShutdownNode().\n> >\n>\n> Attached patch does that. I have also added one test as a separate\n> patch so that later if we introduce shutting down resources in Limit\n> node, we don't break anything. As of now, I have kept it separate for\n> easy verification, but if we decide to go with this approach and test\n> appears fine, we can merge it along with the fix.\n>\n\nI have merged the code change and test case patch as I felt that it is\ngood to cover this case. I have slightly changed the test case to\nmake its output predictable (made the inner scan ordered so that the\nquery always produces the same result). One more thing I am not able\nto come up with some predictable test case for 9.6 branches as it\ndoesn't support Gather Merge which is required for this particular\ntest to always produce predictable output. There could be some better\nway to write this test, so any input in that regards or otherwise is\nwelcome. So, if we commit this patch the containing test case will be\nfor branches HEAD~10, but the code will be for HEAD~9.6.\n\n> > 2. Adjust ExecutePlan() so that it ensures that ExecuteShutdownNode()\n> > gets called at the very end of execution, at least when execute_once\n> > is set, before exiting parallel mode.\n> >\n>\n> I am not sure if I completely understand this point. 
AFAICS, the\n> ExecuteShutdownNode is called whenever we are done getting the tuples.\n> One place where it is not there in that function is when we assume\n> destination is closed, basically below code:\n> ExecutePlan()\n> {\n> ..\n> if (!dest->receiveSlot(slot, dest))\n> break;\n> ..\n> }\n>\n> Do you expect this case to be also dealt or you have something else in\n> mind?\n>\n\nIt still appears problematic, but I couldn't come up with a test case\nto reproduce the problem. I'll try some more on this, but I think\nthis anyway can be done separately once we have a test to show the\nproblem.\n\n> The other possibility could be that we move the shutdown of the\n> node at the end of the function when we exit parallel mode but doing\n> that lead to some regression failure on my machine. I will\n> investigate the same.\n>\n\nThis was failing because use_parallel_mode flag in function\nExecutePlan() won't be set for workers and hence they won't get a\nchance to accumulate its stats.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 5 Sep 2019 19:53:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 7:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 2, 2019 at 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Aug 9, 2019 at 6:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > >\n> > > But beyond that, the issue here is that the Limit node is shutting\n> > > down the Gather node too early, and the right fix must be to stop\n> > > doing that, not to change the definition of what it means to shut down\n> > > a node, as this patch does. So maybe a possible approach here - which\n> > > I think is more or less what Tom is proposing - is:\n> > >\n> > > 1. Remove the code from ExecLimit() that calls ExecShutdownNode().\n> > >\n> >\n> > Attached patch does that. I have also added one test as a separate\n> > patch so that later if we introduce shutting down resources in Limit\n> > node, we don't break anything. As of now, I have kept it separate for\n> > easy verification, but if we decide to go with this approach and test\n> > appears fine, we can merge it along with the fix.\n> >\n>\n> I have merged the code change and test case patch as I felt that it is\n> good to cover this case. I have slightly changed the test case to\n> make its output predictable (made the inner scan ordered so that the\n> query always produces the same result). One more thing I am not able\n> to come up with some predictable test case for 9.6 branches as it\n> doesn't support Gather Merge which is required for this particular\n> test to always produce predictable output. There could be some better\n> way to write this test, so any input in that regards or otherwise is\n> welcome. So, if we commit this patch the containing test case will be\n> for branches HEAD~10, but the code will be for HEAD~9.6.\n>\n\nRobert, Thomas, do you have any more suggestions related to this. I\nam planning to commit the above-discussed patch (Forbid Limit node to\nshutdown resources.) 
coming Monday, so that at least the reported\nproblem got fixed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Sep 2019 18:25:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Robert, Thomas, do you have any more suggestions related to this. I\n> am planning to commit the above-discussed patch (Forbid Limit node to\n> shutdown resources.) coming Monday, so that at least the reported\n> problem got fixed.\n\nI think that your commit message isn't very clear about what the\nactual issue is. And the patch itself doesn't add any comments or\nanything to try to clear it up. So I wouldn't favor committing it in\nthis form.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:35:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 1:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Sep 12, 2019 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Robert, Thomas, do you have any more suggestions related to this. I\n> > am planning to commit the above-discussed patch (Forbid Limit node to\n> > shutdown resources.) coming Monday, so that at least the reported\n> > problem got fixed.\n>\n> I think that your commit message isn't very clear about what the\n> actual issue is. And the patch itself doesn't add any comments or\n> anything to try to clear it up. So I wouldn't favor committing it in\n> this form.\n\nIs the proposed commit message at the bottom of this email an improvement?\n\nDo I understand correctly that, with this patch, we can only actually\nlose statistics in the case where we rescan? That is, precisely the\ncase that crashes (9.6) or spews warnings (10+)? In a quick\nnon-rescan test with the ExecShutdownNode() removed, I don't see any\nproblem with the buffer numbers on my screen:\n\npostgres=# explain (analyze, buffers, timing off, costs off) select\ncount(*) from t limit 50000;\n QUERY PLAN\n------------------------------------------------------------------------------\n Limit (actual rows=1 loops=1)\n Buffers: shared hit=16210 read=28038\n -> Finalize Aggregate (actual rows=1 loops=1)\n Buffers: shared hit=16210 read=28038\n -> Gather (actual rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=16210 read=28038\n -> Partial Aggregate (actual rows=1 loops=3)\n Buffers: shared hit=16210 read=28038\n -> Parallel Seq Scan on t (actual rows=3333333 loops=3)\n Buffers: shared hit=16210 read=28038\n Planning Time: 0.086 ms\n Execution Time: 436.669 ms\n(14 rows)\n\n===\nDon't shut down Gather[Merge] early under Limit.\n\nRevert part of commit 19df1702f5.\n\nEarly shutdown was added by that commit so that we could collect\nstatistics from workers, but unfortunately it interacted badly with\nrescans. 
Rescanning a Limit over a Gather node could produce a SEGV\non 9.6 and resource leak warnings on later releases. By reverting the\nearly shutdown code, we might lose statistics in some cases of Limit\nover Gather, but that will require further study to fix.\n\nAuthor: Amit Kapila, testcase by Vignesh C\nReported-by: Jerry Sievers\nDiagnosed-by: Thomas Munro\nBackpatch-through: 9.6\nDiscussion: https://postgr.es/m/87ims2amh6.fsf@jsievers.enova.com\n===\n\n\n",
"msg_date": "Thu, 17 Oct 2019 18:20:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Thu, Oct 17, 2019 at 10:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Sep 13, 2019 at 1:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Sep 12, 2019 at 8:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Robert, Thomas, do you have any more suggestions related to this. I\n> > > am planning to commit the above-discussed patch (Forbid Limit node to\n> > > shutdown resources.) coming Monday, so that at least the reported\n> > > problem got fixed.\n> >\n> > I think that your commit message isn't very clear about what the\n> > actual issue is. And the patch itself doesn't add any comments or\n> > anything to try to clear it up. So I wouldn't favor committing it in\n> > this form.\n>\n> Is the proposed commit message at the bottom of this email an improvement?\n>\n> Do I understand correctly that, with this patch, we can only actually\n> lose statistics in the case where we rescan?\n>\n\nNo, it will lose without rescan as well. To understand in detail, you\nmight want to read the emails pointed by me in one of the above email\n[1] in this thread.\n\n> That is, precisely the\n> case that crashes (9.6) or spews warnings (10+)? In a quick\n> non-rescan test with the ExecShutdownNode() removed, I don't see any\n> problem with the buffer numbers on my screen:\n>\n\nTry by removing aggregate function. Basically, the Limit node has to\nfinish before consuming all the rows sent by a parallel node beneath\nit.\n\n>\n> ===\n> Don't shut down Gather[Merge] early under Limit.\n>\n> Revert part of commit 19df1702f5.\n>\n> Early shutdown was added by that commit so that we could collect\n> statistics from workers, but unfortunately it interacted badly with\n> rescans. Rescanning a Limit over a Gather node could produce a SEGV\n> on 9.6 and resource leak warnings on later releases. 
By reverting the\n> early shutdown code, we might lose statistics in some cases of Limit\n> over Gather, but that will require further study to fix.\n>\n\nHow about some text like below? I have added slightly different text\nto explain the reason for the problem.\n\n\"Early shutdown was added by that commit so that we could collect\nstatistics from workers, but unfortunately, it interacted badly with\nrescans. The problem is that we ended up destroying the parallel\ncontext which is required for rescans. This leads to rescans of a\nLimit node over a Gather node to produce unpredictable results as it\ntries to access destroyed parallel context. By reverting the early\nshutdown code, we might lose statistics in some cases of Limit over\nGather, but that will require further study to fix.\"\n\nI am not sure but we can even add a comment in the place where we are\nremoving some code (in nodeLimit.c) to indicate that 'Ideally we\nshould shutdown parallel resources here to get the correct stats, but\nthat would lead to rescans misbehaving when there is a Gather [Merge]\nnode beneath it. (Explain the reason for misbehavior and the ideas we\ndiscussed in this thread to fix the same) .........\"\n\nI can try to come up with comments in nodeLimit.c on the above lines\nif we think that is a good idea?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Ja-eoavXcr0eq7w7hP%3D64VP49k%3DNMFxwhtK28NHfBOdA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 18 Oct 2019 10:08:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Fri, Oct 18, 2019 at 10:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 17, 2019 at 10:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > ===\n> > Don't shut down Gather[Merge] early under Limit.\n> >\n> > Revert part of commit 19df1702f5.\n> >\n> > Early shutdown was added by that commit so that we could collect\n> > statistics from workers, but unfortunately it interacted badly with\n> > rescans. Rescanning a Limit over a Gather node could produce a SEGV\n> > on 9.6 and resource leak warnings on later releases. By reverting the\n> > early shutdown code, we might lose statistics in some cases of Limit\n> > over Gather, but that will require further study to fix.\n> >\n>\n> How about some text like below? I have added slightly different text\n> to explain the reason for the problem.\n>\n> \"Early shutdown was added by that commit so that we could collect\n> statistics from workers, but unfortunately, it interacted badly with\n> rescans. The problem is that we ended up destroying the parallel\n> context which is required for rescans. This leads to rescans of a\n> Limit node over a Gather node to produce unpredictable results as it\n> tries to access destroyed parallel context. By reverting the early\n> shutdown code, we might lose statistics in some cases of Limit over\n> Gather, but that will require further study to fix.\"\n>\n> I am not sure but we can even add a comment in the place where we are\n> removing some code (in nodeLimit.c) to indicate that 'Ideally we\n> should shutdown parallel resources here to get the correct stats, but\n> that would lead to rescans misbehaving when there is a Gather [Merge]\n> node beneath it. 
(Explain the reason for misbehavior and the ideas we\n> discussed in this thread to fix the same) .........\"\n>\n> I can try to come up with comments in nodeLimit.c on the above lines\n> if we think that is a good idea?\n>\n\nI have modified the commit message as proposed above and additionally\nadded comments in nodeLimit.c. I think we should move ahead with this\nbug-fix patch. If we don't like the comment, it can anyway be\nimproved later.\n\nAny suggestions?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 18 Nov 2019 14:22:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Mon, Nov 18, 2019 at 2:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I have modified the commit message as proposed above and additionally\n> added comments in nodeLimit.c. I think we should move ahead with this\n> bug-fix patch. If we don't like the comment, it can anyway be\n> improved later.\n>\n> Any suggestions?\n>\n\nIf there are no further suggestions or objections, I will commit this\nearly next week, probably on Monday.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Nov 2019 17:12:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 18, 2019 at 2:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have modified the commit message as proposed above and additionally\n> > added comments in nodeLimit.c. I think we should move ahead with this\n> > bug-fix patch. If we don't like the comment, it can anyway be\n> > improved later.\n> >\n> > Any suggestions?\n> >\n>\n> If there are no further suggestions or objections, I will commit this\n> early next week, probably on Monday.\n>\n\nYesterday, I pushed this patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Nov 2019 07:52:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SegFault on 9.6.14"
}
] |
[
{
"msg_contents": "I see some code of long standing in PL/Java where its handler\nfor a set-returning function creates a new context, \"PL/Java row context\",\nduring first-call init, as a child of context->multi_call_memory_ctx,\ndiligently resets it with every percall-setup and before calling the\nuser function, and deletes it when the whole set has been returned.\n\nThe more I look at it, the more convinced I am that this is one hundred\npercent redundant with what ExecMakeTableFunctionResult is already doing\nwith ecxt_per_tuple_memory, which is similarly reset before every call\nfor a tuple, and is already current when PL/Java's handler is called.\n\nAm I missing some obvious reason Thomas might have used his own context\nfor that? As far as I can see in git, ExecMakeTableFunctionResult has been\nproviding its own ecxt_per_tuple_memory, at least as far back as 7.3,\nwhich I think is (slightly) older than PL/Java.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 15 Jul 2019 20:43:43 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "doesn't execSRF.c do that already?"
}
] |
[
{
"msg_contents": "Hi,\n\nI am getting ERROR: relation 16401 has no triggers error while executing\nbelow query.\n\npostgres=# create table tbl1(f1 int primary key);\nCREATE TABLE\npostgres=# create table tbl2(f1 int references tbl1 deferrable initially\ndeferred) partition by range(f1);\nCREATE TABLE\npostgres=# create table tbl2_p1 partition of tbl2 for values from\n(minvalue) to (maxvalue);\nCREATE TABLE\npostgres=# insert into tbl1 values(1);\nINSERT 0 1\npostgres=# begin;\nBEGIN\npostgres=# insert into tbl2 values(1);\nINSERT 0 1\npostgres=# alter table tbl2 drop constraint tbl2_f1_fkey;\nALTER TABLE\npostgres=# commit;\nERROR: relation 16395 has no triggers\n\nThanks & Regards,\nRajkumar Raghuwanshi\nQMG, EnterpriseDB Corporation\n\nHi, I am getting ERROR: relation 16401 has no triggers error while executing below query.postgres=# create table tbl1(f1 int primary key);CREATE TABLEpostgres=# create table tbl2(f1 int references tbl1 deferrable initially deferred) partition by range(f1);CREATE TABLEpostgres=# create table tbl2_p1 partition of tbl2 for values from (minvalue) to (maxvalue);CREATE TABLEpostgres=# insert into tbl1 values(1);INSERT 0 1postgres=# begin;BEGINpostgres=# insert into tbl2 values(1);INSERT 0 1postgres=# alter table tbl2 drop constraint tbl2_f1_fkey;ALTER TABLEpostgres=# commit;ERROR: relation 16395 has no triggersThanks & Regards,Rajkumar RaghuwanshiQMG, EnterpriseDB Corporation",
"msg_date": "Tue, 16 Jul 2019 13:07:45 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "getting ERROR \"relation 16401 has no triggers\" with partition foreign\n key alter"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 01:07:45PM +0530, Rajkumar Raghuwanshi wrote:\n> I am getting ERROR: relation 16401 has no triggers error while executing\n> below query.\n\nIndeed, confirmed. I can reproduce that down to v11, so that's not an\nopen item. I have added an entry in the section for older issues\nthough.\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 17:05:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> writes:\n> I am getting ERROR: relation 16401 has no triggers error while executing\n> below query.\n\nYeah, I can reproduce that back to v11. If you try the same scenario\nwith a non-partitioned table you get\n\nERROR: 55006: cannot ALTER TABLE \"tbl2\" because it has pending trigger events\nLOCATION: CheckTableNotInUse, tablecmds.c:3436\n\nbut that test evidently fails to detect pending events for a partition\nchild table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2019 09:59:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Jul-16, Tom Lane wrote:\n\n> Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> writes:\n> > I am getting ERROR: relation 16401 has no triggers error while executing\n> > below query.\n> \n> Yeah, I can reproduce that back to v11. If you try the same scenario\n> with a non-partitioned table you get\n> \n> ERROR: 55006: cannot ALTER TABLE \"tbl2\" because it has pending trigger events\n> LOCATION: CheckTableNotInUse, tablecmds.c:3436\n> \n> but that test evidently fails to detect pending events for a partition\n> child table.\n\nAh, yeah. So the problem is that when dropping an FK,\nATExecDropConstraint does not recurse itself, but instead relies on the\ndependency mechanism, which obviously does not run CheckTableNotInUse on\nthe partitions.\n\nI think we should just run CheckTableNotInUse for each partition in\nATExecDropConstraint. Trying that out now.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:08:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Jul-17, Alvaro Herrera wrote:\n\n> I think we should just run CheckTableNotInUse for each partition in\n> ATExecDropConstraint. Trying that out now.\n\nActually, that doesn't fix this problem, because the partitioned side is\nthe *referencing* side, and ATExecDropConstraint is obsessed about the\n*referenced* side only and assumes that the calling code has already\ndealt with the referencing side checks. I'm trying a fix for that now.\n\nI wonder if there are other AT subcommands that are similarly broken,\nbecause many of them skip the CheckTableNotInUse for the partitions.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:30:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Jul-17, Alvaro Herrera wrote:\n\n> Actually, that doesn't fix this problem, because the partitioned side is\n> the *referencing* side, and ATExecDropConstraint is obsessed about the\n> *referenced* side only and assumes that the calling code has already\n> dealt with the referencing side checks. I'm trying a fix for that now.\n\nYeah, the attached patch fixes Rajkumar's reproducer.\n\n> I wonder if there are other AT subcommands that are similarly broken,\n> because many of them skip the CheckTableNotInUse for the partitions.\n\nI suppose the question here is where else do we need to call the new\nATRecurseCheckNotInUse function (which needs a comment).\n\nI thought about doing the recursion in CheckTableNotInUse itself, but I\ndidn't feel comfortable with assuming that all callers are OK with that.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 17 Jul 2019 18:48:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Jul-17, Alvaro Herrera wrote:\n\n> On 2019-Jul-17, Alvaro Herrera wrote:\n\n> > I wonder if there are other AT subcommands that are similarly broken,\n> > because many of them skip the CheckTableNotInUse for the partitions.\n> \n> I suppose the question here is where else do we need to call the new\n> ATRecurseCheckNotInUse function (which needs a comment).\n\nI decided to rename the new function to ATCheckPartitionsNotInUse, and\nmake it a no-op for legacy inheritance. This seems quite specific to\npartitioned tables (as opposed to legacy inheritance behavior).\n\nAfter looking at the code some more, I think calling the new function in\nthe Prep phase is correct. The attached patch is pretty much final form\nfor this bugfix. I decided to unwrap a couple of error messages (I did\nget bitten while grepping because of this), and reordered one of the new\nIdentity command cases in ATPrepCmd since it appeared in inconsistent\norder in that one place of four.\n\n\nI looked at all the other AT subcommand cases that might require the\nsame treatment, and didn't find anything -- either the recursion is done\nat Prep time, which checks already, or contains the proper check at Exec\ntime right after opening the partition rel. (I think it would be better\nto do the check during the Prep phase, to avoid wasting work in case a\npartition happens to be used. However, that's not critical and not for\nthis commit to fix IMO.)\n\nSeparately from that, there's AT_SetLogged / AT_SetUnlogged which look\npretty dubious ... I'm not sure that recursion is handled correctly\nthere. 
Maybe it's considered okay to have a partitioned table with\nunlogged partitions, and vice versa?\n\nI also noticed that AT_AlterConstraint does not handle recursion at all,\nand it also has this comment:\n\n * Currently only works for Foreign Key constraints.\n * Foreign keys do not inherit, so we purposely ignore the\n * recursion bit here, but we keep the API the same for when\n * other constraint types are supported.\n\nwhich sounds contrary to reality.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 22 Jul 2019 18:18:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Jul-22, Alvaro Herrera wrote:\n\n> After looking at the code some more, I think calling the new function in\n> the Prep phase is correct. The attached patch is pretty much final form\n> for this bugfix. I decided to unwrap a couple of error messages (I did\n> get bitten while grepping because of this), and reordered one of the new\n> Identity command cases in ATPrepCmd since it appeared in inconsistent\n> order in that one place of four.\n\nPushed to all three branches.\n\nThanks for reporting\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:34:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-22, Alvaro Herrera wrote:\n>> After looking at the code some more, I think calling the new function in\n>> the Prep phase is correct. The attached patch is pretty much final form\n>> for this bugfix. I decided to unwrap a couple of error messages (I did\n>> get bitten while grepping because of this), and reordered one of the new\n>> Identity command cases in ATPrepCmd since it appeared in inconsistent\n>> order in that one place of four.\n\n> Pushed to all three branches.\n\nThis is still listed as a live issue in\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#Live_issues\n\nShould that be closed now?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Aug 2019 14:22:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
},
{
"msg_contents": "On 2019-Aug-14, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Jul-22, Alvaro Herrera wrote:\n> >> After looking at the code some more, I think calling the new function in\n> >> the Prep phase is correct. The attached patch is pretty much final form\n> >> for this bugfix. I decided to unwrap a couple of error messages (I did\n> >> get bitten while grepping because of this), and reordered one of the new\n> >> Identity command cases in ATPrepCmd since it appeared in inconsistent\n> >> order in that one place of four.\n> \n> > Pushed to all three branches.\n> \n> This is still listed as a live issue in\n> \n> https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#Live_issues\n> \n> Should that be closed now?\n\nYep, done, thanks!\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 14:33:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: getting ERROR \"relation 16401 has no triggers\" with partition\n foreign key alter"
}
] |
[
{
"msg_contents": "The attached small patch adds new entry points to simplehash.h that\nallow the caller to pass in the already-calculated hash value, so that\nsimplehash doesn't need to recalculate it.\n\nThis is helpful for Memory-Bounded Hash Aggregation[1], which uses the\nhash value for multiple purposes. For instance, if the hash table is\nfull and the group is not already present in the hash table, it needs\nto spill the tuple to disk. In that case, it would use the hash value\nfor the initial lookup, then to select the right spill partition.\nLater, when it processes the batch, it will again need the same hash\nvalue to perform a lookup. By separating the hash value calculation\nfrom where it's used, we can avoid needlessly recalculating it for each\nof these steps.\n\nThere is already an option for simplehash to cache the calculated hash\nvalue and return it with the entry, but that doesn't quite fit the\nneed. The hash value is needed in cases where the lookup fails, because\nthat is when the tuple must be spilled; but if the lookup fails, it\nreturns NULL, discarding the calculated hash value.\n\nI am including this patch separately from Hash Aggregation because it\nis a small and independently-reviewable change.\n\nIn theory, this could add overhead for \"SH_SCOPE extern\" for callers\nnot specifying their own hash value, because it adds an extra external\nfunction call. I looked at the generated LLVM and it's a simple tail\ncall, and I looked at the generated assembly and it's just an extra\njmp. I tested by doing a hash aggregation of 30M zeroes, which should\nexercise that path a lot, and I didn't see any difference. Also, once\nwe actually use this for hash aggregation, there will be no \"SH_SCOPE\nextern\" callers that don't specify the hash value anyway.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://postgr.es/m/507ac540ec7c20136364b5272acbcd4574aa76ef.camel%40j-davis.com",
"msg_date": "Tue, 16 Jul 2019 15:20:33 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Allow simplehash to use already-calculated hash values"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-16 15:20:33 -0700, Jeff Davis wrote:\n> The attached small patch adds new entry points to simplehash.h that\n> allow the caller to pass in the already-calculated hash value, so that\n> simplehash doesn't need to recalculate it.\n> \n> This is helpful for Memory-Bounded Hash Aggregation[1], which uses the\n> hash value for multiple purposes. For instance, if the hash table is\n> full and the group is not already present in the hash table, it needs\n> to spill the tuple to disk. In that case, it would use the hash value\n> for the initial lookup, then to select the right spill partition.\n> Later, when it processes the batch, it will again need the same hash\n> value to perform a lookup. By separating the hash value calculation\n> from where it's used, we can avoid needlessly recalculating it for each\n> of these steps.\n\nMakes sense to me.\n\n\n\n> In theory, this could add overhead for \"SH_SCOPE extern\" for callers\n> not specifying their own hash value, because it adds an extra external\n> function call. I looked at the generated LLVM and it's a simple tail\n> call, and I looked at the generated assembly and it's just an extra\n> jmp.\n\nHow does it look for gcc? And was that with LTO enabled or not?\n\nIs that still true when the hashtable is defined in a shared library, or\nwhen you compile postgres as a PIE executable? I'm not sure that\ncompilers can optimize the external function call at least in the former\ncase, because the typical function resolution rules IIRC mean that\nreferences to extern functions could be resolved to definitions in other\ntranslation units, *even if* there's a definition in the same TU.\n\nISTM that it'd be best to just have a static inline helper function\nemployed both the hash-passing and the \"traditional\" insertion routines?\nThen that problem ought to not exist anymore.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 15:46:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow simplehash to use already-calculated hash values"
},
{
"msg_contents": "On Tue, 2019-07-16 at 15:46 -0700, Andres Freund wrote:\n> ISTM that it'd be best to just have a static inline helper function\n> employed both the hash-passing and the \"traditional\" insertion\n> routines?\n> Then that problem ought to not exist anymore.\n\nAgreed, attached.\n\nRegards,\n\tJeff Davis",
"msg_date": "Wed, 17 Jul 2019 11:17:46 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow simplehash to use already-calculated hash values"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-17 11:17:46 -0700, Jeff Davis wrote:\n> From a6aba8e53f7a36a42922add68098682c2c96683e Mon Sep 17 00:00:00 2001\n> From: Jeff Davis <jdavis@postgresql.org>\n> Date: Wed, 17 Jul 2019 10:52:15 -0700\n> Subject: [PATCH] Allow simplehash to use already-calculated hash values.\n> \n> Add _lookup_hash and _insert_hash functions for callers that have\n> already calculated the hash value of the key.\n\nI've not tested it, but this looks reasonable to me. Do you actually\nneed the lookup variant, or is that more for completeness?\n\n\n> This is intended for use with hash algorithms that write to disk in\n> partitions. The hash value can be calculated once, used to perform a\n> lookup, used to select the partition, then written to the partition\n> along with the tuple. When the tuple is read back, the hash value does\n> not need to be recalculated.\n\nnitpick^3: I'd s/This is intended for use/The immediate use-case is/\n\n\n> +static inline\tSH_ELEMENT_TYPE *\n> +SH_INSERT_HASH_INTERNAL(SH_TYPE * tb, SH_KEY_TYPE key, uint32 hash, bool *found)\n\nI'd perhaps add a comment here along the lines of:\n\n/*\n * This is a separate static inline function, so it can be reliably be inlined\n * into its wrapper functions even if SH_SCOPE is extern.\n */\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:59:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow simplehash to use already-calculated hash values"
},
{
"msg_contents": "On Wed, 2019-07-17 at 11:59 -0700, Andres Freund wrote:\n> I've not tested it, but this looks reasonable to me. Do you actually\n> need the lookup variant, or is that more for completeness?\n\nYes. If the hash table is full, I do a lookup. If not, I do an insert.\n\n> nitpick^3: I'd s/This is intended for use/The immediate use-case is/\n\nOK.\n\n> > +static inline\tSH_ELEMENT_TYPE *\n> > +SH_INSERT_HASH_INTERNAL(SH_TYPE * tb, SH_KEY_TYPE key, uint32\n> > hash, bool *found)\n> \n> I'd perhaps add a comment here along the lines of:\n> \n> /*\n> * This is a separate static inline function, so it can be reliably\n> be inlined\n> * into its wrapper functions even if SH_SCOPE is extern.\n> */\n\nWill do.\n\nRegards,\n\tJeff\n\n\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:59:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow simplehash to use already-calculated hash values"
}
] |
[
{
"msg_contents": "Hi all,\n(Thomas in CC as per 6912acc0)\n\nI got surprised by the following behavior from pg_stat_get_wal_senders\nwhen connecting for example pg_receivewal to a primary:\n=# select application_name, flush_lsn, replay_lsn, flush_lag,\nreplay_lag from pg_stat_replication;\n application_name | flush_lsn | replay_lsn | flush_lag | replay_lag\n------------------+-----------+------------+-----------------+-----------------\n receivewal | null | null | 00:09:13.578185 | 00:09:13.578185\n(1 row)\n\nIt makes little sense to me, as we are reporting a replay lag on a\nposition which has never been reported yet, so it cannot actually be\nused as a comparison base for the lag. Am I missing something or\nshould we return NULL for those fields if we have no write, flush or\napply LSNs like in the attached?\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 17 Jul 2019 10:51:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_stat_replication lag fields return non-NULL values even with NULL\n LSNs"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 1:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I got surprised by the following behavior from pg_stat_get_wal_senders\n> when connecting for example pg_receivewal to a primary:\n> =# select application_name, flush_lsn, replay_lsn, flush_lag,\n> replay_lag from pg_stat_replication;\n> application_name | flush_lsn | replay_lsn | flush_lag | replay_lag\n> ------------------+-----------+------------+-----------------+-----------------\n> receivewal | null | null | 00:09:13.578185 | 00:09:13.578185\n> (1 row)\n>\n> It makes little sense to me, as we are reporting a replay lag on a\n> position which has never been reported yet, so it cannot actually be\n> used as a comparison base for the lag. Am I missing something or\n> should we return NULL for those fields if we have no write, flush or\n> apply LSNs like in the attached?\n\nHmm. It's working as designed, but indeed it's not very newsworthy\ninformation in this case. If you run pg_receivewal --synchronous then\nyou get sensible looking flush_lag times. Without that, flush_lag\nonly goes up, and of course replay_lag only goes up, so although it's\ntelling the truth, I think your proposal makes sense.\n\nOne question I had is what would happen with your patch without\n--synchronous, once it flushes a whole file and opens a new one; I\nwondered if your new boring-information-hiding behaviour would stop\nworking after one segment file because of that. I tested that and the\ncolumn remains NULL when we move to a new file, so that's good.\n\nOne thing I noticed in passing is that you always get the same times\nin the write_lag and flush_lag columns, in --synchronous mode, and the\ntimes updates infrequently. 
That's not the case with regular\nreplicas; I suspect there is a difference in the time and frequency of\nreplies sent to the server, which I guess might make synchronous\ncommit a bit \"lumpier\", but I didn't dig further today.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2019 11:15:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_replication lag fields return non-NULL values even with\n NULL LSNs"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 11:15:42AM +1200, Thomas Munro wrote:\n> Hmm. It's working as designed, but indeed it's not very newsworthy\n> information in this case. If you run pg_receivewal --synchronous then\n> you get sensible looking flush_lag times. Without that, flush_lag\n> only goes up, and of course replay_lag only goes up, so although it's\n> telling the truth, I think your proposal makes sense.\n\nThanks!\n\n> One question I had is what would happen with your patch without\n> --synchronous, once it flushes a whole file and opens a new one; I\n> wondered if your new boring-information-hiding behaviour would stop\n> working after one segment file because of that.\n\nIndeed.\n\n> I tested that and the column remains NULL when we move to a new\n> file, so that's good. \n\nThanks for looking.\n\n> One thing I noticed in passing is that you always get the same times\n> in the write_lag and flush_lag columns, in --synchronous mode, and the\n> times updates infrequently. That's not the case with regular\n> replicas; I suspect there is a difference in the time and frequency of\n> replies sent to the server, which I guess might make synchronous\n> commit a bit \"lumpier\", but I didn't dig further today.\n\nThe messages are sent by pg_receivewal via sendFeedback() in\nreceivelog.c. It gets triggered for the --synchronous case once a\nflush is done (but you are not surprised by my reply here, right!),\nand most likely the matches you are seeing some from the messages sent\nat the beginning of HandleCopyStream() where the flush and write\nLSNs are equal. This code behaves as I would expect based on your\ndescription and a read of the code I have just done to refresh my\nmind, but we may of course have some issues or potential\nimprovements.\n--\nMichael",
"msg_date": "Tue, 13 Aug 2019 11:19:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_replication lag fields return non-NULL values even with\n NULL LSNs"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 2:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Aug 13, 2019 at 11:15:42AM +1200, Thomas Munro wrote:\n> > One thing I noticed in passing is that you always get the same times\n> > in the write_lag and flush_lag columns, in --synchronous mode, and the\n> > times updates infrequently. That's not the case with regular\n> > replicas; I suspect there is a difference in the time and frequency of\n> > replies sent to the server, which I guess might make synchronous\n> > commit a bit \"lumpier\", but I didn't dig further today.\n>\n> The messages are sent by pg_receivewal via sendFeedback() in\n> receivelog.c. It gets triggered for the --synchronous case once a\n> flush is done (but you are not surprised by my reply here, right!),\n> and most likely the matches you are seeing some from the messages sent\n> at the beginning of HandleCopyStream() where the flush and write\n> LSNs are equal. This code behaves as I would expect based on your\n> description and a read of the code I have just done to refresh my\n> mind, but we may of course have some issues or potential\n> improvements.\n\nRight. For a replica server we call XLogWalRcvSendReply() after\nwriting, and then again inside XLogWalRcvFlush(). So the primary gets\nto measure write_lag and flush_lag separately. If pg_receivewal just\nsends one reply after flushing, then turning on --synchronous has the\neffect of showing the flush lag in both write_lag and flush_lag\ncolumns.\n\nOf course those things aren't quite as independent as they should be\nanyway, since the flush is blocking and therefore delays the next\nwrite. <mind-reading-mode>That's why Simon probably wants to move the\nflush to the WAL writer process, and Andres probably wants to change\nthe whole thing to use some kind of async IO[1].</mind-reading-mode>\n\n[1] https://lwn.net/Articles/789024/\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Aug 2019 15:04:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_replication lag fields return non-NULL values even with\n NULL LSNs"
}
] |
[
{
"msg_contents": "Hi\n\nWhile poking about with [1], I noticed a few potential issues with the\ninclusion handling for configuration files; another issue is demonstrated in [2].\n\n[1] https://www.postgresql.org/message-id/aed6cc9f-98f3-2693-ac81-52bb0052307e%402ndquadrant.com\n (\"Stop ALTER SYSTEM from making bad assumptions\")\n[2] https://www.postgresql.org/message-id/CY4PR1301MB22001D3FAAB3499C5D41DE23A9E50%40CY4PR1301MB2200.namprd13.prod.outlook.com\n (\"First Time Starting Up PostgreSQL and Having Problems\")\n\nSpecifically these are:\n\n1) Provision of empty include directives\n=========================================\n\nThe default postgresql.conf file includes these thusly:\n\n #include_dir = '' # include files ending in '.conf' from\n # a directory, e.g., 'conf.d'\n #include_if_exists = '' # include file only if it exists\n #include = '' # include file\n\nCurrently, uncommenting them but leaving the value empty (as happened in [2] above) can\nresult in unexpected behaviour.\n\nFor \"include\" and \"include_if_exists\", it's a not critical issue as non-existent\nfiles are, well, non-existent, however this will leave behind the cryptic\nmessage \"input in flex scanner failed\" in pg_file_settings's \"error\" column, e.g.:\n\n postgres=# SELECT sourceline, seqno, name, setting, applied, error\n FROM pg_file_settings\n WHERE error IS NOT NULL;\n sourceline | seqno | name | setting | applied | error\n ------------+-------+------+---------+---------+------------------------------\n 1 | 45 | | | f | input in flex scanner failed\n 1 | 46 | | | f | input in flex scanner failed\n (2 rows)\n\nHowever, an empty value for \"include_dir\" will result in the current configuration\nfile's directory being read, which can result in circular inclusion and triggering\nof the nesting depth check.\n\nPatch {1} makes provision of an empty value for any of these directives cause\nconfiguration file processing to report an approprate error, e.g.:\n\n postgres=# SELECT 
sourceline, seqno, name, setting, applied, error\n FROM pg_file_settings\n WHERE error IS NOT NULL;\n sourceline | seqno | name | setting | applied | error\n ------------+-------+------+---------+---------+---------------------------------------\n 757 | 45 | | | f | \"include\" must not be empty\n 758 | 46 | | | f | \"include_if_exists\" must not be empty\n 759 | 47 | | | f | \"include_dir\" must not be empty\n\n\n2) Circular inclusion of configuration files\n============================================\n\nCurrently there is a simple maximum nesting threshold (currently 10) which\nwill stop runaway circular inclusions. However, if triggered, it's not\nalways obvious what triggered it, and sometimes resource exhaustion\nmight kick in beforehand (as appeared to be the case in [2] above).\n\nPatch {2} attempts to handle this situation by keeping track of which\nfiles have already been included (based on their absolute, canonical\npath) and reporting an error if they were encountered again.\n\nOn server startup:\n\n\t2019-07-11 09:13:25.610 GMT [71140] LOG: configuration file \"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n\t2019-07-11 09:13:25.610 GMT [71140] FATAL: configuration file \"/var/lib/pgsql/data/postgresql.conf\" contains errors\n\nAfter sending SIGHUP:\n\n postgres=# SELECT sourceline, seqno, name, setting, applied, error FROM pg_file_settings WHERE error IS NOT NULL;\n sourceline | seqno | name | setting | applied | error\n ------------+-------+------+---------+---------+--------------------------------------------------------------------------------\n 757 | 45 | | | f | configuration file \"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n (1 row)\n\n3) \"include\" directives in postgresql.auto.conf and extension control files\n===========================================================================\n\nCurrently these are parsed and acted on, even though it makes no sense for further\nconfig files to be included in either 
case.\n\nWith \"postgresql.auto.conf\", if a file is successfully included, its contents\nwill then be written to \"postgresql.auto.conf\" and the include directive will be\nremoved, which seems like a recipe for confusion.\n\nThese are admittedly unlikely corner cases, but it's easy enough to stop this\nhappening on the offchance someone tries to use this to solve some problem in\ncompletely the wrong way.\n\nPatch {3} implements this (note that this patch depends on patch {2}).\n\nExtension example:\n\n\tpostgres=# CREATE EXTENSION repmgr;\n\tERROR: \"include\" not permitted in file \"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\" line 8\n\tpostgres=# CREATE EXTENSION repmgr;\n\tERROR: \"include_dir\" not permitted in file \"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\" line 9\n\tpostgres=# CREATE EXTENSION repmgr;\n\tERROR: \"include_if_exists\" not permitted in file \"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\" line 10\n\npg.auto.conf example:\n\n\tpostgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n\tERROR: could not parse contents of file \"postgresql.auto.conf\"\n postgres=# SELECT regexp_replace(sourcefile, '^/.+/','') AS sourcefile,\n seqno, name, setting, applied, error\n FROM pg_file_settings WHERE error IS NOT NULL;\n sourcefile | seqno | name | setting | applied | error\n ----------------------+-------+------+---------+---------+-------------------------\n postgresql.auto.conf | 45 | | | f | \"include\" not permitted\n (1 row)\n\nThe patch also has the side-effect that \"include\" directives are no longer\n(silently) removed from \"postgresql.auto.conf\"; as the only way they can get\ninto the file in the first place is by manually editing it, I feel it's\nreasonable for the user to be made aware that they're not valid and have to\nmanually remove them.\n\n\nPatches\n=======\n\nCode:\n\n{1} disallow-empty-include-directives.v1.patch\n{2} 
track-included-files.v1.patch\n{3} prevent-disallowed-includes.v1.patch\n\nTAP tests:\n{1} tap-test-configuration.v1.patch\n{2} tap-test-disallow-empty-include-directives.v1.patch\n{3} tap-test-track-included-files.v1.patch\n{4} tap-test-prevent-disallowed-includes.v1.patch\n\nPatches apply cleanly to REL_12_STABLE/HEAD, they could be modfied for\nall supported versions if required. I can consolidate the patches\nif preferred.\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 17 Jul 2019 12:29:43 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Make configuration file \"include\" directive handling more\n robust"
},
{
"msg_contents": "Hello.\n\nAt Wed, 17 Jul 2019 12:29:43 +0900, Ian Barwick <ian.barwick@2ndquadrant.com> wrote in <8c8bcbca-3bd9-dc6e-8986-04a5abdef142@2ndquadrant.com>\n> Hi\n> \n> While poking about with [1], I noticed a few potential issues with the\n> inclusion handling for configuration files; another issue is\n> demonstrated in [2].\n> \n> [1]\n> https://www.postgresql.org/message-id/aed6cc9f-98f3-2693-ac81-52bb0052307e%402ndquadrant.com\n> (\"Stop ALTER SYSTEM from making bad assumptions\")\n> [2]\n> https://www.postgresql.org/message-id/CY4PR1301MB22001D3FAAB3499C5D41DE23A9E50%40CY4PR1301MB2200.namprd13.prod.outlook.com\n> (\"First Time Starting Up PostgreSQL and Having Problems\")\n\nYeah! That's annoying..\n\n> Specifically these are:\n> \n> 1) Provision of empty include directives\n> =========================================\n> \n> The default postgresql.conf file includes these thusly:\n> \n> #include_dir = '' # include files ending in '.conf' from\n> # a directory, e.g., 'conf.d'\n> #include_if_exists = '' # include file only if it exists\n> #include = '' # include file\n> \n> Currently, uncommenting them but leaving the value empty (as happened\n> in [2] above) can\n> result in unexpected behaviour.\n> \n> For \"include\" and \"include_if_exists\", it's a not critical issue as\n> non-existent\n> files are, well, non-existent, however this will leave behind the\n> cryptic\n> message \"input in flex scanner failed\" in pg_file_settings's \"error\"\n> column, e.g.:\n> \n> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n> FROM pg_file_settings\n> WHERE error IS NOT NULL;\n> sourceline | seqno | name | setting | applied | error\n> ------------+-------+------+---------+---------+------------------------------\n> 1 | 45 | | | f | input in flex scanner failed\n> 1 | 46 | | | f | input in flex scanner failed\n> (2 rows)\n> \n> However, an empty value for \"include_dir\" will result in the current\n> configuration\n> file's directory being 
read, which can result in circular inclusion\n> and triggering\n> of the nesting depth check.\n> \n> Patch {1} makes provision of an empty value for any of these\n> directives cause\n> configuration file processing to report an appropriate error, e.g.:\n> \n> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n> FROM pg_file_settings\n> WHERE error IS NOT NULL;\n> sourceline | seqno | name | setting | applied | error\n> ------------+-------+------+---------+---------+---------------------------------------\n> 757 | 45 | | | f | \"include\" must not be empty\n> 758 | 46 | | | f | \"include_if_exists\" must not be empty\n> 759 | 47 | | | f | \"include_dir\" must not be empty\n\nPatch 1 looks somewhat superficial. All the problems\nreduce to creating an unexpected file path for\ninclusion. AbsoluteConfigLocation does the core work, and it can\nissue a generic error message covering all the cases like:\n\ninvalid parameter \"<param>\" at <calling_file>:<calling_lineno>\n\nwhich seems sufficient. (The function needs some additional\nparameters.)\n\n\n> 2) Circular inclusion of configuration files\n> ============================================\n> \n> Currently there is a simple maximum nesting threshold (currently 10)\n> which\n> will stop runaway circular inclusions. However, if triggered, it's not\n> always obvious what triggered it, and sometimes resource exhaustion\n> might kick in beforehand (as appeared to be the case in [2] above).\n> \n> Patch {2} attempts to handle this situation by keeping track of which\n> files have already been included (based on their absolute, canonical\n> path) and reporting an error if they were encountered again.\n\nThis seems to me to be overkill. The issue [2] is prevented by\nthe patch 1's amendment. 
(I don't think it's worth adding\nprotection from explicit inclusion of pg_hba.conf from\npostgresql.conf or itself or suchlike.)\n\n> On server startup:\n> \n> \t2019-07-11 09:13:25.610 GMT [71140] LOG: configuration file\n> \t\"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n> \t2019-07-11 09:13:25.610 GMT [71140] FATAL: configuration file\n> \t\"/var/lib/pgsql/data/postgresql.conf\" contains errors\n> \n> After sending SIGHUP:\n> \n> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n> FROM pg_file_settings WHERE error IS NOT NULL;\n> sourceline | seqno | name | setting | applied | error\n> ------------+-------+------+---------+---------+--------------------------------------------------------------------------------\n> 757 | 45 | | | f | configuration file\n> \"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n> (1 row)\n> \n> 3) \"include\" directives in postgresql.auto.conf and extension control\n> files\n> ===========================================================================\n> \n> Currently these are parsed and acted on, even though it makes no sense\n> for further\n> config files to be included in either case.\n\nAnyway, manual editing is explicitly prohibited for auto.conf. 
And,\neven if it is added, the 10-depth limitation would protect from\ninfinite loop.\n\n> With \"postgresql.auto.conf\", if a file is successfully included, its\n> contents\n> will then be written to \"postgresql.auto.conf\" and the include\n> directive will be\n> removed, which seems like a recipe for confusion.\n> \n> These are admittedly unlikely corner cases, but it's easy enough to\n> stop this\n> happening on the offchance someone tries to use this to solve some\n> problem in\n> completely the wrong way.\n> \n> Patch {3} implements this (note that this patch depends on patch {2}).\n> \n> Extension example:\n> \n> \tpostgres=# CREATE EXTENSION repmgr;\n> \tERROR: \"include\" not permitted in file\n> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n> \tline 8\n> \tpostgres=# CREATE EXTENSION repmgr;\n> \tERROR: \"include_dir\" not permitted in file\n> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n> \tline 9\n> \tpostgres=# CREATE EXTENSION repmgr;\n> \tERROR: \"include_if_exists\" not permitted in file\n> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n> \tline 10\n> \n> pg.auto.conf example:\n> \n> \tpostgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n> \tERROR: could not parse contents of file \"postgresql.auto.conf\"\n> postgres=# SELECT regexp_replace(sourcefile, '^/.+/','') AS\n> sourcefile,\n> seqno, name, setting, applied, error\n> FROM pg_file_settings WHERE error IS NOT NULL;\n> sourcefile | seqno | name | setting | applied | error\n> ----------------------+-------+------+---------+---------+-------------------------\n> postgresql.auto.conf | 45 | | | f | \"include\" not permitted\n> (1 row)\n> \n> The patch also has the side-effect that \"include\" directives are no\n> longer\n> (silently) removed from \"postgresql.auto.conf\"; as the only way they\n> can get\n> into the file in the first place is by manually editing it, I feel\n> it's\n> 
reasonable for the user to be made aware that they're not valid and\n> have to\n> manually remove them.\n> \n> \n> Patches\n> =======\n> \n> Code:\n> \n> {1} disallow-empty-include-directives.v1.patch\n> {2} track-included-files.v1.patch\n> {3} prevent-disallowed-includes.v1.patch\n> \n> TAP tests:\n> {1} tap-test-configuration.v1.patch\n> {2} tap-test-disallow-empty-include-directives.v1.patch\n> {3} tap-test-track-included-files.v1.patch\n> {4} tap-test-prevent-disallowed-includes.v1.patch\n> \n> Patches apply cleanly to REL_12_STABLE/HEAD, they could be modified for\n> all supported versions if required. I can consolidate the patches\n> if preferred.\n\nI don't think this is new to 12.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 17 Jul 2019 17:34:45 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Make configuration file \"include\" directive handling\n more robust"
},
{
"msg_contents": "On 7/17/19 5:34 PM, Kyotaro Horiguchi wrote:> Hello.\n >\n > At Wed, 17 Jul 2019 12:29:43 +0900, Ian Barwick <ian.barwick@2ndquadrant.com> wrote in <8c8bcbca-3bd9-dc6e-8986-04a5abdef142@2ndquadrant.com>\n >> Hi\n >>\n >> While poking about with [1], I noticed a few potential issues with the\n >> inclusion handling for configuration files; another issue is\n >> demonstrated in [2].\n >>\n >> [1]\n >> https://www.postgresql.org/message-id/aed6cc9f-98f3-2693-ac81-52bb0052307e%402ndquadrant.com\n >> (\"Stop ALTER SYSTEM from making bad assumptions\")\n >> [2]\n >> https://www.postgresql.org/message-id/CY4PR1301MB22001D3FAAB3499C5D41DE23A9E50%40CY4PR1301MB2200.namprd13.prod.outlook.com\n >> (\"First Time Starting Up PostgreSQL and Having Problems\")\n >\n > Yeah! That's annoying..\n >\n >> Specifically these are:\n >>\n >> 1) Provision of empty include directives\n >> =========================================\n >>\n >> The default postgresql.conf file includes these thusly:\n >>\n >> #include_dir = '' # include files ending in '.conf' from\n >> # a directory, e.g., 'conf.d'\n >> #include_if_exists = '' # include file only if it exists\n >> #include = '' # include file\n >>\n >> Currently, uncommenting them but leaving the value empty (as happened\n >> in [2] above) can\n >> result in unexpected behaviour.\n >>\n >> For \"include\" and \"include_if_exists\", it's a not critical issue as\n >> non-existent\n >> files are, well, non-existent, however this will leave behind the\n >> cryptic\n >> message \"input in flex scanner failed\" in pg_file_settings's \"error\"\n >> column, e.g.:\n >>\n >> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n >> FROM pg_file_settings\n >> WHERE error IS NOT NULL;\n >> sourceline | seqno | name | setting | applied | error\n >> ------------+-------+------+---------+---------+------------------------------\n >> 1 | 45 | | | f | input in flex scanner failed\n >> 1 | 46 | | | f | input in flex scanner 
failed\n >> (2 rows)\n >>\n >> However, an empty value for \"include_dir\" will result in the current\n >> configuration\n >> file's directory being read, which can result in circular inclusion\n >> and triggering\n >> of the nesting depth check.\n >>\n >> Patch {1} makes provision of an empty value for any of these\n >> directives cause\n >> configuration file processing to report an approprate error, e.g.:\n >>\n >> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n >> FROM pg_file_settings\n >> WHERE error IS NOT NULL;\n >> sourceline | seqno | name | setting | applied | error\n >> ------------+-------+------+---------+---------+---------------------------------------\n >> 757 | 45 | | | f | \"include\" must not be empty\n >> 758 | 46 | | | f | \"include_if_exists\" must not be empty\n >> 759 | 47 | | | f | \"include_dir\" must not be empty\n >\n > The patch 1 looks somewhat superficial. All the problems are\n > reduced to creating unexpected filepath for\n > inclusion. AbsoluteConfigLocation does the core work, and it can\n > issue generic error message covering all the cases like:\n >\n > invalid parameter \"<param>\" at <calling_file>:<calling_lineno>\n >\n > which seems sufficient. (The function needs some additional\n > parameters.)\n\nThat seems unnecessarily complex to me, as we'd be overloading a\nfunction with a single purpose (to manipulate a path) with some\nof the parsing logic/control.\n\n >\n >> 2) Circular inclusion of configuration files\n >> ============================================\n >>\n >> Currently there is a simple maximum nesting threshold (currently 10)\n >> which\n >> will stop runaway circular inclusions. 
However, if triggered, it's not\n >> always obvious what triggered it, and sometimes resource exhaustion\n >> might kick in beforehand (as appeared to be the case in [2] above).\n >>\n >> Patch {2} attempts to handle this situation by keeping track of which\n >> files have already been included (based on their absolute, canonical\n >> path) and reporting an error if they were encountered again.\n >\n > This seems to me to be overkill. The issue [2] is prevented by\n > the patch 1's amendment.\n\nYes, that particular issue is prevented, but this patch is intended to\nprovide better protection against explicit circular inclusions, e.g.\nif someone adds \"include 'postgresql.conf'\" at the end of \"postgresql.conf\"\n(or more realistically has a complex setup with multiple included\nconfiguration files where something gets mixed up).\n\nCurrently the nesting threshold stops it becoming a runaway\nproblem, but is not very user-friendly. E.g. with \"include 'postgresql.conf'\"\nadded to the end of \"postgresql.conf\", without patch on startup you get:\n\n LOG: could not open configuration file \"postgresql.conf\": maximum nesting depth exceeded\n FATAL: configuration file \"/path/to/postgresql.conf\" contains errors\n\n(cue panicking user with production server down: \"OMG the file can't be opened,\nis my filesystem corrupted?\" etc.)\n\nWith the patch:\n\n LOG: configuration file \"/path/to/postgresql.conf\" was previously parsed\n FATAL: configuration file \"/path/to/postgresql.conf\" contains errors\n\n(actually maybe we could add a bit more detail such as line number there).\n\n > (I don't think it's not worth donig to\n > add protection from explicit inclusion of pg_hba.conf from\n > postgresql.conf or itself or such like.)\n\nI thought about that, but came to the same conclusion.\n\n >> On server startup:\n >>\n >> \t2019-07-11 09:13:25.610 GMT [71140] LOG: configuration file\n >> \t\"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n >> \t2019-07-11 
09:13:25.610 GMT [71140] FATAL: configuration file\n >> \t\"/var/lib/pgsql/data/postgresql.conf\" contains errors\n >>\n >> After sending SIGHUP:\n >>\n >> postgres=# SELECT sourceline, seqno, name, setting, applied, error\n >> FROM pg_file_settings WHERE error IS NOT NULL;\n >> sourceline | seqno | name | setting | applied | error\n >> ------------+-------+------+---------+---------+--------------------------------------------------------------------------------\n >> 757 | 45 | | | f | configuration file\n >> \"/var/lib/pgsql/data/postgresql.conf\" was previously parsed\n >> (1 row)\n >>\n >> 3) \"include\" directives in postgresql.auto.conf and extension control\n >> files\n >> ===========================================================================\n >>\n >> Currently these are parsed and acted on, even though it makes no sense\n >> for further\n >> config files to be included in either case.\n >\n > Anyway manual edit is explicitly prohibited for auto.conf.\n\nIndeed, but there are many things we tell people not to do, such as\nremoving tablespace directories, but they still do them...\n\n > And even if it is added, the 10-depth limitation would protect from\n > infinite loop.\n\nIt's not just about protecting against infinite loops - if you do something\nlike \"include 'postgresql.conf'\", as-is the code will happily slurp in\nall the items from \"postgresql.conf\" into \"postgresql.auto.conf\", which\nis going to cause some headscratching if it ever happens.\n\nLike I said in the original mail these are extremely unlikely corner cases;\nbut if patch {2} is in place, it's trivial to prevent them ever becoming a\nproblem.\n\n >> With \"postgresql.auto.conf\", if a file is successfully included, its\n >> contents\n >> will then be written to \"postgresql.auto.conf\" and the include\n >> directive will be\n >> removed, which seems like a recipe for confusion.\n >>\n >> These are admittedly unlikely corner cases, but it's easy enough to\n >> stop this\n >> 
happening on the offchance someone tries to use this to solve some\n >> problem in\n >> completely the wrong way.\n >>\n >> Patch {3} implements this (note that this patch depends on patch {2}).\n >>\n >> Extension example:\n >>\n >> \tpostgres=# CREATE EXTENSION repmgr;\n >> \tERROR: \"include\" not permitted in file\n >> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n >> \tline 8\n >> \tpostgres=# CREATE EXTENSION repmgr;\n >> \tERROR: \"include_dir\" not permitted in file\n >> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n >> \tline 9\n >> \tpostgres=# CREATE EXTENSION repmgr;\n >> \tERROR: \"include_if_exists\" not permitted in file\n >> \t\"/home/barwick/devel/postgres/builds/HEAD/share/extension/repmgr.control\"\n >> \tline 10\n >>\n >> pg.auto.conf example:\n >>\n >> \tpostgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n >> \tERROR: could not parse contents of file \"postgresql.auto.conf\"\n >> postgres=# SELECT regexp_replace(sourcefile, '^/.+/','') AS\n >> sourcefile,\n >> seqno, name, setting, applied, error\n >> FROM pg_file_settings WHERE error IS NOT NULL;\n >> sourcefile | seqno | name | setting | applied | error\n >> ----------------------+-------+------+---------+---------+-------------------------\n >> postgresql.auto.conf | 45 | | | f | \"include\" not permitted\n >> (1 row)\n >>\n >> The patch also has the side-effect that \"include\" directives are no\n >> longer\n >> (silently) removed from \"postgresql.auto.conf\"; as the only way they\n >> can get\n >> into the file in the first place is by manually editing it, I feel\n >> it's\n >> reasonable for the user to be made aware that they're not valid and\n >> have to\n >> manually remove them.\n >>\n >>\n >> Patches\n >> =======\n >>\n >> Code:\n >>\n >> {1} disallow-empty-include-directives.v1.patch\n >> {2} track-included-files.v1.patch\n >> {3} prevent-disallowed-includes.v1.patch\n >>\n >> TAP tests:\n >> {1} 
tap-test-configuration.v1.patch\n >> {2} tap-test-disallow-empty-include-directives.v1.patch\n >> {3} tap-test-track-included-files.v1.patch\n >> {4} tap-test-prevent-disallowed-includes.v1.patch\n >>\n >> Patches apply cleanly to REL_12_STABLE/HEAD, they could be modfied for\n >> all supported versions if required. I can consolidate the patches\n >> if preferred.\n >\n > I don't think this is new to 12.\n\nNo, though I'm not sure how much this would be seen as a bugfix\nand how far back it would be sensible to patch.\n\nRegards\n\nIan Barwick\n\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 23:50:18 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Make configuration file \"include\" directive handling more\n robust"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> On 7/17/19 5:34 PM, Kyotaro Horiguchi wrote:> Hello.\n>>> I don't think this is new to 12.\n\n> No, though I'm not sure how much this would be seen as a bugfix\n> and how far back it would be sensible to patch.\n\nI think this is worth considering as a bugfix; although I'm afraid\nwe can't change the signature of ParseConfigFile/ParseConfigFp in\nreleased branches, since extensions could possibly be using those.\nThat limits what we can do --- but it's still possible to detect\ndirect recursion, which seems like enough to produce a nice error\nmessage in typical cases.\n\nI concur with Kyotaro-san that disallow-empty-include-directives.v1.patch\nseems a bit brute-force, but where I would put the checks is in\nParseConfigFile and ParseConfigDirectory.\n\nAlso, I don't agree with the goals of prevent-disallowed-includes.patch.\nI'm utterly not on board with breaking use of \"include\" in extension\nfiles, for instance; while that may not be documented, it works fine,\nand maybe somebody out there is relying on it. Likewise, while \"include\"\nin pg.auto.conf is not really considered supported, I don't see the\npoint of going out of our way to break the historical behavior.\n\nThat leads me to propose the attached simplified patch. While I haven't\nactually tried, I'm pretty sure this should back-patch without trouble.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 24 Aug 2019 15:39:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Make configuration file \"include\" directive handling more\n robust"
},
{
"msg_contents": "On 8/25/19 4:39 AM, Tom Lane wrote:\n> Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n>> On 7/17/19 5:34 PM, Kyotaro Horiguchi wrote:> Hello.\n>>>> I don't think this is new to 12.\n> \n>> No, though I'm not sure how much this would be seen as a bugfix\n>> and how far back it would be sensible to patch.\n> \n> I think this is worth considering as a bugfix; although I'm afraid\n> we can't change the signature of ParseConfigFile/ParseConfigFp in\n> released branches, since extensions could possibly be using those.\n> That limits what we can do --- but it's still possible to detect\n> direct recursion, which seems like enough to produce a nice error\n> message in typical cases.\n\nMakes sense.\n\n> I concur with Kyotaro-san that disallow-empty-include-directives.v1.patch\n> seems a bit brute-force, but where I would put the checks is in\n> ParseConfigFile and ParseConfigDirectory.\n> \n> Also, I don't agree with the goals of prevent-disallowed-includes.patch.\n> I'm utterly not on board with breaking use of \"include\" in extension\n> files, for instance; while that may not be documented, it works fine,\n> and maybe somebody out there is relying on it.\n\nI couldn't for the life of me think of any reason for using it.\nBut if there's undocumented functionality we think someone might\nbe using, shouldn't that be documented somewhere, if only as a note\nin the code to prevent its accidental removal at a later date?\n\n> Likewise, while \"include\"\n> in pg.auto.conf is not really considered supported, I don't see the\n> point of going out of our way to break the historical behavior.\n\nThe amusing thing about that of course is that the include directive\nwill disappear the next time ALTER SYSTEM is run and the values from\nthe included file will appear in pg.auto.conf, which may cause some\nheadscratching. But I guess hasn't been an actual real-world\nissue so far.\n\n> That leads me to propose the attached simplified patch. 
While I haven't\n> actually tried, I'm pretty sure this should back-patch without trouble.\n\nAh, I see it's been applied already, thanks!\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 28 Aug 2019 10:57:24 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Make configuration file \"include\" directive handling more\n robust"
}
] |
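The two safeguards discussed in the thread above (patch {1}'s rejection of empty include values, and patch {2}'s tracking of already-parsed files by canonical path, layered on the existing depth-10 limit) can be sketched as a toy model. This is an illustrative Python sketch, not PostgreSQL's actual C implementation; the function name, directive handling, and error strings are simplified assumptions.

```python
import os

MAX_NESTING_DEPTH = 10  # mirrors PostgreSQL's hard-coded include nesting limit


def parse_config(path, depth=0, seen=None):
    """Toy parser for 'include' directives in a postgresql.conf-style file.

    Models the safeguards discussed in the thread: the existing depth
    limit, rejection of empty include values (patch {1}), and tracking
    of canonical paths so a circular inclusion fails with an explicit
    "previously parsed" error instead of a confusing depth error
    (patch {2}).
    """
    if seen is None:
        seen = set()
    canonical = os.path.realpath(path)
    if canonical in seen:
        raise ValueError('configuration file "%s" was previously parsed' % canonical)
    if depth >= MAX_NESTING_DEPTH:
        raise ValueError('could not open configuration file "%s": '
                         'maximum nesting depth exceeded' % path)
    seen.add(canonical)
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip blanks and comments
            if line.startswith('include '):
                target = line.partition(' ')[2].strip().strip("'")
                if not target:
                    raise ValueError('"include" must not be empty')
                # relative includes resolve against the including file's directory
                target = os.path.join(os.path.dirname(path), target)
                settings.update(parse_config(target, depth + 1, seen))
            else:
                name, _, value = line.partition('=')
                settings[name.strip()] = value.strip()
    return settings
```

With a pair of files that include each other, this reports the offending file by name on the second visit rather than recursing until the depth limit trips.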
[
{
"msg_contents": "Hi,\n\nOne observation when we execute a select query having results more than the\nscreen space available and press ctrl+f to display the remaining records,\none of the record was not displayed and the message \"...skipping one line\"\nwas displayed.\n\nI'm not sure if this is intentional behaviour.\n\nSteps for the same:\npostgres=# create table psqltest as select generate_series(1,50);\nSELECT 50\npostgres=# select * from psqltest;\n generate_series\n-----------------\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n 11\n 12\n 13\n 14\n 15\n 16\n 17\n 18\n 19\n 20\n 21\n 22\n 23\n 24\n 25\n 26\n 27\n 28\n 29\n 30\n 31\n 32\n 33\n 34\n 35\n 36\n 37\n 38\n 39\n 40\n 41\n 42\n 43\n 44\n 45\n\n*...skipping one line*\n 47\n 48\n 49\n 50\n(50 rows)\n\nIs this intended?\n\n-- \nRegards,\nvignesh\nEnterpriseDB: http://www.enterprisedb.com\n\nHi,One observation when we execute a select query having results more than the screen space available and press ctrl+f to display the remaining records, one of the record was not displayed and the message \"...skipping one line\" was displayed.I'm not sure if this is intentional behaviour.Steps for the same:postgres=# create table psqltest as select generate_series(1,50);SELECT 50postgres=# select * from psqltest; generate_series ----------------- 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45...skipping one line 47 48 49 50(50 rows)Is this intended?-- Regards,vigneshEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 17 Jul 2019 09:37:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql ctrl+f skips displaying of one record and displays skipping one\n line"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 4:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> One observation when we execute a select query having results more than the screen space available and press ctrl+f to display the remaining records, one of the record was not displayed and the message \"...skipping one line\" was displayed.\n>\n> I'm not sure if this is intentional behaviour.\n\nPretty sure this is coming from your system's pager. You can see the\nsame thing when you run this on a RHEL box:\n\nseq 1 10000 | more\n\nIt skips a line each time you press ^F.\n\nDoesn't happen on FreeBSD or macOS though.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:18:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql ctrl+f skips displaying of one record and displays skipping\n one line"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Pretty sure this is coming from your system's pager. You can see the\n> same thing when you run this on a RHEL box:\n> seq 1 10000 | more\n> It skips a line each time you press ^F.\n\nYeah, duplicated on RHEL6. It seems to behave the same as the documented\n\"s\" command. Not sure why it's not listed in the man page --- though\nthere's a disclaimer saying that the man page was basically\nreverse-engineered, so maybe they just missed this synonym.\n\n> Doesn't happen on FreeBSD or macOS though.\n\nmacOS's \"more\" is actually \"less\", so it's not surprising it's not\nbug-compatible. Can't say about FreeBSD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 00:35:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql ctrl+f skips displaying of one record and displays skipping\n one line"
},
{
"msg_contents": "I'm able to get the same behaviour in centos as well.\nShould we do anything to handle this in Postgres or any documentation\nrequired?\n\nOn Wed, Jul 17, 2019 at 10:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Pretty sure this is coming from your system's pager. You can see the\n> > same thing when you run this on a RHEL box:\n> > seq 1 10000 | more\n> > It skips a line each time you press ^F.\n>\n> Yeah, duplicated on RHEL6. It seems to behave the same as the documented\n> \"s\" command. Not sure why it's not listed in the man page --- though\n> there's a disclaimer saying that the man page was basically\n> reverse-engineered, so maybe they just missed this synonym.\n>\n> > Doesn't happen on FreeBSD or macOS though.\n>\n> macOS's \"more\" is actually \"less\", so it's not surprising it's not\n> bug-compatible. Can't say about FreeBSD.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nvignesh\n Have a nice day\n\nI'm able to get the same behaviour in centos as well.Should we do anything to handle this in Postgres or any documentation required?On Wed, Jul 17, 2019 at 10:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Thomas Munro <thomas.munro@gmail.com> writes:\n> Pretty sure this is coming from your system's pager. You can see the\n> same thing when you run this on a RHEL box:\n> seq 1 10000 | more\n> It skips a line each time you press ^F.\n\nYeah, duplicated on RHEL6. It seems to behave the same as the documented\n\"s\" command. Not sure why it's not listed in the man page --- though\nthere's a disclaimer saying that the man page was basically\nreverse-engineered, so maybe they just missed this synonym.\n\n> Doesn't happen on FreeBSD or macOS though.\n\nmacOS's \"more\" is actually \"less\", so it's not surprising it's not\nbug-compatible. Can't say about FreeBSD.\n\n regards, tom lane\n-- Regards,vignesh Have a nice day",
"msg_date": "Wed, 17 Jul 2019 10:09:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql ctrl+f skips displaying of one record and displays skipping\n one line"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> I'm able to get the same behaviour in centos as well.\n> Should we do anything to handle this in Postgres or any documentation\n> required?\n\nIt already is documented:\n\n PSQL_PAGER\n PAGER\n\n If a query's results do not fit on the screen, they are piped\n through this command. Typical values are more or less. Use of the\n pager can be disabled by setting PSQL_PAGER or PAGER to an empty\n string, or by adjusting the pager-related options of the \\pset\n command. These variables are examined in the order listed; the\n first that is set is used. If none of them is set, the default is\n to use more on most platforms, but less on Cygwin.\n\nWe're certainly not going to copy four or five different versions\nof the \"more\" and \"less\" man pages into psql's man page, if that's\nwhat you're suggesting. Nor is it our job to point out shortcomings\nin some versions of those man pages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 00:47:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql ctrl+f skips displaying of one record and displays skipping\n one line"
},
{
"msg_contents": "Thanks Tom.\nThat sounds good to me.\n\nOn Wed, Jul 17, 2019 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> vignesh C <vignesh21@gmail.com> writes:\n> > I'm able to get the same behaviour in centos as well.\n> > Should we do anything to handle this in Postgres or any documentation\n> > required?\n>\n> It already is documented:\n>\n> PSQL_PAGER\n> PAGER\n>\n> If a query's results do not fit on the screen, they are piped\n> through this command. Typical values are more or less. Use of the\n> pager can be disabled by setting PSQL_PAGER or PAGER to an empty\n> string, or by adjusting the pager-related options of the \\pset\n> command. These variables are examined in the order listed; the\n> first that is set is used. If none of them is set, the default is\n> to use more on most platforms, but less on Cygwin.\n>\n> We're certainly not going to copy four or five different versions\n> of the \"more\" and \"less\" man pages into psql's man page, if that's\n> what you're suggesting. Nor is it our job to point out shortcomings\n> in some versions of those man pages.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nvignesh\n Have a nice day\n\nThanks Tom.That sounds good to me.On Wed, Jul 17, 2019 at 10:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:vignesh C <vignesh21@gmail.com> writes:\n> I'm able to get the same behaviour in centos as well.\n> Should we do anything to handle this in Postgres or any documentation\n> required?\n\nIt already is documented:\n\n PSQL_PAGER\n PAGER\n\n If a query's results do not fit on the screen, they are piped\n through this command. Typical values are more or less. Use of the\n pager can be disabled by setting PSQL_PAGER or PAGER to an empty\n string, or by adjusting the pager-related options of the \\pset\n command. These variables are examined in the order listed; the\n first that is set is used. 
If none of them is set, the default is\n to use more on most platforms, but less on Cygwin.\n\nWe're certainly not going to copy four or five different versions\nof the \"more\" and \"less\" man pages into psql's man page, if that's\nwhat you're suggesting. Nor is it our job to point out shortcomings\nin some versions of those man pages.\n\n regards, tom lane\n-- Regards,vignesh Have a nice day",
"msg_date": "Wed, 17 Jul 2019 10:54:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql ctrl+f skips displaying of one record and displays skipping\n one line"
}
] |
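The pager precedence quoted from the psql documentation in the thread above (PSQL_PAGER examined before PAGER, the first one set wins, and an empty string disables the pager) can be sketched as a small model. This is an illustrative Python sketch of the documented behaviour, not psql's actual source; the function name is an assumption.

```python
def choose_pager(env, default='more'):
    """Return the pager command psql would use, or None if paging is disabled.

    Follows the documented precedence: PSQL_PAGER is examined first,
    then PAGER; the first variable that is set wins, and setting it to
    an empty string disables the pager. The default is 'more' on most
    platforms (the docs note 'less' on Cygwin).
    """
    for var in ('PSQL_PAGER', 'PAGER'):
        if var in env:
            return env[var] or None  # empty string disables the pager
    return default
```

So `PSQL_PAGER=less` overrides any `PAGER` setting, and exporting either variable as an empty string avoids the line-skipping `more` behaviour entirely.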
[
{
"msg_contents": "Hello,\n\nAttached is a patch that adds the option of using SET clause to specify\nthe columns and values in an INSERT statement in the same manner as that\nof an UPDATE statement.\n\nA simple example that uses SET instead of a VALUES() clause:\n\nINSERT INTO t SET c1 = 'foo', c2 = 'bar', c3 = 'baz';\n\nValues may also be sourced from a CTE using a FROM clause:\n\nWITH x AS (\n SELECT 'foo' AS c1, 'bar' AS c2, 'baz' AS c3\n)\nINSERT INTO t SET c1 = x.c1, c2 = x.c2, c3 = x.c3 FROM x;\n\nThe advantage of using the SET clause style is that the column and value\nare kept together, which can make changing or removing a column or value from\na large list easier.\n\nInternally the grammar parser converts INSERT SET without a FROM clause into\nthe equivalent INSERT with a VALUES clause. When using a FROM clause it becomes\nthe equivalent of INSERT with a SELECT statement.\n\nThere was a brief discussion regarding INSERT SET on pgsql-hackers in late\nAugust 2009 [1].\n\nINSERT SET is not part of any SQL standard (that I am aware of), however this\nsyntax is also implemented by MySQL [2]. Their implementation does not support\nspecifying a FROM clause.\n\nPatch also contains regression tests and documentation.\n\n\nRegards,\nGareth\n\n\n[1] https://www.postgresql.org/message-id/flat/2c5ef4e30908251010s46d9d566m1da21357891bab3d%40mail.gmail.com\n[2] https://dev.mysql.com/doc/refman/8.0/en/insert.html",
"msg_date": "Wed, 17 Jul 2019 16:30:02 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "[PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 7:30 AM Gareth Palmer <gareth@internetnz.net.nz>\nwrote:\n\n> Attached is a patch that adds the option of using SET clause to specify\n> the columns and values in an INSERT statement in the same manner as that\n> of an UPDATE statement.\n>\n\nCool! Thanks for working on this, I'd love to see the syntax in PG.\n\nThere was a brief discussion regarding INSERT SET on pgsql-hackers in late\n> August 2009 [1].\n>\n\nThere was also at least one slightly more recent adventure:\nhttps://www.postgresql.org/message-id/709e06c0-59c9-ccec-d216-21e38cb5ed61%40joh.to\n\nYou might want to check that thread too, in case any of the criticism there\napplies to this patch as well.\n\n\n.m\n\nOn Wed, Jul 17, 2019 at 7:30 AM Gareth Palmer <gareth@internetnz.net.nz> wrote:\nAttached is a patch that adds the option of using SET clause to specify\nthe columns and values in an INSERT statement in the same manner as that\nof an UPDATE statement.Cool! Thanks for working on this, I'd love to see the syntax in PG.\nThere was a brief discussion regarding INSERT SET on pgsql-hackers in late\nAugust 2009 [1].There was also at least one slightly more recent adventure: https://www.postgresql.org/message-id/709e06c0-59c9-ccec-d216-21e38cb5ed61%40joh.toYou might want to check that thread too, in case any of the criticism there applies to this patch as well..m",
"msg_date": "Wed, 17 Jul 2019 08:52:06 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Marko,\n\n> On 17/07/2019, at 5:52 PM, Marko Tiikkaja <marko@joh.to> wrote:\n> \n> On Wed, Jul 17, 2019 at 7:30 AM Gareth Palmer <gareth@internetnz.net.nz> wrote:\n> Attached is a patch that adds the option of using SET clause to specify\n> the columns and values in an INSERT statement in the same manner as that\n> of an UPDATE statement.\n> \n> Cool! Thanks for working on this, I'd love to see the syntax in PG.\n> \n> There was a brief discussion regarding INSERT SET on pgsql-hackers in late\n> August 2009 [1].\n> \n> There was also at least one slightly more recent adventure: https://www.postgresql.org/message-id/709e06c0-59c9-ccec-d216-21e38cb5ed61%40joh.to\n> \n> You might want to check that thread too, in case any of the criticism there applies to this patch as well.\n\nThank-you for the pointer to that thread.\n\nI think my version avoids the issue raised there by doing the conversion of the SET clause as part of the INSERT grammar rules.\n\nGareth\n\n",
"msg_date": "Thu, 18 Jul 2019 11:30:04 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hello.\n\nAt Thu, 18 Jul 2019 11:30:04 +1200, Gareth Palmer <gareth@internetnz.net.nz> wrote in <D50A93EB-11F3-4ED2-8192-0328DF901BBA@internetnz.net.nz>\n> Hi Marko,\n> \n> > On 17/07/2019, at 5:52 PM, Marko Tiikkaja <marko@joh.to> wrote:\n> > \n> > On Wed, Jul 17, 2019 at 7:30 AM Gareth Palmer <gareth@internetnz.net.nz> wrote:\n> > Attached is a patch that adds the option of using SET clause to specify\n> > the columns and values in an INSERT statement in the same manner as that\n> > of an UPDATE statement.\n> > \n> > Cool! Thanks for working on this, I'd love to see the syntax in PG.\n> > \n> > There was a brief discussion regarding INSERT SET on pgsql-hackers in late\n> > August 2009 [1].\n> > \n> > There was also at least one slightly more recent adventure: https://www.postgresql.org/message-id/709e06c0-59c9-ccec-d216-21e38cb5ed61%40joh.to\n> > \n> > You might want to check that thread too, in case any of the criticism there applies to this patch as well.\n> \n> Thank-you for the pointer to that thread.\n> \n> I think my version avoids issue raised there by doing the conversion of the SET clause as part of the INSERT grammar rules.\n\nIf I'm not missing something, \"SELECT <targetlist>\" without\nhaving FROM clause doesn't need to be tweaked. Thus\ninsert_set_clause is useless and all we need here would be\nsomething like the following. (and the same for OVERRIDING.)\n\n+ | SET set_clause_list from_clause\n+ {\n+ SelectStmt *n = makeNode(SelectStmt);\n+ n->targetList = $2;\n+ n->fromClause = $3;\n+ $$ = makeNode(InsertStmt);\n+ $$->selectStmt = (Node *)n;\n+ $$->cols = $2;\n+ }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 18 Jul 2019 15:54:10 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Kyotaro,\n\nThank-you for looking at the patch.\n\n> On 18/07/2019, at 6:54 PM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> Hello.\n> \n> If I'm not missing something, \"SELECT <targetlist>\" without\n> having FROM clause doesn't need to be tweaked. Thus\n> insert_set_clause is useless and all we need here would be\n> something like the following. (and the same for OVERRIDING.)\n> \n> + | SET set_clause_list from_clause\n> + {\n> + SelectStmt *n = makeNode(SelectStmt);\n> + n->targetList = $2;\n> + n->fromClause = $3;\n> + $$ = makeNode(InsertStmt);\n> + $$->selectStmt = (Node *)n;\n> + $$->cols = $2;\n> + }\n\nWhile that would mostly work, it would prevent setting the column to its\ndefault value using the DEFAULT keyword.\n\nOnly expressions specified in valuesLists allow DEFAULT to be used. Those\nin targetList do not because transformInsertStmt() treats that as a general\nSELECT statement and the grammar does not allow the use of DEFAULT there.\n\nSo this would generate a \"DEFAULT is not allowed in this context\" error\nif only targetList was used:\n\nINSERT INTO t set c1 = DEFAULT;\n\n\nRegards,\nGareth\n\n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 14:38:46 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Patch conflicts with this assertion:\nAssert(pstate->p_expr_kind == EXPR_KIND_UPDATE_SOURCE);\n\nsrc/backend/parser/parse_expr.c line 1570\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 15 Aug 2019 19:14:48 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Ibrar,\n\n> On 16/08/2019, at 7:14 AM, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> \n> Patch conflict with this assertion \n> Assert(pstate->p_expr_kind == EXPR_KIND_UPDATE_SOURCE); \n> \n> src/backend/parser/parse_expr.c line 1570\n> \n> The new status of this patch is: Waiting on Author\n\nThank-you for reviewing the patch.\n\nAttached is version 2 of the patch that fixes the above by allowing\np_expr_kind to be EXPR_KIND_VALUES_SINGLE as well.\n\n\nGareth",
"msg_date": "Fri, 16 Aug 2019 13:21:01 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 10:00 AM Gareth Palmer <gareth@internetnz.net.nz> wrote:\n>\n> Hello,\n>\n> Attached is a patch that adds the option of using SET clause to specify\n> the columns and values in an INSERT statement in the same manner as that\n> of an UPDATE statement.\n>\n> A simple example that uses SET instead of a VALUES() clause:\n>\n> INSERT INTO t SET c1 = 'foo', c2 = 'bar', c3 = 'baz';\n>\n> Values may also be sourced from a CTE using a FROM clause:\n>\n> WITH x AS (\n> SELECT 'foo' AS c1, 'bar' AS c2, 'baz' AS c3\n> )\n> INSERT INTO t SET c1 = x.c1, c2 = x.c2, c3 = x.c3 FROM x;\n>\n> The advantage of using the SET clause style is that the column and value\n> are kept together, which can make changing or removing a column or value from\n> a large list easier.\n>\n> Internally the grammar parser converts INSERT SET without a FROM clause into\n> the equivalent INSERT with a VALUES clause. When using a FROM clause it becomes\n> the equivalent of INSERT with a SELECT statement.\n>\n> There was a brief discussion regarding INSERT SET on pgsql-hackers in late\n> August 2009 [1].\n>\n> INSERT SET is not part of any SQL standard (that I am aware of), however this\n> syntax is also implemented by MySQL [2]. Their implementation does not support\n> specifying a FROM clause.\n>\n\nI think this can be a handy feature in some cases as pointed by you,\nbut do we really want it for PostgreSQL? In the last round of\ndiscussions as pointed by you, there doesn't seem to be a consensus\nthat we want this feature. I guess before spending too much time into\nreviewing this feature, we should first build a consensus on whether\nwe need this.\n\nAlong with users, I request some senior hackers/committers to also\nweigh in about the desirability of this feature.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 16 Aug 2019 08:49:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Fri, Aug 16, 2019 at 8:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jul 17, 2019 at 10:00 AM Gareth Palmer <gareth@internetnz.net.nz>\n> wrote:\n> >\n> > Hello,\n> >\n> > Attached is a patch that adds the option of using SET clause to specify\n> > the columns and values in an INSERT statement in the same manner as that\n> > of an UPDATE statement.\n> >\n> > A simple example that uses SET instead of a VALUES() clause:\n> >\n> > INSERT INTO t SET c1 = 'foo', c2 = 'bar', c3 = 'baz';\n> >\n> > Values may also be sourced from a CTE using a FROM clause:\n> >\n> > WITH x AS (\n> > SELECT 'foo' AS c1, 'bar' AS c2, 'baz' AS c3\n> > )\n> > INSERT INTO t SET c1 = x.c1, c2 = x.c2, c3 = x.c3 FROM x;\n> >\n> > The advantage of using the SET clause style is that the column and value\n> > are kept together, which can make changing or removing a column or value\n> from\n> > a large list easier.\n> >\n> > Internally the grammar parser converts INSERT SET without a FROM clause\n> into\n> > the equivalent INSERT with a VALUES clause. When using a FROM clause it\n> becomes\n> > the equivalent of INSERT with a SELECT statement.\n> >\n> > There was a brief discussion regarding INSERT SET on pgsql-hackers in\n> late\n> > August 2009 [1].\n> >\n> > INSERT SET is not part of any SQL standard (that I am aware of), however\n> this\n> > syntax is also implemented by MySQL [2]. Their implementation does not\n> support\n> > specifying a FROM clause.\n> >\n>\n> I think this can be a handy feature in some cases as pointed by you,\n> but do we really want it for PostgreSQL? In the last round of\n> discussions as pointed by you, there doesn't seem to be a consensus\n> that we want this feature. I guess before spending too much time into\n> reviewing this feature, we should first build a consensus on whether\n> we need this.\n>\n>\nI agree with you Amit, that we need a consensus on that. Do we really need\nthat\nfeature or not. 
In the previous discussion, there was no resistance to have\nthat\nin PostgreSQL, but some problem with the patch. Current patch is very simple\nand not invasive, but still, we need a consensus on that.\n\nAlong with users, I request some senior hackers/committers to also\n> weigh in about the desirability of this feature.\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 16 Aug 2019 18:03:47 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On 2019-08-16 05:19, Amit Kapila wrote:\n> I think this can be a handy feature in some cases as pointed by you,\n> but do we really want it for PostgreSQL? In the last round of\n> discussions as pointed by you, there doesn't seem to be a consensus\n> that we want this feature. I guess before spending too much time into\n> reviewing this feature, we should first build a consensus on whether\n> we need this.\n\nI think the problem this is attempting to solve is valid.\n\nWhat I don't like about the syntax is that it kind of breaks the\nnotional processing model of INSERT in a fundamental way. The model is\n\nINSERT INTO $target $table_source\n\nwhere $table_source could be VALUES, SELECT, possibly others in theory.\n\nThe proposed syntax changes this to only allow a single row to be\nspecified via the SET syntax, and the SET syntax does not function as a\nrow or table source in other contexts.\n\nLet's think about how we can achieve this using existing concepts in\nSQL. What we really need here at a fundamental level is an option to\nmatch $target to $table_source by column *name* rather than column\n*position*. There is existing syntax in SQL for that, namely\n\n a UNION b\n\nvs\n\n a UNION CORRESPONDING b\n\nI think this could be used for INSERT as well.\n\nAnd then you need a syntax to assign column names inside the VALUES\nrows. I think you could do either of the following:\n\n VALUES (a => 1, b => 2)\n\nor\n\n VALUES (1 AS a, 2 AS b)\n\nAnother nice effect of this would be that you could do something like\n\n INSERT INTO tbl2 CORRESPONDING SELECT * FROM tbl1;\n\nwhich copies the contents of tbl1 to tbl2 if they have the same column\nnames but allowing for a different column order.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 18 Aug 2019 11:03:11 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On 18/08/2019 11:03, Peter Eisentraut wrote:\n>\n> a UNION b\n>\n> vs\n>\n> a UNION CORRESPONDING b\n\n\nI have a WIP patch for CORRESPONDING [BY]. Is there any interest in me\ncontinuing it? If so, I'll start another thread for it.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Sun, 18 Aug 2019 14:28:16 +0200",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 18/08/2019 11:03, Peter Eisentraut wrote:\n>> a UNION b\n>> vs\n>> a UNION CORRESPONDING b\n\n> I have a WIP patch for CORRESPONDING [BY]. Is there any interest in me\n> continuing it? If so, I'll start another thread for it.\n\nCORRESPONDING is in the SQL standard, so in theory we ought to provide\nit. I think the hard question is how big/complicated the patch would be\n--- if the answer is \"complicated\", maybe it's not worth it. People\nhave submitted patches for it before that didn't go anywhere, suggesting\nthat the tradeoffs are not very good ... but maybe you'll think of a\nbetter way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Aug 2019 10:35:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> What I don't like about the syntax is that it kind of breaks the\n> notional processing model of INSERT in a fundamental way.\n\nAgreed. I really don't like that this only works for a VALUES-like case\n(and only the one-row form at that). It's hard to see it as anything\nbut a wart pasted onto the syntax.\n\n> Let's think about how we can achieve this using existing concepts in\n> SQL. What we really need here at a fundamental level is an option to\n> match $target to $table_source by column *name* rather than column\n> *position*. There is existing syntax in SQL for that, namely\n> a UNION b\n> vs\n> a UNION CORRESPONDING b\n\nA potential issue here --- and something that applies to Vik's question\nas well, now that I think about it --- is that CORRESPONDING breaks down\nin the face of ALTER TABLE RENAME COLUMN. Something that had been a\nlegal query before the rename might be invalid, or mean something quite\ndifferent, afterwards. This is really nasty for stored views/rules,\nbecause we have neither a mechanism for forbidding input-table renames\nnor a mechanism for revalidating views/rules afterwards. Maybe we could\nmake it go by resolving CORRESPONDING in the rewriter or planner, rather\nthan in parse analysis; but that seems quite unpleasant as well.\nChanging our conclusions about the data types coming out of a UNION\nreally shouldn't happen later than parse analysis.\n\nThe SET-style syntax doesn't have that problem, since it's explicit\nabout which values go into which columns.\n\nPerhaps the way to resolve Peter's objection is to make the syntax\nmore fully like UPDATE:\n\nINSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n\n(with the patch as-submitted corresponding to the case with an empty\nFROM clause, hence no variables in the expressions-to-be-assigned).\n\nOf course, this is not functionally distinct from\n\nINSERT INTO target(c1,c2,...) 
SELECT x, y+z, ... FROM tables-providing-x-y-z\n\nand it's fair to question whether it's worth supporting a nonstandard\nsyntax just to allow the target column names to be written closer to\nthe expressions-to-be-assigned.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Aug 2019 11:00:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Tom,\n\n> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> What I don't like about the syntax is that it kind of breaks the\n>> notional processing model of INSERT in a fundamental way.\n> \n> Agreed. I really don't like that this only works for a VALUES-like case\n> (and only the one-row form at that). It's hard to see it as anything\n> but a wart pasted onto the syntax.\n> \n>> Let's think about how we can achieve this using existing concepts in\n>> SQL. What we really need here at a fundamental level is an option to\n>> match $target to $table_source by column *name* rather than column\n>> *position*. There is existing syntax in SQL for that, namely\n>> a UNION b\n>> vs\n>> a UNION CORRESPONDING b\n> \n> A potential issue here --- and something that applies to Vik's question\n> as well, now that I think about it --- is that CORRESPONDING breaks down\n> in the face of ALTER TABLE RENAME COLUMN. Something that had been a\n> legal query before the rename might be invalid, or mean something quite\n> different, afterwards. This is really nasty for stored views/rules,\n> because we have neither a mechanism for forbidding input-table renames\n> nor a mechanism for revalidating views/rules afterwards. Maybe we could\n> make it go by resolving CORRESPONDING in the rewriter or planner, rather\n> than in parse analysis; but that seems quite unpleasant as well.\n> Changing our conclusions about the data types coming out of a UNION\n> really shouldn't happen later than parse analysis.\n> \n> The SET-style syntax doesn't have that problem, since it's explicit\n> about which values go into which columns.\n> \n> Perhaps the way to resolve Peter's objection is to make the syntax\n> more fully like UPDATE:\n> \n> INSERT INTO target SET c1 = x, c2 = y+z, ... 
FROM tables-providing-x-y-z\n> \n> (with the patch as-submitted corresponding to the case with an empty\n> FROM clause, hence no variables in the expressions-to-be-assigned).\n> \n> Of course, this is not functionally distinct from\n> \n> INSERT INTO target(c1,c2,...) SELECT x, y+z, ... FROM tables-providing-x-y-z\n> \n> and it's fair to question whether it's worth supporting a nonstandard\n> syntax just to allow the target column names to be written closer to\n> the expressions-to-be-assigned.\n\nThanks for the feedback. Attached is version 3 of the patch that makes\nthe syntax work more like an UPDATE statement when a FROM clause is used.\n\nSo, an updated summary of the new syntax is:\n\n1. Equivalent to VALUES(...):\n\n INSERT INTO t SET c1 = x, c2 = y, c3 = z;\n\n2. Equivalent to INSERT INTO ... SELECT ...:\n\n INSERT INTO t SET c1 = sum(x.c1) FROM x WHERE x.c1 < y AND x.c2 != z\n GROUP BY x.c3 ORDER BY x.c4 ASC LIMIT a OFFSET b;\n\n\nGareth\n\n> \t\t\tregards, tom lane",
"msg_date": "Mon, 26 Aug 2019 16:14:11 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nPatch looks good to me and works on my machine (73025140885c889410b9bfc4a30a3866396fc5db - HEAD). I have not reviewed the documentation changes.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Thu, 31 Oct 2019 19:32:47 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Sun, Aug 18, 2019 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps the way to resolve Peter's objection is to make the syntax\n> more fully like UPDATE:\n>\n> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n>\n> (with the patch as-submitted corresponding to the case with an empty\n> FROM clause, hence no variables in the expressions-to-be-assigned).\n>\n> Of course, this is not functionally distinct from\n>\n> INSERT INTO target(c1,c2,...) SELECT x, y+z, ... FROM tables-providing-x-y-z\n>\n> and it's fair to question whether it's worth supporting a nonstandard\n> syntax just to allow the target column names to be written closer to\n> the expressions-to-be-assigned.\n\nFor what it's worth, I think this would be useful enough to justify\nits existence. Back in days of yore when dragons roamed the earth and\nI wrote database-driven applications instead of hacking on the\ndatabase itself, I often wondered why I had to write two\ncompletely-different looking SQL statements, one to insert the data\nwhich a user had entered into a webform into the database, and another\nto update previously-entered data. This feature would allow those\nqueries to be written in the same way, which would have pleased me,\nback in the day.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Nov 2019 12:30:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Fri, Nov 1, 2019 at 6:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Aug 18, 2019 at 11:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Perhaps the way to resolve Peter's objection is to make the syntax\n> > more fully like UPDATE:\n> >\n> > INSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n> >\n> > (with the patch as-submitted corresponding to the case with an empty\n> > FROM clause, hence no variables in the expressions-to-be-assigned).\n> >\n> > Of course, this is not functionally distinct from\n> >\n> > INSERT INTO target(c1,c2,...) SELECT x, y+z, ... FROM\n> tables-providing-x-y-z\n> >\n> > and it's fair to question whether it's worth supporting a nonstandard\n> > syntax just to allow the target column names to be written closer to\n> > the expressions-to-be-assigned.\n>\n> For what it's worth, I think this would be useful enough to justify\n> its existence. Back in days of yore when dragons roamed the earth and\n> I wrote database-driven applications instead of hacking on the\n> database itself, I often wondered why I had to write two\n> completely-different looking SQL statements, one to insert the data\n> which a user had entered into a webform into the database, and another\n> to update previously-entered data. This feature would allow those\n> queries to be written in the same way, which would have pleased me,\n> back in the day.\n>\n\nI still do, and this would be a big help. I don't care if it's\nnon-standard.\n\n\n.m",
"msg_date": "Fri, 1 Nov 2019 20:00:47 +0200",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Gareth Palmer <gareth@internetnz.net.nz> writes:\n>> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps the way to resolve Peter's objection is to make the syntax\n>> more fully like UPDATE:\n>> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n>> (with the patch as-submitted corresponding to the case with an empty\n>> FROM clause, hence no variables in the expressions-to-be-assigned).\n\n> Thanks for the feedback. Attached is version 3 of the patch that makes\n> the syntax work more like an UPDATE statement when a FROM clause is used.\n\nSince nobody has objected to this, I'm supposing that there's general\nconsensus that that design sketch is OK, and we can move on to critiquing\nimplementation details. I took a look, and didn't like much of what I saw.\n\n* In the grammar, there's no real need to have separate productions\nfor the cases with FROM and without. The way you have it is awkward,\nand it arbitrarily rejects combinations that work fine in plain\nSELECT, such as WHERE without FROM. You should just do\n\ninsert_set_clause:\n\t\tSET set_clause_list from_clause where_clause\n\t\t group_clause having_clause window_clause opt_sort_clause\n\t\t opt_select_limit\n\nrelying on the ability of all those symbols (except set_clause_list) to\nreduce to empty.\n\n* This is randomly inconsistent with select_no_parens, and not in a\ngood way, because you've omitted the option that's actually most likely\nto be useful, namely for_locking_clause. I wonder whether it's practical\nto refactor select_no_parens so that the stuff involving optional trailing\nclauses can be separated out into a production that insert_set_clause\ncould also use. Might not be worth the trouble, but I'm concerned\nabout select_no_parens growing additional clauses that we then forget\nto also add to insert_set_clause.\n\n* I'm not sure if it's worth also refactoring simple_select so that\nthe \"into_clause ... 
window_clause\" business could be shared. But\nit'd likely be a good idea to at least have a comment there noting\nthat any changes in that production might need to be applied to\ninsert_set_clause as well.\n\n* In kind of the same vein, it feels like the syntax documentation\nis awkwardly failing to share commonality that it ought to be\nable to share with the SELECT man page.\n\n* I dislike the random hacking you did in transformMultiAssignRef.\nThat weakens a useful check for error cases, and it's far from clear\nwhy the new assertion is OK. It also raises the question of whether\nthis is really the only place you need to touch in parse analysis.\nPerhaps it'd be better to consider inventing new EXPR_KIND_ values\nfor this situation; you'd then have to run around and look at all the\nexisting EXPR_KIND uses, but that seems like a useful cross-check\nactivity anyway. Or maybe we need to take two steps back and\nunderstand why that change is needed at all. I'd imagined that this\npatch would be only syntactic sugar for something you can do already,\nso it's not quite clear to me why we need additional changes.\n\n(If it's *not* just syntactic sugar, then the scope of potential\nproblems becomes far greater, eg does ruleutils.c need to know\nhow to reconstruct a valid SQL command from a querytree like this.\nIf we're not touching ruleutils.c, we need to be sure that every\ncommand that can be written this way can be written old-style.)\n\n* Other documentation gripes: the lone example seems insufficient,\nand there needs to be an entry under COMPATIBILITY pointing out\nthat this is not per SQL spec.\n\n* Some of the test cases seem to be expensively repeating\nconstruction/destruction of tables that they could have shared with\nexisting test cases. 
I do not consider it a virtue for new tests\nadded to an existing test script to be resolutely independent of\nwhat's already in that script.\n\nI'm setting this back to Waiting on Author.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Nov 2019 16:20:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 9:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Gareth Palmer <gareth@internetnz.net.nz> writes:\n> >> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Perhaps the way to resolve Peter's objection is to make the syntax\n> >> more fully like UPDATE:\n> >> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM\n> tables-providing-x-y-z\n> >> (with the patch as-submitted corresponding to the case with an empty\n> >> FROM clause, hence no variables in the expressions-to-be-assigned).\n>\n> > Thanks for the feedback. Attached is version 3 of the patch that makes\n> > the syntax work more like an UPDATE statement when a FROM clause is used.\n>\n> Since nobody has objected to this, I'm supposing that there's general\n> consensus that that design sketch is OK, and we can move on to critiquing\n> implementation details. I took a look, and didn't like much of what I saw.\n>\n> ...\n>\n> I'm setting this back to Waiting on Author.\n>\n> regards, tom lane\n>\n>\n>\nRegarding syntax and considering that it makes INSERT look like UPDATE:\nthere is another difference between INSERT and UPDATE. INSERT allows SELECT\nwith ORDER BY and OFFSET/LIMIT (or FETCH FIRST), e.g.:\n\nINSERT INTO t (a,b)\nSELECT a+10, b+10\nFROM t\nORDER BY a\nLIMIT 3;\n\nBut UPDATE doesn't. I suppose the proposed behaviour of INSERT .. SET will\nbe the same as standard INSERT. 
So we'll need a note for the differences\nbetween INSERT/SET and UPDATE/SET syntax.\n\nOn a related note, column aliases can be used in ORDER BY, e.g.:\n\ninsert into t (a, b)\nselect\n a + 20,\n b - 2 * a as f\nfrom t\norder by f desc\nlimit 3 ;\n\nWould that be expressed as follows?:\n\ninsert into t\nset\n a = a + 20,\n b = b - 2 * a as f\nfrom t\norder by f desc\nlimit 3 ;\n\nBest regards,\nPantelis Theodosiou",
"msg_date": "Fri, 15 Nov 2019 09:06:10 +0000",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Pantelis Theodosiou <ypercube@gmail.com> writes:\n> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Perhaps the way to resolve Peter's objection is to make the syntax\n>>> more fully like UPDATE:\n>>> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM\n>>> tables-providing-x-y-z\n\n> Regarding syntax and considering that it makes INSERT look like UPDATE:\n> there is another difference between INSERT and UPDATE. INSERT allows SELECT\n> with ORDER BY and OFFSET/LIMIT (or FETCH FIRST), e.g.: ...\n> But UPDATE doesn't. I suppose the proposed behaviour of INSERT .. SET will\n> be the same as standard INSERT. So we'll need a note for the differences\n> between INSERT/SET and UPDATE/SET syntax.\n\nI was supposing that this syntax should be just another way to spell\n\nINSERT INTO target (columnlist) SELECT ...\n\nSo everything past FROM would work exactly like it does in SELECT.\n\n> On a related not, column aliases can be used in ORDER BY, e.g:\n\nAs proposed, there's no option equivalent to writing output-column aliases\nin the INSERT ... SELECT form, so the question doesn't come up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Nov 2019 12:48:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "\n\n> On 15/11/2019, at 10:20 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Gareth Palmer <gareth@internetnz.net.nz> writes:\n>>> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Perhaps the way to resolve Peter's objection is to make the syntax\n>>> more fully like UPDATE:\n>>> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n>>> (with the patch as-submitted corresponding to the case with an empty\n>>> FROM clause, hence no variables in the expressions-to-be-assigned).\n> \n>> Thanks for the feedback. Attached is version 3 of the patch that makes\n>> the syntax work more like an UPDATE statement when a FROM clause is used.\n> \n> Since nobody has objected to this, I'm supposing that there's general\n> consensus that that design sketch is OK, and we can move on to critiquing\n> implementation details. I took a look, and didn't like much of what I saw.\n> \n> * In the grammar, there's no real need to have separate productions\n> for the cases with FROM and without. The way you have it is awkward,\n> and it arbitrarily rejects combinations that work fine in plain\n> SELECT, such as WHERE without FROM. 
You should just do\n> \n> insert_set_clause:\n> \t\tSET set_clause_list from_clause where_clause\n> \t\t group_clause having_clause window_clause opt_sort_clause\n> \t\t opt_select_limit\n> \n> relying on the ability of all those symbols (except set_clause_list) to\n> reduce to empty.\n\nThere are two separate productions to match the two different types\nof inserts: INSERT with VALUES and INSERT with SELECT.\n\nThe former has to store the values in valuesLists so that DEFAULT\ncan still be used.\n\nAllowing a WHERE without a FROM also means that while this would\nwork:\n\nINSERT INTO t SET c = DEFAULT;\n\nthis would fail with 'DEFAULT is not allowed in this context':\n\nINSERT INTO t SET c = DEFAULT WHERE true;\n\nI should have put a comment explaining why there are two rules.\n\nIt could be combined into one production but there would have to be\na check that $4 .. $9 are NULL to determine what type of INSERT to\nuse.\n\ntransformInsertStmt() also has an optimisation for the case of a\nsingle valueLists entry.\n\n> * This is randomly inconsistent with select_no_parens, and not in a\n> good way, because you've omitted the option that's actually most likely\n> to be useful, namely for_locking_clause. I wonder whether it's practical\n> to refactor select_no_parens so that the stuff involving optional trailing\n> clauses can be separated out into a production that insert_set_clause\n> could also use. Might not be worth the trouble, but I'm concerned\n> about select_no_parens growing additional clauses that we then forget\n> to also add to insert_set_clause.\n> \n> * I'm not sure if it's worth also refactoring simple_select so that\n> the \"into_clause ... window_clause\" business could be shared. 
But\n> it'd likely be a good idea to at least have a comment there noting\n> that any changes in that production might need to be applied to\n> insert_set_clause as well.\n\nI can add opt_for_locking_clause and a comment to simple_select to\nstart with while the format of insert_set_clause is still being\nworked out.\n\n> * In kind of the same vein, it feels like the syntax documentation\n> is awkwardly failing to share commonality that it ought to be\n> able to share with the SELECT man page.\n\nI could collapse the from clause to just '[ FROM from_clause ]'\nand have it refer to the from clause and everything after it in\nSELECT.\n\n> * I dislike the random hacking you did in transformMultiAssignRef.\n> That weakens a useful check for error cases, and it's far from clear\n> why the new assertion is OK. It also raises the question of whether\n> this is really the only place you need to touch in parse analysis.\n> Perhaps it'd be better to consider inventing new EXPR_KIND_ values\n> for this situation; you'd then have to run around and look at all the\n> existing EXPR_KIND uses, but that seems like a useful cross-check\n> activity anyway. Or maybe we need to take two steps back and\n> understand why that change is needed at all. 
I'd imagined that this\n> patch would be only syntactic sugar for something you can do already,\n> so it's not quite clear to me why we need additional changes.\n> \n> (If it's *not* just syntactic sugar, then the scope of potential\n> problems becomes far greater, eg does ruleutils.c need to know\n> how to reconstruct a valid SQL command from a querytree like this.\n> If we're not touching ruleutils.c, we need to be sure that every\n> command that can be written this way can be written old-style.)\n\nIt was intended to just be syntactic sugar, however because\nset_clause_list is being re-used, the ability to do multi-assignment\nin an INSERT's targetList 'came along for the ride', which has\nno equivalent in the current INSERT syntax.\n\nThat would be why those EXPR_KINDs are now appearing in\ntransformMultiAssignRef().\n\nThere are 3 things that could be done here:\n\n1. Update ruleutils.c to emit INSERT SET in get_insert_query_def()\n if query->hasSubLinks is true.\n\n2. Add a new production similar to set_clause_list which doesn't\n allow multi-assignment.\n\n3. Re-use set_clause_list but reject targetLists that contain\n multi-assignment.\n\nKeeping that feature is probably desirable at least for consistency\nwith other SET clauses.\n\nI will work on getting get_insert_query_def() to correctly\nreconstruct the new syntax.\n\n> * Other documentation gripes: the lone example seems insufficient,\n> and there needs to be an entry under COMPATIBILITY pointing out\n> that this is not per SQL spec.\n\nI will add something to the compatibility section.\n\n> * Some of the test cases seem to be expensively repeating\n> construction/destruction of tables that they could have shared with\n> existing test cases. 
I do not consider it a virtue for new tests\n> added to an existing test script to be resolutely independent of\n> what's already in that script.\n\nThose test cases will be changed to share those tables.\n\n> I'm setting this back to Waiting on Author.\n> \n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Tue, 19 Nov 2019 15:34:41 +1300",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "\n\n> On 15/11/2019, at 10:20 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Gareth Palmer <gareth@internetnz.net.nz> writes:\n>>> On 19/08/2019, at 3:00 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Perhaps the way to resolve Peter's objection is to make the syntax\n>>> more fully like UPDATE:\n>>> INSERT INTO target SET c1 = x, c2 = y+z, ... FROM tables-providing-x-y-z\n>>> (with the patch as-submitted corresponding to the case with an empty\n>>> FROM clause, hence no variables in the expressions-to-be-assigned).\n> \n>> Thanks for the feedback. Attached is version 3 of the patch that makes\n>> the syntax work more like an UPDATE statement when a FROM clause is used.\n> \n> Since nobody has objected to this, I'm supposing that there's general\n> consensus that that design sketch is OK, and we can move on to critiquing\n> implementation details. I took a look, and didn't like much of what I saw.\n> \n> * In the grammar, there's no real need to have separate productions\n> for the cases with FROM and without. The way you have it is awkward,\n> and it arbitrarily rejects combinations that work fine in plain\n> SELECT, such as WHERE without FROM. You should just do\n> \n> insert_set_clause:\n> \t\tSET set_clause_list from_clause where_clause\n> \t\t group_clause having_clause window_clause opt_sort_clause\n> \t\t opt_select_limit\n> \n> relying on the ability of all those symbols (except set_clause_list) to\n> reduce to empty.\n> \n> * This is randomly inconsistent with select_no_parens, and not in a\n> good way, because you've omitted the option that's actually most likely\n> to be useful, namely for_locking_clause. I wonder whether it's practical\n> to refactor select_no_parens so that the stuff involving optional trailing\n> clauses can be separated out into a production that insert_set_clause\n> could also use. 
Might not be worth the trouble, but I'm concerned\n> about select_no_parens growing additional clauses that we then forget\n> to also add to insert_set_clause.\n> \n> * I'm not sure if it's worth also refactoring simple_select so that\n> the \"into_clause ... window_clause\" business could be shared. But\n> it'd likely be a good idea to at least have a comment there noting\n> that any changes in that production might need to be applied to\n> insert_set_clause as well.\n> \n> * In kind of the same vein, it feels like the syntax documentation\n> is awkwardly failing to share commonality that it ought to be\n> able to share with the SELECT man page.\n> \n> * I dislike the random hacking you did in transformMultiAssignRef.\n> That weakens a useful check for error cases, and it's far from clear\n> why the new assertion is OK. It also raises the question of whether\n> this is really the only place you need to touch in parse analysis.\n> Perhaps it'd be better to consider inventing new EXPR_KIND_ values\n> for this situation; you'd then have to run around and look at all the\n> existing EXPR_KIND uses, but that seems like a useful cross-check\n> activity anyway. Or maybe we need to take two steps back and\n> understand why that change is needed at all. 
I'd imagined that this\n> patch would be only syntactic sugar for something you can do already,\n> so it's not quite clear to me why we need additional changes.\n> \n> (If it's *not* just syntactic sugar, then the scope of potential\n> problems becomes far greater, eg does ruleutils.c need to know\n> how to reconstruct a valid SQL command from a querytree like this.\n> If we're not touching ruleutils.c, we need to be sure that every\n> command that can be written this way can be written old-style.)\n\nSo it appears as though it may not require any changes to ruleutils.c\nas the parser is converting the multi-assignments into separate\ncolumns, eg:\n\nCREATE RULE r1 AS ON INSERT TO tab1\n DO INSTEAD\n INSERT INTO tab2 SET (col2, col1) = (new.col2, 0), col3 = tab3.col3\n FROM tab3\n\nThe rule generated is:\n\n r1 AS ON INSERT TO tab1 DO INSTEAD\n INSERT INTO tab2 (col2, col1, col3)\n SELECT new.col2, 0 AS col1, tab3.col3 FROM tab3\n\nIt will trigger that Assert() though, as EXPR_KIND_SELECT_TARGET is\nnow also being passed to transformMultiAssignRef().\n\n> * Other documentation gripes: the lone example seems insufficient,\n> and there needs to be an entry under COMPATIBILITY pointing out\n> that this is not per SQL spec.\n> \n> * Some of the test cases seem to be expensively repeating\n> construction/destruction of tables that they could have shared with\n> existing test cases. I do not consider it a virtue for new tests\n> added to an existing test script to be resolutely independent of\n> what's already in that script.\n> \n> I'm setting this back to Waiting on Author.\n> \n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Tue, 19 Nov 2019 17:05:37 +1300",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "> On 19/11/2019, at 5:05 PM, Gareth Palmer <gareth@internetnz.net.nz> wrote:\n>> \n>> Since nobody has objected to this, I'm supposing that there's general\n>> consensus that that design sketch is OK, and we can move on to critiquing\n>> implementation details. I took a look, and didn't like much of what I saw.\n\nAttached is an updated patch with for_locking_clause added, test-cases\nre-use existing tables and the comments and documentation have been\nexpanded.\n\n>> I'm setting this back to Waiting on Author.",
"msg_date": "Fri, 22 Nov 2019 12:24:15 +1300",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 12:24:15PM +1300, Gareth Palmer wrote:\n> Attached is an updated patch with for_locking_clause added, test-cases\n> re-use existing tables and the comments and documentation have been\n> expanded.\n\nPer the automatic patch tester, documentation included in the patch\ndoes not build. Could you please fix that? I have moved the patch to\nnext CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 12:32:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 4:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 22, 2019 at 12:24:15PM +1300, Gareth Palmer wrote:\n> > Attached is an updated patch with for_locking_clause added, test-cases\n> > re-use existing tables and the comments and documentation have been\n> > expanded.\n>\n> Per the automatic patch tester, documentation included in the patch\n> does not build. Could you please fix that? I have moved the patch to\n> next CF, waiting on author.\n\nAttached is a fixed version.\n\n> --\n> Michael",
"msg_date": "Tue, 3 Dec 2019 22:44:23 +1300",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Tom,\n\nOn 12/3/19 4:44 AM, Gareth Palmer wrote:\n> On Sun, Dec 1, 2019 at 4:32 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, Nov 22, 2019 at 12:24:15PM +1300, Gareth Palmer wrote:\n>>> Attached is an updated patch with for_locking_clause added, test-cases\n>>> re-use existing tables and the comments and documentation have been\n>>> expanded.\n>>\n>> Per the automatic patch tester, documentation included in the patch\n>> does not build. Could you please fix that? I have moved the patch to\n>> next CF, waiting on author.\n> \n> Attached is a fixed version.\n\nDoes this version of the patch address your concerns?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 24 Mar 2020 13:00:03 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "David Steele <david@pgmasters.net> writes:\n> On 12/3/19 4:44 AM, Gareth Palmer wrote:\n>> Attached is a fixed version.\n\n> Does this version of the patch address your concerns?\n\nNo. I still find the reliance on a FROM clause being present\nto be pretty arbitrary. Also, I don't believe that ruleutils.c\nrequires no changes, because it's not going to be possible to\ntransform every usage of this syntax to old-style. I tried to\nprove the point with this trivial example:\n\nregression=# create table foo (f1 int ,f2 int, f3 int);\nCREATE TABLE\nregression=# create table bar (f1 int ,f2 int, f3 int);\nCREATE TABLE\nregression=# create rule r1 as on insert to foo do instead\nregression-# insert into bar set (f1,f2,f3) = (select f1,f2,f3 from foo);\n\nintending to show that the rule decompilation was bogus, but\nI didn't get that far because the parser crashed:\n\nTRAP: FailedAssertion(\"pstate->p_multiassign_exprs == NIL\", File: \"parse_target.c\", Line: 287)\npostgres: postgres regression [local] CREATE RULE(ExceptionalCondition+0x55)[0x8fb6e5]\npostgres: postgres regression [local] CREATE RULE[0x5bd0c3]\npostgres: postgres regression [local] CREATE RULE[0x583def]\npostgres: postgres regression [local] CREATE RULE(transformStmt+0x2d5)[0x582665]\npostgres: postgres regression [local] CREATE RULE(transformRuleStmt+0x2ad)[0x5bf2ad]\npostgres: postgres regression [local] CREATE RULE(DefineRule+0x17)[0x793847]\n\nIf I do it like this, I get a different assertion:\n\nregression=# insert into bar set (f1,f2,f3) = (select f1,f2,f3) from foo;\nserver closed the connection unexpectedly\n\nTRAP: FailedAssertion(\"exprKind == EXPR_KIND_UPDATE_SOURCE\", File: \"parse_target.c\", Line: 209)\npostgres: postgres regression [local] INSERT(ExceptionalCondition+0x55)[0x8fb6e5]\npostgres: postgres regression [local] INSERT(transformTargetList+0x1a7)[0x5bd277]\npostgres: postgres regression [local] INSERT(transformStmt+0xbe0)[0x582f70]\npostgres: postgres regression 
[local] INSERT[0x5839f3]\npostgres: postgres regression [local] INSERT(transformStmt+0x2d5)[0x582665]\npostgres: postgres regression [local] INSERT(transformTopLevelStmt+0xd)[0x58411d]\npostgres: postgres regression [local] INSERT(parse_analyze+0x69)[0x584269]\n\n\nNo doubt that's all fixable, but the realization that some cases of\nthis syntax are *not* just syntactic sugar for standards-compliant\nsyntax is giving me pause. Do we really want to get out front of\nthe SQL committee on extending INSERT in an incompatible way?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Mar 2020 13:57:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "I wrote:\n> No doubt that's all fixable, but the realization that some cases of\n> this syntax are *not* just syntactic sugar for standards-compliant\n> syntax is giving me pause. Do we really want to get out front of\n> the SQL committee on extending INSERT in an incompatible way?\n\nOne compromise that might be worth thinking about is to disallow\nmultiassignments in this syntax, so as to (1) avoid the possibility\nof generating something that can't be represented by standard INSERT\nand (2) get something done in time for v13. The end of March is not\nthat far off. Perhaps somebody would come back and extend it later,\nor perhaps not.\n\nA slightly more ambitious compromise would be to allow multiassignment\nonly when the source can be pulled apart into independent subexpressions,\ncomparable to the restriction we used to have in UPDATE itself (before\n8f889b108 or thereabouts).\n\nIn either case the transformation could be done right in gram.y and\na helpful error thrown for unsupported cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Mar 2020 14:45:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On 2020-03-24 18:57, Tom Lane wrote:\n> No doubt that's all fixable, but the realization that some cases of\n> this syntax are *not* just syntactic sugar for standards-compliant\n> syntax is giving me pause. Do we really want to get out front of\n> the SQL committee on extending INSERT in an incompatible way?\n\nWhat is the additional functionality that we are considering adding here?\n\nThe thread started out proposing a more convenient syntax, but it seems \nto go deeper now and perhaps not everyone is following.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 13:51:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-03-24 18:57, Tom Lane wrote:\n>> No doubt that's all fixable, but the realization that some cases of\n>> this syntax are *not* just syntactic sugar for standards-compliant\n>> syntax is giving me pause. Do we really want to get out front of\n>> the SQL committee on extending INSERT in an incompatible way?\n\n> What is the additional functionality that we are considering adding here?\n> The thread started out proposing a more convenient syntax, but it seems \n> to go deeper now and perhaps not everyone is following.\n\nAIUI, the proposal is to allow INSERT commands to be written\nusing an UPDATE-like syntax, for example\n\nINSERT INTO table SET col1 = value1, col2 = value2, ... [ FROM ... ]\n\nwhere everything after FROM is the same as it is in SELECT. My initial\nbelief was that this was strictly equivalent to what you could do with\na target-column-names list in standard INSERT, viz\n\nINSERT INTO table (col1, col2, ...) VALUES (value1, value2, ...);\nor\nINSERT INTO table (col1, col2, ...) SELECT value1, value2, ... FROM ...\n\nbut it's arguably more legible/convenient because the column names\nare written next to their values.\n\nHowever, that rewriting falls down for certain multiassignment cases\nwhere you have a row source that can't be decomposed, such as my\nexample\n\nINSERT INTO table SET (col1, col2) = (SELECT value1, value2 FROM ...),\n... [ FROM ... ]\n\nSo, just as we found for UPDATE, multiassignment syntax is strictly\nstronger than plain column-by-column assignment.\n\nThere are some secondary issues about which variants of this syntax\nwill allow a column value to be written as DEFAULT, and perhaps\nabout whether set-returning functions work. But the major point\nright now is about whether it's possible to rewrite to standard\nsyntax.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Mar 2020 10:17:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "> On 26/03/2020, at 3:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-03-24 18:57, Tom Lane wrote:\n>>> No doubt that's all fixable, but the realization that some cases of\n>>> this syntax are*not* just syntactic sugar for standards-compliant\n>>> syntax is giving me pause. Do we really want to get out front of\n>>> the SQL committee on extending INSERT in an incompatible way?\n> \n>> What is the additional functionality that we are considering adding here?\n>> The thread started out proposing a more convenient syntax, but it seems \n>> to go deeper now and perhaps not everyone is following.\n> \n> AIUI, the proposal is to allow INSERT commands to be written\n> using an UPDATE-like syntax, for example\n> \n> INSERT INTO table SET col1 = value1, col2 = value2, ... [ FROM ... ]\n> \n> where everything after FROM is the same as it is in SELECT. My initial\n> belief was that this was strictly equivalent to what you could do with\n> a target-column-names list in standard INSERT, viz\n> \n> INSERT INTO table (col1, col2, ...) VALUES (value1, value2, ...);\n> or\n> INSERT INTO table (col1, col2, ...) SELECT value1, value2, ... FROM ...\n> \n> but it's arguably more legible/convenient because the column names\n> are written next to their values.\n> \n> However, that rewriting falls down for certain multiassignment cases\n> where you have a row source that can't be decomposed, such as my\n> example\n> \n> INSERT INTO table SET (col1, col2) = (SELECT value1, value2 FROM ...),\n> ... [ FROM ... ]\n> \n> So, just as we found for UPDATE, multiassignment syntax is strictly\n> stronger than plain column-by-column assignment.\n> \n> There are some secondary issues about which variants of this syntax\n> will allow a column value to be written as DEFAULT, and perhaps\n> about whether set-returning functions work. 
But the major point\n> right now is about whether it's possible to rewrite to standard\n> syntax.\n> \n> \t\t\tregards, tom lane\n\nAttached is v6 of the patch.\n\nAs per the suggestion, the SET clause list is checked for any\nMultiAssignRef nodes and an error is reported if any are found.\n\nFor example, the rule definition that previously caused a parser crash\nwould now produce the following error:\n\nvagrant=> create rule r1 as on insert to foo do instead\nvagrant-> insert into bar set (f1,f2,f3) = (select f1,f2,f3 from foo);\nERROR: INSERT SET syntax does not support multi-assignment of columns.\nLINE 2: insert into bar set (f1,f2,f3) = (select f1,f2,f3 from foo);\n ^\nHINT: Specify the column assignments separately.\n\n\nRequiring a FROM clause was a way to differentiate between an INSERT\nwith VALUES() which does allow DEFAULT and an INSERT with SELECT which\ndoes not.\n\nThe idea was that it would help the user understand that they were writing\na different type of query and that DEFAULT would not be allowed in that\ncontext.\n\nTo show what it would look like without that requirement I have removed\nit from the v6 patch. The first example works but the second one will\ngenerate an error.\n\nINSERT INTO t SET c1 = 1 WHERE true;\nINSERT INTO t SET c1 = DEFAULT WHERE true;",
"msg_date": "Thu, 26 Mar 2020 16:21:47 +1300",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe build failed when applying the patch to the latest code version, but I tried HEAD at\n'73025140885c889410b9bfc4a30a3866396fc5db', which works well.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 22 Apr 2020 02:40:28 +0000",
"msg_from": "movead li <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi Movead,\n\n> On 22/04/2020, at 2:40 PM, movead li <movead.li@highgo.ca> wrote:\n> \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> It builds failed by applying to the latest code version, and I try head\n> '73025140885c889410b9bfc4a30a3866396fc5db' which work well.\n> \n> The new status of this patch is: Waiting on Author\n\nThank you for the review, attached is v7 of the patch which should\napply correctly to HEAD.\n\nThis version now uses its own production rule for the SET clause to\navoid the issue with MultiAssignRef nodes in the targetList.",
"msg_date": "Fri, 24 Apr 2020 12:04:33 +1200",
"msg_from": "Gareth Palmer <gareth@internetnz.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On 4/23/20 8:04 PM, Gareth Palmer wrote:\n> \n> Thank you for the review, attached is v7 of the patch which should\n> apply correcly to HEAD.\n> \n> This version now uses it's own production rule for the SET clause to\n> avoid the issue with MultiAssigmentRef nodes in the targetList.\n\nIbrar, Movead, you are the reviewers of this patch. Do you think all of \nTom's and Peter's concerns have been addressed?\n\nIf so, please mark as Ready for Committer so somebody can have a look.\n\nIf not, what remains to be done?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 11:27:26 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "> On 4/23/20 8:04 PM, Gareth Palmer wrote:\n> >\n> > Thank you for the review, attached is v7 of the patch which should\n> > apply correcly to HEAD.\n> >\n\nHello Gareth,\n\nThis patch no longer applies to HEAD, can you please submit a rebased version?\n\nThanks,\nRachel\n\n\n",
"msg_date": "Tue, 21 Sep 2021 16:38:00 -0700",
"msg_from": "Rachel Heaton <rachelmheaton@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hello Rachel,\n\nOn Wed, 22 Sept 2021 at 17:13, Rachel Heaton <rachelmheaton@gmail.com> wrote:\n>\n> > On 4/23/20 8:04 PM, Gareth Palmer wrote:\n> > >\n> > > Thank you for the review, attached is v7 of the patch which should\n> > > apply correcly to HEAD.\n> > >\n>\n> Hello Gareth,\n>\n> This patch no longer applies to HEAD, can you please submit a rebased version?\n\nAttached is a rebased version that should apply to HEAD.\n\nGareth\n\n> Thanks,\n> Rachel\n>\n>\n>\n>",
"msg_date": "Wed, 22 Sep 2021 23:49:06 +1200",
"msg_from": "Gareth Palmer <gareth.palmer3@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Since this feature adds INSERT OVERRIDING SET syntax, it is recommended to add some related test cases.\n\n\nRegards\nWenjing\n\n\n> On 22 Sep 2021, at 07:38, Rachel Heaton <rachelmheaton@gmail.com> wrote:\n> \n>> On 4/23/20 8:04 PM, Gareth Palmer wrote:\n>>> \n>>> Thank you for the review, attached is v7 of the patch which should\n>>> apply correcly to HEAD.\n>>> \n> \n> Hello Gareth,\n> \n> This patch no longer applies to HEAD, can you please submit a rebased version?\n> \n> Thanks,\n> Rachel\n> \n>",
"msg_date": "Fri, 21 Jan 2022 17:24:51 +0800",
"msg_from": "wenjing zeng <wjzeng2012@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jan 21, 2022 at 05:24:51PM +0800, wenjing zeng wrote:\n> Since this feature adds INSERT OVERRIDING SET syntax, it is recommended to add some related testcases.\n\nThanks for proposing some more tests.\n\nNote that your patch caused Gareth's patches to break under the cfbot.\nhttp://cfbot.cputube.org/gareth-palmer.html\n\nYou have to either include the pre-requisite patches as 0001, and your patch as\n0002 (as I'm doing now), or name your patch something other than *.diff or\n*.patch, so cfbot doesn't think it's a new version of the patch to be tested.\n\nThanks,\n-- \nJustin",
"msg_date": "Sun, 23 Jan 2022 21:58:05 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> You have to either include the pre-requisite patches as 0001, and your patch as\n> 0002 (as I'm doing now), or name your patch something other than *.diff or\n> *.patch, so cfbot doesn't think it's a new version of the patch to be tested.\n\nThis patch has been basically ignored for a full two years now.\n(Remarkably, it's still passing in the cfbot.)\n\nI have to think that that means there's just not enough interest\nto justify committing it. Should we mark it rejected and move on?\nIf not, what needs to happen to get it unstuck?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 23 Mar 2022 11:32:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > You have to either include the pre-requisite patches as 0001, and your patch as\n> > 0002 (as I'm doing now), or name your patch something other than *.diff or\n> > *.patch, so cfbot doesn't think it's a new version of the patch to be tested.\n>\n> This patch has been basically ignored for a full two years now.\n> (Remarkably, it's still passing in the cfbot.)\n>\n> I have to think that that means there's just not enough interest\n> to justify committing it. Should we mark it rejected and move on?\n> If not, what needs to happen to get it unstuck?\n\nI can help with review and/or other work here. Please give me a\ncouple of weeks.\n\n\n.m\n\n\n",
"msg_date": "Thu, 7 Apr 2022 21:29:11 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 11:29 AM Marko Tiikkaja <marko@joh.to> wrote:\n> I can help with review and/or other work here. Please give me a\n> couple of weeks.\n\nHi Marko, did you get a chance to pick up this patchset? If not, no\nworries; I can mark this RwF and we can try again in a future\ncommitfest.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 5 Jul 2022 12:01:30 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "Hello,\n\nHere is a new version of the patch that applies to HEAD.\n\nIt also adds some regression tests for overriding {system,user} values\nbased on Wenjing Zeng's work.\n\nGareth\n\nOn Thu, 14 Jul 2022 at 22:40, Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Thu, Apr 7, 2022 at 11:29 AM Marko Tiikkaja <marko@joh.to> wrote:\n> > I can help with review and/or other work here. Please give me a\n> > couple of weeks.\n>\n> Hi Marko, did you get a chance to pick up this patchset? If not, no\n> worries; I can mark this RwF and we can try again in a future\n> commitfest.\n>\n> Thanks,\n> --Jacob\n>\n>\n>\n>",
"msg_date": "Thu, 14 Jul 2022 22:45:33 +1200",
"msg_from": "Gareth Palmer <gareth.palmer3@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
},
{
"msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/2218/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1]\nhttps://postgr.es/m/flat/0ab66589-2f71-69b3-2002-49e821740b0d@timescale.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 13:18:35 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Implement INSERT SET syntax"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that pg_rewind is failing against the latest sources of PG \nv12Beta2 / PG 13devel\n\nSteps to reproduce -\n=============\n0)mkdir /tmp/archive_dir\n1)Master Setup -> ./initdb -D master , add these parameters in \npostgresql.conf file -\n\"\nwal_level = hot_standby\nwal_log_hints = on\nhot_standby = on\narchive_mode=on\narchive_command='cp %p /tmp/archive_dir/%f'\nport=5432\n\"\nStart the server (./pg_ctl -D master start)\nConnect to psql terminal - create table/ insert few rows / select \npg_switch_wal(); -- fire 3 times\n\n2)Slave Setup -> ./pg_basebackup -PR -X stream -c fast -h 127.0.0.1 -U \ncentos -p 5432 -D slave\n\nadd these parameters in postgresql.conf file of SLAVE-\n\"\nprimary_conninfo = 'user=centos host=127.0.0.1 port=5432'\npromote_trigger_file = '/tmp/t00.txt'\nrestore_command='cp /tmp/archive_dir/%f %p'\nport=5555\n\"\nStart Slave (./pg_ctl -D slave start)\n\n3)Touch trigger file (touch /tmp/t00.txt) -> - standby.signal is gone \nfrom standby directory and now able to insert rows on standby server.\n4)stop master ( ./pg_ctl -D master stop)\n5)Perform pg_rewind\n\n[centos@mail-arts bin]$ ./pg_rewind -D master/ \n--source-server=\"host=localhost port=5555 user=centos password=nothing \ndbname=postgres\"\npg_rewind: servers diverged at WAL location 0/3000158 on timeline 1\n*pg_rewind: error: could not open file \n\"master//pg_wal/000000010000000000000003\": No such file or directory*\npg_rewind: fatal: could not find previous WAL record at 0/3000158\n\nEarlier ,i was getting this below result -\n\n[centos@mail-arts bin]$ ./pg_rewind -D master/ \n--source-server=\"host=localhost port=5555 user=centos password=edb \ndbname=postgres\"\npg_rewind: servers diverged at WAL location 0/3003538 on timeline 1\npg_rewind: rewinding from last common checkpoint at 0/2000060 on timeline 1\n\npg_rewind: Done!\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n\n\nHi,\nI 
found that pg_rewind is failing against the latest sources of\n PG v12Beta2 / PG 13devel \n\nSteps to reproduce -\n \n =============\n \n 0)mkdir /tmp/archive_dir\n 1)Master Setup -> ./initdb -D master , add these parameters in\n postgresql.conf file -\n \n \"\n \n wal_level = hot_standby\n \n wal_log_hints = on\n \n hot_standby = on\n \n archive_mode=on\n \n archive_command='cp %p /tmp/archive_dir/%f'\n \n port=5432\n \n \"\n \n Start the server (./pg_ctl -D master start)\n \n Connect to psql terminal - create table/ insert few rows\n / select pg_switch_wal(); -- fire 3 times \n\n 2)Slave Setup -> ./pg_basebackup -PR -X stream -c fast -h\n 127.0.0.1 -U centos -p 5432 -D slave\n \n\n add these parameters in postgresql.conf file of SLAVE-\n \n \"\n \n primary_conninfo = 'user=centos host=127.0.0.1 port=5432'\n \n promote_trigger_file = '/tmp/t00.txt'\n \n restore_command='cp /tmp/archive_dir/%f %p'\n \n port=5555\n \"\n \n Start Slave (./pg_ctl -D slave start)\n \n\n 3)Touch trigger file (touch /tmp/t00.txt) -> - standby.signal\n is gone from standby directory and now able to insert rows on\n standby server.\n \n 4)stop master ( ./pg_ctl -D master stop)\n \n 5)Perform pg_rewind \n\n[centos@mail-arts bin]$ ./pg_rewind -D master/\n --source-server=\"host=localhost port=5555 user=centos\n password=nothing dbname=postgres\"\n pg_rewind: servers diverged at WAL location 0/3000158 on timeline\n 1\npg_rewind: error: could not open file\n \"master//pg_wal/000000010000000000000003\": No such file or\n directory\n pg_rewind: fatal: could not find previous WAL record at 0/3000158\n\nEarlier ,i was getting this below result - \n\n[centos@mail-arts bin]$ ./pg_rewind -D master/\n --source-server=\"host=localhost port=5555 user=centos password=edb\n dbname=postgres\"\n \n pg_rewind: servers diverged at WAL location 0/3003538 on timeline\n 1\n \n pg_rewind: rewinding from last common checkpoint at 0/2000060 on\n timeline 1\n \n\n pg_rewind: Done!\n \n-- 
\nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 17 Jul 2019 15:25:43 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind is failing on PG v12 BETA/PG HEAD"
}
] |
[
{
"msg_contents": "Hello -\n\nSince Procedures were introduced in PG 11, the workaround to invoke them\nwith JDBC is to send the native \"CALL proc()\" SQL and let it be treated as\na SQL statement, not a specific stored routine invocation.\n\n1) When using transaction control inside the stored proc, an exception is\ngenerated if autoCommit is false - see example code attached.\nException in thread \"main\" org.postgresql.util.PSQLException: ERROR:\ninvalid transaction termination\n\n2) Output parameters are not mapped as parameters, and app code cannot use\nregisterOutputParameter or getInt() style retrieval. Instead, outputs are\nleft in the result set and app code must retrieve the result and pull,\ncreating a big difference between how Procedures and Functions are invoked.\n\nI propose improving support for procedures. Either:\n(1) add support for \"CALL proc()\" to be treated as a routine invocation so\nthat output parameters can be registered, no begin transaction is silently\nsent from driver, and calling a procedure and calling a function would be\nvery similar (only differing in function still using the {call} escape\nsyntax.\nor\n(2) change the {call} syntax to optionally support procedures. {? = call}\nwould still be mapped to functions. Add a connection setting to control\nthis change, and make default false, so that default stays backwards\ncompatible with pre pg11 functionality.\n\nThoughts?",
"msg_date": "Wed, 17 Jul 2019 07:49:16 -0400",
"msg_from": "David Rader <david.rader@gmail.com>",
"msg_from_op": true,
"msg_subject": "Procedure support improvements"
},
{
"msg_contents": "Hmmm who knew you couldn't call a procedure inside a transaction. That just\nseems broken\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\nOn Sun, 21 Jul 2019 at 13:31, David Rader <david.rader@gmail.com> wrote:\n\n> Hello -\n>\n> Since Procedures were introduced in PG 11, the workaround to invoke them\n> with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as\n> a SQL statement, not a specific stored routine invocation.\n>\n> 1) When using transaction control inside the stored proc, an exception is\n> generated if autoCommit is false - see example code attached.\n> Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR:\n> invalid transaction termination\n>\n> 2) Output parameters are not mapped as parameters, and app code cannot use\n> registerOutputParameter or getInt() style retrieval. Instead, outputs are\n> left in the result set and app code must retrieve the result and pull,\n> creating a big difference between how Procedures and Functions are invoked.\n>\n> I propose improving support for procedures. Either:\n> (1) add support for \"CALL proc()\" to be treated as a routine invocation so\n> that output parameters can be registered, no begin transaction is silently\n> sent from driver, and calling a procedure and calling a function would be\n> very similar (only differing in function still using the {call} escape\n> syntax.\n> or\n> (2) change the {call} syntax to optionally support procedures. {? = call}\n> would still be mapped to functions. Add a connection setting to control\n> this change, and make default false, so that default stays backwards\n> compatible with pre pg11 functionality.\n>\n> Thoughts?\n>\n>\n>\n>\n>\n\nHmmm who knew you couldn't call a procedure inside a transaction. 
That just seems brokenDave Cramerdavec@postgresintl.comwww.postgresintl.comOn Sun, 21 Jul 2019 at 13:31, David Rader <david.rader@gmail.com> wrote:Hello - Since Procedures were introduced in PG 11, the workaround to invoke them with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as a SQL statement, not a specific stored routine invocation.1) When using transaction control inside the stored proc, an exception is generated if autoCommit is false - see example code attached.Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR: invalid transaction termination2) Output parameters are not mapped as parameters, and app code cannot use registerOutputParameter or getInt() style retrieval. Instead, outputs are left in the result set and app code must retrieve the result and pull, creating a big difference between how Procedures and Functions are invoked.I propose improving support for procedures. Either:(1) add support for \"CALL proc()\" to be treated as a routine invocation so that output parameters can be registered, no begin transaction is silently sent from driver, and calling a procedure and calling a function would be very similar (only differing in function still using the {call} escape syntax.or(2) change the {call} syntax to optionally support procedures. {? = call} would still be mapped to functions. Add a connection setting to control this change, and make default false, so that default stays backwards compatible with pre pg11 functionality.Thoughts?",
"msg_date": "Tue, 23 Jul 2019 16:37:27 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 4:37 PM Dave Cramer <pg@fastcrypt.com> wrote:\n\n> Hmmm who knew you couldn't call a procedure inside a transaction. That\n> just seems broken\n>\n>\nYeah, the description in the docs is:\n\"Transaction control is only possible in CALL or DO invocations from the\ntop level or nested CALL or DO invocations without any other intervening\ncommand. \"\nhttps://www.postgresql.org/docs/11/plpgsql-transactions.html\n\n\nWhich means to be able to call procedures that use commit or rollback you\nhave to be able to call them without a begin...\n\n\n\n\n> Dave Cramer\n>\n> davec@postgresintl.com\n> www.postgresintl.com\n>\n>\n> On Sun, 21 Jul 2019 at 13:31, David Rader <david.rader@gmail.com> wrote:\n>\n>> Hello -\n>>\n>> Since Procedures were introduced in PG 11, the workaround to invoke them\n>> with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as\n>> a SQL statement, not a specific stored routine invocation.\n>>\n>> 1) When using transaction control inside the stored proc, an exception is\n>> generated if autoCommit is false - see example code attached.\n>> Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR:\n>> invalid transaction termination\n>>\n>> 2) Output parameters are not mapped as parameters, and app code cannot\n>> use registerOutputParameter or getInt() style retrieval. Instead, outputs\n>> are left in the result set and app code must retrieve the result and pull,\n>> creating a big difference between how Procedures and Functions are invoked.\n>>\n>> I propose improving support for procedures. 
Either:\n>> (1) add support for \"CALL proc()\" to be treated as a routine invocation\n>> so that output parameters can be registered, no begin transaction is\n>> silently sent from driver, and calling a procedure and calling a function\n>> would be very similar (only differing in function still using the {call}\n>> escape syntax.\n>> or\n>> (2) change the {call} syntax to optionally support procedures. {? = call}\n>> would still be mapped to functions. Add a connection setting to control\n>> this change, and make default false, so that default stays backwards\n>> compatible with pre pg11 functionality.\n>>\n>> Thoughts?\n>>\n>>\n>>\n>>\n>>\n\nOn Tue, Jul 23, 2019 at 4:37 PM Dave Cramer <pg@fastcrypt.com> wrote:Hmmm who knew you couldn't call a procedure inside a transaction. That just seems brokenYeah, the description in the docs is:\"Transaction control is only possible in CALL or DO invocations from the top level or nested CALL or DO invocations without any other intervening command. \"https://www.postgresql.org/docs/11/plpgsql-transactions.htmlWhich means to be able to call procedures that use commit or rollback you have to be able to call them without a begin... Dave Cramerdavec@postgresintl.comwww.postgresintl.comOn Sun, 21 Jul 2019 at 13:31, David Rader <david.rader@gmail.com> wrote:Hello - Since Procedures were introduced in PG 11, the workaround to invoke them with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as a SQL statement, not a specific stored routine invocation.1) When using transaction control inside the stored proc, an exception is generated if autoCommit is false - see example code attached.Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR: invalid transaction termination2) Output parameters are not mapped as parameters, and app code cannot use registerOutputParameter or getInt() style retrieval. 
Instead, outputs are left in the result set and app code must retrieve the result and pull, creating a big difference between how Procedures and Functions are invoked.I propose improving support for procedures. Either:(1) add support for \"CALL proc()\" to be treated as a routine invocation so that output parameters can be registered, no begin transaction is silently sent from driver, and calling a procedure and calling a function would be very similar (only differing in function still using the {call} escape syntax.or(2) change the {call} syntax to optionally support procedures. {? = call} would still be mapped to functions. Add a connection setting to control this change, and make default false, so that default stays backwards compatible with pre pg11 functionality.Thoughts?",
"msg_date": "Tue, 23 Jul 2019 21:59:52 -0400",
"msg_from": "David Rader <david.rader@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Tue, 23 Jul 2019 at 22:00, David Rader <david.rader@gmail.com> wrote:\n\n>\n>\n> On Tue, Jul 23, 2019 at 4:37 PM Dave Cramer <pg@fastcrypt.com> wrote:\n>\n>> Hmmm who knew you couldn't call a procedure inside a transaction. That\n>> just seems broken\n>>\n>>\n> Yeah, the description in the docs is:\n> \"Transaction control is only possible in CALL or DO invocations from the\n> top level or nested CALL or DO invocations without any other intervening\n> command. \"\n> https://www.postgresql.org/docs/11/plpgsql-transactions.html\n>\n>\n> Which means to be able to call procedures that use commit or rollback you\n> have to be able to call them without a begin...\n>\n\nThis makes calling procedures a mostly useless feature IMO. What's the\nmotivation to make this work?\n\nDave\n\nOn Tue, 23 Jul 2019 at 22:00, David Rader <david.rader@gmail.com> wrote:On Tue, Jul 23, 2019 at 4:37 PM Dave Cramer <pg@fastcrypt.com> wrote:Hmmm who knew you couldn't call a procedure inside a transaction. That just seems brokenYeah, the description in the docs is:\"Transaction control is only possible in CALL or DO invocations from the top level or nested CALL or DO invocations without any other intervening command. \"https://www.postgresql.org/docs/11/plpgsql-transactions.htmlWhich means to be able to call procedures that use commit or rollback you have to be able to call them without a begin...This makes calling procedures a mostly useless feature IMO. What's the motivation to make this work?Dave",
"msg_date": "Wed, 24 Jul 2019 07:09:02 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": ">(2) change the {call} syntax to optionally support procedures. {? = call}\n>would still be mapped to functions. Add a connection setting to control\n>this change, and make default false, so that default stays backwards\n>compatible with pre pg11 functionality.\n\nGiven that stored procedures were added in PG11, but PGJDBC doesn't\nsupport calling them using JDBC's escape call syntax (\"{call ...}\"), I\nagree that an option to allow it is required, and would be beneficial.\n\nResorting to using the Postgres-native \"CALL ...\" is not always\nviable, for reasons such as:\n- It's not really desirable to use \"non-portable\" JDBC code.\n- You can't use PGJDBC and native \"CALL ...\" to invoke PostgreSQL\nstored procedures that have INOUT arguments.\n For example, if you attempt to invoke registerOutParameter() on a\nCallableStatement in this case, it results in the following error:\n This statement does not declare an OUT parameter. Use { ?=\ncall ... } to declare one.\n- Some software such as ORMs (e.g. JPA implementations like Hibernate,\nand similar) generate JDBC code that uses the JDBC escape call syntax\n(with the expectation that it will work), but attempted invocation of\nPostgreSQL stored procedures using such code fails (since PGJDBC\nalways transforms the JDBC escape call syntax into a SELECT statement,\nwhich can only invoke functions, not stored procedures).\n\nInability to support stored procedure invocation via the JDBC escape\ncall syntax might also be viewed as a(nother) migration issue, for\nthose wishing to migrate to PostgreSQL from another database vendor.\n\nThe suggested optional connection setting for JDBC escape call syntax\ncould be more granular than true/false.\nFor example, it could support different modes to:\n- map to SELECT always (default)\n- map to CALL if no return value\n i.e. when \"{call …}\" is specified\n- map to CALL if no return or output parameters\n i.e. 
when \"{call …}\" is specified, and no out parameters are registered\n- map to CALL always\n\n\nGreg Nancarrow\nFujitsu Australia\n\n\nOn Thu, Aug 22, 2019 at 3:03 PM David Rader <david.rader@gmail.com> wrote:\n>\n> Hello -\n>\n> Since Procedures were introduced in PG 11, the workaround to invoke them with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as a SQL statement, not a specific stored routine invocation.\n>\n> 1) When using transaction control inside the stored proc, an exception is generated if autoCommit is false - see example code attached.\n> Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR: invalid transaction termination\n>\n> 2) Output parameters are not mapped as parameters, and app code cannot use registerOutputParameter or getInt() style retrieval. Instead, outputs are left in the result set and app code must retrieve the result and pull, creating a big difference between how Procedures and Functions are invoked.\n>\n> I propose improving support for procedures. Either:\n> (1) add support for \"CALL proc()\" to be treated as a routine invocation so that output parameters can be registered, no begin transaction is silently sent from driver, and calling a procedure and calling a function would be very similar (only differing in function still using the {call} escape syntax.\n> or\n> (2) change the {call} syntax to optionally support procedures. {? = call} would still be mapped to functions. Add a connection setting to control this change, and make default false, so that default stays backwards compatible with pre pg11 functionality.\n>\n> Thoughts?\n>\n>\n>\n>\n\n\n",
"msg_date": "Thu, 22 Aug 2019 15:39:11 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "Greg,\n\nWhile I understand the frustration I think more work needs to be done by\nthe server to make this a useful feature.\nCurrently you cannot call a procedure inside a transaction and from what I\ncan see here\nhttps://docs.jboss.org/hibernate/core/3.3/reference/en/html/transactions.html\nspecifically\n\"Hibernate disables, or expects the application server to disable,\nauto-commit mode immediately. Database transactions are never optional. All\ncommunication with a database has to occur inside a transaction.\" I fail to\nsee how this would work?\n\nAFAIK we need autonomous transactions to be implemented and ideally some\nmechanism to call functions or procedures with the same syntax.\n\nI think we need to be pressing the people who committed procedures to\ncomplete the work they started. Fixing this in the drivers will just end up\nbeing a kludge at best.\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\nOn Thu, 22 Aug 2019 at 01:39, Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> >(2) change the {call} syntax to optionally support procedures. {? = call}\n> >would still be mapped to functions. 
Add a connection setting to control\n> >this change, and make default false, so that default stays backwards\n> >compatible with pre pg11 functionality.\n>\n> Given that stored procedures were added in PG11, but PGJDBC doesn't\n> support calling them using JDBC's escape call syntax (\"{call ...}\"), I\n> agree that an option to allow it is required, and would be beneficial.\n>\n> Resorting to using the Postgres-native \"CALL ...\" is not always\n> viable, for reasons such as:\n> - It's not really desirable to use \"non-portable\" JDBC code.\n> - You can't use PGJDBC and native \"CALL ...\" to invoke PostgreSQL\n> stored procedures that have INOUT arguments.\n> For example, if you attempt to invoke registerOutParameter() on a\n> CallableStatement in this case, it results in the following error:\n> This statement does not declare an OUT parameter. Use { ?=\n> call ... } to declare one.\n> - Some software such as ORMs (e.g. JPA implementations like Hibernate,\n> and similar) generate JDBC code that uses the JDBC escape call syntax\n> (with the expectation that it will work), but attempted invocation of\n> PostgreSQL stored procedures using such code fails (since PGJDBC\n> always transforms the JDBC escape call syntax into a SELECT statement,\n> which can only invoke functions, not stored procedures).\n>\n> Inability to support stored procedure invocation via the JDBC escape\n> call syntax might also be viewed as a(nother) migration issue, for\n> those wishing to migrate to PostgreSQL from another database vendor.\n>\n> The suggested optional connection setting for JDBC escape call syntax\n> could be more granular than true/false.\n> For example, it could support different modes to:\n> - map to SELECT always (default)\n> - map to CALL if no return value\n> i.e. when \"{call …}\" is specified\n> - map to CALL if no return or output parameters\n> i.e. 
when \"{call …}\" is specified, and no out parameters are registered\n> - map to CALL always\n>\n>\n> Greg Nancarrow\n> Fujitsu Australia\n>\n>\n> On Thu, Aug 22, 2019 at 3:03 PM David Rader <david.rader@gmail.com> wrote:\n> >\n> > Hello -\n> >\n> > Since Procedures were introduced in PG 11, the workaround to invoke them\n> with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as\n> a SQL statement, not a specific stored routine invocation.\n> >\n> > 1) When using transaction control inside the stored proc, an exception\n> is generated if autoCommit is false - see example code attached.\n> > Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR:\n> invalid transaction termination\n> >\n> > 2) Output parameters are not mapped as parameters, and app code cannot\n> use registerOutputParameter or getInt() style retrieval. Instead, outputs\n> are left in the result set and app code must retrieve the result and pull,\n> creating a big difference between how Procedures and Functions are invoked.\n> >\n> > I propose improving support for procedures. Either:\n> > (1) add support for \"CALL proc()\" to be treated as a routine invocation\n> so that output parameters can be registered, no begin transaction is\n> silently sent from driver, and calling a procedure and calling a function\n> would be very similar (only differing in function still using the {call}\n> escape syntax.\n> > or\n> > (2) change the {call} syntax to optionally support procedures. {? =\n> call} would still be mapped to functions. 
Add a connection setting to\n> control this change, and make default false, so that default stays\n> backwards compatible with pre pg11 functionality.\n> >\n> > Thoughts?\n> >\n> >\n> >\n> >\n>\n>\n>\n\nGreg,While I understand the frustration I think more work needs to be done by the server to make this a useful feature.Currently you cannot call a procedure inside a transaction and from what I can see here https://docs.jboss.org/hibernate/core/3.3/reference/en/html/transactions.html specifically \"Hibernate disables, or expects the application server to disable, auto-commit mode immediately. Database transactions are never optional. All communication with a database has to occur inside a transaction.\" I fail to see how this would work?AFAIK we need autonomous transactions to be implemented and ideally some mechanism to call functions or procedures with the same syntax.I think we need to be pressing the people who committed procedures to complete the work they started. Fixing this in the drivers will just end up being a kludge at best.Dave Cramerdavec@postgresintl.comwww.postgresintl.comOn Thu, 22 Aug 2019 at 01:39, Greg Nancarrow <gregn4422@gmail.com> wrote:>(2) change the {call} syntax to optionally support procedures. {? = call}\n>would still be mapped to functions. 
Add a connection setting to control\n>this change, and make default false, so that default stays backwards\n>compatible with pre pg11 functionality.\n\nGiven that stored procedures were added in PG11, but PGJDBC doesn't\nsupport calling them using JDBC's escape call syntax (\"{call ...}\"), I\nagree that an option to allow it is required, and would be beneficial.\n\nResorting to using the Postgres-native \"CALL ...\" is not always\nviable, for reasons such as:\n- It's not really desirable to use \"non-portable\" JDBC code.\n- You can't use PGJDBC and native \"CALL ...\" to invoke PostgreSQL\nstored procedures that have INOUT arguments.\n For example, if you attempt to invoke registerOutParameter() on a\nCallableStatement in this case, it results in the following error:\n This statement does not declare an OUT parameter. Use { ?=\ncall ... } to declare one.\n- Some software such as ORMs (e.g. JPA implementations like Hibernate,\nand similar) generate JDBC code that uses the JDBC escape call syntax\n(with the expectation that it will work), but attempted invocation of\nPostgreSQL stored procedures using such code fails (since PGJDBC\nalways transforms the JDBC escape call syntax into a SELECT statement,\nwhich can only invoke functions, not stored procedures).\n\nInability to support stored procedure invocation via the JDBC escape\ncall syntax might also be viewed as a(nother) migration issue, for\nthose wishing to migrate to PostgreSQL from another database vendor.\n\nThe suggested optional connection setting for JDBC escape call syntax\ncould be more granular than true/false.\nFor example, it could support different modes to:\n- map to SELECT always (default)\n- map to CALL if no return value\n i.e. when \"{call …}\" is specified\n- map to CALL if no return or output parameters\n i.e. 
when \"{call …}\" is specified, and no out parameters are registered\n- map to CALL always\n\n\nGreg Nancarrow\nFujitsu Australia\n\n\nOn Thu, Aug 22, 2019 at 3:03 PM David Rader <david.rader@gmail.com> wrote:\n>\n> Hello -\n>\n> Since Procedures were introduced in PG 11, the workaround to invoke them with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as a SQL statement, not a specific stored routine invocation.\n>\n> 1) When using transaction control inside the stored proc, an exception is generated if autoCommit is false - see example code attached.\n> Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR: invalid transaction termination\n>\n> 2) Output parameters are not mapped as parameters, and app code cannot use registerOutputParameter or getInt() style retrieval. Instead, outputs are left in the result set and app code must retrieve the result and pull, creating a big difference between how Procedures and Functions are invoked.\n>\n> I propose improving support for procedures. Either:\n> (1) add support for \"CALL proc()\" to be treated as a routine invocation so that output parameters can be registered, no begin transaction is silently sent from driver, and calling a procedure and calling a function would be very similar (only differing in function still using the {call} escape syntax.\n> or\n> (2) change the {call} syntax to optionally support procedures. {? = call} would still be mapped to functions. Add a connection setting to control this change, and make default false, so that default stays backwards compatible with pre pg11 functionality.\n>\n> Thoughts?\n>\n>\n>\n>",
"msg_date": "Thu, 22 Aug 2019 06:45:23 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "Dave,\n\nThanks for responding.\n\nYou said that \"Currently you cannot call a procedure inside a transaction\".\nThat doesn't seem to be true. You CAN call a procedure inside a\ntransaction, provided that the procedure doesn't execute transaction\ncontrol statements (e.g. COMMIT/ROLLBACK).\n\n From the Notes section of the PostgreSQL CALL documentation:\n\n\"If CALL is executed in a transaction block, then the called procedure\ncannot execute transaction control statements. Transaction control\nstatements are only allowed if CALL is executed in its own\ntransaction.\"\n\nSo you can definitely call a procedure inside a transaction.\n\nA stored procedure is the natural fit for complex reusable processing\n(complex logic and data access), whereas a stored function is a\nroutine that returns values.\nI'm sure that new users who start using PostgreSQL 11+, and those\nmigrating from other DBMSs, would have that kind of viewpoint. They'd\nnaturally be creating stored procedures for various complex reusable\nprocessing (that does not necessarily need to commit/rollback\ntransactions within the procedure).\nCurrently, they wouldn't be able to successfully invoke those stored\nprocedures with PGJDBC using the escape call syntax (\"ERROR: xxxx is a\nprocedure Hint: To call a procedure, use CALL\"), and there would be\nproblems (already stated) with resorting to using native CALL with\nPGJDBC. 
It's not a great user experience.\nForcing the user to use a (void) function instead of a stored\nprocedure for such cases, in order to be able to invoke it from\nPGJDBC, could be seen as more of a kludge!\n\n\nGreg\n\nOn Thu, Aug 22, 2019 at 8:45 PM Dave Cramer <pg@fastcrypt.com> wrote:\n>\n> Greg,\n>\n> While I understand the frustration I think more work needs to be done by the server to make this a useful feature.\n> Currently you cannot call a procedure inside a transaction and from what I can see here https://docs.jboss.org/hibernate/core/3.3/reference/en/html/transactions.html specifically \"Hibernate disables, or expects the application server to disable, auto-commit mode immediately. Database transactions are never optional. All communication with a database has to occur inside a transaction.\" I fail to see how this would work?\n>\n> AFAIK we need autonomous transactions to be implemented and ideally some mechanism to call functions or procedures with the same syntax.\n>\n> I think we need to be pressing the people who committed procedures to complete the work they started. Fixing this in the drivers will just end up being a kludge at best.\n>\n> Dave Cramer\n>\n> davec@postgresintl.com\n> www.postgresintl.com\n>\n>\n> On Thu, 22 Aug 2019 at 01:39, Greg Nancarrow <gregn4422@gmail.com> wrote:\n>>\n>> >(2) change the {call} syntax to optionally support procedures. {? = call}\n>> >would still be mapped to functions. 
Add a connection setting to control\n>> >this change, and make default false, so that default stays backwards\n>> >compatible with pre pg11 functionality.\n>>\n>> Given that stored procedures were added in PG11, but PGJDBC doesn't\n>> support calling them using JDBC's escape call syntax (\"{call ...}\"), I\n>> agree that an option to allow it is required, and would be beneficial.\n>>\n>> Resorting to using the Postgres-native \"CALL ...\" is not always\n>> viable, for reasons such as:\n>> - It's not really desirable to use \"non-portable\" JDBC code.\n>> - You can't use PGJDBC and native \"CALL ...\" to invoke PostgreSQL\n>> stored procedures that have INOUT arguments.\n>> For example, if you attempt to invoke registerOutParameter() on a\n>> CallableStatement in this case, it results in the following error:\n>> This statement does not declare an OUT parameter. Use { ?=\n>> call ... } to declare one.\n>> - Some software such as ORMs (e.g. JPA implementations like Hibernate,\n>> and similar) generate JDBC code that uses the JDBC escape call syntax\n>> (with the expectation that it will work), but attempted invocation of\n>> PostgreSQL stored procedures using such code fails (since PGJDBC\n>> always transforms the JDBC escape call syntax into a SELECT statement,\n>> which can only invoke functions, not stored procedures).\n>>\n>> Inability to support stored procedure invocation via the JDBC escape\n>> call syntax might also be viewed as a(nother) migration issue, for\n>> those wishing to migrate to PostgreSQL from another database vendor.\n>>\n>> The suggested optional connection setting for JDBC escape call syntax\n>> could be more granular than true/false.\n>> For example, it could support different modes to:\n>> - map to SELECT always (default)\n>> - map to CALL if no return value\n>> i.e. when \"{call …}\" is specified\n>> - map to CALL if no return or output parameters\n>> i.e. 
when \"{call …}\" is specified, and no out parameters are registered\n>> - map to CALL always\n>>\n>>\n>> Greg Nancarrow\n>> Fujitsu Australia\n>>\n>>\n>> On Thu, Aug 22, 2019 at 3:03 PM David Rader <david.rader@gmail.com> wrote:\n>> >\n>> > Hello -\n>> >\n>> > Since Procedures were introduced in PG 11, the workaround to invoke them with JDBC is to send the native \"CALL proc()\" SQL and let it be treated as a SQL statement, not a specific stored routine invocation.\n>> >\n>> > 1) When using transaction control inside the stored proc, an exception is generated if autoCommit is false - see example code attached.\n>> > Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR: invalid transaction termination\n>> >\n>> > 2) Output parameters are not mapped as parameters, and app code cannot use registerOutputParameter or getInt() style retrieval. Instead, outputs are left in the result set and app code must retrieve the result and pull, creating a big difference between how Procedures and Functions are invoked.\n>> >\n>> > I propose improving support for procedures. Either:\n>> > (1) add support for \"CALL proc()\" to be treated as a routine invocation so that output parameters can be registered, no begin transaction is silently sent from driver, and calling a procedure and calling a function would be very similar (only differing in function still using the {call} escape syntax.\n>> > or\n>> > (2) change the {call} syntax to optionally support procedures. {? = call} would still be mapped to functions. Add a connection setting to control this change, and make default false, so that default stays backwards compatible with pre pg11 functionality.\n>> >\n>> > Thoughts?\n>> >\n>> >\n>> >\n>> >\n>>\n>>\n\n\n",
"msg_date": "Fri, 23 Aug 2019 15:29:00 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Fri, 23 Aug 2019 at 01:29, Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> Dave,\n>\n> Thanks for responding.\n>\n> You said that \"Currently you cannot call a procedure inside a transaction\".\n> That doesn't seem to be true. You CAN call a procedure inside a\n> transaction, provided that the procedure doesn't execute transaction\n> control statements (e.g. COMMIT/ROLLBACK).\n>\n> From the Notes section of the PostgreSQL CALL documentation:\n>\n> \"If CALL is executed in a transaction block, then the called procedure\n> cannot execute transaction control statements. Transaction control\n> statements are only allowed if CALL is executed in its own\n> transaction.\"\n>\n> So you can definitely call a procedure inside a transaction.\n>\n\nYes I mis-spoke, David R pointed this out to me off list.\n\n>\n> A stored procedure is the natural fit for complex reusable processing\n> (complex logic and data access), whereas a stored function is a\n> routine that returns values.\n>\nHistorically functions in PostgreSQL have done both.\n\n\n> I'm sure that new users who start using PostgreSQL 11+, and those\n> migrating from other DBMSs, would have that kind of viewpoint. They'd\n> naturally be creating stored procedures for various complex reusable\n> processing (that does not necessarily need to commit/rollback\n> transactions within the procedure).\n>\n\nI presume you have use cases that do not do transactions ?\n\n\n> Currently, they wouldn't be able to successfully invoke those stored\n> procedures with PGJDBC using the escape call syntax (\"ERROR: xxxx is a\n> procedure Hint: To call a procedure, use CALL\"), and there would be\n> problems (already stated) with resorting to using native CALL with\n> PGJDBC. 
It's not a great user experience.\n>\n\n\n> Forcing the user to use a (void) function instead of a stored\n> procedure for such cases, in order to be able to invoke it from\n> PGJDBC, could be seen as more of a kludge!\n>\n\nWell we have successfully been doing that for a number of years now.\n\nI'd still like to see pressure put on the server to fix this problem. If\nthe interfaces continually work around deficiencies then nothing gets done\nin the server. It's my (and others) opinion that this \"feature\" never\nshould have been committed in the half baked state it was.\n\nThat said I'd consider a PR that used a connection parameter to force\ncalling procedures.\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n>\n>\n> Greg\n>\n> On Thu, Aug 22, 2019 at 8:45 PM Dave Cramer <pg@fastcrypt.com> wrote:\n> >\n> > Greg,\n> >\n> > While I understand the frustration I think more work needs to be done by\n> the server to make this a useful feature.\n> > Currently you cannot call a procedure inside a transaction and from what\n> I can see here\n> https://docs.jboss.org/hibernate/core/3.3/reference/en/html/transactions.html\n> specifically \"Hibernate disables, or expects the application server to\n> disable, auto-commit mode immediately. Database transactions are never\n> optional. All communication with a database has to occur inside a\n> transaction.\" I fail to see how this would work?\n> >\n> > AFAIK we need autonomous transactions to be implemented and ideally some\n> mechanism to call functions or procedures with the same syntax.\n> >\n> > I think we need to be pressing the people who committed procedures to\n> complete the work they started. 
Fixing this in the drivers will just end up\n> being a kludge at best.\n> >\n> > Dave Cramer\n> >\n> > davec@postgresintl.com\n> > www.postgresintl.com\n> >\n> >\n> > On Thu, 22 Aug 2019 at 01:39, Greg Nancarrow <gregn4422@gmail.com>\n> wrote:\n> >>\n> >> >(2) change the {call} syntax to optionally support procedures. {? =\n> call}\n> >> >would still be mapped to functions. Add a connection setting to control\n> >> >this change, and make default false, so that default stays backwards\n> >> >compatible with pre pg11 functionality.\n> >>\n> >> Given that stored procedures were added in PG11, but PGJDBC doesn't\n> >> support calling them using JDBC's escape call syntax (\"{call ...}\"), I\n> >> agree that an option to allow it is required, and would be beneficial.\n> >>\n> >> Resorting to using the Postgres-native \"CALL ...\" is not always\n> >> viable, for reasons such as:\n> >> - It's not really desirable to use \"non-portable\" JDBC code.\n> >> - You can't use PGJDBC and native \"CALL ...\" to invoke PostgreSQL\n> >> stored procedures that have INOUT arguments.\n> >> For example, if you attempt to invoke registerOutParameter() on a\n> >> CallableStatement in this case, it results in the following error:\n> >> This statement does not declare an OUT parameter. Use { ?=\n> >> call ... } to declare one.\n> >> - Some software such as ORMs (e.g. 
JPA implementations like Hibernate,\n> >> and similar) generate JDBC code that uses the JDBC escape call syntax\n> >> (with the expectation that it will work), but attempted invocation of\n> >> PostgreSQL stored procedures using such code fails (since PGJDBC\n> >> always transforms the JDBC escape call syntax into a SELECT statement,\n> >> which can only invoke functions, not stored procedures).\n> >>\n> >> Inability to support stored procedure invocation via the JDBC escape\n> >> call syntax might also be viewed as a(nother) migration issue, for\n> >> those wishing to migrate to PostgreSQL from another database vendor.\n> >>\n> >> The suggested optional connection setting for JDBC escape call syntax\n> >> could be more granular than true/false.\n> >> For example, it could support different modes to:\n> >> - map to SELECT always (default)\n> >> - map to CALL if no return value\n> >> i.e. when \"{call …}\" is specified\n> >> - map to CALL if no return or output parameters\n> >> i.e. when \"{call …}\" is specified, and no out parameters are\n> registered\n> >> - map to CALL always\n> >>\n> >>\n> >> Greg Nancarrow\n> >> Fujitsu Australia\n> >>\n> >>\n> >> On Thu, Aug 22, 2019 at 3:03 PM David Rader <david.rader@gmail.com>\n> wrote:\n> >> >\n> >> > Hello -\n> >> >\n> >> > Since Procedures were introduced in PG 11, the workaround to invoke\n> them with JDBC is to send the native \"CALL proc()\" SQL and let it be\n> treated as a SQL statement, not a specific stored routine invocation.\n> >> >\n> >> > 1) When using transaction control inside the stored proc, an\n> exception is generated if autoCommit is false - see example code attached.\n> >> > Exception in thread \"main\" org.postgresql.util.PSQLException: ERROR:\n> invalid transaction termination\n> >> >\n> >> > 2) Output parameters are not mapped as parameters, and app code\n> cannot use registerOutputParameter or getInt() style retrieval. 
Instead,\n> outputs are left in the result set and app code must retrieve the result\n> and pull, creating a big difference between how Procedures and Functions\n> are invoked.\n> >> >\n> >> > I propose improving support for procedures. Either:\n> >> > (1) add support for \"CALL proc()\" to be treated as a routine\n> invocation so that output parameters can be registered, no begin\n> transaction is silently sent from driver, and calling a procedure and\n> calling a function would be very similar (only differing in function still\n> using the {call} escape syntax.\n> >> > or\n> >> > (2) change the {call} syntax to optionally support procedures. {? =\n> call} would still be mapped to functions. Add a connection setting to\n> control this change, and make default false, so that default stays\n> backwards compatible with pre pg11 functionality.\n> >> >\n> >> > Thoughts?\n> >> >\n> >> >\n> >> >\n> >> >\n> >>\n> >>\n>",
"msg_date": "Fri, 23 Aug 2019 06:25:15 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": ">>\n>> I'm sure that new users who start using PostgreSQL 11+, and those\n>> migrating from other DBMSs, would have that kind of viewpoint. They'd\n>> naturally be creating stored procedures for various complex reusable\n>> processing (that does not necessarily need to commit/rollback\n>> transactions within the procedure).\n>\n>\n> I presume you have use cases that do not do transactions ?\n>\n\nWhat I was getting at here is that stored procedures can participate\nin transactions, without having to control them (i.e. without issuing\nCOMMIT/ROLLBACK themselves).\nFor example, a client JDBC-based application might start a transaction\n(auto-commit=FALSE), and invoke a couple of stored procedures as part\nof the transaction, and then COMMIT the transaction (or ROLLBACK if an\nexception is raised). The stored procedures in this case might\nUPDATE/INSERT records; they are participating in the transaction, but\nnot explicitly controlling it.\n\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 26 Aug 2019 18:01:04 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Mon, 26 Aug 2019 at 04:01, Greg Nancarrow <gregn4422@gmail.com> wrote:\n\n> >>\n> >> I'm sure that new users who start using PostgreSQL 11+, and those\n> >> migrating from other DBMSs, would have that kind of viewpoint. They'd\n> >> naturally be creating stored procedures for various complex reusable\n> >> processing (that does not necessarily need to commit/rollback\n> >> transactions within the procedure).\n> >\n> >\n> > I presume you have use cases that do not do transactions ?\n> >\n>\n> What I was getting at here is that stored procedures can participate\n> in transactions, without having to control them (i.e. without issuing\n> COMMIT/ROLLBACK themselves).\n> For example, a client JDBC-based application might start a transaction\n> (auto-commit=FALSE), and invoke a couple of stored procedures as part\n> of the transaction, and then COMMIT the transaction (or ROLLBACK if an\n> exception is raised). The stored procedures in this case might\n> UPDATE/INSERT records; they are participating in the transaction, but\n> not explicitly controlling it.\n>\n\nYes, I do understand that. My issue is that without autonomous transactions\nprocedures are just functions with a different syntax.\n\nAs I said, I'd entertain a connection parameter that switched the CALL to\ncall procedures but ideally you'd complain to the server folks to make\nProcedures useful.\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\nOn Mon, 26 Aug 2019 at 04:01, Greg Nancarrow <gregn4422@gmail.com> wrote:>>\n>> I'm sure that new users who start using PostgreSQL 11+, and those\n>> migrating from other DBMSs, would have that kind of viewpoint. 
They'd\n>> naturally be creating stored procedures for various complex reusable\n>> processing (that does not necessarily need to commit/rollback\n>> transactions within the procedure).\n>\n>\n> I presume you have use cases that do not do transactions ?\n>\n\nWhat I was getting at here is that stored procedures can participate\nin transactions, without having to control them (i.e. without issuing\nCOMMIT/ROLLBACK themselves).\nFor example, a client JDBC-based application might start a transaction\n(auto-commit=FALSE), and invoke a couple of stored procedures as part\nof the transaction, and then COMMIT the transaction (or ROLLBACK if an\nexception is raised). The stored procedures in this case might\nUPDATE/INSERT records; they are participating in the transaction, but\nnot explicitly controlling it.Yes, I do understand that. My issue is that without autonomous transactions procedures are just functions with a different syntax.As I said, I'd entertain a connection parameter that switched the CALL to call procedures but ideally you'd complain to the server folks to make Procedures useful.Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 26 Aug 2019 13:02:24 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "Dave Cramer wrote:\n> As I said, I'd entertain a connection parameter that switched the\n> CALL to call procedures but ideally you'd complain to the server\n> folks to make Procedures useful.\n\nApart from the obvious problem that procedures make life hard for the\nJDBC driver, because it does not know if it shall render a call as\nSELECT or CALL:\nWhat is missing in PostgreSQL procedures to make them useful?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 26 Aug 2019 19:43:45 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Mon, 26 Aug 2019 at 13:43, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> Dave Cramer wrote:\n> > As I said, I'd entertain a connection parameter that switched the\n> > CALL to call procedures but ideally you'd complain to the server\n> > folks to make Procedures useful.\n>\n> Apart from the obvious problem that procedures make life hard for the\n> JDBC driver, because it does not know if it shall render a call as\n> SELECT or CALL:\n> What is missing in PostgreSQL procedures to make them useful?\n>\n\nbeing able to use transactions inside a procedure inside a transaction.\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n\n>\n> Yours,\n> Laurenz Albe\n>\n>\n\nOn Mon, 26 Aug 2019 at 13:43, Laurenz Albe <laurenz.albe@cybertec.at> wrote:Dave Cramer wrote:\n> As I said, I'd entertain a connection parameter that switched the\n> CALL to call procedures but ideally you'd complain to the server\n> folks to make Procedures useful.\n\nApart from the obvious problem that procedures make life hard for the\nJDBC driver, because it does not know if it shall render a call as\nSELECT or CALL:\nWhat is missing in PostgreSQL procedures to make them useful?being able to use transactions inside a procedure inside a transaction.Dave Cramerdavec@postgresintl.comwww.postgresintl.com \n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 26 Aug 2019 13:48:23 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "[CC to -hackers]\nDave Cramer wrote:\n> On Mon, 26 Aug 2019 at 13:43, Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> > Dave Cramer wrote:\n> > > As I said, I'd entertain a connection parameter that switched the\n> > > CALL to call procedures but ideally you'd complain to the server\n> > > folks to make Procedures useful.\n> > \n> > Apart from the obvious problem that procedures make life hard for \n> > the JDBC driver, because it does not know if it shall render a call\n> > as SELECT or CALL:\n> > What is missing in PostgreSQL procedures to make them useful?\n> \n> being able to use transactions inside a procedure inside a \n> transaction.\n\ntest=> CREATE OR REPLACE PROCEDURE testproc() LANGUAGE plpgsql AS\n $$BEGIN PERFORM 42; COMMIT; PERFORM 'x'; END;$$;\nCREATE PROCEDURE\ntest=> CALL testproc();\nCALL\ntest=> BEGIN;\nBEGIN\ntest=> CALL testproc();\nERROR: invalid transaction termination\nCONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n\nOops.\nI find that indeed surprising.\n\nWhat is the rationale for this?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 26 Aug 2019 20:06:39 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "[CC to -hackers]\nDave Cramer wrote:\n> On Mon, 26 Aug 2019 at 13:43,\nLaurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> > Dave Cramer wrote:\n>\n> > As I said, I'd entertain a connection parameter that switched the\n>\n> > CALL to call procedures but ideally you'd complain to the server\n> >\n> folks to make Procedures useful.\n> > \n> > Apart from the obvious\nproblem that procedures make life hard for \n> > the JDBC driver, because\nit does not know if it shall render a call\n> > as SELECT or CALL:\n> >\nWhat is missing in PostgreSQL procedures to make them useful?\n> \n> being\nable to use transactions inside a procedure inside a \n> transaction.\n\ntest=> CREATE OR REPLACE PROCEDURE testproc() LANGUAGE plpgsql AS\n $$BEGIN PERFORM 42; COMMIT; PERFORM 'x'; END;$$;\nCREATE PROCEDURE\ntest=> CALL testproc();\nCALL\ntest=> BEGIN;\nBEGIN\ntest=> CALL testproc();\nERROR: invalid transaction termination\nCONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n\nOops.\nI find that indeed surprising.\n\nWhat is the rationale for this?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 26 Aug 2019 20:08:05 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Dave Cramer wrote:\n> test=> BEGIN;\n> BEGIN\n> test=> CALL testproc();\n> ERROR: invalid transaction termination\n> CONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n\n> What is the rationale for this?\n\nA procedure shouldn't be able to force commit of the surrounding\ntransaction.\n\nAs Dave noted, what would be nicer is for procedures to be able\nto start and commit autonomous transactions, without affecting\nthe state of the outer transaction. We haven't got that though,\nand it looks like a lot of work to get there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2019 14:14:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On Mon, 26 Aug 2019 at 14:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > Dave Cramer wrote:\n> > test=> BEGIN;\n> > BEGIN\n> > test=> CALL testproc();\n> > ERROR: invalid transaction termination\n> > CONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n>\n> > What is the rationale for this?\n>\n> A procedure shouldn't be able to force commit of the surrounding\n> transaction.\n>\n> As Dave noted, what would be nicer is for procedures to be able\n> to start and commit autonomous transactions, without affecting\n> the state of the outer transaction. We haven't got that though,\n> and it looks like a lot of work to get there.\n>\n\nI'm less than motivated to hack the driver to make something work here\nuntil we finish the server feature.\n\nWho knows what that might bring ?\n\n\nDave Cramer\n\ndavec@postgresintl.com\nwww.postgresintl.com\n\n\n>\n>\n\nOn Mon, 26 Aug 2019 at 14:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Dave Cramer wrote:\n> test=> BEGIN;\n> BEGIN\n> test=> CALL testproc();\n> ERROR: invalid transaction termination\n> CONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n\n> What is the rationale for this?\n\nA procedure shouldn't be able to force commit of the surrounding\ntransaction.\n\nAs Dave noted, what would be nicer is for procedures to be able\nto start and commit autonomous transactions, without affecting\nthe state of the outer transaction. We haven't got that though,\nand it looks like a lot of work to get there.I'm less than motivated to hack the driver to make something work here until we finish the server feature.Who knows what that might bring ?Dave Cramerdavec@postgresintl.comwww.postgresintl.com",
"msg_date": "Mon, 26 Aug 2019 14:31:44 -0400",
"msg_from": "Dave Cramer <pg@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
},
{
"msg_contents": "On 2019-08-26 20:08, Laurenz Albe wrote:\n> test=> CREATE OR REPLACE PROCEDURE testproc() LANGUAGE plpgsql AS\n> $$BEGIN PERFORM 42; COMMIT; PERFORM 'x'; END;$$;\n> CREATE PROCEDURE\n> test=> CALL testproc();\n> CALL\n> test=> BEGIN;\n> BEGIN\n> test=> CALL testproc();\n> ERROR: invalid transaction termination\n> CONTEXT: PL/pgSQL function testproc() line 1 at COMMIT\n> \n> Oops.\n> I find that indeed surprising.\n> \n> What is the rationale for this?\n\nIt's mostly an implementation restriction. You would need to teach\nSPI_commit() and SPI_rollback() to manipulate the top-level transaction\nblock state appropriately and carefully.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 28 Aug 2019 18:44:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedure support improvements"
}
]
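The rule discussed in the thread above — a procedure's COMMIT succeeds when CALL is issued at top level but fails with "invalid transaction termination" inside an explicit BEGIN block — can be sketched as a toy state machine. This is an illustration only, not PostgreSQL source; the class and exception names are invented for the sketch.

```python
# Toy model of the transaction-control rule observed in the thread:
# a procedure may end the transaction (COMMIT) only when the CALL
# itself owns the transaction, i.e. there is no surrounding BEGIN.

class InvalidTransactionTermination(Exception):
    pass

class Session:
    def __init__(self):
        self.in_explicit_block = False  # flipped by an explicit BEGIN

    def begin(self):
        self.in_explicit_block = True

    def call_procedure_with_commit(self):
        # Mirrors the observed behavior: CALL at top level succeeds,
        # CALL inside BEGIN ... fails at the procedure's COMMIT.
        if self.in_explicit_block:
            raise InvalidTransactionTermination(
                "invalid transaction termination")
        return "CALL"

s = Session()
print(s.call_procedure_with_commit())  # top-level CALL: succeeds
s.begin()
try:
    s.call_procedure_with_commit()     # inside BEGIN: rejected
except InvalidTransactionTermination as e:
    print(e)
```

As Peter Eisentraut notes later in the thread, the real restriction lives in SPI_commit()/SPI_rollback(), which would have to manipulate the top-level transaction block state to lift it; the sketch only captures the externally visible rule.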
[
{
"msg_contents": "Hello\n\nCancel/terminate requests are held off during \"PREPARE TRANSACTION\"\nprocessing in function PrepareTransaction(). However, a subroutine invoked\nby PrepareTransaction() may perform elog(ERROR) or elog(FATAL).\n\nAnd if that happens after PREPARE WAL record is written and before\ntransaction state is cleaned up, normal abort processing is triggered, i.e.\nAbortTransaction(). It is not correct to perform abort transaction\nworkflow against a transaction that is already marked as prepared. A\nprepared transaction should only be finished using \"COMMIT/ROLLBACK\nPREPARED\" operation.\n\nI tried injecting an elog(ERROR) at the end of EndPrepare() and that\nresulted in a PANIC at some point.\n\nBefore delving into more details, I want to ascertain that this is a valid\nproblem to solve. Is the above problem worth worrying about?\n\nAsim\n\nHelloCancel/terminate requests are held off during \"PREPARE TRANSACTION\" processing in function PrepareTransaction(). However, a subroutine invoked by PrepareTransaction() may perform elog(ERROR) or elog(FATAL).And if that happens after PREPARE WAL record is written and before transaction state is cleaned up, normal abort processing is triggered, i.e. AbortTransaction(). It is not correct to perform abort transaction workflow against a transaction that is already marked as prepared. A prepared transaction should only be finished using \"COMMIT/ROLLBACK PREPARED\" operation.I tried injecting an elog(ERROR) at the end of EndPrepare() and that resulted in a PANIC at some point.Before delving into more details, I want to ascertain that this is a valid problem to solve. Is the above problem worth worrying about?Asim",
"msg_date": "Wed, 17 Jul 2019 18:16:55 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "ERROR after writing PREPARE WAL record"
},
{
"msg_contents": "Asim R P <apraveen@pivotal.io> writes:\n> Cancel/terminate requests are held off during \"PREPARE TRANSACTION\"\n> processing in function PrepareTransaction(). However, a subroutine invoked\n> by PrepareTransaction() may perform elog(ERROR) or elog(FATAL).\n\nDoing anything that's likely to fail in the post-commit code path is\na Bad Idea (TM). There's no good recovery avenue, so the fact that\nyou generally end up at a PANIC is expected/intentional.\n\nThe correct response, if you notice code doing that, is to fix it so\nit doesn't do that. Typically the right answer is to move the\nfailure-prone operation to pre-commit processing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 09:37:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ERROR after writing PREPARE WAL record"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Asim R P <apraveen@pivotal.io> writes:\n> > Cancel/terminate requests are held off during \"PREPARE TRANSACTION\"\n> > processing in function PrepareTransaction(). However, a subroutine\ninvoked\n> > by PrepareTransaction() may perform elog(ERROR) or elog(FATAL).\n>\n> The correct response, if you notice code doing that, is to fix it so\n> it doesn't do that. Typically the right answer is to move the\n> failure-prone operation to pre-commit processing.\n\nThank you for the response. There is nothing particularly alarming. There\nis one case in LWLockAcquire that may error out if (num_held_lwlocks >=\nMAX_SIMUL_LWLOCKS). This problem also exists in CommitTransaction() and\nAbortTransaction() code paths. Then there is arbitrary add-on code\nregistered as Xact_callbacks.\n\nSyncRepWaitForLSN() directly checks ProcDiePending and QueryCancelPending\nwithout going through CHECK_FOR_INTERRUPTS and that is for good reason.\nMoreover, it only emits a WARNING, so no problem there.\n\nAsim\n\nOn Wed, Jul 17, 2019 at 7:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> Asim R P <apraveen@pivotal.io> writes:> > Cancel/terminate requests are held off during \"PREPARE TRANSACTION\"> > processing in function PrepareTransaction(). However, a subroutine invoked> > by PrepareTransaction() may perform elog(ERROR) or elog(FATAL).>> The correct response, if you notice code doing that, is to fix it so> it doesn't do that. Typically the right answer is to move the> failure-prone operation to pre-commit processing.Thank you for the response. There is nothing particularly alarming. There is one case in LWLockAcquire that may error out if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS). This problem also exists in CommitTransaction() and AbortTransaction() code paths. 
Then there is arbitrary add-on code registered as Xact_callbacks.SyncRepWaitForLSN() directly checks ProcDiePending and QueryCancelPending without going through CHECK_FOR_INTERRUPTS and that is for good reason. Moreover, it only emits a WARNING, so no problem there.Asim",
"msg_date": "Thu, 18 Jul 2019 09:38:13 +0530",
"msg_from": "Asim R P <apraveen@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: ERROR after writing PREPARE WAL record"
}
]
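The hazard described in this thread — an elog(ERROR) raised between writing the PREPARE WAL record and cleaning up the transaction state, after which ordinary abort processing is no longer legal — can be sketched as a toy state machine. This is an illustration, not PostgreSQL code; the class and method names are invented, and PostgreSQL's actual escalation is reported as a PANIC.

```python
# Toy model of the two-phase-commit hazard in the thread: once the
# PREPARE record is durable, a prepared transaction may only be
# finished via COMMIT/ROLLBACK PREPARED, so a plain abort at that
# point can only escalate.

class Panic(Exception):
    pass

class TwoPhaseTxn:
    def __init__(self):
        self.state = "in-progress"

    def end_prepare(self):
        # models EndPrepare(): PREPARE WAL record written and flushed
        self.state = "prepared"

    def on_error(self):
        # models elog(ERROR) triggering normal abort processing
        if self.state == "prepared":
            # AbortTransaction() must not run against a prepared txn
            raise Panic("cannot abort a transaction already prepared")
        self.state = "aborted"

t1 = TwoPhaseTxn()
t1.on_error()              # before PREPARE: plain abort is fine
print(t1.state)

t2 = TwoPhaseTxn()
t2.end_prepare()
try:
    t2.on_error()          # after PREPARE: must escalate
except Panic as e:
    print(e)
```

Tom Lane's point in the thread is that the right fix is to keep failure-prone work out of the window the second case models, i.e. move it to pre-commit processing rather than trying to recover afterward.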
[
{
"msg_contents": "I tried to run the contrib/sepgsql tests, following the instructions,\non a recently-set-up Fedora 30 machine. I've done that successfully\non previous Fedora releases, but it's no go with F30.\n\nFirst off, building the sepgsql-regtest.pp policy file spews\na bunch of complaints that I don't recall having seen before:\n\n$ make -f /usr/share/selinux/devel/Makefile \n/usr/share/selinux/devel/include/services/container.if:14: Error: duplicate definition of container_runtime_domtrans(). Original definition on 14.\n/usr/share/selinux/devel/include/services/container.if:41: Error: duplicate definition of container_runtime_run(). Original definition on 41.\n/usr/share/selinux/devel/include/services/container.if:61: Error: duplicate definition of container_runtime_exec(). Original definition on 61.\n/usr/share/selinux/devel/include/services/container.if:80: Error: duplicate definition of container_read_state(). Original definition on 80.\n... more of the same ...\n/usr/share/selinux/devel/include/services/container.if:726: Error: duplicate definition of docker_stream_connect(). Original definition on 726.\n/usr/share/selinux/devel/include/services/container.if:730: Error: duplicate definition of docker_spc_stream_connect(). Original definition on 730.\n/usr/share/selinux/devel/include/services/container.if:744: Error: duplicate definition of container_spc_read_state(). Original definition on 744.\n/usr/share/selinux/devel/include/services/container.if:763: Error: duplicate definition of container_domain_template(). Original definition on 763.\n/usr/share/selinux/devel/include/services/container.if:791: Error: duplicate definition of container_spc_rw_pipes(). 
Original definition on 791.\nCompiling targeted sepgsql-regtest module\nCreating targeted sepgsql-regtest.pp policy package\nrm tmp/sepgsql-regtest.mod tmp/sepgsql-regtest.mod.fc\n$\n\nThe sepgsql-regtest.pp file is created anyway, and it seems to\nload into the kernel OK, so maybe these are harmless? Or not.\n\nI got through the remaining steps OK, until getting to actually\nrunning the test script:\n\n$ ./test_sepgsql \n\n============== checking selinux environment ==============\nchecking for matchpathcon ... ok\nchecking for runcon ... ok\nchecking for sestatus ... ok\nchecking current user domain ... unconfined_t\nchecking selinux operating mode ... enforcing\nchecking for sepgsql-regtest policy ... ok\nchecking whether policy is enabled ... on\non\nchecking whether we can run psql ... failed\n\n/home/tgl/testversion/bin/psql must be executable from the\nsepgsql_regtest_user_t domain. That domain has restricted privileges\ncompared to unconfined_t, so the problem may be the psql file's\nSELinux label. Try\n\n $ sudo restorecon -R /home/tgl/testversion/bin\n\nOr, using chcon\n\n $ sudo chcon -t user_home_t /home/tgl/testversion/bin/psql\n\n\n(BTW, what's that extra \"on\" after \"checking whether policy is enabled\"?)\n\npsql does already have that labeling according to \"ls -Z\",\nso unsurprisingly, the recommended remediation doesn't help.\n\nTrying to drill down a bit, I did what the script is doing:\n\n$ runcon -t sepgsql_regtest_user_t psql --help\npsql: fatal: could not look up effective user ID 1000: user does not exist\n\nBut uid 1000 is me according to /etc/passwd and according to \"id\":\n\n$ id\nuid=1000(tgl) gid=1000(tgl) groups=1000(tgl),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n\nso there's nothing much wrong with having that as euid.\n\nI speculate that the policy is forbidding sepgsql_regtest_user_t\nfrom reading /etc/passwd. Perhaps this is fallout from the\ncompile problems reported for the policy module? 
But I'm way\nout of my depth here.\n\nI'm pretty sure the test recipe last worked for me on F28.\nOff to try F29.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:32:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "I wrote:\n> I tried to run the contrib/sepgsql tests, following the instructions,\n> on a recently-set-up Fedora 30 machine. I've done that successfully\n> on previous Fedora releases, but it's no go with F30.\n> ...\n> I'm pretty sure the test recipe last worked for me on F28.\n> Off to try F29.\n\nOn Fedora 29, compiling the policy file spews what look like exactly\nthe same \"errors\". Everything after that works.\n\nI don't have a functioning F28 installation right now, so I can't\ndouble-check whether the errors appear on that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 12:54:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "On 7/17/19 12:54 PM, Tom Lane wrote:\n> I wrote:\n>> I tried to run the contrib/sepgsql tests, following the instructions,\n>> on a recently-set-up Fedora 30 machine. I've done that successfully\n>> on previous Fedora releases, but it's no go with F30.\n>> ...\n>> I'm pretty sure the test recipe last worked for me on F28.\n>> Off to try F29.\n> \n> On Fedora 29, compiling the policy file spews what look like exactly\n> the same \"errors\". Everything after that works.\n> \n> I don't have a functioning F28 installation right now, so I can't\n> double-check whether the errors appear on that.\n\n\nThanks for the report -- Mike Palmiotto said he would take a look as\nsoon as he can.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:57:21 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 12:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I tried to run the contrib/sepgsql tests, following the instructions,\n> on a recently-set-up Fedora 30 machine. I've done that successfully\n> on previous Fedora releases, but it's no go with F30.\n>\n> First off, building the sepgsql-regtest.pp policy file spews\n> a bunch of complaints that I don't recall having seen before:\n>\n> $ make -f /usr/share/selinux/devel/Makefile\n> /usr/share/selinux/devel/include/services/container.if:14: Error: duplicate definition of container_runtime_domtrans(). Original definition on 14.\n> /usr/share/selinux/devel/include/services/container.if:41: Error: duplicate definition of container_runtime_run(). Original definition on 41.\n> /usr/share/selinux/devel/include/services/container.if:61: Error: duplicate definition of container_runtime_exec(). Original definition on 61.\n> /usr/share/selinux/devel/include/services/container.if:80: Error: duplicate definition of container_read_state(). Original definition on 80.\n> ... more of the same ...\n> /usr/share/selinux/devel/include/services/container.if:726: Error: duplicate definition of docker_stream_connect(). Original definition on 726.\n> /usr/share/selinux/devel/include/services/container.if:730: Error: duplicate definition of docker_spc_stream_connect(). Original definition on 730.\n> /usr/share/selinux/devel/include/services/container.if:744: Error: duplicate definition of container_spc_read_state(). Original definition on 744.\n> /usr/share/selinux/devel/include/services/container.if:763: Error: duplicate definition of container_domain_template(). Original definition on 763.\n> /usr/share/selinux/devel/include/services/container.if:791: Error: duplicate definition of container_spc_rw_pipes(). 
Original definition on 791.\n\nThese errors are due to a conflict between \"container-selinux\" and\n\"selinux-policy-devel.\" With both packages installed, you will see the\ncontainer interface file in both\n/usr/share/selinux/devel/include/contrib and\n/usr/share/selinux/devel/include/services:\n\n% sudo find /usr/share/selinux/devel -type f -name \"container.if\"\n/usr/share/selinux/devel/include/contrib/container.if\n/usr/share/selinux/devel/include/services/container.if\n\nThis is likely a bug that should be fixed by \"container-selinux.\" I'll\nsee what I can do about getting that fixed upstream.\nAs you noted, the build errors are likely a red herring, since the .pp\nstill installs and the test script recognizes the module as loaded. If\nyou want to get rid of these for now and you aren't particularly\nconcerned about your container policy module, you can just uninstall\nthe \"container-selinux\" package.\n\n>\n> ============== checking selinux environment ==============\n> checking for matchpathcon ... ok\n> checking for runcon ... ok\n> checking for sestatus ... ok\n> checking current user domain ... unconfined_t\n> checking selinux operating mode ... enforcing\n> checking for sepgsql-regtest policy ... ok\n> checking whether policy is enabled ... on\n> on\n> checking whether we can run psql ... 
failed\n>\n> <snip>\n> (BTW, what's that extra \"on\" after \"checking whether policy is enabled\"?)\n\nThe second \"on\" is from the `getsebool sepgsql_enable_users_ddl`\ncheck, which has no associated \"checking policy boolean\" message.\nWe'll minimally want to add more specific messages for the two\n`getsebool` checks.\n\n> $ runcon -t sepgsql_regtest_user_t psql --help\n> psql: fatal: could not look up effective user ID 1000: user does not exist\n>\n> But uid 1000 is me according to /etc/passwd and according to \"id\":\n>\n> $ id\n> uid=1000(tgl) gid=1000(tgl) groups=1000(tgl),10(wheel) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>\n> so there's nothing much wrong with having that as euid.\n>\n> I speculate that the policy is forbidding sepgsql_regtest_user_t\n> from reading /etc/passwd. Perhaps this is fallout from the\n> compile problems reported for the policy module? But I'm way\n> out of my depth here.\n\nI wonder what your password file is labeled. It ought to be:\n% ls -Z /etc/passwd\nsystem_u:object_r:passwd_file_t:s0 /etc/passwd\n\nThe sepgsql_regtest_user_t domain should be allowed to read any file\nlabeled \"passwd_file_t\". We can check that with the `sesearch` tool,\nprovided by the \"setools-console\" package on F30:\n\n% sudo sesearch -A -s sepgsql_regtest_user_t -t passwd_file_t\nallow domain file_type:blk_file map; [ domain_can_mmap_files ]:True\nallow domain file_type:chr_file map; [ domain_can_mmap_files ]:True\nallow domain file_type:file map; [ domain_can_mmap_files ]:True\nallow nsswitch_domain passwd_file_t:file { getattr ioctl lock map open read };\n\nIf your /etc/passwd label is not correct, you can try just running\n`restorecon -RF /` to fix it.\n\nIn any case, it looks like this entire test script and policy could\nuse another layer of varnish, so I'll work on fixing up the\nmessages/functionality and post a patch which makes this a bit more\nrobust (hopefully a bit later tonight).\nSorry for the delayed response. 
Hopefully the band-aid fixes I\nprovided get you going for now.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n\n",
"msg_date": "Thu, 18 Jul 2019 19:23:08 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> On Wed, Jul 17, 2019 at 12:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> $ runcon -t sepgsql_regtest_user_t psql --help\n>> psql: fatal: could not look up effective user ID 1000: user does not exist\n\n> I wonder what your password file is labeled. It ought to be:\n> % ls -Z /etc/passwd\n> system_u:object_r:passwd_file_t:s0 /etc/passwd\n\nGood thought, but no cigar:\n\n$ ls -Z /etc/passwd\nsystem_u:object_r:passwd_file_t:s0 /etc/passwd\n\nHappy to poke at anything else you can suggest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2019 23:06:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> > On Wed, Jul 17, 2019 at 12:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> $ runcon -t sepgsql_regtest_user_t psql --help\n> >> psql: fatal: could not look up effective user ID 1000: user does not exist\n\nYou can rule out SELinux for this piece by running `sudo setenforce\n0`. If the `runcon ... psql` command works in Permissive we should\nlook at your audit log to determine what is being denied. audit2allow\nwill provide a summary of the SELinux denials and is generally a good\nstarting point:\n\n# grep denied /var/log/audit/audit.log | audit2allow\n\nIf SELinux is indeed the issue here and you want to avoid doing all of\nthis detective work, it may be a good idea to just run a system-wide\nrestorecon (assuming you didn't already do that before) to make sure\nyour labels are in a decent state.\n\nFWIW, this appears to be working on my recently-installed F30 VM:\n\n% runcon -t sepgsql_regtest_user_t psql --help &> /dev/null\n% echo $?\n0\n\nHopefully a system-wide `restorecon` just magically fixes this for\nyou. Otherwise, we can start digging into denials.\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n\n",
"msg_date": "Fri, 19 Jul 2019 09:37:45 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> On Thu, Jul 18, 2019 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> $ runcon -t sepgsql_regtest_user_t psql --help\n>>> psql: fatal: could not look up effective user ID 1000: user does not exist\n\n> You can rule out SELinux for this piece by running `sudo setenforce\n> 0`. If the `runcon ... psql` command works in Permissive we should\n> look at your audit log to determine what is being denied. audit2allow\n> will provide a summary of the SELinux denials and is generally a good\n> starting point:\n\n> # grep denied /var/log/audit/audit.log | audit2allow\n\nIt's definitely SELinux. The grep finds entries like\n\ntype=AVC msg=audit(1563547268.044:465): avc: denied { read } for pid=10940 comm=\"psql\" name=\"passwd\" dev=\"sda6\" ino=4721184 scontext=unconfined_u:unconfined_r:sepgsql_regtest_user_t:s0-s0:c0.c1023 tcontext=system_u:object_r:passwd_file_t:s0 tclass=file permissive=0\n\nwhich audit2allow turns into\n\n#============= sepgsql_regtest_user_t ==============\nallow sepgsql_regtest_user_t passwd_file_t:file read;\n\nSo somehow, my system's interpretation of the test policy file does\nnot include that permission.\n\nI tried:\n\n* restorecon / ... no effect, which is unsurprising given that /etc/passwd\nwas OK already.\n\n* removing container-selinux ... this made the compile warnings go away,\nas you predicted, but no change in the test results.\n\n> FWIW, this appears to be working on my recently-installed F30 VM:\n\n> % runcon -t sepgsql_regtest_user_t psql --help &> /dev/null\n> % echo $?\n> 0\n\nWell, that's just weird. I've not done anything to the SELinux state\non this installation either, so what's different?\n\nI am wondering whether maybe the different behavior is a result of some\nRPM that's present on my system but not yours, or vice versa. 
As\na first stab at that, I see:\n\n$ rpm -qa | grep selinux | sort\ncockpit-selinux-198-1.fc30.noarch\ncontainer-selinux-2.107-1.git453b816.fc30.noarch\nflatpak-selinux-1.4.2-2.fc30.x86_64\nlibselinux-2.9-1.fc30.x86_64\nlibselinux-devel-2.9-1.fc30.x86_64\nlibselinux-utils-2.9-1.fc30.x86_64\npython3-libselinux-2.9-1.fc30.x86_64\nrpm-plugin-selinux-4.14.2.1-4.fc30.1.x86_64\nselinux-policy-3.14.3-40.fc30.noarch\nselinux-policy-devel-3.14.3-40.fc30.noarch\nselinux-policy-targeted-3.14.3-40.fc30.noarch\ntpm2-abrmd-selinux-2.0.0-4.fc30.noarch\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 11:03:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> The sepgsql_regtest_user_t domain should be allowed to read any file\n> labeled \"passwd_file_t\". We can check that with the `sesearch` tool,\n> provided by the \"setools-console\" package on F30:\n\n> % sudo sesearch -A -s sepgsql_regtest_user_t -t passwd_file_t\n> allow domain file_type:blk_file map; [ domain_can_mmap_files ]:True\n> allow domain file_type:chr_file map; [ domain_can_mmap_files ]:True\n> allow domain file_type:file map; [ domain_can_mmap_files ]:True\n> allow nsswitch_domain passwd_file_t:file { getattr ioctl lock map open read };\n\nI got around to trying this, and lookee here:\n\n$ sudo sesearch -A -s sepgsql_regtest_user_t -t passwd_file_t\nallow domain file_type:blk_file map; [ domain_can_mmap_files ]:True\nallow domain file_type:chr_file map; [ domain_can_mmap_files ]:True\nallow domain file_type:file map; [ domain_can_mmap_files ]:True\nallow domain file_type:lnk_file map; [ domain_can_mmap_files ]:True\n\nNothing about passwd_file_t. So *something* is different about the\nway the policy is being expanded.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 11:19:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I got around to trying this, and lookee here:\n>\n> $ sudo sesearch -A -s sepgsql_regtest_user_t -t passwd_file_t\n> allow domain file_type:blk_file map; [ domain_can_mmap_files ]:True\n> allow domain file_type:chr_file map; [ domain_can_mmap_files ]:True\n> allow domain file_type:file map; [ domain_can_mmap_files ]:True\n> allow domain file_type:lnk_file map; [ domain_can_mmap_files ]:True\n>\n> Nothing about passwd_file_t. So *something* is different about the\n> way the policy is being expanded.\n\nOkay, I was finally able to replicate the issue (and fix it). It looks\nlike perhaps the userdom_base_user_template changed and no longer\nallows reading of passwd_file_t? At any rate, I added some policy to\nensure that we have the proper permissions.\n\nI also beefed up the test script a bit so it now:\n- installs the SELinux policy module\n- spins up a temporary cluster to muddy postgresql.conf and run the\nsetup sql in an isolated environment\n\nWe probably need to polish this a bit more, but what do you think\nabout something similar to the attached patches? They should hopefully\nreduce some of the complexity of running these regression tests.\n\n\n\n\n\n\n\n--\nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com",
"msg_date": "Fri, 19 Jul 2019 15:55:22 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> We probably need to polish this a bit more, but what do you think\n> about something similar to the attached patches? They should hopefully\n> reduce some of the complexity of running these regression tests.\n\nI can confirm that the 0001 patch fixes things on my Fedora 30 box.\nSo that's good, though I don't know enough to evaluate it for style\nor anything like that.\n\nI don't think I like the 0002 patch very much, because of its putting\nall the sudo actions into the script. I'd rather not give a script\nroot permissions, thanks. Maybe I'm in the minority on that.\nAlso, since the documentation explicitly says that the \n/usr/share/selinux/devel/Makefile path is not to be relied on,\nwhy would we hard-wire it into the script?\n\nA bigger-picture issue is that right now, configuring a cluster for\nsepgsql is a very manual process (cf. section F.35.2). I think there's\nsome advantage in forcing the user to run through that before running\nthe regression test, namely that they'll get the bugs out of any\nmisunderstandings or needed local changes. If we had that a bit more\nautomated then maybe having the test script do-it-for-you would be\nsensible. (IOW, the fact that the test process is more like \"make\ninstallcheck\" than \"make check\" seems like a feature not a bug.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 16:29:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> > We probably need to polish this a bit more, but what do you think\n> > about something similar to the attached patches? They should hopefully\n> > reduce some of the complexity of running these regression tests.\n>\n> I can confirm that the 0001 patch fixes things on my Fedora 30 box.\n> So that's good, though I don't know enough to evaluate it for style\n> or anything like that.\n\nI think the policy is in need of review/rewriting anyway. The proper\nthing to do would be to create a common template for all of the\nSELinux regtest user domains and create more of a hierarchical policy\nto reduce redundancy. If you want to wait for more formal policy\nupdates, I can do that in my spare time. Otherwise, the patch I posted\nshould work with the general style of this policy module.\n\n>\n> I don't think I like the 0002 patch very much, because of its putting\n> all the sudo actions into the script. I'd rather not give a script\n> root permissions, thanks. Maybe I'm in the minority on that.\n\nDefinitely not. I cringed a little bit as I was making those\nadditions, but figured it was fine since it's just a test script (and\nwe have to run `sudo` for various other installation items as well).\n\n> Also, since the documentation explicitly says that the\n> /usr/share/selinux/devel/Makefile path is not to be relied on,\n> why would we hard-wire it into the script?\n>\n> A bigger-picture issue is that right now, configuring a cluster for\n> sepgsql is a very manual process (cf. section F.35.2). I think there's\n> some advantage in forcing the user to run through that before running\n> the regression test, namely that they'll get the bugs out of any\n> misunderstandings or needed local changes. If we had that a bit more\n> automated then maybe having the test script do-it-for-you would be\n> sensible. 
(IOW, the fact that the test process is more like \"make\n> installcheck\" than \"make check\" seems like a feature not a bug.)\n\nMakes sense to me. Thanks for the feedback!\n\n-- \nMike Palmiotto\nSoftware Engineer\nCrunchy Data Solutions\nhttps://crunchydata.com\n\n\n",
"msg_date": "Fri, 19 Jul 2019 16:49:44 -0400",
"msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>",
"msg_from_op": false,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
},
{
"msg_contents": "Mike Palmiotto <mike.palmiotto@crunchydata.com> writes:\n> On Fri, Jul 19, 2019 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I can confirm that the 0001 patch fixes things on my Fedora 30 box.\n>> So that's good, though I don't know enough to evaluate it for style\n>> or anything like that.\n\n> I think the policy is in need of review/rewriting anyway. The proper\n> thing to do would be to create a common template for all of the\n> SELinux regtest user domains and create more of a hierarchical policy\n> to reduce redundancy. If you want to wait for more formal policy\n> updates, I can do that in my spare time. Otherwise, the patch I posted\n> should work with the general style of this policy module.\n\nHearing no further comments, I went ahead and pushed 0001 (after\nchecking that it works on F28, which is the oldest Fedora version\nI have at hand right now). Stylistic improvements to the script\nare fine, but let's get the bug fixed for now.\n\nBTW, I noticed that the documentation about how to run the tests\nis a bit stale as well --- for instance, it says to use\n\n\t$ sudo semodule -u sepgsql-regtest.pp\n\nbut that slaps your wrist:\n\n\tThe --upgrade option is deprecated. Use --install instead.\n\nSo if anyone does feel like polishing things in this area, some doc\nreview seems indicated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2019 11:09:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: sepgsql seems rather thoroughly broken on Fedora 30"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nMany thanks for the parallel improvements in Postgres 12. Here is one of\r\ncases where a costy function gets moved from a parallel worker into main\r\none, rendering spatial processing single core once again on some queries.\r\nPerhaps an assumption \"expressions should be mashed together as much as\r\npossible\" should be reviewed and something along \"biggest part of\r\nexpression should be pushed down into parallel worker\"?\r\n\r\nPostgreSQL 12beta2 (Ubuntu 12~beta2-1.pgdg19.04+1) on x86_64-pc-linux-gnu,\r\ncompiled by gcc (Ubuntu 8.3.0-6ubuntu1) 8.3.0, 64-bit\r\n\r\nHere is a reproducer:\r\n\r\n\r\n-- setup\r\ncreate extension postgis;\r\ncreate table postgis_test_table (a geometry, b geometry, id int);\r\nset force_parallel_mode to on;\r\ninsert into postgis_test_table (select 'POINT EMPTY', 'POINT EMPTY',\r\ngenerate_series(0,1000) );\r\n\r\n\r\n-- unwanted inlining moves difference and unary union calculation into\r\nmaster worker\r\n21:43:06 [gis] > explain verbose select ST_Collect(geom), id from\r\n(select ST_Difference(a,ST_UnaryUnion(b)) as geom, id from\r\npostgis_test_table) z group by id;\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ Gather (cost=159.86..42668.93 rows=200 width=36)\r\n │\r\n│ Output: (st_collect(st_difference(postgis_test_table.a,\r\nst_unaryunion(postgis_test_table.b)))), postgis_test_table.id │\r\n│ Workers Planned: 1\r\n │\r\n│ Single Copy: true\r\n │\r\n│ -> GroupAggregate (cost=59.86..42568.73 rows=200 width=36)\r\n │\r\n│ Output: st_collect(st_difference(postgis_test_table.a,\r\nst_unaryunion(postgis_test_table.b))), postgis_test_table.id │\r\n│ Group Key: postgis_test_table.id\r\n │\r\n│ -> Sort (cost=59.86..61.98 rows=850 width=68)\r\n │\r\n│ Output: 
postgis_test_table.id, postgis_test_table.a,\r\npostgis_test_table.b │\r\n│ Sort Key: postgis_test_table.id\r\n │\r\n│ -> Seq Scan on public.postgis_test_table\r\n(cost=0.00..18.50 rows=850 width=68) │\r\n│ Output: postgis_test_table.id,\r\npostgis_test_table.a, postgis_test_table.b\r\n │\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(12 rows)\r\n\r\n-- when constrained by OFFSET 0, costy calculation is kept in parallel workers\r\n21:43:12 [gis] > explain verbose select ST_Collect(geom), id from\r\n(select ST_Difference(a,ST_UnaryUnion(b)) as geom, id from\r\npostgis_test_table offset 0) z group by id;\r\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY\r\nPLAN │\r\n├───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\r\n│ GroupAggregate (cost=13863.45..13872.33 rows=200 width=36)\r\n │\r\n│ Output: st_collect(z.geom), z.id\r\n │\r\n│ Group Key: z.id\r\n │\r\n│ -> Sort (cost=13863.45..13865.58 rows=850 width=36)\r\n │\r\n│ Output: z.id, z.geom\r\n │\r\n│ Sort Key: z.id\r\n │\r\n│ -> Subquery Scan on z (cost=100.00..13822.09 rows=850\r\nwidth=36) │\r\n│ Output: z.id, z.geom\r\n │\r\n│ -> Gather (cost=100.00..13813.59 rows=850 width=36)\r\n │\r\n│ Output: (st_difference(postgis_test_table.a,\r\nst_unaryunion(postgis_test_table.b))), postgis_test_table.id │\r\n│ Workers Planned: 3\r\n │\r\n│ -> Parallel Seq Scan on\r\npublic.postgis_test_table (cost=0.00..13712.74 rows=274 width=36)\r\n │\r\n│ Output:\r\nst_difference(postgis_test_table.a,\r\nst_unaryunion(postgis_test_table.b)), postgis_test_table.id │\r\n└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(13 rows)\r\n\r\n-- teardown\r\ndrop table 
postgis_test_table;\r\n\r\n\r\n\r\n\r\n-- \r\nDarafei Praliaskouski\r\nSupport me: http://patreon.com/komzpa\r\n",
"msg_date": "Wed, 17 Jul 2019 21:54:21 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n> Many thanks for the parallel improvements in Postgres 12. Here is one of\n> cases where a costy function gets moved from a parallel worker into main\n> one, rendering spatial processing single core once again on some queries.\n> Perhaps an assumption \"expressions should be mashed together as much as\n> possible\" should be reviewed and something along \"biggest part of\n> expression should be pushed down into parallel worker\"?\n\nI don't see anything in your test case that proves what you think it does.\nThe expensive calculation *is* being done in the worker in the first\nexample. It's not real clear to me why the first example is only choosing\nto use one worker rather than 3, but probably with a larger test case\n(ie bigger table) that decision would change.\n\nJust to clarify --- when you see something like this:\n\n> │ Gather (cost=159.86..42668.93 rows=200 width=36)\n> │ Output: (st_collect(st_difference(postgis_test_table.a,st_unaryunion(postgis_test_table.b)))), postgis_test_table.id\n> │ -> GroupAggregate (cost=59.86..42568.73 rows=200 width=36)\n> │ Output: st_collect(st_difference(postgis_test_table.a,st_unaryunion(postgis_test_table.b))), postgis_test_table.id\n\nEXPLAIN is trying to tell you that the expression value is being\ncomputed by the lower plan node, and just passed up to the upper\nplan node --- that's what the extra parens in the upper expression\nprintout mean. Perhaps there's some way to make that clearer,\nbut I haven't thought of one that doesn't seem very clutter-y.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:58:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jul 17, 2019 at 11:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>\n> writes:\n> > Many thanks for the parallel improvements in Postgres 12. Here is one of\n> > cases where a costy function gets moved from a parallel worker into main\n> > one, rendering spatial processing single core once again on some queries.\n> > Perhaps an assumption \"expressions should be mashed together as much as\n> > possible\" should be reviewed and something along \"biggest part of\n> > expression should be pushed down into parallel worker\"?\n>\n> I don't see anything in your test case that proves what you think it does.\n> The expensive calculation *is* being done in the worker in the first\n> example. It's not real clear to me why the first example is only choosing\n> to use one worker rather than 3, but probably with a larger test case\n> (ie bigger table) that decision would change.\n>\n\nIndeed, it seems I failed to minimize my example.\n\nHere is the actual one, on 90GB table with 16M rows:\nhttps://gist.github.com/Komzpa/8d5b9008ad60f9ccc62423c256e78b4c\n\nI can share the table on request if needed, but hope that plan may be\nenough.\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n",
"msg_date": "Thu, 18 Jul 2019 00:20:21 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 5:20 PM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n> Indeed, it seems I failed to minimize my example.\n>\n> Here is the actual one, on 90GB table with 16M rows:\n> https://gist.github.com/Komzpa/8d5b9008ad60f9ccc62423c256e78b4c\n>\n> I can share the table on request if needed, but hope that plan may be enough.\n\n[ replying to an old thread ]\n\nI think that this boils down to a lack of planner smarts about target\nlists. The planner currently assumes that any given relation - which\nfor planner purposes might be an actual table or might be the result\nof joining multiple tables, aggregating something, running a subquery,\netc. - more or less has one thing that it's supposed to produce. It\nonly tries to generate plans that produce that target list. There's\nsome support in there for the idea that there might be various paths\nfor the same relation that produce different answers, but I don't know\nof that actually being used anywhere (but it might be).\n\nWhat I taught the planner to do here had to do with making the costing\nmore accurate for cases like this. It now figures out that if it's\ngoing to stick a Gather in at that point, computing the expressions\nbelow the Gather rather than above the Gather makes a difference to\nthe cost of that plan vs. other plans. However, it still doesn't\nconsider any more paths than it did before; it just costs them more\naccurately. In your first example, I believe that the planner should\nbe able to consider both GroupAggregate -> Gather Merge -> Sort ->\nParallel Seq Scan and GroupAggregate -> Sort -> Gather -> Parallel Seq\nScan, but I think it's got a fixed idea about which fields should be\nfed into the Sort. In particular, I believe it thinks that sorting\nmore data is so undesirable that it doesn't want to carry any\nunnecessary baggage through the Sort for any reason. 
To solve this\nproblem, I think it would need to cost the second plan with projection\ndone both before the Sort and after the Sort and decide which one was\ncheaper.\n\nThis class of problem is somewhat annoying in that the extra planner\ncycles and complexity to deal with getting this right would be useless\nfor many queries, but at the same time, there are a few cases where it\ncan win big. I don't know what to do about that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Sep 2019 16:14:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 20, 2019 at 11:14 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jul 17, 2019 at 5:20 PM Darafei \"Komяpa\" Praliaskouski\n> <me@komzpa.net> wrote:\n> > Indeed, it seems I failed to minimize my example.\n> >\n> > Here is the actual one, on 90GB table with 16M rows:\n> > https://gist.github.com/Komzpa/8d5b9008ad60f9ccc62423c256e78b4c\n> >\n> > I can share the table on request if needed, but hope that plan may be\n> enough.\n>\n> What I taught the planner to do here had to do with making the costing\n> more accurate for cases like this. It now figures out that if it's\n> going to stick a Gather in at that point, computing the expressions\n> below the Gather rather than above the Gather makes a difference to\n> the cost of that plan vs. other plans. However, it still doesn't\n> consider any more paths than it did before; it just costs them more\n> accurately. In your first example, I believe that the planner should\n> be able to consider both GroupAggregate -> Gather Merge -> Sort ->\n> Parallel Seq Scan and GroupAggregate -> Sort -> Gather -> Parallel Seq\n> Scan, but I think it's got a fixed idea about which fields should be\n> fed into the Sort. In particular, I believe it thinks that sorting\n> more data is so undesirable that it doesn't want to carry any\n> unnecessary baggage through the Sort for any reason. To solve this\n> problem, I think it would need to cost the second plan with projection\n> done both before the Sort and after the Sort and decide which one was\n> cheaper.\n>\n> This class of problem is somewhat annoying in that the extra planner\n> cycles and complexity to deal with getting this right would be useless\n> for many queries, but at the same time, there are a few cases where it\n> can win big. I don't know what to do about that.\n>\n\nA heuristic I believe should help my case (and I hardly imagine how it can\nbreak others) is that in presence of Gather, all the function calls that\nare parallel safe should be pushed into it.\nIn a perfect future this query shouldn't even have a subquery that I have\nextracted for the sake of OFFSET 0 demo. Probably as a single loop that in\ncase of presence of a Gather tries to push down all the inner part of the\nnested functions call that is Parallel Safe.\nIf we go as far as starting more workers, it really makes sense to load\nthem with actual work and not only wait for the master process.\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n",
"msg_date": "Sun, 22 Sep 2019 14:47:17 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "On Sun, Sep 22, 2019 at 7:47 AM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n> A heuristic I believe should help my case (and I hardly imagine how it can break others) is that in presence of Gather, all the function calls that are parallel safe should be pushed into it.\n\nThe cost of pushing data through the Sort is not necessarily\ninsignificant. Your functions are (IIUC) extremely expensive, so it's\nworth going to any length to reduce the time spent evaluating them.\nHowever, if someone has ||(text,text) in the tlist, that might be the\nwrong approach, because it's not saving much to compute that earlier\nand it might make the sort a lot wider, especially if de-TOASTing is\ninvolved.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Sep 2019 08:58:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "If the function was moved to the FROM clause where it would be executed as a lateral cross join instead of a target list expression, how would this affect the cost-based positioning of the Gather?\r\n\r\nOn 9/23/19, 8:59 AM, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\r\n\r\n On Sun, Sep 22, 2019 at 7:47 AM Darafei \"Komяpa\" Praliaskouski\r\n <me@komzpa.net> wrote:\r\n > A heuristic I believe should help my case (and I hardly imagine how it can break others) is that in presence of Gather, all the function calls that are parallel safe should be pushed into it.\r\n \r\n The cost of pushing data through the Sort is not necessarily\r\n insignificant. Your functions are (IIUC) extremely expensive, so it's\r\n worth going to any length to reduce the time spent evaluating them.\r\n However, if someone has ||(text,text) in the tlist, that might be the\r\n wrong approach, because it's not saving much to compute that earlier\r\n and it might make the sort a lot wider, especially if de-TOASTing is\r\n involved.\r\n \r\n -- \r\n Robert Haas\r\n EnterpriseDB: http://www.enterprisedb.com\r\n The Enterprise PostgreSQL Company\r\n \r\n \r\n \r\n\r\n",
"msg_date": "Tue, 24 Sep 2019 01:19:41 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 9:20 PM Finnerty, Jim <jfinnert@amazon.com> wrote:\n> If the function was moved to the FROM clause where it would be executed as a lateral cross join instead of a target list expression, how would this affect the cost-based positioning of the Gather?\n\nI think you'd end up turning what is now a Seq Scan into a Nested Loop\nwith a Seq Scan on one side. I think the computation of the target\nlist would be done by the Function Scan or Result node on the other\nside of the Nested Loop, and couldn't move anywhere else. The planner\nwould consider putting the Gather either on top of the Nested Loop or\non top of the Seq Scan, and the former would probably win. So I think\nthis would give the desired behavior, but I haven't thought about it\nvery hard.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:01:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unwanted expression simplification in PG12b2"
}
] |
[
{
"msg_contents": "Thinking more about the public/private field distinction we just\nspecified --- it's always annoyed me that SPITupleTable doesn't\nprovide a number-of-valid-rows field, so that callers have to\nlook at the entirely separate SPI_processed variable in order\nto make sense of SPI_tuptable. I looked a bit more closely at\nthe code in question, and realized that it was just Randomly\nDoing Things Differently from every other implementation we have\nof expansible arrays. Not only is it randomly different, but\nit's not even better than our usual method of tracking current\nand maximum numbers of elements: it has to do extra subtractions.\n\nAccordingly, I propose the attached follow-on to fec0778c8,\nwhich replaces the \"free\" field with a \"numvals\" field that\nis considered public.\n\nI poked around for callers that might prefer to use SPI_tuptable->numvals\nin place of SPI_processed, and didn't immediately find anything where the\nbenefit of changing seemed compelling. In principle, though, it should\nbe possible to simplify some callers by needing only one variable to be\npassed around instead of two.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 17 Jul 2019 16:35:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Further hacking on SPITupleTable struct"
},
{
"msg_contents": "> On 17 Jul 2019, at 22:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Thinking more about the public/private field distinction we just\n> specified --- it's always annoyed me that SPITupleTable doesn't\n> provide a number-of-valid-rows field, so that callers have to\n> look at the entirely separate SPI_processed variable in order\n> to make sense of SPI_tuptable.\n\nSorry for being slow to return to this, I see you have already committed it.\nFWIW, I do agree that this makes a lot more sense. Retroactively +1’ing it.\n\nRegarding the core code I agree that no callers directly benefit without some\nrefactoring, but contrib/xml2/xpath.c has one case which seems applicable as\nper the attached. Now, since contrib/xml2 has been deprecated for a long time\nit’s probably not worth bothering, but it was the one case I found so I figured\nI’d record it in this thread.\n\ncheers ./daniel",
"msg_date": "Fri, 19 Jul 2019 14:11:02 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Further hacking on SPITupleTable struct"
}
] |
[
{
"msg_contents": "Sync our copy of the timezone library with IANA release tzcode2019b.\n\nA large fraction of this diff is just due to upstream's somewhat\nrandom decision to rename a bunch of internal variables and struct\nfields. However, there is an interesting new feature in zic:\nit's grown a \"-b slim\" option that emits zone files without 32-bit\ndata and other backwards-compatibility hacks. We should consider\nwhether we wish to enable that.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/f285322f9cd3145ea2e5b870e6ba7e0c641422ac\n\nModified Files\n--------------\nsrc/timezone/README | 7 +-\nsrc/timezone/localtime.c | 89 +++++-----\nsrc/timezone/pgtz.h | 6 +-\nsrc/timezone/tzfile.h | 17 +-\nsrc/timezone/zic.c | 453 +++++++++++++++++++++++++----------------------\n5 files changed, 305 insertions(+), 267 deletions(-)\n\n",
"msg_date": "Wed, 17 Jul 2019 22:26:45 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Sync our copy of the timezone library with IANA release\n tzcode20"
},
{
"msg_contents": "Hi Tom,\n(moving to -hackers)\n\nOn Wed, Jul 17, 2019 at 10:26:45PM +0000, Tom Lane wrote:\n> Sync our copy of the timezone library with IANA release tzcode2019b.\n> \n> A large fraction of this diff is just due to upstream's somewhat\n> random decision to rename a bunch of internal variables and struct\n> fields. However, there is an interesting new feature in zic:\n> it's grown a \"-b slim\" option that emits zone files without 32-bit\n> data and other backwards-compatibility hacks. We should consider\n> whether we wish to enable that.\n\nThis is causing a compilation warning on Windows:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=whelk&dt=2019-07-19%2001%3A41%3A13&stg=make\n\n\"C:\\buildfarm\\buildenv\\HEAD\\pgsql.build\\pgsql.sln\" (Standardziel) (1)\n->\n\"C:\\buildfarm\\buildenv\\HEAD\\pgsql.build\\zic.vcxproj\" (Standardziel)\n(72) ->\n src/timezone/zic.c(2401): warning C4804: '-' : unsafe use of type\n 'bool' in operation\n[C:\\buildfarm\\buildenv\\HEAD\\pgsql.build\\zic.vcxproj]\n\nBuildfarm members using VS like whelk complains about that, and I can\nsee the warning myself.\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 12:53:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Sync our copy of the timezone library with IANA release\n tzcode20"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> This is causing a compilation warning on Windows:\n\n> src/timezone/zic.c(2401): warning C4804: '-' : unsafe use of type\n> 'bool' in operation\n\nHmmm ... the code looks like\n\n bool locut,\n hicut;\n ...\n thistimecnt = -locut - hicut;\n\nso I think your compiler has a point. I shall complain to upstream.\nAt best, it's really unobvious what this code is meant to do, and\nat worst (eg, depending on whether bool promotes to signed or unsigned\nint) the results are unportable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 00:06:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Sync our copy of the timezone library with IANA release\n tzcode20"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> This is causing a compilation warning on Windows:\n> ...so I think your compiler has a point. I shall complain to upstream.\n\nThe IANA folk want to fix it like this:\n\ndiff --git a/zic.c b/zic.c\nindex 8bf5628..a84703a 100644\n--- a/zic.c\n+++ b/zic.c\n@@ -2145,7 +2145,7 @@ writezone(const char *const name, const char *const string, char version,\n \t\t}\n \t\tif (pass == 1 && !want_bloat()) {\n \t\t utcnt = stdcnt = thisleapcnt = 0;\n-\t\t thistimecnt = - locut - hicut;\n+\t\t thistimecnt = - (locut + hicut);\n \t\t thistypecnt = thischarcnt = 1;\n \t\t thistimelim = thistimei;\n \t\t}\n\nI'm not quite convinced whether that will silence the warning, but\nat least it's a bit less unreadable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Jul 2019 14:56:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Sync our copy of the timezone library with IANA release\n tzcode20"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 02:56:34PM -0400, Tom Lane wrote:\n> I'm not quite convinced whether that will silence the warning, but\n> at least it's a bit less unreadable.\n\nThanks for working with upstream on this. From what I can see,\nwoodlouse & friends do not complain anymore.\n--\nMichael",
"msg_date": "Sat, 20 Jul 2019 18:17:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Sync our copy of the timezone library with IANA release\n tzcode20"
}
] |
[
{
"msg_contents": "I just finished updating our timezone code to match IANA release\n2019b. There's an interesting new switch in zic: if you say\n\"-b slim\", it generates zone data files that have only 64-bit\ndata (not the 32-bit plus 64-bit data that it's been emitting\nfor years), and it drops other space-wasting hacks that are needed\nonly for backwards compatibility with old timezone libraries.\n\nThat is not us, so I wonder whether we shouldn't turn on that switch.\n\nI did a quick comparison of the file sizes, and indeed there's\na noticeable per-file savings, eg\n\n$ ls -l timezone.fat/America/New_York timezone.slim/America/New_York \n-rw-r--r--. 3 postgres postgres 3536 Jul 17 18:08 timezone.fat/America/New_York\n-rw-r--r--. 3 postgres postgres 1744 Jul 17 18:07 timezone.slim/America/New_York\n\n$ ls -l timezone.fat/Europe/Paris timezone.slim/Europe/Paris \n-rw-r--r--. 1 postgres postgres 2962 Jul 17 18:08 timezone.fat/Europe/Paris\n-rw-r--r--. 1 postgres postgres 1105 Jul 17 18:07 timezone.slim/Europe/Paris\n\nNow, since the files are pretty much all under 4K, that translates\nto exactly no disk space savings on my ext4 filesystem :-(\n\n$ du -hs timezone.fat timezone.slim\n1.6M timezone.fat\n1.6M timezone.slim\n\nBut other filesystems that are smarter about small files would\nprobably benefit. Also, there's a significant difference in\nthe size of a compressed tarball:\n\n-rw-rw-r--. 1 postgres postgres 148501 Jul 17 18:09 timezone.fat.tgz\n-rw-rw-r--. 1 postgres postgres 80511 Jul 17 18:09 timezone.slim.tgz\n\nnot that that really helps us, because we don't include these\ngenerated files in our tarballs.\n\nDespite the marginal payoff, I'm strongly tempted to enable this\nswitch. 
The only reason I can think of not to do it is if somebody\nis using a Postgres installation's share/timezone tree as tzdata\nfor some other program with not-up-to-date timezone library code.\nBut who would that be?\n\nA possible compromise is to turn it on only in HEAD, though I'd\nrather keep all the branches working the same as far as the\ntimezone code goes.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:42:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "New \"-b slim\" option in 2019b zic: should we turn that on?"
},
{
"msg_contents": "\nOn 7/17/19 6:42 PM, Tom Lane wrote:\n>\n> Despite the marginal payoff, I'm strongly tempted to enable this\n> switch. The only reason I can think of not to do it is if somebody\n> is using a Postgres installation's share/timezone tree as tzdata\n> for some other program with not-up-to-date timezone library code.\n> But who would that be?\n>\n> A possible compromise is to turn it on only in HEAD, though I'd\n> rather keep all the branches working the same as far as the\n> timezone code goes.\n>\n\n\nI've just run into an issue with this (commit a1207910968). The makefile\nnow assumes that zic has this switch. But I was attempting to get around\nan issue on msys2 by using its zic, (ZIC=/usr/bin/zic configure ...). It\ncrashes on the floor because it doesn't know about \"-b slim\". I think we\nprobably need a way to turn this off.\n\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 5 Oct 2019 17:43:49 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New \"-b slim\" option in 2019b zic: should we turn that on?"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> I've just run into an issue with this (commit a1207910968). The makefile\n> now assumes that zic has this switch. But I was attempting to get around\n> an issue on msys2 by using its zic, (ZIC=/usr/bin/zic configure ...). It\n> crashes on the floor because it doesn't know about \"-b slim\". I think we\n> probably need a way to turn this off.\n\nI had contemplated injecting the -b switch via\n\n # any custom options you might want to pass to zic while installing data files\n-ZIC_OPTIONS =\n+ZIC_OPTIONS = -b slim\n\nwhich would allow overriding it by defining the ZIC_OPTIONS macro.\nDoes that seem appropriate? I didn't do it because I worried about\ninterference with existing uses of ZIC_OPTIONS ... but who knows\nwhether there are any.\n\nBTW, building with old versions of zic is not guaranteed to work anyway.\nThey do tend to wait a year or two before they start to use new zic\nfeatures in the timezone data files, but they don't wait indefinitely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Oct 2019 18:33:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: New \"-b slim\" option in 2019b zic: should we turn that on?"
},
{
"msg_contents": "\nOn 10/5/19 6:33 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I've just run into an issue with this (commit a1207910968). The makefile\n>> now assumes that zic has this switch. But I was attempting to get around\n>> an issue on msys2 by using its zic, (ZIC=/usr/bin/zic configure ...). It\n>> crashes on the floor because it doesn't know about \"-b slim\". I think we\n>> probably need a way to turn this off.\n> I had contemplated injecting the -b switch via\n>\n> # any custom options you might want to pass to zic while installing data files\n> -ZIC_OPTIONS =\n> +ZIC_OPTIONS = -b slim\n>\n> which would allow overriding it by defining the ZIC_OPTIONS macro.\n> Does that seem appropriate? I didn't do it because I worried about\n> interference with existing uses of ZIC_OPTIONS ... but who knows\n> whether there are any.\n\n\nI don't think that's going to work very well with a buildfarm member,\nwhere there's no convenient way to set it.\n\n\nBut it turns out there are bigger problems with what I'm doing, anyway,\nso let's leave sleeping dogs lie for now.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sat, 5 Oct 2019 22:22:01 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New \"-b slim\" option in 2019b zic: should we turn that on?"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/5/19 6:33 PM, Tom Lane wrote:\n>> I had contemplated injecting the -b switch via\n>> -ZIC_OPTIONS =\n>> +ZIC_OPTIONS = -b slim\n\n> I don't think that's going to work very well with a buildfarm member,\n> where there's no convenient way to set it.\n\nCan't you set that from build_env?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Oct 2019 22:33:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: New \"-b slim\" option in 2019b zic: should we turn that on?"
},
{
"msg_contents": "\nOn 10/5/19 10:33 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 10/5/19 6:33 PM, Tom Lane wrote:\n>>> I had contemplated injecting the -b switch via\n>>> -ZIC_OPTIONS =\n>>> +ZIC_OPTIONS = -b slim\n>> I don't think that's going to work very well with a buildfarm member,\n>> where there's no convenient way to set it.\n> Can't you set that from build_env?\n>\n> \t\t\t\n\n\nNo, build_env sets the environment, not makefile variables, and\nconfigure doesn't fill in ZIC_OPTIONS, unlike what it does with ZIC.\n\n\nAnyway, it turns out that avoiding the issue I was having here just\npostpones the problem for a few seconds, so while we should probably do\nsomething here it's not urgent from my POV.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 6 Oct 2019 11:17:31 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New \"-b slim\" option in 2019b zic: should we turn that on?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile discussing partition related code with David in [1], I again was\nconfused by the layering of partition related code in\nnodeModifyTable.c.\n\n\n1) How come partition routing is done outside of ExecInsert()?\n\n case CMD_INSERT:\n /* Prepare for tuple routing if needed. */\n if (proute)\n slot = ExecPrepareTupleRouting(node, estate, proute,\n resultRelInfo, slot);\n slot = ExecInsert(node, slot, planSlot,\n estate, node->canSetTag);\n /* Revert ExecPrepareTupleRouting's state change. */\n if (proute)\n estate->es_result_relation_info = resultRelInfo;\n break;\n\nThat already seems like a layering violation, but it's made worse by\nExecUpdate() having its partition handling solely inside - including another\ncall to ExecInsert(), including the surrounding partition setup code.\n\nAnd even worse, after all that, ExecInsert() still contains partitioning\ncode.\n\nIt seems to me that if we just moved the ExecPrepareTupleRouting() into\nExecInsert(), we could remove the duplication.\n\n\n2) The contents of the\n /*\n * If a partition check failed, try to move the row into the right\n * partition.\n */\n if (partition_constraint_failed)\n\nblock ought to be moved to a separate function (maybe\nExecCrossPartitionUpdate or ExecMove). ExecUpdate() is already\ncomplicated enough without dealing with the partition move.\n\n\n3) How come we reset estate->es_result_relation_info after partition\n routing, but not the mtstate wide changes by\n ExecPrepareTupleRouting()? Note that its comment says:\n\n * Caller must revert the estate changes after executing the insertion!\n * In mtstate, transition capture changes may also need to be reverted.\n\nExecUpdate() contains\n\n /*\n * Updates set the transition capture map only when a new subplan\n * is chosen. But for inserts, it is set for each row. So after\n * INSERT, we need to revert back to the map created for UPDATE;\n * otherwise the next UPDATE will incorrectly use the one created\n * for INSERT. 
So first save the one created for UPDATE.\n */\n if (mtstate->mt_transition_capture)\n saved_tcs_map = mtstate->mt_transition_capture->tcs_map;\n\nbut as I read the code, that's not really true? It's\nExecPrepareTupleRouting() that does so, and that's called directly in ExecUpdate().\n\n\n4)\n /*\n * If this insert is the result of a partition key update that moved the\n * tuple to a new partition, put this row into the transition NEW TABLE,\n * if there is one. We need to do this separately for DELETE and INSERT\n * because they happen on different tables.\n */\n ar_insert_trig_tcs = mtstate->mt_transition_capture;\n if (mtstate->operation == CMD_UPDATE && mtstate->mt_transition_capture\n && mtstate->mt_transition_capture->tcs_update_new_table)\n {\n ExecARUpdateTriggers(estate, resultRelInfo, NULL,\n NULL,\n slot,\n NULL,\n mtstate->mt_transition_capture);\n\n /*\n * We've already captured the NEW TABLE row, so make sure any AR\n * INSERT trigger fired below doesn't capture it again.\n */\n ar_insert_trig_tcs = NULL;\n }\n\n /* AFTER ROW INSERT Triggers */\n ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n ar_insert_trig_tcs);\n\nBesides not using the just defined ar_insert_trig_tcs and instead\nrepeatedly referring to mtstate->mt_transition_capture, wouldn't this be\neasier to understand if it were an if/else, instead of resetting\nar_insert_trig_tcs? If the block were\n\n /*\n * triggers behave differently depending on this being a delete as\n * part of a partition move, or a deletion proper.\n */\n if (mtstate->operation == CMD_UPDATE)\n {\n /*\n * If this insert is the result of a partition key update that moved the\n * tuple to a new partition, put this row into the transition NEW TABLE,\n * if there is one. 
We need to do this separately for DELETE and INSERT\n * because they happen on different tables.\n */\n ExecARUpdateTriggers(estate, resultRelInfo, NULL,\n NULL,\n slot,\n NULL,\n mtstate->mt_transition_capture);\n\n /*\n * But we do want to fire plain per-row INSERT triggers on the\n * new table. By not passing in transition_capture we prevent\n * ....\n */\n ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n NULL);\n }\n else\n {\n /* AFTER ROW INSERT Triggers */\n ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n ar_insert_trig_tcs);\n }\n\nit seems like it'd be quite a bit clearer (although I do think the\ncomments also need a fair bit of polishing independent of this proposed\nchange).\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://www.postgresql.org/message-id/CAKJS1f-YObQJTbncGJGRZ6gSFiS%2Bgw_Y5kvrpR%3DvEnFKH17AVA%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:09:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi Andres,\n\nOn Thu, Jul 18, 2019 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> 1) How come partition routing is done outside of ExecInsert()?\n>\n> case CMD_INSERT:\n> /* Prepare for tuple routing if needed. */\n> if (proute)\n> slot = ExecPrepareTupleRouting(node, estate, proute,\n> resultRelInfo, slot);\n> slot = ExecInsert(node, slot, planSlot,\n> estate, node->canSetTag);\n> /* Revert ExecPrepareTupleRouting's state change. */\n> if (proute)\n> estate->es_result_relation_info = resultRelInfo;\n> break;\n>\n> That already seems like a layering violation,\n\nThe decision to move partition routing out of ExecInsert() came about\nwhen we encountered a bug [1] whereby ExecInsert() would fail to reset\nestate->es_result_relation_info back to the root table if it had to\ntake an abnormal path out (early return), of which there are quite a\nfew instances. The first solution I came up with was to add a goto\nlabel for the code to reset estate->es_result_relation_info and jump\nto it from the various places that do an early return, which was\ncomplained about as reducing readability. So, the solution we\neventually settled on in 6666ee49f was to perform ResultRelInfos\nswitching at a higher level.\n\n> but it's made worse by\n> ExecUpdate() having its partition handling solely inside - including another\n> call to ExecInsert(), including the surrounding partition setup code.\n>\n> And even worse, after all that, ExecInsert() still contains partitioning\n> code.\n\nAFAIK, it's only to check the partition constraint when necessary.\nPartition routing complexity is totally outside, but based on what you\nwrite in point 4 below there's bit more...\n\n> It seems to me that if we just moved the ExecPrepareTupleRouting() into\n> ExecInsert(), we could remove the duplication.\n\nI agree that there's duplication here. 
Given what I wrote above, I\ncan think of doing this: move all of ExecInsert()'s code into\nExecInsertInternal() and make the former instead look like this:\n\nstatic TupleTableSlot *\nExecInsert(ModifyTableState *mtstate,\n TupleTableSlot *slot,\n TupleTableSlot *planSlot,\n EState *estate,\n bool canSetTag)\n{\n PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing;\n ResultRelInfo *resultRelInfo = estate->es_result_relation_info;\n\n /* Prepare for tuple routing if needed. */\n if (proute)\n slot = ExecPrepareTupleRouting(mtstate, estate, proute, resultRelInfo,\n slot);\n\n slot = ExecInsertInternal(mtstate, slot, planSlot, estate,\n mtstate->canSetTag);\n\n /* Revert ExecPrepareTupleRouting's state change. */\n if (proute)\n estate->es_result_relation_info = resultRelInfo;\n\n return slot;\n}\n\n> 2) The contents of the\n> /*\n> * If a partition check failed, try to move the row into the right\n> * partition.\n> */\n> if (partition_constraint_failed)\n>\n> block ought to be moved to a separate function (maybe\n> ExecCrossPartitionUpdate or ExecMove). ExecUpdate() is already\n> complicated enough without dealing with the partition move.\n\nI tend to agree with this. Adding Amit Khandekar in case he wants to\nchime in about this.\n\n> 3) How come we reset estate->es_result_relation_info after partition\n> routing, but not the mtstate wide changes by\n> ExecPrepareTupleRouting()? Note that its comment says:\n>\n> * Caller must revert the estate changes after executing the insertion!\n> * In mtstate, transition capture changes may also need to be reverted.\n>\n> ExecUpdate() contains\n>\n> /*\n> * Updates set the transition capture map only when a new subplan\n> * is chosen. But for inserts, it is set for each row. So after\n> * INSERT, we need to revert back to the map created for UPDATE;\n> * otherwise the next UPDATE will incorrectly use the one created\n> * for INSERT. 
So first save the one created for UPDATE.\n> */\n> if (mtstate->mt_transition_capture)\n> saved_tcs_map = mtstate->mt_transition_capture->tcs_map;\n>\n> but as I read the code, that's not really true? It's\n> ExecPrepareTupleRouting() that does so, and that's called directly in ExecUpdate().\n\nCalling ExecPrepareTupleRouting() is considered a part of a given\nINSERT operation, so anything it does is to facilitate the INSERT. In\nthis case, which map to assign to tcs_map can only be determined after\na partition is chosen and determining the partition (routing) is a job\nof ExecPrepareTupleRouting(). Perhaps, we need to update the comment\nhere a bit.\n\n> 4)\n> /*\n> * If this insert is the result of a partition key update that moved the\n> * tuple to a new partition, put this row into the transition NEW TABLE,\n> * if there is one. We need to do this separately for DELETE and INSERT\n> * because they happen on different tables.\n> */\n> ar_insert_trig_tcs = mtstate->mt_transition_capture;\n> if (mtstate->operation == CMD_UPDATE && mtstate->mt_transition_capture\n> && mtstate->mt_transition_capture->tcs_update_new_table)\n> {\n> ExecARUpdateTriggers(estate, resultRelInfo, NULL,\n> NULL,\n> slot,\n> NULL,\n> mtstate->mt_transition_capture);\n>\n> /*\n> * We've already captured the NEW TABLE row, so make sure any AR\n> * INSERT trigger fired below doesn't capture it again.\n> */\n> ar_insert_trig_tcs = NULL;\n> }\n>\n> /* AFTER ROW INSERT Triggers */\n> ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> ar_insert_trig_tcs);\n>\n> Besides not using the just defined ar_insert_trig_tcs and instead\n> repeatedly referring to mtstate->mt_transition_capture, wouldn't this be\n> a easier to understand if the were an if/else, instead of resetting\n> ar_insert_trig_tcs? 
If the block were\n>\n> /*\n> * triggers behave differently depending on this being a delete as\n> * part of a partion move, or a deletion proper.\n> if (mtstate->operation == CMD_UPDATE)\n> {\n> /*\n> * If this insert is the result of a partition key update that moved the\n> * tuple to a new partition, put this row into the transition NEW TABLE,\n> * if there is one. We need to do this separately for DELETE and INSERT\n> * because they happen on different tables.\n> */\n> ExecARUpdateTriggers(estate, resultRelInfo, NULL,\n> NULL,\n> slot,\n> NULL,\n> mtstate->mt_transition_capture);\n>\n> /*\n> * But we do want to fire plain per-row INSERT triggers on the\n> * new table. By not passing in transition_capture we prevent\n> * ....\n> */\n> ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> NULL);\n> }\n> else\n> {\n> /* AFTER ROW INSERT Triggers */\n> ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> ar_insert_trig_tcs);\n> }\n\nMaybe you meant to use mtstate->mt_transition_capture instead of\nar_insert_trig_tcs in the else block. We don't need\nar_insert_trig_tcs at all.\n\n> it seems like it'd be quite a bit clearer (although I do think the\n> comments also need a fair bit of polishing independent of this proposed\n> change).\n\nFwiw, I agree with your proposed restructuring, although I'd let Amit\nKh chime in as he'd be more familiar with this code. I wasn't aware\nof this partitioning-related bit being present in ExecInsert().\n\nWould you like me to write a patch for some or all items?\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/flat/0473bf5c-57b1-f1f7-3d58-455c2230bc5f%40lab.ntt.co.jp\n\n\n",
"msg_date": "Thu, 18 Jul 2019 14:24:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 14:24:29 +0900, Amit Langote wrote:\n> On Thu, Jul 18, 2019 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > 1) How come partition routing is done outside of ExecInsert()?\n> >\n> > case CMD_INSERT:\n> > /* Prepare for tuple routing if needed. */\n> > if (proute)\n> > slot = ExecPrepareTupleRouting(node, estate, proute,\n> > resultRelInfo, slot);\n> > slot = ExecInsert(node, slot, planSlot,\n> > estate, node->canSetTag);\n> > /* Revert ExecPrepareTupleRouting's state change. */\n> > if (proute)\n> > estate->es_result_relation_info = resultRelInfo;\n> > break;\n> >\n> > That already seems like a layering violation,\n> \n> The decision to move partition routing out of ExecInsert() came about\n> when we encountered a bug [1] whereby ExecInsert() would fail to reset\n> estate->es_result_relation_info back to the root table if it had to\n> take an abnormal path out (early return), of which there are quite a\n> few instances. The first solution I came up with was to add a goto\n> label for the code to reset estate->es_result_relation_info and jump\n> to it from the various places that do an early return, which was\n> complained about as reducing readability. So, the solution we\n> eventually settled on in 6666ee49f was to perform ResultRelInfos\n> switching at a higher level.\n\nI think that was the wrong path, given that the code now lives in\nmultiple places. Without even a comment explaining that if one has to be\nchanged, the other has to be changed too.\n\n\n> > It seems to me that if we just moved the ExecPrepareTupleRouting() into\n> > ExecInsert(), we could remove the duplication.\n> \n> I agree that there's duplication here. 
Given what I wrote above, I\n> can think of doing this: move all of ExecInsert()'s code into\n> ExecInsertInternal() and make the former instead look like this:\n\nFor me just having the gotos is cleaner than that here.\n\nBut perhaps the right fix would be to not have ExecPrepareTupleRouting()\nchange global state at all, and instead change it much more locally\ninside ExecInsert(), around the calls that need it to be set\ndifferently.\n\nOr perhaps the actually correct fix is to remove es_result_relation_info\nalltogether, and just pass it down the places that need it - we've a lot\nmore code setting it than using the value. And it'd not be hard to\nactually pass it to the places that read it. Given all the\nsetting/resetting of it it's pretty obvious that a query-global resource\nisn't the right place for it.\n\n\n\n> > /*\n> > * triggers behave differently depending on this being a delete as\n> > * part of a partion move, or a deletion proper.\n> > if (mtstate->operation == CMD_UPDATE)\n> > {\n> > /*\n> > * If this insert is the result of a partition key update that moved the\n> > * tuple to a new partition, put this row into the transition NEW TABLE,\n> > * if there is one. We need to do this separately for DELETE and INSERT\n> > * because they happen on different tables.\n> > */\n> > ExecARUpdateTriggers(estate, resultRelInfo, NULL,\n> > NULL,\n> > slot,\n> > NULL,\n> > mtstate->mt_transition_capture);\n> >\n> > /*\n> > * But we do want to fire plain per-row INSERT triggers on the\n> > * new table. By not passing in transition_capture we prevent\n> > * ....\n> > */\n> > ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> > NULL);\n> > }\n> > else\n> > {\n> > /* AFTER ROW INSERT Triggers */\n> > ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> > ar_insert_trig_tcs);\n> > }\n> \n> Maybe you meant to use mtstate->mt_transition_capture instead of\n> ar_insert_trig_tcs in the else block. 
We don't need\n> ar_insert_trig_tcs at all.\n\nYes, it was just a untested example of how the code could be made\nclearer.\n\n\n> > it seems like it'd be quite a bit clearer (although I do think the\n> > comments also need a fair bit of polishing independent of this proposed\n> > change).\n> \n> Fwiw, I agree with your proposed restructuring, although I'd let Amit\n> Kh chime in as he'd be more familiar with this code. I wasn't aware\n> of this partitioning-related bit being present in ExecInsert().\n> \n> Would you like me to write a patch for some or all items?\n\nYes, that would be awesome.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 22:53:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 2:53 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-07-18 14:24:29 +0900, Amit Langote wrote:\n> > On Thu, Jul 18, 2019 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > > 1) How come partition routing is done outside of ExecInsert()?\n> > >\n> > > case CMD_INSERT:\n> > > /* Prepare for tuple routing if needed. */\n> > > if (proute)\n> > > slot = ExecPrepareTupleRouting(node, estate, proute,\n> > > resultRelInfo, slot);\n> > > slot = ExecInsert(node, slot, planSlot,\n> > > estate, node->canSetTag);\n> > > /* Revert ExecPrepareTupleRouting's state change. */\n> > > if (proute)\n> > > estate->es_result_relation_info = resultRelInfo;\n> > > break;\n> > >\n> > > That already seems like a layering violation,\n> >\n> > The decision to move partition routing out of ExecInsert() came about\n> > when we encountered a bug [1] whereby ExecInsert() would fail to reset\n> > estate->es_result_relation_info back to the root table if it had to\n> > take an abnormal path out (early return), of which there are quite a\n> > few instances. The first solution I came up with was to add a goto\n> > label for the code to reset estate->es_result_relation_info and jump\n> > to it from the various places that do an early return, which was\n> > complained about as reducing readability. So, the solution we\n> > eventually settled on in 6666ee49f was to perform ResultRelInfos\n> > switching at a higher level.\n>\n> I think that was the wrong path, given that the code now lives in\n> multiple places. Without even a comment explaining that if one has to be\n> changed, the other has to be changed too.\n>\n>\n> > > It seems to me that if we just moved the ExecPrepareTupleRouting() into\n> > > ExecInsert(), we could remove the duplication.\n> >\n> > I agree that there's duplication here. 
Given what I wrote above, I\n> > can think of doing this: move all of ExecInsert()'s code into\n> > ExecInsertInternal() and make the former instead look like this:\n>\n> For me just having the gotos is cleaner than that here.\n>\n> But perhaps the right fix would be to not have ExecPrepareTupleRouting()\n> change global state at all, and instead change it much more locally\n> inside ExecInsert(), around the calls that need it to be set\n> differently.\n>\n> Or perhaps the actually correct fix is to remove es_result_relation_info\n> alltogether, and just pass it down the places that need it - we've a lot\n> more code setting it than using the value. And it'd not be hard to\n> actually pass it to the places that read it. Given all the\n> setting/resetting of it it's pretty obvious that a query-global resource\n> isn't the right place for it.\n\nI tend to agree that managing state through es_result_relation_info\nacross various operations on a result relation has turned a bit messy\nat this point. That said, while most of the places that access the\ncurrently active result relation from es_result_relation_info can be\neasily modified to receive it directly, the FDW API BeginDirectModify\nposes a bit of a challenge. BeginDirectModify() is called via\nExecInitForeignScan(), which in turn can't be changed to add a result\nrelation (Index or ResultRelInfo *) argument, so the only way left for\nBeginDirectModify() is to access it via es_result_relation_info.\n\nMaybe we can do to ExecPrepareTupleRouting() what you say -- remove\nall code in it that changes ModifyTable-global and EState-global\nstates. Also, maybe call ExecPrepareTupleRouting() inside\nExecInsert() at the beginning instead of outside of it. I agree that\nsetting and reverting global states around the exact piece of code\nthat needs that to be done is better for clarity. All of that assuming\nyou're not saying that we scrap ExecPrepareTupleRouting altogether.\n\nThoughts? 
Other opinions?\n\n> > Would you like me to write a patch for some or all items?\n>\n> Yes, that would be awesome.\n\nOK, I will try to post a patch soon.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 18 Jul 2019 16:50:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 4:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jul 18, 2019 at 2:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-07-18 14:24:29 +0900, Amit Langote wrote:\n> > > On Thu, Jul 18, 2019 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > 1) How come partition routing is done outside of ExecInsert()?\n> > > >\n> > > > case CMD_INSERT:\n> > > > /* Prepare for tuple routing if needed. */\n> > > > if (proute)\n> > > > slot = ExecPrepareTupleRouting(node, estate, proute,\n> > > > resultRelInfo, slot);\n> > > > slot = ExecInsert(node, slot, planSlot,\n> > > > estate, node->canSetTag);\n> > > > /* Revert ExecPrepareTupleRouting's state change. */\n> > > > if (proute)\n> > > > estate->es_result_relation_info = resultRelInfo;\n> > > > break;\n> > > >\n> > > > That already seems like a layering violation,\n> > >\n> > > The decision to move partition routing out of ExecInsert() came about\n> > > when we encountered a bug [1] whereby ExecInsert() would fail to reset\n> > > estate->es_result_relation_info back to the root table if it had to\n> > > take an abnormal path out (early return), of which there are quite a\n> > > few instances. The first solution I came up with was to add a goto\n> > > label for the code to reset estate->es_result_relation_info and jump\n> > > to it from the various places that do an early return, which was\n> > > complained about as reducing readability. So, the solution we\n> > > eventually settled on in 6666ee49f was to perform ResultRelInfos\n> > > switching at a higher level.\n> >\n> > I think that was the wrong path, given that the code now lives in\n> > multiple places. 
Without even a comment explaining that if one has to be\n> > changed, the other has to be changed too.\n\nI thought this would be OK because we have the\nExecPrepareTupleRouting() call in just two places in a single source\nfile, at least currently.\n\n> > Or perhaps the actually correct fix is to remove es_result_relation_info\n> > alltogether, and just pass it down the places that need it - we've a lot\n> > more code setting it than using the value. And it'd not be hard to\n> > actually pass it to the places that read it. Given all the\n> > setting/resetting of it it's pretty obvious that a query-global resource\n> > isn't the right place for it.\n>\n> I tend to agree that managing state through es_result_relation_info\n> across various operations on a result relation has turned a bit messy\n> at this point. That said, while most of the places that access the\n> currently active result relation from es_result_relation_info can be\n> easily modified to receive it directly, the FDW API BeginDirectModify\n> poses bit of a challenge. BeginDirectlyModify() is called via\n> ExecInitForeignScan() that in turn can't be changed to add a result\n> relation (Index or ResultRelInfo *) argument, so the only way left for\n> BeginDirectlyModify() is to access it via es_result_relation_info.\n\nThat's right. I'm not sure that's a good idea, because I think other\nextensions also might look at es_result_relation_info, and if so,\nremoving es_result_relation_info altogether would require the\nextension authors to update their extensions without any benefit,\nwhich I think isn't a good thing.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 18 Jul 2019 20:00:45 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 4:50 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jul 18, 2019 at 2:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-07-18 14:24:29 +0900, Amit Langote wrote:\n> > > On Thu, Jul 18, 2019 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > Or perhaps the actually correct fix is to remove es_result_relation_info\n> > alltogether, and just pass it down the places that need it - we've a lot\n> > more code setting it than using the value. And it'd not be hard to\n> > actually pass it to the places that read it. Given all the\n> > setting/resetting of it it's pretty obvious that a query-global resource\n> > isn't the right place for it.\n>>\n> > > Would you like me to write a patch for some or all items?\n> >\n> > Yes, that would be awesome.\n>\n> OK, I will try to post a patch soon.\n\nAttached are two patches.\n\nThe first one (0001) deals with reducing the core executor's reliance\non es_result_relation_info to access the currently active result\nrelation, in favor of receiving it from the caller as a function\nargument. So no piece of core code relies on it being correctly set\nanymore. It still needs to be set correctly for the third-party code\nsuch as FDWs. Also, because the partition routing related suggestions\nupthread are closely tied into this, especially those around\nExecInsert(), I've included them in the same patch. I chose to keep\nthe function ExecPrepareTupleRouting, even though it's now only called\nfrom ExecInsert(), to preserve the readability of the latter.\n\nThe second patch (0002) implements some rearrangement of the UPDATE\ntuple movement code, addressing point 2 in the first email of\nthis thread. Mainly, the block of code in ExecUpdate() that implements\nrow movement proper has been moved into a function called ExecMove().\nIt also contains the cosmetic improvements suggested in point 4.\n\nThanks,\nAmit",
"msg_date": "Fri, 19 Jul 2019 17:52:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-19 17:52:20 +0900, Amit Langote wrote:\n> Attached are two patches.\n\nAwesome.\n\n\n> The first one (0001) deals with reducing the core executor's reliance\n> on es_result_relation_info to access the currently active result\n> relation, in favor of receiving it from the caller as a function\n> argument. So no piece of core code relies on it being correctly set\n> anymore. It still needs to be set correctly for the third-party code\n> such as FDWs.\n\nI'm inclined to just remove it. There's not much code out there relying\non it, as far as I can tell. Most FDWs don't support the direct modify\nAPI, and that's afaict the case where one needs to use\nes_result_relation_info?\n\nIn fact, I searched through all FDWs listed on https://wiki.postgresql.org/wiki/Foreign_data_wrappers\nthat are on github and in the first few categories (up to and including\n\"file wrappers\"), and there was only one reference to\nes_result_relation_info, and that just in comments in a test:\nhttps://github.com/pgspider/griddb_fdw/search?utf8=%E2%9C%93&q=es_result_relation_info&type=\nwhich I think was just copied from our source code.\n\nIOW, we should just change the direct modify calls to get the relevant\nResultRelationInfo or something in that vein (perhaps just the relevant\nRT index?).\n\npglogical also references it, but just because it creates its own\nEState afaict.\n\n\n\n> @@ -334,32 +335,50 @@ ExecComputeStoredGenerated(EState *estate, TupleTableSlot *slot)\n> *\t\tExecInsert\n> *\n> *\t\tFor INSERT, we have to insert the tuple into the target relation\n> - *\t\tand insert appropriate tuples into the index relations.\n> + *\t\t(or partition thereof) and insert appropriate tuples into the index\n> + *\t\trelations.\n> *\n> *\t\tReturns RETURNING result if any, otherwise NULL.\n> + *\n> + *\t\tThis may change the currently active tuple conversion map in\n> + *\t\tmtstate->mt_transition_capture, so the callers must take care to\n> + *\t\tsave the 
previous value to avoid losing track of it.\n> * ----------------------------------------------------------------\n> */\n> static TupleTableSlot *\n> ExecInsert(ModifyTableState *mtstate,\n> +\t\t ResultRelInfo *resultRelInfo,\n> \t\t TupleTableSlot *slot,\n> \t\t TupleTableSlot *planSlot,\n> \t\t EState *estate,\n> \t\t bool canSetTag)\n> {\n> -\tResultRelInfo *resultRelInfo;\n> \tRelation\tresultRelationDesc;\n> \tList\t *recheckIndexes = NIL;\n> \tTupleTableSlot *result = NULL;\n> \tTransitionCaptureState *ar_insert_trig_tcs;\n> \tModifyTable *node = (ModifyTable *) mtstate->ps.plan;\n> \tOnConflictAction onconflict = node->onConflictAction;\n> +\tPartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing;\n> +\n> +\t/*\n> +\t * If the input result relation is a partitioned table, find the leaf\n> +\t * partition to insert the tuple into.\n> +\t */\n> +\tif (proute)\n> +\t{\n> +\t\tResultRelInfo *partRelInfo;\n> +\n> +\t\tslot = ExecPrepareTupleRouting(mtstate, estate, proute,\n> +\t\t\t\t\t\t\t\t\t resultRelInfo, slot,\n> +\t\t\t\t\t\t\t\t\t &partRelInfo);\n> +\t\tresultRelInfo = partRelInfo;\n> +\t\t/* Result relation has changed, so update EState reference too. */\n> +\t\testate->es_result_relation_info = resultRelInfo;\n> +\t}\n\nI think by removing es_result_relation entirely, this would look\ncleaner.\n\n\n> @@ -1271,18 +1274,18 @@ lreplace:;\n> \t\t\t\t\t\t\t\t\t\t\t mtstate->mt_root_tuple_slot);\n> \n> \t\t\t/*\n> -\t\t\t * Prepare for tuple routing, making it look like we're inserting\n> -\t\t\t * into the root.\n> +\t\t\t * ExecInsert() may scribble on mtstate->mt_transition_capture,\n> +\t\t\t * so save the currently active map.\n> \t\t\t */\n> +\t\t\tif (mtstate->mt_transition_capture)\n> +\t\t\t\tsaved_tcs_map = mtstate->mt_transition_capture->tcs_map;\n\nWonder if we could remove the need for this somehow, it's still pretty\ndarn ugly. 
Thomas, perhaps you have some insights?\n\nTo me the need to modify these ModifyTable wide state on a per-subplan\nand even per-partition basis indicates that the datastructures are in\nthe wrong place.\n\n\n\n> @@ -2212,23 +2207,17 @@ ExecModifyTable(PlanState *pstate)\n> \t\tswitch (operation)\n> \t\t{\n> \t\t\tcase CMD_INSERT:\n> -\t\t\t\t/* Prepare for tuple routing if needed. */\n> -\t\t\t\tif (proute)\n> -\t\t\t\t\tslot = ExecPrepareTupleRouting(node, estate, proute,\n> -\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo, slot);\n> -\t\t\t\tslot = ExecInsert(node, slot, planSlot,\n> +\t\t\t\tslot = ExecInsert(node, resultRelInfo, slot, planSlot,\n> \t\t\t\t\t\t\t\t estate, node->canSetTag);\n> -\t\t\t\t/* Revert ExecPrepareTupleRouting's state change. */\n> -\t\t\t\tif (proute)\n> -\t\t\t\t\testate->es_result_relation_info = resultRelInfo;\n> \t\t\t\tbreak;\n> \t\t\tcase CMD_UPDATE:\n> -\t\t\t\tslot = ExecUpdate(node, tupleid, oldtuple, slot, planSlot,\n> -\t\t\t\t\t\t\t\t &node->mt_epqstate, estate, node->canSetTag);\n> +\t\t\t\tslot = ExecUpdate(node, resultRelInfo, tupleid, oldtuple, slot,\n> +\t\t\t\t\t\t\t\t planSlot, &node->mt_epqstate, estate,\n> +\t\t\t\t\t\t\t\t node->canSetTag);\n> \t\t\t\tbreak;\n> \t\t\tcase CMD_DELETE:\n> -\t\t\t\tslot = ExecDelete(node, tupleid, oldtuple, planSlot,\n> -\t\t\t\t\t\t\t\t &node->mt_epqstate, estate,\n> +\t\t\t\tslot = ExecDelete(node, resultRelInfo, tupleid, oldtuple,\n> +\t\t\t\t\t\t\t\t planSlot, &node->mt_epqstate, estate,\n> \t\t\t\t\t\t\t\t true, node->canSetTag,\n> \t\t\t\t\t\t\t\t false /* changingPart */ , NULL, NULL);\n> \t\t\t\tbreak;\n\nThis reminds me of another complaint: ExecDelete and ExecInsert() have\ngotten more boolean parameters for partition moving, but only one of\nthem is explained with a comment (/* changingPart */) - think we should\ndo that for all.\n\n\n\n> diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\n> index 43edfef089..7df3e78b22 100644\n> --- 
a/src/backend/replication/logical/worker.c\n> +++ b/src/backend/replication/logical/worker.c\n> @@ -173,10 +173,10 @@ ensure_transaction(void)\n> * This is based on similar code in copy.c\n> */\n> static EState *\n> -create_estate_for_relation(LogicalRepRelMapEntry *rel)\n> +create_estate_for_relation(LogicalRepRelMapEntry *rel,\n> +\t\t\t\t\t\t ResultRelInfo **resultRelInfo)\n> {\n> \tEState\t *estate;\n> -\tResultRelInfo *resultRelInfo;\n> \tRangeTblEntry *rte;\n> \n> \testate = CreateExecutorState();\n> @@ -188,12 +188,11 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel)\n> \trte->rellockmode = AccessShareLock;\n> \tExecInitRangeTable(estate, list_make1(rte));\n> \n> -\tresultRelInfo = makeNode(ResultRelInfo);\n> -\tInitResultRelInfo(resultRelInfo, rel->localrel, 1, NULL, 0);\n> +\t*resultRelInfo = makeNode(ResultRelInfo);\n> +\tInitResultRelInfo(*resultRelInfo, rel->localrel, 1, NULL, 0);\n> \n> -\testate->es_result_relations = resultRelInfo;\n> +\testate->es_result_relations = *resultRelInfo;\n> \testate->es_num_result_relations = 1;\n> -\testate->es_result_relation_info = resultRelInfo;\n> \n> \testate->es_output_cid = GetCurrentCommandId(true);\n> \n> @@ -567,6 +566,7 @@ GetRelationIdentityOrPK(Relation rel)\n> static void\n> apply_handle_insert(StringInfo s)\n> {\n> +\tResultRelInfo *resultRelInfo;\n> \tLogicalRepRelMapEntry *rel;\n> \tLogicalRepTupleData newtup;\n> \tLogicalRepRelId relid;\n> @@ -589,7 +589,7 @@ apply_handle_insert(StringInfo s)\n> \t}\n> \n> \t/* Initialize the executor state. */\n> -\testate = create_estate_for_relation(rel);\n> +\testate = create_estate_for_relation(rel, &resultRelInfo);\n\nHm. It kinda seems cleaner if we were to instead return the relevant\nindex, rather than the entire ResultRelInfo, as an output from\ncreate_estate_for_relation(). Makes it clearer that it's still in the\nEState.\n\nOr perhaps we ought to compute it in a separate step? 
Then that'd be\nmore amenable to support replcating into partition roots.\n\n\n> \t/*\n> -\t * If this insert is the result of a partition key update that moved the\n> -\t * tuple to a new partition, put this row into the transition NEW TABLE,\n> -\t * if there is one. We need to do this separately for DELETE and INSERT\n> -\t * because they happen on different tables.\n> +\t * If this delete is a part of a partition key update, put this row into\n> +\t * the UPDATE trigger's NEW TABLE instead of that of an INSERT trigger.\n> \t */\n> -\tar_insert_trig_tcs = mtstate->mt_transition_capture;\n> -\tif (mtstate->operation == CMD_UPDATE && mtstate->mt_transition_capture\n> -\t\t&& mtstate->mt_transition_capture->tcs_update_new_table)\n> +\tif (mtstate->operation == CMD_UPDATE &&\n> +\t\tmtstate->mt_transition_capture &&\n> +\t\tmtstate->mt_transition_capture->tcs_update_new_table)\n> \t{\n> -\t\tExecARUpdateTriggers(estate, resultRelInfo, NULL,\n> -\t\t\t\t\t\t\t NULL,\n> -\t\t\t\t\t\t\t slot,\n> -\t\t\t\t\t\t\t NULL,\n> -\t\t\t\t\t\t\t mtstate->mt_transition_capture);\n> +\t\tExecARUpdateTriggers(estate, resultRelInfo, NULL, NULL, slot,\n> +\t\t\t\t\t\t\t NIL, mtstate->mt_transition_capture);\n> \n> \t\t/*\n> -\t\t * We've already captured the NEW TABLE row, so make sure any AR\n> -\t\t * INSERT trigger fired below doesn't capture it again.\n> +\t\t * Execute AFTER ROW INSERT Triggers, but such that the row is not\n> +\t\t * captured again in the transition table if any.\n> \t\t */\n> -\t\tar_insert_trig_tcs = NULL;\n> +\t\tExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> +\t\t\t\t\t\t\t NULL);\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/* AFTER ROW INSERT Triggers */\n> +\t\tExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> +\t\t\t\t\t\t\t mtstate->mt_transition_capture);\n> \t}\n> -\n> -\t/* AFTER ROW INSERT Triggers */\n> -\tExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,\n> -\t\t\t\t\t\t ar_insert_trig_tcs);\n\nWhile 
a tiny bit more code, perhaps, this is considerably clearer\nimo. Thanks.\n\n\n> +/*\n> + *\tExecMove\n> + *\t\tMove an updated tuple from the input result relation to the\n> + *\t\tnew partition of its root parent table\n> + *\n> + *\tThis works by first deleting the tuple from the input result relation\n> + *\tfollowed by inserting it into the root parent table, that is,\n> + *\tmtstate->rootResultRelInfo.\n> + *\n> + *\tReturns true if it's detected that the tuple we're trying to move has\n> + *\tbeen concurrently updated.\n> + */\n> +static bool\n> +ExecMove(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo,\n> +\t\t ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *planSlot,\n> +\t\t EPQState *epqstate, bool canSetTag, TupleTableSlot **slot,\n> +\t\t TupleTableSlot **inserted_tuple)\n> +{\n\nI know that it was one of the names I proposed, but now that I'm\nthinking about it again, it sounds too generic. Perhaps\nExecCrossPartitionUpdate() wouldn't be a quite so generic name? Since\nthere's only one reference the longer name wouldn't be painful.\n\n\n> +\t/*\n> +\t * Row movement, part 1. Delete the tuple, but skip RETURNING\n> +\t * processing. We want to return rows from INSERT.\n> +\t */\n> +\tExecDelete(mtstate, resultRelInfo, tupleid, oldtuple, planSlot,\n> +\t\t\t epqstate, estate, false, false /* canSetTag */ ,\n> +\t\t\t true /* changingPart */ , &tuple_deleted, &epqslot);\n\nHere again it'd be nice if all the booleans would be explained with a\ncomment.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 19 Jul 2019 09:52:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 2019-Jul-19, Andres Freund wrote:\n\n> On 2019-07-19 17:52:20 +0900, Amit Langote wrote:\n> > The first one (0001) deals with reducing the core executor's reliance\n> > on es_result_relation_info to access the currently active result\n> > relation, in favor of receiving it from the caller as a function\n> > argument. So no piece of core code relies on it being correctly set\n> > anymore. It still needs to be set correctly for the third-party code\n> > such as FDWs.\n> \n> I'm inclined to just remove it. There's not much code out there relying\n> on it, as far as I can tell. Most FDWs don't support the direct modify\n> API, and that's afaict the case where we one needs to use\n> es_result_relation_info?\n\nYeah, I too agree with removing it; since our code doesn't use it, it\nseems very likely that it will become slightly out of sync with reality\nand we'd not notice until some FDW misbehaves weirdly.\n\n> > -\t\t\t\tslot = ExecDelete(node, tupleid, oldtuple, planSlot,\n> > -\t\t\t\t\t\t\t\t &node->mt_epqstate, estate,\n> > +\t\t\t\tslot = ExecDelete(node, resultRelInfo, tupleid, oldtuple,\n> > +\t\t\t\t\t\t\t\t planSlot, &node->mt_epqstate, estate,\n> > \t\t\t\t\t\t\t\t true, node->canSetTag,\n> > \t\t\t\t\t\t\t\t false /* changingPart */ , NULL, NULL);\n> > \t\t\t\tbreak;\n> \n> This reminds me of another complaint: ExecDelete and ExecInsert() have\n> gotten more boolean parameters for partition moving, but only one of\n> them is explained with a comment (/* changingPart */) - think we should\n> do that for all.\n\nMaybe change the API to use a flags bitmask?\n\n(IMO the placement of the comment inside the function call, making the\ncomma appear preceded with a space, looks ugly. If we want to add\ncomments, let's put each param on its own line with the comment beyond\nthe comma. That's what we do in other places where this pattern is\nused.)\n\n> > \t/* Initialize the executor state. 
*/\n> > -\testate = create_estate_for_relation(rel);\n> > +\testate = create_estate_for_relation(rel, &resultRelInfo);\n> \n> Hm. It kinda seems cleaner if we were to instead return the relevant\n> index, rather than the entire ResultRelInfo, as an output from\n> create_estate_for_relation(). Makes it clearer that it's still in the\n> EState.\n\nYeah.\n\n> Or perhaps we ought to compute it in a separate step? Then that'd be\n> more amenable to support replcating into partition roots.\n\nI'm not quite seeing the shape that you're imagining this would take.\nI vote not to mess with that for this patch; I bet that we'll have to\nchange a few other things in this code when we add better support for\npartitioning in logical replication.\n\n> > +/*\n> > + *\tExecMove\n> > + *\t\tMove an updated tuple from the input result relation to the\n> > + *\t\tnew partition of its root parent table\n> > + *\n> > + *\tThis works by first deleting the tuple from the input result relation\n> > + *\tfollowed by inserting it into the root parent table, that is,\n> > + *\tmtstate->rootResultRelInfo.\n> > + *\n> > + *\tReturns true if it's detected that the tuple we're trying to move has\n> > + *\tbeen concurrently updated.\n> > + */\n> > +static bool\n> > +ExecMove(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo,\n> > +\t\t ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *planSlot,\n> > +\t\t EPQState *epqstate, bool canSetTag, TupleTableSlot **slot,\n> > +\t\t TupleTableSlot **inserted_tuple)\n> > +{\n>\n> I know that it was one of the names I proposed, but now that I'm\n> thinking about it again, it sounds too generic. Perhaps\n> ExecCrossPartitionUpdate() wouldn't be a quite so generic name? Since\n> there's only one reference the longer name wouldn't be painful.\n\nThat name sounds good. Isn't the return convention backwards? 
Sounds\nlike \"true\" should mean that it succeeded.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jul 2019 17:11:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-19 17:11:10 -0400, Alvaro Herrera wrote:\n> On 2019-Jul-19, Andres Freund wrote:\n> > > -\t\t\t\tslot = ExecDelete(node, tupleid, oldtuple, planSlot,\n> > > -\t\t\t\t\t\t\t\t &node->mt_epqstate, estate,\n> > > +\t\t\t\tslot = ExecDelete(node, resultRelInfo, tupleid, oldtuple,\n> > > +\t\t\t\t\t\t\t\t planSlot, &node->mt_epqstate, estate,\n> > > \t\t\t\t\t\t\t\t true, node->canSetTag,\n> > > \t\t\t\t\t\t\t\t false /* changingPart */ , NULL, NULL);\n> > > \t\t\t\tbreak;\n> > \n> > This reminds me of another complaint: ExecDelete and ExecInsert() have\n> > gotten more boolean parameters for partition moving, but only one of\n> > them is explained with a comment (/* changingPart */) - think we should\n> > do that for all.\n> \n> Maybe change the API to use a flags bitmask?\n> \n> (IMO the placement of the comment inside the function call, making the\n> comma appear preceded with a space, looks ugly. If we want to add\n> comments, let's put each param on its own line with the comment beyond\n> the comma. That's what we do in other places where this pattern is\n> used.)\n\nWell, that's the pre-existing style, so I'd just have gone with\nthat. I'm not sure I buy there's much point in going for a bitmask, as\nthis is file-private code, not code where changing the signature\nrequires modifying multiple files.\n\n\n> > > \t/* Initialize the executor state. */\n> > > -\testate = create_estate_for_relation(rel);\n> > > +\testate = create_estate_for_relation(rel, &resultRelInfo);\n> > \n> > Hm. It kinda seems cleaner if we were to instead return the relevant\n> > index, rather than the entire ResultRelInfo, as an output from\n> > create_estate_for_relation(). Makes it clearer that it's still in the\n> > EState.\n> \n> Yeah.\n> \n> > Or perhaps we ought to compute it in a separate step? 
Then that'd be\n> > more amenable to support replcating into partition roots.\n> \n> I'm not quite seeing the shape that you're imagining this would take.\n> I vote not to mess with that for this patch; I bet that we'll have to\n> change a few other things in this code when we add better support for\n> partitioning in logical replication.\n\nYea, I think it's fine to do that separately. If we wanted to support\nreplication roots as replication targets, we'd obviously need to do\nsomething pretty similar to what ExecInsert()/ExecUpdate() already\ndo. And there we can't just reference an index in EState, as partition\nchildren aren't in there.\n\nI kind of was wondering if we were to have a separate function for\ngetting the ResultRelInfo targeted, we'd be able to just extend that\nfunction to support replication. But now that I think about it a bit\nmore, that's so much just scratching the surface...\n\nWe really ought to have the replication \"sink\" code share more code with\nnodeModifyTable.c.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 19 Jul 2019 14:17:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi Andres,\n\nSorry about the delay in replying as I was on vacation for the last few days.\n\nOn Sat, Jul 20, 2019 at 1:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > The first one (0001) deals with reducing the core executor's reliance\n> > on es_result_relation_info to access the currently active result\n> > relation, in favor of receiving it from the caller as a function\n> > argument. So no piece of core code relies on it being correctly set\n> > anymore. It still needs to be set correctly for the third-party code\n> > such as FDWs.\n>\n> I'm inclined to just remove it. There's not much code out there relying\n> on it, as far as I can tell. Most FDWs don't support the direct modify\n> API, and that's afaict the case where we one needs to use\n> es_result_relation_info?\n\nRight, only the directly modify API uses it.\n\n> In fact, I searched through alllFDWs listed on https://wiki.postgresql.org/wiki/Foreign_data_wrappers\n> that are on github and in first few categories (up and including to\n> \"file wrappers\"), and there was only one reference to\n> es_result_relation_info, and that just in comments in a test:\n> https://github.com/pgspider/griddb_fdw/search?utf8=%E2%9C%93&q=es_result_relation_info&type=\n> which I think was just copied from our source code.\n>\n> IOW, we should just change the direct modify calls to get the relevant\n> ResultRelationInfo or something in that vein (perhaps just the relevant\n> RT index?).\n\nIt seems easy to make one of the two functions that constitute the\ndirect modify API, IterateDirectModify(), access the result relation\nfrom ForeignScanState by saving either the result relation RT index or\nResultRelInfo pointer itself into the ForeignScanState's FDW-private\narea. For example, for postgres_fdw, one would simply add a new\nmember to PgFdwDirectModifyState struct.\n\nDoing that for the other function BeginDirectModify() seems a bit more\ninvolved. 
We could add a new field to ForeignScan, say\nresultRelation, that's set by either PlanDirectModify() (the FDW code)\nor make_modifytable() (the core code) if the ForeignScan node contains\nthe command for direct modification. BeginDirectModify() can then use\nthat value instead of relying on es_result_relation_info being set.\n\nThoughts? Fujita-san, do you have any opinion on whether that would\nbe a good idea?\n\n> pglogical also references it, but just because it creates its own\n> EState afaict.\n\nThat sounds easily manageable.\n\n> > @@ -334,32 +335,50 @@ ExecComputeStoredGenerated(EState *estate, TupleTableSlot *slot)\n> > * ExecInsert\n> > *\n> > * For INSERT, we have to insert the tuple into the target relation\n> > - * and insert appropriate tuples into the index relations.\n> > + * (or partition thereof) and insert appropriate tuples into the index\n> > + * relations.\n> > *\n> > * Returns RETURNING result if any, otherwise NULL.\n> > + *\n> > + * This may change the currently active tuple conversion map in\n> > + * mtstate->mt_transition_capture, so the callers must take care to\n> > + * save the previous value to avoid losing track of it.\n> > * ----------------------------------------------------------------\n> > */\n> > static TupleTableSlot *\n> > ExecInsert(ModifyTableState *mtstate,\n> > + ResultRelInfo *resultRelInfo,\n> > TupleTableSlot *slot,\n> > TupleTableSlot *planSlot,\n> > EState *estate,\n> > bool canSetTag)\n> > {\n> > - ResultRelInfo *resultRelInfo;\n> > Relation resultRelationDesc;\n> > List *recheckIndexes = NIL;\n> > TupleTableSlot *result = NULL;\n> > TransitionCaptureState *ar_insert_trig_tcs;\n> > ModifyTable *node = (ModifyTable *) mtstate->ps.plan;\n> > OnConflictAction onconflict = node->onConflictAction;\n> > + PartitionTupleRouting *proute = mtstate->mt_partition_tuple_routing;\n> > +\n> > + /*\n> > + * If the input result relation is a partitioned table, find the leaf\n> > + * partition to insert the tuple into.\n> > + 
*/\n> > + if (proute)\n> > + {\n> > + ResultRelInfo *partRelInfo;\n> > +\n> > + slot = ExecPrepareTupleRouting(mtstate, estate, proute,\n> > + resultRelInfo, slot,\n> > + &partRelInfo);\n> > + resultRelInfo = partRelInfo;\n> > + /* Result relation has changed, so update EState reference too. */\n> > + estate->es_result_relation_info = resultRelInfo;\n> > + }\n>\n> I think by removing es_result_relation entirely, this would look\n> cleaner.\n\nI agree. Maybe, setting es_result_relation_info here isn't really\nneeded, because the ResultRelInfo is directly passed through\nExecForeignInsert. Still, some FDWs may be relying on\nes_result_relation_info being correctly set despite the\naforementioned. Again, the only way to get them to stop doing so may\nbe to remove it.\n\n\n> > @@ -1271,18 +1274,18 @@ lreplace:;\n> > mtstate->mt_root_tuple_slot);\n> >\n> > /*\n> > - * Prepare for tuple routing, making it look like we're inserting\n> > - * into the root.\n> > + * ExecInsert() may scribble on mtstate->mt_transition_capture,\n> > + * so save the currently active map.\n> > */\n> > + if (mtstate->mt_transition_capture)\n> > + saved_tcs_map = mtstate->mt_transition_capture->tcs_map;\n>\n> Wonder if we could remove the need for this somehow, it's still pretty\n> darn ugly. Thomas, perhaps you have some insights?\n>\n> To me the need to modify these ModifyTable wide state on a per-subplan\n> and even per-partition basis indicates that the datastructures are in\n> the wrong place.\n\nI agree that having to ensure tcs_map is set correctly is cumbersome,\nbecause it has to be reset every time the currently active result\nrelation changes. I think a better place for the map to be is\nResultRelInfo itself. The trigger code can just get the correct map\nfrom the ResultRelInfo of the result relation it's processing.\n\nRegarding that idea, the necessary map is already present in the\ntuple-routing state struct that's embedded in the partition's\nResultRelInfo. 
But the UPDATE result relations that are never\nprocessed as tuple routing targets don't have routing info initialized\n(also think non-partition inheritance children), so we could add\nanother TupleConversionMap * field in ResultRelInfo. Attached patch\n0003 implements that.\n\nWith this change, we no longer need to track the map in a global\nvariable, that is, TransitionCaptureState no longer needs tcs_map. We\nstill have tcs_original_insert_tuple though, which must be set during\nExecInsert and reset after it's read by AfterTriggerSaveEvent. I have\nmoved the resetting of its value to right after where the originally\nset value is read to make it clear that the value must be read only\nonce.\n\n> > @@ -2212,23 +2207,17 @@ ExecModifyTable(PlanState *pstate)\n> > switch (operation)\n> > {\n> > case CMD_INSERT:\n> > - /* Prepare for tuple routing if needed. */\n> > - if (proute)\n> > - slot = ExecPrepareTupleRouting(node, estate, proute,\n> > - resultRelInfo, slot);\n> > - slot = ExecInsert(node, slot, planSlot,\n> > + slot = ExecInsert(node, resultRelInfo, slot, planSlot,\n> > estate, node->canSetTag);\n> > - /* Revert ExecPrepareTupleRouting's state change. 
*/\n> > - if (proute)\n> > - estate->es_result_relation_info = resultRelInfo;\n> > break;\n> > case CMD_UPDATE:\n> > - slot = ExecUpdate(node, tupleid, oldtuple, slot, planSlot,\n> > - &node->mt_epqstate, estate, node->canSetTag);\n> > + slot = ExecUpdate(node, resultRelInfo, tupleid, oldtuple, slot,\n> > + planSlot, &node->mt_epqstate, estate,\n> > + node->canSetTag);\n> > break;\n> > case CMD_DELETE:\n> > - slot = ExecDelete(node, tupleid, oldtuple, planSlot,\n> > - &node->mt_epqstate, estate,\n> > + slot = ExecDelete(node, resultRelInfo, tupleid, oldtuple,\n> > + planSlot, &node->mt_epqstate, estate,\n> > true, node->canSetTag,\n> > false /* changingPart */ , NULL, NULL);\n> > break;\n>\n> This reminds me of another complaint: ExecDelete and ExecInsert() have\n> gotten more boolean parameters for partition moving, but only one of\n> them is explained with a comment (/* changingPart */) - think we should\n> do that for all.\n\nAgree about the confusing state of ExecDelete call sites. I've\nreformatted the calls to properly label the arguments (the changes are\ncontained in the revised 0001). 
I don't see many\npartitioning-specific boolean parameters in ExecInsert though.\n\n> > diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c\n> > index 43edfef089..7df3e78b22 100644\n> > --- a/src/backend/replication/logical/worker.c\n> > +++ b/src/backend/replication/logical/worker.c\n> > @@ -173,10 +173,10 @@ ensure_transaction(void)\n> > * This is based on similar code in copy.c\n> > */\n> > static EState *\n> > -create_estate_for_relation(LogicalRepRelMapEntry *rel)\n> > +create_estate_for_relation(LogicalRepRelMapEntry *rel,\n> > + ResultRelInfo **resultRelInfo)\n> > {\n> > EState *estate;\n> > - ResultRelInfo *resultRelInfo;\n> > RangeTblEntry *rte;\n> >\n> > estate = CreateExecutorState();\n> > @@ -188,12 +188,11 @@ create_estate_for_relation(LogicalRepRelMapEntry *rel)\n> > rte->rellockmode = AccessShareLock;\n> > ExecInitRangeTable(estate, list_make1(rte));\n> >\n> > - resultRelInfo = makeNode(ResultRelInfo);\n> > - InitResultRelInfo(resultRelInfo, rel->localrel, 1, NULL, 0);\n> > + *resultRelInfo = makeNode(ResultRelInfo);\n> > + InitResultRelInfo(*resultRelInfo, rel->localrel, 1, NULL, 0);\n> >\n> > - estate->es_result_relations = resultRelInfo;\n> > + estate->es_result_relations = *resultRelInfo;\n> > estate->es_num_result_relations = 1;\n> > - estate->es_result_relation_info = resultRelInfo;\n> >\n> > estate->es_output_cid = GetCurrentCommandId(true);\n> >\n> > @@ -567,6 +566,7 @@ GetRelationIdentityOrPK(Relation rel)\n> > static void\n> > apply_handle_insert(StringInfo s)\n> > {\n> > + ResultRelInfo *resultRelInfo;\n> > LogicalRepRelMapEntry *rel;\n> > LogicalRepTupleData newtup;\n> > LogicalRepRelId relid;\n> > @@ -589,7 +589,7 @@ apply_handle_insert(StringInfo s)\n> > }\n> >\n> > /* Initialize the executor state. */\n> > - estate = create_estate_for_relation(rel);\n> > + estate = create_estate_for_relation(rel, &resultRelInfo);\n>\n> Hm. 
It kinda seems cleaner if we were to instead return the relevant\n> index, rather than the entire ResultRelInfo, as an output from\n> create_estate_for_relation(). Makes it clearer that it's still in the\n> EState.\n\nFor now, I've reverted these changes in favor of just doing this:\n\n /* Initialize the executor state. */\n estate = create_estate_for_relation(rel);\n+ resultRelInfo = &estate->es_result_relations[0];\n\nThis seems OK as we know for sure that there is only one target relation.\n\n> Or perhaps we ought to compute it in a separate step? Then that'd be\n> more amenable to support replcating into partition roots.\n\nIf we think of create_estate_for_relation() being like InitPlan(),\nthen perhaps it makes sense to leave it as is. Any setup needed for\nreplicating into partition roots will have to be in a separate\nfunction anyway.\n\n> > +/*\n> > + * ExecMove\n> > + * Move an updated tuple from the input result relation to the\n> > + * new partition of its root parent table\n> > + *\n> > + * This works by first deleting the tuple from the input result relation\n> > + * followed by inserting it into the root parent table, that is,\n> > + * mtstate->rootResultRelInfo.\n> > + *\n> > + * Returns true if it's detected that the tuple we're trying to move has\n> > + * been concurrently updated.\n> > + */\n> > +static bool\n> > +ExecMove(ModifyTableState *mtstate, ResultRelInfo *resultRelInfo,\n> > + ItemPointer tupleid, HeapTuple oldtuple, TupleTableSlot *planSlot,\n> > + EPQState *epqstate, bool canSetTag, TupleTableSlot **slot,\n> > + TupleTableSlot **inserted_tuple)\n> > +{\n>\n> I know that it was one of the names I proposed, but now that I'm\n> thinking about it again, it sounds too generic. Perhaps\n> ExecCrossPartitionUpdate() wouldn't be a quite so generic name? Since\n> there's only one reference the longer name wouldn't be painful.\n\nOK, I've renamed ExecMove to ExecCrossPartitionUpdate.\n\n> > + /*\n> > + * Row movement, part 1. 
Delete the tuple, but skip RETURNING\n> > + * processing. We want to return rows from INSERT.\n> > + */\n> > + ExecDelete(mtstate, resultRelInfo, tupleid, oldtuple, planSlot,\n> > + epqstate, estate, false, false /* canSetTag */ ,\n> > + true /* changingPart */ , &tuple_deleted, &epqslot);\n>\n> Here again it'd be nice if all the booleans would be explained with a\n> comment.\n\nDone too.\n\nAttached updated 0001, 0002, and the new 0003 for transition tuple\nconversion map related refactoring as explained above.\n\nThanks,\nAmit",
"msg_date": "Tue, 30 Jul 2019 16:20:53 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 4:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Jul 20, 2019 at 1:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > The first one (0001) deals with reducing the core executor's reliance\n> > > on es_result_relation_info to access the currently active result\n> > > relation, in favor of receiving it from the caller as a function\n> > > argument. So no piece of core code relies on it being correctly set\n> > > anymore. It still needs to be set correctly for the third-party code\n> > > such as FDWs.\n> >\n> > I'm inclined to just remove it. There's not much code out there relying\n> > on it, as far as I can tell. Most FDWs don't support the direct modify\n> > API, and that's afaict the case where one needs to use\n> > es_result_relation_info?\n>\n> Right, only the direct modify API uses it.\n>\n> > In fact, I searched through all FDWs listed on https://wiki.postgresql.org/wiki/Foreign_data_wrappers\n> > that are on github and in the first few categories (up to and including\n> > \"file wrappers\"), and there was only one reference to\n> > es_result_relation_info, and that just in comments in a test:\n> > https://github.com/pgspider/griddb_fdw/search?utf8=%E2%9C%93&q=es_result_relation_info&type=\n> > which I think was just copied from our source code.\n> >\n> > IOW, we should just change the direct modify calls to get the relevant\n> > ResultRelationInfo or something in that vein (perhaps just the relevant\n> > RT index?).\n>\n> It seems easy to make one of the two functions that constitute the\n> direct modify API, IterateDirectModify(), access the result relation\n> from ForeignScanState by saving either the result relation RT index or\n> ResultRelInfo pointer itself into the ForeignScanState's FDW-private\n> area. For example, for postgres_fdw, one would simply add a new\n> member to PgFdwDirectModifyState struct.\n>\n> Doing that for the other function BeginDirectModify() seems a bit more\n> involved. 
We could add a new field to ForeignScan, say\n> resultRelation, that's set by either PlanDirectModify() (the FDW code)\n> or make_modifytable() (the core code) if the ForeignScan node contains\n> the command for direct modification. BeginDirectModify() can then use\n> that value instead of relying on es_result_relation_info being set.\n>\n> Thoughts? Fujita-san, do you have any opinion on whether that would\n> be a good idea?\n\nI looked into trying to do the things I mentioned above and it seems\nto me that revising BeginDirectModify()'s API to receive the\nResultRelInfo directly as Andres suggested might be the best way\nforward. I've implemented that in the attached 0001. Patches that\nwere previously 0001, 0002, and 0003 are now 0002, 0003, and 0004,\nrespectively. 0002 is now a patch to \"remove\"\nes_result_relation_info.\n\nThanks,\nAmit",
"msg_date": "Wed, 31 Jul 2019 17:04:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 5:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jul 30, 2019 at 4:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sat, Jul 20, 2019 at 1:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > IOW, we should just change the direct modify calls to get the relevant\n> > > ResultRelationInfo or something in that vein (perhaps just the relevant\n> > > RT index?).\n> >\n> > It seems easy to make one of the two functions that constitute the\n> > direct modify API, IterateDirectModify(), access the result relation\n> > from ForeignScanState by saving either the result relation RT index or\n> > ResultRelInfo pointer itself into the ForeignScanState's FDW-private\n> > area. For example, for postgres_fdw, one would simply add a new\n> > member to PgFdwDirectModifyState struct.\n> >\n> > Doing that for the other function BeginDirectModify() seems a bit more\n> > involved. We could add a new field to ForeignScan, say\n> > resultRelation, that's set by either PlanDirectModify() (the FDW code)\n> > or make_modifytable() (the core code) if the ForeignScan node contains\n> > the command for direct modification. BeginDirectModify() can then use\n> > that value instead of relying on es_result_relation_info being set.\n> >\n> > Thoughts? Fujita-san, do you have any opinion on whether that would\n> > be a good idea?\n\nI'm still not sure that it's a good idea to remove\nes_result_relation_info, but if I had to say then I think the latter\nwould probably be better. I'm planning to rework on direct\nmodification to base it on upper planner pathification so we can\nperform direct modification without the ModifyTable node. (I'm not\nsure we can really do this for inherited UPDATE/DELETE, though.) For\nthat rewrite, I'm thinking to call BeginDirectModify() from the\nForeignScan node (ie, ExecInitForeignScan()) as-is. The latter\napproach would allow that without any changes and avoid changing that\nAPI many times. 
That's the reason why I think the latter would\nprobably be better.\n\n> I looked into trying to do the things I mentioned above and it seems\n> to me that revising BeginDirectModify()'s API to receive the\n> ResultRelInfo directly as Andres suggested might be the best way\n> forward. I've implemented that in the attached 0001. Patches that\n> were previously 0001, 0002, and 0003 are now 0002, 003, and 0004,\n> respectively. 0002 is now a patch to \"remove\"\n> es_result_relation_info.\n\nSorry for speaking this late.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 31 Jul 2019 21:03:58 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 21:03:58 +0900, Etsuro Fujita wrote:\n> I'm still not sure that it's a good idea to remove\n> es_result_relation_info, but if I had to say then I think the latter\n> would probably be better. I'm planning to rework on direct\n> modification to base it on upper planner pathification so we can\n> perform direct modification without the ModifyTable node. (I'm not\n> sure we can really do this for inherited UPDATE/DELETE, though.) For\n> that rewrite, I'm thinking to call BeginDirectModify() from the\n> ForeignScan node (ie, ExecInitForeignScan()) as-is. The latter\n> approach would allow that without any changes and avoid changing that\n> API many times. That's the reason why I think the latter would\n> probably be better.\n\nI think if we did that, it'd become *more* urgent to remove\nes_result_relation. Having more and more plan nodes change global\nresources is a recipe for disaster.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 08:35:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nThanks for the reply and sorry I didn't wait a bit more before posting\nthe patch.\n\nOn Wed, Jul 31, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Jul 31, 2019 at 5:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jul 30, 2019 at 4:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Sat, Jul 20, 2019 at 1:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > IOW, we should just change the direct modify calls to get the relevant\n> > > > ResultRelationInfo or something in that vein (perhaps just the relevant\n> > > > RT index?).\n> > >\n> > > It seems easy to make one of the two functions that constitute the\n> > > direct modify API, IterateDirectModify(), access the result relation\n> > > from ForeignScanState by saving either the result relation RT index or\n> > > ResultRelInfo pointer itself into the ForeignScanState's FDW-private\n> > > area. For example, for postgres_fdw, one would simply add a new\n> > > member to PgFdwDirectModifyState struct.\n> > >\n> > > Doing that for the other function BeginDirectModify() seems a bit more\n> > > involved. We could add a new field to ForeignScan, say\n> > > resultRelation, that's set by either PlanDirectModify() (the FDW code)\n> > > or make_modifytable() (the core code) if the ForeignScan node contains\n> > > the command for direct modification. BeginDirectModify() can then use\n> > > that value instead of relying on es_result_relation_info being set.\n> > >\n> > > Thoughts? 
Fujita-san, do you have any opinion on whether that would\n> > > be a good idea?\n>\n> I'm still not sure that it's a good idea to remove\n> es_result_relation_info, but if I had to say then I think the latter\n> would probably be better.\n\nCould you please clarify what you meant by the \"latter\"?\n\nIf it's the approach of adding a resultRelation Index field to\nForeignScan node, I tried and had to give up, realizing that we don't\nmaintain ResultRelInfos in an array that is indexable by RT indexes.\nIt would've worked if es_result_relations had mirrored es_range_table,\nalthough that probably complicates how the individual ModifyTable\nnodes attach to that array. In any case, given this discussion,\nfurther hacking on a global variable like es_result_relations may be a\ncourse we might not want to pursue.\n\n> I'm planning to rework on direct\n> modification to base it on upper planner pathification so we can\n> perform direct modification without the ModifyTable node. (I'm not\n> sure we can really do this for inherited UPDATE/DELETE, though.) For\n> that rewrite, I'm thinking to call BeginDirectModify() from the\n> ForeignScan node (ie, ExecInitForeignScan()) as-is. The latter\n> approach would allow that without any changes and avoid changing that\n> API many times. That's the reason why I think the latter would\n> probably be better.\n\nWill the new planning approach you're thinking of get rid of needing\nany result relations at all (and so the ResultRelInfos in the\nexecutor)?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 1 Aug 2019 10:32:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Amit-san,\n\nOn Thu, Aug 1, 2019 at 10:33 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 31, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Wed, Jul 31, 2019 at 5:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Tue, Jul 30, 2019 at 4:20 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > On Sat, Jul 20, 2019 at 1:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > IOW, we should just change the direct modify calls to get the relevant\n> > > > > ResultRelationInfo or something in that vein (perhaps just the relevant\n> > > > > RT index?).\n> > > >\n> > > > It seems easy to make one of the two functions that constitute the\n> > > > direct modify API, IterateDirectModify(), access the result relation\n> > > > from ForeignScanState by saving either the result relation RT index or\n> > > > ResultRelInfo pointer itself into the ForeignScanState's FDW-private\n> > > > area. For example, for postgres_fdw, one would simply add a new\n> > > > member to PgFdwDirectModifyState struct.\n> > > >\n> > > > Doing that for the other function BeginDirectModify() seems a bit more\n> > > > involved. We could add a new field to ForeignScan, say\n> > > > resultRelation, that's set by either PlanDirectModify() (the FDW code)\n> > > > or make_modifytable() (the core code) if the ForeignScan node contains\n> > > > the command for direct modification. BeginDirectModify() can then use\n> > > > that value instead of relying on es_result_relation_info being set.\n> > > >\n> > > > Thoughts? 
Fujita-san, do you have any opinion on whether that would\n> > > > be a good idea?\n> >\n> > I'm still not sure that it's a good idea to remove\n> > es_result_relation_info, but if I had to say then I think the latter\n> > would probably be better.\n>\n> Could you please clarify what you meant by the \"latter\"?\n>\n> If it's the approach of adding a resultRelation Index field to\n> ForeignScan node, I tried and had to give up, realizing that we don't\n> maintain ResultRelInfos in an array that is indexable by RT indexes.\n> It would've worked if es_result_relations had mirrored es_range_table,\n> although that probably complicates how the individual ModifyTable\n> nodes attach to that array. In any case, given this discussion,\n> further hacking on a global variable like es_result_relations may be a\n> course we might not want to pursue.\n\nYeah, I mean that approach. To get the ResultRelInfo, I think we can\nsearch through es_result_relations for the ResultRelInfo matching the\nresultRelation added to the ForeignScan in BeginDirectModify(), as in\nthe attached, which was created along the lines of your proposal.\nExecFindResultRelInfo() added by the patch wouldn't work efficiently\nfor inherited UPDATE/DELETE where there are many children that are\nforeign tables, but I think that would probably be OK because in most\nuse-cases, including sharding, the number of such children would be at\nmost < 100 or so. For improving the efficiency for the cases where\nthere are a lot more such children, however, I think it would be an\noption to do something about global variables so that we can access\nthe ResultRelInfos by RT indexes more efficiently, because IMO I don't\nthink that would be against the point here, i.e., removing the dependency\non es_result_relation_info. Maybe I'm missing something, though.\n\n> > I'm planning to rework on direct\n> > modification to base it on upper planner pathification so we can\n> > perform direct modification without the ModifyTable node. 
(I'm not\n> > sure we can really do this for inherited UPDATE/DELETE, though.) For\n> > that rewrite, I'm thinking to call BeginDirectModify() from the\n> > ForeignScan node (ie, ExecInitForeignScan()) as-is. The latter\n> > approach would allow that without any changes and avoid changing that\n> > API many times. That's the reason why I think the latter would\n> > probably be better.\n>\n> Will the new planning approach you're thinking of get rid of needing\n> any result relations at all (and so the ResultRelInfos in the\n> executor)?\n\nI think the new planning approach would still need result relations\nand ResultRelInfos in the executor as-is; and the FDW would probably\nuse the ResultRelInfo for the foreign table created by the core. Some\nof the ResultRelInfo data would probably need to be initialized by the\nFDW itself, though (eg, WCO constraints and/or RETURNING if any).\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 1 Aug 2019 18:38:09 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 17:04:38 +0900, Amit Langote wrote:\n> I looked into trying to do the things I mentioned above and it seems\n> to me that revising BeginDirectModify()'s API to receive the\n> ResultRelInfo directly as Andres suggested might be the best way\n> forward. I've implemented that in the attached 0001. Patches that\n> were previously 0001, 0002, and 0003 are now 0002, 0003, and 0004,\n> respectively. 0002 is now a patch to \"remove\"\n> es_result_relation_info.\n\nThanks! Some minor quibbles aside, the non-FDW patches look good to me.\n\nFujita-san, do you have any comments on the FDW API change? Or anybody\nelse?\n\nI'm a bit worried about the move of BeginDirectModify() into\nnodeModifyTable.c - it just seems like an odd control flow to me. Not\nallowing any intermediate nodes between ForeignScan and ModifyTable also\nseems like an undesirable restriction for the future. I realize that we\nalready do that for BeginForeignModify() (just btw, that already accepts\nresultRelInfo as a parameter, so being symmetrical for BeginDirectModify\nmakes sense), but it still seems like the wrong direction to me.\n\nThe need for that move, I assume, comes from needing to know the correct\nResultRelInfo, correct? I wonder if we shouldn't instead determine that\nat plan time (in setrefs.c), somewhat similar to how we determine\nModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n\nThen we could just have BeginForeignModify, BeginDirectModify,\nBeginForeignScan all be called from ExecInitForeignScan().\n\n\n\nPatch 04 is such a nice improvement. Besides getting rid of a substantial\namount of code, it also makes the control flow a lot easier to read.\n\n\n> @@ -4644,9 +4645,7 @@ GetAfterTriggersTableData(Oid relid, CmdType cmdType)\n> * If there are no triggers in 'trigdesc' that request relevant transition\n> * tables, then return NULL.\n> *\n> - * The resulting object can be passed to the ExecAR* functions. 
The caller\n> - * should set tcs_map or tcs_original_insert_tuple as appropriate when dealing\n> - * with child tables.\n> + * The resulting object can be passed to the ExecAR* functions.\n> *\n> * Note that we copy the flags from a parent table into this struct (rather\n> * than subsequently using the relation's TriggerDesc directly) so that we can\n> @@ -5750,14 +5749,26 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,\n> \t */\n> \tif (row_trigger && transition_capture != NULL)\n> \t{\n> -\t\tTupleTableSlot *original_insert_tuple = transition_capture->tcs_original_insert_tuple;\n> -\t\tTupleConversionMap *map = transition_capture->tcs_map;\n> +\t\tTupleTableSlot *original_insert_tuple;\n> +\t\tPartitionRoutingInfo *pinfo = relinfo->ri_PartitionInfo;\n> +\t\tTupleConversionMap *map = pinfo ?\n> +\t\t\t\t\t\t\t\tpinfo->pi_PartitionToRootMap :\n> +\t\t\t\t\t\t\t\trelinfo->ri_ChildToRootMap;\n> \t\tbool\t\tdelete_old_table = transition_capture->tcs_delete_old_table;\n> \t\tbool\t\tupdate_old_table = transition_capture->tcs_update_old_table;\n> \t\tbool\t\tupdate_new_table = transition_capture->tcs_update_new_table;\n> \t\tbool\t\tinsert_new_table = transition_capture->tcs_insert_new_table;\n> \n> \t\t/*\n> +\t\t * Get the originally inserted tuple from the global variable and set\n> +\t\t * the latter to NULL because any given tuple must be read only once.\n> +\t\t * Note that the TransitionCaptureState is shared across many calls\n> +\t\t * to this function.\n> +\t\t */\n> +\t\toriginal_insert_tuple = transition_capture->tcs_original_insert_tuple;\n> +\t\ttransition_capture->tcs_original_insert_tuple = NULL;\n\nMaybe I'm missing something, but original_insert_tuple is not a global\nvariable?\n\n\n> @@ -888,7 +889,8 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\n> \t\t\t\t\tPartitionTupleRouting *proute,\n> \t\t\t\t\tPartitionDispatch dispatch,\n> \t\t\t\t\tResultRelInfo *partRelInfo,\n> -\t\t\t\t\tint partidx)\n> +\t\t\t\t\tint partidx,\n> 
+\t\t\t\t\tbool is_update_result_rel)\n> {\n> \tMemoryContext oldcxt;\n> \tPartitionRoutingInfo *partrouteinfo;\n> @@ -935,10 +937,15 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\n> \tif (mtstate &&\n> \t\t(mtstate->mt_transition_capture || mtstate->mt_oc_transition_capture))\n> \t{\n> -\t\tpartrouteinfo->pi_PartitionToRootMap =\n> -\t\t\tconvert_tuples_by_name(RelationGetDescr(partRelInfo->ri_RelationDesc),\n> -\t\t\t\t\t\t\t\t RelationGetDescr(partRelInfo->ri_PartitionRoot),\n> -\t\t\t\t\t\t\t\t gettext_noop(\"could not convert row type\"));\n> +\t\t/* If partition is an update target, then we already got the map. */\n> +\t\tif (is_update_result_rel)\n> +\t\t\tpartrouteinfo->pi_PartitionToRootMap =\n> +\t\t\t\tpartRelInfo->ri_ChildToRootMap;\n> +\t\telse\n> +\t\t\tpartrouteinfo->pi_PartitionToRootMap =\n> +\t\t\t\tconvert_tuples_by_name(RelationGetDescr(partRelInfo->ri_RelationDesc),\n> +\t\t\t\t\t\t\t\t\t RelationGetDescr(partRelInfo->ri_PartitionRoot),\n> +\t\t\t\t\t\t\t\t\t gettext_noop(\"could not convert row type\"));\n> \t}\n\nHm, isn't is_update_result_rel just ModifyTable->operation == CMD_UPDATE?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 11:01:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Sat, Aug 3, 2019 at 3:01 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-07-31 17:04:38 +0900, Amit Langote wrote:\n> > I looked into trying to do the things I mentioned above and it seems\n> > to me that revising BeginDirectModify()'s API to receive the\n> > ResultRelInfo directly as Andres suggested might be the best way\n> > forward. I've implemented that in the attached 0001.\n\n> Fujita-san, do you have any comments on the FDW API change? Or anybody\n> else?\n>\n> I'm a bit woried about the move of BeginDirectModify() into\n> nodeModifyTable.c - it just seems like an odd control flow to me. Not\n> allowing any intermittent nodes between ForeignScan and ModifyTable also\n> seems like an undesirable restriction for the future. I realize that we\n> already do that for BeginForeignModify() (just btw, that already accepts\n> resultRelInfo as a parameter, so being symmetrical for BeginDirectModify\n> makes sense), but it still seems like the wrong direction to me.\n>\n> The need for that move, I assume, comes from needing knowing the correct\n> ResultRelInfo, correct? I wonder if we shouldn't instead determine the\n> at plan time (in setrefs.c), somewhat similar to how we determine\n> ModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n\nI'd vote for that; I created a patch for that [1].\n\n> Then we could just have BeginForeignModify, BeginDirectModify,\n> BeginForeignScan all be called from ExecInitForeignScan().\n\nI think so too.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK15%3DoFHmWNND5vopfokSGfn6jMXVvnHa7K7P49F7k1hWPQ%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 3 Aug 2019 05:20:35 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-03 05:20:35 +0900, Etsuro Fujita wrote:\n> On Sat, Aug 3, 2019 at 3:01 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-07-31 17:04:38 +0900, Amit Langote wrote:\n> > > I looked into trying to do the things I mentioned above and it seems\n> > > to me that revising BeginDirectModify()'s API to receive the\n> > > ResultRelInfo directly as Andres suggested might be the best way\n> > > forward. I've implemented that in the attached 0001.\n>\n> > Fujita-san, do you have any comments on the FDW API change? Or anybody\n> > else?\n> >\n> > I'm a bit worried about the move of BeginDirectModify() into\n> > nodeModifyTable.c - it just seems like an odd control flow to me. Not\n> > allowing any intermediate nodes between ForeignScan and ModifyTable also\n> > seems like an undesirable restriction for the future. I realize that we\n> > already do that for BeginForeignModify() (just btw, that already accepts\n> > resultRelInfo as a parameter, so being symmetrical for BeginDirectModify\n> > makes sense), but it still seems like the wrong direction to me.\n> >\n> > The need for that move, I assume, comes from needing to know the correct\n> > ResultRelInfo, correct? I wonder if we shouldn't instead determine that\n> > at plan time (in setrefs.c), somewhat similar to how we determine\n> > ModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n>\n> I'd vote for that; I created a patch for that [1].\n>\n> [1] https://www.postgresql.org/message-id/CAPmGK15%3DoFHmWNND5vopfokSGfn6jMXVvnHa7K7P49F7k1hWPQ%40mail.gmail.com\n\nOh, missed that. But that's not quite what I'm proposing. I don't like\nExecFindResultRelInfo at all. What's the point of it? Its introduction\nis still an API break - I don't understand why that break is better than\njust passing the ResultRelInfo directly to BeginDirectModify()? 
I want\nto again remark that BeginForeignModify() does get the ResultRelInfo -\nit should have been done the same when adding direct modify.\n\nEven if you need the loop - which I don't think is right - it should\nlive somewhere that individual FDWs don't have to care about.\n\n- Andres\n\n\n",
"msg_date": "Fri, 2 Aug 2019 13:30:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 18:38:09 +0900, Etsuro Fujita wrote:\n> On Thu, Aug 1, 2019 at 10:33 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > If it's the approach of adding a resultRelation Index field to\n> > ForeignScan node, I tried and had to give up, realizing that we don't\n> > maintain ResultRelInfos in an array that is indexable by RT indexes.\n> > It would've worked if es_result_relations had mirrored es_range_table,\n> > although that probably complicates how the individual ModifyTable\n> > nodes attach to that array.\n\nWe know at plan time what the resultRelation offset for a\nModifyTable node is. We just need to transport that to the respective\nforeign scan node, and update it properly in setrefs? Then we can index\nes_result_relations without any additional mapping?\n\nMaybe I'm missing something? I think all we need to do is to have\nsetrefs.c:set_plan_refs() iterate over ->fdwDirectModifyPlans or such,\nand set the respective node's result_relation_offset or whatever we're\nnaming it to splan->resultRelIndex + offset from fdwDirectModifyPlans?\n\n\n> > In any case, given this discussion, further hacking on a global\n> > variable like es_result_relations may be a course we might not want\n> > to pursue.\n\nI don't think es_result_relations really is a problem - it doesn't have to\nchange while processing individual subplans / partitions / whatnot. If\nwe needed a mapping between rtis and result indexes, I'd not see a\nproblem. Doubtful it's needed though.\n\nThere's a fundamental difference between EState->es_result_relations and\nEState->es_result_relation_info. The former stays static during the\nwhole query once initialized, whereas es_result_relation_info changes\ndepending on which relation we're processing. The latter is what makes\nthe code more complicated, because we cannot ever return early etc.\n\nSimilarly, ModifyTableState->mt_per_subplan_tupconv_maps is not a\nproblem, it stays static, but e.g. 
mtstate->mt_transition_capture is a\nproblem, because we have to change it for each subplan / routing /\npartition movement.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 14:01:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Sat, Aug 3, 2019 at 5:31 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-08-03 05:20:35 +0900, Etsuro Fujita wrote:\n> > On Sat, Aug 3, 2019 at 3:01 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2019-07-31 17:04:38 +0900, Amit Langote wrote:\n> > > > I looked into trying to do the things I mentioned above and it seems\n> > > > to me that revising BeginDirectModify()'s API to receive the\n> > > > ResultRelInfo directly as Andres suggested might be the best way\n> > > > forward. I've implemented that in the attached 0001.\n> >\n> > > Fujita-san, do you have any comments on the FDW API change? Or anybody\n> > > else?\n> > >\n> > > I'm a bit woried about the move of BeginDirectModify() into\n> > > nodeModifyTable.c - it just seems like an odd control flow to me. Not\n> > > allowing any intermittent nodes between ForeignScan and ModifyTable also\n> > > seems like an undesirable restriction for the future. I realize that we\n> > > already do that for BeginForeignModify() (just btw, that already accepts\n> > > resultRelInfo as a parameter, so being symmetrical for BeginDirectModify\n> > > makes sense), but it still seems like the wrong direction to me.\n> > >\n> > > The need for that move, I assume, comes from needing knowing the correct\n> > > ResultRelInfo, correct? I wonder if we shouldn't instead determine the\n> > > at plan time (in setrefs.c), somewhat similar to how we determine\n> > > ModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n> >\n> > I'd vote for that; I created a patch for that [1].\n> >\n> > [1] https://www.postgresql.org/message-id/CAPmGK15%3DoFHmWNND5vopfokSGfn6jMXVvnHa7K7P49F7k1hWPQ%40mail.gmail.com\n>\n> Oh, missed that. But that's not quite what I'm proposing.\n\nSorry, I misread your message. I think I was too tired.\n\n> I don't like\n> ExecFindResultRelInfo at all. What's the point of it? 
Its introduction\n> is still an API break - I don't understand why that break is better than\n> just passing the ResultRelInfo directly to BeginDirectModify()?\n\nWhat API does that function break? The point of that function was to\nkeep the direct modify layering/API as-is, because 1) I too felt the\nsame way about the move of BeginDirectModify() to nodeModifyTable.c,\nand 2) I was thinking that when rewriting direct modify with upper\nplanner pathification so that we can perform it without ModifyTable,\nwe could still use the existing layering/API as-is, leading to smaller\nchanges to the core for that.\n\n> I want\n> to again remark that BeginForeignModify() does get the ResultRelInfo -\n> it should have been done the same when adding direct modify.\n\nMight have been so.\n\n> Even if you need the loop - which I don't think is right - it should\n> live somewhere that individual FDWs don't have to care about.\n\nI was thinking to use a hash lookup in ExecFindResultRelInfo() when\nes_result_relations is very long, but I think the setrefs.c approach\nyou mentioned above might be much better. Will consider that.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 3 Aug 2019 19:41:55 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-03 19:41:55 +0900, Etsuro Fujita wrote:\n> > I don't like\n> > ExecFindResultRelInfo at all. What's the point of it? It's introduction\n> > is still an API break - I don't understand why that break is better than\n> > just passing the ResultRelInfo directly to BeginDirectModify()?\n> \n> What API does that function break?\n\nYou need to call it, whereas previously you did not need to call it. The\neffort to change an FDW to get one more parameter, or to call that\nfunction is about the same.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 3 Aug 2019 10:32:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-03 19:41:55 +0900, Etsuro Fujita wrote:\n>> What API does that function break?\n\n> You need to call it, whereas previously you did not need to call it. The\n> effort to change an FDW to get one more parameter, or to call that\n> function is about the same.\n\nIf those are the choices, adding a parameter is clearly the preferable\nsolution, because it makes the API breakage obvious at compile time.\n\nAdding a function would make sense, perhaps, if only a minority of FDWs\nneed to do so. It'd still be risky if the need to do so could be missed\nin light testing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 13:48:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
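An aside for readers following the API argument above: the contrast Tom and Andres draw — a changed callback signature breaks out-of-tree FDWs visibly at compile time, whereas a new lookup function compiles fine even if an FDW forgets to call it — can be sketched with a toy model. The struct and function names below only mimic PostgreSQL; this is a self-contained illustration, not PostgreSQL source code.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the executor structures under discussion. */
typedef struct ResultRelInfo
{
    int ri_RangeTableIndex;
} ResultRelInfo;

typedef struct EState
{
    ResultRelInfo *es_result_relations;
    int            es_num_result_relations;
} EState;

/*
 * Style 1: a lookup helper the FDW must remember to call (the
 * ExecFindResultRelInfo idea).  An FDW that forgets the call still
 * compiles -- the omission only shows up at runtime, if tested.
 */
static ResultRelInfo *
exec_find_result_rel(EState *estate, int rti)
{
    for (int i = 0; i < estate->es_num_result_relations; i++)
    {
        if (estate->es_result_relations[i].ri_RangeTableIndex == rti)
            return &estate->es_result_relations[i];
    }
    return NULL;
}

/*
 * Style 2: the executor hands the ResultRelInfo over as a parameter.
 * If this callback type gains a parameter, every existing FDW
 * implementation stops matching it and fails to compile until updated.
 */
typedef void (*BeginDirectModify_fn) (ResultRelInfo *rinfo, int eflags);

static void
sample_begin_direct_modify(ResultRelInfo *rinfo, int eflags)
{
    (void) eflags;
    assert(rinfo != NULL);      /* no lookup needed; it was passed in */
}
```

A stale implementation of style 1 is silently never called; a stale implementation of style 2 is a type error, which is the visibility the thread settles on.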
{
"msg_contents": "Hi,\n\nOn 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-08-03 19:41:55 +0900, Etsuro Fujita wrote:\n> >> What API does that function break?\n> \n> > You need to call it, whereas previously you did not need to call it. The\n> > effort to change an FDW to get one more parameter, or to call that\n> > function is about the same.\n> \n> If those are the choices, adding a parameter is clearly the preferable\n> solution, because it makes the API breakage obvious at compile.\n\nRight. I think it's a *bit* less clear in this case because we'd also\nremove the field that such FDWs with direct modify support would use\nnow (EState.es_result_relation_info).\n\nBut I think it's also just plainly a better API to use the\nparameter. Even if, in contrast to the BeginDirectModify at hand,\nBeginForeignModify didn't already accept it. Requiring a function call to\ngather information that just about every realistic implementation is\ngoing to need doesn't make sense.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 3 Aug 2019 11:03:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Sun, Aug 4, 2019 at 3:03 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2019-08-03 19:41:55 +0900, Etsuro Fujita wrote:\n> > >> What API does that function break?\n> >\n> > > You need to call it, whereas previously you did not need to call it. The\n> > > effort to change an FDW to get one more parameter, or to call that\n> > > function is about the same.\n\nI got the point.\n\n> > If those are the choices, adding a parameter is clearly the preferable\n> > solution, because it makes the API breakage obvious at compile.\n>\n> Right. I think it's a *bit* less clear in this case because we'd also\n> remove the field that such FDWs with direct modify support would use\n> now (EState.es_result_relation_info).\n>\n> But I think it's also just plainly a better API to use the\n> parameter. Even if, in contrast to the BeginDirectModify at hand,\n> BeginForeignModify didn't already accept it. Requiring a function call to\n> gather information that just about every realistic implementation is\n> going to need doesn't make sense.\n\nAgreed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sun, 4 Aug 2019 04:45:47 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Sun, Aug 4, 2019 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Sun, Aug 4, 2019 at 3:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> > > If those are the choices, adding a parameter is clearly the preferable\n> > > solution, because it makes the API breakage obvious at compile.\n> >\n> > Right. I think it's a *bit* less clear in this case because we'd also\n> > remove the field that such FDWs with direct modify support would use\n> > now (EState.es_result_relation_info).\n> >\n> > But I think it's also just plainly a better API to use the\n> > parameter. Even if, in contrast to the BeginDirectModify at hand,\n> > BeginForeignModify didn't already accept it. Requiring a function call to\n> > gather information that just about every realistic implementation is\n> > going to need doesn't make sense.\n>\n> Agreed.\n\nSo, is it correct to think that the consensus is to add a parameter to\nBeginDirectModify()?\n\nAlso, avoid changing where BeginDirectModify() is called from, like my\npatch did, only to have easy access to the ResultRelInfo to pass. We\ncan do that by by augmenting ForeignScan node to add the information\nneeded to fetch the ResultRelInfo efficiently from\nExecInitForeignScan() itself. That information is the ordinal\nposition of a given result relation in PlannedStmt.resultRelations,\nnot the RT index as we were discussing.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 5 Aug 2019 13:31:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
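Amit's distinction — storing the result relation's ordinal position in PlannedStmt.resultRelations rather than its range-table index — can be made concrete. In a hypothetical plan whose resultRelations list holds RT indexes {3, 5, 8}, the executor builds its result-relation array in that same order, so a node that recorded the ordinal position can index the array directly, while one that recorded only the RT index must search for it. The values below are invented for the example; only the concept comes from the thread.

```c
#include <assert.h>

/*
 * A stand-in for PlannedStmt.resultRelations: the RT indexes of the
 * result relations, in plan order.  Illustrative, not PostgreSQL code.
 */
static const int resultRelations[] = {3, 5, 8};
enum { NUM_RESULT_RELS = 3 };

/*
 * Without the ordinal position, locating the matching entry means a
 * linear search by RT index at executor startup.
 */
static int
find_by_rt_index(int rti)
{
    for (int i = 0; i < NUM_RESULT_RELS; i++)
    {
        if (resultRelations[i] == rti)
            return i;
    }
    return -1;
}

/*
 * With the ordinal position computed once at plan time, a ForeignScan
 * whose result relation has RT index 5 would simply carry
 * resultRelIndex = 1, and executor startup becomes a direct array
 * access: es_result_relations[1].
 */
```
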
{
"msg_contents": "Amit-san,\n\nOn Mon, Aug 5, 2019 at 1:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sun, Aug 4, 2019 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Sun, Aug 4, 2019 at 3:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> > > > If those are the choices, adding a parameter is clearly the preferable\n> > > > solution, because it makes the API breakage obvious at compile.\n> > >\n> > > Right. I think it's a *bit* less clear in this case because we'd also\n> > > remove the field that such FDWs with direct modify support would use\n> > > now (EState.es_result_relation_info).\n> > >\n> > > But I think it's also just plainly a better API to use the\n> > > parameter. Even if, in contrast to the BeginDirectModify at hand,\n> > > BeginForeignModify didn't already accept it. Requiring a function call to\n> > > gather information that just about every realistic implementation is\n> > > going to need doesn't make sense.\n> >\n> > Agreed.\n>\n> So, is it correct to think that the consensus is to add a parameter to\n> BeginDirectModify()?\n\nI think so.\n\n> Also, avoid changing where BeginDirectModify() is called from, like my\n> patch did, only to have easy access to the ResultRelInfo to pass. We\n> can do that by by augmenting ForeignScan node to add the information\n> needed to fetch the ResultRelInfo efficiently from\n> ExecInitForeignScan() itself.\n\nI think so.\n\n> That information is the ordinal\n> position of a given result relation in PlannedStmt.resultRelations,\n> not the RT index as we were discussing.\n\nYeah, that would be what Andres is proposing, which I think is much\nbetter than what I proposed using the RT index.\n\nCould you update your patch?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:31:06 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nThanks for the quick follow up.\n\nOn Mon, Aug 5, 2019 at 2:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Aug 5, 2019 at 1:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Sun, Aug 4, 2019 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Sun, Aug 4, 2019 at 3:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> > > > > If those are the choices, adding a parameter is clearly the preferable\n> > > > > solution, because it makes the API breakage obvious at compile.\n> > > >\n> > > > Right. I think it's a *bit* less clear in this case because we'd also\n> > > > remove the field that such FDWs with direct modify support would use\n> > > > now (EState.es_result_relation_info).\n> > > >\n> > > > But I think it's also just plainly a better API to use the\n> > > > parameter. Even if, in contrast to the BeginDirectModify at hand,\n> > > > BeginForeignModify didn't already accept it. Requiring a function call to\n> > > > gather information that just about every realistic implementation is\n> > > > going to need doesn't make sense.\n> > >\n> > > Agreed.\n> >\n> > So, is it correct to think that the consensus is to add a parameter to\n> > BeginDirectModify()?\n>\n> I think so.\n>\n> > Also, avoid changing where BeginDirectModify() is called from, like my\n> > patch did, only to have easy access to the ResultRelInfo to pass. We\n> > can do that by by augmenting ForeignScan node to add the information\n> > needed to fetch the ResultRelInfo efficiently from\n> > ExecInitForeignScan() itself.\n>\n> I think so.\n>\n> > That information is the ordinal\n> > position of a given result relation in PlannedStmt.resultRelations,\n> > not the RT index as we were discussing.\n>\n> Yeah, that would be what Andres is proposing, which I think is much\n> better than what I proposed using the RT index.\n>\n> Could you update your patch?\n\nOK, I will do that. 
I'll reply with the updated patches to an\nupthread email of Andres' [1], where he also comments on the other\npatches.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/20190802180138.64zcircokw2upaho%40alap3.anarazel.de\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:36:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Amit-san,\n\nOn Mon, Aug 5, 2019 at 2:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Aug 5, 2019 at 2:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Mon, Aug 5, 2019 at 1:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Sun, Aug 4, 2019 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > On Sun, Aug 4, 2019 at 3:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > On 2019-08-03 13:48:01 -0400, Tom Lane wrote:\n> > > > > > If those are the choices, adding a parameter is clearly the preferable\n> > > > > > solution, because it makes the API breakage obvious at compile.\n> > > > >\n> > > > > Right. I think it's a *bit* less clear in this case because we'd also\n> > > > > remove the field that such FDWs with direct modify support would use\n> > > > > now (EState.es_result_relation_info).\n> > > > >\n> > > > > But I think it's also just plainly a better API to use the\n> > > > > parameter. Even if, in contrast to the BeginDirectModify at hand,\n> > > > > BeginForeignModify didn't already accept it. Requiring a function call to\n> > > > > gather information that just about every realistic implementation is\n> > > > > going to need doesn't make sense.\n> > > >\n> > > > Agreed.\n> > >\n> > > So, is it correct to think that the consensus is to add a parameter to\n> > > BeginDirectModify()?\n> >\n> > I think so.\n> >\n> > > Also, avoid changing where BeginDirectModify() is called from, like my\n> > > patch did, only to have easy access to the ResultRelInfo to pass. 
We\n> > > can do that by by augmenting ForeignScan node to add the information\n> > > needed to fetch the ResultRelInfo efficiently from\n> > > ExecInitForeignScan() itself.\n> >\n> > I think so.\n> >\n> > > That information is the ordinal\n> > > position of a given result relation in PlannedStmt.resultRelations,\n> > > not the RT index as we were discussing.\n> >\n> > Yeah, that would be what Andres is proposing, which I think is much\n> > better than what I proposed using the RT index.\n> >\n> > Could you update your patch?\n>\n> OK, I will do that. I'll reply with the updated patches to an\n> upthread email of Andres' [1], where he also comments on the other\n> patches.\n\nThanks! Will review the updated version of the FDW patch, at least.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:45:15 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi Andres, Fujita-san,\n\nOn Sat, Aug 3, 2019 at 3:01 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-07-31 17:04:38 +0900, Amit Langote wrote:\n> > I looked into trying to do the things I mentioned above and it seems\n> > to me that revising BeginDirectModify()'s API to receive the\n> > ResultRelInfo directly as Andres suggested might be the best way\n> > forward. I've implemented that in the attached 0001. Patches that\n> > were previously 0001, 0002, and 0003 are now 0002, 003, and 0004,\n> > respectively. 0002 is now a patch to \"remove\"\n> > es_result_relation_info.\n>\n> Thanks! Some minor quibbles aside, the non FDW patches look good to me.\n>\n> Fujita-san, do you have any comments on the FDW API change? Or anybody\n> else?\n\nBased on the discussion, I have updated the patch.\n\n> I'm a bit woried about the move of BeginDirectModify() into\n> nodeModifyTable.c - it just seems like an odd control flow to me. Not\n> allowing any intermittent nodes between ForeignScan and ModifyTable also\n> seems like an undesirable restriction for the future. I realize that we\n> already do that for BeginForeignModify() (just btw, that already accepts\n> resultRelInfo as a parameter, so being symmetrical for BeginDirectModify\n> makes sense), but it still seems like the wrong direction to me.\n>\n> The need for that move, I assume, comes from needing knowing the correct\n> ResultRelInfo, correct? I wonder if we shouldn't instead determine the\n> at plan time (in setrefs.c), somewhat similar to how we determine\n> ModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n\nThe patch adds a resultRelIndex field to ForeignScan node, which is\nset to >= 0 value for non-SELECT queries. I first thought to set it\nonly if direct modification is being used, but maybe it'd be simpler\nto set it even if direct modification is not used. 
To set it, the\npatch teaches set_plan_refs() to initialize resultRelIndex of\nForeignScan plans that appear under ModifyTable. Fujita-san said he\nplans to revise the planning of direct-modification style queries to\nnot require a ModifyTable node anymore, but maybe he'll just need to\nadd similar code elsewhere but not outside setrefs.c.\n\n> Then we could just have BeginForeignModify, BeginDirectModify,\n> BeginForeignScan all be called from ExecInitForeignScan().\n\nI too think that it would've been great if we could call both\nBeginForeignModify and BeginDirectModify from ExecInitForeignScan, but\nthe former's API seems to be designed to be called from\nExecInitModifyTable from the get-go. Maybe we should leave that\nas-is?\n\n> Path 04 is such a nice improvement. Besides getting rid of a substantial\n> amount of code, it also makes the control flow a lot easier to read.\n\nThanks.\n\n> > @@ -4644,9 +4645,7 @@ GetAfterTriggersTableData(Oid relid, CmdType cmdType)\n> > * If there are no triggers in 'trigdesc' that request relevant transition\n> > * tables, then return NULL.\n> > *\n> > - * The resulting object can be passed to the ExecAR* functions. 
The caller\n> > - * should set tcs_map or tcs_original_insert_tuple as appropriate when dealing\n> > - * with child tables.\n> > + * The resulting object can be passed to the ExecAR* functions.\n> > *\n> > * Note that we copy the flags from a parent table into this struct (rather\n> > * than subsequently using the relation's TriggerDesc directly) so that we can\n> > @@ -5750,14 +5749,26 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,\n> > */\n> > if (row_trigger && transition_capture != NULL)\n> > {\n> > - TupleTableSlot *original_insert_tuple = transition_capture->tcs_original_insert_tuple;\n> > - TupleConversionMap *map = transition_capture->tcs_map;\n> > + TupleTableSlot *original_insert_tuple;\n> > + PartitionRoutingInfo *pinfo = relinfo->ri_PartitionInfo;\n> > + TupleConversionMap *map = pinfo ?\n> > + pinfo->pi_PartitionToRootMap :\n> > + relinfo->ri_ChildToRootMap;\n> > bool delete_old_table = transition_capture->tcs_delete_old_table;\n> > bool update_old_table = transition_capture->tcs_update_old_table;\n> > bool update_new_table = transition_capture->tcs_update_new_table;\n> > bool insert_new_table = transition_capture->tcs_insert_new_table;\n> >\n> > /*\n> > + * Get the originally inserted tuple from the global variable and set\n> > + * the latter to NULL because any given tuple must be read only once.\n> > + * Note that the TransitionCaptureState is shared across many calls\n> > + * to this function.\n> > + */\n> > + original_insert_tuple = transition_capture->tcs_original_insert_tuple;\n> > + transition_capture->tcs_original_insert_tuple = NULL;\n>\n> Maybe I'm missing something, but original_insert_tuple is not a global\n> variable?\n\nI really meant to refer to the fact that it's maintained in a\nModifyTable-global struct. 
I've updated this comment a bit.\n\n> > @@ -888,7 +889,8 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\n> > PartitionTupleRouting *proute,\n> > PartitionDispatch dispatch,\n> > ResultRelInfo *partRelInfo,\n> > - int partidx)\n> > + int partidx,\n> > + bool is_update_result_rel)\n> > {\n> > MemoryContext oldcxt;\n> > PartitionRoutingInfo *partrouteinfo;\n> > @@ -935,10 +937,15 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\n> > if (mtstate &&\n> > (mtstate->mt_transition_capture || mtstate->mt_oc_transition_capture))\n> > {\n> > - partrouteinfo->pi_PartitionToRootMap =\n> > - convert_tuples_by_name(RelationGetDescr(partRelInfo->ri_RelationDesc),\n> > - RelationGetDescr(partRelInfo->ri_PartitionRoot),\n> > - gettext_noop(\"could not convert row type\"));\n> > + /* If partition is an update target, then we already got the map. */\n> > + if (is_update_result_rel)\n> > + partrouteinfo->pi_PartitionToRootMap =\n> > + partRelInfo->ri_ChildToRootMap;\n> > + else\n> > + partrouteinfo->pi_PartitionToRootMap =\n> > + convert_tuples_by_name(RelationGetDescr(partRelInfo->ri_RelationDesc),\n> > + RelationGetDescr(partRelInfo->ri_PartitionRoot),\n> > + gettext_noop(\"could not convert row type\"));\n> > }\n>\n> Hm, isn't is_update_result_rel just ModifyTable->operation == CMD_UPDATE?\n\nNo. The operation being CMD_UPDATE doesn't mean that the\nResultRelInfo that is passed to ExecInitRoutingInfo() is an UPDATE\nresult rel. It could be a ResultRelInfo built by ExecFindPartition()\nwhen a row needed to be moved into a partition that is not present in\nthe UPDATE result rels contained in ModifyTableState. Though I\nrealized that we don't really need to add a new parameter to figure\nthat out. Looking at ri_RangeTableIndex property of the passed-in\nResultRelInfo is enough to distinguish the two types of\nResultRelInfos. I've updated the patch that way.\n\nI found more dead code related to transition capture setup, which I've\nremoved in the latest 0004. 
For example, the\nmt_per_subplan_tupconv_maps array and the code in nodeModifyTable.c\nthat was used to initialize it.\n\nAttached updated patches.\n\nThanks,\nAmit",
"msg_date": "Mon, 5 Aug 2019 18:16:10 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
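The "read only once" comment Amit adds in the hunk above is a small fetch-and-clear pattern: because the TransitionCaptureState is shared across many calls, the originally inserted tuple is taken out of the shared field and the field is immediately reset to NULL so later calls cannot see it again. A minimal standalone sketch — the struct here is a stand-in with one mimicked field name, not the real TransitionCaptureState:

```c
#include <assert.h>
#include <stddef.h>

/* Shared across many calls, so the tuple must be consumed exactly once. */
typedef struct TransitionCaptureState
{
    void *tcs_original_insert_tuple;
} TransitionCaptureState;

/*
 * Fetch-and-clear: return the stored tuple (if any) and NULL out the
 * shared field in the same step, mirroring the patch hunk quoted above.
 */
static void *
take_original_insert_tuple(TransitionCaptureState *tc)
{
    void *tuple = tc->tcs_original_insert_tuple;

    tc->tcs_original_insert_tuple = NULL;
    return tuple;
}
```

The first caller gets the tuple; every subsequent caller gets NULL, which is exactly the at-most-once semantics the comment describes.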
{
"msg_contents": "Amit-san,\n\nOn Mon, Aug 5, 2019 at 6:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Aug 3, 2019 at 3:01 AM Andres Freund <andres@anarazel.de> wrote:\n> Based on the discussion, I have updated the patch.\n>\n> > I'm a bit woried about the move of BeginDirectModify() into\n> > nodeModifyTable.c - it just seems like an odd control flow to me. Not\n> > allowing any intermittent nodes between ForeignScan and ModifyTable also\n> > seems like an undesirable restriction for the future. I realize that we\n> > already do that for BeginForeignModify() (just btw, that already accepts\n> > resultRelInfo as a parameter, so being symmetrical for BeginDirectModify\n> > makes sense), but it still seems like the wrong direction to me.\n> >\n> > The need for that move, I assume, comes from needing knowing the correct\n> > ResultRelInfo, correct? I wonder if we shouldn't instead determine the\n> > at plan time (in setrefs.c), somewhat similar to how we determine\n> > ModifyTable.resultRelIndex. Doesn't look like that'd be too hard?\n>\n> The patch adds a resultRelIndex field to ForeignScan node, which is\n> set to >= 0 value for non-SELECT queries.\n\nThanks for the updated patch!\n\n> I first thought to set it\n> only if direct modification is being used, but maybe it'd be simpler\n> to set it even if direct modification is not used. To set it, the\n> patch teaches set_plan_refs() to initialize resultRelIndex of\n> ForeignScan plans that appear under ModifyTable. 
Fujita-san said he\n> plans to revise the planning of direct-modification style queries to\n> not require a ModifyTable node anymore, but maybe he'll just need to\n> add similar code elsewhere but not outside setrefs.c.\n\nYeah, but I'm not sure this is a good idea:\n\n@ -877,12 +878,6 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n rc->rti += rtoffset;\n rc->prti += rtoffset;\n }\n- foreach(l, splan->plans)\n- {\n- lfirst(l) = set_plan_refs(root,\n- (Plan *) lfirst(l),\n- rtoffset);\n- }\n\n /*\n * Append this ModifyTable node's final result relation RT\n@@ -908,6 +903,27 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n lappend_int(root->glob->rootResultRelations,\n splan->rootRelation);\n }\n+\n+ resultRelIndex = splan->resultRelIndex;\n+ foreach(l, splan->plans)\n+ {\n+ lfirst(l) = set_plan_refs(root,\n+ (Plan *) lfirst(l),\n+ rtoffset);\n+\n+ /*\n+ * For foreign table result relations, save their index\n+ * in the global list of result relations into the\n+ * corresponding ForeignScan nodes.\n+ */\n+ if (IsA(lfirst(l), ForeignScan))\n+ {\n+ ForeignScan *fscan = (ForeignScan *) lfirst(l);\n+\n+ fscan->resultRelIndex = resultRelIndex;\n+ }\n+ resultRelIndex++;\n+ }\n }\n\nbecause I still feel the same way as mentioned above by Andres. What\nI'm thinking for the setrefs.c change is to modify ForeignScan (ie,\nset_foreignscan_references) rather than ModifyTable, like the\nattached. Maybe I'm missing something, but for direct modification\nwithout ModifyTable, I think we would probably only have to modify\nthat function further so that it not only adjusts resultRelIndex but\ndoes some extra work such as appending the result relation RT index to\nroot->glob->resultRelations as done for ModifyTable.\n\n> > Then we could just have BeginForeignModify, BeginDirectModify,\n> > BeginForeignScan all be called from ExecInitForeignScan().\n\nSorry, previously, I mistakenly agreed with that. 
As I said before, I\nthink I was too tired.\n\n> I too think that it would've been great if we could call both\n> BeginForeignModify and BeginDirectModify from ExecInitForeignScan, but\n> the former's API seems to be designed to be called from\n> ExecInitModifyTable from the get-go. Maybe we should leave that\n> as-is?\n\n+1 for leaving that as-is; it seems reasonable to me to call\nBeginForeignModify in ExecInitModifyTable, because the ForeignModify\nAPI is designed based on an analogy with local table modifications, in\nwhich case the initialization needed for performing\nExecInsert/ExecUpdate/ExecDelete is done in ModifyTable, not in the\nunderlying scan/join node.\n\n@@ -895,6 +898,12 @@ BeginDirectModify(ForeignScanState *node,\n for <function>ExplainDirectModify</function> and <function>EndDirectModif\\\ny</function>.\n </para>\n\n+ <note>\n+ Also note that it's a good idea to store the <literal>rinfo</literal>\n+ in the <structfield>fdw_state</structfield> for\n+ <function>IterateDirectModify</function> to use.\n+ </node>\n\nActually, if the FDW only supports direct modifications for queries\nwithout RETURNING, it wouldn't need the rinfo in IterateDirectModify,\nso I think we would probably need to update this as such. Having said\nthat, it seems too detailed to me to describe such a thing in the FDW\ndocumentation. To avoid making the documentation verbose, it would be\nbetter to not add such kind of thing at all?\n\nNote: other change in the attached patch is that I modified\n_readForeignScan accordingly.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 6 Aug 2019 21:55:52 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 18:16:10 +0900, Amit Langote wrote:\n> The patch adds a resultRelIndex field to ForeignScan node, which is\n> set to >= 0 value for non-SELECT queries. I first thought to set it\n> only if direct modification is being used, but maybe it'd be simpler\n> to set it even if direct modification is not used.\n\nYea, I think we should just always set it.\n\n\n> To set it, the\n> patch teaches set_plan_refs() to initialize resultRelIndex of\n> ForeignScan plans that appear under ModifyTable. Fujita-san said he\n> plans to revise the planning of direct-modification style queries to\n> not require a ModifyTable node anymore, but maybe he'll just need to\n> add similar code elsewhere but not outside setrefs.c.\n\nI think I prefer the approach in Fujita-san's email. While not extremely\npretty either, it would allow for having nodes between the foreign scan\nand the modify node.\n\n\n> > Then we could just have BeginForeignModify, BeginDirectModify,\n> > BeginForeignScan all be called from ExecInitForeignScan().\n> \n> I too think that it would've been great if we could call both\n> BeginForeignModify and BeginDirectModify from ExecInitForeignScan, but\n> the former's API seems to be designed to be called from\n> ExecInitModifyTable from the get-go. Maybe we should leave that\n> as-is?\n\nYea, we should leave it where it is. I think the API here is fairly\nugly, but it's probably not worth changing. And if we were to change it,\nit'd need a lot bigger hammer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Aug 2019 16:21:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nThanks a lot the review.\n\nOn Tue, Aug 6, 2019 at 9:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, Aug 5, 2019 at 6:16 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I first thought to set it\n> > only if direct modification is being used, but maybe it'd be simpler\n> > to set it even if direct modification is not used. To set it, the\n> > patch teaches set_plan_refs() to initialize resultRelIndex of\n> > ForeignScan plans that appear under ModifyTable. Fujita-san said he\n> > plans to revise the planning of direct-modification style queries to\n> > not require a ModifyTable node anymore, but maybe he'll just need to\n> > add similar code elsewhere but not outside setrefs.c.\n>\n> Yeah, but I'm not sure this is a good idea:\n>\n> @ -877,12 +878,6 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n> rc->rti += rtoffset;\n> rc->prti += rtoffset;\n> }\n> - foreach(l, splan->plans)\n> - {\n> - lfirst(l) = set_plan_refs(root,\n> - (Plan *) lfirst(l),\n> - rtoffset);\n> - }\n>\n> /*\n> * Append this ModifyTable node's final result relation RT\n> @@ -908,6 +903,27 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n> lappend_int(root->glob->rootResultRelations,\n> splan->rootRelation);\n> }\n> +\n> + resultRelIndex = splan->resultRelIndex;\n> + foreach(l, splan->plans)\n> + {\n> + lfirst(l) = set_plan_refs(root,\n> + (Plan *) lfirst(l),\n> + rtoffset);\n> +\n> + /*\n> + * For foreign table result relations, save their index\n> + * in the global list of result relations into the\n> + * corresponding ForeignScan nodes.\n> + */\n> + if (IsA(lfirst(l), ForeignScan))\n> + {\n> + ForeignScan *fscan = (ForeignScan *) lfirst(l);\n> +\n> + fscan->resultRelIndex = resultRelIndex;\n> + }\n> + resultRelIndex++;\n> + }\n> }\n>\n> because I still feel the same way as mentioned above by Andres.\n\nReading Andres' emails again, I now understand why we shouldn't set\nForeignScan's resultRelIndex the way my 
patches did.\n\n> What\n> I'm thinking for the setrefs.c change is to modify ForeignScan (ie,\n> set_foreignscan_references) rather than ModifyTable, like the\n> attached.\n\nThanks for the patch. I have couple of comments:\n\n* I'm afraid that we've implicitly created an ordering constraint on\nsome code in set_plan_refs(). That is, a ModifyTable's plans now must\nalways be processed before adding its result relations to the global\nlist, which for good measure, should be written down somewhere; I\npropose this comment in the ModifyTable's case block in set_plan_refs:\n\n@@ -877,6 +877,13 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n rc->rti += rtoffset;\n rc->prti += rtoffset;\n }\n+ /*\n+ * Caution: Do not change the relative ordering of this loop\n+ * and the statement below that adds the result relations to\n+ * root->glob->resultRelations, because we need to use the\n+ * current value of list_length(root->glob->resultRelations)\n+ * in some plans.\n+ */\n foreach(l, splan->plans)\n {\n lfirst(l) = set_plan_refs(root,\n\n* Regarding setting ForeignScan.resultRelIndex even for non-direct\nmodifications, maybe that's not a good idea anymore. A foreign table\nresult relation might be involved in a local join, which prevents it\nfrom being directly-modifiable and also hides the ForeignScan node\nfrom being easily modifiable in PlanForeignModify. Maybe, we should\njust interpret resultRelIndex as being set only when\ndirect-modification is feasible. 
Should we rename the field\naccordingly to be self-documenting?\n\nPlease let me know your thoughts, so that I can modify the patch.\n\n> Maybe I'm missing something, but for direct modification\n> without ModifyTable, I think we would probably only have to modify\n> that function further so that it not only adjusts resultRelIndex but\n> does some extra work such as appending the result relation RT index to\n> root->glob->resultRelations as done for ModifyTable.\n\nYeah, that seems reasonable.\n\n> > > Then we could just have BeginForeignModify, BeginDirectModify,\n> > > BeginForeignScan all be called from ExecInitForeignScan().\n>\n> Sorry, previously, I mistakenly agreed with that. As I said before, I\n> think I was too tired.\n>\n> > I too think that it would've been great if we could call both\n> > BeginForeignModify and BeginDirectModify from ExecInitForeignScan, but\n> > the former's API seems to be designed to be called from\n> > ExecInitModifyTable from the get-go. Maybe we should leave that\n> > as-is?\n>\n> +1 for leaving that as-is; it seems reasonable to me to call\n> BeginForeignModify in ExecInitModifyTable, because the ForeignModify\n> API is designed based on an analogy with local table modifications, in\n> which case the initialization needed for performing\n> ExecInsert/ExecUpdate/ExecDelete is done in ModifyTable, not in the\n> underlying scan/join node.\n\nThanks for the explanation.\n\n> @@ -895,6 +898,12 @@ BeginDirectModify(ForeignScanState *node,\n> for <function>ExplainDirectModify</function> and <function>EndDirectModif\\\n> y</function>.\n> </para>\n>\n> + <note>\n> + Also note that it's a good idea to store the <literal>rinfo</literal>\n> + in the <structfield>fdw_state</structfield> for\n> + <function>IterateDirectModify</function> to use.\n> + </node>\n>\n> Actually, if the FDW only supports direct modifications for queries\n> without RETURNING, it wouldn't need the rinfo in IterateDirectModify,\n> so I think we would probably need to 
update this as such. Having said\n> that, it seems too detailed to me to describe such a thing in the FDW\n> documentation. To avoid making the documentation verbose, it would be\n> better to not add such kind of thing at all?\n\nHmm OK. Perhaps, others who want to implement the direct modification\nAPI can work that out by looking at postgres_fdw implementation.\n\n> Note: other change in the attached patch is that I modified\n> _readForeignScan accordingly.\n\nThanks.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 7 Aug 2019 10:23:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
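The ordering constraint Amit proposes to document can be modeled in a few lines: each ModifyTable node must hand out resultRelIndex values based on how many result relations earlier nodes have already appended to the global list, and only then append its own — swapping the two steps would shift every index this node assigns. This is an illustrative model with invented names, not the setrefs.c code itself.

```c
#include <assert.h>

/* Stand-in for list_length(root->glob->resultRelations). */
static int result_relations_len = 0;

/*
 * Process one ModifyTable node with nrels result relations: assign
 * each subplan's resultRelIndex from the CURRENT global list length,
 * then append this node's relations.  The read must come first --
 * that is the ordering the proposed comment warns about.
 */
static void
set_plan_refs_modifytable(int nrels, int *indexes)
{
    int base = result_relations_len;    /* read before appending */

    for (int i = 0; i < nrels; i++)
        indexes[i] = base + i;          /* e.g. fscan->resultRelIndex */

    result_relations_len += nrels;      /* now append this node's rels */
}
```

Two consecutive nodes with two and three result relations get indexes 0–1 and 2–4; appending before the loop would wrongly start the first node at 2.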
{
"msg_contents": "Amit-san,\n\nOn Wed, Aug 7, 2019 at 10:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Aug 6, 2019 at 9:56 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > What\n> > I'm thinking for the setrefs.c change is to modify ForeignScan (ie,\n> > set_foreignscan_references) rather than ModifyTable, like the\n> > attached.\n>\n> Thanks for the patch. I have couple of comments:\n>\n> * I'm afraid that we've implicitly created an ordering constraint on\n> some code in set_plan_refs(). That is, a ModifyTable's plans now must\n> always be processed before adding its result relations to the global\n> list, which for good measure, should be written down somewhere; I\n> propose this comment in the ModifyTable's case block in set_plan_refs:\n>\n> @@ -877,6 +877,13 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n> rc->rti += rtoffset;\n> rc->prti += rtoffset;\n> }\n> + /*\n> + * Caution: Do not change the relative ordering of this loop\n> + * and the statement below that adds the result relations to\n> + * root->glob->resultRelations, because we need to use the\n> + * current value of list_length(root->glob->resultRelations)\n> + * in some plans.\n> + */\n> foreach(l, splan->plans)\n> {\n> lfirst(l) = set_plan_refs(root,\n\n+1\n\n> * Regarding setting ForeignScan.resultRelIndex even for non-direct\n> modifications, maybe that's not a good idea anymore. A foreign table\n> result relation might be involved in a local join, which prevents it\n> from being directly-modifiable and also hides the ForeignScan node\n> from being easily modifiable in PlanForeignModify. 
Maybe, we should\n> just interpret resultRelIndex as being set only when\n> direct-modification is feasible.\n\nYeah, I think so; when using PlanForeignModify because for example,\nthe foreign table result relation is involved in a local join, as you\nmentioned, ForeignScan.operation would be left unchanged (ie,\nCMD_SELECT), so to me it's more understandable to not set\nForeignScan.resultRelIndex.\n\n> Should we rename the field\n> accordingly to be self-documenting?\n\nIMO I like the name resultRelIndex, but do you have any better idea?\n\n> > @@ -895,6 +898,12 @@ BeginDirectModify(ForeignScanState *node,\n> > for <function>ExplainDirectModify</function> and <function>EndDirectModif\\\n> > y</function>.\n> > </para>\n> >\n> > + <note>\n> > + Also note that it's a good idea to store the <literal>rinfo</literal>\n> > + in the <structfield>fdw_state</structfield> for\n> > + <function>IterateDirectModify</function> to use.\n> > + </node>\n> >\n> > Actually, if the FDW only supports direct modifications for queries\n> > without RETURNING, it wouldn't need the rinfo in IterateDirectModify,\n> > so I think we would probably need to update this as such. Having said\n> > that, it seems too detailed to me to describe such a thing in the FDW\n> > documentation. To avoid making the documentation verbose, it would be\n> > better to not add such kind of thing at all?\n>\n> Hmm OK. Perhaps, others who want to implement the direct modification\n> API can work that out by looking at postgres_fdw implementation.\n\nYeah, I think so.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:30:32 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nThanks for the quick follow up.\n\nOn Wed, Aug 7, 2019 at 11:30 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Aug 7, 2019 at 10:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > * Regarding setting ForeignScan.resultRelIndex even for non-direct\n> > modifications, maybe that's not a good idea anymore. A foreign table\n> > result relation might be involved in a local join, which prevents it\n> > from being directly-modifiable and also hides the ForeignScan node\n> > from being easily modifiable in PlanForeignModify. Maybe, we should\n> > just interpret resultRelIndex as being set only when\n> > direct-modification is feasible.\n>\n> Yeah, I think so; when using PlanForeignModify because for example,\n> the foreign table result relation is involved in a local join, as you\n> mentioned, ForeignScan.operation would be left unchanged (ie,\n> CMD_SELECT), so to me it's more understandable to not set\n> ForeignScan.resultRelIndex.\n\nOK.\n\n> > Should we rename the field\n> > accordingly to be self-documenting?\n>\n> IMO I like the name resultRelIndex, but do you have any better idea?\n\nOn second thought, I'm fine with sticking to resultRelIndex. Trying\nto make it self documenting might make the name very long.\n\nHere are the updated patches.\n\nThanks,\nAmit",
"msg_date": "Wed, 7 Aug 2019 11:47:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Wed, Aug 7, 2019 at 11:47 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 7, 2019 at 11:30 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Wed, Aug 7, 2019 at 10:24 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > * Regarding setting ForeignScan.resultRelIndex even for non-direct\n> > > modifications, maybe that's not a good idea anymore. A foreign table\n> > > result relation might be involved in a local join, which prevents it\n> > > from being directly-modifiable and also hides the ForeignScan node\n> > > from being easily modifiable in PlanForeignModify. Maybe, we should\n> > > just interpret resultRelIndex as being set only when\n> > > direct-modification is feasible.\n> >\n> > Yeah, I think so; when using PlanForeignModify because for example,\n> > the foreign table result relation is involved in a local join, as you\n> > mentioned, ForeignScan.operation would be left unchanged (ie,\n> > CMD_SELECT), so to me it's more understandable to not set\n> > ForeignScan.resultRelIndex.\n>\n> OK.\n>\n> > > Should we rename the field\n> > > accordingly to be self-documenting?\n> >\n> > IMO I like the name resultRelIndex, but do you have any better idea?\n>\n> On second thought, I'm fine with sticking to resultRelIndex. Trying\n> to make it self documenting might make the name very long.\n\nOK\n\n> Here are the updated patches.\n\nIIUC, I think we reached a consensus at least on the 0001 patch.\nAndres, would you mind if I commit that patch?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:59:53 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 12:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> IIUC, I think we reached a consensus at least on the 0001 patch.\n> Andres, would you mind if I commit that patch?\n\nI just noticed obsolete references to es_result_relation_info that\n0002 failed to remove. One of them is in fdwhandler.sgml:\n\n<programlisting>\nTupleTableSlot *\nIterateDirectModify(ForeignScanState *node);\n</programlisting>\n\n ... The data that was actually inserted, updated\n or deleted must be stored in the\n <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n of the node's <structname>EState</structname>.\n\nWe will need to rewrite this without mentioning\nes_result_relation_info. How about as follows:\n\n- <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n- of the node's <structname>EState</structname>.\n+ <literal>ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n+ of the result relation's<structname>ResultRelInfo</structname> that has\n+ been made available via node.\n\nI've updated 0001 with the above change.\n\nAlso, I updated 0002 to remove other references.\n\nThanks,\nAmit",
"msg_date": "Wed, 7 Aug 2019 16:27:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Amit-san,\n\nOn Wed, Aug 7, 2019 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 7, 2019 at 12:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > IIUC, I think we reached a consensus at least on the 0001 patch.\n> > Andres, would you mind if I commit that patch?\n>\n> I just noticed obsolete references to es_result_relation_info that\n> 0002 failed to remove. One of them is in fdwhandler.sgml:\n>\n> <programlisting>\n> TupleTableSlot *\n> IterateDirectModify(ForeignScanState *node);\n> </programlisting>\n>\n> ... The data that was actually inserted, updated\n> or deleted must be stored in the\n> <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> of the node's <structname>EState</structname>.\n>\n> We will need to rewrite this without mentioning\n> es_result_relation_info. How about as follows:\n>\n> - <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> - of the node's <structname>EState</structname>.\n> + <literal>ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> + of the result relation's<structname>ResultRelInfo</structname> that has\n> + been made available via node.\n>\n> I've updated 0001 with the above change.\n\nGood catch!\n\nThis would be nitpicking, but:\n\n* IIUC, we don't use the term \"result relation\" in fdwhandler.sgml.\nFor consistency with your change to the doc for BeginDirectModify, how\nabout using the term \"target foreign table\" instead of \"result\nrelation\"?\n\n* ISTM that \"<structname>ResultRelInfo</structname> that has been made\navailable via node\" would be a bit fuzzy to FDW authors. To be more\nspecific, how about changing it to\n\"<structname>ResultRelInfo</structname> passed to\n<function>BeginDirectModify</function>\" or something like that?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 7 Aug 2019 18:00:04 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nOn Wed, Aug 7, 2019 at 6:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Aug 7, 2019 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I just noticed obsolete references to es_result_relation_info that\n> > 0002 failed to remove. One of them is in fdwhandler.sgml:\n> >\n> > <programlisting>\n> > TupleTableSlot *\n> > IterateDirectModify(ForeignScanState *node);\n> > </programlisting>\n> >\n> > ... The data that was actually inserted, updated\n> > or deleted must be stored in the\n> > <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > of the node's <structname>EState</structname>.\n> >\n> > We will need to rewrite this without mentioning\n> > es_result_relation_info. How about as follows:\n> >\n> > - <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > - of the node's <structname>EState</structname>.\n> > + <literal>ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > + of the result relation's<structname>ResultRelInfo</structname> that has\n> > + been made available via node.\n> >\n> > I've updated 0001 with the above change.\n>\n> Good catch!\n\nThanks for the review.\n\n> This would be nitpicking, but:\n>\n> * IIUC, we don't use the term \"result relation\" in fdwhandler.sgml.\n> For consistency with your change to the doc for BeginDirectModify, how\n> about using the term \"target foreign table\" instead of \"result\n> relation\"?\n\nAgreed, done.\n\n> * ISTM that \"<structname>ResultRelInfo</structname> that has been made\n> available via node\" would be a bit fuzzy to FDW authors. 
To be more\n> specific, how about changing it to\n> \"<structname>ResultRelInfo</structname> passed to\n> <function>BeginDirectModify</function>\" or something like that?\n\nThat works for me, although an FDW author reading this still has got\nto make the connection.\n\nAttached updated patches; only 0001 changed in this version.\n\nThanks,\nAmit",
"msg_date": "Thu, 8 Aug 2019 10:10:11 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 8, 2019 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Aug 7, 2019 at 6:00 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Wed, Aug 7, 2019 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > I just noticed obsolete references to es_result_relation_info that\n> > > 0002 failed to remove. One of them is in fdwhandler.sgml:\n> > >\n> > > <programlisting>\n> > > TupleTableSlot *\n> > > IterateDirectModify(ForeignScanState *node);\n> > > </programlisting>\n> > >\n> > > ... The data that was actually inserted, updated\n> > > or deleted must be stored in the\n> > > <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > > of the node's <structname>EState</structname>.\n> > >\n> > > We will need to rewrite this without mentioning\n> > > es_result_relation_info. How about as follows:\n> > >\n> > > - <literal>es_result_relation_info->ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > > - of the node's <structname>EState</structname>.\n> > > + <literal>ri_projectReturning->pi_exprContext->ecxt_scantuple</literal>\n> > > + of the result relation's<structname>ResultRelInfo</structname> that has\n> > > + been made available via node.\n> > >\n> > > I've updated 0001 with the above change.\n\n> > This would be nitpicking, but:\n> >\n> > * IIUC, we don't use the term \"result relation\" in fdwhandler.sgml.\n> > For consistency with your change to the doc for BeginDirectModify, how\n> > about using the term \"target foreign table\" instead of \"result\n> > relation\"?\n>\n> Agreed, done.\n>\n> > * ISTM that \"<structname>ResultRelInfo</structname> that has been made\n> > available via node\" would be a bit fuzzy to FDW authors. 
To be more\n> > specific, how about changing it to\n> > \"<structname>ResultRelInfo</structname> passed to\n> > <function>BeginDirectModify</function>\" or something like that?\n>\n> That works for me, although an FDW author reading this still has got\n> to make the connection.\n>\n> Attached updated patches; only 0001 changed in this version.\n\nThanks for the updated version, Amit-san! I updated the 0001 patch a\nbit further:\n\n* Tweaked comments in plannodes.h, createplan.c, and nodeForeignscan.c.\n* Made cosmetic changes to postgres_fdw.c.\n* Adjusted doc changes a bit, mainly not to produce unnecessary diff.\n* Modified the commit message.\n\nAttached is an updated version of the 0001 patch. Does that make sense?\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 8 Aug 2019 21:49:09 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Fujita-san,\n\nOn Thu, Aug 8, 2019 at 9:49 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Aug 8, 2019 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached updated patches; only 0001 changed in this version.\n>\n> Thanks for the updated version, Amit-san! I updated the 0001 patch a\n> bit further:\n>\n> * Tweaked comments in plannodes.h, createplan.c, and nodeForeignscan.c.\n> * Made cosmetic changes to postgres_fdw.c.\n> * Adjusted doc changes a bit, mainly not to produce unnecessary diff.\n> * Modified the commit message.\n>\n> Attached is an updated version of the 0001 patch. Does that make sense?\n\nLooks perfect, thank you.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Fri, 9 Aug 2019 10:51:49 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Fri, Aug 9, 2019 at 10:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Fujita-san,\n>\n> On Thu, Aug 8, 2019 at 9:49 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Thu, Aug 8, 2019 at 10:10 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Attached updated patches; only 0001 changed in this version.\n> >\n> > Thanks for the updated version, Amit-san! I updated the 0001 patch a\n> > bit further:\n> >\n> > * Tweaked comments in plannodes.h, createplan.c, and nodeForeignscan.c.\n> > * Made cosmetic changes to postgres_fdw.c.\n> > * Adjusted doc changes a bit, mainly not to produce unnecessary diff.\n> > * Modified the commit message.\n> >\n> > Attached is an updated version of the 0001 patch. Does that make sense?\n>\n> Looks perfect, thank you.\n\nTo avoid losing track of this, I've added this to November CF.\n\nhttps://commitfest.postgresql.org/25/2277/\n\nStruggled a bit to give a title to the entry though.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 4 Sep 2019 10:45:25 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Sep 4, 2019 at 10:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Aug 9, 2019 at 10:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> To avoid losing track of this, I've added this to November CF.\n>\n> https://commitfest.postgresql.org/25/2277/\n>\n> Struggled a bit to give a title to the entry though.\n\nNoticed that one of the patches needed a rebase.\n\nAttached updated patches. Note that v8-0001 is v7-0001 unchanged that\nFujita-san posted on Aug 8.\n\nThanks,\nAmit",
"msg_date": "Thu, 26 Sep 2019 13:56:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 1:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Sep 4, 2019 at 10:45 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Aug 9, 2019 at 10:51 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > To avoid losing track of this, I've added this to November CF.\n> >\n> > https://commitfest.postgresql.org/25/2277/\n> >\n> > Struggled a bit to give a title to the entry though.\n>\n> Noticed that one of the patches needed a rebase.\n>\n> Attached updated patches. Note that v8-0001 is v7-0001 unchanged that\n> Fujita-san posted on Aug 8.\n\nRebased again.\n\nThanks,\nAmit",
"msg_date": "Wed, 18 Dec 2019 15:30:22 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Rebased again.\n\nSeems to need that again, according to cfbot :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Mar 2020 14:42:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 4:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Rebased again.\n>\n> Seems to need that again, according to cfbot :-(\n\nThank you, done.\n\nRegards,\nAmit",
"msg_date": "Mon, 2 Mar 2020 14:08:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "> On 2 Mar 2020, at 06:08, Amit Langote <amitlangote09@gmail.com> wrote:\n> \n> On Mon, Mar 2, 2020 at 4:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Langote <amitlangote09@gmail.com> writes:\n>>> Rebased again.\n>> \n>> Seems to need that again, according to cfbot :-(\n> \n> Thank you, done.\n\n..and another one is needed as it no longer applies, please submit a rebased\nversion.\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 1 Jul 2020 11:56:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Jul 1, 2020 at 6:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 2 Mar 2020, at 06:08, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Mon, Mar 2, 2020 at 4:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Amit Langote <amitlangote09@gmail.com> writes:\n> >>> Rebased again.\n> >>\n> >> Seems to need that again, according to cfbot :-(\n> >\n> > Thank you, done.\n>\n> ..and another one is needed as it no longer applies, please submit a rebased\n> version.\n\nSorry, it took me a while to get to this.\n\nIt's been over 11 months since there was any significant commentary on\nthe contents of the patches themselves, so perhaps I should reiterate\nwhat the patches are about and why it might still be a good idea to\nconsider them.\n\nThe thread started with some very valid criticism of the way\nexecutor's partition tuple routing logic looks randomly sprinkled over\nin nodeModifyTable.c, execPartition.c. In the process of making it\nlook less random, we decided to get rid of the global variable\nes_result_relation_info to avoid complex maneuvers of\nsetting/resetting it correctly when performing partition tuple\nrouting, causing some other churn beside the partitioning code. Same\nwith another global variable TransitionCaptureState.tcs_map. 
So, the\npatches neither add any new capabilities, nor improve performance, but\nthey do make the code in this area a bit easier to follow.\n\nActually, there is a problem that some of the changes here conflict\nwith patches being discussed on other threads ([1], [2]), so much so\nthat I decided to absorb some changes here into another \"refactoring\"\npatch that I have posted at [2].\n\nAttached rebased patches.\n\n0001 contains preparatory FDW API changes to stop relying on\nes_result_relation_info being set correctly.\n\n0002 removes es_result_relation_info in favor passing the active\nresult relation around as a parameter in the various functions that\nneed it\n\n0003 Moves UPDATE tuple-routing logic into a new function\n\n0004 removes the global variable TransitionCaptureState.tcs_map which\nneeded to be set/reset whenever the active result relation relation\nchanges in favor of a new field in ResultRelInfo to store the same map\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/28/2575/\n[2] https://commitfest.postgresql.org/28/2621/",
"msg_date": "Mon, 13 Jul 2020 14:47:48 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 13/07/2020 08:47, Amit Langote wrote:\n> It's been over 11 months since there was any significant commentary on\n> the contents of the patches themselves, so perhaps I should reiterate\n> what the patches are about and why it might still be a good idea to\n> consider them.\n> \n> The thread started with some very valid criticism of the way\n> executor's partition tuple routing logic looks randomly sprinkled over\n> in nodeModifyTable.c, execPartition.c. In the process of making it\n> look less random, we decided to get rid of the global variable\n> es_result_relation_info to avoid complex maneuvers of\n> setting/resetting it correctly when performing partition tuple\n> routing, causing some other churn beside the partitioning code. Same\n> with another global variable TransitionCaptureState.tcs_map. So, the\n> patches neither add any new capabilities, nor improve performance, but\n> they do make the code in this area a bit easier to follow.\n> \n> Actually, there is a problem that some of the changes here conflict\n> with patches being discussed on other threads ([1], [2]), so much so\n> that I decided to absorb some changes here into another \"refactoring\"\n> patch that I have posted at [2].\n\nThanks for the summary. It's been a bit hard to follow what depends on \nwhat across these threads, and how they work together. It seems that \nthis patch set is the best place to start.\n\n> Attached rebased patches.\n> \n> 0001 contains preparatory FDW API changes to stop relying on\n> es_result_relation_info being set correctly.\n\nMakes sense. The only thing I don't like about this is the way the \nForeignScan->resultRelIndex field is set. 
make_foreignscan() initializes \nit to -1, and the FDW's PlanDirectModify() function is expected to set \nit, like you did in postgres_fdw:\n\n> @@ -2319,6 +2322,11 @@ postgresPlanDirectModify(PlannerInfo *root,\n> \t\t\trebuild_fdw_scan_tlist(fscan, returningList);\n> \t}\n> \n> +\t/*\n> +\t * Set the index of the subplan result rel.\n> +\t */\n> +\tfscan->resultRelIndex = subplan_index;\n> +\n> \ttable_close(rel, NoLock);\n> \treturn true;\n> }\n\nIt has to be set to that value (subplan_index is an argument to \nPlanDirectModify()), the FDW doesn't have any choice there, so this is \njust additional boilerplate code that has to be copied to every FDW that \nimplements direct modify. Furthermore, if the FDW doesn't set it \ncorrectly, you could have some very interesting results, like updating \nwrong table. It would be better to set it in make_modifytable(), just \nafter calling PlanDirectModify().\n\nI'm also a bit unhappy with the way it's updated in set_plan_refs():\n\n> --- a/src/backend/optimizer/plan/setrefs.c\n> +++ b/src/backend/optimizer/plan/setrefs.c\n> @@ -904,6 +904,13 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n> \t\t\t\t\trc->rti += rtoffset;\n> \t\t\t\t\trc->prti += rtoffset;\n> \t\t\t\t}\n> +\t\t\t\t/*\n> +\t\t\t\t * Caution: Do not change the relative ordering of this loop\n> +\t\t\t\t * and the statement below that adds the result relations to\n> +\t\t\t\t * root->glob->resultRelations, because we need to use the\n> +\t\t\t\t * current value of list_length(root->glob->resultRelations)\n> +\t\t\t\t * in some plans.\n> +\t\t\t\t */\n> \t\t\t\tforeach(l, splan->plans)\n> \t\t\t\t{\n> \t\t\t\t\tlfirst(l) = set_plan_refs(root,\n> @@ -1243,6 +1250,14 @@ set_foreignscan_references(PlannerInfo *root,\n> \t}\n> \n> \tfscan->fs_relids = offset_relid_set(fscan->fs_relids, rtoffset);\n> +\n> +\t/*\n> +\t * Adjust resultRelIndex if it's valid (note that we are called before\n> +\t * adding the RT indexes of ModifyTable result relations to 
the global\n> +\t * list)\n> +\t */\n> +\tif (fscan->resultRelIndex >= 0)\n> +\t\tfscan->resultRelIndex += list_length(root->glob->resultRelations);\n> }\n> \n> /*\n\nThat \"Caution\" comment is well deserved, but could we make this more \nrobust to begin with? The most straightforward solution would be to pass \ndown the \"current resultRelIndex\" as an extra parameter to \nset_plan_refs(), similar to rtoffset. If we did that, we wouldn't \nactually need to set it before setrefs.c processing at all.\n\nI'm a bit wary of adding another argument to set_plan_refs() because \nthat's a lot of code churn, but it does seem like the most natural \nsolution to me. Maybe create a new context struct to hold the \nPlannerInfo, rtoffset, and the new \"currentResultRelIndex\" value, \nsimilar to fix_scan_expr_context, to avoid passing through so many \narguments.\n\n\nAnother idea is to merge \"resultRelIndex\" and a \"range table index\" into \none value. Range table entries that are updated would have a \nResultRelInfo, others would not. I'm not sure if that would end up being \ncleaner or messier than what we have now, but might be worth trying.\n\n> 0002 removes es_result_relation_info in favor passing the active\n> result relation around as a parameter in the various functions that\n> need it\n\nLooks good.\n\n> 0003 Moves UPDATE tuple-routing logic into a new function\n> \n> 0004 removes the global variable TransitionCaptureState.tcs_map which\n> needed to be set/reset whenever the active result relation relation\n> changes in favor of a new field in ResultRelInfo to store the same map\n\nI didn't look closely, but these make sense at a quick glance.\n\n- Heikki\n\n\n\n",
"msg_date": "Mon, 5 Oct 2020 18:45:15 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "Hekki,\n\nThanks a lot for the review!\n\nOn Tue, Oct 6, 2020 at 12:45 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 13/07/2020 08:47, Amit Langote wrote:\n> > It's been over 11 months since there was any significant commentary on\n> > the contents of the patches themselves, so perhaps I should reiterate\n> > what the patches are about and why it might still be a good idea to\n> > consider them.\n> >\n> > The thread started with some very valid criticism of the way\n> > executor's partition tuple routing logic looks randomly sprinkled over\n> > in nodeModifyTable.c, execPartition.c. In the process of making it\n> > look less random, we decided to get rid of the global variable\n> > es_result_relation_info to avoid complex maneuvers of\n> > setting/resetting it correctly when performing partition tuple\n> > routing, causing some other churn beside the partitioning code. Same\n> > with another global variable TransitionCaptureState.tcs_map. So, the\n> > patches neither add any new capabilities, nor improve performance, but\n> > they do make the code in this area a bit easier to follow.\n> >\n> > Actually, there is a problem that some of the changes here conflict\n> > with patches being discussed on other threads ([1], [2]), so much so\n> > that I decided to absorb some changes here into another \"refactoring\"\n> > patch that I have posted at [2].\n>\n> Thanks for the summary. It's been a bit hard to follow what depends on\n> what across these threads, and how they work together. It seems that\n> this patch set is the best place to start.\n\nGreat. I'd be happy if I will have one less set of patches to keep at home. :-)\n\n> > Attached rebased patches.\n> >\n> > 0001 contains preparatory FDW API changes to stop relying on\n> > es_result_relation_info being set correctly.\n>\n> Makes sense. The only thing I don't like about this is the way the\n> ForeignScan->resultRelIndex field is set. 
make_foreignscan() initializes\n> it to -1, and the FDW's PlanDirectModify() function is expected to set\n> it, like you did in postgres_fdw:\n>\n> > @@ -2319,6 +2322,11 @@ postgresPlanDirectModify(PlannerInfo *root,\n> > rebuild_fdw_scan_tlist(fscan, returningList);\n> > }\n> >\n> > + /*\n> > + * Set the index of the subplan result rel.\n> > + */\n> > + fscan->resultRelIndex = subplan_index;\n> > +\n> > table_close(rel, NoLock);\n> > return true;\n> > }\n>\n> It has to be set to that value (subplan_index is an argument to\n> PlanDirectModify()), the FDW doesn't have any choice there, so this is\n> just additional boilerplate code that has to be copied to every FDW that\n> implements direct modify. Furthermore, if the FDW doesn't set it\n> correctly, you could have some very interesting results, like updating\n> wrong table. It would be better to set it in make_modifytable(), just\n> after calling PlanDirectModify().\n\nActually, that's how it was done in earlier iterations but I think I\ndecided to move that into the FDW's functions due to some concern of\none of the other patches that depended on this patch. 
Maybe it makes\nsense to bring that back into make_modifytable() and worry about the\nother patch later.\n\n> I'm also a bit unhappy with the way it's updated in set_plan_refs():\n>\n> > --- a/src/backend/optimizer/plan/setrefs.c\n> > +++ b/src/backend/optimizer/plan/setrefs.c\n> > @@ -904,6 +904,13 @@ set_plan_refs(PlannerInfo *root, Plan *plan, int rtoffset)\n> > rc->rti += rtoffset;\n> > rc->prti += rtoffset;\n> > }\n> > + /*\n> > + * Caution: Do not change the relative ordering of this loop\n> > + * and the statement below that adds the result relations to\n> > + * root->glob->resultRelations, because we need to use the\n> > + * current value of list_length(root->glob->resultRelations)\n> > + * in some plans.\n> > + */\n> > foreach(l, splan->plans)\n> > {\n> > lfirst(l) = set_plan_refs(root,\n> > @@ -1243,6 +1250,14 @@ set_foreignscan_references(PlannerInfo *root,\n> > }\n> >\n> > fscan->fs_relids = offset_relid_set(fscan->fs_relids, rtoffset);\n> > +\n> > + /*\n> > + * Adjust resultRelIndex if it's valid (note that we are called before\n> > + * adding the RT indexes of ModifyTable result relations to the global\n> > + * list)\n> > + */\n> > + if (fscan->resultRelIndex >= 0)\n> > + fscan->resultRelIndex += list_length(root->glob->resultRelations);\n> > }\n> >\n> > /*\n>\n> That \"Caution\" comment is well deserved, but could we make this more\n> robust to begin with? The most straightforward solution would be to pass\n> down the \"current resultRelIndex\" as an extra parameter to\n> set_plan_refs(), similar to rtoffset. If we did that, we wouldn't\n> actually need to set it before setrefs.c processing at all.\n\nHmm, I don't think I understand the last sentence. A given\nForeignScan node's resultRelIndex will have to be set before getting\nto set_plan_refs(). 
I mean we shouldn't be making it a job of\nsetrefs.c to figure out which ForeignScan nodes need to have its\nresultRelIndex set to a valid value.\n\n> I'm a bit wary of adding another argument to set_plan_refs() because\n> that's a lot of code churn, but it does seem like the most natural\n> solution to me. Maybe create a new context struct to hold the\n> PlannerInfo, rtoffset, and the new \"currentResultRelIndex\" value,\n> similar to fix_scan_expr_context, to avoid passing through so many\n> arguments.\n\nI like the idea of a context struct. I've implemented it as a\nseparate refactoring patch (0001) and 0002 (what was before 0001)\nextends it for \"current ResultRelIndex\", although I used the name\nrroffset for \"current ResultRelIndex\" to go along with rtoffset.\n\n> Another idea is to merge \"resultRelIndex\" and a \"range table index\" into\n> one value. Range table entries that are updated would have a\n> ResultRelInfo, others would not. I'm not sure if that would end up being\n> cleaner or messier than what we have now, but might be worth trying.\n\nI have thought about something like this before. An idea I had is to\nmake es_result_relations array indexable by plain RT indexes, then we\ndon't need to maintain separate indexes that we do today for result\nrelations.\n\n> > 0002 removes es_result_relation_info in favor passing the active\n> > result relation around as a parameter in the various functions that\n> > need it\n>\n> Looks good.\n>\n> > 0003 Moves UPDATE tuple-routing logic into a new function\n> >\n> > 0004 removes the global variable TransitionCaptureState.tcs_map which\n> > needed to be set/reset whenever the active result relation relation\n> > changes in favor of a new field in ResultRelInfo to store the same map\n>\n> I didn't look closely, but these make sense at a quick glance.\n\nUpdated patches attached.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 7 Oct 2020 18:50:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
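The context-struct refactoring discussed in the message above — carrying rtoffset (and the new rroffset) through set_plan_refs() in one struct instead of adding parameters to every recursive call — can be sketched in miniature. The struct and function names below are illustrative stand-ins modeled on fix_scan_expr_context, not the actual PostgreSQL code:

```c
#include <assert.h>

/*
 * Minimal stand-in for the setrefs.c refactoring: rather than threading
 * extra parameters through every recursive set_plan_refs() call, the
 * per-pass offsets travel together in one context struct.
 */
typedef struct set_refs_context
{
    int rtoffset;               /* offset added to range-table indexes */
    int rroffset;               /* offset added to result-relation indexes */
} set_refs_context;

/* Adjust a scan node's range-table index using the context. */
static int
adjust_rt_index(const set_refs_context *ctx, int scanrelid)
{
    return scanrelid + ctx->rtoffset;
}

/* Adjust a ForeignScan-style resultRelIndex; -1 means "not a result rel". */
static int
adjust_result_rel_index(const set_refs_context *ctx, int resultRelIndex)
{
    return (resultRelIndex >= 0) ? resultRelIndex + ctx->rroffset
                                 : resultRelIndex;
}
```

Adding a third offset later means touching only the struct, not every call site — the design point Heikki raises about code churn.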
{
"msg_contents": "On 07/10/2020 12:50, Amit Langote wrote:\n> On Tue, Oct 6, 2020 at 12:45 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> It would be better to set it in make_modifytable(), just\n>> after calling PlanDirectModify().\n> \n> Actually, that's how it was done in earlier iterations but I think I\n> decided to move that into the FDW's functions due to some concern of\n> one of the other patches that depended on this patch. Maybe it makes\n> sense to bring that back into make_modifytable() and worry about the\n> other patch later.\n\nOn second thoughts, I take back my earlier comment. Setting it in \nmake_modifytable() relies on the assumption that the subplan is a single \nForeignScan node, on the target relation. The documentation for \nPlanDirectModify says:\n\n> To execute the direct modification on the remote server, this\n> function must rewrite the target subplan with a ForeignScan plan node\n> that executes the direct modification on the remote server.\nSo I guess that assumption is safe. But I'd like to have some wiggle \nroom here. Wouldn't it be OK to have a Result node on top of the \nForeignScan, for example? If it really must be a simple ForeignScan \nnode, the PlanDirectModify API seems pretty strange.\n\nI'm not entirely sure what I would like to do with this now. I could \nlive with either version, but I'm not totally happy with either. (I like \nyour suggestion below)\n\nLooking at this block in postgresBeginDirectModify:\n\n> \t/*\n> \t * Identify which user to do the remote access as. This should match what\n> \t * ExecCheckRTEPerms() does.\n> \t */\n> \tAssert(fsplan->resultRelIndex >= 0);\n> \tdmstate->resultRelIndex = fsplan->resultRelIndex;\n> \trtindex = list_nth_int(resultRelations, fsplan->resultRelIndex);\n> \trte = exec_rt_fetch(rtindex, estate);\n> \tuserid = rte->checkAsUser ? rte->checkAsUser : GetUserId();\n\nThat's a complicated way of finding out the target table's RTI. 
We \nshould probably store the result RTI in the ForeignScan in the first place.\n\n>> Another idea is to merge \"resultRelIndex\" and a \"range table index\" into\n>> one value. Range table entries that are updated would have a\n>> ResultRelInfo, others would not. I'm not sure if that would end up being\n>> cleaner or messier than what we have now, but might be worth trying.\n> \n> I have thought about something like this before. An idea I had is to\n> make es_result_relations array indexable by plain RT indexes, then we\n> don't need to maintain separate indexes that we do today for result\n> relations.\n\nThat sounds like a good idea. es_result_relations is currently an array \nof ResultRelInfos, so that would leave a lot of unfilled structs in the \narray. But in one of your other threads, you proposed turning \nes_result_relations into an array of pointers anyway \n(https://www.postgresql.org/message-id/CA+HiwqE4k1Q2TLmCAvekw+8_NXepbnfUOamOeX=KpHRDTfSKxA@mail.gmail.com).\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Oct 2020 15:07:20 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
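The indirection Heikki points out in postgresBeginDirectModify — resultRelIndex into the resultRelations list, then that RT index into the range table — can be illustrated with plain arrays. Everything below is an illustrative stand-in (the real code uses List, RangeTblEntry, and EState), not the actual postgres_fdw source:

```c
#include <assert.h>

/* Illustrative stand-in for a range-table entry. */
typedef struct MiniRTE
{
    int checkAsUser;            /* 0 means "no substitute user" */
} MiniRTE;

/* RT indexes of the plan's result relations (cf. the resultRelations list). */
static const int result_relations[] = {2, 5, 3};

/* Slot 0 unused so entries can be fetched by 1-based RT index,
 * mimicking exec_rt_fetch(). */
static const MiniRTE range_table[] = { {0}, {0}, {10}, {0}, {0}, {42} };

static int
current_user_id(void)           /* stand-in for GetUserId() */
{
    return 7;
}

/*
 * The lookup chain: resultRelIndex -> RT index (cf. list_nth_int) ->
 * range-table entry (cf. exec_rt_fetch) -> user to connect as. Storing
 * the RT index in the ForeignScan itself would drop the first hop.
 */
static int
user_for_direct_modify(int resultRelIndex)
{
    int            rtindex = result_relations[resultRelIndex];
    const MiniRTE *rte = &range_table[rtindex];

    return rte->checkAsUser ? rte->checkAsUser : current_user_id();
}
```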
{
"msg_contents": "On Wed, Oct 7, 2020 at 9:07 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 07/10/2020 12:50, Amit Langote wrote:\n> > On Tue, Oct 6, 2020 at 12:45 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> It would be better to set it in make_modifytable(), just\n> >> after calling PlanDirectModify().\n> >\n> > Actually, that's how it was done in earlier iterations but I think I\n> > decided to move that into the FDW's functions due to some concern of\n> > one of the other patches that depended on this patch. Maybe it makes\n> > sense to bring that back into make_modifytable() and worry about the\n> > other patch later.\n>\n> On second thoughts, I take back my earlier comment. Setting it in\n> make_modifytable() relies on the assumption that the subplan is a single\n> ForeignScan node, on the target relation. The documentation for\n> PlanDirectModify says:\n>\n> > To execute the direct modification on the remote server, this\n> > function must rewrite the target subplan with a ForeignScan plan node\n> > that executes the direct modification on the remote server.\n>>\n> So I guess that assumption is safe. But I'd like to have some wiggle\n> room here. Wouldn't it be OK to have a Result node on top of the\n> ForeignScan, for example? If it really must be a simple ForeignScan\n> node, the PlanDirectModify API seems pretty strange.\n>\n> I'm not entirely sure what I would like to do with this now. I could\n> live with either version, but I'm not totally happy with either. (I like\n> your suggestion below)\n\nAssuming you mean the idea of using RT index to access ResultRelInfos\nin es_result_relations, we would still need to store the index in the\nForeignScan node, so the question of whether to do it in\nmake_modifytable() or in PlanDirectModify() must still be answered.\n\n> Looking at this block in postgresBeginDirectModify:\n>\n> > /*\n> > * Identify which user to do the remote access as. 
This should match what\n> > * ExecCheckRTEPerms() does.\n> > */\n> > Assert(fsplan->resultRelIndex >= 0);\n> > dmstate->resultRelIndex = fsplan->resultRelIndex;\n> > rtindex = list_nth_int(resultRelations, fsplan->resultRelIndex);\n> > rte = exec_rt_fetch(rtindex, estate);\n> > userid = rte->checkAsUser ? rte->checkAsUser : GetUserId();\n>\n> That's a complicated way of finding out the target table's RTI. We\n> should probably store the result RTI in the ForeignScan in the first place.\n>\n> >> Another idea is to merge \"resultRelIndex\" and a \"range table index\" into\n> >> one value. Range table entries that are updated would have a\n> >> ResultRelInfo, others would not. I'm not sure if that would end up being\n> >> cleaner or messier than what we have now, but might be worth trying.\n> >\n> > I have thought about something like this before. An idea I had is to\n> > make es_result_relations array indexable by plain RT indexes, then we\n> > don't need to maintain separate indexes that we do today for result\n> > relations.\n>\n> That sounds like a good idea. es_result_relations is currently an array\n> of ResultRelInfos, so that would leave a lot of unfilled structs in the\n> array. But in on of your other threads, you proposed turning\n> es_result_relations into an array of pointers anyway\n> (https://www.postgresql.org/message-id/CA+HiwqE4k1Q2TLmCAvekw+8_NXepbnfUOamOeX=KpHRDTfSKxA@mail.gmail.com).\n\nOkay, I am reorganizing the patches around that idea and will post an\nupdate soon.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Oct 2020 21:35:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Oct 8, 2020 at 9:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Oct 7, 2020 at 9:07 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > On 07/10/2020 12:50, Amit Langote wrote:\n> > > I have thought about something like this before. An idea I had is to\n> > > make es_result_relations array indexable by plain RT indexes, then we\n> > > don't need to maintain separate indexes that we do today for result\n> > > relations.\n> >\n> > That sounds like a good idea. es_result_relations is currently an array\n> > of ResultRelInfos, so that would leave a lot of unfilled structs in the\n> > array. But in on of your other threads, you proposed turning\n> > es_result_relations into an array of pointers anyway\n> > (https://www.postgresql.org/message-id/CA+HiwqE4k1Q2TLmCAvekw+8_NXepbnfUOamOeX=KpHRDTfSKxA@mail.gmail.com).\n>\n> Okay, I am reorganizing the patches around that idea and will post an\n> update soon.\n\nAttached updated patches.\n\n0001 makes es_result_relations an RTI-indexable array, which allows to\nget rid of all \"result relation index\" fields across the code.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Oct 2020 17:01:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
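The scheme in 0001 — making es_result_relations an array of ResultRelInfo pointers indexed by range-table index (minus 1), with NULL for non-result entries — might look like this in miniature. The struct and function names are illustrative, not the actual executor code:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-ins for ResultRelInfo and EState. */
typedef struct MiniResultRelInfo
{
    int rti;                    /* range-table index this rel corresponds to */
} MiniResultRelInfo;

typedef struct MiniEState
{
    int                  range_table_size;
    MiniResultRelInfo  **result_relations;  /* indexed by RTI - 1; NULL
                                             * for non-result entries */
} MiniEState;

static void
init_result_relations_array(MiniEState *estate)
{
    estate->result_relations =
        calloc(estate->range_table_size, sizeof(MiniResultRelInfo *));
}

static MiniResultRelInfo *
init_result_relation(MiniEState *estate, int rti)
{
    MiniResultRelInfo *rri = malloc(sizeof(MiniResultRelInfo));

    rri->rti = rti;
    estate->result_relations[rti - 1] = rri;    /* sparse pointer array */
    return rri;
}

/*
 * With the array RTI-indexed, the RTI itself is the lookup key: no
 * separate "result relation index" needs to be carried in plan nodes.
 */
static MiniResultRelInfo *
lookup_result_relation(const MiniEState *estate, int rti)
{
    return estate->result_relations[rti - 1];
}
```

Because it is an array of pointers rather than of structs, the unfilled slots Heikki mentions cost only one pointer each.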
{
"msg_contents": "On 09/10/2020 11:01, Amit Langote wrote:\n> On Thu, Oct 8, 2020 at 9:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Wed, Oct 7, 2020 at 9:07 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> On 07/10/2020 12:50, Amit Langote wrote:\n>>>> I have thought about something like this before. An idea I had is to\n>>>> make es_result_relations array indexable by plain RT indexes, then we\n>>>> don't need to maintain separate indexes that we do today for result\n>>>> relations.\n>>>\n>>> That sounds like a good idea. es_result_relations is currently an array\n>>> of ResultRelInfos, so that would leave a lot of unfilled structs in the\n>>> array. But in on of your other threads, you proposed turning\n>>> es_result_relations into an array of pointers anyway\n>>> (https://www.postgresql.org/message-id/CA+HiwqE4k1Q2TLmCAvekw+8_NXepbnfUOamOeX=KpHRDTfSKxA@mail.gmail.com).\n>>\n>> Okay, I am reorganizing the patches around that idea and will post an\n>> update soon.\n> \n> Attached updated patches.\n> \n> 0001 makes es_result_relations an RTI-indexable array, which allows to\n> get rid of all \"result relation index\" fields across the code.\n\nThanks! A couple small things I wanted to check with you before committing:\n\n1. We have many different cleanup/close routines now: \nExecCloseResultRelations, ExecCloseRangeTableRelations, and \nExecCleanUpTriggerState. Do we need them all? It seems to me that we \ncould merge ExecCloseRangeTableRelations() and \nExecCleanUpTriggerState(), they seem to do roughly the same thing: close \nrelations that were opened for ResultRelInfos. They are always called \ntogether, except in afterTriggerInvokeEvents(). And in \nafterTriggerInvokeEvents() too, there would be no harm in doing both, \neven though we know there aren't any entries in the es_result_relations \narray at that point.\n\n2. The way this is handled in worker.c is a bit funny. 
In \ncreate_estate_for_relation(), you create a ResultRelInfo, but you \n*don't* put it in the es_opened_result_relations list. That's \nsurprising, but I'm also surprised there are no \nExecCloseResultRelations() calls before the FreeExecutorState() calls in \nworker.c. It's not needed because the \napply_handle_insert/update/delete_internal() functions call \nExecCloseIndices() directly, so they don't rely on the \nExecCloseResultRelations() function for cleanup. That works too, but \nit's a bit surprising because it's different from how it's done in \ncopy.c and nodeModifyTable.c. It would feel natural to rely on \nExecCloseResultRelations() in worker.c as well, but on the other hand, \nit also calls ExecOpenIndices() in a more lazy fashion, and it makes \nsense to call ExecCloseIndices() in the same functions that \nExecOpenIndices() is called. So I'm not sure if changing that would be \nan improvement overall. What do you think? Did you consider doing that?\n\nAttached is your original patch v13, and a patch on top of it that \nmerges ExecCloseResultRelations() and ExecCleanUpTriggerState(), and \nmakes some minor comment changes. I didn't do anything about the \nworker.c business, aside from adding a comment about it.\n\n- Heikki",
"msg_date": "Mon, 12 Oct 2020 14:12:29 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Mon, Oct 12, 2020 at 8:12 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 09/10/2020 11:01, Amit Langote wrote:\n> > On Thu, Oct 8, 2020 at 9:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> On Wed, Oct 7, 2020 at 9:07 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>> On 07/10/2020 12:50, Amit Langote wrote:\n> >>>> I have thought about something like this before. An idea I had is to\n> >>>> make es_result_relations array indexable by plain RT indexes, then we\n> >>>> don't need to maintain separate indexes that we do today for result\n> >>>> relations.\n> >>>\n> >>> That sounds like a good idea. es_result_relations is currently an array\n> >>> of ResultRelInfos, so that would leave a lot of unfilled structs in the\n> >>> array. But in on of your other threads, you proposed turning\n> >>> es_result_relations into an array of pointers anyway\n> >>> (https://www.postgresql.org/message-id/CA+HiwqE4k1Q2TLmCAvekw+8_NXepbnfUOamOeX=KpHRDTfSKxA@mail.gmail.com).\n> >>\n> >> Okay, I am reorganizing the patches around that idea and will post an\n> >> update soon.\n> >\n> > Attached updated patches.\n> >\n> > 0001 makes es_result_relations an RTI-indexable array, which allows to\n> > get rid of all \"result relation index\" fields across the code.\n>\n> Thanks! A couple small things I wanted to check with you before committing:\n\nThanks for checking.\n\n> 1. We have many different cleanup/close routines now:\n> ExecCloseResultRelations, ExecCloseRangeTableRelations, and\n> ExecCleanUpTriggerState. Do we need them all? It seems to me that we\n> could merge ExecCloseRangeTableRelations() and\n> ExecCleanUpTriggerState(), they seem to do roughly the same thing: close\n> relations that were opened for ResultRelInfos. They are always called\n> together, except in afterTriggerInvokeEvents(). 
And in\n> afterTriggerInvokeEvents() too, there would be no harm in doing both,\n> even though we know there aren't any entries in the es_result_relations\n> array at that point.\n\nHmm, I find trigger result relations to behave differently enough to\ndeserve a separate function. For example, unlike plan-specified\nresult relations, they don't point to range table relations and don't\nopen indices. Maybe the name could be revisited, say,\nExecCloseTriggerResultRelations(). Also, maybe call the other\nfunctions:\n\nExecInitPlanResultRelationsArray()\nExecInitPlanResultRelation()\nExecClosePlanResultRelations()\n\nThoughts?\n\n> 2. The way this is handled in worker.c is a bit funny. In\n> create_estate_for_relation(), you create a ResultRelInfo, but you\n> *don't* put it in the es_opened_result_relations list. That's\n> surprising, but I'm also surprised there are no\n> ExecCloseResultRelations() calls before the FreeExecutorState() calls in\n> worker.c. It's not needed because the\n> apply_handle_insert/update/delete_internal() functions call\n> ExecCloseIndices() directly, so they don't rely on the\n> ExecCloseResultRelations() function for cleanup. That works too, but\n> it's a bit surprising because it's different from how it's done in\n> copy.c and nodeModifyTable.c. It would feel natural to rely on\n> ExecCloseResultRelations() in worker.c as well, but on the other hand,\n> it also calls ExecOpenIndices() in a more lazy fashion, and it makes\n> sense to call ExecCloseIndices() in the same functions that\n> ExecOpenIndices() is called. So I'm not sure if changing that would be\n> an improvement overall. What do you think? Did you consider doing that?\n\nYeah, that did bother me too a bit. 
I'm okay either way but it does\nlook a bit inconsistent.\n\nActually, maybe we don't need to be so paranoid about setting up\nes_result_relations in worker.c, because none of the downstream\nfunctionality invoked seems to rely on it, that is, no need to call\nExecInitResultRelationsArray() and ExecInitResultRelation().\nExecSimpleRelation* and downstream functionality assume a\nsingle-relation operation and the ResultRelInfo is explicitly passed.\n\n> Attached is your original patch v13, and a patch on top of it that\n> merges ExecCloseResultRelations() and ExecCleanUpTriggerState(), and\n> makes some minor comment changes. I didn't do anything about the\n> worker.c business, aside from adding a comment about it.\n\nThanks for the cleanup.\n\nI had noticed there was some funny capitalization in my patch:\n\n+ ResultRelInfo **es_result_relations; /* Array of Per-range-table-entry\n\ns/Per-/per-\n\nAlso, I think a comma may be needed in the parenthetical below:\n\n+ * can index it by the RT index (minus 1 to be accurate).\n\n...(minus 1, to be accurate)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Oct 2020 22:47:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 12/10/2020 16:47, Amit Langote wrote:\n> On Mon, Oct 12, 2020 at 8:12 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> 1. We have many different cleanup/close routines now:\n>> ExecCloseResultRelations, ExecCloseRangeTableRelations, and\n>> ExecCleanUpTriggerState. Do we need them all? It seems to me that we\n>> could merge ExecCloseRangeTableRelations() and\n>> ExecCleanUpTriggerState(), they seem to do roughly the same thing: close\n>> relations that were opened for ResultRelInfos. They are always called\n>> together, except in afterTriggerInvokeEvents(). And in\n>> afterTriggerInvokeEvents() too, there would be no harm in doing both,\n>> even though we know there aren't any entries in the es_result_relations\n>> array at that point.\n> \n> Hmm, I find trigger result relations to behave differently enough to\n> deserve a separate function. For example, unlike plan-specified\n> result relations, they don't point to range table relations and don't\n> open indices. Maybe the name could be revisited, say,\n> ExecCloseTriggerResultRelations().\n\nMatter of perception I guess. I still prefer to club them together into \none Close call. It's true that they're slightly different, but they're \nalso pretty similar. And IMHO they're more similar than different.\n\n> Also, maybe call the other functions:\n> \n> ExecInitPlanResultRelationsArray()\n> ExecInitPlanResultRelation()\n> ExecClosePlanResultRelations()\n> \n> Thoughts?\n\nHmm. How about initializing the array lazily, on the first \nExecInitPlanResultRelation() call? It's not performance critical, and \nthat way there's one fewer initialization function that you need to \nremember to call.\n\nIt occurred to me that if we do that (initialize the array lazily), \nthere's very little need for the PlannedStmt->resultRelations list \nanymore. 
It's only used in ExecRelationIsTargetRelation(), but if we \nassume that ExecRelationIsTargetRelation() is only called after InitPlan \nhas initialized the result relation for the relation, it can easily \ncheck es_result_relations instead. I think that's a safe assumption. \nExecRelationIsTargetRelation() is only used in FDWs, and I believe the \nFDWs initialization routine can only be called after ExecInitModifyTable \nhas been called on the relation.\n\nThe PlannedStmt->rootResultRelations field is even more useless.\n\n> Actually, maybe we don't need to be so paranoid about setting up\n> es_result_relations in worker.c, because none of the downstream\n> functionality invoked seems to rely on it, that is, no need to call\n> ExecInitResultRelationsArray() and ExecInitResultRelation().\n> ExecSimpleRelation* and downstream functionality assume a\n> single-relation operation and the ResultRelInfo is explicitly passed.\n\nHmm, yeah, I like that. Similarly in ExecuteTruncateGuts(), there isn't \nactually any need to put the ResultRelInfos in the es_result_relations \narray.\n\nPutting all this together, I ended up with the attached. It doesn't \ninclude the subsequent commits in this patch set yet, for removal of \nes_result_relation_info et al.\n\n- Heikki",
"msg_date": "Mon, 12 Oct 2020 19:57:38 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
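Heikki's suggestion above — allocating the array lazily inside the first ExecInitPlanResultRelation() call, so there is no separate initialization function to remember — could be sketched as follows. The names are hypothetical; this also shows how an ExecRelationIsTargetRelation()-style check falls out of the lazy scheme:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct LazyResultRelInfo
{
    int rti;
} LazyResultRelInfo;

typedef struct LazyEState
{
    int                  range_table_size;
    LazyResultRelInfo  **result_relations;  /* NULL until first use */
} LazyEState;

/*
 * Allocate the RTI-indexed array on demand inside the per-relation init
 * function, so callers never need a separate array-initialization step.
 */
static LazyResultRelInfo *
init_plan_result_relation(LazyEState *estate, int rti)
{
    LazyResultRelInfo *rri;

    if (estate->result_relations == NULL)
        estate->result_relations =
            calloc(estate->range_table_size, sizeof(LazyResultRelInfo *));

    rri = malloc(sizeof(LazyResultRelInfo));
    rri->rti = rti;
    estate->result_relations[rti - 1] = rri;
    return rri;
}

/*
 * A target-relation check in the lazy scheme: if the array was never
 * allocated, nothing is a target relation.
 */
static int
relation_is_target(const LazyEState *estate, int rti)
{
    if (estate->result_relations == NULL)
        return 0;
    return estate->result_relations[rti - 1] != NULL;
}
```

This is also where Amit's concern bites: the check is only correct if every result relation has already been initialized when it runs.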
{
"msg_contents": "On Tue, Oct 13, 2020 at 1:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 12/10/2020 16:47, Amit Langote wrote:\n> > On Mon, Oct 12, 2020 at 8:12 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> 1. We have many different cleanup/close routines now:\n> >> ExecCloseResultRelations, ExecCloseRangeTableRelations, and\n> >> ExecCleanUpTriggerState. Do we need them all? It seems to me that we\n> >> could merge ExecCloseRangeTableRelations() and\n> >> ExecCleanUpTriggerState(), they seem to do roughly the same thing: close\n> >> relations that were opened for ResultRelInfos. They are always called\n> >> together, except in afterTriggerInvokeEvents(). And in\n> >> afterTriggerInvokeEvents() too, there would be no harm in doing both,\n> >> even though we know there aren't any entries in the es_result_relations\n> >> array at that point.\n> >\n> > Hmm, I find trigger result relations to behave differently enough to\n> > deserve a separate function. For example, unlike plan-specified\n> > result relations, they don't point to range table relations and don't\n> > open indices. Maybe the name could be revisited, say,\n> > ExecCloseTriggerResultRelations().\n>\n> Matter of perception I guess. I still prefer to club them together into\n> one Close call. It's true that they're slightly different, but they're\n> also pretty similar. And IMHO they're more similar than different.\n\nOkay, fine with me.\n\n> > Also, maybe call the other functions:\n> >\n> > ExecInitPlanResultRelationsArray()\n> > ExecInitPlanResultRelation()\n> > ExecClosePlanResultRelations()\n> >\n> > Thoughts?\n>\n> Hmm. How about initializing the array lazily, on the first\n> ExecInitPlanResultRelation() call? 
It's not performance critical, and\n> that way there's one fewer initialization function that you need to\n> remember to call.\n\nAgree that's better.\n\n> It occurred to me that if we do that (initialize the array lazily),\n> there's very little need for the PlannedStmt->resultRelations list\n> anymore. It's only used in ExecRelationIsTargetRelation(), but if we\n> assume that ExecRelationIsTargetRelation() is only called after InitPlan\n> has initialized the result relation for the relation, it can easily\n> check es_result_relations instead. I think that's a safe assumption.\n> ExecRelationIsTargetRelation() is only used in FDWs, and I believe the\n> FDWs initialization routine can only be called after ExecInitModifyTable\n> has been called on the relation.\n>\n> The PlannedStmt->rootResultRelations field is even more useless.\n\nI am very much tempted to remove those fields from PlannedStmt,\nalthough I am concerned that the following now assumes that *all*\nresult relations are initialized in the executor initialization phase:\n\nbool\nExecRelationIsTargetRelation(EState *estate, Index scanrelid)\n{\n if (!estate->es_result_relations)\n return false;\n\n return estate->es_result_relations[scanrelid - 1] != NULL;\n}\n\nIn the other thread [1], I am proposing that we initialize result\nrelations lazily, but the above will be a blocker to that.\n\n> > Actually, maybe we don't need to be so paranoid about setting up\n> > es_result_relations in worker.c, because none of the downstream\n> > functionality invoked seems to rely on it, that is, no need to call\n> > ExecInitResultRelationsArray() and ExecInitResultRelation().\n> > ExecSimpleRelation* and downstream functionality assume a\n> > single-relation operation and the ResultRelInfo is explicitly passed.\n>\n> Hmm, yeah, I like that. 
Similarly in ExecuteTruncateGuts(), there isn't\n> actually any need to put the ResultRelInfos in the es_result_relations\n> array.\n>\n> Putting all this together, I ended up with the attached. It doesn't\n> include the subsequent commits in this patch set yet, for removal of\n> es_result_relation_info et al.\n\nThanks.\n\n+ * We put the ResultRelInfos in the es_opened_result_relations list, even\n+ * though we don't have a range table and don't populate the\n+ * es_result_relations array. That's a bit bogus, but it's enough to make\n+ * ExecGetTriggerResultRel() find them.\n */\n estate = CreateExecutorState();\n resultRelInfos = (ResultRelInfo *)\n palloc(list_length(rels) * sizeof(ResultRelInfo));\n resultRelInfo = resultRelInfos;\n+ estate->es_result_relations = (ResultRelInfo **)\n+ palloc(list_length(rels) * sizeof(ResultRelInfo *));\n\nMaybe don't allocate es_result_relations here?\n\n+/*\n+ * Close all relations opened by ExecGetRangeTableRelation()\n+ */\n+void\n+ExecCloseRangeTableRelations(EState *estate)\n+{\n+ int i;\n+\n+ for (i = 0; i < estate->es_range_table_size; i++)\n {\n if (estate->es_relations[i])\n table_close(estate->es_relations[i], NoLock);\n }\n\nI think we have an optimization opportunity here (maybe as a separate\npatch). Why don't we introduce es_opened_relations? That way, if\nonly a single or few of potentially 1000s relations in the range table\nis/are opened, we don't needlessly loop over *all* relations here.\nThat can happen, for example, with a query where no partitions could\nbe pruned at planning time, so the range table contains all\npartitions, but only one or few are accessed during execution and the\nrest run-time pruned. Although, in the workloads where it would\nmatter, other overheads easily mask the overhead of this loop; see the\nfirst message at the linked thread [1], so it is hard to show an\nimmediate benefit from this.\n\nAnyway, other than my concern about ExecRelationIsTargetRelation()\nmentioned above, I think the patch looks good.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/30/2621/\n\n\n",
"msg_date": "Tue, 13 Oct 2020 13:32:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
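The optimization Amit floats in this message — tracking only the relations actually opened, so cleanup need not loop over a potentially huge range table when run-time pruning leaves most partitions untouched — can be sketched with a simple list of opened slots. All names are illustrative stand-ins, not the actual executor structures:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct MiniRelation
{
    int closed;
} MiniRelation;

typedef struct ScanEState
{
    int             range_table_size;
    MiniRelation  **relations;      /* indexed by RTI - 1; NULL if never opened */
    int            *opened;         /* array indexes of the opened entries */
    int             nopened;
} ScanEState;

static MiniRelation *
open_range_table_relation(ScanEState *estate, int rti)
{
    MiniRelation *rel = calloc(1, sizeof(MiniRelation));

    estate->relations[rti - 1] = rel;
    estate->opened[estate->nopened++] = rti - 1;    /* remember what we opened */
    return rel;
}

/*
 * The close pass walks only the entries actually opened, rather than all
 * range_table_size slots -- skipping the full-range-table loop that the
 * es_opened_relations idea wants to avoid.
 */
static int
close_range_table_relations(ScanEState *estate)
{
    int i;

    for (i = 0; i < estate->nopened; i++)
        estate->relations[estate->opened[i]]->closed = 1;
    return estate->nopened;         /* number of close calls made */
}
```

With 1000 range-table entries and two opened relations, the close pass makes exactly two close calls instead of scanning all 1000 slots.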
{
"msg_contents": "On 13/10/2020 07:32, Amit Langote wrote:\n> On Tue, Oct 13, 2020 at 1:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> It occurred to me that if we do that (initialize the array lazily),\n>> there's very little need for the PlannedStmt->resultRelations list\n>> anymore. It's only used in ExecRelationIsTargetRelation(), but if we\n>> assume that ExecRelationIsTargetRelation() is only called after InitPlan\n>> has initialized the result relation for the relation, it can easily\n>> check es_result_relations instead. I think that's a safe assumption.\n>> ExecRelationIsTargetRelation() is only used in FDWs, and I believe the\n>> FDWs initialization routine can only be called after ExecInitModifyTable\n>> has been called on the relation.\n>>\n>> The PlannedStmt->rootResultRelations field is even more useless.\n> \n> I am very much tempted to remove those fields from PlannedStmt,\n> although I am concerned that the following now assumes that *all*\n> result relations are initialized in the executor initialization phase:\n> \n> bool\n> ExecRelationIsTargetRelation(EState *estate, Index scanrelid)\n> {\n> if (!estate->es_result_relations)\n> return false;\n> \n> return estate->es_result_relations[scanrelid - 1] != NULL;\n> }\n> \n> In the other thread [1], I am proposing that we initialize result\n> relations lazily, but the above will be a blocker to that.\n\nOk, I'll leave it alone then. But I'll still merge resultRelations and \nrootResultRelations into one list. 
I don't see any point in keeping them \nseparate.\n\nI'm tempted to remove ExecRelationIsTargetRelation() altogether, but \nkeeping the resultRelations list isn't really a big deal, so I'll leave \nthat for another discussion.\n\n>>> Actually, maybe we don't need to be so paranoid about setting up\n>>> es_result_relations in worker.c, because none of the downstream\n>>> functionality invoked seems to rely on it, that is, no need to call\n>>> ExecInitResultRelationsArray() and ExecInitResultRelation().\n>>> ExecSimpleRelation* and downstream functionality assume a\n>>> single-relation operation and the ResultRelInfo is explicitly passed.\n>>\n>> Hmm, yeah, I like that. Similarly in ExecuteTruncateGuts(), there isn't\n>> actually any need to put the ResultRelInfos in the es_result_relations\n>> array.\n>>\n>> Putting all this together, I ended up with the attached. It doesn't\n>> include the subsequent commits in this patch set yet, for removal of\n>> es_result_relation_info et al.\n> \n> Thanks.\n> \n> + * We put the ResultRelInfos in the es_opened_result_relations list, even\n> + * though we don't have a range table and don't populate the\n> + * es_result_relations array. That's a big bogus, but it's enough to make\n> + * ExecGetTriggerResultRel() find them.\n> */\n> estate = CreateExecutorState();\n> resultRelInfos = (ResultRelInfo *)\n> palloc(list_length(rels) * sizeof(ResultRelInfo));\n> resultRelInfo = resultRelInfos;\n> + estate->es_result_relations = (ResultRelInfo **)\n> + palloc(list_length(rels) * sizeof(ResultRelInfo *));\n> \n> Maybe don't allocate es_result_relations here?\n\nFixed.\n\n> Anyway, other than my concern about ExecRelationIsTargetRelation()\n> mentioned above, I think the patch looks good.\n\nOk, committed. I'll continue to look at the rest of the patches in this \npatch series now.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 13 Oct 2020 13:12:59 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Tue, Oct 13, 2020 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 13/10/2020 07:32, Amit Langote wrote:\n> > On Tue, Oct 13, 2020 at 1:57 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> It occurred to me that if we do that (initialize the array lazily),\n> >> there's very little need for the PlannedStmt->resultRelations list\n> >> anymore. It's only used in ExecRelationIsTargetRelation(), but if we\n> >> assume that ExecRelationIsTargetRelation() is only called after InitPlan\n> >> has initialized the result relation for the relation, it can easily\n> >> check es_result_relations instead. I think that's a safe assumption.\n> >> ExecRelationIsTargetRelation() is only used in FDWs, and I believe the\n> >> FDWs initialization routine can only be called after ExecInitModifyTable\n> >> has been called on the relation.\n> >>\n> >> The PlannedStmt->rootResultRelations field is even more useless.\n> >\n> > I am very much tempted to remove those fields from PlannedStmt,\n> > although I am concerned that the following now assumes that *all*\n> > result relations are initialized in the executor initialization phase:\n> >\n> > bool\n> > ExecRelationIsTargetRelation(EState *estate, Index scanrelid)\n> > {\n> > if (!estate->es_result_relations)\n> > return false;\n> >\n> > return estate->es_result_relations[scanrelid - 1] != NULL;\n> > }\n> >\n> > In the other thread [1], I am proposing that we initialize result\n> > relations lazily, but the above will be a blocker to that.\n>\n> Ok, I'll leave it alone then. But I'll still merge resultRelations and\n> rootResultRelations into one list. I don't see any point in keeping them\n> separate.\n\nShould be fine. 
As you said in the commit message, it should probably\nhave been that way to begin with, but I don't recall why I didn't make\nit so.\n\n> I'm tempted to remove ExecRelationIsTargetRelation() altogether, but\n> keeping the resultRelations list isn't really a big deal, so I'll leave\n> that for another discussion.\n\nYeah, makes sense.\n\n> > Anyway, other than my concern about ExecRelationIsTargetRelation()\n> > mentioned above, I think the patch looks good.\n>\n> Ok, committed. I'll continue to look at the rest of the patches in this\n> patch series now.\n\nThanks.\n\nBTW, you mentioned the lazy ResultRelInfo optimization bit in the\ncommit message, so does that mean you intend to take a look at the\nother thread [1] too? Or should I post a rebased version of the lazy\nResultRelInfo initialization patch here in this thread? That patch is\njust a bunch of refactoring too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/30/2621/\n\n\n",
"msg_date": "Tue, 13 Oct 2020 21:03:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 13/10/2020 15:03, Amit Langote wrote:\n> On Tue, Oct 13, 2020 at 7:13 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> Ok, committed. I'll continue to look at the rest of the patches in this\n>> patch series now.\n\nI've reviewed the next two patches in the series, they are pretty much \nready for commit now. I made just a few minor changes, notably:\n\n- I moved the responsibility to set ForeignTable->resultRelation to the \nFDWs, like you had in the original patch version. Sorry for \nflip-flopping on that.\n\n- In postgres_fdw.c, I changed it to store the ResultRelInfo pointer in \nPgFdwDirectModifyState, instead of storing the RT index and looking it \nup in the BeginDirectModify and IterateDirectModify. I think you did it \nthat way in the earlier patch versions, too.\n\n- Some minor comment and docs kibitzing.\n\nOne little idea I had:\n\nI think all FDWs that support direct modify will have to carry the \nresultRelation index or the ResultRelInfo pointer from BeginDirectModify \nto IterateDirectModify in the FDW's private struct. It's not \ncomplicated, but should we make life easier for FDWs by storing the \nResultRelInfo pointer in the ForeignScanState struct in the core code? \nThe doc now says:\n\n> The data that was actually inserted, updated or deleted must be\n> stored in the ri_projectReturning->pi_exprContext->ecxt_scantuple of\n> the target foreign table's ResultRelInfo obtained using the\n> information passed to BeginDirectModify. Return NULL if no more rows\n> are available.\n\nThat \"ResultRelInfo obtained using the information passed to \nBeginDirectModify\" part is pretty vague. We could expand it, but if we \nstored the ResultRelInfo in the ForeignScanState, we could explain it \nsuccinctly.\n\n> BTW, you mentioned the lazy ResultRelInfo optimization bit in the\n> commit message, so does that mean you intend to take a look at the\n> other thread [1] too? 
Or should I post a rebased version of the lazy\n> ResultRelInfo initialization patch here in this thread? That patch is\n> just a bunch of refactoring too.\n\nNo promises, but yeah, now that I'm knee-deep in this ResultRelInfo \nbusiness, I'll try to take a look at that too :-).\n\n- Heikki",
"msg_date": "Tue, 13 Oct 2020 19:09:44 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 13/10/2020 19:09, Heikki Linnakangas wrote:\n> One little idea I had:\n> \n> I think all FDWs that support direct modify will have to carry the\n> resultRelaton index or the ResultRelInfo pointer from BeginDirectModify\n> to IterateDirectModify in the FDW's private struct. It's not\n> complicated, but should we make life easier for FDWs by storing the\n> ResultRelInfo pointer in the ForeignScanState struct in the core code?\n> The doc now says:\n> \n>> The data that was actually inserted, updated or deleted must be\n>> stored in the ri_projectReturning->pi_exprContext->ecxt_scantuple of\n>> the target foreign table's ResultRelInfo obtained using the\n>> information passed to BeginDirectModify. Return NULL if no more rows\n>> are available.\n> \n> That \"ResultRelInfo obtained using the information passed to\n> BeginDirectModify\" part is a pretty vague. We could expand it, but if we\n> stored the ResultRelInfo in the ForeignScanState, we could explain it\n> succinctly.\n\nI tried that approach, see attached. Yeah, this feels better to me.\n\n- Heikki",
"msg_date": "Tue, 13 Oct 2020 19:30:28 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 1:30 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 13/10/2020 19:09, Heikki Linnakangas wrote:\n> > One little idea I had:\n> >\n> > I think all FDWs that support direct modify will have to carry the\n> > resultRelation index or the ResultRelInfo pointer from BeginDirectModify\n> > to IterateDirectModify in the FDW's private struct. It's not\n> > complicated, but should we make life easier for FDWs by storing the\n> > ResultRelInfo pointer in the ForeignScanState struct in the core code?\n> > The doc now says:\n> >\n> >> The data that was actually inserted, updated or deleted must be\n> >> stored in the ri_projectReturning->pi_exprContext->ecxt_scantuple of\n> >> the target foreign table's ResultRelInfo obtained using the\n> >> information passed to BeginDirectModify. Return NULL if no more rows\n> >> are available.\n> >\n> > That \"ResultRelInfo obtained using the information passed to\n> > BeginDirectModify\" part is pretty vague. We could expand it, but if we\n> > stored the ResultRelInfo in the ForeignScanState, we could explain it\n> > succinctly.\n>\n> I tried that approach, see attached. Yeah, this feels better to me.\n\nI like the idea of storing the ResultRelInfo in ForeignScanState, but\nit would be better if we can document the fact that an FDW may not\nreliably access it until IterateDirectModify(). That's because setting\nit in ExecInitForeignScan() will mean *all* result relations must be\ninitialized during ExecInitModifyTable(), which defies my\nlazy-ResultRelInfo-initialization proposal. 
As to why I'm\npushing that proposal, consider that when we get the ability to use\nrun-time pruning for UPDATE/DELETE with [1], initializing all result\nrelations before initializing the plan tree will mean most of those\nResultRelInfos will be unused, because run-time pruning that occurs\nwhen the plan tree is initialized (and/or when it is executed) may\neliminate all but a few result relations.\n\nI've attached a diff to v17-0001 to show one way of delaying setting\nForeignScanState.resultRelInfo.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/30/2575/",
"msg_date": "Wed, 14 Oct 2020 15:44:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 14/10/2020 09:44, Amit Langote wrote:\n> I like the idea of storing the ResultRelInfo in ForeignScanState, but\n> it would be better if we can document the fact that an FDW may not\n> reliably access it until IterateDirectModify(). That's because setting\n> it in ExecInitForeignScan() will mean *all* result relations must be\n> initialized during ExecInitModifyTable(), which defies my\n> lazy-ResultRelInfo-initialization proposal. As to why I'm\n> pushing that proposal, consider that when we get the ability to use\n> run-time pruning for UPDATE/DELETE with [1], initializing all result\n> relations before initializing the plan tree will mean most of those\n> ResultRelInfos will be unused, because run-time pruning that occurs\n> when the plan tree is initialized (and/or when it is executed) may\n> eliminate all but a few result relations.\n> \n> I've attached a diff to v17-0001 to show one way of delaying setting\n> ForeignScanState.resultRelInfo.\n\nThe BeginDirectModify function does a lot of expensive things, like \nopening a connection to the remote server. If we want to optimize \nrun-time pruning, I think we need to avoid calling BeginDirectModify for \npruned partitions altogether.\n\nI pushed this without those delay-setting-resultRelInfo changes. But we \ncan revisit those changes with the run-time pruning optimization patch.\n\nI'll continue with the last couple of patches in this thread.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 14 Oct 2020 12:04:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 6:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 14/10/2020 09:44, Amit Langote wrote:\n> > I like the idea of storing the ResultRelInfo in ForeignScanState, but\n> > it would be better if we can document the fact that an FDW may not\n> > reliably access it until IterateDirectModify(). That's because setting\n> > it in ExecInitForeignScan() will mean *all* result relations must be\n> > initialized during ExecInitModifyTable(), which defies my\n> > lazy-ResultRelInfo-initialization proposal. As to why I'm\n> > pushing that proposal, consider that when we get the ability to use\n> > run-time pruning for UPDATE/DELETE with [1], initializing all result\n> > relations before initializing the plan tree will mean most of those\n> > ResultRelInfos will be unused, because run-time pruning that occurs\n> > when the plan tree is initialized (and/or when it is executed) may\n> > eliminate all but a few result relations.\n> >\n> > I've attached a diff to v17-0001 to show one way of delaying setting\n> > ForeignScanState.resultRelInfo.\n>\n> The BeginDirectModify function does a lot of expensive things, like\n> opening a connection to the remote server. If we want to optimize\n> run-time pruning, I think we need to avoid calling BeginDirectModify for\n> pruned partitions altogether.\n\nNote that if foreign partitions get pruned during the so-called \"init\"\nrun-time pruning (that is, in the ExecInitNode() phase),\nBeginDirectModify() won't get called on them. Although your concern\ndoes apply if there is only going to be \"exec\" run-time pruning and no\n\"initial\" pruning.\n\nFor me, the former case is a bit more interesting, as it occurs with\nprepared statements using a generic plan (update parted_table set ...\nwhere partkey = $1).\n\n> I pushed this without those delay-setting-resultRelInfo changes. 
But we\n> can revisit those changes with the run-time pruning optimization patch.\n\nSure, that makes sense.\n\n> I'll continue with the last couple of patches in this thread.\n\nOkay, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Oct 2020 18:51:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 6:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I'll continue with the last couple of patches in this thread.\n\nI committed the move of the cross-partition logic to new \nExecCrossPartitionUpdate() function, with just minor comment editing and \npgindent. I left out the refactoring around the calls to AFTER ROW \nINSERT/DELETE triggers. I stared at the change for a while, and wasn't \nsure if I liked the patched or the unpatched new version better, so I \nleft it alone.\n\nLooking at the last patch, \"Revise child-to-root tuple conversion map \nmanagement\", that's certainly an improvement. However, I find it \nconfusing that sometimes the mapping from child to root is in \nrelinfo->ri_ChildToRootMap, and sometimes in \nrelinfo->ri_PartitionInfo->pi_PartitionToRootMap. When is each of those \nfilled in? If both are set, is it well defined which one is initialized \nfirst?\n\nIn general, I'm pretty confused by the initialization of \nri_PartitionInfo. Where is it initialized, and when? In execnodes.h, the \ndefinition of ResultRelInfo says:\n\n> \t/* info for partition tuple routing (NULL if not set up yet) */\n> \tstruct PartitionRoutingInfo *ri_PartitionInfo;\n\nThat implies that the field is initialized lazily. But in \nExecFindPartition, we have this:\n\n> \t\tif (partidx == partdesc->boundinfo->default_index)\n> \t\t{\n> \t\t\tPartitionRoutingInfo *partrouteinfo = rri->ri_PartitionInfo;\n> \n> \t\t\t/*\n> \t\t\t * The tuple must match the partition's layout for the constraint\n> \t\t\t * expression to be evaluated successfully. 
If the partition is\n> \t\t\t * sub-partitioned, that would already be the case due to the code\n> \t\t\t * above, but for a leaf partition the tuple still matches the\n> \t\t\t * parent's layout.\n> \t\t\t *\n> \t\t\t * Note that we have a map to convert from root to current\n> \t\t\t * partition, but not from immediate parent to current partition.\n> \t\t\t * So if we have to convert, do it from the root slot; if not, use\n> \t\t\t * the root slot as-is.\n> \t\t\t */\n> \t\t\tif (partrouteinfo)\n> \t\t\t{\n> \t\t\t\tTupleConversionMap *map = partrouteinfo->pi_RootToPartitionMap;\n> \n> \t\t\t\tif (map)\n> \t\t\t\t\tslot = execute_attr_map_slot(map->attrMap, rootslot,\n> \t\t\t\t\t\t\t\t\t\t\t\t partrouteinfo->pi_PartitionTupleSlot);\n> \t\t\t\telse\n> \t\t\t\t\tslot = rootslot;\n> \t\t\t}\n> \n> \t\t\tExecPartitionCheck(rri, slot, estate, true);\n> \t\t}\n\nThat check implies that it's not just lazily initialized, the code will \nwork differently if ri_PartitionInfo is set or not.\n\nI think all this would be more clear if ri_PartitionInfo and \nri_ChildToRootMap were both truly lazily initialized, the first time \nthey're needed. And if we removed \nri_PartitionInfo->pi_PartitionToRootMap, and always used \nri_ChildToRootMap for it.\n\nMaybe remove PartitionRoutingInfo struct altogether, and just move its \nfields directly to ResultRelInfo.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 15 Oct 2020 17:59:13 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Oct 15, 2020 at 11:59 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On Wed, Oct 14, 2020 at 6:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I'll continue with the last couple of patches in this thread.\n>\n> I committed the move of the cross-partition logic to new\n> ExecCrossPartitionUpdate() function, with just minor comment editing and\n> pgindent. I left out the refactoring around the calls to AFTER ROW\n> INSERT/DELETE triggers. I stared at the change for a while, and wasn't\n> sure if I liked the patched or the unpatched new version better, so I\n> left it alone.\n\nOkay, thanks for committing that.\n\n> Looking at the last patch, \"Revise child-to-root tuple conversion map\n> management\", that's certainly an improvement. However, I find it\n> confusing that sometimes the mapping from child to root is in\n> relinfo->ri_ChildToRootMap, and sometimes in\n> relinfo->ri_PartitionInfo->pi_PartitionToRootMap. When is each of those\n> filled in? If both are set, is it well defined which one is initialized\n> first?\n\nIt is ri_ChildToRootMap that is set first, because it's only set in\nchild UPDATE target relations which are initialized in\nExecInitModifyTable(), that is way before partition tuple routing\ncomes into picture.\n\nri_PartitionInfo and hence pi_PartitionToRootMap is set in tuple\nrouting target partition's ResultRelInfos, which are lazily\ninitialized when tuples land into them.\n\nIf a tuple routing target partition happens to be an UPDATE target\nrelation and we need to initialize the partition-to-root map, which\nfor a tuple routing target partition is to be saved in\npi_PartitionToRootMap, with the patch, we will try to reuse\nri_ChildToRootMap because it would already be initialized.\n\nBut as you say below, maybe we don't need to have two fields for the\nsame thing, which I agree with. 
Having only ri_ChildToRootMap as you\nsuggest below suffices.\n\n> In general, I'm pretty confused by the initialization of\n> ri_PartitionInfo. Where is initialized, and when? In execnodes.h, the\n> definition of ResultRelInfo says:\n>\n> > /* info for partition tuple routing (NULL if not set up yet) */\n> > struct PartitionRoutingInfo *ri_PartitionInfo;\n>\n> That implies that the field is initialized lazily. But in\n> ExecFindPartition, we have this:\n>\n> > if (partidx == partdesc->boundinfo->default_index)\n> > {\n> > PartitionRoutingInfo *partrouteinfo = rri->ri_PartitionInfo;\n> >\n> > /*\n> > * The tuple must match the partition's layout for the constraint\n> > * expression to be evaluated successfully. If the partition is\n> > * sub-partitioned, that would already be the case due to the code\n> > * above, but for a leaf partition the tuple still matches the\n> > * parent's layout.\n> > *\n> > * Note that we have a map to convert from root to current\n> > * partition, but not from immediate parent to current partition.\n> > * So if we have to convert, do it from the root slot; if not, use\n> > * the root slot as-is.\n> > */\n> > if (partrouteinfo)\n> > {\n> > TupleConversionMap *map = partrouteinfo->pi_RootToPartitionMap;\n> >\n> > if (map)\n> > slot = execute_attr_map_slot(map->attrMap, rootslot,\n> > partrouteinfo->pi_PartitionTupleSlot);\n> > else\n> > slot = rootslot;\n> > }\n> >\n> > ExecPartitionCheck(rri, slot, estate, true);\n> > }\n>\n> That check implies that it's not just lazily initialized, the code will\n> work differently if ri_PartitionInfo is set or not.\n>\n> I think all this would be more clear if ri_PartitionInfo and\n> ri_ChildToRootMap were both truly lazily initialized, the first time\n> they're needed.\n\nSo, we initialize these maps when we initialize a partition's\nResultRelInfo. 
I mean if the partition has a different tuple\ndescriptor than root, we know we are going to need to convert tuples\nbetween them (in either direction), so we might as well initialize the\nmaps when the ResultRelInfo is built, which we do lazily for tuple\nrouting target relations at least. In that sense, at least\nroot-to-partition maps are initialized lazily, that is only when a\npartition receives a tuple via routing.\n\nPartition-to-root maps' initialization though is not always lazy,\nbecause they are also needed by UPDATE target relations, whose\nResultRelInfo are initialized in ExecInitModifyTable(), which is not\nlazy enough. That will change with my other patch though. :)\n\n> And if we removed\n> ri_PartitionInfo->pi_PartitionToRootMap, and always used\n> ri_ChildToRootMap for it.\n\nDone in the attached.\n\n> Maybe remove PartitionRoutingInfo struct altogether, and just move its\n> fields directly to ResultRelInfo.\n\nIf we do that, we'll end up with 3 notations for the same thing across\nreleases: In v10 and v11, PartitionRoutingInfos members are saved in\narrays in ModifyTableState, totally detached from the partition\nResultRelInfos. In v12 (3f2393edef), we moved them into ResultRelInfo\nbut chose to add them into a sub-struct (PartitionRoutingInfo), which\nin retrospect was not a great decision. Now if we pull them into\nResultRelInfo, we'll have invented the 3rd notation. Maybe that makes\nthings hard when back-patching bug-fixes?\n\nAttached updated patch.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 16 Oct 2020 22:12:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 2020-Oct-16, Amit Langote wrote:\n\n> On Thu, Oct 15, 2020 at 11:59 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > On Wed, Oct 14, 2020 at 6:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> > And if we removed\n> > ri_PartitionInfo->pi_PartitionToRootMap, and always used\n> > ri_ChildToRootMap for it.\n> \n> Done in the attached.\n\nHmm... Overall I like the simplification.\n\n> > Maybe remove PartitionRoutingInfo struct altogether, and just move its\n> > fields directly to ResultRelInfo.\n> \n> If we do that, we'll end up with 3 notations for the same thing across\n> releases: In v10 and v11, PartitionRoutingInfos members are saved in\n> arrays in ModifyTableState, totally detached from the partition\n> ResultRelInfos. In v12 (3f2393edef), we moved them into ResultRelInfo\n> but chose to add them into a sub-struct (PartitionRoutingInfo), which\n> in retrospect was not a great decision. Now if we pull them into\n> ResultRelInfo, we'll have invented the 3rd notation. Maybe that makes\n> things hard when back-patching bug-fixes?\n\nI don't necessarily agree that PartitionRoutingInfo was such a bad idea.\nIn fact I wonder if we shouldn't move *more* stuff into it\n(ri_PartitionCheckExpr), and keep struct ResultRelInfo clean of\npartitioning-related stuff (other than ri_PartitionInfo and\nri_PartitionRoot); there are plenty of ResultRelInfos that are not\npartitions, so I think it makes sense to keep the split. I'm thinking\nthat the ChildToRootMap should continue to be in PartitionRoutingInfo.\n\nMaybe what we need in order to keep the initialization \"lazy enough\" is\nsome inline functions that act as getters, initializing members of\nPartitionRoutingInfo when first needed. 
(This would probably need\nboolean flags, to distinguish \"hasn't been set up yet\" from \"it is not\nneeded for this partition\" for each member that requires it).\n\nBTW it is curious that ExecInitRoutingInfo is called both in\nExecInitPartitionInfo() (from ExecFindPartition when the ResultRelInfo\nfor the partition is not found) *and* from ExecFindPartition again, when\nthe ResultRelInfo for the partition *is* found. Doesn't this mean that\nri_PartitionInfo is set up twice for the same partition?\n\n\n\n",
"msg_date": "Fri, 16 Oct 2020 11:45:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Fri, Oct 16, 2020 at 11:45 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Oct-16, Amit Langote wrote:\n> > On Thu, Oct 15, 2020 at 11:59 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > > On Wed, Oct 14, 2020 at 6:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> > > And if we removed\n> > > ri_PartitionInfo->pi_PartitionToRootMap, and always used\n> > > ri_ChildToRootMap for it.\n> >\n> > Done in the attached.\n>\n> Hmm... Overall I like the simplification.\n\nThank you for looking it over.\n\n> > > Maybe remove PartitionRoutingInfo struct altogether, and just move its\n> > > fields directly to ResultRelInfo.\n> >\n> > If we do that, we'll end up with 3 notations for the same thing across\n> > releases: In v10 and v11, PartitionRoutingInfos members are saved in\n> > arrays in ModifyTableState, totally detached from the partition\n> > ResultRelInfos. In v12 (3f2393edef), we moved them into ResultRelInfo\n> > but chose to add them into a sub-struct (PartitionRoutingInfo), which\n> > in retrospect was not a great decision. Now if we pull them into\n> > ResultRelInfo, we'll have invented the 3rd notation. Maybe that makes\n> > things hard when back-patching bug-fixes?\n>\n> I don't necessarily agree that PartitionRoutingInfo was such a bad idea.\n> In fact I wonder if we shouldn't move *more* stuff into it\n> (ri_PartitionCheckExpr), and keep struct ResultRelInfo clean of\n> partitioning-related stuff (other than ri_PartitionInfo and\n> ri_PartitionRoot); there are plenty of ResultRelInfos that are not\n> partitions, so I think it makes sense to keep the split. 
I'm thinking\n> that the ChildToRootMap should continue to be in PartitionRoutingInfo.\n\nHmm, I don't see ri_PartitionCheckExpr as being a piece of routing\ninformation, because it's primarily meant to be used when inserting\n*directly* into a partition, although it's true we do initialize it in\nrouting target partitions too in some cases.\n\nAlso, ChildToRootMap was introduced by the trigger transition table\nproject, not tuple routing. I think we misjudged this when we added\nPartitionToRootMap to PartitionRoutingInfo, because it doesn't really\nbelong there. This patch fixes that by removing PartitionToRootMap.\n\nRootToPartitionMap and the associated partition slot is the only piece\nof extra information that is needed by tuple routing target relations.\n\n> Maybe what we need in order to keep the initialization \"lazy enough\" is\n> some inline functions that act as getters, initializing members of\n> PartitionRoutingInfo when first needed. (This would probably need\n> boolean flags, to distinguish \"hasn't been set up yet\" from \"it is not\n> needed for this partition\" for each member that requires it).\n\nAs I said in my previous email, I don't see how we can make\ninitializing the map any lazier than it already is. If a partition\nhas a different tuple descriptor than the root table, then we know for\nsure that any tuples that are routed to it will need to be converted\nfrom the root tuple format to its tuple format, so we might as well\nbuild the map when the ResultRelInfo is built. 
If no tuple lands into\na partition, we would neither build its ResultRelInfo nor the map.\nWith the current arrangement, if the map field is NULL, it simply\nmeans that the partition has the same tuple format as the root table.\n\n> BTW it is curious that ExecInitRoutingInfo is called both in\n> ExecInitPartitionInfo() (from ExecFindPartition when the ResultRelInfo\n> for the partition is not found) *and* from ExecFindPartition again, when\n> the ResultRelInfo for the partition *is* found. Doesn't this mean that\n> ri_PartitionInfo is set up twice for the same partition?\n\nNo. ExecFindPartition() directly calls ExecInitRoutingInfo() only for\nreused update result relations, that too, only the first time a tuple\nlands into such a partition. For the subsequent tuples that land into\nthe same partition, ExecFindPartition() will be able to find that\nResultRelInfo in the proute->partitions[] array. All ResultRelInfos\nin that array are assumed to have been processed by\nExecInitRoutingInfo().\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 17 Oct 2020 16:44:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 2020-Oct-17, Amit Langote wrote:\n\n> Hmm, I don't see ri_PartitionCheckExpr as being a piece of routing\n> information, because it's primarily meant to be used when inserting\n> *directly* into a partition, although it's true we do initialize it in\n> routing target partitions too in some cases.\n> \n> Also, ChildToRootMap was introduced by the trigger transition table\n> project, not tuple routing. I think we misjudged this when we added\n> PartitionToRootMap to PartitionRoutingInfo, because it doesn't really\n> belong there. This patch fixes that by removing PartitionToRootMap.\n> \n> RootToPartitionMap and the associated partition slot is the only piece\n> of extra information that is needed by tuple routing target relations.\n\nWell, I was thinking on making the ri_PartitionInfo be about\npartitioning in general, not just specifically for partition tuple\nrouting. Maybe Heikki is right that it may end up being simpler to\nremove ri_PartitionInfo altogether. It'd just be a couple of additional\npointers in ResultRelInfo after all. (Remember that we wanted to get\nrid of fields specific to only certain kinds of RTEs in RangeTblEntry\nfor example, to keep things cleanly separated, although that project\neventually found its demise for other reasons.)\n\n> As I said in my previous email, I don't see how we can make\n> initializing the map any lazier than it already is. If a partition\n> has a different tuple descriptor than the root table, then we know for\n> sure that any tuples that are routed to it will need to be converted\n> from the root tuple format to its tuple format, so we might as well\n> build the map when the ResultRelInfo is built. 
If no tuple lands into\n> a partition, we would neither build its ResultRelInfo nor the map.\n> With the current arrangement, if the map field is NULL, it simply\n> means that the partition has the same tuple format as the root table.\n\nI see -- makes sense.\n\n> On Fri, Oct 16, 2020 at 11:45 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > BTW it is curious that ExecInitRoutingInfo is called both in\n> > ExecInitPartitionInfo() (from ExecFindPartition when the ResultRelInfo\n> > for the partition is not found) *and* from ExecFindPartition again, when\n> > the ResultRelInfo for the partition *is* found. Doesn't this mean that\n> > ri_PartitionInfo is set up twice for the same partition?\n> \n> No. ExecFindPartition() directly calls ExecInitRoutingInfo() only for\n> reused update result relations, that too, only the first time a tuple\n> lands into such a partition. For the subsequent tuples that land into\n> the same partition, ExecFindPartition() will be able to find that\n> ResultRelInfo in the proute->partitions[] array. All ResultRelInfos\n> in that array are assumed to have been processed by\n> ExecInitRoutingInfo().\n\nDoh, right, sorry, I was misreading the if/else maze there.\n\n\n",
"msg_date": "Sat, 17 Oct 2020 12:54:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Sun, Oct 18, 2020 at 12:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2020-Oct-17, Amit Langote wrote:\n> > Hmm, I don't see ri_PartitionCheckExpr as being a piece of routing\n> > information, because it's primarily meant to be used when inserting\n> > *directly* into a partition, although it's true we do initialize it in\n> > routing target partitions too in some cases.\n> >\n> > Also, ChildToRootMap was introduced by the trigger transition table\n> > project, not tuple routing. I think we misjudged this when we added\n> > PartitionToRootMap to PartitionRoutingInfo, because it doesn't really\n> > belong there. This patch fixes that by removing PartitionToRootMap.\n> >\n> > RootToPartitionMap and the associated partition slot is the only piece\n> > of extra information that is needed by tuple routing target relations.\n>\n> Well, I was thinking on making the ri_PartitionInfo be about\n> partitioning in general, not just specifically for partition tuple\n> routing. Maybe Heikki is right that it may end up being simpler to\n> remove ri_PartitionInfo altogether. It'd just be a couple of additional\n> pointers in ResultRelInfo after all.\n\nSo that's 2 votes for removing PartitionRoutingInfo from the tree.\nOkay, I have tried that in the attached 0002 patch. Also, I fixed\nsome comments in 0001 that still referenced PartitionToRootMap.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Oct 2020 13:54:27 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 19/10/2020 07:54, Amit Langote wrote:\n> On Sun, Oct 18, 2020 at 12:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> On 2020-Oct-17, Amit Langote wrote:\n>>> Hmm, I don't see ri_PartitionCheckExpr as being a piece of routing\n>>> information, because it's primarily meant to be used when inserting\n>>> *directly* into a partition, although it's true we do initialize it in\n>>> routing target partitions too in some cases.\n>>>\n>>> Also, ChildToRootMap was introduced by the trigger transition table\n>>> project, not tuple routing. I think we misjudged this when we added\n>>> PartitionToRootMap to PartitionRoutingInfo, because it doesn't really\n>>> belong there. This patch fixes that by removing PartitionToRootMap.\n>>>\n>>> RootToPartitionMap and the associated partition slot is the only piece\n>>> of extra information that is needed by tuple routing target relations.\n>>\n>> Well, I was thinking on making the ri_PartitionInfo be about\n>> partitioning in general, not just specifically for partition tuple\n>> routing. Maybe Heikki is right that it may end up being simpler to\n>> remove ri_PartitionInfo altogether. It'd just be a couple of additional\n>> pointers in ResultRelInfo after all.\n> \n> So that's 2 votes for removing PartitionRoutingInfo from the tree.\n> Okay, I have tried that in the attached 0002 patch. Also, I fixed\n> some comments in 0001 that still referenced PartitionToRootMap.\n\nPushed, with minor comment changes.\n\nI also noticed that the way the getTargetResultRelInfo() helper function \nwas used, was a bit messy. It was used when firing AFTER STATEMENT \ntriggers, but for some reason the code to fire BEFORE STATEMENT triggers \ndidn't use it but duplicated the logic instead. I made that a bit \nsimpler, by always setting the rootResultRelInfo field in \nExecInitModifyTable(), making the getTargetResultRelInfo() function \nunnecessary.\n\nThanks!\n\n- Heikki\n\n\n",
"msg_date": "Mon, 19 Oct 2020 14:48:26 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 17/10/2020 18:54, Alvaro Herrera wrote:\n> On 2020-Oct-17, Amit Langote wrote:\n>> As I said in my previous email, I don't see how we can make\n>> initializing the map any lazier than it already is. If a partition\n>> has a different tuple descriptor than the root table, then we know for\n>> sure that any tuples that are routed to it will need to be converted\n>> from the root tuple format to its tuple format, so we might as well\n>> build the map when the ResultRelInfo is built. If no tuple lands into\n>> a partition, we would neither build its ResultRelInfo nor the map.\n>> With the current arrangement, if the map field is NULL, it simply\n>> means that the partition has the same tuple format as the root table.\n> \n> I see -- makes sense.\n\nIt's probably true that there's no performance gain from initializing \nthem more lazily. But the reasoning and logic around the initialization \nis complicated. After tracing through various path through the code, I'm \nconvinced enough that it's correct, or at least these patches didn't \nbreak it, but I still think some sort of lazy initialization on first \nuse would make it more readable. Or perhaps there's some other \nrefactoring we could do.\n\nPerhaps we should have a magic TupleConversionMap value to mean \"no \nconversion required\". NULL could then mean \"not initialized yet\".\n\n>> On Fri, Oct 16, 2020 at 11:45 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n>>> BTW it is curious that ExecInitRoutingInfo is called both in\n>>> ExecInitPartitionInfo() (from ExecFindPartition when the ResultRelInfo\n>>> for the partition is not found) *and* from ExecFindPartition again, when\n>>> the ResultRelInfo for the partition *is* found. Doesn't this mean that\n>>> ri_PartitionInfo is set up twice for the same partition?\n>>\n>> No. 
ExecFindPartition() directly calls ExecInitRoutingInfo() only for\n>> reused update result relations, that too, only the first time a tuple\n>> lands into such a partition. For the subsequent tuples that land into\n>> the same partition, ExecFindPartition() will be able to find that\n>> ResultRelInfo in the proute->partitions[] array. All ResultRelInfos\n>> in that array are assumed to have been processed by\n>> ExecInitRoutingInfo().\n> \n> Doh, right, sorry, I was misreading the if/else maze there.\n\nI think that demonstrates my point that the logic is hard to follow :-).\n\n- Heikki\n\n\n",
"msg_date": "Mon, 19 Oct 2020 14:55:28 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 8:48 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 19/10/2020 07:54, Amit Langote wrote:\n> > On Sun, Oct 18, 2020 at 12:54 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> Well, I was thinking on making the ri_PartitionInfo be about\n> >> partitioning in general, not just specifically for partition tuple\n> >> routing. Maybe Heikki is right that it may end up being simpler to\n> >> remove ri_PartitionInfo altogether. It'd just be a couple of additional\n> >> pointers in ResultRelInfo after all.\n> >\n> > So that's 2 votes for removing PartitionRoutingInfo from the tree.\n> > Okay, I have tried that in the attached 0002 patch. Also, I fixed\n> > some comments in 0001 that still referenced PartitionToRootMap.\n>\n> Pushed, with minor comment changes.\n\nThank you.\n\n> I also noticed that the way the getTargetResultRelInfo() helper function\n> was used, was a bit messy. It was used when firing AFTER STATEMENT\n> triggers, but for some reason the code to fire BEFORE STATEMENT triggers\n> didn't use it but duplicated the logic instead. I made that a bit\n> simpler, by always setting the rootResultRelInfo field in\n> ExecInitModifyTable(), making the getTargetResultRelInfo() function\n> unnecessary.\n\nGood, I was mildly annoyed by that function too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Oct 2020 11:28:26 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Mon, Oct 19, 2020 at 8:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 17/10/2020 18:54, Alvaro Herrera wrote:\n> > On 2020-Oct-17, Amit Langote wrote:\n> >> As I said in my previous email, I don't see how we can make\n> >> initializing the map any lazier than it already is. If a partition\n> >> has a different tuple descriptor than the root table, then we know for\n> >> sure that any tuples that are routed to it will need to be converted\n> >> from the root tuple format to its tuple format, so we might as well\n> >> build the map when the ResultRelInfo is built. If no tuple lands into\n> >> a partition, we would neither build its ResultRelInfo nor the map.\n> >> With the current arrangement, if the map field is NULL, it simply\n> >> means that the partition has the same tuple format as the root table.\n> >\n> > I see -- makes sense.\n>\n> It's probably true that there's no performance gain from initializing\n> them more lazily. But the reasoning and logic around the initialization\n> is complicated. After tracing through various path through the code, I'm\n> convinced enough that it's correct, or at least these patches didn't\n> break it, but I still think some sort of lazy initialization on first\n> use would make it more readable. Or perhaps there's some other\n> refactoring we could do.\n\nSo the other patch I have mentioned is about lazy initialization of\nthe ResultRelInfo itself, not the individual fields, but maybe with\nenough refactoring we can get the latter too.\n\nCurrently, ExecInitModifyTable() performs ExecInitResultRelation() for\nall relations in ModifyTable.resultRelations, which sets most but not\nall ResultRelInfo fields (whatever InitResultRelInfo() can set),\nfollowed by initializing some other fields based on the contents of\nthe ModifyTable plan. 
My patch moves those two steps into a function\nExecBuildResultRelation() which is called lazily during\nExecModifyTable() for a given result relation on the first tuple\nproduced by that relation's plan. Actually, there's a \"getter\" named\nExecGetResultRelation() which first consults es_result_relations[rti -\n1] for the requested relation and if it's NULL then calls\nExecBuildResultRelation().\n\nWould you mind taking a look at that as a starting point? I am\nthinking there's enough relevant discussion here that I should post\nthe rebased version of that patch here.\n\n> Perhaps we should have a magic TupleConversionMap value to mean \"no\n> conversion required\". NULL could then mean \"not initialized yet\".\n\nPerhaps, a TupleConversionMap with its attrMap set to NULL means \"no\nconversion required\".\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Oct 2020 21:57:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Tue, Oct 20, 2020 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Oct 19, 2020 at 8:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > It's probably true that there's no performance gain from initializing\n> > them more lazily. But the reasoning and logic around the initialization\n> > is complicated. After tracing through various path through the code, I'm\n> > convinced enough that it's correct, or at least these patches didn't\n> > break it, but I still think some sort of lazy initialization on first\n> > use would make it more readable. Or perhaps there's some other\n> > refactoring we could do.\n>\n> So the other patch I have mentioned is about lazy initialization of\n> the ResultRelInfo itself, not the individual fields, but maybe with\n> enough refactoring we can get the latter too.\n\nSo, I tried implementing a lazy-initialization-on-first-access\napproach for both the ResultRelInfos themselves and some of the\nindividual fields of ResultRelInfo that don't need to be set right\naway. You can see the end result in the attached 0003 patch. This\nslims down ExecInitModifyTable() significantly, both in terms of code\nfootprint and the amount of work that it does.\n\n0001 fixes a thinko of the recent commit 1375422c782 that I discovered\nwhen debugging a problem with 0003.\n\n0002 is for something I have mentioned upthread.\nForeignScanState.resultRelInfo cannot be set in ExecInit* stage as\nit's done now, because with 0003, child ResultRelInfos will not have\nbeen added to es_result_relations during that stage.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 22 Oct 2020 22:49:14 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 2020-Oct-22, Amit Langote wrote:\n\n> 0001 fixes a thinko of the recent commit 1375422c782 that I discovered\n> when debugging a problem with 0003.\n\nHmm, how hard is it to produce a test case that fails because of this\nproblem?\n\n\n\n",
"msg_date": "Thu, 22 Oct 2020 11:25:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Thu, Oct 22, 2020 at 11:25 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2020-Oct-22, Amit Langote wrote:\n>\n> > 0001 fixes a thinko of the recent commit 1375422c782 that I discovered\n> > when debugging a problem with 0003.\n>\n> Hmm, how hard is it to produce a test case that fails because of this\n> problem?\n\nI checked and don't think there's any live bug here. You will notice\nif you take a look at 1375422c7 that we've made es_result_relations an\narray of pointers while the individual ModifyTableState nodes own the\nactual ResultRelInfos. So, EvalPlanQualStart() setting the parent\nEState's es_result_relations array to NULL implies that those pointers\nbecome inaccessible to the parent query's execution after\nEvalPlanQual() returns. However, nothing in the tree today accesses\nResulRelInfos through es_result_relations array, except during\nExecInit* stage (see ExecInitForeignScan()) but it would still be\nintact at that stage.\n\nWith the lazy-initialization patch though, we do check\nes_result_relations when trying to open a result relation to see if it\nhas already been initialized (a non-NULL pointer in that array means\nyes), so resetting it in the middle of the execution can't be safe.\nFor one example, we will end up initializing the same relation many\ntimes after not finding it in es_result_relations and also add it\n*duplicatively* to es_opened_result_relations list, breaking the\ninvariant that that list contains distinct relations.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Oct 2020 11:56:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 23/10/2020 05:56, Amit Langote wrote:\n> On Thu, Oct 22, 2020 at 11:25 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2020-Oct-22, Amit Langote wrote:\n>>\n>>> 0001 fixes a thinko of the recent commit 1375422c782 that I discovered\n>>> when debugging a problem with 0003.\n>>\n>> Hmm, how hard is it to produce a test case that fails because of this\n>> problem?\n> \n> I checked and don't think there's any live bug here. You will notice\n> if you take a look at 1375422c7 that we've made es_result_relations an\n> array of pointers while the individual ModifyTableState nodes own the\n> actual ResultRelInfos. So, EvalPlanQualStart() setting the parent\n> EState's es_result_relations array to NULL implies that those pointers\n> become inaccessible to the parent query's execution after\n> EvalPlanQual() returns. However, nothing in the tree today accesses\n> ResulRelInfos through es_result_relations array, except during\n> ExecInit* stage (see ExecInitForeignScan()) but it would still be\n> intact at that stage.\n> \n> With the lazy-initialization patch though, we do check\n> es_result_relations when trying to open a result relation to see if it\n> has already been initialized (a non-NULL pointer in that array means\n> yes), so resetting it in the middle of the execution can't be safe.\n> For one example, we will end up initializing the same relation many\n> times after not finding it in es_result_relations and also add it\n> *duplicatively* to es_opened_result_relations list, breaking the\n> invariant that that list contains distinct relations.\n\nPushed that thinko-fix, thanks!\n\n- Heikki\n\n\n",
"msg_date": "Fri, 23 Oct 2020 09:39:16 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 22/10/2020 16:49, Amit Langote wrote:\n> On Tue, Oct 20, 2020 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Mon, Oct 19, 2020 at 8:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> It's probably true that there's no performance gain from initializing\n>>> them more lazily. But the reasoning and logic around the initialization\n>>> is complicated. After tracing through various path through the code, I'm\n>>> convinced enough that it's correct, or at least these patches didn't\n>>> break it, but I still think some sort of lazy initialization on first\n>>> use would make it more readable. Or perhaps there's some other\n>>> refactoring we could do.\n>>\n>> So the other patch I have mentioned is about lazy initialization of\n>> the ResultRelInfo itself, not the individual fields, but maybe with\n>> enough refactoring we can get the latter too.\n> \n> So, I tried implementing a lazy-initialization-on-first-access\n> approach for both the ResultRelInfos themselves and some of the\n> individual fields of ResultRelInfo that don't need to be set right\n> away. You can see the end result in the attached 0003 patch. This\n> slims down ExecInitModifyTable() significantly, both in terms of code\n> footprint and the amount of work that it does.\n\nHave you done any performance testing? I'd like to know how much of a \ndifference this makes in practice.\n\nAnother alternative is to continue to create the ResultRelInfos in \nExecInitModify(), but initialize the individual fields in them lazily.\n\nDoes this patch become moot if we do the \"Overhaul UPDATE/DELETE \nprocessing\"? \n(https://www.postgresql.org/message-id/CA%2BHiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ%40mail.gmail.com)?\n\n- Heikki\n\n\n",
"msg_date": "Fri, 23 Oct 2020 10:04:31 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Fri, Oct 23, 2020 at 4:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 22/10/2020 16:49, Amit Langote wrote:\n> > On Tue, Oct 20, 2020 at 9:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >> On Mon, Oct 19, 2020 at 8:55 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >>> It's probably true that there's no performance gain from initializing\n> >>> them more lazily. But the reasoning and logic around the initialization\n> >>> is complicated. After tracing through various path through the code, I'm\n> >>> convinced enough that it's correct, or at least these patches didn't\n> >>> break it, but I still think some sort of lazy initialization on first\n> >>> use would make it more readable. Or perhaps there's some other\n> >>> refactoring we could do.\n> >>\n> >> So the other patch I have mentioned is about lazy initialization of\n> >> the ResultRelInfo itself, not the individual fields, but maybe with\n> >> enough refactoring we can get the latter too.\n> >\n> > So, I tried implementing a lazy-initialization-on-first-access\n> > approach for both the ResultRelInfos themselves and some of the\n> > individual fields of ResultRelInfo that don't need to be set right\n> > away. You can see the end result in the attached 0003 patch. This\n> > slims down ExecInitModifyTable() significantly, both in terms of code\n> > footprint and the amount of work that it does.\n>\n> Have you done any performance testing? 
I'd like to know how much of a\n> difference this makes in practice.\n\nI have shown some numbers here:\n\nhttps://www.postgresql.org/message-id/CA+HiwqG7ZruBmmih3wPsBZ4s0H2EhywrnXEduckY5Hr3fWzPWA@mail.gmail.com\n\nTo reiterate, if you apply the following patch:\n\n> Does this patch become moot if we do the \"Overhaul UPDATE/DELETE\n> processing\"?\n> (https://www.postgresql.org/message-id/CA%2BHiwqHpHdqdDn48yCEhynnniahH78rwcrv1rEX65-fsZGBOLQ%40mail.gmail.com)?\n\n...and run this benchmark with plan_cache_mode = force_generic_plan\n\npgbench -i -s 10 --partitions={0, 10, 100, 1000}\npgbench -T120 -f test.sql -M prepared\n\ntest.sql:\n\\set aid random(1, 1000000)\nupdate pgbench_accounts set abalance = abalance + 1 where aid = :aid;\n\nyou may see roughly the following results:\n\nHEAD:\n\n0 tps = 13045.485121 (excluding connections establishing)\n10 tps = 9358.157433 (excluding connections establishing)\n100 tps = 1878.274500 (excluding connections establishing)\n1000 tps = 84.684695 (excluding connections establishing)\n\nPatched (overhaul update/delete processing):\n\n0 tps = 12743.487196 (excluding connections establishing)\n10 tps = 12644.240748 (excluding connections establishing)\n100 tps = 4158.123345 (excluding connections establishing)\n1000 tps = 391.248067 (excluding connections establishing)\n\nAnd if you apply the patch being discussed here, TPS shoots up a bit,\nespecially for higher partition counts:\n\nPatched (lazy-ResultRelInfo-initialization)\n\n0 tps = 13419.283168 (excluding connections establishing)\n10 tps = 12588.016095 (excluding connections establishing)\n100 tps = 8560.824225 (excluding connections establishing)\n1000 tps = 1926.553901 (excluding connections establishing)\n\nTo explain these numbers a bit, \"overheaul update/delete processing\"\npatch improves the performance of that benchmark by allowing the\nupdates to use run-time pruning when executing generic plans, which\nthey can't today.\n\nHowever without 
\"lazy-ResultRelInfo-initialization\" patch,\nExecInitModifyTable() (or InitPlan() when I ran those benchmarks) can\nbe seen to be spending time initializing all of those result\nrelations, whereas only one of those will actually be used.\n\nAs mentioned further in that email, it's really the locking of all\nrelations by AcquireExecutorLocks() that occurs even before we enter\nthe executor that's a much thornier bottleneck for this benchmark.\nBut the ResultRelInfo initialization bottleneck sounded like it could\nget alleviated in a relatively straightforward manner. The patches\nthat were developed for attacking the locking bottleneck would require\nfurther reflection on whether they are correct.\n\n(Note: I've just copy pasted the numbers I reported in that email. To\nreproduce, I'll have to rebase the \"overhaul update/delete processing\"\npatch on this one, which I haven't yet done.)\n\n> Another alternative is to continue to create the ResultRelInfos in\n> ExecInitModify(), but initialize the individual fields in them lazily.\n\nIf you consider the above, maybe you can see how that will not really\neliminate the bottleneck I'm aiming to fix here.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Oct 2020 18:37:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On 23/10/2020 12:37, Amit Langote wrote:\n> To explain these numbers a bit, \"overheaul update/delete processing\"\n> patch improves the performance of that benchmark by allowing the\n> updates to use run-time pruning when executing generic plans, which\n> they can't today.\n> \n> However without \"lazy-ResultRelInfo-initialization\" patch,\n> ExecInitModifyTable() (or InitPlan() when I ran those benchmarks) can\n> be seen to be spending time initializing all of those result\n> relations, whereas only one of those will actually be used.\n> \n> As mentioned further in that email, it's really the locking of all\n> relations by AcquireExecutorLocks() that occurs even before we enter\n> the executor that's a much thornier bottleneck for this benchmark.\n> But the ResultRelInfo initialization bottleneck sounded like it could\n> get alleviated in a relatively straightforward manner. The patches\n> that were developed for attacking the locking bottleneck would require\n> further reflection on whether they are correct.\n> \n> (Note: I've just copy pasted the numbers I reported in that email. To\n> reproduce, I'll have to rebase the \"overhaul update/delete processing\"\n> patch on this one, which I haven't yet done.)\n\nOk, thanks for the explanation, now I understand.\n\nThis patch looks reasonable to me at a quick glance. I'm a bit worried \nor unhappy about the impact on FDWs, though. It doesn't seem nice that \nthe ResultRelInfo is not available in the BeginDirectModify call. It's \nnot too bad, the FDW can call ExecGetResultRelation() if it needs it, \nbut still. Perhaps it would be better to delay calling \nBeginDirectModify() until the first modification is performed, to avoid \nany initialization overhead there, like establishing the connection in \npostgres_fdw.\n\nBut since this applies on top of the \"overhaul update/delete processing\" \npatch, let's tackle that patch set next. Could you rebase that, please?\n\n- Heikki\n\n\n",
"msg_date": "Tue, 27 Oct 2020 15:23:00 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 10:23 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 23/10/2020 12:37, Amit Langote wrote:\n> > To explain these numbers a bit, \"overheaul update/delete processing\"\n> > patch improves the performance of that benchmark by allowing the\n> > updates to use run-time pruning when executing generic plans, which\n> > they can't today.\n> >\n> > However without \"lazy-ResultRelInfo-initialization\" patch,\n> > ExecInitModifyTable() (or InitPlan() when I ran those benchmarks) can\n> > be seen to be spending time initializing all of those result\n> > relations, whereas only one of those will actually be used.\n> >\n> > As mentioned further in that email, it's really the locking of all\n> > relations by AcquireExecutorLocks() that occurs even before we enter\n> > the executor that's a much thornier bottleneck for this benchmark.\n> > But the ResultRelInfo initialization bottleneck sounded like it could\n> > get alleviated in a relatively straightforward manner. The patches\n> > that were developed for attacking the locking bottleneck would require\n> > further reflection on whether they are correct.\n> >\n> > (Note: I've just copy pasted the numbers I reported in that email. To\n> > reproduce, I'll have to rebase the \"overhaul update/delete processing\"\n> > patch on this one, which I haven't yet done.)\n>\n> Ok, thanks for the explanation, now I understand.\n>\n> But since this applies on top of the \"overhaul update/delete processing\"\n> patch, let's tackle that patch set next. Could you rebase that, please?\n\nActually, I made lazy-ResultRelInfo-initialization apply on HEAD\ndirectly at one point because of its separate CF entry, that is, to\nappease the CF app's automatic patch tester that wouldn't know to\napply the other patch first. 
Because both of these patch sets want to\nchange how ModifyTable works, there are conflicts.\n\nThe \"overhaul update/delete processing\" patch is somewhat complex and\nI expect some amount of back and forth on its design points. OTOH,\nthe lazy-ResultRelInfo-initialization patch is straightforward enough\nthat I hoped it would be easier to bring it into a committable state\nthan the other. But I can see why one may find it hard to justify\ncommitting the latter without the former already in, because the\nbottleneck it purports to alleviate (that of eager ResultRelInfo\ninitialization) is not apparent until update/delete can use run-time\npruning.\n\nAnyway, I will post the rebased patch on the \"overhaul update/delete\nprocessing\" thread.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Oct 2020 12:02:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Tue, Oct 27, 2020 at 10:23 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> This patch looks reasonable to me at a quick glance. I'm a bit worried\n> or unhappy about the impact on FDWs, though. It doesn't seem nice that\n> the ResultRelInfo is not available in the BeginDirectModify call. It's\n> not too bad, the FDW can call ExecGetResultRelation() if it needs it,\n> but still. Perhaps it would be better to delay calling\n> BeginDirectModify() until the first modification is performed, to avoid\n> any initialization overhead there, like establishing the connection in\n> postgres_fdw.\n\nAh, calling BeginDirectModify() itself lazily sounds like a good idea;\nsee attached updated 0001 to see how that looks. While updating that\npatch, I realized that the ForeignScan.resultRelation that we\nintroduced in 178f2d560d will now be totally useless. :-(\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 28 Oct 2020 16:46:01 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 12:02 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Oct 27, 2020 at 10:23 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > But since this applies on top of the \"overhaul update/delete processing\"\n> > patch, let's tackle that patch set next. Could you rebase that, please?\n>\n>\n> Anyway, I will post the rebased patch on the \"overhaul update/delete\n> processing\" thread.\n\nDone.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Oct 2020 22:04:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 4:46 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Tue, Oct 27, 2020 at 10:23 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > This patch looks reasonable to me at a quick glance. I'm a bit worried\n> > or unhappy about the impact on FDWs, though. It doesn't seem nice that\n> > the ResultRelInfo is not available in the BeginDirectModify call. It's\n> > not too bad, the FDW can call ExecGetResultRelation() if it needs it,\n> > but still. Perhaps it would be better to delay calling\n> > BeginDirectModify() until the first modification is performed, to avoid\n> > any initialization overhead there, like establishing the connection in\n> > postgres_fdw.\n>\n> Ah, calling BeginDirectModify() itself lazily sounds like a good idea;\n> see attached updated 0001 to see how that looks. While updating that\n> patch, I realized that the ForeignScan.resultRelation that we\n> introduced in 178f2d560d will now be totally useless. :-(\n\nGiven that we've closed the CF entry for this thread and given that\nthere seems to be enough context to these patches, I will move these\npatches back to their original thread, that is:\n\n* ModifyTable overheads in generic plans *\nhttps://www.postgresql.org/message-id/flat/CA%2BHiwqE4k1Q2TLmCAvekw%2B8_NXepbnfUOamOeX%3DKpHRDTfSKxA%40mail.gmail.com\n\nThat will also make the CF-bot happy.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Oct 2020 15:06:50 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition routing layering in nodeModifyTable.c"
}
]
[
{
"msg_contents": "Hi Hackers,\n\nWe found a race condition in pg_mkdir_p(), here is a simple reproducer:\n\n #!/bin/sh\n\n basedir=pgdata\n datadir=$basedir/a/b/c/d/e/f/g/h/i/j/k/l/m/n/o/p/q/r/s/t/u/v/w/x/y/z\n logdir=$basedir/logs\n n=2\n\n rm -rf $basedir\n mkdir -p $logdir\n\n # init databases concurrently, they will all try to create the parent\ndirs\n for i in `seq $n`; do\n initdb -D $datadir/$i >$logdir/$i.log 2>&1 &\n done\n\n wait\n\n # there is a chance one of the initdb commands failed to create the\ndatadir\n grep 'could not create directory' $logdir/*\n\nThe logic in pg_mkdir_p() is as below:\n\n /* check for pre-existing directory */\n if (stat(path, &sb) == 0)\n {\n if (!S_ISDIR(sb.st_mode))\n {\n if (last)\n errno = EEXIST;\n else\n errno = ENOTDIR;\n retval = -1;\n break;\n }\n }\n else if (mkdir(path, last ? omode : S_IRWXU | S_IRWXG | S_IRWXO) <\n0)\n {\n retval = -1;\n break;\n }\n\nThis seems buggy as it first checks the existence of the dir and makes the\ndir if it does not exist yet, however when executing concurrently a\npossible race condition can be as below:\n\nA: does a/ exists? no\nB: does a/ exists? no\nA: try to create a/, succeed\nB: try to create a/, failed as it already exists\n\nTo prevent the race condition we could mkdir() directly, if it returns -1\nand errno is EEXIST then check whether it's really a dir with stat(). In\nfact this is what is done in the `mkdir -p` command:\nhttps://github.com/coreutils/gnulib/blob/b5a9fa677847081c9b4f26908272f122b15df8b9/lib/mkdir-p.c#L130-L164\n\nBy the way, some callers of pg_mkdir_p() checks for EEXIST explicitly, such\nas in pg_basebackup.c:\n\n if (pg_mkdir_p(statusdir, pg_dir_create_mode) != 0 && errno !=\nEEXIST)\n {\n pg_log_error(\"could not create directory \\\"%s\\\": %m\",\nstatusdir);\n exit(1);\n }\n\nThis is still wrong with current code logic, because when the statusdir is\na file the errno is also EEXIST, but it can pass the check here. 
Even if\nwe fix pg_mkdir_p() by following the `mkdir -p` way the errno check here is\nstill wrong.\n\nBest Regards\nNing\n",
"msg_date": "Thu, 18 Jul 2019 16:17:22 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Possible race condition in pg_mkdir_p()?"
},
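The mkdir()-first ordering Ning proposes can be sketched in plain C. This is a minimal illustration of the gnulib-style approach, not PostgreSQL's actual pg_mkdir_p(); the function names, fixed-size buffer, and permission bits are all ours:

```c
/*
 * Minimal sketch of a race-tolerant "mkdir -p" following the gnulib
 * approach: try mkdir() first, and only when it fails with EEXIST
 * fall back to stat() to verify a directory is really there.
 * mkdir_p_safe() and mkdir_one() are illustrative names, not
 * PostgreSQL's pg_mkdir_p().
 */
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

static int
mkdir_one(const char *path, mode_t mode)
{
	struct stat sb;

	if (mkdir(path, mode) == 0)
		return 0;
	/* EEXIST alone is not enough: the existing entry may be a file. */
	if (errno == EEXIST && stat(path, &sb) == 0 && S_ISDIR(sb.st_mode))
		return 0;
	return -1;
}

int
mkdir_p_safe(const char *path, mode_t omode)
{
	char		buf[1024];
	char	   *p;

	if (strlen(path) >= sizeof(buf))
	{
		errno = ENAMETOOLONG;
		return -1;
	}
	strcpy(buf, path);

	/* Create each parent in turn; a concurrent creator may win any step. */
	for (p = buf + 1; *p != '\0'; p++)
	{
		if (*p == '/')
		{
			*p = '\0';
			if (mkdir_one(buf, S_IRWXU | S_IRWXG | S_IRWXO) < 0)
				return -1;
			*p = '/';
		}
	}
	return mkdir_one(buf, omode);
}
```

Because mkdir() is attempted unconditionally, a concurrent creator winning the race only diverts the loser into the EEXIST-plus-stat() path, so every racing process still succeeds.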
{
"msg_contents": "On Thu, Jul 18, 2019 at 04:17:22PM +0800, Ning Yu wrote:\n> This is still wrong with current code logic, because when the statusdir is\n> a file the errno is also EEXIST, but it can pass the check here. Even if\n> we fix pg_mkdir_p() by following the `mkdir -p` way the errno check here is\n> still wrong.\n\nWould you like to send a patch?\n--\nMichael",
"msg_date": "Thu, 18 Jul 2019 17:57:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jul 18, 2019 at 04:17:22PM +0800, Ning Yu wrote:\n> > This is still wrong with current code logic, because when the statusdir\n> is\n> > a file the errno is also EEXIST, but it can pass the check here. Even if\n> > we fix pg_mkdir_p() by following the `mkdir -p` way the errno check here\n> is\n> > still wrong.\n>\n> Would you like to send a patch?\n>\n\nMichael, we'll send out the patch later. Checked code, it seems that there\nis another related mkdir() issue.\n\nMakePGDirectory() is actually a syscall mkdir(), and manpage says the errno\nmeaning of EEXIST,\n\n EEXIST pathname already exists (not necessarily as a directory).\nThis includes the case where pathname is a symbolic link, dangling or not.\n\nHowever it looks like some callers do not use that correctly, e.g.\n\n if (MakePGDirectory(directory) < 0)\n {\n if (errno == EEXIST)\n return;\n\nOR\n\n if (MakePGDirectory(parentdir) < 0 && errno != EEXIST)\n\ni.e. we should better use stat(path) && S_ISDIR(buf) && errno == EEXIST to\nreplace errno == EEXIST.\n\nOne possible fix is to add an argument like ignore_created (in case some\ncallers want to fail if the path has been created) in MakePGDirectory() and\nthen add that code logic into it.",
"msg_date": "Thu, 18 Jul 2019 18:30:13 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
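The EEXIST pitfall Paul describes can be demonstrated directly: mkdir() sets errno to EEXIST even when the existing entry is a plain file, so a caller must follow up with stat()/S_ISDIR() as the mail suggests. The helper below is a hypothetical illustration, not PostgreSQL code:

```c
/*
 * Demonstrates the EEXIST pitfall described above: mkdir() reports
 * EEXIST even when the existing path is a plain file, so checking
 * "errno == EEXIST" alone does not prove a directory exists.
 * dir_really_exists() is an illustrative helper name.
 */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* The check suggested above: the path exists *and* is a directory. */
int
dir_really_exists(const char *path)
{
	struct stat sb;

	return stat(path, &sb) == 0 && S_ISDIR(sb.st_mode);
}
```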
{
"msg_contents": "Hi Michael,\n\nThe patches are attached. To make reviewing easier we split them into small\npieces:\n\n- v1-0001-Fix-race-condition-in-pg_mkdir_p.patch: the fix to pg_mkdir_p()\n itself, basically we are following the `mkdir -p` logic;\n- v1-0002-Test-concurrent-call-to-pg_mkdir_p.patch: the tests for\npg_mkdir_p(),\n we could see how it fails by reverting the first patch, and a reproducer\nwith\n initdb is also provided in the README; as we do not know how to create a\nunit\n test in postgresql we have to employ a test module to do the job, not\nsure if\n this is a proper solution;\n- v1-0003-Fix-callers-of-pg_mkdir_p.patch &\n v1-0004-Fix-callers-of-MakePGDirectory.patch: fix callers of pg_mkdir_p()\nand\n MakePGDirectory(), tests are not provided for these changes;\n\nMakePGDirectory() is also called in TablespaceCreateDbspace(), EEXIST is\nconsidered as non-error for parent directories, however as it considers\nEEXIST\nas a failure for the last level of the path so the logic is still correct,\nwe\ndo not add a final stat() check for it.\n\nBest Regards\nNing\n\n\nOn Thu, Jul 18, 2019 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jul 18, 2019 at 04:17:22PM +0800, Ning Yu wrote:\n> > This is still wrong with current code logic, because when the statusdir\n> is\n> > a file the errno is also EEXIST, but it can pass the check here. Even if\n> > we fix pg_mkdir_p() by following the `mkdir -p` way the errno check here\n> is\n> > still wrong.\n>\n> Would you like to send a patch?\n> --\n> Michael\n>",
"msg_date": "Tue, 23 Jul 2019 14:54:20 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 02:54:20PM +0800, Ning Yu wrote:\n> MakePGDirectory() is also called in TablespaceCreateDbspace(), EEXIST is\n> considered as non-error for parent directories, however as it considers\n> EEXIST as a failure for the last level of the path so the logic is\n> still correct, \n\nSo the complains here are about two things:\n- In some code paths calling mkdir, we don't care about the fact that\nEEXIST can happen for something else than a folder. This could be a\nproblem if we have conflicts in the backend related to the naming of\nthe files/folders created. I find a bit surprising to not perform\nthe sanity checks in MakePGDirectory() in your patch. What of all the\nexisting callers of this routine?\n- pg_mkdir_p is pretty bad at detecting problems with concurrent\ncreation of parent directories, leading to random failures where these\nshould not happen.\n\nI may be missing something, but your patch does not actually fix\nproblem 2, no? Trying to do an initdb with a set of N folders using\nthe same parent folders not created still results in random failures.\n--\nMichael",
"msg_date": "Tue, 30 Jul 2019 17:04:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 4:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 23, 2019 at 02:54:20PM +0800, Ning Yu wrote:\n> > MakePGDirectory() is also called in TablespaceCreateDbspace(), EEXIST is\n> > considered as non-error for parent directories, however as it considers\n> > EEXIST as a failure for the last level of the path so the logic is\n> > still correct,\n>\n> So the complains here are about two things:\n> - In some code paths calling mkdir, we don't care about the fact that\n> EEXIST can happen for something else than a folder. This could be a\n> problem if we have conflicts in the backend related to the naming of\n> the files/folders created. I find a bit surprising to not perform\n> the sanity checks in MakePGDirectory() in your patch. What of all the\n> existing callers of this routine?\n\nThanks for the reply. There are several callers of MakePGDirectory(), most of\nthem already treats EEXIST as an error; TablespaceCreateDbspace() already has\na proper checking for the target dir, it has the chance to fail on a\nconcurrently created dir, but at least it will not be confused by a file with\nthe same name; PathNameCreateTemporaryDir() is fixed by our patch 0004, we\nwill call stat() after mkdir() to double check the result.\n\nIn fact personally I'm thinking that whether could we replace all uses of\nMakePGDirectory() with pg_mkdir_p(), so we could simplify\nTablespaceCreateDbspace() and PathNameCreateTemporaryDir() and other callers\nsignificantly.\n\n> - pg_mkdir_p is pretty bad at detecting problems with concurrent\n> creation of parent directories, leading to random failures where these\n> should not happen.\n>\n> I may be missing something, but your patch does not actually fix\n> problem 2, no? 
Trying to do an initdb with a set of N folders using\n> the same parent folders not created still results in random failures.\n\nWell, we should have fixed problem 2, this is our major purpose of the patch\n0001, it performs sanity check with stat() after mkdir() at each part of the\npath.\n\nThe initdb test was just the one used by us to verify our fix, here is our\nscript:\n\n n=4\n testdir=testdir\n datadir=$testdir/a/b/c/d/e/f/g/h/i/j/k/l/m/n/o/p/q/r/s/t/u/v/w/x/y/z\n logdir=$testdir/logs\n\n rm -rf $testdir\n mkdir -p $logdir\n\n for i in `seq $n`; do\n initdb -D $datadir/$i >$logdir/$i.log 2>&1 &\n done\n\n wait\n\n # check for failures\n grep 'could not create directory' $logdir/*\n\nWe have provided a test module in patch 0002 to perform a similar test, it\ncalls pg_mkdir_p() concurrently to trigger the issue, which has higher fail\nrate than initdb. With the patch 0001 both the initdb test and the test\nmodule will always succeed in our local environment.\n\nThanks\nNing\n\n> --\n> Michael\n\n\n",
"msg_date": "Tue, 30 Jul 2019 18:22:59 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
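The shell reproducer above can also be expressed as a single C program that forks several workers racing to create the same nested path; with the mkdir()-first, stat()-on-EEXIST ordering every worker should succeed regardless of interleaving. All names and paths here are ours, for illustration only:

```c
/*
 * C analogue of the shell reproducer above: several forked workers
 * race to create the same nested path.  With the mkdir()-first,
 * stat()-on-EEXIST ordering every worker succeeds no matter how the
 * calls interleave.  Helper names are illustrative, not PostgreSQL's.
 */
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int
create_dir_tolerant(const char *path)
{
	struct stat sb;

	if (mkdir(path, 0777) == 0)
		return 0;
	return (errno == EEXIST && stat(path, &sb) == 0 &&
			S_ISDIR(sb.st_mode)) ? 0 : -1;
}

/* Fork nchildren workers; each creates every prefix of path in turn.
 * Returns the number of workers (or forks) that failed. */
int
race_children(const char *path, int nchildren)
{
	int			spawned = 0;
	int			failures = 0;

	for (int i = 0; i < nchildren; i++)
	{
		pid_t		pid = fork();

		if (pid < 0)
		{
			failures++;			/* fork itself failed */
			continue;
		}
		if (pid == 0)
		{
			char		buf[512];

			strncpy(buf, path, sizeof(buf) - 1);
			buf[sizeof(buf) - 1] = '\0';
			for (char *p = buf + 1;; p++)
			{
				if (*p == '/' || *p == '\0')
				{
					char		saved = *p;

					*p = '\0';
					if (create_dir_tolerant(buf) < 0)
						_exit(1);
					*p = saved;
					if (saved == '\0')
						break;
				}
			}
			_exit(0);
		}
		spawned++;
	}
	for (int i = 0; i < spawned; i++)
	{
		int			status;

		if (wait(&status) < 0 || !WIFEXITED(status) ||
			WEXITSTATUS(status) != 0)
			failures++;
	}
	return failures;
}
```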
{
"msg_contents": "On Tue, Jul 30, 2019 at 06:22:59PM +0800, Ning Yu wrote:\n> In fact personally I'm thinking that whether could we replace all uses of\n> MakePGDirectory() with pg_mkdir_p(), so we could simplify\n> TablespaceCreateDbspace() and PathNameCreateTemporaryDir() and other callers\n> significantly.\n\nI would still keep the wrapper, but I think that as a final result we\nshould be able to get the code in PathNameCreateTemporaryDir() shaped\nin such a way that there are no multiple attempts at calling\nMakePGDirectory() on EEXIST. This has been introduced by dc6c4c9d to\nallow sharing temporary files between backends, which is rather recent\nbut a fixed set of two retries is not a deterministic method of\nresolution.\n\n> Well, we should have fixed problem 2, this is our major purpose of the patch\n> 0001, it performs sanity check with stat() after mkdir() at each part of the\n> path.\n\nI just reuse the script presented at the top of the thread with n=2,\nand I get that:\npgdata/logs/1.log:creating directory\npgdata/a/b/c/d/e/f/g/h/i/j/k/l/m/n/o/p/q/r/s/t/u/v/w/x/y/z/1\n... initdb: error: could not create directory \"pgdata/a\": File exists\n\nBut the result expected is that all the paths should be created with\nno complains about the parents existing, no? This reproduces on my\nDebian box 100% of the time, for different sub-paths. So something\nlooks wrong in your solution. The code comes originally from FreeBSD,\nhow do things happen there. Do we get failures if doing something\nlike that? I would expect this sequence to not fail:\nfor i in `seq 1 100`; do mkdir -p b/c/d/f/g/h/j/$i; done\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 13:04:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 16:17:22 +0800, Ning Yu wrote:\n> This seems buggy as it first checks the existence of the dir and makes the\n> dir if it does not exist yet, however when executing concurrently a\n> possible race condition can be as below:\n> \n> A: does a/ exists? no\n> B: does a/ exists? no\n> A: try to create a/, succeed\n> B: try to create a/, failed as it already exists\n\nHm. I'm not really seing much of a point in making mkdir_p safe against\nall of this. What's the scenario for pg where this matters? I assume\nyou're using it for somewhat different purposes, and that's why it is\nproblematic for you?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 21:11:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 30, 2019 at 06:22:59PM +0800, Ning Yu wrote:\n> > In fact personally I'm thinking that whether could we replace all uses of\n> > MakePGDirectory() with pg_mkdir_p(), so we could simplify\n> > TablespaceCreateDbspace() and PathNameCreateTemporaryDir() and other callers\n> > significantly.\n>\n> I would still keep the wrapper, but I think that as a final result we\n> should be able to get the code in PathNameCreateTemporaryDir() shaped\n> in such a way that there are no multiple attempts at calling\n> MakePGDirectory() on EEXIST. This has been introduced by dc6c4c9d to\n> allow sharing temporary files between backends, which is rather recent\n> but a fixed set of two retries is not a deterministic method of\n> resolution.\n>\n> > Well, we should have fixed problem 2, this is our major purpose of the patch\n> > 0001, it performs sanity check with stat() after mkdir() at each part of the\n> > path.\n>\n> I just reuse the script presented at the top of the thread with n=2,\n> and I get that:\n> pgdata/logs/1.log:creating directory\n> pgdata/a/b/c/d/e/f/g/h/i/j/k/l/m/n/o/p/q/r/s/t/u/v/w/x/y/z/1\n> ... initdb: error: could not create directory \"pgdata/a\": File exists\n\nCould I double confirm with you that you made a clean rebuild after\napplying the patches? pg_mkdir_p() is compiled as part of libpgport.a,\nand the postgres makefile will not relink the initdb binary\nautomatically, for myself I must 'make clean' and 'make' to ensure\ninitdb gets relinked.\n\n>\n> But the result expected is that all the paths should be created with\n> no complains about the parents existing, no? This reproduces on my\n> Debian box 100% of the time, for different sub-paths. So something\n> looks wrong in your solution. The code comes originally from FreeBSD,\n> how do things happen there. Do we get failures if doing something\n> like that? 
I would expect this sequence to not fail:\n> for i in `seq 1 100`; do mkdir -p b/c/d/f/g/h/j/$i; done\n> --\n> Michael\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:26:30 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:11 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-07-18 16:17:22 +0800, Ning Yu wrote:\n> > This seems buggy as it first checks the existence of the dir and makes the\n> > dir if it does not exist yet, however when executing concurrently a\n> > possible race condition can be as below:\n> >\n> > A: does a/ exists? no\n> > B: does a/ exists? no\n> > A: try to create a/, succeed\n> > B: try to create a/, failed as it already exists\n>\n> Hm. I'm not really seing much of a point in making mkdir_p safe against\n> all of this. What's the scenario for pg where this matters? I assume\n> you're using it for somewhat different purposes, and that's why it is\n> problematic for you?\n\nYes, you are right, postgres itself might not run into such kind of race\ncondition issue. The problem we encountered was on a downstream product\nof postgres, where multiple postgres clusters are deployed on the same\nmachine with common parent dirs.\n\nBest Regards\nNing\n\n>\n> Greetings,\n>\n> Andres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:35:02 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:26:30PM +0800, Ning Yu wrote:\n> Could I double confirm with you that you made a clean rebuild after\n> applying the patches? pg_mkdir_p() is compiled as part of libpgport.a,\n> and the postgres makefile will not relink the initdb binary\n> automatically, for myself I must 'make clean' and 'make' to ensure\n> initdb gets relinked.\n\nFor any patch I test, I just do a \"git clean -d -x -f\" before building\nas I switch a lot across stable branches as well. It looks that you\nare right on this one though, I have just rebuilt from scratch and I\ndon't see the failures anymore. \n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 13:40:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 09:11:44PM -0700, Andres Freund wrote:\n> Hm. I'm not really seing much of a point in making mkdir_p safe against\n> all of this. What's the scenario for pg where this matters? I assume\n> you're using it for somewhat different purposes, and that's why it is\n> problematic for you?\n\nI don't see why it is a problem to make our APIs more stable if we\nhave ways to do so. I actually fixed one recently as of 754b90f for a\nproblem that involved a tool linking to our version of readdir() that\nwe ship. Even with that, the retries for mkdir() on the base\ndirectory in PathNameCreateTemporaryDir() are basically caused by that\nsame limitation with the parent paths from this report, no? So we\ncould actually remove the dependency to the base directory in this\ncode path and just rely on pg_mkdir_p() to do the right thing for all\nthe parent paths. That's also a point raised by Ning upthread.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 13:48:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 12:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 31, 2019 at 12:26:30PM +0800, Ning Yu wrote:\n> > Could I double confirm with you that you made a clean rebuild after\n> > applying the patches? pg_mkdir_p() is compiled as part of libpgport.a,\n> > and the postgres makefile will not relink the initdb binary\n> > automatically, for myself I must 'make clean' and 'make' to ensure\n> > initdb gets relinked.\n>\n> For any patch I test, I just do a \"git clean -d -x -f\" before building\n> as I switch a lot across stable branches as well. It looks that you\n> are right on this one though, I have just rebuilt from scratch and I\n> don't see the failures anymore.\n\nCool, glad to know that it works.\n\nBest Regards\nNing\n\n> --\n> Michael\n\n\n",
"msg_date": "Wed, 31 Jul 2019 12:48:48 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 13:48:23 +0900, Michael Paquier wrote:\n> On Tue, Jul 30, 2019 at 09:11:44PM -0700, Andres Freund wrote:\n> > Hm. I'm not really seing much of a point in making mkdir_p safe against\n> > all of this. What's the scenario for pg where this matters? I assume\n> > you're using it for somewhat different purposes, and that's why it is\n> > problematic for you?\n> \n> I don't see why it is a problem to make our APIs more stable if we\n> have ways to do so.\n\nWell, wanting to support additional use-cases often isn't free. There's\nadditional code for the new usecase, there's review & commit time,\nthere's additional test time, there's bug fixes for the new code etc.\n\nWe're not developing a general application support library...\n\nI don't really have a problem fixing this case if we think it's\nuseful. But I'm a bit bothered by all the \"fixes\" being submitted that\ndon't matter for PG itself. They do eat up resources.\n\n\nAnd sorry, adding in-backend threading to test testing mkdir_p doesn't\nmake me inclined to believe that this is all well considered. There's\nminor issues like us not supporting threads in the backend, pthread not\nbeing portable, and also it being entirely out of proportion to the\nissue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Jul 2019 22:19:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 10:19:45PM -0700, Andres Freund wrote:\n> I don't really have a problem fixing this case if we think it's\n> useful. But I'm a bit bothered by all the \"fixes\" being submitted that\n> don't matter for PG itself. They do eat up resources.\n\nSure. In this particular case, we can simplify at least one code path\nin the backend though for temporary path creation. Such cleanup rings\nlike a sufficient argument to me.\n\n> And sorry, adding in-backend threading to test testing mkdir_p doesn't\n> make me inclined to believe that this is all well considered. There's\n> minor issues like us not supporting threads in the backend, pthread not\n> being portable, and also it being entirely out of proportion to the\n> issue.\n\nAgreed on this one. The test case may be useful for the purpose of\ntesting the presented patches, but it does not have enough value to be\nmerged.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 14:31:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 1:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 30, 2019 at 10:19:45PM -0700, Andres Freund wrote:\n> > I don't really have a problem fixing this case if we think it's\n> > useful. But I'm a bit bothered by all the \"fixes\" being submitted that\n> > don't matter for PG itself. They do eat up resources.\n>\n> Sure. In this particular case, we can simplify at least one code path\n> in the backend though for temporary path creation. Such cleanup rings\n> like a sufficient argument to me.\n\nYes, in current postgres source code there are several wrappers of\nmkdir() that do similar jobs. If we could have a safe mkdir_p()\nimplementation then we could use it directly in all these wrappers, that\ncould save a lot of maintenance effort in the long run. I'm not saying\nthat our patches are enough to make it safe and reliable, and I agree\nthat any patches may introduce new bugs, but I think that having a safe\nand unified mkdir_p() is a good direction to go.\n\n>\n> > And sorry, adding in-backend threading to test testing mkdir_p doesn't\n> > make me inclined to believe that this is all well considered. There's\n> > minor issues like us not supporting threads in the backend, pthread not\n> > being portable, and also it being entirely out of proportion to the\n> > issue.\n>\n> Agreed on this one. The test case may be useful for the purpose of\n> testing the presented patches, but it does not have enough value to be\n> merged.\n\nYes, that's why we put the testing module in a separate patch from the\nfix, feel free to ignore it. In fact ourselves have concerns about it ;)\n\nBest Regards\nNing\n\n> --\n> Michael\n\n\n",
"msg_date": "Wed, 31 Jul 2019 14:02:03 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 02:02:03PM +0800, Ning Yu wrote:\n> Yes, in current postgres source code there are several wrappers of\n> mkdir() that do similar jobs. If we could have a safe mkdir_p()\n> implementation then we could use it directly in all these wrappers, that\n> could save a lot of maintenance effort in the long run. I'm not saying\n> that our patches are enough to make it safe and reliable, and I agree\n> that any patches may introduce new bugs, but I think that having a safe\n> and unified mkdir_p() is a good direction to go.\n\nThat's my impression as well. Please note that this does not involve\nan actual bug in Postgres and that this is rather invasive, so this\ndoes not really qualify for a back-patch. No objections to simplify\nthe backend on HEAD though. It would be good if you could actually\nregister a patch to the commit fest app [1] and also rework the patch\nset so as at least PathNameCreateTemporaryDir wins its simplifications\nfor the first problem (double-checking the other code paths would be\nnice as well). The EEXIST handling, and the confusion about EEXIST\nshowing for both a path and a file need some separate handling (not\nsure what to do on these parts yet).\n\n> Yes, that's why we put the testing module in a separate patch from the\n> fix, feel free to ignore it. In fact ourselves have concerns about it ;)\n\nI think that it is nice that you took the time to do so as you get\nyourself more familiar with the TAP infrastructure in the tree and\nprove your point. For this case, I would not have gone to do this\nmuch though ;p\n\n[1]: https://commitfest.postgresql.org/24/\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 15:21:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 2:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n> That's my impression as well. Please note that this does not involve\n> an actual bug in Postgres and that this is rather invasive, so this\n> does not really qualify for a back-patch. No objections to simplify\n> the backend on HEAD though. It would be good if you could actually\n> register a patch to the commit fest app [1] and also rework the patch\n> set so as at least PathNameCreateTemporaryDir wins its simplifications\n> for the first problem (double-checking the other code paths would be\n> nice as well). The EEXIST handling, and the confusion about EEXIST\n> showing for both a path and a file need some separate handling (not\n> sure what to do on these parts yet).\n\nThanks for the suggestion and information, we will rework the patch and\nregister it. The planned changes are: 1) remove the tests (cool!), 2)\nsimplify PathNameCreateTemporaryDir() and other callers. The EEXIST\nhandling will be put in a separate patch so depends on the reviews we\ncould accept or drop it easily.\n\nBest Regards\nNing\n\n\n",
"msg_date": "Thu, 1 Aug 2019 09:15:46 +0800",
"msg_from": "Ning Yu <nyu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Possible race condition in pg_mkdir_p()?"
}
] |
[
{
"msg_contents": "Hi All,\n\nIn the current code for ecpg, we can't use CALL statement to call\nstored procedures. The attached patch adds the support for it.\n\nWith the attached patch, we can now have the following SQL statement\nin ecpg application to call the stored procedures with IN or INOUT\nparams.\n\nEXEC SQL CALL SP1(:hv1, :hv2);\n\nAdditionally, we can also use indicator variables along with the\narguments of stored procedure with CALL statement like shown below:\n\nEXEC SQL CALL SP1(:hv1 :ind1, :hv2, :ind2);\n\nThe patch also adds some basic test-cases to verify if CALL statement\nin ecpg can be used to call stored procedures with different type of\nparameters.\n\nPlease have a look and let me know your thoughts.\n\nThank you.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Thu, 18 Jul 2019 16:38:54 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Support for CALL statement in ecpg"
},
{
"msg_contents": "Hello.\n\nAt Thu, 18 Jul 2019 16:38:54 +0530, Ashutosh Sharma <ashu.coek88@gmail.com> wrote in <CAE9k0PkKCsbZLurU5O5V3+c1F-ANKFoKpzpMUa6LQFP9+dcJFA@mail.gmail.com>\n> Hi All,\n> \n> In the current code for ecpg, we can't use CALL statement to call\n> stored procedures. The attached patch adds the support for it.\n> \n> With the attached patch, we can now have the following SQL statement\n> in ecpg application to call the stored procedures with IN or INOUT\n> params.\n> \n> EXEC SQL CALL SP1(:hv1, :hv2);\n> \n> Additionally, we can also use indicator variables along with the\n> arguments of stored procedure with CALL statement like shown below:\n> \n> EXEC SQL CALL SP1(:hv1 :ind1, :hv2, :ind2);\n> \n> The patch also adds some basic test-cases to verify if CALL statement\n> in ecpg can be used to call stored procedures with different type of\n> parameters.\n> \n> Please have a look and let me know your thoughts.\n> \n> Thank you.\n\n+ECPG: CallStmtCALLfunc_application\n\n Even though it is the default behavior, but as a written rule\n this needs the postfix \"block\".\n\n+ $$ = cat_str(2,mm_strdup(\"call\"),$2);\n\nLet's have proper spacing.\n\n+ * Copy input arguments to the result arguments list so that all the\n+ * host variables gets treated as INOUT params.\n\nThis fails for the following usage:\n\n-- define procedure\ncreate procedure ptest2 (in x int, inout y int) language plpgsql as $$\nbegin\n y := y + x;\nend;\n$$;\n\n-- in .pgc\n14: a = 3;\n15: r = 5;\n16: EXEC SQL call ptest2(:a, :r);\n21: printf(\"ret = %d, %d\\n\", a, r);\n\n\nThis complains like this:\n\n> SQL error: too many arguments on line 16\n> ret = 8, 5;\n\nThe result should be \"3, 8\". This is because the patch requests\ntwo return but actually retruned just one.\n\nI'm not sure how to know that previously on ecpg. Might need to\nlet users append INTO <vars> clause explicitly.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 19 Jul 2019 12:03:08 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for CALL statement in ecpg"
},
{
"msg_contents": "Hi,\n\nThanks for the review. Please find my comments in-line.\n\nOn Fri, Jul 19, 2019 at 8:33 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hello.\n>\n>\n> +ECPG: CallStmtCALLfunc_application\n>\n> Even though it is the default behavior, but as a written rule\n> this needs the postfix \"block\".\n>\n\nDone.\n\n> + $$ = cat_str(2,mm_strdup(\"call\"),$2);\n>\n> Let's have proper spacing.\n>\n> + * Copy input arguments to the result arguments list so that all the\n> + * host variables gets treated as INOUT params.\n>\n\nI've removed above comments so this is no more valid.\n\n> This fails for the following usage:\n>\n> -- define procedure\n> create procedure ptest2 (in x int, inout y int) language plpgsql as $$\n> begin\n> y := y + x;\n> end;\n> $$;\n>\n> -- in .pgc\n> 14: a = 3;\n> 15: r = 5;\n> 16: EXEC SQL call ptest2(:a, :r);\n> 21: printf(\"ret = %d, %d\\n\", a, r);\n>\n>\n> This complains like this:\n>\n> > SQL error: too many arguments on line 16\n> > ret = 8, 5;\n>\n> The result should be \"3, 8\". This is because the patch requests\n> two return but actually retruned just one.\n>\n> I'm not sure how to know that previously on ecpg. Might need to\n> let users append INTO <vars> clause explicitly.\n>\n\nAs the ecpg connector is not aware of the param types of the procedure\nthat it is calling, it becomes the responsibility of end users to\nensure that only those many out variables gets created by ecpg as the\nnumber of fields in the tuple returned by the server and for that, as\nyou rightly said they must use the INTO clause with CALL statement in\necpg. 
Considering this approach, now with the attached v2 patch, the\nCALL statement in ecpg application would be like this:\n\nEXEC SQL CALL SP1(:hv1, :hv2) INTO :ret1, :ret2;\n\nEXEC SQL CALL SP1(:hv1, :hv2) INTO :ret1 :ind1, :ret2 :ind2;\n\nIf the INTO clause is not used with the CALL statement, the\necpg compiler would fail with a parse error: \"INTO clause is required\nwith CALL statement\"\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Fri, 19 Jul 2019 16:42:12 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for CALL statement in ecpg"
},
{
"msg_contents": "I don't find this patch in any commit fest. Seems like a good addition.\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 09:36:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for CALL statement in ecpg"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 1:06 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> I don't find this patch in any commit fest. Seems like a good addition.\n>\n\nThanks for the consideration. Will add an entry for it in the commit fest.\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nOn Tue, Sep 17, 2019 at 1:06 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:I don't find this patch in any commit fest. Seems like a good addition.Thanks for the consideration. Will add an entry for it in the commit fest. -- With Regards,Ashutosh SharmaEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Tue, 17 Sep 2019 17:05:59 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for CALL statement in ecpg"
}
] |
[
{
"msg_contents": "pg_upgrade from 9.6 fails if old cluster had non-standard ACL\non pg_catalog functions that have changed between versions,\nfor example pg_stop_backup(boolean).\n\nError:\n\npg_restore: creating ACL \"pg_catalog.FUNCTION \"pg_stop_backup\"()\"\npg_restore: creating ACL \"pg_catalog.FUNCTION \n\"pg_stop_backup\"(\"exclusive\" boolean, OUT \"lsn\" \"pg_lsn\", OUT \n\"labelfile\" \"text\", OUT \"spcmapfile\" \"text\")\"\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 2169; 0 0 ACL FUNCTION \n\"pg_stop_backup\"(\"exclusive\" boolean, OUT \"lsn\" \"pg_lsn\", OUT \n\"labelfile\" \"text\", OUT \"spcmapfile\" \"text\") anastasia\npg_restore: [archiver (db)] could not execute query: ERROR: function \npg_catalog.pg_stop_backup(boolean) does not exist\n Command was: GRANT ALL ON FUNCTION \n\"pg_catalog\".\"pg_stop_backup\"(\"exclusive\" boolean, OUT \"lsn\" \"pg_lsn\", \nOUT \"labelfile\" \"text\", OUT \"spcmapfile\" \"text\") TO \"backup\";\n\nSteps to reproduce:\n1) create a database with pg9.6\n2) create a user and change grants on pg_stop_backup(boolean):\nCREATE ROLE backup WITH LOGIN;\nGRANT USAGE ON SCHEMA pg_catalog TO backup;\nGRANT EXECUTE ON FUNCTION pg_stop_backup() TO backup;\nGRANT EXECUTE ON FUNCTION pg_stop_backup(boolean) TO backup;\n3) perform pg_upgrade to v10 (or any version above)\n\nThe problem exists since we added to pg_dump support of ACL changes of\npg_catalog functions in commit 23f34fa4b.\n\nI think this is a bug since it unpredictably affects user experience, so \nI propose to backpatch the fix.\nScript to reproduce the problem and the patch to fix it (credit to \nArthur Zakirov) are attached.\n\nCurrent patch contains a flag for pg_dump --change-old-names to enforce \ncorrect behavior.\nI wonder, if we can make it default behavior for pg_upgrade?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 18 Jul 2019 18:53:12 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:\n> pg_upgrade from 9.6 fails if old cluster had non-standard ACL\n> on pg_catalog functions that have changed between versions,\n> for example pg_stop_backup(boolean).\n> \n> Error:\n> \n> pg_restore: creating ACL \"pg_catalog.FUNCTION \"pg_stop_backup\"()\"\n> pg_restore: creating ACL \"pg_catalog.FUNCTION \"pg_stop_backup\"(\"exclusive\"\n> boolean, OUT \"lsn\" \"pg_lsn\", OUT \"labelfile\" \"text\", OUT \"spcmapfile\"\n> \"text\")\"\n> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n> pg_restore: [archiver (db)] Error from TOC entry 2169; 0 0 ACL FUNCTION\n> \"pg_stop_backup\"(\"exclusive\" boolean, OUT \"lsn\" \"pg_lsn\", OUT \"labelfile\"\n> \"text\", OUT \"spcmapfile\" \"text\") anastasia\n> pg_restore: [archiver (db)] could not execute query: ERROR: function\n> pg_catalog.pg_stop_backup(boolean) does not exist\n> ��� Command was: GRANT ALL ON FUNCTION\n> \"pg_catalog\".\"pg_stop_backup\"(\"exclusive\" boolean, OUT \"lsn\" \"pg_lsn\", OUT\n> \"labelfile\" \"text\", OUT \"spcmapfile\" \"text\") TO \"backup\";\n> \n> Steps to reproduce:\n> 1) create a database with pg9.6\n> 2) create a user and change grants on pg_stop_backup(boolean):\n> CREATE ROLE backup WITH LOGIN;\n> GRANT USAGE ON SCHEMA pg_catalog TO backup;\n> GRANT EXECUTE ON FUNCTION pg_stop_backup() TO backup;\n> GRANT EXECUTE ON FUNCTION pg_stop_backup(boolean) TO backup;\n> 3) perform pg_upgrade to v10 (or any version above)\n> \n> The problem exists since we added to pg_dump support of ACL changes of\n> pg_catalog functions in commit 23f34fa4b.\n> \n> I think this is a bug since it unpredictably affects user experience, so I\n> propose to backpatch the fix.\n> Script to reproduce the problem and the patch to fix it (credit to Arthur\n> Zakirov) are attached.\n\nUh, wouldn't this affect any default-installed function where the\npermission are modified? 
Is fixing only a few functions really helpful?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Sat, 27 Jul 2019 20:51:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:\n>> pg_upgrade from 9.6 fails if old cluster had non-standard ACL\n>> on pg_catalog functions that have changed between versions,\n>> for example pg_stop_backup(boolean).\n\n> Uh, wouldn't this affect any default-installed function where the\n> permission are modified? Is fixing only a few functions really helpful?\n\nNo, it's just functions whose signatures have changed enough that\na GRANT won't find them. I think the idea is that the set of\npotentially-affected functions is determinate. I have to say that\nthe proposed patch seems like a complete kluge, though. For one\nthing we'd have to maintain the list of affected functions in each\nfuture release, and I have no faith in our remembering to do that.\n\nIt's also fair to question whether pg_upgrade should even try to\ncope with such cases. If the function has changed signature,\nit might well be that it's also changed behavior enough so that\nany previously-made grants need reconsideration. (Maybe we should\njust suppress the old grant rather than transferring it.)\n\nStill, this does seem like a gap in the pg_init_privs mechanism.\nI wonder if Stephen has any thoughts about what ought to happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2019 21:33:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:\n> >> pg_upgrade from 9.6 fails if old cluster had non-standard ACL\n> >> on pg_catalog functions that have changed between versions,\n> >> for example pg_stop_backup(boolean).\n> \n> > Uh, wouldn't this affect any default-installed function where the\n> > permission are modified? Is fixing only a few functions really helpful?\n> \n> No, it's just functions whose signatures have changed enough that\n> a GRANT won't find them. I think the idea is that the set of\n> potentially-affected functions is determinate. I have to say that\n> the proposed patch seems like a complete kluge, though. For one\n> thing we'd have to maintain the list of affected functions in each\n> future release, and I have no faith in our remembering to do that.\n\nWell, we aren't likely to do that ourselves, no, but perhaps we could\nmanage it with some prodding by having the buildfarm check for such\ncases, not unlike how library maintainers check the ABI between versions\nof the library they manage. Extension authors also deal with these\nkinds of changes routinely when writing the upgrade scripts to go\nbetween versions of their extension. I'm not convinced that this is a\ngreat approach to go down either, to be clear. When going across major\nversions, making people update their systems/code and re-test is\ntypically entirely reasonable to me.\n\n> It's also fair to question whether pg_upgrade should even try to\n> cope with such cases. If the function has changed signature,\n> it might well be that it's also changed behavior enough so that\n> any previously-made grants need reconsideration. 
(Maybe we should\n> just suppress the old grant rather than transferring it.)\n\nSuppressing the GRANT strikes me as pretty reasonable as an approach but\nwouldn't that require us to similarly track what's changed between\nmajor versions..? Unless we arrange to ignore the GRANT failing, but\nthat seems like it would involve a fair bit of hacking around in pg_dump\nto have some option to ignore certain GRANTs failing. Did you have some\nother idea about how to suppress the old GRANT?\n\nA way to make things work for users while suppressing the GRANTS would\nbe to add a default role for things like file-level-backup, which would\nbe allowed to execute file-level-backup related functions, presumably\neven if we make changes to exactly what those function signatures are,\nand then encourage users to GRANT that role to the role that's allowed\nto log in and run the file-level backup. Obviously we wouldn't be doing\nthat in the back-branches, but we could, moving forward.\n\n> Still, this does seem like a gap in the pg_init_privs mechanism.\n> I wonder if Stephen has any thoughts about what ought to happen.\n\nSo, in an interesting sort of way, we have a way to deal with this\nproblem when it comes to *extensions* and I suppose that's why we\nhaven't seen it there- namely the upgrade script, which can decide if it\nwants to drop an object and recreate it, or if it wants to do a\ncreate-or-replace, which would preserve the privileges (though the API\nhas to stay the same, so that isn't exactly the same) and avoid dropping\ndependent objects.\n\nUnfortunately, we don't have any good way today to add an optional\nargument to a function while preserving the privileges on it, which\nwould make a case like this one (and others where you'd prefer to not\ndrop/recreate the function due to dependencies) work, for extensions.\n\nSuppressing the GRANT also seems reasonable for the case of objects\nwhich have been renamed- clearly whatever is using those functions is\ngoing to have 
to be modified to deal with the new name of the function,\nrequiring that the GRANT be re-issued doesn't seem like it's that much\nmore to ask of users. On the other hand, properly written tools that\ncheck the version of PG and use the right function names could possibly\n\"just work\" following a major version upgrade, if the privilege was\nbrought across to the new major version correctly.\n\nWe also don't want to mistakenly GRANT users more access than they\nshould have though- if pg_stop_backup() one day grows an\noptional argument to run some server-side script, I don't think we'd\nwant to necessarily just give access to that ability to roles who,\ntoday, can execute the current pg_stop_backup() function. Of course, if\nwe added such a capability, hopefully we would do so in a way that less\nprivileged roles could continue to use the existing capability without\nhaving access to run such a server-side script.\n\nI also don't think that the current patch is actually sufficient to deal\nwith all the changes we've made between the versions- what about column\nnames on catalog tables/views that were removed, or changed/renamed..?\n\nIn an ideal world, it seems like we'd make a judgement call and arrange\nto pull the privileges across when we can do so without granting the\nuser privileges beyond those that were intended, and otherwise we'd\nsuppress the GRANT to avoid getting an error. I'm not sure what a good\nway is to go about either figuring out a way to pull the privileges\nacross, or to suppress the GRANTs when we need to (or always), would be\nthough. Neither seems easy to solve in a clean way.\n\nCertainly open to suggestions.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 29 Jul 2019 11:37:05 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "29.07.2019 18:37, Stephen Frost wrote:\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> On Thu, Jul 18, 2019 at 06:53:12PM +0300, Anastasia Lubennikova wrote:\n>>>> pg_upgrade from 9.6 fails if old cluster had non-standard ACL\n>>>> on pg_catalog functions that have changed between versions,\n>>>> for example pg_stop_backup(boolean).\n>>> Uh, wouldn't this affect any default-installed function where the\n>>> permission are modified? Is fixing only a few functions really helpful?\n>> No, it's just functions whose signatures have changed enough that\n>> a GRANT won't find them. I think the idea is that the set of\n>> potentially-affected functions is determinate. I have to say that\n>> the proposed patch seems like a complete kluge, though. For one\n>> thing we'd have to maintain the list of affected functions in each\n>> future release, and I have no faith in our remembering to do that.\n> Well, we aren't likely to do that ourselves, no, but perhaps we could\n> manage it with some prodding by having the buildfarm check for such\n> cases, not unlike how library maintainers check the ABI between versions\n> of the library they manage. Extension authors also deal with these\n> kinds of changes routinely when writing the upgrade scripts to go\n> between versions of their extension. I'm not convinced that this is a\n> great approach to go down either, to be clear. When going across major\n> versions, making people update their systems/code and re-test is\n> typically entirely reasonable to me.\nWhatever we choose to do, we need to keep a list of changed functions. I \ndon't\nthink that it will add too much extra work to maintaining other catalog \nchanges\nsuch as adding or renaming columns.\nWhat's more, we must mention changed functions in migration release \nnotes. 
I've\nchecked the documentation [1] and found out that function API changes are not\ndescribed properly.\n\nI think it is an important omission, so I attached a patch for the\ndocumentation.\nNot quite sure how many users have already migrated to version 10; still, I\nbelieve it will help many others.\n\n> Suppressing the GRANT also seems reasonable for the case of objects\n> which have been renamed- clearly whatever is using those functions is\n> going to have to be modified to deal with the new name of the function,\n> requiring that the GRANT be re-issued doesn't seem like it's that much\n> more to ask of users. On the other hand, properly written tools that\n> check the version of PG and use the right function names could possibly\n> \"just work\" following a major version upgrade, if the privilege was\n> brought across to the new major version correctly.\nThat's exactly the case.\n\n> We also don't want to mistakenly GRANT users more access than they\n> should have though- if pg_stop_backup() one day grows an\n> optional argument to run some server-side script, I don't think we'd\n> want to necessarily just give access to that ability to roles who,\n> today, can execute the current pg_stop_backup() function. Of course, if\n> we added such a capability, hopefully we would do so in a way that less\n> privileged roles could continue to use the existing capability without\n> having access to run such a server-side script.\n>\n> I also don't think that the current patch is actually sufficient to deal\n> with all the changes we've made between the versions- what about column\n> names on catalog tables/views that were removed, or changed/renamed..?\nI don't quite get the problem you describe here. 
As far as I understand, various\nchanges of catalog tables and views are already handled correctly in\npg_upgrade.\n\n> In an ideal world, it seems like we'd make a judgement call and arrange\n> to pull the privileges across when we can do so without granting the\n> user privileges beyond those that were intended, and otherwise we'd\n> suppress the GRANT to avoid getting an error. I'm not sure what a good\n> way is to go about either figuring out a way to pull the privileges\n> across, or to suppress the GRANTs when we need to (or always), would be\n> though. Neither seems easy to solve in a clean way.\n>\n> Certainly open to suggestions.\nBased on our initial bug report, I would vote against suppressing the\nGRANTS, because it leads to an unexpected failure in case a user has a\nspecial role for use in a third-party utility such as a backup tool,\nwhich already took care of internal API changes.\n\nStill, I agree with your arguments about the possibility of providing\nmore grants than expected. Ideally, we would not change the behaviour of\nexisting functions that much, but in the real world it may happen.\n\nMaybe, as a compromise, we can reset grants to default for all changed\nfunctions and also generate a script that will allow a user to preserve\nprivileges of the old cluster, by analogy with the analyze_new_cluster\nscript.\nWhat do you think?\n\n[1] https://www.postgresql.org/docs/10/release-10.html\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 13 Aug 2019 19:04:42 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
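The compromise proposed above — reset changed functions to default ACLs during the upgrade, and emit a script the user can review to re-apply their old privileges — can be sketched in a few lines. The sketch below is a hypothetical illustration, not pg_upgrade code: the aclitem text format ("grantee=privileges/grantor", where an empty grantee means PUBLIC and `X` means EXECUTE) and the sample entries are taken from this thread, while the function names and helper names are made up for the example.

```python
# Sketch: turn saved non-default ACL entries from the old cluster into a
# GRANT script for the new cluster, by analogy with analyze_new_cluster.

# privilege letters relevant to functions (X = EXECUTE)
PRIV_LETTERS = {"X": "EXECUTE"}

def aclitem_to_grants(func_signature, aclitem):
    """Turn one aclitem string ("grantee=privs/grantor") into GRANTs."""
    grantee, _, rest = aclitem.partition("=")
    privs, _, _grantor = rest.partition("/")
    grantee = grantee or "PUBLIC"          # empty grantee means PUBLIC
    stmts = []
    for letter in privs:
        keyword = PRIV_LETTERS.get(letter)
        if keyword:
            stmts.append(
                f"GRANT {keyword} ON FUNCTION {func_signature} TO {grantee};"
            )
    return stmts

def build_restore_script(acls_by_function):
    """Emit a reviewable SQL script restoring the old cluster's grants."""
    lines = ["-- review before running on the new cluster"]
    for func, items in sorted(acls_by_function.items()):
        for item in items:
            lines.extend(aclitem_to_grants(func, item))
    return "\n".join(lines)

if __name__ == "__main__":
    # ACL taken from the thread: {anastasia=X/anastasia,backup=X/anastasia}
    acls = {"pg_catalog.pg_stop_backup()": ["anastasia=X/anastasia",
                                            "backup=X/anastasia"]}
    print(build_restore_script(acls))
```

The real pg_upgrade would need to map every aclitem letter to the matching keyword and quote identifiers properly; this only shows the shape of the idea.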
{
"msg_contents": "On Tue, Aug 13, 2019 at 07:04:42PM +0300, Anastasia Lubennikova wrote:\n> > In an ideal world, it seems like we'd make a judgement call and arrange\n> > to pull the privileges across when we can do so without granting the\n> > user privileges beyond those that were intended, and otherwise we'd\n> > suppress the GRANT to avoid getting an error. I'm not sure what a good\n> > way is to go about either figuring out a way to pull the privileges\n> > across, or to suppress the GRANTs when we need to (or always), would be\n> > though. Neither seems easy to solve in a clean way.\n> > \n> > Certainly open to suggestions.\n> Based on our initial bug report, I would vote against suppressing the\n> GRANTS,\n> because it leads to an unexpected failure in case a user has a special role\n> for\n> use in a third-party utility such as a backup tool, which already took care\n> of\n> internal API changes.\n> \n> Still I agree with your arguments about possibility of providing more grants\n> than expected. Ideally, we do not change behaviour of existing functions\n> that\n> much, but in real-world it may happen.\n> \n> Maybe, as a compromise, we can reset grants to default for all changed\n> functions\n> and also generate a script that will allow a user to preserve privileges of\n> the\n> old cluster by analogy with analyze_new_cluster script.\n> What do you think?\n\nI agree pg_upgrade should work without user correction as much as\npossible. However, as you can see, it can fail when user objects\nreference system table objects that have changed between major releases.\n\nAs much as it would be nice if the release notes covered all that, and\nwe updated pg_upgrade to somehow handle them, it just isn't realistic. 
\nAs we can see here, the problems often take years to show up, and even\nthen there were probably many other people who had the problem who never\nreported it to us.\n\nI think a realistic approach is to come up with a list of all the user\nbehaviors that can cause pg_upgrade to break (by reviewing previous\npg_upgrade bug reports), and then add code to pg_upgrade to detect them\nand either fix them or report them in --check mode.\n\nIn summary, I am saying that the odds that patch authors, committers,\nrelease note writers, and pg_upgrade maintainers are going to form a\nconsistent work flow that catches all these changes is unrealistic ---\nour best bet is to create something in the pg_upgrade code to detects\nthis. pg_upgrade already connects to the old and new cluster, so\ntechnically it can perform system table modification checks itself.\n\nThe only positive is that when pg_upgrade does fail, at least we have a\nsystem that clearly points to the cause, but unfortunately usually at\nrun-time, not at --check time.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 13 Aug 2019 12:52:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Tue, Aug 13, 2019 at 07:04:42PM +0300, Anastasia Lubennikova wrote:\n> > Maybe, as a compromise, we can reset grants to default for all changed\n> > functions\n> > and also generate a script that will allow a user to preserve privileges of\n> > the\n> > old cluster by analogy with analyze_new_cluster script.\n> > What do you think?\n> \n> I agree pg_upgrade should work without user correction as much as\n> possible. However, as you can see, it can fail when user objects\n> reference system table objects that have changed between major releases.\n\nRight.\n\n> As much as it would be nice if the release notes covered all that, and\n> we updated pg_upgrade to somehow handle them, it just isn't realistic. \n> As we can see here, the problems often take years to show up, and even\n> then there were probably many other people who had the problem who never\n> reported it to us.\n\nYeah, the possible changes when you think about column-level privileges\nas well really gets to be quite large..\n\nThis is why my thinking is that we should come up with additional\ndefault roles, which aren't tied to specific catalog structures but\ninstead are for a more general set of capabilities which we manage and\nusers can either decide to use, or not. 
If they decide to work with the\nindividual functions then they have to manage the upgrade process if and\nwhen we make changes to those functions.\n\n> I think a realistic approach is to come up with a list of all the user\n> behaviors that can cause pg_upgrade to break (by reviewing previous\n> pg_upgrade bug reports), and then add code to pg_upgrade to detect them\n> and either fix them or report them in --check mode.\n\nIn this case, we could, at least conceptually, perform a comparison\nbetween the different major versions and then check for any non-default\nprivileges for any of the objects changed and then report on those in\n--check mode with a recommendation to revert to the default privileges\nin the old cluster before running pg_upgrade, and then apply whatever\nprivileges are desired in the new cluster after the upgrade completes.\n\n> In summary, I am saying that the odds that patch authors, committers,\n> release note writers, and pg_upgrade maintainers are going to form a\n> consistent work flow that catches all these changes is unrealistic ---\n> our best bet is to create something in the pg_upgrade code to detects\n> this. pg_upgrade already connects to the old and new cluster, so\n> technically it can perform system table modification checks itself.\n\nIt'd be pretty neat if pg_upgrade could connect to the old and new\nclusters concurrently and then perform a generic catalog comparison\nbetween them and identify all objects which have been changed and\ndetermine if there's any non-default ACLs or dependencies on the catalog\nobjects which are different between the clusters. 
That seems like an\nawful lot of work though, and I'm not sure there's really any need,\ngiven that we don't change the catalog for a given major version- we\ncould just generate the list using some queries whenever we release a\nnew major version and update pg_upgrade with it.\n\n> The only positive is that when pg_upgrade does fail, at least we have a\n> system that clearly points to the cause, but unfortunately usually at\n> run-time, not at --check time.\n\nGetting it to be at check time would certainly be a great improvement.\n\nSolving this in pg_upgrade does seem like it's probably the better\napproach rather than trying to do it in pg_dump. Unfortunately, that\nlikely means that all we can do is have pg_upgrade point out to the user\nwhen something will fail, with recommendations on how to address it, but\nthat's also something users are likely used to and willing to accept,\nand puts the onus on them to consider their ACL decisions when we're\nmaking catalog changes, and it keeps these issues out of pg_dump.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 13 Aug 2019 20:28:12 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 08:28:12PM -0400, Stephen Frost wrote:\n> Getting it to be at check time would certainly be a great improvement.\n> \n> Solving this in pg_upgrade does seem like it's probably the better\n> approach rather than trying to do it in pg_dump. Unfortunately, that\n> likely means that all we can do is have pg_upgrade point out to the user\n> when something will fail, with recommendations on how to address it, but\n> that's also something users are likely used to and willing to accept,\n> and puts the onus on them to consider their ACL decisions when we're\n> making catalog changes, and it keeps these issues out of pg_dump.\n\nYeah, I think we just need to bite the bullet and create some\ninfrastructure to help folks solve the problem.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 13 Aug 2019 22:10:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "14.08.2019 3:28, Stephen Frost wrote:\n> * Bruce Momjian (bruce@momjian.us) wrote:\n>\n>> As much as it would be nice if the release notes covered all that, and\n>> we updated pg_upgrade to somehow handle them, it just isn't realistic.\n>> As we can see here, the problems often take years to show up, and even\n>> then there were probably many other people who had the problem who never\n>> reported it to us.\n> Yeah, the possible changes when you think about column-level privileges\n> as well really gets to be quite large..\n>\n> This is why my thinking is that we should come up with additional\n> default roles, which aren't tied to specific catalog structures but\n> instead are for a more general set of capabilities which we manage and\n> users can either decide to use, or not. If they decide to work with the\n> individual functions then they have to manage the upgrade process if and\n> when we make changes to those functions.\n\nIdea of having special roles looks good to me, though, I don't see\nhow to define what grants are needed for each role. Let's say, we\ndefine role \"backupuser\" that obviously must have grants on \npg_start_backup()\nand pg_stop_backup(). Should it also have access to pg_is_in_recovery()\nor for example version()?\n\n> It'd be pretty neat if pg_upgrade could connect to the old and new\n> clusters concurrently and then perform a generic catalog comparison\n> between them and identify all objects which have been changed and\n> determine if there's any non-default ACLs or dependencies on the catalog\n> objects which are different between the clusters. 
That seems like an\n> awful lot of work though, and I'm not sure there's really any need,\n> given that we don't change the catalog for a given major version- we\n> could just generate the list using some queries whenever we release a\n> new major version and update pg_upgrade with it.\n>\n>> The only positive is that when pg_upgrade does fail, at least we have a\n>> system that clearly points to the cause, but unfortunately usually at\n>> run-time, not at --check time.\n> Getting it to be at check time would certainly be a great improvement.\n>\n> Solving this in pg_upgrade does seem like it's probably the better\n> approach rather than trying to do it in pg_dump. Unfortunately, that\n> likely means that all we can do is have pg_upgrade point out to the user\n> when something will fail, with recommendations on how to address it, but\n> that's also something users are likely used to and willing to accept,\n> and puts the onus on them to consider their ACL decisions when we're\n> making catalog changes, and it keeps these issues out of pg_dump.\n\n\nI wrote a prototype to check API and ACL compatibility (see attachment).\nIn the current implementation it fetches the list of system procedures \nfor both old and new clusters\nand reports deleted functions or ACL changes during pg_upgrade check:\n\n\nPerforming Consistency Checks\n-----------------------------\n...\nChecking for system functions API compatibility\ndbname postgres : check procsig is equal pg_stop_backup(), procacl not \nequal {anastasia=X/anastasia,backup=X/anastasia} vs {anastasia=X/anastasia}\ndbname postgres : procedure pg_stop_backup(exclusive boolean, OUT lsn \npg_lsn, OUT labelfile text, OUT spcmapfile text) doesn't exist in \nnew_cluster\ndbname postgres : procedure pg_switch_xlog() doesn't exist in new_cluster\ndbname postgres : procedure pg_xlog_replay_pause() doesn't exist in \nnew_cluster\ndbname postgres : procedure pg_xlog_replay_resume() doesn't exist in \nnew_cluster\n...\n\nI think it's 
a good first step in the right direction.\nNow I have questions about implementation details:\n\n1) How exactly should we report this incompatibility to a user?\nI think it's fine to leave the warnings and also write some hint for the\nuser by analogy with other checks:\n\"Reset ACL on the problem functions to default in the old cluster to\ncontinue\"\n\nIs that enough?\n\n2) This approach can be extended to other catalog modifications you\nmentioned above.\nI don't see what else can break pg_upgrade in the same way. However, I\ndon't mind implementing more checks while I work on this issue, if you\ncan point them out.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 20 Aug 2019 16:38:18 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
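The prototype's report quoted above boils down to a small comparison routine. The sketch below is a stand-in for what the pg_upgrade check pass would do: the real prototype queries the system catalogs of both clusters, while here plain dicts mapping "procedure signature" to an ACL string (with `None` meaning the default ACL) play that role. The signatures and ACL strings are copied from the sample output in the thread; the function name and dict shapes are hypothetical.

```python
# Sketch of the consistency check described above: warn about system
# functions that carry a non-default ACL in the old cluster but are
# missing (or differently configured) in the new one.

def check_function_acls(old_funcs, new_funcs):
    """old_funcs/new_funcs map "proc signature" -> acl string or None.

    Returns human-readable warnings, mimicking the prototype's output.
    """
    warnings = []
    for sig, acl in sorted(old_funcs.items()):
        if acl is None:            # default ACL: nothing to carry over
            continue
        if sig not in new_funcs:
            warnings.append(f"procedure {sig} doesn't exist in new_cluster")
        elif new_funcs[sig] != acl:
            warnings.append(f"check procsig is equal {sig}, "
                            f"procacl not equal {acl} vs {new_funcs[sig]}")
    return warnings

if __name__ == "__main__":
    # entries taken from the thread's sample --check output
    old = {"pg_stop_backup()": "{anastasia=X/anastasia,backup=X/anastasia}",
           "pg_switch_xlog()": "{anastasia=X/anastasia,backup=X/anastasia}"}
    new = {"pg_stop_backup()": "{anastasia=X/anastasia}",
           "pg_switch_wal()": "{anastasia=X/anastasia}"}
    for warning in check_function_acls(old, new):
        print(warning)
```

Running the demo reports the ACL mismatch on pg_stop_backup() and that pg_switch_xlog() doesn't exist in the new cluster, matching the shape of the prototype's warnings.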
{
"msg_contents": "Greetings,\n\n* Anastasia Lubennikova (a.lubennikova@postgrespro.ru) wrote:\n> 14.08.2019 3:28, Stephen Frost wrote:\n> >* Bruce Momjian (bruce@momjian.us) wrote:\n> >>As much as it would be nice if the release notes covered all that, and\n> >>we updated pg_upgrade to somehow handle them, it just isn't realistic.\n> >>As we can see here, the problems often take years to show up, and even\n> >>then there were probably many other people who had the problem who never\n> >>reported it to us.\n> >Yeah, the possible changes when you think about column-level privileges\n> >as well really gets to be quite large..\n> >\n> >This is why my thinking is that we should come up with additional\n> >default roles, which aren't tied to specific catalog structures but\n> >instead are for a more general set of capabilities which we manage and\n> >users can either decide to use, or not. If they decide to work with the\n> >individual functions then they have to manage the upgrade process if and\n> >when we make changes to those functions.\n> \n> Idea of having special roles looks good to me, though, I don't see\n> how to define what grants are needed for each role. Let's say, we\n> define role \"backupuser\" that obviously must have grants on\n> pg_start_backup()\n> and pg_stop_backup(). Should it also have access to pg_is_in_recovery()\n> or for example version()?\n\npg_is_in_recovery() and version() are already allowed to be executed by\npublic, and I don't see any particular reason to change that, so I don't\nbelieve those would need to be explicitly GRANT'd to this new role.\n\nI would think the specific set would be those listed under:\n\nhttps://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-BACKUP\n\nwhich currently require superuser access.\n\nThis isn't a new idea, btw, there was a great deal of discussion three\nyears ago around all of this. 
Particularly relevant is this:\n\nhttps://www.postgresql.org/message-id/20160104175516.GC3685%40tamriel.snowman.net\n\n> >It'd be pretty neat if pg_upgrade could connect to the old and new\n> >clusters concurrently and then perform a generic catalog comparison\n> >between them and identify all objects which have been changed and\n> >determine if there's any non-default ACLs or dependencies on the catalog\n> >objects which are different between the clusters. That seems like an\n> >awful lot of work though, and I'm not sure there's really any need,\n> >given that we don't change the catalog for a given major version- we\n> >could just generate the list using some queries whenever we release a\n> >new major version and update pg_upgrade with it.\n> >\n> >>The only positive is that when pg_upgrade does fail, at least we have a\n> >>system that clearly points to the cause, but unfortunately usually at\n> >>run-time, not at --check time.\n> >Getting it to be at check time would certainly be a great improvement.\n> >\n> >Solving this in pg_upgrade does seem like it's probably the better\n> >approach rather than trying to do it in pg_dump. 
Unfortunately, that\n> >likely means that all we can do is have pg_upgrade point out to the user\n> >when something will fail, with recommendations on how to address it, but\n> >that's also something users are likely used to and willing to accept,\n> >and puts the onus on them to consider their ACL decisions when we're\n> >making catalog changes, and it keeps these issues out of pg_dump.\n> \n> I wrote a prototype to check API and ACL compatibility (see attachment).\n> In the current implementation it fetches the list of system procedures for\n> both old and new clusters\n> and reports deleted functions or ACL changes during pg_upgrade check:\n> \n> \n> Performing Consistency Checks\n> -----------------------------\n> ...\n> Checking for system functions API compatibility\n> dbname postgres : check procsig is equal pg_stop_backup(), procacl not equal\n> {anastasia=X/anastasia,backup=X/anastasia} vs {anastasia=X/anastasia}\n> dbname postgres : procedure pg_stop_backup(exclusive boolean, OUT lsn\n> pg_lsn, OUT labelfile text, OUT spcmapfile text) doesn't exist in\n> new_cluster\n> dbname postgres : procedure pg_switch_xlog() doesn't exist in new_cluster\n> dbname postgres : procedure pg_xlog_replay_pause() doesn't exist in\n> new_cluster\n> dbname postgres : procedure pg_xlog_replay_resume() doesn't exist in\n> new_cluster\n> ...\n> \n> I think it's a good first step in the right direction.\n> Now I have questions about implementation details:\n> \n> 1) How exactly should we report this incompatibility to a user?\n> I think it's fine to leave the warnings and also write some hint for the\n> user by analogy with other checks.\n> \"Reset ACL on the problem functions to default in the old cluster to\n> continue\"\n> \n> Is it enough?\n\nNot really sure what else we could do there..? Did you have something\nelse in mind? 
We could possibly provide the specific commands to run,\nthat seems like about the only other thing we could possibly do.\n\n> 2) This approach can be extended to other catalog modifications, you\n> mentioned above.\n> I don't see what else can break pg_upgrade in the same way. However, I don't\n> mind\n> implementing more checks, while I work on this issue, if you can point on\n> them.\n\nI was thinking of, for example, column-level privileges on system\nrelations (tables or views) where that column was later removed, for\nexample. I do appreciate that such changes are relatively rare but they\ndo happen...\n\nWill try to take a look at the actual patch later today.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 20 Aug 2019 11:04:57 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Tue, Aug 20, 2019 at 04:38:18PM +0300, Anastasia Lubennikova wrote:\n> > Solving this in pg_upgrade does seem like it's probably the better\n> > approach rather than trying to do it in pg_dump. Unfortunately, that\n> > likely means that all we can do is have pg_upgrade point out to the user\n> > when something will fail, with recommendations on how to address it, but\n> > that's also something users are likely used to and willing to accept,\n> > and puts the onus on them to consider their ACL decisions when we're\n> > making catalog changes, and it keeps these issues out of pg_dump.\n> \n> \n> I wrote a prototype to check API and ACL compatibility (see attachment).\n> In the current implementation it fetches the list of system procedures for\n> both old and new clusters\n> and reports deleted functions or ACL changes during pg_upgrade check:\n> \n> \n> Performing Consistency Checks\n> -----------------------------\n> ...\n> Checking for system functions API compatibility\n> dbname postgres : check procsig is equal pg_stop_backup(), procacl not equal\n> {anastasia=X/anastasia,backup=X/anastasia} vs {anastasia=X/anastasia}\n> dbname postgres : procedure pg_stop_backup(exclusive boolean, OUT lsn\n> pg_lsn, OUT labelfile text, OUT spcmapfile text) doesn't exist in\n> new_cluster\n> dbname postgres : procedure pg_switch_xlog() doesn't exist in new_cluster\n> dbname postgres : procedure pg_xlog_replay_pause() doesn't exist in\n> new_cluster\n> dbname postgres : procedure pg_xlog_replay_resume() doesn't exist in\n> new_cluster\n> ...\n> \n> I think it's a good first step in the right direction.\n> Now I have questions about implementation details:\n> \n> 1) How exactly should we report this incompatibility to a user?\n> I think it's fine to leave the warnings and also write some hint for the\n> user by analogy with other checks.\n> \"Reset ACL on the problem functions to default in the old cluster to\n> continue\"\n\nYes, I think it is good to at least 
throw an error during --check so\nthey don't have to find out during a live upgrade. Odds are it will\nrequire manual repair.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 21 Aug 2019 15:47:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Stephen,\n\nOn 2019-Aug-20, Stephen Frost wrote:\n\n> Will try to take a look at the actual patch later today.\n\nAny word on that review?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Sep 2019 18:23:00 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2019-Aug-21, Bruce Momjian wrote:\n\n> > 1) How exactly should we report this incompatibility to a user?\n> > I think it's fine to leave the warnings and also write some hint for the\n> > user by analogy with other checks.\n> > \"Reset ACL on the problem functions to default in the old cluster to\n> > continue\"\n>\n> Yes, I think it is good to at least throw an error during --check so\n> they don't have to find out during a live upgrade. Odds are it will\n> require manual repair.\n\nI'm not sure what you're proposing here ... are you saying that the user\nwould have to modify the source cluster before pg_upgrade accepts to\nrun? That sounds pretty catastrophic.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Sep 2019 18:25:38 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 06:25:38PM -0300, �lvaro Herrera wrote:\n> On 2019-Aug-21, Bruce Momjian wrote:\n> \n> > > 1) How exactly should we report this incompatibility to a user?\n> > > I think it's fine to leave the warnings and also write some hint for the\n> > > user by analogy with other checks.\n> > > \"Reset ACL on the problem functions to default in the old cluster to\n> > > continue\"\n> >\n> > Yes, I think it is good to at least throw an error during --check so\n> > they don't have to find out during a live upgrade. Odds are it will\n> > require manual repair.\n> \n> I'm not sure what you're proposing here ... are you saying that the user\n> would have to modify the source cluster before pg_upgrade accepts to\n> run? That sounds pretty catastrophic.\n\nWell, right now, pg_upgrade --check succeeds, but the upgrade fails. I\nam proposing, at a minimum, that pg_upgrade --check fails in such cases,\nwith a clear error message about how to fix it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:04:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2019-Sep-26, Bruce Momjian wrote:\n\n> On Wed, Sep 11, 2019 at 06:25:38PM -0300, �lvaro Herrera wrote:\n> > On 2019-Aug-21, Bruce Momjian wrote:\n> > \n> > > > 1) How exactly should we report this incompatibility to a user?\n> > > > I think it's fine to leave the warnings and also write some hint for the\n> > > > user by analogy with other checks.\n> > > > \"Reset ACL on the problem functions to default in the old cluster to\n> > > > continue\"\n> > >\n> > > Yes, I think it is good to at least throw an error during --check so\n> > > they don't have to find out during a live upgrade. Odds are it will\n> > > require manual repair.\n> > \n> > I'm not sure what you're proposing here ... are you saying that the user\n> > would have to modify the source cluster before pg_upgrade accepts to\n> > run? That sounds pretty catastrophic.\n> \n> Well, right now, pg_upgrade --check succeeds, but the upgrade fails. I\n> am proposing, at a minimum, that pg_upgrade --check fails in such cases,\n\nAgreed, that should be a minimum fix.\n\n> with a clear error message about how to fix it.\n\nSo the best solution being proposed is to reset the ACL to the default?\nSo we would be forcing the user to propagate the ACL change manually,\nrather than trying to make pg_upgrade propagate it automatically. I\nsuppose making pg_upgrade would be better, but I'm not sure to what\nextent that is a full solution.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 17:16:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 05:16:19PM -0300, Alvaro Herrera wrote:\n> On 2019-Sep-26, Bruce Momjian wrote:\n> > Well, right now, pg_upgrade --check succeeds, but the upgrade fails. I\n> > am proposing, at a minimum, that pg_upgrade --check fails in such cases,\n> \n> Agreed, that should be a minimum fix.\n\nYes.\n\n> > with a clear error message about how to fix it.\n> \n> So the best solution being proposed is to reset the ACL to the default?\n> So we would be forcing the user to propagate the ACL change manually,\n> rather than trying to make pg_upgrade propagate it automatically. I\n> suppose making pg_upgrade would be better, but I'm not sure to what\n> extent that is a full solution.\n\nMe neither, which is why I was proposing the minimum fix. We might not\nknow how to fix it in all case, but maybe we can detect all cases.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 26 Sep 2019 16:19:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 04:19:38PM -0400, Bruce Momjian wrote:\n> On Thu, Sep 26, 2019 at 05:16:19PM -0300, Alvaro Herrera wrote:\n>> On 2019-Sep-26, Bruce Momjian wrote:\n>>> Well, right now, pg_upgrade --check succeeds, but the upgrade fails. I\n>>> am proposing, at a minimum, that pg_upgrade --check fails in such cases,\n>> \n>> Agreed, that should be a minimum fix.\n> \n> Yes.\n\nAgreed as well here. At least the latest patch proposed has the merit\nto track automatically functions not existing anymore from the\nsource's version to the target's version, so patching --check offers a\ngood compromise. Bruce, are you planning to look more at the patch\nposted at [1]?\n\n[1]: https://www.postgresql.org/message-id/392ca335-068d-7bd3-0ad8-fdf0a45d95d4@postgrespro.ru\n--\nMichael",
"msg_date": "Fri, 27 Sep 2019 16:22:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 04:22:15PM +0900, Michael Paquier wrote:\n> On Thu, Sep 26, 2019 at 04:19:38PM -0400, Bruce Momjian wrote:\n> > On Thu, Sep 26, 2019 at 05:16:19PM -0300, Alvaro Herrera wrote:\n> >> On 2019-Sep-26, Bruce Momjian wrote:\n> >>> Well, right now, pg_upgrade --check succeeds, but the upgrade fails. I\n> >>> am proposing, at a minimum, that pg_upgrade --check fails in such cases,\n> >> \n> >> Agreed, that should be a minimum fix.\n> > \n> > Yes.\n> \n> Agreed as well here. At least the latest patch proposed has the merit\n> to track automatically functions not existing anymore from the\n> source's version to the target's version, so patching --check offers a\n> good compromise. Bruce, are you planning to look more at the patch\n> posted at [1]?\n\nI did look at it. It has some TODO items listed in it still, and some\nC++ comments, but if everyone likes it I can apply it.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 27 Sep 2019 08:51:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "27.09.2019 15:51, Bruce Momjian wrote:\n> On Fri, Sep 27, 2019 at 04:22:15PM +0900, Michael Paquier wrote:\n>> On Thu, Sep 26, 2019 at 04:19:38PM -0400, Bruce Momjian wrote:\n>>> On Thu, Sep 26, 2019 at 05:16:19PM -0300, Alvaro Herrera wrote:\n>>>> On 2019-Sep-26, Bruce Momjian wrote:\n>>>>> Well, right now, pg_upgrade --check succeeds, but the upgrade fails. I\n>>>>> am proposing, at a minimum, that pg_upgrade --check fails in such cases,\n>>>> Agreed, that should be a minimum fix.\n>>> Yes.\n>> Agreed as well here. At least the latest patch proposed has the merit\n>> to track automatically functions not existing anymore from the\n>> source's version to the target's version, so patching --check offers a\n>> good compromise. Bruce, are you planning to look more at the patch\n>> posted at [1]?\n> I did look at it. It has some TODO items listed in it still, and some\n> C++ comments, but if everyone likes it I can apply it.\n\nCool. It seems that everyone agrees on this patch.\n\nI attached the updated version. Now it prints a better error message\nand generates an SQL script instead of multiple warnings. The attached \ntest script shows that.\n\nPatches for 10, 11, and 12 slightly differ due to merge conflicts, so I \nattached multiple versions.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 4 Oct 2019 14:53:40 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Greetings,\n\n* Anastasia Lubennikova (a.lubennikova@postgrespro.ru) wrote:\n> Cool. It seems that everyone agrees on this patch.\n\nThanks for working on this, I took a quick look over it and I do have\nsome concerns.\n\n> I attached the updated version. Now it prints a better error message\n> and generates an SQL script instead of multiple warnings. The attached test\n> script shows that.\n\nHave you tested this with extensions, where the user has changed the\nprivileges on the extension? I'm concerned that we'll throw out\nwarnings and tell users to go REVOKE privileges on any case where the\nprivileges on an extension's object were changed, but that shouldn't be\nnecessary and we should be excluding those.\n\nChanges to privileges on extensions should be handled just fine using\nthe existing code, at least for upgrades of PG.\n\nOtherwise, at least imv, the code could use more comments (inside the\nfunctions, not just function-level...) and there's a few wording\nimprovements that could be made. Again, not a full endorsement, as I\njust took a quick look, but it generally seems like a reasonable\napproach to go in and the issue with extensions was the only thing that\ncame to mind as a potential problem.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 8 Oct 2019 10:08:22 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "08.10.2019 17:08, Stephen Frost wrote:\n>> I attached the updated version. Now it prints a better error message\n>> and generates an SQL script instead of multiple warnings. The attached test\n>> script shows that.\n> Have you tested this with extensions, where the user has changed the\n> privileges on the extension? I'm concerned that we'll throw out\n> warnings and tell users to go REVOKE privileges on any case where the\n> privileges on an extension's object were changed, but that shouldn't be\n> necessary and we should be excluding those.\nGood catch.\nFixed in v3.\n\nI also added this check to previous test script. It passes with both v2 \nand v3,\nbut v3 doesn't throw unneeded warning for extension functions.\n\n> Changes to privileges on extensions should be handled just fine using\n> the existing code, at least for upgrades of PG.\n>\n> Otherwise, at least imv, the code could use more comments (inside the\n> functions, not just function-level...) and there's a few wording\n> improvements that could be made. Again, not a full endorsement, as I\n> just took a quick look, but it generally seems like a reasonable\n> approach to go in and the issue with extensions was the only thing that\n> came to mind as a potential problem.\nI added more comments and updated the error message.\nPlease, feel free to fix them, if you have any suggestions.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 28 Oct 2019 17:40:44 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 05:40:44PM +0300, Anastasia Lubennikova wrote:\n> I added more comments and updated the error message.\n> Please, feel free to fix them, if you have any suggestions.\n\nI have begun looking at this one.\n\n+ /* REVOKE command must be executed in corresponding database */\n+ if (*new_db)\n+ {\n+ fprintf(*script, _(\"\\\\c %s \\n\"), olddbinfo->db_name);\n+ *new_db = false;\n+ }\nThis will fail if the database to use includes a space? And it seems\nto me that log_incompatible_procedure() does not quote things\nproperly either.\n\n+ * from initial privilleges. Only check privileges set by initdb.\ns/privilleges/privileges/\n\nI think that there is little point to keep get_catalog_procedures()\nand check_catalog_procedures() separated. Why not just using a single\nentry point.\n\nWouldn't it be more simple to just use a cast as\npg_proc.oid::regprocedure in the queries?\n\nLet's also put some effort in the formatting of the SQL queries here:\n- Schema-qualification with pg_catalog.\n- Format of the clauses could be better (for examples two-space\nindentation for clauses, etc.)\n\nI think that we should keep the core part of the fix more simple by\njust warning about missing function signatures in the target cluster\nwhen pg_upgrade --check is used. So I think that it would be better\nfor now to get to a point where we can warn about the function\nsignatures involved in a given database, without the generation of the\nscript with those REVOKE queries. Or would folks prefer keep that in\nthe first version? My take would be to handle both separately, and to\nwarn about everything so as there is no need to do pg_upgrade --check\nmore than once.\n\nI may be missing something, but it seems to me that there is no need\nto attach proc_arr to DbInfo or have traces of it in pg_upgrade.h as\nlong as you keep the checks of function signatures local to the single\nentry point I am mentioning above.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 18:03:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 06:03:06PM +0900, Michael Paquier wrote:\n> I have begun looking at this one.\n\nAnother question I have: do we need to care more about other extra\nACLs applied to other object types? For example a subset of columns\non a table with a column being renamed could be an issue. Procedure\nrenamed in core are not that common still we did it.\n\nHere is another idea I have regarding this set of problems. We could\nuse pg_depend on the source for system objects and join it with\npg_init_privs, and then compare it with the entries of the target\nbased on the descriptions generated by pg_describe_object(). If there\nis an object renamed or an unmatching signature, then we would\nimmediately find about it, for any object types.\n--\nMichael",
"msg_date": "Sat, 9 Nov 2019 11:26:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "HelLo!\n\nOn 11/9/19 5:26 AM, Michael Paquier wrote:\n> On Fri, Nov 08, 2019 at 06:03:06PM +0900, Michael Paquier wrote:\n>> I have begun looking at this one.\n> Another question I have: do we need to care more about other extra\n> ACLs applied to other object types? For example a subset of columns\n> on a table with a column being renamed could be an issue. Procedure\n> renamed in core are not that common still we did it.\n\nI think that all objects must be supported.\nUser application may depend on them, so silently casting them to the \nvoid during upgrade may ruin someones day.\nBut also I think, that this work should be done piecemeal, to make \ntesting and reviewing easier, and functions are a good candidate for a \nfirst go.\n\n> I think that we should keep the core part of the fix more simple by\n> just warning about missing function signatures in the target cluster\n> when pg_upgrade --check is used.\n\nI think that warning without any concrete details on functions involved \nis confusing.\n> So I think that it would be better\n> for now to get to a point where we can warn about the function\n> signatures involved in a given database, without the generation of the\n> script with those REVOKE queries. Or would folks prefer keep that in\n> the first version? My take would be to handle both separately, and to\n> warn about everything so as there is no need to do pg_upgrade --check\n> more than once.\n\nI would prefer to warn about every function (so he can more quickly \nassess the situation)� AND generate script. It is good to save some user \ntime, because he is going to need that script anyway.\n\n\nI`ve tested the master patch:\n\n1. 
upgrade from 9.5 and lower is broken:\n\n[gsmol@deck upgrade_workdir]$ /home/gsmol/task/13_devel/bin/pg_upgrade \n-b /home/gsmol/task/9.5.19/bin/ -B /home/gsmol/task/13_devel/bin/ -d \n/home/gsmol/task/9.5.19/data -D /tmp/upgrade\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions                                   ok\nSQL command failed\nselect proname::text || '(' || \npg_get_function_arguments(pg_proc.oid)::text || ')' as funsig, array \n(SELECT unnest(pg_proc.proacl) EXCEPT SELECT \nunnest(pg_init_privs.initprivs)) from pg_proc join pg_init_privs on \npg_proc.oid = pg_init_privs.objoid where pg_init_privs.classoid = \n'pg_proc'::regclass::oid and pg_init_privs.privtype = 'i' and \npg_proc.proisagg = false and pg_proc.proacl != pg_init_privs.initprivs;\nERROR:  relation \"pg_init_privs\" does not exist\nLINE 1: ...nnest(pg_init_privs.initprivs)) from pg_proc join pg_init_pr...\n                                                              ^\nFailure, exiting\n\n\n2. I think that the message \"Your installation contains non-default \nprivileges for system procedures for which the API has changed.\" must \ncontain versions:\n\"Your PostgreSQL 9.5 installation contains non-default privileges for \nsystem procedures for which the API has changed in PostgreSQL 12.\"\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Fri, 15 Nov 2019 11:30:02 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Fri, Nov 15, 2019 at 11:30:02AM +0300, Grigory Smolkin wrote:\n> On 11/9/19 5:26 AM, Michael Paquier wrote:\n>> Another question I have: do we need to care more about other extra\n>> ACLs applied to other object types? For example a subset of columns\n>> on a table with a column being renamed could be an issue. Procedure\n>> renamed in core are not that common still we did it.\n> \n> I think that all objects must be supported.\n\nThe unfortunate part about the current approach is that it is not\nreally scalable from the point of view of the client. What the patch\ndoes is to compare the initdb-time ACLs and the ones stored in\npg_proc. In order to support all object types we would need to have\nmore client-side logic to join properly with all the catalogs holding\nthe ACLs of the objects to be compared. I am wondering if it would be\nmore simple to invent a backend function which uses an input similar\nto pg_describe_object() with (classid, objid and objsubid) that\nreturns the ACL of the corresponding object after looking at the\ncatalog defined by classid. This would simplify the client part to\njust scan pg_init_privs...\n\n>> So I think that it would be better\n>> for now to get to a point where we can warn about the function\n>> signatures involved in a given database, without the generation of the\n>> script with those REVOKE queries. Or would folks prefer keep that in\n>> the first version? My take would be to handle both separately, and to\n>> warn about everything so as there is no need to do pg_upgrade --check\n>> more than once.\n> \n> I would prefer to warn about every function (so he can more quickly assess\n> the situation) AND generate script. 
It is good to save some user time,\n> because he is going to need that script anyway.\n\nNot arguing against the fact that it is useful, but I'd think that it\nis a two-step process, where we need to understand what logic needs to\nbe in the backend or some frontend:\n1) Warn properly about the objects involved, where the object\ndescription returned by pg_describe_object would be fine enough to\nunderstand what's broken in a given database.\n2) Generate a script which may be used by the end-user.\n--\nMichael",
"msg_date": "Thu, 21 Nov 2019 17:53:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 05:53:16PM +0900, Michael Paquier wrote:\n> Not arguing against the fact that it is useful, but I'd think that it\n> is a two-step process, where we need to understand what logic needs to\n> be in the backend or some frontend:\n> 1) Warn properly about the objects involved, where the object\n> description returned by pg_describe_object would be fine enough to\n> understand what's broken in a given database.\n> 2) Generate a script which may be used by the end-user.\n\nSo, we have here a patch with no updates from the authors for the last\ntwo months. Anastasia, Arthur, are you still interested in this\nproblem? Gregory has provided a review lately and has pointed out\nsome issues.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 16:16:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Thank you for reviews!\n\nOn 2019/11/21 17:53, Michael Paquier wrote:\n> On Fri, Nov 15, 2019 at 11:30:02AM +0300, Grigory Smolkin wrote:\n>> On 11/9/19 5:26 AM, Michael Paquier wrote:\n>>> Another question I have: do we need to care more about other extra\n>>> ACLs applied to other object types? For example a subset of columns\n>>> on a table with a column being renamed could be an issue. Procedure\n>>> renamed in core are not that common still we did it.\n>>\n>> I think that all objects must be supported.\n> \n> The unfortunate part about the current approach is that it is not\n> really scalable from the point of view of the client. What the patch\n> does is to compare the initdb-time ACLs and the ones stored in\n> pg_proc. In order to support all object types we would need to have\n> more client-side logic to join properly with all the catalogs holding\n> the ACLs of the objects to be compared. I am wondering if it would be\n> more simple to invent a backend function which uses an input similar\n> to pg_describe_object() with (classid, objid and objsubid) that\n> returns the ACL of the corresponding object after looking at the\n> catalog defined by classid. This would simplify the client part to\n> just scan pg_init_privs...\n\nI've started to implement new backend function similar to \npg_describe_object() and tried to make new version of the patch. But I'm \nwondering now if it is possible to backpatch new functions to older \nPostgres releases? pg_upgrade will require a presence of this function \non an older source cluster.\n\nOther approach is similar to Anastasia's patch, which is scanning \npg_proc, pg_class, pg_attribute and others to get modified ACL's and \ncompare it with initial ACL from pg_init_privs. 
The next step is to find \nobjects whose names or signatures were changed, using \npg_describe_object() and scanning pg_depend of the new cluster (there is a \nproblem here though: there are no entries for relation columns).\n\n-- \nArtur\n\n\n",
"msg_date": "Wed, 27 Nov 2019 11:35:14 +0900",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Nov 27, 2019 at 11:35:14AM +0900, Artur Zakirov wrote:\n> I've started to implement new backend function similar to\n> pg_describe_object() and tried to make new version of the patch. But I'm\n> wondering now if it is possible to backpatch new functions to older\n> Postgres releases? pg_upgrade will require a presence of this function on an\n> older source cluster.\n\nNew functions cannot be backpatched because it would require a catalog\nbump.\n\n> Other approach is similar to Anastasia's patch, which is scanning pg_proc,\n> pg_class, pg_attribute and others to get modified ACL's and compare it with\n> initial ACL from pg_init_privs. Next step is to find objects which names or\n> signatures were changed using pg_describe_object() and scanning pg_depend of\n> new cluster\n\nYeah, the actual take is if we want to make the frontend code more\ncomplicated with a large set of SQL queries to check that each object\nACL is modified, which adds an additional maintenance cost in\npg_upgrade. Or if we want to keep the frontend simple and have more\nbackend facility to ease cross-catalog lookups for ACLs. Perhaps we\ncould also live with just checking after the ACLs of functions in the\nshort term and perhaps it covers most of the cases users would care\nabout.. That's tricky to conclude about and I am not sure which path\nis better in the long-term, but at least it's worth discussing all\npossible candidates IMO so as we make sure to not miss anything.\n\n> (there is a problem here though: there are no entries for\n> relations columns).\n\nWhen it comes to column ACLs, pg_shdepend stores a dependency between\nthe column's relation and the role.\n--\nMichael",
"msg_date": "Wed, 27 Nov 2019 13:22:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2019/11/27 13:22, Michael Paquier wrote:\n> On Wed, Nov 27, 2019 at 11:35:14AM +0900, Artur Zakirov wrote:\n>> Other approach is similar to Anastasia's patch, which is scanning pg_proc,\n>> pg_class, pg_attribute and others to get modified ACL's and compare it with\n>> initial ACL from pg_init_privs. Next step is to find objects which names or\n>> signatures were changed using pg_describe_object() and scanning pg_depend of\n>> new cluster\n> \n> Yeah, the actual take is if we want to make the frontend code more\n> complicated with a large set of SQL queries to check that each object\n> ACL is modified, which adds an additional maintenance cost in\n> pg_upgrade. Or if we want to keep the frontend simple and have more\n> backend facility to ease cross-catalog lookups for ACLs. Perhaps we\n> could also live with just checking after the ACLs of functions in the\n> short term and perhaps it covers most of the cases users would care\n> about.. That's tricky to conclude about and I am not sure which path\n> is better in the long-term, but at least it's worth discussing all\n> possible candidates IMO so as we make sure to not miss anything.\n\nI checked what objects changed their signatures between master and 9.6. \nI just ran pg_describe_object() for grantable object types, saved the \noutput into a file and diffed the outputs. It seems that only functions \nand table columns changed their signatures. A list of functions is big \nand here the list of columns:\n\ntable pg_attrdef column adsrc\ntable pg_class column relhasoids\ntable pg_class column relhaspkey\ntable pg_constraint column consrc\ntable pg_proc column proisagg\ntable pg_proc column proiswindow\ntable pg_proc column protransform\n\nAs a result I think in pg_upgrade we could just check functions and \ncolumns signatures. It might simplify the patch. 
And if something \nchanges in a future we could fix pg_upgrade too.\n\n>> (there is a problem here though: there are no entries for\n>> relations columns).\n> \n> When it comes to column ACLs, pg_shdepend stores a dependency between\n> the column's relation and the role.\n\nThank you for the hint. pg_shdepend could be used in a patch.\n\n-- \nArtur\n\n\n",
"msg_date": "Thu, 28 Nov 2019 12:29:34 +0900",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "If Anastasia doesn't mind I'd like to send new version of the patch.\n\nOn 2019/11/28 12:29, Artur Zakirov wrote:\n> On 2019/11/27 13:22, Michael Paquier wrote:\n>> Yeah, the actual take is if we want to make the frontend code more\n>> complicated with a large set of SQL queries to check that each object\n>> ACL is modified, which adds an additional maintenance cost in\n>> pg_upgrade.� Or if we want to keep the frontend simple and have more\n>> backend facility to ease cross-catalog lookups for ACLs.� Perhaps we\n>> could also live with just checking after the ACLs of functions in the\n>> short term and perhaps it covers most of the cases users would care\n>> about..� That's tricky to conclude about and I am not sure which path\n>> is better in the long-term, but at least it's worth discussing all\n>> possible candidates IMO so as we make sure to not miss anything.\n> \n> I checked what objects changed their signatures between master and 9.6. \n> I just ran pg_describe_object() for grantable object types, saved the \n> output into a file and diffed the outputs. It seems that only functions \n> and table columns changed their signatures. A list of functions is big \n> and here the list of columns:\n> \n> table pg_attrdef column adsrc\n> table pg_class column relhasoids\n> table pg_class column relhaspkey\n> table pg_constraint column consrc\n> table pg_proc column proisagg\n> table pg_proc column proiswindow\n> table pg_proc column protransform\n> \n> As a result I think in pg_upgrade we could just check functions and \n> columns signatures. It might simplify the patch. 
And if something \n> changes in a future we could fix pg_upgrade too.\nThe new version of the patch differs from the previous one:\n- it doesn't generate a script to revoke conflicting permissions (but the \npatch can be fixed easily)\n- generates the file incompatible_objects_for_acl.txt to report which \nobjects changed their signatures\n- uses the pg_shdepend and pg_depend catalogs to get objects with custom \nprivileges\n- uses pg_describe_object() to find objects with different signatures\n\nCurrently relations, attributes, languages, functions and procedures are \nscanned. According to the documentation, foreign databases, foreign-data \nwrappers, foreign servers, schemas and tablespaces also support ACLs. \nBut some of them don't have entries initialized during initdb, and others \nlike schemas and tablespaces didn't change their names. So the patch \ndoesn't scan such objects.\n\nGrigory, it would be great if you could try the patch. I tested it, but I \nmay have missed something.\n\n-- \nArtur",
"msg_date": "Fri, 29 Nov 2019 17:07:13 +0900",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Hello!\n\nOn 11/29/19 11:07 AM, Artur Zakirov wrote:\n> If Anastasia doesn't mind I'd like to send new version of the patch.\n>\n> On 2019/11/28 12:29, Artur Zakirov wrote:\n>> On 2019/11/27 13:22, Michael Paquier wrote:\n>>> Yeah, the actual take is if we want to make the frontend code more\n>>> complicated with a large set of SQL queries to check that each object\n>>> ACL is modified, which adds an additional maintenance cost in\n>>> pg_upgrade.� Or if we want to keep the frontend simple and have more\n>>> backend facility to ease cross-catalog lookups for ACLs. Perhaps we\n>>> could also live with just checking after the ACLs of functions in the\n>>> short term and perhaps it covers most of the cases users would care\n>>> about..� That's tricky to conclude about and I am not sure which path\n>>> is better in the long-term, but at least it's worth discussing all\n>>> possible candidates IMO so as we make sure to not miss anything.\n>>\n>> I checked what objects changed their signatures between master and \n>> 9.6. I just ran pg_describe_object() for grantable object types, \n>> saved the output into a file and diffed the outputs. It seems that \n>> only functions and table columns changed their signatures. A list of \n>> functions is big and here the list of columns:\n>>\n>> table pg_attrdef column adsrc\n>> table pg_class column relhasoids\n>> table pg_class column relhaspkey\n>> table pg_constraint column consrc\n>> table pg_proc column proisagg\n>> table pg_proc column proiswindow\n>> table pg_proc column protransform\n>>\n>> As a result I think in pg_upgrade we could just check functions and \n>> columns signatures. It might simplify the patch. 
And if something \n>> changes in a future we could fix pg_upgrade too.\n> New version of the patch differs from the previous:\n> - it doesn't generate script to revoke conflicting permissions (but \n> the patch can be fixed easily)\n> - generates file incompatible_objects_for_acl.txt to report which \n> objects changed their signatures\n> - uses pg_shdepend and pg_depend catalogs to get objects with custom \n> privileges\n> - uses pg_describe_object() to find objects with different signatures\n>\n> Currently relations, attributes, languages, functions and procedures \n> are scanned. According to the documentation foreign databases, \n> foreign-data wrappers, foreign servers, schemas and tablespaces also \n> support ACLs. But some of them doesn't have entries initialized during \n> initdb, others like schemas and tablespaces didn't change their names. \n> So the patch doesn't scan such objects.\n>\n> Grigory it would be great if you'll try the patch. I tested it but I \n> could miss something.\n\nI've tested the patch on an upgrade from 9.5 to master and it works now, \nthank you.\nBut I think that 'incompatible_objects_for_acl.txt' is not a very \ninformative way of reporting the source of the problem to the user.\nThere is no mention of the role names that hold the permissions preventing \nthe upgrade, and even if there were, it is still up to the user to conjure a \nscript to revoke all those grants, which is not very user-friendly.\n\nI think it should generate the 'catalog_procedures.sql' script as in the \nprevious version, with all the REVOKE statements that must be \nexecuted for pg_upgrade to work.\n\n\n-- \nGrigory Smolkin\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Sun, 1 Dec 2019 17:58:54 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2019/12/01 23:58, Grigory Smolkin wrote:\n> On 11/29/19 11:07 AM, Artur Zakirov wrote:\n>> New version of the patch differs from the previous:\n>> - it doesn't generate script to revoke conflicting permissions (but \n>> the patch can be fixed easily)\n>> - generates file incompatible_objects_for_acl.txt to report which \n>> objects changed their signatures\n>> - uses pg_shdepend and pg_depend catalogs to get objects with custom \n>> privileges\n>> - uses pg_describe_object() to find objects with different signatures\n>>\n>> Currently relations, attributes, languages, functions and procedures \n>> are scanned. According to the documentation foreign databases, \n>> foreign-data wrappers, foreign servers, schemas and tablespaces also \n>> support ACLs. But some of them doesn't have entries initialized during \n>> initdb, others like schemas and tablespaces didn't change their names. \n>> So the patch doesn't scan such objects.\n>>\n>> Grigory it would be great if you'll try the patch. I tested it but I \n>> could miss something.\n> \n> I`ve tested the patch on upgrade from 9.5 to master and it works now, \n> thank you.\n\nGreat!\n\n> But I think that 'incompatible_objects_for_acl.txt' is not a very \n> informative way of reporting to user the source of the problem.\n> There is no mentions of rolenames that holds permissions, that prevents \n> the upgrade, and even if they were, it is still up to user to conjure an \n> script to revoke all those grants, which is not a very user-friendly.\n\nI tried to find some pattern how pg_upgrade decides to log list of \nproblem objects or to generate SQL script file to execute by user. 
\nNowadays only \"Checking for large objects\" and \"Checking for hash \nindexes\" generate SQL script files and log a WARNING message.\n\n> I think it should generate 'catalog_procedures.sql' script as in \n> previous version with all REVOKE statements, that are required to \n> execute for pg_upgrade to work.\n\nI updated the patch. It generates \"revoke_objects.sql\" (similar to the v3 \npatch) now and doesn't rely on the --check option. It also still logs a FATAL \nmessage, because it seems pg_upgrade should stop here since it fails \nlater if there are objects with changed identities.\n\nThe patch doesn't generate \"incompatible_objects_for_acl.txt\" now \nbecause it would duplicate \"revoke_objects.sql\".\n\nIt now uses pg_identify_object() instead of pg_describe_object(); it is \neasier to get column names with it.\n\n-- \nArthur",
"msg_date": "Wed, 4 Dec 2019 12:17:25 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 12:17:25PM +0900, Arthur Zakirov wrote:\n> I updated the patch. It generates \"revoke_objects.sql\" (similar to v3 patch)\n> now and doesn't rely on --check option. It also logs still FATAL message\n> because it seems pg_upgrade should stop here since it fails later if there\n> are objects with changed identities.\n\n(I haven't looked at the patch yet, sorry!)\n\nFWIW, I am not much a fan of that part because the output generated by\nthe description is most likely not compatible with the grammar\nsupported.\n\nIn order to make the review easier, and to test for all the patterns\nwe need to cover, I have an evil idea though. Could you write a\ndummy, still simple patch for HEAD which introduces\nbackward-incompatible changes for all the object types we want to\nstress? Then by having ACLs on the source server which are fakely\nbroken on the target server we can make sure that the queries we have\nare right, and that they report the objects we are looking for.\n--\nMichael",
"msg_date": "Wed, 4 Dec 2019 17:15:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2019/12/04 17:15, Michael Paquier wrote:\n> On Wed, Dec 04, 2019 at 12:17:25PM +0900, Arthur Zakirov wrote:\n>> I updated the patch. It generates \"revoke_objects.sql\" (similar to v3 patch)\n>> now and doesn't rely on --check option. It also logs still FATAL message\n>> because it seems pg_upgrade should stop here since it fails later if there\n>> are objects with changed identities.\n> \n> (I haven't looked at the patch yet, sorry!)\n> \n> FWIW, I am not much a fan of that part because the output generated by\n> the description is most likely not compatible with the grammar\n> supported.\n\nAh, I thought that pg_identify_object() gives properly quoted identity, \nand it could be used to make SQL script.\n\n> In order to make the review easier, and to test for all the patterns\n> we need to cover, I have an evil idea though. Could you write a\n> dummy, still simple patch for HEAD which introduces\n> backward-incompatible changes for all the object types we want to\n> stress? Then by having ACLs on the source server which are fakely\n> broken on the target server we can make sure that the queries we have\n> are right, and that they report the objects we are looking for.\n\nSure! But I'm not sure that I understood the idea. Do you mean the patch \nwhich adds a TAP test? It can initialize two clusters, in first it \nrenames some system objects and changes their ACLs. And finally the test \nruns pg_upgrade which will fail.\n\n-- \nArthur\n\n\n",
"msg_date": "Wed, 4 Dec 2019 18:15:52 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Dec 04, 2019 at 06:15:52PM +0900, Arthur Zakirov wrote:\n> On 2019/12/04 17:15, Michael Paquier wrote:\n>> FWIW, I am not much a fan of that part because the output generated by\n>> the description is most likely not compatible with the grammar\n>> supported.\n> \n> Ah, I thought that pg_identify_object() gives properly quoted identity, and\n> it could be used to make SQL script.\n\nIt depends on the object type. For columns I can see in your patch\nthat you are using a dedicated code path, but I don't think that we\nshould assume that there is an exact one-one mapping between the\nobject type and the grammar of GRANT/REVOKE for the object type\nsupported because both are completely independent things and\nfacilities. Take for example foreign data wrappers. Of course for\nthis patch this depends if we have system object of the type which\nwould be impacted. That's not the case of foreign data wrappers and\nlikely it will never be, but there is no way to be sure that this\nwon't get broken silently in the future.\n\n> Sure! But I'm not sure that I understood the idea. Do you mean the patch\n> which adds a TAP test? It can initialize two clusters, in first it renames\n> some system objects and changes their ACLs. And finally the test runs\n> pg_upgrade which will fail.\n\nA TAP test won't help here because the idea is to create a patch for\nHEAD which willingly introduces changes for system objects, where the\nsource binaries have ACLs on object types which are broken on the\ntarget binaries with the custom patch. That's to make sure that all\nthe logic which would get added to pu_upgrade is working correctly.\nOther ideas are welcome.\n--\nMichael",
"msg_date": "Thu, 5 Dec 2019 11:31:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Hello,\n\nOn 2019/12/05 11:31, Michael Paquier wrote:\n> On Wed, Dec 04, 2019 at 06:15:52PM +0900, Arthur Zakirov wrote:\n>> Ah, I thought that pg_identify_object() gives properly quoted identity, and\n>> it could be used to make SQL script.\n> \n> It depends on the object type. For columns I can see in your patch\n> that you are using a dedicated code path, but I don't think that we\n> should assume that there is an exact one-one mapping between the\n> object type and the grammar of GRANT/REVOKE for the object type\n> supported because both are completely independent things and\n> facilities. Take for example foreign data wrappers. Of course for\n> this patch this depends if we have system object of the type which\n> would be impacted. That's not the case of foreign data wrappers and\n> likely it will never be, but there is no way to be sure that this\n> won't get broken silently in the future.\n\nI attached new version of the patch. It still uses pg_identify_object(), \nI'm not sure about other ways to build identities yet.\n\nIt handles relations, their columns, functions, procedure and languages. \nThere are other GRANT'able objects, like databases, foreign data \nwrappers and servers, schemas and tablespaces.\n\nI didn't include handling of databases, schemas and tablespaces because \nI don't know yet how to identify if a such object is system object other \nthan just hard code them. They are not pinned and are not created within \npg_catalog schema.\n\nForeign data wrappers and servers are not handled too. There are no such \nbuilt-in objects, so it is not possible to test them with \n\"test_rename_catalog_objects.patch\". And I'm not sure if they will be \npinned (to test if it is system) if there will be such objects in the \nfuture.\n\n>> Sure! But I'm not sure that I understood the idea. Do you mean the patch\n>> which adds a TAP test? It can initialize two clusters, in first it renames\n>> some system objects and changes their ACLs. 
And finally the test runs\n>> pg_upgrade which will fail.\n> \n> A TAP test won't help here because the idea is to create a patch for\n> HEAD which willingly introduces changes for system objects, where the\n> source binaries have ACLs on object types which are broken on the\n> target binaries with the custom patch. That's to make sure that all\n> the logic which would get added to pu_upgrade is working correctly.\n> Other ideas are welcome.\n\nI added \"test_rename_catalog_objects.patch\" and \n\"test_add_acl_to_catalog_objects.sql\". So testing steps are following:\n- initialize new source instance (e.g. pg v12) and run \n\"test_add_acl_to_catalog_objects.sql\" on it\n- apply \"pg_upgrade_ACL_check_v6.patch\" and \n\"test_add_acl_to_catalog_objects.sql\" for HEAD\n- initialize new target instance for HEAD\n- run pg_upgrade, it should create revoke_objects.sql file\n\n\"test_rename_catalog_objects.patch\" should be applied after \n\"pg_upgrade_ACL_check_v6.patch\".\n\nRenamed objects are the following:\n- table pg_subscription -> pg_sub\n- columns pg_subscription.subenabled -> subactive, \npg_subscription.subconninfo -> subconn\n- view pg_stat_subscription -> pg_stat_sub\n- columns pg_stat_subscription.received_lsn -> received_location, \npg_stat_subscription.latest_end_lsn -> latest_end_location\n- function pg_stat_get_subscription -> pg_stat_get_sub\n- language sql -> pgsql\n\n-- \nArthur",
"msg_date": "Tue, 17 Dec 2019 17:10:28 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 17.12.2019 11:10, Arthur Zakirov wrote:\n> On 2019/12/05 11:31, Michael Paquier wrote:\n>> On Wed, Dec 04, 2019 at 06:15:52PM +0900, Arthur Zakirov wrote:\n>>> Ah, I thought that pg_identify_object() gives properly quoted \n>>> identity, and\n>>> it could be used to make SQL script.\n>>\n>> It depends on the object type.� For columns I can see in your patch\n>> that you are using a dedicated code path, but I don't think that we\n>> should assume that there is an exact one-one mapping between the\n>> object type and the grammar of GRANT/REVOKE for the object type\n>> supported because both are completely independent things and\n>> facilities.� Take for example foreign data wrappers.� Of course for\n>> this patch this depends if we have system object of the type which\n>> would be impacted.� That's not the case of foreign data wrappers and\n>> likely it will never be, but there is no way to be sure that this\n>> won't get broken silently in the future.\n>\n> I attached new version of the patch. It still uses \n> pg_identify_object(), I'm not sure about other ways to build \n> identities yet.\n>\nMichael, do I understand it correctly that your concerns about \npg_identify_object() relate only to the revoke sql script?\n\nNow, I tend to agree that we should split this patch into two separate \nparts, to make it simpler.\nThe first patch is to find affected objects and print warnings and the \nsecond is to generate script.\n\n>\n> I added \"test_rename_catalog_objects.patch\" and \n> \"test_add_acl_to_catalog_objects.sql\". So testing steps are following:\n> - initialize new source instance (e.g. 
pg v12) and run \n> \"test_add_acl_to_catalog_objects.sql\" on it\n> - apply \"pg_upgrade_ACL_check_v6.patch\" and \n> \"test_add_acl_to_catalog_objects.sql\" for HEAD\n> - initialize new target instance for HEAD\n> - run pg_upgrade, it should create revoke_objects.sql file\n>\n> \"test_rename_catalog_objects.patch\" should be applied after \n> \"pg_upgrade_ACL_check_v6.patch\".\n>\n> Renamed objects are the following:\n> - table pg_subscription -> pg_sub\n> - columns pg_subscription.subenabled -> subactive, \n> pg_subscription.subconninfo -> subconn\n> - view pg_stat_subscription -> pg_stat_sub\n> - columns pg_stat_subscription.received_lsn -> received_location, \n> pg_stat_subscription.latest_end_lsn -> latest_end_location\n> - function pg_stat_get_subscription -> pg_stat_get_sub\n> - language sql -> pgsql\n>\nI've tried to test it; the script is attached.\nThe described test case works. If a new cluster contains renamed objects, \n/pg_upgrade --check/ successfully finds them and generates a script to \nrevoke non-default ACLs. Though, without \ntest_rename_catalog_objects.patch, /pg_upgrade --check/ still generates \nthe same message, which is a false positive.\n\nI am going to fix it and send the updated patch later this week.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 24 Mar 2020 20:01:07 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 12/17/19 3:10 AM, Arthur Zakirov wrote:\n> \n> I attached new version of the patch. It still uses pg_identify_object(), \n> I'm not sure about other ways to build identities yet.\n\nThis patch applies and builds but fails regression tests on Linux and \nWindows:\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/666134656\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85292\n\nThe CF entry has been updated to Waiting on Author.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 24 Mar 2020 13:08:50 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "Hello David,\n\nOn 3/25/2020 2:08 AM, David Steele wrote:\n> On 12/17/19 3:10 AM, Arthur Zakirov wrote:\n>>\n>> I attached new version of the patch. It still uses \n>> pg_identify_object(), I'm not sure about other ways to build \n>> identities yet.\n> \n> This patch applies and builds but fails regression tests on Linux and \n> Windows:\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/666134656\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85292\n\nRegression tests fail because cfbot applies \n\"test_rename_catalog_objects.patch\". Regression tests pass without it.\n\nThis patch shouldn't be applied by cfbot. I'm not sure how to do this. \nBut maybe it is possible to use different extension name for the test \npatch, not \".patch\".\n\n-- \nArtur\n\n\n",
"msg_date": "Wed, 25 Mar 2020 11:12:05 +0900",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "> On 25 Mar 2020, at 03:12, Artur Zakirov <zaartur@gmail.com> wrote:\n\n> Regression tests fail because cfbot applies \"test_rename_catalog_objects.patch\". Regression tests pass without it.\n> \n> This patch shouldn't be applied by cfbot. I'm not sure how to do this. But maybe it is possible to use different extension name for the test patch, not \".patch\".\n\nTo get around that, post a new version again without the test_ patch, that\nshould make the cfbot pick up only the \"new\" patches.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 25 Mar 2020 09:14:51 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 11:12:05AM +0900, Artur Zakirov wrote:\n> Hello David,\n> \n> On 3/25/2020 2:08 AM, David Steele wrote:\n> > On 12/17/19 3:10 AM, Arthur Zakirov wrote:\n> > > \n> > > I attached new version of the patch. It still uses\n> > > pg_identify_object(), I'm not sure about other ways to build\n> > > identities yet.\n> > \n> > This patch applies and builds but fails regression tests on Linux and\n> > Windows:\n> > https://travis-ci.org/postgresql-cfbot/postgresql/builds/666134656\n> > https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.85292\n> \n> Regression tests fail because cfbot applies\n> \"test_rename_catalog_objects.patch\". Regression tests pass without it.\n> \n> This patch shouldn't be applied by cfbot. I'm not sure how to do this. But\n> maybe it is possible to use different extension name for the test patch, not\n> \".patch\".\n\nYes, see Tom's message here:\nhttps://www.postgresql.org/message-id/14255.1536781029%40sss.pgh.pa.us\n\nI don't know what extensions cfbot looks for; in that case I didn't have a file\nextension at all.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 07:46:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "New version of the patch is attached. I fixed several issues in the \ncheck_for_changed_signatures().\n\nNow it passes check without \"test_rename_catalog_objects\" and fails \n(generates script) with it. Test script pg_upgrade_ACL_test.sh \ndemonstrates this.\n\nThe only known issue left is the usage of pg_identify_object(), though I \ndon't see a problem here with object types that this patch involves.\nAs I updated the code, I will leave this patch in Need Review.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 6 Apr 2020 16:49:31 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 06.04.2020 16:49, Anastasia Lubennikova wrote:\n> New version of the patch is attached. I fixed several issues in the \n> check_for_changed_signatures().\n>\n> Now it passes check without \"test_rename_catalog_objects\" and fails \n> (generates script) with it. Test script pg_upgrade_ACL_test.sh \n> demonstrates this.\n>\n> The only known issue left is the usage of pg_identify_object(), though \n> I don't see a problem here with object types that this patch involves.\n> As I updated the code, I will leave this patch in Need Review.\n>\nOne more fix for free_acl_infos().\nThanks to cfbot.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 6 Apr 2020 19:40:39 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 06.04.2020 19:40, Anastasia Lubennikova wrote:\n> On 06.04.2020 16:49, Anastasia Lubennikova wrote:\n>> New version of the patch is attached. I fixed several issues in the \n>> check_for_changed_signatures().\n>>\n>> Now it passes check without \"test_rename_catalog_objects\" and fails \n>> (generates script) with it. Test script pg_upgrade_ACL_test.sh \n>> demonstrates this.\n>>\n>> The only known issue left is the usage of pg_identify_object(), \n>> though I don't see a problem here with object types that this patch \n>> involves.\n>> As I updated the code, I will leave this patch in Need Review.\n>>\n> One more fix for free_acl_infos().\n> Thanks to cfbot.\n>\nOne more update.\nIn this version I rebased test patches,� added some more comments, fixed \nmemory allocation issue and also removed code that handles ACLs on \nlanguages. They require a lot of specific handling, while I doubt that \ntheir signatures, which consist of language name only, are subject to \nchange in any future versions.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 8 Jun 2020 18:44:23 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 2020-Jun-08, Anastasia Lubennikova wrote:\n\n> In this version I rebased test patches,� added some more comments, fixed\n> memory allocation issue and also removed code that handles ACLs on\n> languages. They require a lot of specific handling, while I doubt that their\n> signatures, which consist of language name only, are subject to change in\n> any future versions.\n\nI'm thinking what's a good way to have a test that's committable. Maybe\nit would work to add a TAP test to pg_upgrade that runs initdb, does a\nfew GRANTS as per your attachment, then runs pg_upgrade? Currently the\npg_upgrade tests are not TAP ... we'd have to revive \nhttps://postgr.es/m/20180126080026.GI17847@paquier.xyz\n(Some progress was already made on the buildfarm front to permit this)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Jun 2020 12:31:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 08.06.2020 19:31, Alvaro Herrera wrote:\n> On 2020-Jun-08, Anastasia Lubennikova wrote:\n>\n>> In this version I rebased test patches, added some more comments, fixed\n>> memory allocation issue and also removed code that handles ACLs on\n>> languages. They require a lot of specific handling, while I doubt that their\n>> signatures, which consist of language name only, are subject to change in\n>> any future versions.\n> I'm thinking what's a good way to have a test that's committable. Maybe\n> it would work to add a TAP test to pg_upgrade that runs initdb, does a\n> few GRANTS as per your attachment, then runs pg_upgrade? Currently the\n> pg_upgrade tests are not TAP ... we'd have to revive\n> https://postgr.es/m/20180126080026.GI17847@paquier.xyz\n> (Some progress was already made on the buildfarm front to permit this)\n\nI would be glad to add some test, but it seems to me that the infrastructure\nchanges for cross-version pg_upgrade test is much more complicated task than\nthis modest bugfix. Besides, I've read the discussion and it seems that \nMichael\nis not going to continue this work.\n\nAttached v10 patch contains more fix for uninitialized variable.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 11 Jun 2020 19:58:43 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 07:58:43PM +0300, Anastasia Lubennikova wrote:\n> I would be glad to add some test, but it seems to me that the infrastructure\n> changes for cross-version pg_upgrade test is much more complicated task than\n> this modest bugfix. Besides, I've read the discussion and it seems that\n> Michael\n> is not going to continue this work.\n\nThe main issue I recall from this patch series was the lack of\nenthusiasm because it would break potential users running major\nupgrade tests based on test.sh. At the same time, if you don't break\nthe wheel..\n\n> Attached v10 patch contains more fix for uninitialized variable.\n\nThanks. Sorry for the time it takes. I'd like to get into this issue\nbut I have not been able to dive into it seriously yet.\n--\nMichael",
"msg_date": "Fri, 12 Jun 2020 16:08:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "I tested this patch by running several upgrades from different major \nversions and different setups.\nACLs that are impossible to apply are detected and reported, so it \nlooks good to me.\n\n\n\n",
"msg_date": "Mon, 23 Nov 2020 19:37:36 +0300",
"msg_from": "Grigory Smolkin <g.smolkin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 07:58:43PM +0300, Anastasia Lubennikova wrote:\n> On 08.06.2020 19:31, Alvaro Herrera wrote:\n> >I'm thinking what's a good way to have a test that's committable. Maybe\n> >it would work to add a TAP test to pg_upgrade that runs initdb, does a\n> >few GRANTS as per your attachment, then runs pg_upgrade? Currently the\n> >pg_upgrade tests are not TAP ... we'd have to revive\n> >https://postgr.es/m/20180126080026.GI17847@paquier.xyz\n> >(Some progress was already made on the buildfarm front to permit this)\n> \n> I would be glad to add some test, but it seems to me that the infrastructure\n> changes for cross-version pg_upgrade test is much more complicated task than\n> this modest bugfix.\n\nAgreed.\n\n> --- a/src/bin/pg_upgrade/check.c\n> +++ b/src/bin/pg_upgrade/check.c\n\n> +static void\n> +check_for_changed_signatures(void)\n> +{\n\n> +\tsnprintf(output_path, sizeof(output_path), \"revoke_objects.sql\");\n\nIf an upgrade deleted function pg_last_wal_replay_lsn, upgrading a database in\nwhich \"REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public\" happened\nrequires a GRANT. Can you use a file name that will still make sense if\nsomeone enhances pg_upgrade to generate such GRANT statements?\n\n\n> +\t\t\t\tif (script == NULL && (script = fopen_priv(output_path, \"w\")) == NULL)\n> +\t\t\t\t\tpg_fatal(\"could not open file \\\"%s\\\": %s\\n\",\n> +\t\t\t\t\t\t\t output_path, strerror(errno));\n\nUse %m instead of passing strerror(errno) to %s.\n\n> +\t\t\t\t}\n> +\n> +\t\t\t\t/* Handle columns separately */\n> +\t\t\t\tif (strstr(aclinfo->obj_type, \"column\") != NULL)\n> +\t\t\t\t{\n> +\t\t\t\t\tchar\t *pdot = last_dot_location(aclinfo->obj_ident);\n\nI'd write strrchr(aclinfo->obj_ident, '.') instead of introducing\nlast_dot_location().\n\nThis assumes column names don't contain '.'; how difficult would it be to\nremove that assumption? 
If it's difficult, a cheap approach would be to\nignore any entry containing too many dots. We're not likely to create such\ncolumn names at initdb time, but v13 does allow:\n\n ALTER VIEW pg_locks RENAME COLUMN objid TO \"a.b\";\n GRANT SELECT (\"a.b\") ON pg_locks TO backup;\n\nAfter which revoke_objects.sql has:\n\n psql:./revoke_objects.sql:9: ERROR: syntax error at or near \"\") ON pg_catalog.pg_locks.\"\"\n LINE 1: REVOKE ALL (b\") ON pg_catalog.pg_locks.\"a FROM \"backup\";\n\nWhile that ALTER VIEW probably should have required allow_system_table_mods,\nwe need to assume such databases exist.\n\n> --- a/src/bin/pg_upgrade/info.c\n> +++ b/src/bin/pg_upgrade/info.c\n\n> +get_non_default_acl_infos(ClusterInfo *cluster)\n> +{\n\n> +\t\tres = executeQueryOrDie(conn,\n> +\t\t\t/*\n> +\t\t\t * Get relations, attributes, functions and procedures. Some system\n> +\t\t\t * objects like views are not pinned, but these type of objects are\n> +\t\t\t * created in pg_catalog schema.\n> +\t\t\t */\n> +\t\t\t\"SELECT obj.type, obj.identity, shd.refobjid::regrole rolename, \"\n> +\t\t\t\" CASE WHEN shd.classid = 'pg_class'::regclass THEN true \"\n> +\t\t\t\" ELSE false \"\n> +\t\t\t\" END is_relation \"\n> +\t\t\t\"FROM pg_catalog.pg_shdepend shd, \"\n> +\t\t\t\"LATERAL pg_catalog.pg_identify_object(\"\n> +\t\t\t\" shd.classid, shd.objid, shd.objsubid) obj \"\n> +\t\t\t/* 'a' is for SHARED_DEPENDENCY_ACL */\n> +\t\t\t\"WHERE shd.deptype = 'a' AND shd.dbid = %d \"\n> +\t\t\t\" AND shd.classid IN ('pg_proc'::regclass, 'pg_class'::regclass) \"\n> +\t\t\t/* get only system objects */\n> +\t\t\t\" AND obj.schema = 'pg_catalog' \"\n\nOverall, this patch predicts a subset of cases where pg_dump will emit a\nfailing GRANT or REVOKE that targets a pg_catalog object. Can you write a\ncode comment stating what it does and does not detect? I think it's okay to\nnot predict every failure, but let's record the intended coverage. Here are a\nfew examples I know; there may be more. 
The above query only finds GRANTs to\nnon-pinned roles. pg_dump dumps the following, but the above query doesn't\nfind them:\n\n REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public;\n GRANT EXECUTE ON FUNCTION pg_reload_conf() TO pg_signal_backend;\n\nThe above query should test refclassid.\n\nThis function should not run queries against servers older than 9.6, because\npg_dump emits GRANT/REVOKE for pg_catalog objects only when targeting 9.6 or\nlater. An upgrade from 8.4 to master is failing on the above query:\n\n===\nChecking for large objects ok\nSQL command failed\nSELECT obj.type, obj.identity, shd.refobjid::regrole rolename, CASE WHEN shd.classid = 'pg_class'::regclass THEN true ELSE false END is_relation FROM pg_catalog.pg_shdepend shd, LATERAL pg_catalog.pg_identify_object( shd.classid, shd.objid, shd.objsubid) obj WHERE shd.deptype = 'a' AND shd.dbid = 11564 AND shd.classid IN ('pg_proc'::regclass, 'pg_class'::regclass) AND obj.schema = 'pg_catalog' ORDER BY obj.identity COLLATE \"C\";\nERROR: syntax error at or near \".\"\nLINE 1: ...ROM pg_catalog.pg_shdepend shd, LATERAL pg_catalog.pg_identi...\n ^\nFailure, exiting\n===\n\n> +\t\t\twhile (aclnum < PQntuples(res) &&\n> +\t\t\t\t strcmp(curr->obj_ident, PQgetvalue(res, aclnum, i_obj_ident)) == 0)\n\nThe patch adds many lines wider than 78 columns, this being one example. In\ngeneral, break such lines. (Don't break string literal arguments of\nereport(), though.)\n\nnm\n\n\n",
"msg_date": "Sun, 3 Jan 2021 03:29:55 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 03.01.2021 14:29, Noah Misch wrote:\n> On Thu, Jun 11, 2020 at 07:58:43PM +0300, Anastasia Lubennikova wrote:\n>> On 08.06.2020 19:31, Alvaro Herrera wrote:\n>>> I'm thinking what's a good way to have a test that's committable. Maybe\n>>> it would work to add a TAP test to pg_upgrade that runs initdb, does a\n>>> few GRANTS as per your attachment, then runs pg_upgrade? Currently the\n>>> pg_upgrade tests are not TAP ... we'd have to revive\n>>> https://postgr.es/m/20180126080026.GI17847@paquier.xyz\n>>> (Some progress was already made on the buildfarm front to permit this)\n>> I would be glad to add some test, but it seems to me that the infrastructure\n>> changes for cross-version pg_upgrade test is much more complicated task than\n>> this modest bugfix.\n> Agreed.\nThank you for the review.\nNew version of the patch is attached, though I haven't tested it \nproperly yet. I am planning to do in a couple of days.\n>> --- a/src/bin/pg_upgrade/check.c\n>> +++ b/src/bin/pg_upgrade/check.c\n>> +static void\n>> +check_for_changed_signatures(void)\n>> +{\n>> +\tsnprintf(output_path, sizeof(output_path), \"revoke_objects.sql\");\n> If an upgrade deleted function pg_last_wal_replay_lsn, upgrading a database in\n> which \"REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public\" happened\n> requires a GRANT. Can you use a file name that will still make sense if\n> someone enhances pg_upgrade to generate such GRANT statements?\nI changed the name to 'fix_system_objects_ACL.sql'. 
Does it look good to \nyou?\n\n>\n>> +\t\t\t\tif (script == NULL && (script = fopen_priv(output_path, \"w\")) == NULL)\n>> +\t\t\t\t\tpg_fatal(\"could not open file \\\"%s\\\": %s\\n\",\n>> +\t\t\t\t\t\t\t output_path, strerror(errno));\n> Use %m instead of passing sterror(errno) to %s.\nDone.\n>> +\t\t\t\t}\n>> +\n>> +\t\t\t\t/* Handle columns separately */\n>> +\t\t\t\tif (strstr(aclinfo->obj_type, \"column\") != NULL)\n>> +\t\t\t\t{\n>> +\t\t\t\t\tchar\t *pdot = last_dot_location(aclinfo->obj_ident);\n> I'd write strrchr(aclinfo->obj_ident, '.') instead of introducing\n> last_dot_location().\n>\n> This assumes column names don't contain '.'; how difficult would it be to\n> remove that assumption? If it's difficult, a cheap approach would be to\n> ignore any entry containing too many dots. We're not likely to create such\n> column names at initdb time, but v13 does allow:\n>\n> ALTER VIEW pg_locks RENAME COLUMN objid TO \"a.b\";\n> GRANT SELECT (\"a.b\") ON pg_locks TO backup;\n>\n> After which revoke_objects.sql has:\n>\n> psql:./revoke_objects.sql:9: ERROR: syntax error at or near \"\") ON pg_catalog.pg_locks.\"\"\n> LINE 1: REVOKE ALL (b\") ON pg_catalog.pg_locks.\"a FROM \"backup\";\n>\n> While that ALTER VIEW probably should have required allow_system_table_mods,\n> we need to assume such databases exist.\n\nI've fixed it by using pg_identify_object_as_address(). Now we don't \nhave to parse obj_identity.\n\n>\n> Overall, this patch predicts a subset of cases where pg_dump will emit a\n> failing GRANT or REVOKE that targets a pg_catalog object. Can you write a\n> code comment stating what it does and does not detect? I think it's okay to\n> not predict every failure, but let's record the intended coverage. Here are a\n> few examples I know; there may be more. The above query only finds GRANTs to\n> non-pinned roles. 
pg_dump dumps the following, but the above query doesn't\n> find them:\n>\n> REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public;\n> GRANT EXECUTE ON FUNCTION pg_reload_conf() TO pg_signal_backend;\n>\n> The above query should test refclassid.\n\n>\n> This function should not run queries against servers older than 9.6, because\n> pg_dump emits GRANT/REVOKE for pg_catalog objects only when targeting 9.6 or\n> later. An upgrade from 8.4 to master is failing on the above query:\n>\nFixed.\n\n> The patch adds many lines wider than 78 columns, this being one example. In\n> general, break such lines. (Don't break string literal arguments of\n> ereport(), though.)\nFixed.\n> nm\n\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 21 Jan 2021 01:03:58 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 01:03:58AM +0300, Anastasia Lubennikova wrote:\n> On 03.01.2021 14:29, Noah Misch wrote:\n> >On Thu, Jun 11, 2020 at 07:58:43PM +0300, Anastasia Lubennikova wrote:\n\n> Thank you for the review.\n> New version of the patch is attached, though I haven't tested it properly\n> yet. I am planning to do in a couple of days.\n\nOnce that testing completes, please change the commitfest entry status to\nReady for Committer.\n\n> >>+\tsnprintf(output_path, sizeof(output_path), \"revoke_objects.sql\");\n> >If an upgrade deleted function pg_last_wal_replay_lsn, upgrading a database in\n> >which \"REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public\" happened\n> >requires a GRANT. Can you use a file name that will still make sense if\n> >someone enhances pg_upgrade to generate such GRANT statements?\n> I changed the name to 'fix_system_objects_ACL.sql'. Does it look good to\n> you?\n\nThat name is fine with me.\n\n> > ALTER VIEW pg_locks RENAME COLUMN objid TO \"a.b\";\n> > GRANT SELECT (\"a.b\") ON pg_locks TO backup;\n> >\n> >After which revoke_objects.sql has:\n> >\n> > psql:./revoke_objects.sql:9: ERROR: syntax error at or near \"\") ON pg_catalog.pg_locks.\"\"\n> > LINE 1: REVOKE ALL (b\") ON pg_catalog.pg_locks.\"a FROM \"backup\";\n> >\n> >While that ALTER VIEW probably should have required allow_system_table_mods,\n> >we need to assume such databases exist.\n> \n> I've fixed it by using pg_identify_object_as_address(). Now we don't have to\n> parse obj_identity.\n\nNice.\n\n\n",
"msg_date": "Wed, 20 Jan 2021 20:07:41 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 21.01.2021 07:07, Noah Misch wrote:\n>> On Thu, Jun 11, 2020 at 07:58:43PM +0300, Anastasia Lubennikova wrote:\n>> Thank you for the review.\n>> New version of the patch is attached, though I haven't tested it properly\n>> yet. I am planning to do in a couple of days.\n> Once that testing completes, please change the commitfest entry status to\n> Ready for Committer.\n>\nNew version is attached along with a test script.\n\nTo use it, you can simply run the pg_upgrade_ACL_test.sh script in a \ndirectory that contains the postgres git repository. See comments in the file.\n\nThe test includes a patch to rename several system objects.\n\nRenamed objects are the following:\n- table pg_subscription -> pg_sub\n- columns pg_subscription.subenabled -> subactive,\n- view pg_stat_subscription -> pg_stat_sub\npg_stat_subscription.latest_end_lsn -> latest_end_location\n- function pg_stat_get_subscription -> pg_stat_get_sub\n- language sql -> pgsql\n\n\nCompared to v11, I fixed a couple of minor issues and removed language \nsupport.\nIt seems unlikely that we will ever change a language signature, which \nconsists of the language name only.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 21 Jan 2021 18:42:38 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Jan 21, 2021 at 01:03:58AM +0300, Anastasia Lubennikova wrote:\n> On 03.01.2021 14:29, Noah Misch wrote:\n> >Overall, this patch predicts a subset of cases where pg_dump will emit a\n> >failing GRANT or REVOKE that targets a pg_catalog object. Can you write a\n> >code comment stating what it does and does not detect? I think it's okay to\n> >not predict every failure, but let's record the intended coverage. Here are a\n> >few examples I know; there may be more. The above query only finds GRANTs to\n> >non-pinned roles. pg_dump dumps the following, but the above query doesn't\n> >find them:\n> >\n> > REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public;\n> > GRANT EXECUTE ON FUNCTION pg_reload_conf() TO pg_signal_backend;\n\nI see a new comment listing object types. Please also explain the lack of\npreventing REVOKE failures (first example query above) and the limitation\naround non-pinned roles (second example).\n\n> >The above query should test refclassid.\n\nPlease do so.\n\n> +\t\t\t\t/* Handle table column objects */\n> +\t\t\t\tif (strstr(aclinfo->obj_type, \"column\") != NULL)\n> +\t\t\t\t{\n> +\t\t\t\t\tchar\t *name_pos = strstr(aclinfo->obj_ident,\n> +\t\t\t\t\t\t\t\t\t\t\t\t aclinfo->obj_name);\n\n> +\t\t\t\t\tif (*name_pos == '\\\"')\n> +\t\t\t\t\t\tname_pos--;\n\nThis solves the problem affecting a column named \"a.b\", but it fails for a\ncolumn named \"pg_catalog\" or \"a\"\"b\". I recommend solving this by retrieving\nall three elements of the pg_identify_object_as_address array, then quoting\neach of them on the client side.\n\n\n",
"msg_date": "Sun, 24 Jan 2021 00:39:08 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 24.01.2021 11:39, Noah Misch wrote:\n> On Thu, Jan 21, 2021 at 01:03:58AM +0300, Anastasia Lubennikova wrote:\n>> On 03.01.2021 14:29, Noah Misch wrote:\n>>> Overall, this patch predicts a subset of cases where pg_dump will emit a\n>>> failing GRANT or REVOKE that targets a pg_catalog object. Can you write a\n>>> code comment stating what it does and does not detect? I think it's okay to\n>>> not predict every failure, but let's record the intended coverage. Here are a\n>>> few examples I know; there may be more. The above query only finds GRANTs to\n>>> non-pinned roles. pg_dump dumps the following, but the above query doesn't\n>>> find them:\n>>>\n>>> REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public;\n>>> GRANT EXECUTE ON FUNCTION pg_reload_conf() TO pg_signal_backend;\n> I see a new comment listing object types. Please also explain the lack of\n> preventing REVOKE failures (first example query above) and the limitation\n> around non-pinned roles (second example).\n>\n\n1) Could you please clarify, what do you mean by REVOKE failures?\n\nI tried following example:\n\nStart 9.6 cluster.\n\nREVOKE ALL ON function pg_switch_xlog() FROM public;\nREVOKE ALL ON function pg_switch_xlog() FROM backup;\n\nThe upgrade to the current master passes with and without patch.\nIt seems that current implementation of pg_upgrade doesn't take into \naccount revoke ACLs.\n\n2) As for pinned roles, I think we should fix this behavior, rather than \nadding a comment. 
Because such roles can have grants on system objects.\n\nIn earlier versions of the patch, I gathered ACLs directly from system \ncatalogs: pg_proc.proacl, pg_class.relacl pg_attribute.attacl and \npg_type.typacl.\n\nThe only downside of this approach is that it cannot be easily extended \nto other object types, as we need to handle every object type separately.\nI don't think it is a big deal, as we already do it in \ncheck_for_changed_signatures()\n\nAnd also the query to gather non-standard ACLs won't look as nice as \nnow, because of the need to parse arrays of aclitems.\n\nWhat do you think?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 25 Jan 2021 22:14:43 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Mon, Jan 25, 2021 at 10:14:43PM +0300, Anastasia Lubennikova wrote:\n> On 24.01.2021 11:39, Noah Misch wrote:\n> >On Thu, Jan 21, 2021 at 01:03:58AM +0300, Anastasia Lubennikova wrote:\n> >>On 03.01.2021 14:29, Noah Misch wrote:\n> >>>Overall, this patch predicts a subset of cases where pg_dump will emit a\n> >>>failing GRANT or REVOKE that targets a pg_catalog object. Can you write a\n> >>>code comment stating what it does and does not detect? I think it's okay to\n> >>>not predict every failure, but let's record the intended coverage. Here are a\n> >>>few examples I know; there may be more. The above query only finds GRANTs to\n> >>>non-pinned roles. pg_dump dumps the following, but the above query doesn't\n> >>>find them:\n> >>>\n> >>> REVOKE ALL ON FUNCTION pg_last_wal_replay_lsn FROM public;\n> >>> GRANT EXECUTE ON FUNCTION pg_reload_conf() TO pg_signal_backend;\n> >I see a new comment listing object types. Please also explain the lack of\n> >preventing REVOKE failures (first example query above) and the limitation\n> >around non-pinned roles (second example).\n> \n> 1) Could you please clarify, what do you mean by REVOKE failures?\n> \n> I tried following example:\n> \n> Start 9.6 cluster.\n> \n> REVOKE ALL ON function pg_switch_xlog() FROM public;\n> REVOKE ALL ON function pg_switch_xlog() FROM backup;\n> \n> The upgrade to the current master passes with and without patch.\n> It seems that current implementation of pg_upgrade doesn't take into account\n> revoke ACLs.\n\nI think you can observe it by adding \"revoke all on function\npg_stat_get_subscription from public;\" to test_add_acl_to_catalog_objects.sql\nand then rerunning your test script. pg_dump will reproduce that REVOKE,\nwhich would fail if postgresql.org had removed that function. That's fine, so\nlong as a comment mentions the limitation.\n\n> 2) As for pinned roles, I think we should fix this behavior, rather than\n> adding a comment. 
Because such roles can have grants on system objects.\n> \n> In earlier versions of the patch, I gathered ACLs directly from system\n> catalogs: pg_proc.proacl, pg_class.relacl pg_attribute.attacl and\n> pg_type.typacl.\n> \n> The only downside of this approach is that it cannot be easily extended to\n> other object types, as we need to handle every object type separately.\n> I don't think it is a big deal, as we already do it in\n> check_for_changed_signatures()\n> \n> And also the query to gather non-standard ACLs won't look as nice as now,\n> because of the need to parse arrays of aclitems.\n> \n> What do you think?\n\nHard to say for certain without seeing the code both ways. I'm not generally\nenthusiastic about adding pg_upgrade code to predict whether the dump will\nfail to restore, because such code will never be as good as just trying the\nrestore. The patch has 413 lines of code, which is substantial. I would balk\nif, for example, the patch tripled in size to catch more cases.\n\n\n",
"msg_date": "Wed, 27 Jan 2021 03:21:35 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 27.01.2021 14:21, Noah Misch wrote:\n> On Mon, Jan 25, 2021 at 10:14:43PM +0300, Anastasia Lubennikova wrote:\n>\n>> 1) Could you please clarify, what do you mean by REVOKE failures?\n>>\n>> I tried following example:\n>>\n>> Start 9.6 cluster.\n>>\n>> REVOKE ALL ON function pg_switch_xlog() FROM public;\n>> REVOKE ALL ON function pg_switch_xlog() FROM backup;\n>>\n>> The upgrade to the current master passes with and without patch.\n>> It seems that current implementation of pg_upgrade doesn't take into account\n>> revoke ACLs.\n> I think you can observe it by adding \"revoke all on function\n> pg_stat_get_subscription from public;\" to test_add_acl_to_catalog_objects.sql\n> and then rerunning your test script. pg_dump will reproduce that REVOKE,\n> which would fail if postgresql.org had removed that function. That's fine, so\n> long as a comment mentions the limitation.\n\nIn the updated patch, I implemented generation of both GRANT ALL and \nREVOKE ALL for problematic objects. If I understand it correctly, these \ncalls will clean object's ACL completely. And I see no harm in doing \nthis, because the objects don't exist in the new cluster anyway.\n\nTo test it I attempted to reproduce the problem, using attached \ntest_revoke.sql in the test. Still, pg_upgrade works fine without any \nadjustments. I'd be grateful if you test it some more.\n\n>\n>> 2) As for pinned roles, I think we should fix this behavior, rather than\n>> adding a comment. 
Because such roles can have grants on system objects.\n>>\n>> In earlier versions of the patch, I gathered ACLs directly from system\n>> catalogs: pg_proc.proacl, pg_class.relacl pg_attribute.attacl and\n>> pg_type.typacl.\n>>\n>> The only downside of this approach is that it cannot be easily extended to\n>> other object types, as we need to handle every object type separately.\n>> I don't think it is a big deal, as we already do it in\n>> check_for_changed_signatures()\n>>\n>> And also the query to gather non-standard ACLs won't look as nice as now,\n>> because of the need to parse arrays of aclitems.\n>>\n>> What do you think?\n> Hard to say for certain without seeing the code both ways. I'm not generally\n> enthusiastic about adding pg_upgrade code to predict whether the dump will\n> fail to restore, because such code will never be as good as just trying the\n> restore. The patch has 413 lines of code, which is substantial. I would balk\n> if, for example, the patch tripled in size to catch more cases.\n\nAgree.\nI attempted to write alternative version, but it seems too complicated. \nSo I just added a comment about this limitation.\n\nQuoting of table column GRANT/REVOKE statements is fixed in this version.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 27 Jan 2021 19:32:42 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Wed, Jan 27, 2021 at 07:32:42PM +0300, Anastasia Lubennikova wrote:\n> On 27.01.2021 14:21, Noah Misch wrote:\n> >On Mon, Jan 25, 2021 at 10:14:43PM +0300, Anastasia Lubennikova wrote:\n> >\n> >>1) Could you please clarify, what do you mean by REVOKE failures?\n> >>\n> >>I tried following example:\n> >>\n> >>Start 9.6 cluster.\n> >>\n> >>REVOKE ALL ON function pg_switch_xlog() FROM public;\n> >>REVOKE ALL ON function pg_switch_xlog() FROM backup;\n> >>\n> >>The upgrade to the current master passes with and without patch.\n> >>It seems that current implementation of pg_upgrade doesn't take into account\n> >>revoke ACLs.\n> >I think you can observe it by adding \"revoke all on function\n> >pg_stat_get_subscription from public;\" to test_add_acl_to_catalog_objects.sql\n> >and then rerunning your test script. pg_dump will reproduce that REVOKE,\n> >which would fail if postgresql.org had removed that function. That's fine, so\n> >long as a comment mentions the limitation.\n> \n> In the updated patch, I implemented generation of both GRANT ALL and REVOKE\n> ALL for problematic objects. If I understand it correctly, these calls will\n> clean object's ACL completely. And I see no harm in doing this, because the\n> objects don't exist in the new cluster anyway.\n> \n> To test it I attempted to reproduce the problem, using attached\n> test_revoke.sql in the test. Still, pg_upgrade works fine without any\n> adjustments. I'd be grateful if you test it some more.\n\ntest_revoke.sql has \"revoke all on function pg_stat_get_subscription() from\ntest\", which does nothing. You need something that causes a REVOKE in pg_dump\noutput. 
Worked example:\n\n=== shell script\nset -x\ncreateuser test\npg_dump | grep -E 'GRANT|REVOKE'\npsql -Xc 'revoke all on function pg_stat_get_subscription() from test;'\npsql -Xc 'revoke all on function pg_stat_get_subscription from test;'\npg_dump | grep -E 'GRANT|REVOKE'\npsql -Xc 'revoke all on function pg_stat_get_subscription from public;'\npg_dump | grep -E 'GRANT|REVOKE'\n\n=== output\n+ createuser test\n+ pg_dump\n+ grep -E 'GRANT|REVOKE'\n+ psql -Xc 'revoke all on function pg_stat_get_subscription() from test;'\nERROR: function pg_stat_get_subscription() does not exist\n+ psql -Xc 'revoke all on function pg_stat_get_subscription from test;'\nREVOKE\n+ pg_dump\n+ grep -E 'GRANT|REVOKE'\n+ psql -Xc 'revoke all on function pg_stat_get_subscription from public;'\nREVOKE\n+ pg_dump\n+ grep -E 'GRANT|REVOKE'\nREVOKE ALL ON FUNCTION pg_catalog.pg_stat_get_subscription(subid oid, OUT subid oid, OUT relid oid, OUT pid integer, OUT received_lsn pg_lsn, OUT last_msg_send_time timestamp with time zone, OUT last_msg_receipt_time timestamp with time zone, OUT latest_end_lsn pg_lsn, OUT latest_end_time timestamp with time zone) FROM PUBLIC;\n\nThat REVOKE is going to fail if the upgrade target cluster lacks the function\nin question, and your test_rename_catalog_objects_v13 does simulate\npg_stat_get_subscription not existing.\n\n\n",
"msg_date": "Wed, 27 Jan 2021 22:55:08 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On 28.01.2021 09:55, Noah Misch wrote:\n> On Wed, Jan 27, 2021 at 07:32:42PM +0300, Anastasia Lubennikova wrote:\n>> On 27.01.2021 14:21, Noah Misch wrote:\n>>> On Mon, Jan 25, 2021 at 10:14:43PM +0300, Anastasia Lubennikova wrote:\n>>>\n>>>> 1) Could you please clarify, what do you mean by REVOKE failures?\n>>>>\n>>>> I tried following example:\n>>>>\n>>>> Start 9.6 cluster.\n>>>>\n>>>> REVOKE ALL ON function pg_switch_xlog() FROM public;\n>>>> REVOKE ALL ON function pg_switch_xlog() FROM backup;\n>>>>\n>>>> The upgrade to the current master passes with and without patch.\n>>>> It seems that current implementation of pg_upgrade doesn't take into account\n>>>> revoke ACLs.\n>>> I think you can observe it by adding \"revoke all on function\n>>> pg_stat_get_subscription from public;\" to test_add_acl_to_catalog_objects.sql\n>>> and then rerunning your test script. pg_dump will reproduce that REVOKE,\n>>> which would fail if postgresql.org had removed that function. That's fine, so\n>>> long as a comment mentions the limitation.\n>> In the updated patch, I implemented generation of both GRANT ALL and REVOKE\n>> ALL for problematic objects. If I understand it correctly, these calls will\n>> clean object's ACL completely. And I see no harm in doing this, because the\n>> objects don't exist in the new cluster anyway.\n>>\n>> To test it I attempted to reproduce the problem, using attached\n>> test_revoke.sql in the test. Still, pg_upgrade works fine without any\n>> adjustments. I'd be grateful if you test it some more.\n> test_revoke.sql has \"revoke all on function pg_stat_get_subscription() from\n> test\", which does nothing. You need something that causes a REVOKE in pg_dump\n> output. Worked example:\n> ...\nIt took a while to debug new version.\nNow it detects and fixes both GRANT and REVOKE privileges on the catalog \nobjects.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 11 Feb 2021 20:16:30 +0300",
"msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "On Thu, Feb 11, 2021 at 08:16:30PM +0300, Anastasia Lubennikova wrote:\n> On 28.01.2021 09:55, Noah Misch wrote:\n> >On Wed, Jan 27, 2021 at 07:32:42PM +0300, Anastasia Lubennikova wrote:\n> >>On 27.01.2021 14:21, Noah Misch wrote:\n> >>>On Mon, Jan 25, 2021 at 10:14:43PM +0300, Anastasia Lubennikova wrote:\n> >>>>1) Could you please clarify, what do you mean by REVOKE failures?\n> >>>>\n> >>>>I tried following example:\n> >>>>\n> >>>>Start 9.6 cluster.\n> >>>>\n> >>>>REVOKE ALL ON function pg_switch_xlog() FROM public;\n> >>>>REVOKE ALL ON function pg_switch_xlog() FROM backup;\n> >>>>\n> >>>>The upgrade to the current master passes with and without patch.\n> >>>>It seems that current implementation of pg_upgrade doesn't take into account\n> >>>>revoke ACLs.\n> >>>I think you can observe it by adding \"revoke all on function\n> >>>pg_stat_get_subscription from public;\" to test_add_acl_to_catalog_objects.sql\n> >>>and then rerunning your test script. pg_dump will reproduce that REVOKE,\n> >>>which would fail if postgresql.org had removed that function. That's fine, so\n> >>>long as a comment mentions the limitation.\n\nI've now tested exactly that. It didn't cause a test script failure, because\npg_upgrade_ACL_test.sh runs \"pg_upgrade --check\" but does not run the final\npg_upgrade (without --check). The failure happens only without --check. I've\nattached a modified version of your test script that reproduces the problem.\nIt uses a different function, timenow(), so the test does not depend on\ntest_rename_catalog_objects_v14. 
The attached script fails with this output:\n\n===\nConsult the last few lines of \"pg_upgrade_dump_13325.log\" for\nthe probable cause of the failure.\nFailure, exiting\n===\n\nThat log file contains:\n\n===\ncommand: \"/tmp/workdir/postgresql_bin_test_new/bin/pg_dump\" --host /home/nm/src/pg/backbranch/extra --port 50432 --username nm --schema-only --quote-all-identifiers --binary-upgrade --format=custom --file=\"pg_upgrade_dump_13325.custom\" 'dbname=postgres' >> \"pg_upgrade_dump_13325.log\" 2>&1\n\n\ncommand: \"/tmp/workdir/postgresql_bin_test_new/bin/pg_restore\" --host /home/nm/src/pg/backbranch/extra --port 50432 --username nm --clean --create --exit-on-error --verbose --dbname template1 \"pg_upgrade_dump_13325.custom\" >> \"pg_upgrade_dump_13325.log\" 2>&1\npg_restore: connecting to database for restore\npg_restore: dropping DATABASE PROPERTIES postgres\npg_restore: dropping DATABASE postgres\npg_restore: creating DATABASE \"postgres\"\npg_restore: connecting to new database \"postgres\"\npg_restore: creating COMMENT \"DATABASE \"postgres\"\"\npg_restore: creating DATABASE PROPERTIES \"postgres\"\npg_restore: connecting to new database \"postgres\"\npg_restore: creating pg_largeobject \"pg_largeobject\"\npg_restore: creating ACL \"pg_catalog.FUNCTION \"timenow\"()\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 3047; 0 0 ACL FUNCTION \"timenow\"() nm\npg_restore: error: could not execute query: ERROR: function pg_catalog.timenow() does not exist\nCommand was: REVOKE ALL ON FUNCTION \"pg_catalog\".\"timenow\"() FROM PUBLIC;\n===\n\n> >>In the updated patch, I implemented generation of both GRANT ALL and REVOKE\n> >>ALL for problematic objects. If I understand it correctly, these calls will\n> >>clean object's ACL completely. 
And I see no harm in doing this, because the\n> >>objects don't exist in the new cluster anyway.\n\nHere's fix_system_objects_ACL.sql with your v14 test script:\n\n===\n\\connect postgres\n GRANT ALL ON function pg_catalog.pg_stat_get_subscription(pg_catalog.oid) TO \"backup\" ;\n REVOKE ALL ON function pg_catalog.pg_stat_get_subscription(pg_catalog.oid) FROM \"backup\" ;\n GRANT ALL ON pg_catalog.pg_stat_subscription TO \"backup\" ;\n REVOKE ALL ON pg_catalog.pg_stat_subscription FROM \"backup\" ;\n GRANT ALL ON pg_catalog.pg_subscription TO \"backup\",\"test\" ;\n REVOKE ALL ON pg_catalog.pg_subscription FROM \"backup\",\"test\" ;\n GRANT ALL (subenabled) ON pg_catalog.pg_subscription TO \"backup\" ;\n REVOKE ALL (subenabled) ON pg_catalog.pg_subscription FROM \"backup\" ;\n===\n\nConsidering the REVOKE statements, those new GRANT statements have no effect.\nTo prevent the final pg_upgrade failure, fix_system_objects_ACL.sql would need\n\"GRANT ALL ON FUNCTION pg_catalog.pg_stat_get_subscription(pg_catalog.oid) TO\nPUBLIC;\". Alternately, again, I don't mind if this case continues to fail, so\nlong as a comment mentions the limitation. How would you like to proceed?\n\nOne other thing:\n\n> diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c\n> index 43fc297eb6..f2d593f574 100644\n> --- a/src/bin/pg_upgrade/check.c\n> +++ b/src/bin/pg_upgrade/check.c\n...\n> +check_for_changed_signatures(void)\n> +{\n...\n> +\t\t/*\n> +\t\t *\n> +\t\t * AclInfo array is sorted by obj_ident. 
This allows us to compare\n> +\t\t * AclInfo entries with the query result above efficiently.\n> +\t\t */\n> +\t\tfor (aclnum = 0; aclnum < dbinfo->non_def_acl_arr.nacls; aclnum++)\n> +\t\t{\n> +\t\t\tAclInfo\t *aclinfo = &dbinfo->non_def_acl_arr.aclinfos[aclnum];\n> +\t\t\tbool\t\treport = false;\n> +\n> +\t\t\twhile (objnum < ntups)\n> +\t\t\t{\n> +\t\t\t\tint\t\t\tret;\n> +\n> +\t\t\t\tret = strcmp(aclinfo->obj_ident,\n> +\t\t\t\t\t\t\t PQgetvalue(res, objnum, i_obj_ident));\n\nI think this should check obj_type in addition to obj_ident. Otherwise, it\ncan't distinguish a relation from a type of the same name.\n\nnm",
"msg_date": "Mon, 8 Mar 2021 23:25:47 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
},
{
"msg_contents": "This patch has been marked Waiting on Author since early March, with the thread\nstalled since then. I'm marking this CF entry Returned with Feedback, please\nfeel free to resubmit it if/when a new version of the patch is available.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 26 Nov 2021 13:46:22 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade fails with non-standard ACL"
}
] |
[
{
"msg_contents": "Previous discussion: \nhttps://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop\n\nThis patch introduces a way to ask a memory context how much memory it\ncurrently has allocated. Each time a new block (not an individual\nchunk, but a new malloc'd block) is allocated, it increments a struct\nmember; and each time a block is free'd, it decrements the struct\nmember. So it tracks blocks allocated by malloc, not what is actually\nused for chunks allocated by palloc.\n\nThe purpose is for Memory Bounded Hash Aggregation, but it may be\nuseful in more cases as we try to better adapt to and control memory\nusage at execution time.\n\nI ran the same tests as Robert did before[1] on my laptop[2]. The only\ndifference is that I also set max_parallel_workers[_per_gather]=0 to be\nsure. I did 5 runs, alternating between memory-accounting and master,\nand I got the following results for \"elapsed\" (as reported by\ntrace_sort):\n\n\nregression=# select version, min(s), max(s), percentile_disc(0.5)\nwithin group (order by s) as median, avg(s)::numeric(10,2) from tmp\ngroup by version;\n version | min | max | median | avg\n-------------------+-------+-------+--------+-------\n master | 13.92 | 14.40 | 14.06 | 14.12\n memory accounting | 13.43 | 14.46 | 14.11 | 14.09\n(2 rows)\n\n\nSo, I don't see any significant performance impact for this patch in\nthis test. That may be because:\n\n* It was never really significant except on PPC64.\n* I changed it to only update mem_allocated for the current context,\nnot recursively for all parent contexts. 
It's now up to the function\nthat reports memory usage to recurse or not (as needed).\n* A lot of changes to sort have happened since that time, so perhaps\nit's not a good test of memory contexts any more.\n\npgbench didn't show any slowdown either.\n\nI also did another test with hash aggregation that uses significant\nmemory (t10m is a table with 10M distinct values and work_mem is 1GB):\n\npostgres=# select (select (i, count(*)) from t10m group by i having\ncount(*) > n) from (values(1),(2),(3),(4),(5)) as s(n);\n\nI didn't see any noticeable difference there, either.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://postgr.es/m/CA%2BTgmobnu7XEn1gRdXnFo37P79bF%3DqLt46%3D37ajP3Cro9dBRaA%40mail.gmail.com\n[2] Linux jdavis 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24\nUTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"msg_date": "Thu, 18 Jul 2019 11:24:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Memory Accounting"
},
{
"msg_contents": "On 18/07/2019 21:24, Jeff Davis wrote:\n> Previous discussion:\n> https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop\n> \n> This patch introduces a way to ask a memory context how much memory it\n> currently has allocated. Each time a new block (not an individual\n> chunk, but a new malloc'd block) is allocated, it increments a struct\n> member; and each time a block is free'd, it decrements the struct\n> member. So it tracks blocks allocated by malloc, not what is actually\n> used for chunks allocated by palloc.\n> \n> The purpose is for Memory Bounded Hash Aggregation, but it may be\n> useful in more cases as we try to better adapt to and control memory\n> usage at execution time.\n\nSeems handy.\n\n> * I changed it to only update mem_allocated for the current context,\n> not recursively for all parent contexts. It's now up to the function\n> that reports memory usage to recurse or not (as needed).\n\nIs that OK for memory bounded hash aggregation? Might there be a lot of \nsub-contexts during hash aggregation?\n\n- Heikki\n\n\n",
"msg_date": "Mon, 22 Jul 2019 11:30:37 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Mon, Jul 22, 2019 at 11:30:37AM +0300, Heikki Linnakangas wrote:\n>On 18/07/2019 21:24, Jeff Davis wrote:\n>>Previous discussion:\n>>https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop\n>>\n>>This patch introduces a way to ask a memory context how much memory it\n>>currently has allocated. Each time a new block (not an individual\n>>chunk, but a new malloc'd block) is allocated, it increments a struct\n>>member; and each time a block is free'd, it decrements the struct\n>>member. So it tracks blocks allocated by malloc, not what is actually\n>>used for chunks allocated by palloc.\n>>\n>>The purpose is for Memory Bounded Hash Aggregation, but it may be\n>>useful in more cases as we try to better adapt to and control memory\n>>usage at execution time.\n>\n>Seems handy.\n>\n\nIndeed.\n\n>>* I changed it to only update mem_allocated for the current context,\n>>not recursively for all parent contexts. It's now up to the function\n>>that reports memory usage to recurse or not (as needed).\n>\n>Is that OK for memory bounded hash aggregation? Might there be a lot \n>of sub-contexts during hash aggregation?\n>\n\nThere shouldn't be, at least not since b419865a814abbc. There might be\ncases where custom aggregates still do that, but I think that's simply a\ndesign we should discourage.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 22 Jul 2019 18:16:58 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Mon, 2019-07-22 at 18:16 +0200, Tomas Vondra wrote:\n> > > * I changed it to only update mem_allocated for the current\n> > > context,\n> > > not recursively for all parent contexts. It's now up to the\n> > > function\n> > > that reports memory usage to recurse or not (as needed).\n> > \n> > Is that OK for memory bounded hash aggregation? Might there be a\n> > lot \n> > of sub-contexts during hash aggregation?\n> > \n> \n> There shouldn't be, at least not since b419865a814abbc. There might\n> be\n> cases where custom aggregates still do that, but I think that's\n> simply a\n> design we should discourage.\n\nRight, I don't think memory-context-per-group is something we should\noptimize for.\n\nDiscussion:\nhttps://www.postgresql.org/message-id/3839201.Nfa2RvcheX%40techfox.foxi\nhttps://www.postgresql.org/message-id/5334D7A5.2000907%40fuzzy.cz\n\nCommit link:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b419865a814abbca12bdd6eef6a3d5ed67f432e1\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 22 Jul 2019 09:33:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 11:24 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> Previous discussion:\n> https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop\n>\n> This patch introduces a way to ask a memory context how much memory it\n> currently has allocated. Each time a new block (not an individual\n> chunk, but a new malloc'd block) is allocated, it increments a struct\n> member; and each time a block is free'd, it decrements the struct\n> member. So it tracks blocks allocated by malloc, not what is actually\n> used for chunks allocated by palloc.\n>\n>\nCool! I like how straight-forward this approach is. It seems easy to\nbuild on, as well.\n\nAre there cases where we are likely to palloc a lot without needing to\nmalloc in a certain memory context? For example, do we have a pattern\nwhere, for some kind of memory intensive operator, we might palloc in\na per tuple context and consistently get chunks without having to\nmalloc and then later, were we to try and check the bytes allocated\nfor one of these per tuple contexts to decide on some behavior, the\nnumber would not be representative?\n\nI think that is basically what Heikki is asking about with HashAgg,\nbut I wondered if there were other cases that you had already thought\nthrough where this might happen.\n\n\n> The purpose is for Memory Bounded Hash Aggregation, but it may be\n> useful in more cases as we try to better adapt to and control memory\n> usage at execution time.\n>\n>\nThis approach seems like it would be good for memory intensive\noperators which use a large, representative context. I think the\nHashTableContext for HashJoin might be one?\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 23 Jul 2019 18:18:26 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 06:18:26PM -0700, Melanie Plageman wrote:\n>On Thu, Jul 18, 2019 at 11:24 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n>> Previous discussion:\n>> https://postgr.es/m/1407012053.15301.53.camel%40jeff-desktop\n>>\n>> This patch introduces a way to ask a memory context how much memory it\n>> currently has allocated. Each time a new block (not an individual\n>> chunk, but a new malloc'd block) is allocated, it increments a struct\n>> member; and each time a block is free'd, it decrements the struct\n>> member. So it tracks blocks allocated by malloc, not what is actually\n>> used for chunks allocated by palloc.\n>>\n>>\n>Cool! I like how straight-forward this approach is. It seems easy to\n>build on, as well.\n>\n>Are there cases where we are likely to palloc a lot without needing to\n>malloc in a certain memory context? For example, do we have a pattern\n>where, for some kind of memory intensive operator, we might palloc in\n>a per tuple context and consistently get chunks without having to\n>malloc and then later, where we to try and check the bytes allocated\n>for one of these per tuple contexts to decide on some behavior, the\n>number would not be representative?\n>\n\nI think there's plenty of places where we quickly get into a state with\nenough chunks in the freelist - the per-tuple contexts are a good\nexample of that, I think.\n\n>I think that is basically what Heikki is asking about with HashAgg,\n>but I wondered if there were other cases that you had already thought\n>through where this might happen.\n>\n\nI think Heikki was asking about places with a lot of sub-contexts, which is a\ncompletely different issue. It used to be the case that some aggregates\ncreated a separate context for each group - like array_agg. 
That would\nmake Jeff's approach to accounting rather inefficient, because checking\nhow much memory is used would be very expensive (having to loop over a\nlarge number of contexts).\n\n>\n>> The purpose is for Memory Bounded Hash Aggregation, but it may be\n>> useful in more cases as we try to better adapt to and control memory\n>> usage at execution time.\n>>\n>>\n>This approach seems like it would be good for memory intensive\n>operators which use a large, representative context. I think the\n>HashTableContext for HashJoin might be one?\n>\n\nYes, that might be a good candidate (and it would be much simpler than\nthe manual accounting we use now).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 23:52:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "I wanted to address a couple of questions Jeff asked me off-list about\nGreenplum's implementations of Memory Accounting.\n\nGreenplum has two memory accounting sub-systems -- one is the\nMemoryContext-based system proposed here.\nThe other memory accounting system tracks \"logical\" memory owners in\nglobal accounts. For example, every node in a plan has an account,\nhowever, there are other execution contexts, such as the parser, which\nhave their own logical memory accounts.\nNotably, this logical memory account system also tracks chunks instead\nof blocks.\n\nThe rationale for tracking memory at the logical owner level was that\nmemory for a logical owner may allocate memory across multiple\ncontexts and a single context may contain memory belonging to several\nof these logical owners.\n\nMore compellingly, many of the allocations done during execution are\ndone directly in the per query or per tuple context--as opposed to\nbeing done in their own uniquely named context. Arguably, this is\nbecause those logical owners (a Result node, for example) are not\nmemory-intensive and thus do not require specific memory accounting.\nHowever, when debugging a memory leak or OOM, the specificity of\nlogical owner accounts was seen as desirable. A discrepancy between\nmemory allocated and memory freed in the per query context doesn't\nprovide a lot of hints as to the source of the leak.\nAt the least, there was no meaningful way to represent MemoryContext\naccount balances in EXPLAIN ANALYZE. Where would the TopMemoryContext\nbe represented, for example.\n\nAlso, by using logical accounts, each node in the plan could be\nassigned a quota at plan time--because many memory intensive operators\nwill not have relinquished the memory they hold when other nodes are\nexecuting (e.g. Materialize)--so, instead of granting each node\nwork_mem, work_mem is divided up into quotas for each operator in a\nparticular way. 
This was meant to pave the way for work_mem\nenforcement. This is a topic that has come up in various ways in other\nforums. For example, in the XPRS thread, the discussion of erroring\nout for queries with no \"escape mechanism\" brought up by Thomas Munro\n[1] is a kind of work_mem enforcement (this discussion was focused\nmore on a proposed session-level memory setting, but it is still\nenforcement of a memory setting).\nIt was also discussed at PGCon this year in an unconference session on\nOOM-detection and debugging, runaway query termination, and\nsession-level memory consumption tracking [2].\n\nThe motivation for tracking chunks instead of blocks was to understand\nthe \"actual\" memory consumption of different components in the\ndatabase. Then, eventually, memory consumption patterns would emerge\nand improvements could be made to memory allocation strategies to suit\ndifferent use cases--perhaps other implementations of the\nMemoryContext API similar to Slab and Generation were envisioned.\nApparently, it did lead to the discovery of some memory fragmentation\nissues which were tuned.\n\nI bring these up not just to answer Jeff's question but also to\nprovide some anecdotal evidence that the patch here is a good base for\nother memory accounting and tracking schemes.\n\nEven if HashAgg is the only initial consumer of the memory accounting\nframework, we know that tuplesort can make use of it in its current\nstate as well. 
And, if another node or component requires\nchunk-tracking, they could implement a different MemoryContext API\nimplementation which uses the MemoryContextData->mem_allocated field\nto track chunks instead of blocks by tracking chunks in its alloc/free\nfunctions.\n\nIdeas like logical memory accounting could leverage the mem_allocated\nfield and build on top of it.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKGJEMT7SSZRqt-knu_3iLkdscBCe9M2nrhC259FdE5bX7g%40mail.gmail.com\n[2] https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Unconference",
"msg_date": "Wed, 18 Sep 2019 14:32:13 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Wed, 2019-09-18 at 13:50 -0700, Soumyadeep Chakraborty wrote:\n> Hi Jeff,\n\nHi Soumyadeep and Melanie,\n\nThank you for the review!\n\n> max_stack_depth\tmax level\tlazy (ms)\teager (ms)\t(eager/lazy)\n> 2MB\t82\t302.715\t427.554\t1.4123978\n> 3MB\t3474\t567.829\t896.143\t1.578191674\n> 7.67MB\t8694\t2657.972\t4903.063\t1.844663149\n\nThank you for collecting data on this. Were you able to find any\nregression when compared to no memory accounting at all?\n\nIt looks like you agree with the approach and the results. Did you find\nany other issues with the patch?\n\nI am also including Robert in this thread. He had some concerns the\nlast time around due to a small regression on POWER.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:00:37 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:52:28PM +0200, Tomas Vondra wrote:\n> I think Heikki was asking about places with a lot of sub-contexts, which a\n> completely different issue. It used to be the case that some aggregates\n> created a separate context for each group - like array_agg. That would\n> make Jeff's approach to accounting rather inefficient, because checking\n> how much memory is used would be very expensive (having to loop over a\n> large number of contexts).\n\nThe patch has been marked as ready for committer for a week or so, but\nit seems to me that this comment has not been addressed, no? Are we\nsure that we want this method if it proves to be inefficient when\nthere are many sub-contexts and shouldn't we at least test such\nscenarios with a worst-case, customly-made, function?\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 14:21:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 02:21:40PM +0900, Michael Paquier wrote:\n>On Wed, Jul 24, 2019 at 11:52:28PM +0200, Tomas Vondra wrote:\n>> I think Heikki was asking about places with a lot of sub-contexts, which a\n>> completely different issue. It used to be the case that some aggregates\n>> created a separate context for each group - like array_agg. That would\n>> make Jeff's approach to accounting rather inefficient, because checking\n>> how much memory is used would be very expensive (having to loop over a\n>> large number of contexts).\n>\n>The patch has been marked as ready for committer for a week or so, but\n>it seems to me that this comment has not been addressed, no? Are we\n>sure that we want this method if it proves to be inefficient when\n>there are many sub-contexts and shouldn't we at least test such\n>scenarios with a worst-case, customly-made, function?\n\nI don't think so.\n\nAggregates creating many memory contexts (context for each group) was\ndiscussed extensively in the thread about v11 [1] in 2015. And back then\nthe conclusion was that that's a pretty awful pattern anyway, as it uses\nmuch more memory (no cross-context freelists), and has various other\nissues. In a way, those aggregates are wrong and should be fixed just\nlike we fixed array_agg/string_agg (even without the memory accounting).\n\nThe way I see it we can do either eager or lazy accounting. Eager might\nwork better for aggregates with many contexts, but it does increase the\noverhead for the \"regular\" aggregates with just one or two contexts.\nConsidering how rare those many-context aggregates are (I'm not aware of\nany such aggregate at the moment), it seems reasonable to pick the lazy\naccounting.\n\n(Note: Another factor affecting the lazy vs. eager efficiency is the\nnumber of palloc/pfree calls vs. 
calls to determine amount of mem used,\nbut that's mostly orthogonal and we can ignore it here).\n\nSo I think the approach Jeff ended up with is sensible - certainly as a\nfirst step. We may improve it in the future, of course, once we have\nmore practical experience.\n\nBarring objections, I do plan to get this committed by the end of this\nCF (i.e. sometime later this week).\n\n[1] https://www.postgresql.org/message-id/1434311039.4369.39.camel%40jeff-desktop\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 24 Sep 2019 14:05:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:00 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2019-09-18 at 13:50 -0700, Soumyadeep Chakraborty wrote:\n> > Hi Jeff,\n>\n> Hi Soumyadeep and Melanie,\n>\n> Thank you for the review!\n>\n> > max_stack_depth max level lazy (ms) eager (ms) (eager/lazy)\n> > 2MB 82 302.715 427.554 1.4123978\n> > 3MB 3474 567.829 896.143 1.578191674\n> > 7.67MB 8694 2657.972 4903.063 1.844663149\n>\n> Thank you for collecting data on this. Were you able to find any\n> regression when compared to no memory accounting at all?\n>\n>\nWe didn't spend much time comparing performance with and without\nmemory accounting, as it seems like this was discussed extensively in\nthe previous thread.\n\n\n> It looks like you agree with the approach and the results. Did you find\n> any other issues with the patch?\n>\n\nWe didn't observe any other problems with the patch and agree with the\napproach. It is a good start.\n\n\n>\n> I am also including Robert in this thread. He had some concerns the\n> last time around due to a small regression on POWER.\n>\n\nI think it would be helpful if we could repeat the performance tests\nRobert did on that machine with the current patch (unless this version\nof the patch is exactly the same as the ones he tested previously).\n\nThanks,\nSoumyadeep & Melanie",
"msg_date": "Tue, 24 Sep 2019 11:46:49 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 02:05:51PM +0200, Tomas Vondra wrote:\n> The way I see it we can do either eager or lazy accounting. Eager might\n> work better for aggregates with many contexts, but it does increase the\n> overhead for the \"regular\" aggregates with just one or two contexts.\n> Considering how rare those many-context aggregates are (I'm not aware of\n> any such aggregate at the moment), it seems reasonable to pick the lazy\n> accounting.\n\nOkay.\n\n> So I think the approach Jeff ended up with sensible - certainly as a\n> first step. We may improve it in the future, of course, once we have\n> more practical experience.\n> \n> Barring objections, I do plan to get this committed by the end of this\n> CF (i.e. sometime later this week).\n\nSounds good to me. Though I have not looked at the patch in details,\nthe arguments are sensible. Thanks for confirming.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 09:47:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 11:46:49AM -0700, Melanie Plageman wrote:\n>On Thu, Sep 19, 2019 at 11:00 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n>> On Wed, 2019-09-18 at 13:50 -0700, Soumyadeep Chakraborty wrote:\n>> > Hi Jeff,\n>>\n>> Hi Soumyadeep and Melanie,\n>>\n>> Thank you for the review!\n>>\n>> > max_stack_depth max level lazy (ms) eager (ms)\n>> (eage\n>> > r/lazy)\n>> > 2MB 82 302.715 427.554 1.4123978\n>> > 3MB 3474 567.829 896.143 1.578191674\n>> > 7.67MB 8694 2657.972 4903.063 1.844663149\n>>\n>> Thank you for collecting data on this. Were you able to find any\n>> regression when compared to no memory accounting at all?\n>>\n>>\n>We didn't spend much time comparing performance with and without\n>memory accounting, as it seems like this was discussed extensively in\n>the previous thread.\n>\n>\n>> It looks like you agree with the approach and the results. Did you find\n>> any other issues with the patch?\n>>\n>\n>We didn't observe any other problems with the patch and agree with the\n>approach. It is a good start.\n>\n>\n>>\n>> I am also including Robert in this thread. He had some concerns the\n>> last time around due to a small regression on POWER.\n>>\n>\n>I think it would be helpful if we could repeat the performance tests\n>Robert did on that machine with the current patch (unless this version\n>of the patch is exactly the same as the ones he tested previously).\n>\n\nI agree that would be nice, but I don't have access to any Power machine\nsuitable for this kind of benchmarking :-( Robert, any chance you still\nhave access to that machine?\n\nIt's worth mentioning that those bechmarks (I'm assuming we're talking\nabout the numbers Rober shared in [1]) were done on patches that used\nthe eager accounting approach (i.e. 
walking all parent contexts and\nupdating the accounting for them).\n\nI'm pretty sure the current \"lazy accounting\" patches don't have that\nissue, so unless someone objects and/or can show numbers demonstrating\nI'wrong I'll stick to my plan to get this committed soon.\n\nregards\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmobnu7XEn1gRdXnFo37P79bF%3DqLt46%3D37ajP3Cro9dBRaA%40mail.gmail.com#3e2dc9e70a9f9eb2d695ab94a580c5a2\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 21:22:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:\n> It's worth mentioning that those bechmarks (I'm assuming we're\n> talking\n> about the numbers Rober shared in [1]) were done on patches that used\n> the eager accounting approach (i.e. walking all parent contexts and\n> updating the accounting for them).\n> \n> I'm pretty sure the current \"lazy accounting\" patches don't have that\n> issue, so unless someone objects and/or can show numbers\n> demonstrating\n> I'wrong I'll stick to my plan to get this committed soon.\n\nThat was my conclusion, as well.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 26 Sep 2019 13:36:46 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:\n>On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:\n>> It's worth mentioning that those bechmarks (I'm assuming we're\n>> talking\n>> about the numbers Rober shared in [1]) were done on patches that used\n>> the eager accounting approach (i.e. walking all parent contexts and\n>> updating the accounting for them).\n>>\n>> I'm pretty sure the current \"lazy accounting\" patches don't have that\n>> issue, so unless someone objects and/or can show numbers\n>> demonstrating\n>> I'wrong I'll stick to my plan to get this committed soon.\n>\n>That was my conclusion, as well.\n>\n\nI was about to commit the patch, but during the final review I've\nnoticed two places that I think are bugs:\n\n1) aset.c (AllocSetDelete)\n--------------------------\n\n#ifdef CLOBBER_FREED_MEMORY\n wipe_mem(block, block->freeptr - ((char *) block));\n#endif\n\n if (block != set->keeper)\n {\n context->mem_allocated -= block->endptr - ((char *) block);\n free(block);\n }\n\n2) generation.c (GenerationReset)\n---------------------------------\n\n#ifdef CLOBBER_FREED_MEMORY\n wipe_mem(block, block->blksize);\n#endif\n context->mem_allocated -= block->blksize;\n\n\nNotice that when CLOBBER_FREED_MEMORY is defined, the code first calls\nwipe_mem and then accesses fields of the (wiped) block. Interesringly\nenough, the regression tests don't seem to exercise these bits - I've\ntried adding elog(ERROR) and it still passes. For (2) that's not very\nsurprising because Generation context is only really used in logical\ndecoding (and we don't delete the context I think). 
Not sure about (1)\nbut it might be because AllocSetReset does the right thing and only\nleaves behind the keeper block.\n\nI'm pretty sure a custom function calling the contexts explicitly would\nfall over, but I haven't tried.\n\nAside from that, I've repeated the REINDEX benchmarks done by Robert in\n[1] with different scales on two different machines, and I've measured\nno difference. Both machines are x86_64, I don't have access to any\nPower machine at the moment, unfortunately.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmobnu7XEn1gRdXnFo37P79bF%3DqLt46%3D37ajP3Cro9dBRaA%40mail.gmail.com\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Sep 2019 00:12:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Sun, Sep 29, 2019 at 12:12:49AM +0200, Tomas Vondra wrote:\n>On Thu, Sep 26, 2019 at 01:36:46PM -0700, Jeff Davis wrote:\n>>On Thu, 2019-09-26 at 21:22 +0200, Tomas Vondra wrote:\n>>>It's worth mentioning that those bechmarks (I'm assuming we're\n>>>talking\n>>>about the numbers Rober shared in [1]) were done on patches that used\n>>>the eager accounting approach (i.e. walking all parent contexts and\n>>>updating the accounting for them).\n>>>\n>>>I'm pretty sure the current \"lazy accounting\" patches don't have that\n>>>issue, so unless someone objects and/or can show numbers\n>>>demonstrating\n>>>I'wrong I'll stick to my plan to get this committed soon.\n>>\n>>That was my conclusion, as well.\n>>\n>\n>I was about to commit the patch, but during the final review I've\n>noticed two places that I think are bugs:\n>\n>1) aset.c (AllocSetDelete)\n>--------------------------\n>\n>#ifdef CLOBBER_FREED_MEMORY\n> wipe_mem(block, block->freeptr - ((char *) block));\n>#endif\n>\n> if (block != set->keeper)\n> {\n> context->mem_allocated -= block->endptr - ((char *) block);\n> free(block);\n> }\n>\n>2) generation.c (GenerationReset)\n>---------------------------------\n>\n>#ifdef CLOBBER_FREED_MEMORY\n> wipe_mem(block, block->blksize);\n>#endif\n> context->mem_allocated -= block->blksize;\n>\n>\n>Notice that when CLOBBER_FREED_MEMORY is defined, the code first calls\n>wipe_mem and then accesses fields of the (wiped) block. Interesringly\n>enough, the regression tests don't seem to exercise these bits - I've\n>tried adding elog(ERROR) and it still passes. For (2) that's not very\n>surprising because Generation context is only really used in logical\n>decoding (and we don't delete the context I think). 
Not sure about (1)\n>but it might be because AllocSetReset does the right thing and only\n>leaves behind the keeper block.\n>\n>I'm pretty sure a custom function calling the contexts explicitly would\n>fall over, but I haven't tried.\n>\n\nOh, and one more thing - this probably needs to add at least some basic \nexplanation of the accounting to src/backend/mmgr/README.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Sep 2019 00:22:06 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Sun, 2019-09-29 at 00:22 +0200, Tomas Vondra wrote:\n> Notice that when CLOBBER_FREED_MEMORY is defined, the code first\n> > calls\n> > wipe_mem and then accesses fields of the (wiped) block.\n> > Interesringly\n> > enough, the regression tests don't seem to exercise these bits -\n> > I've\n> > tried adding elog(ERROR) and it still passes. For (2) that's not\n> > very\n> > surprising because Generation context is only really used in\n> > logical\n> > decoding (and we don't delete the context I think). Not sure about\n> > (1)\n> > but it might be because AllocSetReset does the right thing and only\n> > leaves behind the keeper block.\n> > \n> > I'm pretty sure a custom function calling the contexts explicitly\n> > would\n> > fall over, but I haven't tried.\n> > \n\nFixed.\n\nI tested with some custom use of memory contexts. The reason\nAllocSetDelete() didn't fail before is that most memory contexts use\nthe free lists (the list of free memory contexts, not the free list of\nchunks), so you need to specify a non-default minsize in order to\nprevent that and trigger the bug.\n\nAllocSetReset() worked, but it was reading the header of the keeper\nblock after wiping the contents of the keeper block. It technically\nworked, because the header of the keeper block was not wiped, but it\nseems more clear to explicitly save the size of the keeper block. In\nAllocSetDelete(), saving the keeper size is required, because it wipes\nthe block headers in addition to the contents.\n\n> Oh, and one more thing - this probably needs to add at least some\n> basic \n> explanation of the accounting to src/backend/mmgr/README.\n\nAdded.\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 30 Sep 2019 13:34:13 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 01:34:13PM -0700, Jeff Davis wrote:\n>On Sun, 2019-09-29 at 00:22 +0200, Tomas Vondra wrote:\n>> Notice that when CLOBBER_FREED_MEMORY is defined, the code first\n>> > calls\n>> > wipe_mem and then accesses fields of the (wiped) block.\n>> > Interesringly\n>> > enough, the regression tests don't seem to exercise these bits -\n>> > I've\n>> > tried adding elog(ERROR) and it still passes. For (2) that's not\n>> > very\n>> > surprising because Generation context is only really used in\n>> > logical\n>> > decoding (and we don't delete the context I think). Not sure about\n>> > (1)\n>> > but it might be because AllocSetReset does the right thing and only\n>> > leaves behind the keeper block.\n>> >\n>> > I'm pretty sure a custom function calling the contexts explicitly\n>> > would\n>> > fall over, but I haven't tried.\n>> >\n>\n>Fixed.\n>\n>I tested with some custom use of memory contexts. The reason\n>AllocSetDelete() didn't fail before is that most memory contexts use\n>the free lists (the list of free memory contexts, not the free list of\n>chunks), so you need to specify a non-default minsize in order to\n>prevent that and trigger the bug.\n>\n>AllocSetReset() worked, but it was reading the header of the keeper\n>block after wiping the contents of the keeper block. It technically\n>worked, because the header of the keeper block was not wiped, but it\n>seems more clear to explicitly save the size of the keeper block. In\n>AllocSetDelete(), saving the keeper size is required, because it wipes\n>the block headers in addition to the contents.\n>\n\nOK, looks good to me now.\n\n>> Oh, and one more thing - this probably needs to add at least some\n>> basic\n>> explanation of the accounting to src/backend/mmgr/README.\n>\n>Added.\n>\n\nWell, I've meant a couple of paragraphs explaining the motivation, and\nrelevant trade-offs considered. So I've written a brief summary of the\ndesign as I understand it and pushed it like that. 
Of course, if you\ncould proof-read it, that'd be good.\n\nI had a bit of a hard time deciding who to list as a reviewer - this\npatch started sometime in ~2015, and it was initially discussed as part\nof the larger hashagg effort, with plenty of people discussing various\nancient versions of the patch. In the end I've included just people from\nthe current thread. If that omits important past reviews, I'm sorry.\n\nFor the record, results of the benchmarks I've done over the past couple\nof days are in [1]. It includes both the reindex benchmark used by by\nRobert in 2015, and a regular read-only pgbench. In general, the\noverhead of the accounting is pretty much indistinguishable from noise\n(at least on those two machines).\n\nIn any case, thanks for the perseverance working on this.\n\n\n[1] https://github.com/tvondra/memory-accounting-benchmarks\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 1 Oct 2019 04:11:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "So ... why exactly did this patch define MemoryContextData.mem_allocated\nas int64? That seems to me to be doubly wrong: it is not the right width\non 32-bit machines, and it is not the right signedness anywhere. I think\nthat field ought to be of type Size (a/k/a size_t, but memnodes.h always\ncalls it Size).\n\nI let this pass when the patch went in, but now I'm on the warpath\nabout it, because since c477f3e449 went in, some of the 32-bit buildfarm\nmembers are failing with\n\n2019-10-04 00:41:56.569 CEST [66916:86] pg_regress/_int LOG: statement: CREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\nTRAP: FailedAssertion(\"total_allocated == context->mem_allocated\", File: \"aset.c\", Line: 1533)\n2019-10-04 00:42:25.505 CEST [63836:11] LOG: server process (PID 66916) was terminated by signal 6: Abort trap\n2019-10-04 00:42:25.505 CEST [63836:12] DETAIL: Failed process was running: CREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\n\nWhat I think is happening is that c477f3e449 allowed this bit in\nAllocSetRealloc:\n\n\tcontext->mem_allocated += blksize - oldblksize;\n\nto be executed in situations where blksize < oldblksize, where before\nthat was not possible. Of course blksize and oldblksize are of type\nSize, hence unsigned, so the subtraction result underflows in this\ncase. 
If mem_allocated is of the same width as Size then this does\nnot matter because the final result wraps around to the proper value,\nbut if we're going to allow mem_allocated to be wider than Size then\nwe will need more logic here to add or subtract as appropriate.\n\n(I'm not quite sure why we're not seeing this failure on *all* the\n32-bit machines; maybe there's some other factor involved?)\n\nI see no value in defining mem_allocated to be wider than Size.\nYes, the C standard contemplates the possibility that the total\navailable address space is larger than the largest chunk you can\never ask malloc for; but nobody has built a platform like that in\nthis century, and they sucked to program on back in the dark ages\nwhen they did exist. (I speak from experience.) I do not think\nwe need to design Postgres to allow for that.\n\nLikewise, there's no evident value in allowing mem_allocated\nto be negative.\n\nI haven't chased down exactly what else would need to change.\nIt might be that s/int64/Size/g throughout the patch is\nsufficient, but I haven't analyzed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Oct 2019 00:36:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:\n>So ... why exactly did this patch define MemoryContextData.mem_allocated\n>as int64? That seems to me to be doubly wrong: it is not the right width\n>on 32-bit machines, and it is not the right signedness anywhere. I think\n>that field ought to be of type Size (a/k/a size_t, but memnodes.h always\n>calls it Size).\n>\n\nYeah, I think that's an oversight. Maybe there's a reason why Jeff used\nint64, but I can't think of any.\n\n>I let this pass when the patch went in, but now I'm on the warpath\n>about it, because since c477f3e449 went in, some of the 32-bit buildfarm\n>members are failing with\n>\n>2019-10-04 00:41:56.569 CEST [66916:86] pg_regress/_int LOG: statement: CREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\n>TRAP: FailedAssertion(\"total_allocated == context->mem_allocated\", File: \"aset.c\", Line: 1533)\n>2019-10-04 00:42:25.505 CEST [63836:11] LOG: server process (PID 66916) was terminated by signal 6: Abort trap\n>2019-10-04 00:42:25.505 CEST [63836:12] DETAIL: Failed process was running: CREATE INDEX text_idx on test__int using gist ( a gist__int_ops );\n>\n>What I think is happening is that c477f3e449 allowed this bit in\n>AllocSetRealloc:\n>\n>\tcontext->mem_allocated += blksize - oldblksize;\n>\n>to be executed in situations where blksize < oldblksize, where before\n>that was not possible. Of course blksize and oldblksize are of type\n>Size, hence unsigned, so the subtraction result underflows in this\n>case. 
If mem_allocated is of the same width as Size then this does\n>not matter because the final result wraps around to the proper value,\n>but if we're going to allow mem_allocated to be wider than Size then\n>we will need more logic here to add or subtract as appropriate.\n>\n>(I'm not quite sure why we're not seeing this failure on *all* the\n>32-bit machines; maybe there's some other factor involved?)\n>\n\nInteresting failure mode (especially that it does *not* fail on some\n32-bit machines).\n\n>I see no value in defining mem_allocated to be wider than Size.\n>Yes, the C standard contemplates the possibility that the total\n>available address space is larger than the largest chunk you can\n>ever ask malloc for; but nobody has built a platform like that in\n>this century, and they sucked to program on back in the dark ages\n>when they did exist. (I speak from experience.) I do not think\n>we need to design Postgres to allow for that.\n>\n>Likewise, there's no evident value in allowing mem_allocated\n>to be negative.\n>\n>I haven't chased down exactly what else would need to change.\n>It might be that s/int64/Size/g throughout the patch is\n>sufficient, but I haven't analyzed it.\n>\n\nI think so too, but I'll take a closer look in the afternoon, unless you\nbeat me to it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 4 Oct 2019 10:26:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 10:26:44AM +0200, Tomas Vondra wrote:\n>On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:\n>>I haven't chased down exactly what else would need to change.\n>>It might be that s/int64/Size/g throughout the patch is\n>>sufficient, but I haven't analyzed it.\n>>\n>\n>I think so too, but I'll take a closer look in the afternoon, unless you\n>beat me to it.\n>\n\nI've pushed a fix changing the type to Size, splitting the mem_allocated\nto two separate updates (to prevent any underflows in the subtraction).\nHopefully this fixes the 32-bit machines ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 4 Oct 2019 16:29:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Fri, 2019-10-04 at 10:26 +0200, Tomas Vondra wrote:\n> On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:\n> > So ... why exactly did this patch define\n> > MemoryContextData.mem_allocated\n> > as int64? That seems to me to be doubly wrong: it is not the right\n> > width\n> > on 32-bit machines, and it is not the right signedness anywhere. I\n> > think\n> > that field ought to be of type Size (a/k/a size_t, but memnodes.h\n> > always\n> > calls it Size).\n> > \n> \n> Yeah, I think that's an oversight. Maybe there's a reason why Jeff\n> used\n> int64, but I can't think of any.\n\nI had chosen a 64-bit value to account for the situation Tom mentioned:\nthat, in theory, Size might not be large enough to represent all\nallocations in a memory context. Apparently, that theoretical situation\nis not worth being concerned about.\n\nThe patch has been floating around for a very long time, so I don't\nremember exactly why I chose a signed value. Sorry.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 04 Oct 2019 07:32:21 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Fri, 2019-10-04 at 10:26 +0200, Tomas Vondra wrote:\n>> Yeah, I think that's an oversight. Maybe there's a reason why Jeff\n>> used int64, but I can't think of any.\n\n> I had chosen a 64-bit value to account for the situation Tom mentioned:\n> that, in theory, Size might not be large enough to represent all\n> allocations in a memory context. Apparently, that theoretical situation\n> is not worth being concerned about.\n\nWell, you could also argue it the other way: maybe in our children's\ntime, int64 won't be as wide as Size. (Yeah, I know that sounds\nridiculous, but needing pointers wider than 32 bits was a ridiculous\nidea too when I started in this business.)\n\nThe committed fix seems OK to me except that I think you should've\nalso changed MemoryContextMemAllocated() to return Size.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Oct 2019 10:43:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Oct 04, 2019 at 12:36:01AM -0400, Tom Lane wrote:\n>> What I think is happening is that c477f3e449 allowed this bit in\n>> AllocSetRealloc:\n>> context->mem_allocated += blksize - oldblksize;\n>> to be executed in situations where blksize < oldblksize, where before\n>> that was not possible.\n>> ...\n>> (I'm not quite sure why we're not seeing this failure on *all* the\n>> 32-bit machines; maybe there's some other factor involved?)\n\n> Interesting failure mode (especially that it does *not* fail on some\n> 32-bit machines).\n\nJust to make things even more mysterious, prairiedog finally showed\nthe Assert failure on its fourth run with c477f3e449 included:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2019-10-04%2012%3A35%3A41\n\nIt's also now apparent that lapwing and locust were failing only\nsometimes, as well. I totally don't understand why that failure\nwould've been only partially reproducible. Maybe we should dig\na bit harder, rather than just deciding that we fixed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Oct 2019 11:41:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "I wrote:\n> Just to make things even more mysterious, prairiedog finally showed\n> the Assert failure on its fourth run with c477f3e449 included:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2019-10-04%2012%3A35%3A41\n> It's also now apparent that lapwing and locust were failing only\n> sometimes, as well. I totally don't understand why that failure\n> would've been only partially reproducible. Maybe we should dig\n> a bit harder, rather than just deciding that we fixed it.\n\nI did that digging, reproducing the problem on florican's host\n(again intermittently). Here's a stack trace from the spot where\nwe sometimes downsize a large chunk:\n\n#5 0x0851c70a in AllocSetRealloc (context=0x31b35000, pointer=0x319e5020, \n size=1224) at aset.c:1158\n#6 0x085232eb in repalloc (pointer=0x319e5020, size=1224) at mcxt.c:1082\n#7 0x31b69591 in resize_intArrayType (num=300, a=0x319e5020)\n at _int_tool.c:268\n#8 resize_intArrayType (a=0x319e5020, num=300) at _int_tool.c:250\n#9 0x31b6995d in _int_unique (r=0x319e5020) at _int_tool.c:329\n#10 0x31b66a00 in g_int_union (fcinfo=0xffbfcc5c) at _int_gist.c:146\n#11 0x084ff9c4 in FunctionCall2Coll (flinfo=0x319e2bb4, collation=100, \n arg1=834250780, arg2=4290759884) at fmgr.c:1162\n#12 0x080db3a3 in gistMakeUnionItVec (giststate=0x319e2820, itvec=0x31bae4a4, \n len=15, attr=0xffbfce5c, isnull=0xffbfcedc) at gistutil.c:204\n#13 0x080e410d in gistunionsubkeyvec (giststate=giststate@entry=0x319e2820, \n itvec=itvec@entry=0x31bb5ed4, gsvp=gsvp@entry=0xffbfcd4c) at gistsplit.c:64\n#14 0x080e416f in gistunionsubkey (giststate=giststate@entry=0x319e2820, \n itvec=itvec@entry=0x31bb5ed4, spl=spl@entry=0xffbfce3c) at gistsplit.c:91\n#15 0x080e4456 in gistSplitByKey (r=<optimized out>, page=<optimized out>, \n itup=<optimized out>, len=<optimized out>, giststate=<optimized out>, \n v=<optimized out>, attno=<optimized out>) at gistsplit.c:689\n#16 0x080d8797 in gistSplit (r=0x31bbb424, 
page=0x297e0b80 \"\", \n itup=0x31bb5ed4, len=16, giststate=0x319e2820) at gist.c:1432\n#17 0x080d8d9c in gistplacetopage (rel=<optimized out>, \n freespace=<optimized out>, giststate=<optimized out>, \n buffer=<optimized out>, itup=<optimized out>, ntup=<optimized out>, \n oldoffnum=<optimized out>, newblkno=<optimized out>, \n leftchildbuf=<optimized out>, splitinfo=<optimized out>, \n markfollowright=<optimized out>, heapRel=<optimized out>, \n is_build=<optimized out>) at gist.c:299\n\nSo the potential downsize is expected, triggered by _int_unique\nbeing able to remove some duplicate entries from a GIST union key.\nAFAICT the fact that it happens only intermittently must boil down\nto the randomized insertion choices that gistchoose() sometimes makes.\n\nIn short: nothing to see here, move along.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Oct 2019 14:37:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Fri, Oct 4, 2019 at 7:32 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> The patch has been floating around for a very long time, so I don't\n> remember exactly why I chose a signed value. Sorry.\n\nI am reminded of the fact that int64 is used to size buffers within\ntuplesort.c, because it needs to support negative availMem sizes --\nwhen huge allocations were first supported, tuplesort.c briefly used\n\"Size\", which didn't work. Perhaps it had something to do with that.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 4 Oct 2019 11:43:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Oct 4, 2019 at 7:32 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>> The patch has been floating around for a very long time, so I don't\n>> remember exactly why I chose a signed value. Sorry.\n\n> I am reminded of the fact that int64 is used to size buffers within\n> tuplesort.c, because it needs to support negative availMem sizes --\n> when huge allocations were first supported, tuplesort.c briefly used\n> \"Size\", which didn't work. Perhaps it had something to do with that.\n\nI wonder if we should make that use ssize_t instead. Probably\nnot worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Oct 2019 14:58:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 2:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I think it would be helpful if we could repeat the performance tests\n> Robert did on that machine with the current patch (unless this version\n> of the patch is exactly the same as the ones he tested previously).\n\nI still have access to a POWER machine, but it's currently being used\nby somebody else for a benchmarking project, so I can't test this\nimmediately.\n\nIt's probably worth noting that, in addition to whatever's changed in\nthis patch, tuplesort.c's memory management has been altered\nsignificantly since 2015 - see\n0011c0091e886b874e485a46ff2c94222ffbf550 and\ne94568ecc10f2638e542ae34f2990b821bbf90ac. I'm not immediately sure how\nthat would affect the REINDEX case that I tested back then, but it\nseems at least possible that they would have the effect of making\npalloc overhead less significant. More generally, so much of the\nsorting machinery has been overhauled by Peter Geoghegan since then\nthat what happens now may just not be very comparable to what happened\nback then.\n\nI do agree that this approach looks pretty light weight. Tomas's point\nabout the difference between updating only the current context and\nupdating all parent contexts seems right on target to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Oct 2019 16:11:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory Accounting"
}
] |
[
{
"msg_contents": "Hi all,\n(Peter Eisentraut in CC)\n\ncrake has just complained about a failure with the LDAP test suite:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2019-07-19%2001%3A33%3A31\n\n# Running: /usr/sbin/slapd -f\n/home/bf/bfr/root/HEAD/pgsql.build/src/test/ldap/tmp_check/slapd.conf\n -h ldap://localhost:55306 ldaps://localhost:55307\n# loading LDAP data\n# Running: ldapadd -x -y\n/home/bf/bfr/root/HEAD/pgsql.build/src/test/ldap/tmp_check/ldappassword\n-f authdata.ldif\nldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)\n\nAnd FWIW, when running a parallel check-world to make my laptop busy\nenough, it is rather usual to face failures with this test, which is\nannoying. Shouldn't we have at least a number of retries with\nintermediate sleeps for the commands run in 001_auth.pl? As far as I\nrecall, I think that we can run into failures when calling ldapadd and\nldappasswd.\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 12:30:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 3:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> # Running: /usr/sbin/slapd -f\n> /home/bf/bfr/root/HEAD/pgsql.build/src/test/ldap/tmp_check/slapd.conf\n> -h ldap://localhost:55306 ldaps://localhost:55307\n> # loading LDAP data\n> # Running: ldapadd -x -y\n> /home/bf/bfr/root/HEAD/pgsql.build/src/test/ldap/tmp_check/ldappassword\n> -f authdata.ldif\n> ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)\n>\n> And FWIW, when running a parallel check-world to make my laptop busy\n> enough, it is rather usual to face failures with this test, which is\n> annoying. Shouldn't we have at least a number of retries with\n> intermediate sleeps for the commands run in 001_auth.pl? As far as I\n> recall, I think that we can run into failures when calling ldapadd and\n> ldappasswd.\n\nYeah, it seems we need to figure out a way to wait for it to be ready\nto accept connections. I wondered how other people do this, and found\none example that polls for the .pid file:\n\nhttps://github.com/tiredofit/docker-openldap/blob/master/install/etc/cont-init.d/10-openldap#L347\n\nThat looks nice and tidy but I'm not sure it can be trusted, given\nthat OpenLDAP's own tests poll a trivial ldapsearch (also based on a\ncursory glance at slapd/main.c which appears to write the .pid file a\nbit too early, though I may have misread):\n\nhttps://github.com/openldap/openldap/blob/master/tests/scripts/test039-glue-ldap-concurrency#L59\n\nI guess we should do that too. I don't know how to write Perl but I'll try...\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 15:52:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 3:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I guess we should do that too. I don't know how to write Perl but I'll try...\n\nDoes this look about right?\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 24 Jul 2019 16:41:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 04:41:05PM +1200, Thomas Munro wrote:\n> On Wed, Jul 24, 2019 at 3:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I guess we should do that too. I don't know how to write Perl but I'll try...\n> \n> Does this look about right?\n\nSome comments from here. I have not tested the patch.\n\nI would recommend using TestLib::system_log instead of plain system().\nThe command should be a list of arguments with one element per\nargument (see call of system_log in PostgresNode.pm for example). The\nindentation is incorrect, and I would make the retry window longer, as\nI get the feeling that on slow machines we could still have issues. We\nalso usually tend to increase the timeout up to 5 minutes, and the\nsleep phases make use of Time::HiRes::usleep.\n--\nMichael",
"msg_date": "Wed, 24 Jul 2019 14:26:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 5:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > Does this look about right?\n>\n> Some comments from here. I have not tested the patch.\n>\n> I would recommend using TestLib::system_log instead of plain system().\n> The command should be a list of arguments with one element per\n> argument (see call of system_log in PostgresNode.pm for example). The\n> indentation is incorrect, and that I would make the retry longer as I\n> got the feeling that on slow machines we could still have issues. We\n> also usually tend to increase the timeout up to 5 minutes, and the\n> sleep phases make use of Time::HiRes::usleep.\n\nThanks, here's v2.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 24 Jul 2019 17:47:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 05:47:13PM +1200, Thomas Munro wrote:\n> Thanks, here's v2.\n\nPerhaps this worked on freebsd? Now that I test it, the test gets\nstuck on my Debian box:\n# waiting for slapd to accept requests...\n# Running: ldapsearch -h localhost -p 49534 -s base -b\ndc=example,dc=net -n 'objectclass=*'\nSASL/DIGEST-MD5 authentication started\nPlease enter your password:\nldap_sasl_interactive_bind_s: Invalid credentials (49)\n additional info: SASL(-13): user not found: no secret in\n\tdatabase\n\npgperltidy complains about the patch indentation using perltidy\nv20170521 (version mentioned in tools/pgindent/README).\n--\nMichael",
"msg_date": "Wed, 24 Jul 2019 16:50:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 7:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps this worked on freebsd? Now that I test it, the test gets\n> stuck on my Debian box:\n> # waiting for slapd to accept requests...\n> # Running: ldapsearch -h localhost -p 49534 -s base -b\n> dc=example,dc=net -n 'objectclass=*'\n> SASL/DIGEST-MD5 authentication started\n> Please enter your password:\n> ldap_sasl_interactive_bind_s: Invalid credentials (49)\n> additional info: SASL(-13): user not found: no secret in\n> database\n\nHuh, yeah, I don't know why slapd requires credentials on Debian, when\nthe version that ships with FreeBSD is OK with an anonymous\nconnection. Rather than worrying about that, I just adjusted it to\nsupply the credentials. It works on both for me.\n\n> pgperltidy complains about the patch indentation using perltidy\n> v20170521 (version mentioned in tools/pgindent/README).\n\nFixed.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Wed, 24 Jul 2019 21:01:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 09:01:47PM +1200, Thomas Munro wrote:\n> Huh, yeah, I don't know why slapd requires credentials on Debian, when\n> the version that ships with FreeBSD is OK with an anonymous\n> connection. Rather than worrying about that, I just adjusted it to\n> supply the credentials. It works on both for me.\n\nThanks for the updated patch, this looks good. I have done a series\nof tests keeping my laptop busy and I haven't seen a failure where I\nusually see problems 10%~20% of the time. So that seems to help,\nthanks!\n--\nMichael",
"msg_date": "Thu, 25 Jul 2019 09:51:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 12:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Thanks for the updated patch, this looks good. I have done a series\n> of tests keeping my laptop busy and I haven't seen a failure where I\n> usually see problems 10%~20% of the time. So that seems to help,\n> thanks!\n\nPushed, thanks.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2019 10:18:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 10:18:48AM +1200, Thomas Munro wrote:\n> Pushed, thanks.\n\nThanks for fixing! I'll update this thread if there are still some\nproblems.\n--\nMichael",
"msg_date": "Fri, 26 Jul 2019 08:44:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: On the stability of TAP tests for LDAP"
}
] |
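The fix that was committed for the thread above polls a trivial ldapsearch with short sleeps until slapd accepts requests, with a timeout of a few minutes. The shape of that wait loop can be sketched as follows; the function name and parameters here are illustrative, not the actual TAP-test API (which is Perl and uses Time::HiRes::usleep).

```python
import time

def wait_until_ready(probe, timeout=300.0, interval=0.1):
    """Poll `probe` (a callable returning True once the service accepts
    requests) until it succeeds or `timeout` seconds elapse.  For the LDAP
    case, `probe` would run a trivial ldapsearch against the test server
    and report whether its exit status was zero."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)  # brief sleep between attempts
    return False
```

Polling an actual client command (rather than the .pid file) matches what OpenLDAP's own test suite does, per the links upthread.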
[
{
"msg_contents": "Hi all,\n\nIs there any way to create a named portal except cursor in PG?\nI tried postgres-jdbc driver and use PrepareStatement. Backend could\nreceive `bind` and `execute` message, but the portal name is still empty.\nHow can I specify the portal name?\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Fri, 19 Jul 2019 11:57:55 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "How to create named portal except cursor?"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 8:58 PM Hubert Zhang <hzhang@pivotal.io> wrote:\n\n> Is there any way to create a named portal except cursor in PG?\n> I tried postgres-jdbc driver and use PrepareStatement. Backend could\n> receive `bind` and `execute` message, but the portal name is still empty.\n> How can I specify the portal name?\n>\n\nThis seems like a question better posed to the JDBC list as opposed to\n-hackers. The protocol can technically (from the description of\nparse+bind) handle what you describe but whether this specific client\nprovides access to the capability is outside the scope of this list. I\nwill say that there doesn't seem to be a way in pure SQL to use a prepared\nstatement and have it create a named portal (PREPARE not allowing DECLARE\nto be the underlying statement).\n\nYou may wish to describe what you are trying to do at a higher level. If\nit is something like the above you might find a solution to your\nunspecified problem by using cursors within a pl/pgsql function.\n\nDavid J.",
"msg_date": "Thu, 18 Jul 2019 22:42:54 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to create named portal except cursor?"
}
] |
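As noted in the reply above, the capability exists at the protocol level: in the frontend/backend protocol, the Bind ('B') message carries an explicit destination-portal name in addition to the prepared-statement name. The sketch below only encodes that one message to show where the portal name goes on the wire (text format is assumed for all parameters and results); it is not a usable client, and whether a given driver exposes this is up to the driver.

```python
import struct

def bind_message(portal, statement, params=()):
    """Encode a frontend Bind ('B') message naming the destination portal.
    Simplified sketch: zero format codes (i.e. text format throughout)."""
    body = portal.encode() + b"\x00" + statement.encode() + b"\x00"
    body += struct.pack("!h", 0)            # number of parameter format codes
    body += struct.pack("!h", len(params))  # number of parameter values
    for p in params:
        if p is None:
            body += struct.pack("!i", -1)   # -1 length denotes NULL
        else:
            data = str(p).encode()
            body += struct.pack("!i", len(data)) + data
    body += struct.pack("!h", 0)            # number of result format codes
    # Message length includes itself but not the leading type byte.
    return b"B" + struct.pack("!i", len(body) + 4) + body
```

An empty portal string selects the unnamed portal, which is what most drivers send by default.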
[
{
"msg_contents": "Hi\n\nI find suspicious code in libpq's PQconnectPoll(). I think it should be\nfixed, but I could not reproduce a concrete problem.\nWhat do you think about it?\n\nI understand that PQconnectPoll() returns PGRES_POLLING_WRITING or\nPGRES_POLLING_READING until the state machine reaches a terminal state (OK or BAD).\nThe return value indicates for users which event they should wait for\nbefore the next PQconnectPoll().\nBut PQconnectPoll() calls PQsendQuery(\"SHOW transaction_read_only\")\nin CONNECTION_AUTH_OK without returning PGRES_POLLING_WRITING before.\n\n\nMy idea is as follows:\n\ncase CONNECTION_AWAITING_RESPONSE:\n receive authentication OK, transit the state machine to AUTH_OK and\n return PGRES_POLLING_READING.\n\ncase CONNECTION_AUTH_OK:\n clear any data from the backend using PQisBusy(), transit to\n CHECK_WRITABLE_STARTED (new state!), and return PGRES_POLLING_WRITING.\n\ncase CONNECTION_CHECK_WRITABLE_STARTED (new state!):\n call PQsendQuery(\"SHOW transaction_read_only\"), transit to\n CONNECTION_CHECK_WRITABLE, and return PGRES_POLLING_READING.\n\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Fri, 19 Jul 2019 05:07:04 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "A suspicious code in PQconnectPoll()"
}
] |
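The three-step proposal in the message above can be modeled as a small transition table: each step names the next connection state and what the caller should wait for before calling PQconnectPoll() again. This is a hypothetical sketch of the *proposed* behavior (CHECK_WRITABLE_STARTED is a suggested new state, not an existing libpq one), not libpq's actual code.

```python
# Polling directions a caller of PQconnectPoll() can be told to wait for.
POLL_READING, POLL_WRITING, POLL_OK = "reading", "writing", "ok"

# (current state) -> (next state, what the caller should wait for first).
TRANSITIONS = {
    "AWAITING_RESPONSE":      ("AUTH_OK", POLL_READING),           # auth OK received
    "AUTH_OK":                ("CHECK_WRITABLE_STARTED", POLL_WRITING),
    "CHECK_WRITABLE_STARTED": ("CHECK_WRITABLE", POLL_READING),    # SHOW query sent
    "CHECK_WRITABLE":         ("OK", POLL_OK),                     # result consumed
}

def connect_poll(state):
    """One step of the hypothetical state machine."""
    return TRANSITIONS[state]
```

The key property of the proposal is visible in the table: the client is told to wait for write-readiness (POLL_WRITING) *before* the "SHOW transaction_read_only" query is sent, rather than having the send happen unannounced.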
[
{
"msg_contents": "Hi all,\n\nJust browsing through the logs of the buildfarm, I have noticed that\nsome buildfarm animals complain with warnings (jacana uses MinGW):\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=jacana&dt=2019-07-19%2001%3A45%3A28&stg=make\n\nThere are two of them:\nc:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/backend/port/win32/mingwcompat.c:60:1:\nwarning: 'RegisterWaitForSingleObject' redeclared without dllimport\nattribute: previous dllimport ignored [-Wattributes]\n\nc:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/bin/pg_basebackup/pg_basebackup.c:1448:8:\nwarning: variable 'filemode' set but not used\n[-Wunused-but-set-variable]\nJul 18 21:59:49 int filemode;\n\nThe first one has been discussed already some time ago and is a cause\nof 811be893, but nothing got actually fixed (protagonists in CC):\nhttps://www.postgresql.org/message-id/CABUevEyeZfUvaYMuNop3NyRvvRh2Up2tStK8SXVAPDERf8p9eg@mail.gmail.com\n\nThe second one is rather obvious to fix, because we don't care about\nthe file mode on Windows, so the attached should do the work. I am\nactually surprised that the Visual Studio compilers don't complain\nabout that, but let's fix it.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 14:08:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Compiler warnings with MinGW"
},
{
"msg_contents": "\nOn 7/19/19 1:08 AM, Michael Paquier wrote:\n> Hi all,\n>\n> Just browsing through the logs of the buildfarm, I have noticed that\n> some buildfarm animals complain with warnings (jacana uses MinGW):\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=jacana&dt=2019-07-19%2001%3A45%3A28&stg=make\n>\n> There are two of them:\n> c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/backend/port/win32/mingwcompat.c:60:1:\n> warning: 'RegisterWaitForSingleObject' redeclared without dllimport\n> attribute: previous dllimport ignored [-Wattributes]\n>\n> c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/bin/pg_basebackup/pg_basebackup.c:1448:8:\n> warning: variable 'filemode' set but not used\n> [-Wunused-but-set-variable]\n> Jul 18 21:59:49 int filemode;\n>\n> The first one has been discussed already some time ago and is a cause\n> of 811be893, but nothing got actually fixed (protagonists in CC):\n> https://www.postgresql.org/message-id/CABUevEyeZfUvaYMuNop3NyRvvRh2Up2tStK8SXVAPDERf8p9eg@mail.gmail.com\n\n\nTo answer Magnus' question in that thread, the Mingw headers on jacana\ndeclare the function with WINBASEAPI which in turn is defined as\nDECLSPEC_IMPORT, as long as _KERNEL32_ isn't defined, and we don't do\nthat (and I don't think anything else does either).\n\n\nSo the fix Peter proposed looks like it should be correct.\n\n\n\n>\n> The second one is rather obvious to fix, because we don't care about\n> the file mode on Windows, so the attached should do the work. I am\n> actually surprised that the Visual Studio compilers don't complain\n> about that, but let's fix it.\n>\n> Thoughts?\n\n\n+1.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 08:41:28 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 08:41:28AM -0400, Andrew Dunstan wrote:\n> On 7/19/19 1:08 AM, Michael Paquier wrote:\n>> The second one is rather obvious to fix, because we don't care about\n>> the file mode on Windows, so the attached should do the work. I am\n>> actually surprised that the Visual Studio compilers don't complain\n>> about that, but let's fix it.\n>>\n>> Thoughts?\n> \n> +1.\n\nJust wanted to double-check something. We usually don't bother\nback-patching warning fixes like this one, right?\n--\nMichael",
"msg_date": "Sat, 20 Jul 2019 18:19:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On Sat, Jul 20, 2019 at 06:19:34PM +0900, Michael Paquier wrote:\n> Just wanted to double-check something. We usually don't bother\n> back-patching warning fixes like this one, right?\n\nI have double-checked the thing, and applied it only on HEAD as we\nhave that for some time (since 9.1 actually and 00cdd83 has improved\nthe original situation here).\n--\nMichael",
"msg_date": "Sun, 21 Jul 2019 22:47:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On 2019-07-19 14:41, Andrew Dunstan wrote:\n> On 7/19/19 1:08 AM, Michael Paquier wrote:\n>> Just browsing through the logs of the buildfarm, I have noticed that\n>> some buildfarm animals complain with warnings (jacana uses MinGW):\n>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=jacana&dt=2019-07-19%2001%3A45%3A28&stg=make\n>>\n>> There are two of them:\n>> c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/backend/port/win32/mingwcompat.c:60:1:\n>> warning: 'RegisterWaitForSingleObject' redeclared without dllimport\n>> attribute: previous dllimport ignored [-Wattributes]\n>>\n>> The first one has been discussed already some time ago and is a cause\n>> of 811be893, but nothing got actually fixed (protagonists in CC):\n>> https://www.postgresql.org/message-id/CABUevEyeZfUvaYMuNop3NyRvvRh2Up2tStK8SXVAPDERf8p9eg@mail.gmail.com\n> \n> To answer Magnus' question in that thread, the Mingw headers on jacana\n> declare the function with WINBASEAPI which in turn is defined as\n> DECLSPEC_IMPORT, as long as _KERNEL32_ isn't defined, and we don't do\n> that (and I don't think anything else does either).\n> \n> So the fix Peter proposed looks like it should be correct.\n\nI'm not sure exactly what the upstream of mingw is these days, but I\nthink the original issue that led to 811be893 has long been fixed [0],\nand the other stuff in mingwcompat.c is also no longer relevant [1]. I\nthink mingwcompat.c could be removed altogether. I'm not sure to what\nextent we need to support 5+ year old mingw versions.\n\n[0]:\nhttps://sourceforge.net/p/mingw-w64/mingw-w64/ci/9d937a7f4f766f903c9433044f77bfa97a0bc1d8/\n[1]:\nhttps://sourceforge.net/p/mingw-w64/mingw-w64/ci/88ab6fbdd0a185702a1fce4db935e303030e082f/\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 7 Sep 2019 00:11:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On Sat, Sep 07, 2019 at 12:11:25AM +0200, Peter Eisentraut wrote:\n> I'm not sure exactly what the upstream of mingw is these days, but I\n> think the original issue that led to 811be893 has long been fixed [0],\n> and the other stuff in mingwcompat.c is also no longer relevant [1]. I\n> think mingwcompat.c could be removed altogether. I'm not sure to what\n> extent we need to support 5+ year old mingw versions.\n\nOn HEAD I would not be against removing that as this leads to a\ncleanup of our code. For MSVC, we only support VS 2013~ on HEAD, so\nsaying that we don't support MinGW older than what was proposed 5\nyears ago sounds sensible.\n--\nMichael",
"msg_date": "Sat, 7 Sep 2019 11:58:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On Sat, Sep 7, 2019 at 4:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Sep 07, 2019 at 12:11:25AM +0200, Peter Eisentraut wrote:\n> > I'm not sure exactly what the upstream of mingw is these days, but I\n> > think the original issue that led to 811be893 has long been fixed [0],\n> > and the other stuff in mingwcompat.c is also no longer relevant [1]. I\n> > think mingwcompat.c could be removed altogether. I'm not sure to what\n> > extent we need to support 5+ year old mingw versions.\n>\n> On HEAD I would not be against removing that as this leads to a\n> cleanup of our code. For MSVC, we only support VS 2013~ on HEAD, so\n> saying that we don't support MinGW older than what was proposed 5\n> years ago sounds sensible.\n>\n\n+1, definitely.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 9 Sep 2019 14:24:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On 2019-09-09 14:24, Magnus Hagander wrote:\n> On Sat, Sep 7, 2019 at 4:58 AM Michael Paquier <michael@paquier.xyz\n> <mailto:michael@paquier.xyz>> wrote:\n> \n> On Sat, Sep 07, 2019 at 12:11:25AM +0200, Peter Eisentraut wrote:\n> > I'm not sure exactly what the upstream of mingw is these days, but I\n> > think the original issue that led to 811be893 has long been fixed [0],\n> > and the other stuff in mingwcompat.c is also no longer relevant\n> [1]. I\n> > think mingwcompat.c could be removed altogether. I'm not sure to what\n> > extent we need to support 5+ year old mingw versions.\n> \n> On HEAD I would not be against removing that as this leads to a\n> cleanup of our code. For MSVC, we only support VS 2013~ on HEAD, so\n> saying that we don't support MinGW older than what was proposed 5\n> years ago sounds sensible.\n> \n> +1, definitely. \n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Sep 2019 12:00:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compiler warnings with MinGW"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 12:00:39PM +0200, Peter Eisentraut wrote:\n> committed\n\nThanks, Peter.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2019 10:30:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Compiler warnings with MinGW"
}
] |
[
{
"msg_contents": "Hi\n\nIn pg_basebackup's GenerateRecoveryConf() function, the value for\n\"primary_slot_name\" is escaped, but the original, non-escaped value\nis written. See attached patch.\n\nThis has been present since the code was added in 9.6 (commit 0dc848b0314).\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 19 Jul 2019 17:08:39 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "Hi\n\nOh. Replication slot names can currently contain only a-z0-9_ characters, so we cannot actually write such a recovery.conf; pg_basebackup will stop beforehand. But performing escape_quotes on the string and then not using the result is an error anyway.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 19 Jul 2019 13:45:05 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 7/19/19 7:45 PM, Sergei Kornilov wrote:\n> Hi\n> \n> Oh. Replication slot name currently can contains only a-z0-9_ characters. So\n> we can not actually write such recovery.conf, pg_basebackup will stop\n> before. But perform escape_quotes on string and not use result - error anyway.\n\nGood point, it does actually fail with an error if an impossible slot name\nis provided, so the escaping is superfluous anyway.\n\nI'll take another look at it later as it's not exactly critical, just stuck\nout when I was passing through the code.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 19 Jul 2019 22:40:42 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 10:40:42PM +0900, Ian Barwick wrote:\n> Good point, it does actually fail with an error if an impossible slot name\n> is provided, so the escaping is superfluous anyway.\n\nFWIW, ReplicationSlotValidateName() gives the reason behind that\nrestriction:\nSlot names may consist out of [a-z0-9_]{1,NAMEDATALEN-1} which should allow\nthe name to be used as a directory name on every supported OS.\n \n> I'll take another look at it later as it's not exactly critical, just stuck\n> out when I was passing through the code.\n\nThis restriction is unlikely going to be removed, still I would rather\nkeep the escaped logic in pg_basebackup. This is the usual,\nrecommended coding pattern, and there is a risk that folks refer to\nthis code block for their own fancy stuff, spreading the problem. The\nintention behind the code is to use an escaped name as well. For \nthose reasons your patch is fine by me.\n--\nMichael",
"msg_date": "Sat, 20 Jul 2019 10:04:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On Sat, Jul 20, 2019 at 10:04:19AM +0900, Michael Paquier wrote:\n> This restriction is unlikely going to be removed, still I would rather\n> keep the escaped logic in pg_basebackup. This is the usual,\n> recommended coding pattern, and there is a risk that folks refer to\n> this code block for their own fancy stuff, spreading the problem. The\n> intention behind the code is to use an escaped name as well. For \n> those reasons your patch is fine by me.\n\nAttempting to use a slot with an unsupported set of characters will\nlead beforehand to a failure when trying to fetch the WAL segments\nwith START_REPLICATION, meaning that this spot will never be reached\nand that there is no active bug, but for the sake of consistency I see\nno problems with applying the fix on HEAD. So, are there any\nobjections with that?\n--\nMichael",
"msg_date": "Mon, 22 Jul 2019 16:36:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 2019-Jul-22, Michael Paquier wrote:\n\n> On Sat, Jul 20, 2019 at 10:04:19AM +0900, Michael Paquier wrote:\n> > This restriction is unlikely going to be removed, still I would rather\n> > keep the escaped logic in pg_basebackup. This is the usual,\n> > recommended coding pattern, and there is a risk that folks refer to\n> > this code block for their own fancy stuff, spreading the problem. The\n> > intention behind the code is to use an escaped name as well. For \n> > those reasons your patch is fine by me.\n> \n> Attempting to use a slot with an unsupported set of characters will\n> lead beforehand to a failure when trying to fetch the WAL segments\n> with START_REPLICATION, meaning that this spot will never be reached\n> and that there is no active bug, but for the sake of consistency I see\n> no problems with applying the fix on HEAD. So, are there any\n> objections with that?\n\nMaybe it's just me, but it seems weird to try to forestall a problem\nthat cannot occur by definition. I would rather remove the escaping,\nand add a one-line comment explaining why we don't do it?\n\n if (replication_slot)\n\t/* needn't escape because slot name must comprise [a-zA-Z0-9_] only */\n appendPQExpBuffer(recoveryconfcontents, \"primary_slot_name = '%s'\\n\",\n\t\t\t replication_slot);\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 12:58:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On Mon, Jul 22, 2019 at 12:58:40PM -0400, Alvaro Herrera wrote:\n> Maybe it's just me, but it seems weird to try to forestall a problem\n> that cannot occur by definition. I would rather remove the escaping,\n> and add a one-line comment explaining why we don't do it?\n\nNo objections with doing that either, really. Perhaps you would\nprefer pushing a patch along those lines yourself?\n\nOne argument that comes to mind to justify the escaping would be if we\nadd a new feature in pg_basebackup to write a new set of recovery\noptions on an existing data folder, which does not require an option.\nIn this case, if the escaping does not exist, starting the server\nwould fail with a confusing parsing error if a quote is added to the\nslot name. But if the escaping is done, then we would get a correct\nerror that the replication slot value includes an incorrect character.\nIf such a hypothetical option is added, most likely this would be\nnoticed anyway, so that's mainly nannyism from my side.\n--\nMichael",
"msg_date": "Tue, 23 Jul 2019 17:10:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 7/23/19 5:10 PM, Michael Paquier wrote:\n> On Mon, Jul 22, 2019 at 12:58:40PM -0400, Alvaro Herrera wrote:\n>> Maybe it's just me, but it seems weird to try to forestall a problem\n>> that cannot occur by definition. I would rather remove the escaping,\n>> and add a one-line comment explaining why we don't do it?\n> \n> No objections with doing that either, really. Perhaps you would\n> prefer pushing a patch among those lines by yourself?\n> \n> One argument that I got in mind to justify the escaping would be if we\n> add a new feature in pg_basebackup to write a new set of recovery\n> options on an existing data folder, which does not require an option.\n> In this case, if the escaping does not exist, starting the server\n> would fail with a confusing parsing error if a quote is added to the\n> slot name. But if the escaping is done, then we would get a correct\n> error that the replication slot value includes an incorrect character.\n> If such an hypothetical option is added, most likely this would be\n> noticed anyway, so that's mainly nannyism from my side.\n\nIt'd be better if such a hypothetical option validated the provided\nslot name anyway, to prevent later surprises.\n\nRevised patch attached, which as Alvaro suggests removes the escaping\nand adds a comment explaining why the raw value can be passed as-is.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 24 Jul 2019 13:12:33 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 2019-Jul-24, Ian Barwick wrote:\n\n> It'd be better if such a hypothetical option validated the provided\n> slot name anwyay, to prevent later surprises.\n\nHmm, but what would we do if the validation failed?\n\n> Revised patch attached, which as Alvaro suggests removes the escaping\n> and adds a comment explaining why the raw value can be passed as-is.\n\nHeh, yesterday I revised the original patch as attached and was about to\npush when the bell rang. I like this one because it keeps the comment\nto one line and it mentions the function name in charge of the\nvalidation (useful for grepping later on). It's a bit laconic because\nof the long function name and the desire to keep it to one line, but it\nseems sufficient to me.\n\nBTW upper case letters are not allowed :-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 24 Jul 2019 11:23:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:23:30AM -0400, Alvaro Herrera wrote:\n> Heh, yesterday I revised the original patch as attached and was about to\n> push when the bell rang. I like this one because it keeps the comment\n> to one line and it mentions the function name in charge of the\n> validation (useful for grepping later on). It's a bit laconic because\n> of the long function name and the desire to keep it to one line, but it\n> seems sufficient to me.\n\nLooks fine to me. A nit: addition of braces for the if block. Even\nif that a one-liner, there is a comment so I think that this makes the\ncode more readable.\n--\nMichael",
"msg_date": "Thu, 25 Jul 2019 08:53:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 2019-Jul-25, Michael Paquier wrote:\n\n> On Wed, Jul 24, 2019 at 11:23:30AM -0400, Alvaro Herrera wrote:\n> > Heh, yesterday I revised the original patch as attached and was about to\n> > push when the bell rang. I like this one because it keeps the comment\n> > to one line and it mentions the function name in charge of the\n> > validation (useful for grepping later on). It's a bit laconic because\n> > of the long function name and the desire to keep it to one line, but it\n> > seems sufficient to me.\n> \n> Looks fine to me. A nit: addition of braces for the if block. Even\n> if that a one-liner, there is a comment so I think that this makes the\n> code more readable.\n\nYeah, I had removed those on purpose, but that was probably inconsistent\nwith my own reviews of others' patches. I pushed it with them.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 26 Jul 2019 17:52:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 05:52:43PM -0400, Alvaro Herrera wrote:\n> Yeah, I had removed those on purpose, but that was probably inconsistent\n> with my own reviews of others' patches. I pushed it with them.\n\nThanks!\n--\nMichael",
"msg_date": "Mon, 29 Jul 2019 10:51:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
},
{
"msg_contents": "On 7/27/19 6:52 AM, Alvaro Herrera wrote:\n> On 2019-Jul-25, Michael Paquier wrote:\n> \n>> On Wed, Jul 24, 2019 at 11:23:30AM -0400, Alvaro Herrera wrote:\n>>> Heh, yesterday I revised the original patch as attached and was about to\n>>> push when the bell rang. I like this one because it keeps the comment\n>>> to one line and it mentions the function name in charge of the\n>>> validation (useful for grepping later on). It's a bit laconic because\n>>> of the long function name and the desire to keep it to one line, but it\n>>> seems sufficient to me.\n>>\n>> Looks fine to me. A nit: addition of braces for the if block. Even\n>> if that a one-liner, there is a comment so I think that this makes the\n>> code more readable.\n> \n> Yeah, I had removed those on purpose, but that was probably inconsistent\n> with my own reviews of others' patches. I pushed it with them.\n\nThanks\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 1 Aug 2019 11:37:21 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] minor bugfix for pg_basebackup (9.6 ~ )"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reviewing a recent patch, I missed a place where Datum was being\nconverted to another type implicitly (ie without going though a\nDatumGetXXX() macro). Thanks to Jeff for fixing that (commit\nb538c90b), but I was curious to see if I could convince my compiler to\ntell me about that sort of thing. Here's an experimental hack that\nmakes Datum a struct (unfortunately defined in two places, but like I\nsaid it's a hack), and then fixes all the resulting compile errors.\nThe main categories of change are:\n\n1. Many cases of replacing \"(Datum) 0\" with a new macro \"NullDatum\"\nand adjusting code that compares with 0/NULL, so you can pretty much\nignore that noise. Likewise code that compares datums directly\nwithout apparent knowledge of the expected type.\n\n2. VARDATA etc macros taking a Datum instead of a varlena *. I think\nthe interface is suppose to be able to take both, so I think you can\npretty much ignore that noise too, I just couldn't immediately think\nof a trick that would make that polymorphism work so I added\nDatumGetPointer(x) wherever a Datum x was given directly to those\nmacros.\n\n3. Many cases of object IDs being converted implicitly, for example\nin syscache calls. A few cases of implicit use as booleans.\n\n4. Various confusions about the types involved in PG_RETURN_XXX and\nPG_GETARGS_XXX macros, and failure to convert values from\nDatum-returning functions, or unnecessary conversion of results (eg\nmakeArrayResult).\n\nI should probably split this into \"actionable\" (categories 3 and 4)\nand \"noise and scaffolding\" patches.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Sat, 20 Jul 2019 11:54:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Catching missing Datum conversions"
},
{
"msg_contents": ">\n> I should probably split this into \"actionable\" (categories 3 and 4)\n> and \"noise and scaffolding\" patches.\n>\n\nBreaking down the noise-and-scaffolding into some subgroups might make the\nrather long patches more palatable/exceedingly-obvious:\n* (Datum) 0 ---> NullDatum\n* 0 ----> NullDatum\n* The DatumGetPointer(allParameterTypes) null tests\n\nHaving said that, everything you did seems really straightforward, except\nfor\n\nsrc/backend/rewrite/rewriteDefine.c\nsrc/backend/statistics/mcv.c\nsrc/backend/tsearch/ts_parse.c\n\nand those seem like cases where the DatumGetXXX was a no-op before Datum\nwas a struct.\n\nI should probably split this into \"actionable\" (categories 3 and 4)\nand \"noise and scaffolding\" patches.Breaking down the noise-and-scaffolding into some subgroups might make the rather long patches more palatable/exceedingly-obvious:* (Datum) 0 ---> NullDatum* 0 ----> NullDatum* The DatumGetPointer(allParameterTypes) null testsHaving said that, everything you did seems really straightforward, except for src/backend/rewrite/rewriteDefine.csrc/backend/statistics/mcv.csrc/backend/tsearch/ts_parse.cand those seem like cases where the DatumGetXXX was a no-op before Datum was a struct.",
"msg_date": "Sat, 20 Jul 2019 21:10:33 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching missing Datum conversions"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 7:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> When reviewing a recent patch, I missed a place where Datum was being\n> converted to another type implicitly (ie without going though a\n> DatumGetXXX() macro). Thanks to Jeff for fixing that (commit\n> b538c90b), but I was curious to see if I could convince my compiler to\n> tell me about that sort of thing.\n\nThis is a very easy mistake to make, so if you ever feel like\nreturning to this topic in earnest, I think it could be a worthwhile\nexpenditure of time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Nov 2021 10:05:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Catching missing Datum conversions"
}
] |
[
{
"msg_contents": "Hello.\n\nCurrently I am working a lot with cluster consist a few of big tables.\nAbout 2-3 TB. These tables are heavily updated, some rows are removed, new\nrows are inserted... Kind of typical OLTP workload.\n\nPhysical table size keeps mostly stable while regular VACUUM is working. It\nis fast enough to clean some place from removed rows.\n\nBut time to time \"to prevent wraparound\" comes. And it works like 8-9 days.\nDuring that time relation size starting to expand quickly. Freezing all\nblocks in such table takes a lot of time and bloat is generated much more\nquickly.\n\nOf course after aggressive vacuum finishes table are not shrink back (some\nkind of repacking required). And even after repacking - relation shrinking\ncauses all cluster to stuck for some time (due exclusive locking, see (1)).\n\nSo, I was thinking about it and I saw two possible solutions:\n\n 1. Track two block pointers for aggressive vacuum. One is to freeze all\nblocks and other is to perform regular vacuum on non-all-visible blocks.\nSecond one is circular (could process table multiple times while first one\nis moving from start to end of the table). And some parameters to spread\nresources between pointers is required.\n\n 2. Separate \"to prevent wraparound\" from regular Vacuum to allow them run\nconcurrently. But it seems to be much more work here.\n\nCould you please share some thoughts on it? Is it worth to be implemented?\n\nThanks.\n\n[1]\nhttps://www.postgresql.org/message-id/c9374921e50a5e8fb1ecf04eb8c6ebc3%40postgrespro.ru\n\nHello.Currently I am working a lot with cluster consist a few of big tables. About 2-3 TB. These tables are heavily updated, some rows are removed, new rows are inserted... Kind of typical OLTP workload.Physical table size keeps mostly stable while regular VACUUM is working. It is fast enough to clean some place from removed rows.But time to time \"to prevent wraparound\" comes. And it works like 8-9 days. 
During that time relation size starting to expand quickly. Freezing all blocks in such table takes a lot of time and bloat is generated much more quickly. Of course after aggressive vacuum finishes table are not shrink back (some kind of repacking required). And even after repacking - relation shrinking causes all cluster to stuck for some time (due exclusive locking, see (1)).So, I was thinking about it and I saw two possible solutions: 1. Track two block pointers for aggressive vacuum. One is to freeze all blocks and other is to perform regular vacuum on non-all-visible blocks. Second one is circular (could process table multiple times while first one is moving from start to end of the table). And some parameters to spread resources between pointers is required. 2. Separate \"to prevent wraparound\" from regular Vacuum to allow them run concurrently. But it seems to be much more work here.Could you please share some thoughts on it? Is it worth to be implemented? Thanks.[1] https://www.postgresql.org/message-id/c9374921e50a5e8fb1ecf04eb8c6ebc3%40postgrespro.ru",
"msg_date": "Sat, 20 Jul 2019 15:35:57 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "thoughts on \"prevent wraparound\" vacuum"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-20 15:35:57 +0300, Michail Nikolaev wrote:\n> Currently I am working a lot with cluster consist a few of big tables.\n> About 2-3 TB. These tables are heavily updated, some rows are removed, new\n> rows are inserted... Kind of typical OLTP workload.\n> \n> Physical table size keeps mostly stable while regular VACUUM is working. It\n> is fast enough to clean some place from removed rows.\n> \n> But time to time \"to prevent wraparound\" comes. And it works like 8-9 days.\n> During that time relation size starting to expand quickly. Freezing all\n> blocks in such table takes a lot of time and bloat is generated much more\n> quickly.\n\nSeveral questions:\n- Which version of postgres is this? Newer versions avoid scanning\n unchanged parts of the heap even for freezing (9.6+, with additional\n smaller improvements in 11).\n- have you increased the vacuum cost limits? Before PG 12 they're so low\n they're entirely unsuitable for larger databases, and even in 12 you\n should likely increase them for a multi-TB database\n\nUnfortunately even if those are fixed the indexes are still likely going\nto be scanned in their entirety - but most of the time not modified\nmuch, so that's not as bad.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 20 Jul 2019 13:02:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: thoughts on \"prevent wraparound\" vacuum"
},
{
"msg_contents": "Hello.\n\n>- Which version of postgres is this? Newer versions avoid scanning\n> unchanged parts of the heap even for freezing (9.6+, with additional\n> smaller improvements in 11).\n\nOh, totally forgot about version and settings...\n\nserver_version 10.9 (Ubuntu 10.9-103)\n\nSo, \"don't vacuum all-frozen pages\" included.\n\n> - have you increased the vacuum cost limits? Before PG 12 they're so low\n> they're entirely unsuitable for larger databases, and even in 12 you\n> should likely increase them for a multi-TB database\n\nCurrent settings are:\n\nautovacuum_max_workers 8\nautovacuum_vacuum_cost_delay 5ms\nautovacuum_vacuum_cost_limit 400\nautovacuum_work_mem -1\n\nvacuum_cost_page_dirty 40\nvacuum_cost_page_hit 1\nvacuum_cost_page_miss 10\n\n\"autovacuum_max_workers\" set to 8 because server needs to process a lot of\nchanging relations.\nSettings were more aggressive previously (autovacuum_vacuum_cost_limit was\n2800) but it leads to very high IO load causing issues with application\nperformance and stability (even on SSD).\n\n \"vacuum_cost_page_dirty\" was set to 40 few month ago. High IO write peaks\nwere causing application requests to stuck into WALWriteLock.\nAfter some investigations we found it was caused by WAL-logging peaks.\nSuch WAL-peaks are mostly consist of such records:\n\nType N(%)\n Record size (%) FPI size (%)\n Combined size (%)\n------\nHeap2/CLEAN 10520 ( 0.86)\n 623660 ( 0.21) 5317532 ( 0.53) 5941192\n( 0.46)\nHeap2/FREEZE_PAGE 113419 ( 9.29)\n 6673877 ( 2.26) 635354048 ( 63.12) 642027925 (\n49.31)\n\nanother example:\n\nHeap2/CLEAN 196707 ( 6.96)\n 12116527 ( 1.56) 292317231 ( 37.77) 304433758 (\n19.64)\nHeap2/FREEZE_PAGE 1819 ( 0.06)\n104012 ( 0.01) 13324269 ( 1.72) 13428281 (\n 0.87)\n\nThanks,\nMichail.\n\nHello.>- Which version of postgres is this? 
Newer versions avoid scanning> unchanged parts of the heap even for freezing (9.6+, with additional> smaller improvements in 11).Oh, totally forgot about version and settings...server_version 10.9 (Ubuntu 10.9-103)So, \"don't vacuum all-frozen pages\" included.> - have you increased the vacuum cost limits? Before PG 12 they're so low> they're entirely unsuitable for larger databases, and even in 12 you> should likely increase them for a multi-TB databaseCurrent settings are:autovacuum_max_workers 8autovacuum_vacuum_cost_delay 5msautovacuum_vacuum_cost_limit 400autovacuum_work_mem -1vacuum_cost_page_dirty 40vacuum_cost_page_hit 1vacuum_cost_page_miss 10\"autovacuum_max_workers\" set to 8 because server needs to process a lot of changing relations.Settings were more aggressive previously (autovacuum_vacuum_cost_limit was 2800) but it leads to very high IO load causing issues with application performance and stability (even on SSD). \"vacuum_cost_page_dirty\" was set to 40 few month ago. High IO write peaks were causing application requests to stuck into WALWriteLock.After some investigations we found it was caused by WAL-logging peaks.Such WAL-peaks are mostly consist of such records:Type N(%) Record size (%) FPI size (%) Combined size (%)------Heap2/CLEAN 10520 ( 0.86) 623660 ( 0.21) 5317532 ( 0.53) 5941192 ( 0.46)Heap2/FREEZE_PAGE 113419 ( 9.29) 6673877 ( 2.26) 635354048 ( 63.12) 642027925 ( 49.31)another example:Heap2/CLEAN 196707 ( 6.96) 12116527 ( 1.56) 292317231 ( 37.77) 304433758 ( 19.64)Heap2/FREEZE_PAGE 1819 ( 0.06) 104012 ( 0.01) 13324269 ( 1.72) 13428281 ( 0.87)Thanks,Michail.",
"msg_date": "Sun, 21 Jul 2019 08:46:37 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: thoughts on \"prevent wraparound\" vacuum"
}
] |
[
{
"msg_contents": "HI Team,\n\n\n\nI’m looking to convert QMF Queries , QMF forms and QMF procedure to the\nPOSTGRESQL will it support all of them.\n\nIf yes please help us with the sample example. Or any Documentation.\n\n\n\nThanking in anticipation .\n\n\n\nRegards\n\nJotiram More\n\nHI\nTeam,\n \nI’m\nlooking to convert QMF Queries , QMF forms and QMF procedure to the\nPOSTGRESQL will it support all of them.\nIf\nyes please help us with the sample example. Or any Documentation.\n \nThanking\nin anticipation .\n \nRegards\nJotiram\nMore",
"msg_date": "Sat, 20 Jul 2019 19:26:07 +0530",
"msg_from": "\"JVM .\" <jotirammore@gmail.com>",
"msg_from_op": true,
"msg_subject": "Queries on QMF to POSTGRE"
},
{
"msg_contents": "-hackers\n+pgsql general <pgsql-general@postgresql.org>\n\nOn Sun, Jul 21, 2019 at 7:33 PM JVM . <jotirammore@gmail.com> wrote:\n\n>\n>\n> I’m looking to convert QMF Queries , QMF forms and QMF procedure to the\n> POSTGRESQL will it support all of them.\n>\n> If yes please help us with the sample example. Or any Documentation.\n>\n\nWhat would help anyone willing to help you, is you providing documentation\nor definition of QFM and some examples of those.\n\nCheers,\n--\nAlex\n\n-hackers +pgsql general On Sun, Jul 21, 2019 at 7:33 PM JVM . <jotirammore@gmail.com> wrote: \nI’m\nlooking to convert QMF Queries , QMF forms and QMF procedure to the\nPOSTGRESQL will it support all of them.\nIf\nyes please help us with the sample example. Or any Documentation.What would help anyone willing to help you, is you providing documentation or definition of QFM and some examples of those.Cheers,--Alex",
"msg_date": "Tue, 23 Jul 2019 07:30:10 +0200",
"msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>",
"msg_from_op": false,
"msg_subject": "Re: Queries on QMF to POSTGRE"
},
{
"msg_contents": "On 7/23/19 12:30 AM, Oleksandr Shulgin wrote:\n> -hackers\n> +pgsql general <mailto:pgsql-general@postgresql.org>\n>\n> On Sun, Jul 21, 2019 at 7:33 PM JVM . <jotirammore@gmail.com \n> <mailto:jotirammore@gmail.com>> wrote:\n>\n>\n> I’m looking to convert QMF Queries , QMF forms and QMF procedure to\n> the POSTGRESQL will it support all of them.\n>\n> If yes please help us with the sample example. Or any Documentation.\n>\n>\n> What would help anyone willing to help you, is you providing documentation \n> or definition of QFM and some examples of those.\n\nOP might be referring to \nhttps://en.wikipedia.org/wiki/IBM_Query_Management_Facility\n\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n On 7/23/19 12:30 AM, Oleksandr Shulgin wrote:\n\n\n\n-hackers \n+pgsql general \n\n\n\nOn Sun, Jul 21, 2019 at 7:33 PM JVM . <jotirammore@gmail.com>\n wrote:\n\n\n\n\n \n\nI’m\n looking to convert QMF Queries , QMF forms and QMF\n procedure to the\n POSTGRESQL will it support all of them.\nIf\n yes please help us with the sample example. Or any Documentation.\n\n\n\n\nWhat would help anyone willing to help you, is you\n providing documentation or definition of QFM and some\n examples of those.\n\n\n\n\n OP might be referring to https://en.wikipedia.org/wiki/IBM_Query_Management_Facility\n\n\n-- \n Angular momentum makes the world go 'round.",
"msg_date": "Tue, 23 Jul 2019 00:42:56 -0500",
"msg_from": "Ron <ronljohnsonjr@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Queries on QMF to POSTGRE"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the next pack of typos and inconsistencies in the\ntree:\n7.1. h04m05s06 -> h04mm05s06 (in fact it's broken since 6af04882, but\nh04mm05s06.789 still accepted)\n7.2. hasbucketcleanup -> hashbucketcleanup\n7.3. _hashm_spare -> hashm_spares\n7.4. hashtbl -> hash table\n7.5. HAS_RELFILENODES -> XINFO_HAS_RELFILENODES\n7.6. HAS_SUBXACT -> XINFO_HAS_SUBXACT\n7.7. HAVE_FCVT -> remove (survived after ff4628f3)\n7.8. HAVE_FINITE -> remove (orphaned after cac2d912)\n7.9. HAVE_STRUCT_SOCKADDR_UN -> remove (not used since introduction in\n399a36a7)\n7.10. HAVE_SYSCONF -> remove (survived after ff4628f3)\n7.11. HAVE_ZLIB -> HAVE_LIBZ\n7.12. HEAP_CLEAN -> XLOG_HEAP2_CLEAN\n7.13. HEAP_CONTAINS_NEW_TUPLE_DATA -> XLH_UPDATE_CONTAINS_NEW_TUPLE,\nXLOG_HEAP_CONTAINS_OLD_TUPLE -> XLH_UPDATE_CONTAINS_OLD_TUPLE,\nXLOG_HEAP_CONTAINS_OLD_KEY -> XLH_UPDATE_CONTAINS_OLD_KEY,\nXLOG_HEAP_PREFIX_FROM_OLD -> XLH_UPDATE_PREFIX_FROM_OLD,\nXLOG_HEAP_SUFFIX_FROM_OLD -> XLH_UPDATE_SUFFIX_FROM_OLD (renamed in\n168d5805)\n7.14. HEAP_FREEZE -> FREEZE_PAGE (an inconsistency since introduction in\n48188e16)\n7.15. heapio.c -> hio.c\n7.16. heap_newpage -> XLOG_FPI (orphaned since 54685338)\n7.17. heaxadecimal -> hexadecimal\n7.18. hostlen -> nodelen, host -> node, serv -> service, servlen ->\nservicelen\n7.19. i386s -> x86_64\n7.20. IConst/FConst -> ICONST/FCONST\n7.21. imit -> limit\n7.22. IN_ARCHIVE_RECOVERY -> DB_IN_ARCHIVE_RECOVERY\n7.23. ind_arraysize, ind_value -> ind_arrsize, ind_pointer\n7.24. index_getnext -> index_getnext_slot\n7.25. IndexTupleVector -> IndexTuple vector\n7.26. innerConsistent -> innerConsistentFn\n7.27. in-progres -> in-progress\n7.28. inspire with -> inspired by the (sync with 192b0c94)\n7.29. internalerrpos -> internalerrposition\n7.30. internal_procname -> internal_proname\n7.31. interruptOK -> remove (orphaned after d0699571)\n7.32. intratransaction -> intra-transaction\n7.33. 
InvalidOffset -> InvalidOffsetNumber\n7.34. invtrans -> invtransfn\n7.35. isbuiltin -> fmgr_isbuiltin\n7.36. iself -> itself\n7.37. isnoinherit -> noinherit\n7.38. ISO_DATES -> USE_ISO_DATES\n7.39. isParentRoot -> remove (orphaned after 218f5158)\n7.40. isPrefix -> prefix\n7.41. ItemPointerIsMax -> remove (orphaned after e20c70cb)\n7.42. itemsin -> items in\n7.43. jbVal -> jbval\n7.44. json_plperl -> jsonb_plperlu\n7.45. jvbBinary -> jbvBinary\n7.46. keyAttrs -> attrKind\n7.47. keyinfo -> key info\n7.48. key_modified -> key_changed\n7.49. killitems -> killedItems\n7.50. KnownAssignedTransactions -> KnownAssignedTransactionIds\n\nAlso, I found e-mail headers in optimizer/plan/README not relevant, so I\npropose to remove them.\nAnd another finding is related to the sleep effective resolution. `man 7\ntime` says \"Since kernel 2.6.13, the HZ value is a kernel configuration \nparameter and can be 100, 250 (the default) ...\", so the 10\nmilliseconds is not the most common effective resolution nowadays.\nI propose the corresponding patch for pgsleep.c, but we have a similar\nstatement in doc/.../config.sgml. I think It should be fixed too.\n\nBest regards,\nAlexander",
"msg_date": "Sun, 21 Jul 2019 08:28:53 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for HEAD (take 7)"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Also, I found e-mail headers in optimizer/plan/README not relevant, so I\n> propose to remove them.\n\nFWIW, I think they're highly relevant, because they put a date on\nthat text. I've not gone through that README lately, but I wouldn't\nbe surprised if it's largely obsolete --- it hasn't been maintained\nin any meaningful way since Vadim wrote it. Without the headers, a\nreader would have no warning of that.\n\nWhat really ought to happen, likely, is for somebody to extract\nwhatever is still useful there into a new(?) section in the parent\ndirectory's README.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jul 2019 12:34:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 7)"
},
{
"msg_contents": "On Sun, Jul 21, 2019 at 08:28:53AM +0300, Alexander Lakhin wrote:\n> Please consider fixing the next pack of typos and inconsistencies in the\n> tree:\n\nThanks, all those things look fine. I have noticed one mistake.\n\n> 7.44. json_plperl -> jsonb_plperlu\n\nThe path was incorrect here.\n\n> Also, I found e-mail headers in optimizer/plan/README not relevant, so I\n> propose to remove them.\n\nNot sure about that part.\n\n> And another finding is related to the sleep effective resolution. `man 7\n> time` says \"Since kernel 2.6.13, the HZ value is a kernel configuration \n> parameter and can be 100, 250 (the default) ...\", so the 10\n> milliseconds is not the most common effective resolution nowadays.\n> I propose the corresponding patch for pgsleep.c, but we have a similar\n> statement in doc/.../config.sgml. I think It should be fixed too.\n\nFixing both places sounds adapted to me. An alternative we could use\nhere is just to say something like that:\nThe effective resolution is only 1/HZ, which can be configured with\nkernel parameter (see man 7 time), and is 4 milliseconds by\ndefault.\n--\nMichael",
"msg_date": "Mon, 22 Jul 2019 10:05:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 7)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Jul 21, 2019 at 08:28:53AM +0300, Alexander Lakhin wrote:\n>> And another finding is related to the sleep effective resolution. `man 7\n>> time` says \"Since kernel 2.6.13, the HZ value is a kernel configuration \n>> parameter and can be 100, 250 (the default) ...\", so the 10\n>> milliseconds is not the most common effective resolution nowadays.\n>> I propose the corresponding patch for pgsleep.c, but we have a similar\n>> statement in doc/.../config.sgml. I think It should be fixed too.\n\n> Fixing both places sounds adapted to me. An alternative we could use\n> here is just to say something like that:\n> The effective resolution is only 1/HZ, which can be configured with\n> kernel parameter (see man 7 time), and is 4 milliseconds by\n> default.\n\nWhatever we say here is going to be a lie on some platforms.\n\nProbably best just to say that the sleep resolution is platform-dependent\nand leave it at that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2019 00:14:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 7)"
},
{
"msg_contents": "Hello Michael,\n22.07.2019 4:05, Michael Paquier wrote:\n>> Also, I found e-mail headers in optimizer/plan/README not relevant, so I\n>> propose to remove them.\n> Not sure about that part.\nI agree that the proposed fix is not complete, but just raises the\ndemand for a subsequent fix.\nIf you don't mind, I would return to such questionable and aside items\nafter finishing with the unicums en masse.\n\nBest regards.\nAlexander\n\n\n",
"msg_date": "Mon, 22 Jul 2019 07:31:22 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 7)"
},
{
"msg_contents": "Hello Tom,\n22.07.2019 7:14, Tom Lane wrote:\n>> Fixing both places sounds adapted to me. An alternative we could use\n>> here is just to say something like that:\n>> The effective resolution is only 1/HZ, which can be configured with\n>> kernel parameter (see man 7 time), and is 4 milliseconds by\n>> default.\n> Whatever we say here is going to be a lie on some platforms.\n>\n> Probably best just to say that the sleep resolution is platform-dependent\n> and leave it at that.\nI think, we can say \"on many systems\"/ \"on most Unixen\", and it would\nnot be a lie.\nIn my opinion, while a generic reference to platform-dependency is OK\nfor developer' documentation, it makes the following passage in\nconfig.sgml vague (we don't give user a hint, what \"the effective\nresolution\" can be - several seconds/milliseconds/nanoseconds?):\n/The default value is 200 milliseconds (<literal>200ms</literal>). Note\nthat on many systems, the//\n//effective resolution of sleep delays is 10 milliseconds; setting//\n//<varname>bgwriter_delay</varname> to a value that is not a multiple of\n10//\n//might have the same results as setting it to the next higher multiple\nof 10. /\n->\n/The default value is 200 milliseconds (<literal>200ms</literal>). Note\nthat the//\n//effective resolution of sleep delays is paltform-dependent. setting//\n//<varname>bgwriter_delay</varname> to a value that is not a multiple of\nthe effective resolution,/\n/might have the same results as setting it to the next higher multiple./\n\nBest regards,\nAlexander\n\n\n\n\n\n\nHello Tom,\n 22.07.2019 7:14, Tom Lane wrote:\n\n\n\nFixing both places sounds adapted to me. 
An alternative we could use\nhere is just to say something like that:\nThe effective resolution is only 1/HZ, which can be configured with\nkernel parameter (see man 7 time), and is 4 milliseconds by\ndefault.\n\n\nWhatever we say here is going to be a lie on some platforms.\n\nProbably best just to say that the sleep resolution is platform-dependent\nand leave it at that.\n\n\n I think, we can say \"on many systems\"/ \"on most Unixen\", and it\n would not be a lie.\n In my opinion, while a generic reference to platform-dependency is\n OK for developer' documentation, it makes the following passage in\n config.sgml vague (we don't give user a hint, what \"the effective\n resolution\" can be - several seconds/milliseconds/nanoseconds?):\nThe default value is 200 milliseconds\n (<literal>200ms</literal>). Note that on many systems,\n the\neffective resolution of sleep delays is 10 milliseconds;\n setting\n<varname>bgwriter_delay</varname> to a value that\n is not a multiple of 10\nmight have the same results as setting it to the next higher\n multiple of 10. \n ->\nThe default value is 200 milliseconds\n (<literal>200ms</literal>). Note that the\n\n effective resolution of sleep delays is paltform-dependent.\n setting\n\n <varname>bgwriter_delay</varname> to a value that is\n not a multiple of the effective resolution,\nmight have the same results as setting it to the next higher\n multiple.\n\n Best regards,\n Alexander",
"msg_date": "Mon, 22 Jul 2019 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for HEAD (take 7)"
}
] |
[
{
"msg_contents": "Hi\n\n# I rewrote my previous mail.\n\nPQconnectPoll() is used as method for asynchronous using externally or internally.\nIf a caller confirms a socket ready for writing or reading that is\nrequested by return value of previous PQconnectPoll(), next PQconnectPoll()\nmust not be blocked. But if the caller specifies target_session_attrs to\n'read-write', PQconnectPoll() may be blocked.\n\nDetail:\nIf target_session_attrs is set to read-write, PQconnectPoll() calls\nPQsendQuery(\"SHOW transaction_read_only\") althogh previous return value was\nPGRES_POLLING_READING not WRITING.\nIn result, PQsendQuery() may be blocked in pqsecure_raw_write().\n\nI attach a patch.\n\nRegards\nRyo Matsumura",
"msg_date": "Mon, 22 Jul 2019 02:28:22 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[Patch] PQconnectPoll() is blocked if target_session_attrs is\n read-write"
},
{
"msg_contents": "Hello.\n\nAt Mon, 22 Jul 2019 02:28:22 +0000, \"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com> wrote in <03040DFF97E6E54E88D3BFEE5F5480F74AC15BBD@G01JPEXMBYT04>\n> Hi\n> \n> # I rewrote my previous mail.\n> \n> PQconnectPoll() is used as method for asynchronous using externally or internally.\n> If a caller confirms a socket ready for writing or reading that is\n> requested by return value of previous PQconnectPoll(), next PQconnectPoll()\n> must not be blocked. But if the caller specifies target_session_attrs to\n> 'read-write', PQconnectPoll() may be blocked.\n> \n> Detail:\n> If target_session_attrs is set to read-write, PQconnectPoll() calls\n> PQsendQuery(\"SHOW transaction_read_only\") althogh previous return value was\n> PGRES_POLLING_READING not WRITING.\n> In result, PQsendQuery() may be blocked in pqsecure_raw_write().\n> \n> I attach a patch.\n> \n> Regards\n> Ryo Matsumura\n\nFirst, this patch looks broken.\n\npatched> if (conn->sversion >= 70400 &&\npatched> conn->target_session_attrs != NULL &&\npatched> strcmp(conn->target_session_attrs, \"read-write\") == 0)\npatched> {\npatched> }\n\nPerhaps you did cut-n-paste instead of copy-n-paste.\n\nI'm not sure such a small write just after reading can block, but\ndoing that makes things tidy.\n\nYou also need to update the corresponding documentation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 22 Jul 2019 19:08:50 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] PQconnectPoll() is blocked if target_session_attrs is\n read-write"
},
{
"msg_contents": "Kyotaro-san\n\nThank you for your review.\n\n> First, this patch looks broken.\n\nI made a serious mistake.\n\n> You also need to update the corresponding documentation.\n\nI attach a new patch that includes the documentation update.\n\nRegards\nRyo Matsumura",
"msg_date": "Tue, 23 Jul 2019 00:40:43 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] PQconnectPoll() is blocked if target_session_attrs is\n read-write"
},
{
"msg_contents": "From: Matsumura, Ryo [mailto:matsumura.ryo@jp.fujitsu.com]\n> Detail:\n> If target_session_attrs is set to read-write, PQconnectPoll() calls\n> PQsendQuery(\"SHOW transaction_read_only\") althogh previous return value\n> was PGRES_POLLING_READING not WRITING.\n\nThe current code probably assumes that PQsendQuery() to send \"SHOW transaction_read_only\" shouldn't block, because the message is small enough to fit in the socket send buffer. Likewise, the code in CONNECTION_AWAITING_RESPONSE case sends authentication data using pg_fe_sendauth() without checking for the write-ready status. OTOH, the code in CONNECTION_MADE case waits for write-ready status in advance before sending the startup packet. That's because the startup packet could get large enough to cause pqPacketSend() to block.\n\nSo, I don't think the fix is necessary.\n\n> In result, PQsendQuery() may be blocked in pqsecure_raw_write().\n\nFWIW, if PQsendQuery() blocked during connection establishment, I think it should block in poll() called from .... from pqWait(), because the libpq's socket is set non-blocking.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Thu, 25 Jul 2019 04:12:55 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [Patch] PQconnectPoll() is blocked if target_session_attrs is\n read-write"
},
{
"msg_contents": "Tsunakawa-san\n\nThank you for your comment.\nI understand the sense. I don't require an explicit rule.\n\nRegards\nRyo Matsumura\n\n\n",
"msg_date": "Fri, 26 Jul 2019 06:20:00 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [Patch] PQconnectPoll() is blocked if target_session_attrs is\n read-write"
}
] |
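The contract discussed in the thread above — after PQconnectPoll() returns PGRES_POLLING_READING or WRITING, the caller waits for exactly that readiness state, and the next call must not block — is the standard non-blocking connect pattern. A minimal caller-side sketch using plain TCP sockets (illustrative only; libpq drives this state machine itself, and the function name here is invented):

```python
import errno
import select
import socket

def nonblocking_connect(host, port, timeout=5.0):
    """Connect without ever blocking on the socket itself.

    This mirrors the caller-side contract of PQconnectPoll(): once told
    to wait for a given readiness state, we poll for exactly that state
    before touching the socket again, so the next step cannot block.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    rc = sock.connect_ex((host, port))  # starts the connect, returns an errno
    if rc not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
        sock.close()
        raise OSError(rc, "connect_ex failed")
    # Equivalent of PGRES_POLLING_WRITING: wait for write-readiness only.
    _, writable, _ = select.select([], [sock], [], timeout)
    if not writable:
        sock.close()
        raise TimeoutError("connect timed out")
    # The socket is write-ready; fetching the connect(2) result cannot block.
    rc = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if rc != 0:
        sock.close()
        raise OSError(rc, "connect failed")
    return sock

if __name__ == "__main__":
    # Demonstrate against a throwaway listener we control.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    conn = nonblocking_connect(*server.getsockname())
    print("connected:", conn.getpeername() == server.getsockname())
    conn.close()
    server.close()
```

The bug report amounts to the library violating this contract internally: issuing a write (PQsendQuery) when the caller had only been asked to wait for read-readiness.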
[
{
"msg_contents": "Hello. I happened to find that I cannot obtain a patch from the\nEmails field of a patch on the CF-App.\n\nhttps://commitfest.postgresql.org/23/1525/\n\nEmails:\n> Latest attachment (crash_dump_before_main_v2.patch) at 2018-03-01 05:13:34 from \"Tsunakawa, Takayuki\" <tsunakawa.takay at jp.fujitsu.com> \n\nThe link shown directly in the Emails field (shown as crash_dump..patch) is:\n\nhttps://www.postgresql.org/message-id/attachment/59143/crash_dump_before_main_v2.patch\n\nAnd it results in \"Attachment not found\".\n\n\nI can get the patch with the same name from the message (shown as\n\"2018-03..13:34\") page. The link shown there is:\n\nhttps://www.postgresql.org/message-id/attachment/68293/crash_dump_before_main_v2.patch\n\nSeems like something is going wrong in the CF-App.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Jul 2019 12:26:28 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "\"Attachment not found\" by CF-app"
}
] |
[
{
"msg_contents": "Hi hackers,\nIt's a report about my GSoC project[1] during the second period.\n\nWhat I've done:\n- Submitted the patch[2] to optimize partial TOAST decompression\n- Submitted the patch[3] as a prototype of de-TOAST iterator and tested its\nperformance on position() function\n- Perfected patches according to hacker's review comments\n\nWhat I'm doing:\n- Make overall tests on the iterator\n- Applying the de-TOAST iterator to the json/jsonb data type\n\nThanks to hackers and mentors for helping me advance this project.\n\n-- \nBest regards,\nBinguo Bao\n\n[1] https://summerofcode.withgoogle.com/projects/#6467011507388416\n[2]\nhttps://www.postgresql.org/message-id/flat/CAL-OGkthU9Gs7TZchf5OWaL-Gsi=hXqufTxKv9qpNG73d5na_g@mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/flat/CAL-OGks_onzpc9M9bXPCztMofWULcFkyeCeKiAgXzwRL8kXiag@mail.gmail.com",
"msg_date": "Tue, 23 Jul 2019 19:59:59 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSoC] Second period status report for the de-TOAST iterator"
}
] |
[
{
"msg_contents": "Hello,\n\nFetching the timeline from a standby could be useful in various situations,\neither for backup tools [1] or failover tools during some kind of election\nprocess.\n\nPlease, find in attachment a first trivial patch to support pg_walfile_name()\nand pg_walfile_name_offset() on a standby.\n\nThe previous restriction on these functions seems related to ThisTimeLineID not\nbeing safe on a standby. This patch fetches the timeline from\nWalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,\nthis is updated each time some data are flushed to the WAL. \n\nAs the SQL function pg_last_wal_receive_lsn() reads WalRcv->receivedUpto,\nwhich is flushed at the same time, any tool relying on these functions should be\nquite fine. It will just have to parse the TL from the walfile name.\n\nIt doesn't seem perfectly sane though. I suspect a race condition in any SQL\nstatement that would try to get the LSN and the walfile name at the same time\nif the timeline changes in the meantime. Ideally, a function should be able to\nreturn both LSN and TL at the same time, with only one read from WalRcv. I'm not\nsure if I should change the result from pg_last_wal_receive_lsn() or add a\nbrand new admin function. Any advice?\n\nLast, I plan to produce an extension to support this on older releases. Is\nit something that could be integrated in the official source tree during a minor\nrelease or should I publish it on eg. pgxn?\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/flat/BF2AD4A8-E7F5-486F-92C8-A6959040DEB6%40yandex-team.ru",
"msg_date": "Tue, 23 Jul 2019 18:05:18 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Fetching timeline during recovery"
},
{
"msg_contents": "\n\n> On 23 Jul 2019, at 21:05, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> Fetching the timeline from a standby could be useful in various situation.\n> Either for backup tools [1] or failover tools during some kind of election\n> process.\nThat backup tool reads the timeline from pg_control_checkpoint() and formats the WAL file name itself when necessary.\n\n> Please, find in attachment a first trivial patch to support pg_walfile_name()\n> and pg_walfile_name_offset() on a standby.\n\nYou just cannot format a WAL file name for an LSN when the timeline changed, because there are at least three WALs for that point: previous, new and partial. However, reading the TLI from the checkpoint seems safe for backup purposes.\nThe only reason for WAL-G to read that timeline is to mark a backup invalid: if its name is base_00000001XXXXXXXXYY00000YY and a timeline change happens, it should be named base_00000002XXXXXXXXYY00000YY (the consistency point is not on TLI 2), but WAL-G cannot rename the backup during backup-push.\n\nHope this information is useful. Thanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/wal-g/wal-g/blob/master/internal/timeline.go#L39\n\n",
"msg_date": "Tue, 23 Jul 2019 23:59:00 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On 7/23/19 2:59 PM, Andrey Borodin wrote:\n> \n>> 23 июля 2019 г., в 21:05, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> написал(а):\n>>\n>> Fetching the timeline from a standby could be useful in various situation.\n>> Either for backup tools [1] or failover tools during some kind of election\n>> process.\n> That backup tool is reading timeline from pg_control_checkpoint(). And formats WAL file name itself when necessary.\n\nWe do the same [1].\n\n>> Please, find in attachment a first trivial patch to support pg_walfile_name()\n>> and pg_walfile_name_offset() on a standby.\n> \n> You just cannot format WAL file name for LSN when timeline changed. Because there are at least three WALs for that point: previous, new and partial. However, reading TLI from checkpoint seems safe for backup purposes.\n> The only reason for WAL-G to read that timeline is to mark backup invalid: if it's name is base_00000001XXXXXXXXYY00000YY and timeline change happens, it should be named base_00000002XXXXXXXXYY00000YY (consistency point is not on TLI 2), but WAL-G cannot rename backup during backup-push.\n\nNaming considerations aside, I don't think that a timeline switch during\na standby backup is a good idea, mostly because it is (currently) not\ntested. We don't allow it in pgBackRest.\n\n[1]\nhttps://github.com/pgbackrest/pgbackrest/blob/release/2.15.1/lib/pgBackRest/Db.pm#L1008\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 23 Jul 2019 16:00:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Tue, 23 Jul 2019 16:00:29 -0400\nDavid Steele <david@pgmasters.net> wrote:\n\n> On 7/23/19 2:59 PM, Andrey Borodin wrote:\n> > \n> >> On 23 Jul 2019, at 21:05, Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> >> wrote:\n> >>\n> >> Fetching the timeline from a standby could be useful in various situation.\n> >> Either for backup tools [1] or failover tools during some kind of election\n> >> process. \n> > That backup tool is reading timeline from pg_control_checkpoint(). And\n> > formats WAL file name itself when necessary. \n> \n> We do the same [1].\n\nThank you both for your comments.\n\nOK, so backup tools are fine with reading slightly outdated data from the\ncontroldata file.\n\nAnyway, my use case is mostly about auto failover. During election, I currently\nhave to force a checkpoint on standbys to get their real timeline from the\ncontroldata.\n\nHowever, the forced checkpoint could be very long[1] (considering auto\nfailover). I need to be able to compare TL without all the burden of a\nCHECKPOINT just for this.\n\nAs I wrote, my favorite solution would be a function returning BOTH\ncurrent TL and LSN at the same time. I'll send a patch tomorrow to the list\nand bikeshed later depending on the feedback.\n\nIn the meantime, the previous patch might still be useful for some other purpose.\nComments are welcome.\n\nThanks,\n\n[1] this exact use case is actually hiding behind this thread:\nhttps://www.postgresql.org/message-id/flat/CAEkBuzeno6ztiM1g4WdzKRJFgL8b2nfePNU%3Dq3sBiEZUm-D-sQ%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 23 Jul 2019 23:01:43 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais wrote:\n> Please, find in attachment a first trivial patch to support pg_walfile_name()\n> and pg_walfile_name_offset() on a standby.\n> Previous restriction on this functions seems related to ThisTimeLineID not\n> being safe on standby. This patch is fetching the timeline from\n> WalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,\n> this is updated each time some data are flushed to the WAL.\n\nFWIW, I don't have any objections to lift a bit the restrictions on\nthose functions if we can make that reliable enough. Now during\nrecovery you cannot rely on ThisTimeLineID as you say, mostly per the\nfollowing bit in xlog.c (the comment block a little bit up also has\nexplanations):\n /*\n * ThisTimeLineID is normally not set when we're still in recovery.\n * However, recycling/preallocating segments above needed ThisTimeLineID\n * to determine which timeline to install the segments on. Reset it now,\n * to restore the normal state of affairs for debugging purposes.\n */\n if (RecoveryInProgress())\n ThisTimeLineID = 0;\n\nYour patch does not account for the case of archive recovery, where\nthere is no WAL receiver, and as the shared memory state of the WAL\nreceiver is not set, 0 would be set. The replay timeline is something\nwe could use here instead via GetXLogReplayRecPtr().\nCreateRestartPoint actually takes the latest WAL receiver or replayed\npoint for its end LSN position, whichever is newer.\n\n> Last, I plan to produce an extension to support this on older release. Is\n> it something that could be integrated in official source tree during a minor\n> release or should I publish it on eg. pgxn?\n\nUnfortunately no. This is a behavior change so it cannot find its way\ninto back branches. The WAL receiver state is in shared memory and\npublished, so that's easy enough to get. We don't do that for XLogCtl\nunfortunately. 
I think that there are arguments for being more\nflexible with it, and perhaps have a system-level view to be able to\nlook at some of its fields.\n\nThere is also a downside with get_controlfile(), which is that it\nfetches directly the data from the on-disk pg_control, and\npost-recovery this only gets updated at the first checkpoint.\n--\nMichael",
"msg_date": "Wed, 24 Jul 2019 09:49:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "Hello Michael,\n\nOn Wed, 24 Jul 2019 09:49:05 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > Please, find in attachment a first trivial patch to support\n> > pg_walfile_name() and pg_walfile_name_offset() on a standby.\n> > Previous restriction on this functions seems related to ThisTimeLineID not\n> > being safe on standby. This patch is fetching the timeline from\n> > WalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,\n> > this is updated each time some data are flushed to the WAL. \n[...]\n> Your patch does not count for the case of archive recovery, where\n> there is no WAL receiver, and as the shared memory state of the WAL\n> receiver is not set 0 would be set.\n\nIndeed. I tested this topic with the following query and was fine with the\nNULL result:\n\n select pg_walfile_name(pg_last_wal_receive_lsn());\n\nI was fine with this result because my use case requires replication anyway. A\nNULL result would mean that the node never streamed from the old primary since\nits last startup, so a failover should ignore it anyway.\n\nHowever, NULL just comes from pg_last_wal_receive_lsn() here. The following\nquery result is wrong:\n\n > select pg_walfile_name('0/1')\n 000000000000000000000000\n\nI fixed that. See patch 0001-v2-* in attachment\n\n\n> The replay timeline is something we could use here instead via\n> GetXLogReplayRecPtr(). CreateRestartPoint actually takes the latest WAL\n> receiver or replayed point for its end LSN position, whichever is newer.\n\nI did consider GetXLogReplayRecPtr() or even XLogCtl->replayEndTLI (which is\nupdated right before the replay). However, both depend on read activity on the\nstandby. That's why I picked WalRcv->receivedTLI, which is updated regardless of the\nreading activity on the standby.\n\n> > Last, I plan to produce an extension to support this on older release. 
Is\n> > it something that could be integrated in official source tree during a minor\n> > release or should I publish it on eg. pgxn? \n> \n> Unfortunately no. This is a behavior change so it cannot find its way\n> into back branches.\n\nYes, my patch is a behavior change. But here, I was talking about an\nextension, not the core itself, to support this feature in older releases.\n\n> The WAL receiver state is in shared memory and published, so that's easy\n> enough to get. We don't do that for XLogCtl unfortunately.\n\nBoth are in shared memory, but WalRcv has a public function to get its\nreceivedTLI member.\n\nXLogCtl has nothing in public to expose its ThisTimeLineID member. However, from\na module, I'm able to fetch it using:\n\n XLogCtl = ShmemInitStruct(\"XLOG Ctl\", XLOGShmemSize(), &found);\n SpinLockAcquire(&XLogCtl->info_lck);\n tl = XLogCtl->ThisTimeLineID;\n SpinLockRelease(&XLogCtl->info_lck);\n\nAs the \"XLOG Ctl\" index entry already exists in shmem, ShmemInitStruct returns\nthe correct structure from there. Not sure this was supposed to be used this\nway though... Adding a public function might be cleaner, but it will not help\nfor older releases.\n\n> I think that there are arguments for being more flexible with it, and perhaps\n> have a system-level view to be able to look at some of its fields.\n\nGreat idea. I'll give it a try to keep the discussion going.\n\n> There is also a downside with get_controlfile(), which is that it\n> fetches directly the data from the on-disk pg_control, and\n> post-recovery this only gets updated at the first checkpoint.\n\nIndeed, that's why I started this patch and thread.\n\nThanks,",
"msg_date": "Wed, 24 Jul 2019 14:33:27 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
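For reference, the wrong result quoted above (pg_walfile_name('0/1') yielding all zeroes when the timeline is 0) follows directly from how a WAL segment file name is composed. A rough sketch of the arithmetic, mirroring the XLogFileName() macro under the default 16 MB segment size (it ignores the exact-boundary adjustment that pg_walfile_name applies via XLByteToPrevSeg, and the helper names are invented):

```python
WAL_SEG_SIZE = 16 * 1024 * 1024   # default wal_segment_size: 16 MB
XLOGID_SPAN = 0x100000000         # LSN range covered by one "xlog id"

def parse_lsn(text):
    """Parse the textual pg_lsn form 'X/X' into a 64-bit integer."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def walfile_name(tli, lsn, seg_size=WAL_SEG_SIZE):
    """Compose a WAL segment file name the way XLogFileName() does:
    three 8-digit hex fields -- timeline, segno / segments-per-xlogid,
    and segno % segments-per-xlogid."""
    segno = lsn // seg_size
    segs_per_id = XLOGID_SPAN // seg_size
    return "%08X%08X%08X" % (tli, segno // segs_per_id, segno % segs_per_id)

print(walfile_name(1, parse_lsn("0/1")))  # the name the thread expected
print(walfile_name(0, parse_lsn("0/1")))  # the bogus all-zero name
```

Since the timeline occupies the first eight hex digits, a timeline of 0 collapses the name of the very first segment to all zeroes, which is exactly the symptom reported for the unpatched standby case.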
{
"msg_contents": "Hello,\n\nOn Wed, 24 Jul 2019 14:33:27 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Wed, 24 Jul 2019 09:49:05 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais\n> > wrote: \n[...]\n> > I think that there are arguments for being more flexible with it, and\n> > perhaps have a system-level view to be able to look at some of its fields. \n> \n> Great idea. I'll give it a try to keep the discussion on.\n\nAfter some thinking, I did not find enough data to expose to justify the\ncreation of a system-level view. As I just need the current timeline, I\nwrote \"pg_current_timeline()\". Please, find the patch in attachment.\n\nThe current behavior is quite simple: \n* if the cluster is in production, return ThisTimeLineID\n* else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)\n\nThis is a really naive implementation. We should probably add some code around\nthe startup process to gather and share general recovery stats. This would\nallow fetching e.g. the current recovery method, the latest xlog file name restored\nfrom archives or streaming, its timeline, etc.\n\nAny thoughts?\n\nRegards,",
"msg_date": "Thu, 25 Jul 2019 19:38:08 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "Hi.\n\nAt Thu, 25 Jul 2019 19:38:08 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in <20190725193808.1648ddc8@firost>\n> On Wed, 24 Jul 2019 14:33:27 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> > On Wed, 24 Jul 2019 09:49:05 +0900\n> > Michael Paquier <michael@paquier.xyz> wrote:\n> > \n> > > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais\n> > > wrote: \n> [...]\n> > > I think that there are arguments for being more flexible with it, and\n> > > perhaps have a system-level view to be able to look at some of its fields. \n> > \n> > Great idea. I'll give it a try to keep the discussion on.\n> \n> After some thinking, I did not find enough data to expose to justify the\n> creation a system-level view. As I just need the current timeline I\n> wrote \"pg_current_timeline()\". Please, find the patch in attachment.\n> \n> The current behavior is quite simple: \n> * if the cluster is in production, return ThisTimeLineID\n> * else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)\n> \n> This is really naive implementation. We should probably add some code around\n> the startup process to gather and share general recovery stats. This would\n> allow to fetch eg. the current recovery method, latest xlog file name restored\n> from archives or streaming, its timeline, etc.\n> \n> Any thoughts?\n\nIf replay is delayed behind timeline switch point, replay-LSN and\nreceive/write/flush LSNs are on different timelines. 
When the\nreplica has not reached the new timeline to which an already\nreceived file belongs, the function returns a wrong file name,\nspecifically a name consisting of the latest segment number and\nan older timeline that the segment doesn't belong to.\n\nWe have an LSN reporting function for each of several objectives.\n\n pg_current_wal_lsn\n pg_current_wal_insert_lsn\n pg_current_wal_flush_lsn\n pg_last_wal_receive_lsn\n pg_last_wal_replay_lsn\n\nBut I'm not sure just adding further pg_last_*_timeline() functions to\nthis list is a good thing.\n\n\nThe function returns NULL for NULL input (STRICT behavior) but\nreturns (NULL, NULL) for an undefined timeline. I don't think the\ndifference is meaningful.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Hi.\n> \n> At Thu, 25 Jul 2019 19:38:08 +0200, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in <20190725193808.1648ddc8@firost>\n> > On Wed, 24 Jul 2019 14:33:27 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > \n> > > On Wed, 24 Jul 2019 09:49:05 +0900\n> > > Michael Paquier <michael@paquier.xyz> wrote:\n> > > \n> > > > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais\n> > > > wrote: \n> > [...] \n> > > > I think that there are arguments for being more flexible with it, and\n> > > > perhaps have a system-level view to be able to look at some of its\n> > > > fields. \n> > > \n> > > Great idea. I'll give it a try to keep the discussion on. \n> > \n> > After some thinking, I did not find enough data to expose to justify the\n> > creation a system-level view. As I just need the current timeline I\n> > wrote \"pg_current_timeline()\". Please, find the patch in attachment.\n> > \n> > The current behavior is quite simple: \n> > * if the cluster is in production, return ThisTimeLineID\n> > * else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)\n> > \n> > This is really naive implementation. We should probably add some code around\n> > the startup process to gather and share general recovery stats. This would\n> > allow to fetch eg. the current recovery method, latest xlog file name\n> > restored from archives or streaming, its timeline, etc.\n> > \n> > Any thoughts? \n> \n> If replay is delayed behind timeline switch point, replay-LSN and\n> receive/write/flush LSNs are on different timelines. 
When\n> replica have not reached the new timeline to which alredy\n> received file belongs, the fucntion returns wrong file name,\n> specifically a name consisting of the latest segment number and\n> the older timeline where the segment doesn't belong to.\n\nIndeed.\n\n> We have an LSN reporting function each for several objectives.\n> \n> pg_current_wal_lsn\n> pg_current_wal_insert_lsn\n> pg_current_wal_flush_lsn\n> pg_last_wal_receive_lsn\n> pg_last_wal_replay_lsn\n\nYes. In fact, my current implementation might be split as:\n\n pg_current_wal_tl: returns TL on a production cluster\n pg_last_wal_received_tl: returns last received TL on a standby\n\nIf useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and\n*flush_tl would be useful as a cluster in production is not supposed to\nchange its timeline during its lifetime.\n\n> But, I'm not sure just adding further pg_last_*_timeline() to\n> this list is a good thing..\n\nI think this is a much better idea than mixing different case (production and\nstandby) in the same function as I did. Moreover, it's much more coherent with\nother existing functions.\n\n> The function returns NULL for NULL input (STRICT behavior) but\n> returns (NULL, NULL) for undefined timeline. I don't think the\n> differene is meaningful.\n\nUnless I'm missing something, nothing\nreturns \"(NULL, NULL)\" in 0001-v1-Add-function-pg_current_timeline.patch.\n\nThank you for your feedback!\n\n\n",
"msg_date": "Fri, 26 Jul 2019 10:02:58 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 26 Jul 2019 10:02:58 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n[...]\n> > We have an LSN reporting function each for several objectives.\n> > \n> > pg_current_wal_lsn\n> > pg_current_wal_insert_lsn\n> > pg_current_wal_flush_lsn\n> > pg_last_wal_receive_lsn\n> > pg_last_wal_replay_lsn \n> \n> Yes. In fact, my current implementation might be split as:\n> \n> pg_current_wal_tl: returns TL on a production cluster\n> pg_last_wal_received_tl: returns last received TL on a standby\n> \n> If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and\n> *flush_tl would be useful as a cluster in production is not supposed to\n> change its timeline during its lifetime.\n> \n> > But, I'm not sure just adding further pg_last_*_timeline() to\n> > this list is a good thing.. \n> \n> I think this is a much better idea than mixing different case (production and\n> standby) in the same function as I did. Moreover, it's much more coherent with\n> other existing functions.\n\nPlease, find in attachment a new version of the patch. It now creates two new\nfunctions: \n\n pg_current_wal_tl()\n pg_last_wal_received_tl()\n\nRegards,",
"msg_date": "Fri, 26 Jul 2019 18:22:25 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 26 Jul 2019 18:22:25 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Fri, 26 Jul 2019 10:02:58 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> > On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> [...]\n> > > We have an LSN reporting function each for several objectives.\n> > > \n> > > pg_current_wal_lsn\n> > > pg_current_wal_insert_lsn\n> > > pg_current_wal_flush_lsn\n> > > pg_last_wal_receive_lsn\n> > > pg_last_wal_replay_lsn \n> > \n> > Yes. In fact, my current implementation might be split as:\n> > \n> > pg_current_wal_tl: returns TL on a production cluster\n> > pg_last_wal_received_tl: returns last received TL on a standby\n> > \n> > If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and\n> > *flush_tl would be useful as a cluster in production is not supposed to\n> > change its timeline during its lifetime.\n> > \n> > > But, I'm not sure just adding further pg_last_*_timeline() to\n> > > this list is a good thing.. \n> > \n> > I think this is a much better idea than mixing different case (production\n> > and standby) in the same function as I did. Moreover, it's much more\n> > coherent with other existing functions.\n> \n> Please, find in attachment a new version of the patch. It now creates two new\n> fonctions: \n> \n> pg_current_wal_tl()\n> pg_last_wal_received_tl()\n\nI just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().\nPlease find the corrected patch in attachment:\n0001-v3-Add-functions-to-get-timeline.patch\n\nAlso, TimeLineID is declared as a uint32. So why do we use\nPG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?\nSee eg. in pg_stat_get_wal_receiver().\n\nRegards,",
"msg_date": "Mon, 29 Jul 2019 12:26:31 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Fri, 26 Jul 2019 18:22:25 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>\n> > On Fri, 26 Jul 2019 10:02:58 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> >\n> > > On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > [...]\n> > > > We have an LSN reporting function each for several objectives.\n> > > >\n> > > > pg_current_wal_lsn\n> > > > pg_current_wal_insert_lsn\n> > > > pg_current_wal_flush_lsn\n> > > > pg_last_wal_receive_lsn\n> > > > pg_last_wal_replay_lsn\n> > >\n> > > Yes. In fact, my current implementation might be split as:\n> > >\n> > > pg_current_wal_tl: returns TL on a production cluster\n> > > pg_last_wal_received_tl: returns last received TL on a standby\n> > >\n> > > If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and\n> > > *flush_tl would be useful as a cluster in production is not supposed to\n> > > change its timeline during its lifetime.\n> > >\n> > > > But, I'm not sure just adding further pg_last_*_timeline() to\n> > > > this list is a good thing..\n> > >\n> > > I think this is a much better idea than mixing different case (production\n> > > and standby) in the same function as I did. Moreover, it's much more\n> > > coherent with other existing functions.\n> >\n> > Please, find in attachment a new version of the patch. It now creates two new\n> > fonctions:\n> >\n> > pg_current_wal_tl()\n> > pg_last_wal_received_tl()\n>\n> I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().\n> Please find the corrected patch in attachment:\n> 0001-v3-Add-functions-to-get-timeline.patch\n\nThanks for the patch! 
Here are some comments from me.\n\nYou need to write the documentation explaining the functions\nthat you're thinking of adding.\n\n+/*\n+ * Returns the current timeline on a production cluster\n+ */\n+Datum\n+pg_current_wal_tl(PG_FUNCTION_ARGS)\n\nThe timeline ID that this function returns seems almost\nthe same as pg_control_checkpoint().timeline_id,\nwhen the server is in production. So I'm not sure\nif it's worth adding that new function.\n\n+ currentTL = GetCurrentTimeLine();\n+\n+ PG_RETURN_INT32(currentTL);\n\nIs GetCurrentTimeLine() really necessary? Seems ThisTimeLineID can be\nreturned directly since it indicates the current timeline ID in production.\n\n+pg_last_wal_received_tl(PG_FUNCTION_ARGS)\n+{\n+ TimeLineID lastReceivedTL;\n+ WalRcvData *walrcv = WalRcv;\n+\n+ SpinLockAcquire(&walrcv->mutex);\n+ lastReceivedTL = walrcv->receivedTLI;\n+ SpinLockRelease(&walrcv->mutex);\n\nI think that it's smarter to use GetWalRcvWriteRecPtr() to\nget the last received TLI, like pg_last_wal_receive_lsn() does.\n\nThe timeline ID that this function returns is the same as\npg_stat_wal_receiver.received_tli while walreceiver is running.\nBut when walreceiver is not running, pg_stat_wal_receiver returns\nno record, and pg_last_wal_received_tl() would be useful to\nget the timeline only in this case. Is my understanding right?\n\n> Also, TimeLineID is declared as a uint32. So why do we use\n> PG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?\n> See eg. in pg_stat_get_wal_receiver().\n\npg_stat_wal_receiver.received_tli is declared as integer.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 4 Sep 2019 00:32:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Wed, 4 Sep 2019 00:32:03 +0900\nFujii Masao <masao.fujii@gmail.com> wrote:\n\n> On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > On Fri, 26 Jul 2019 18:22:25 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > \n> > > On Fri, 26 Jul 2019 10:02:58 +0200\n> > > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> > > Please, find in attachment a new version of the patch. It now creates two\n> > > new fonctions:\n> > >\n> > > pg_current_wal_tl()\n> > > pg_last_wal_received_tl() \n> >\n> > I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().\n> > Please find the corrected patch in attachment:\n> > 0001-v3-Add-functions-to-get-timeline.patch \n> \n> Thanks for the patch! Here are some comments from me.\n\nThank you for your review!\n\nPlease, find in attachment the v4 of the patch:\n0001-v4-Add-functions-to-get-timeline.patch\n\nAnswers below.\n\n> You need to write the documentation explaining the functions\n> that you're thinking to add.\n\nDone.\n\n> +/*\n> + * Returns the current timeline on a production cluster\n> + */\n> +Datum\n> +pg_current_wal_tl(PG_FUNCTION_ARGS)\n> \n> The timeline ID that this function returns seems almost\n> the same as pg_control_checkpoint().timeline_id,\n> when the server is in production. So I'm not sure\n> if it's worth adding that new function.\n\npg_control_checkpoint().timeline_id is read from the controldata file on disk\nwhich is asynchronously updated with the real status of the local cluster.\nRight after a promotion, fetching the TL from pg_control_checkpoint() is wrong\nand can cause race conditions on client side.\n\nThis is the main reason I am working on this patch.\n\n> + currentTL = GetCurrentTimeLine();\n> +\n> + PG_RETURN_INT32(currentTL);\n> \n> Is GetCurrentTimeLine() really necessary? Seems ThisTimeLineID can be\n> returned directly since it indicates the current timeline ID in production.\n\nIndeed. 
I might have over-focused on memory state. ThisTimeLineID seems to be\nupdated soon enough during the promotion, in fact, even before\nXLogCtl->ThisTimeLineID:\n\n if (ArchiveRecoveryRequested)\n {\n [...]\n ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI) + 1;\n [...]\n }\n\n /* Save the selected TimeLineID in shared memory, too */\n XLogCtl->ThisTimeLineID = ThisTimeLineID;\n\n> +pg_last_wal_received_tl(PG_FUNCTION_ARGS)\n> +{\n> + TimeLineID lastReceivedTL;\n> + WalRcvData *walrcv = WalRcv;\n> +\n> + SpinLockAcquire(&walrcv->mutex);\n> + lastReceivedTL = walrcv->receivedTLI;\n> + SpinLockRelease(&walrcv->mutex);\n> \n> I think that it's smarter to use GetWalRcvWriteRecPtr() to\n> get the last received TLI, like pg_last_wal_receive_lsn() does.\n\nI had been hesitant between the current implementation and using\nGetWalRcvWriteRecPtr(). I chose the current implementation to avoid unnecessary\noperations during the spinlock and make it as fast as possible.\n\nHowever, maybe the gain here is negligible in comparison to\ncalling GetWalRcvWriteRecPtr() and avoiding minor code duplication.\n\nBeing hesitant, v4 of the patch uses GetWalRcvWriteRecPtr() as suggested.\n\n> The timeline ID that this function returns is the same as\n> pg_stat_wal_receiver.received_tli while walreceiver is running.\n> But when walreceiver is not running, pg_stat_wal_receiver returns\n> no record, and pg_last_wal_received_tl() would be useful to\n> get the timeline only in this case. Is this my understanding right?\n\nExactly.\n \n> > Also, TimeLineID is declared as a uint32. So why do we use\n> > PG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?\n> > See eg. in pg_stat_get_wal_receiver(). \n> \n> pg_stat_wal_receiver.received_tli is declared as integer.\n\nOh, right. Thank you.\n\nThanks,",
"msg_date": "Fri, 6 Sep 2019 17:06:34 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Sat, Sep 7, 2019 at 12:06 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Wed, 4 Sep 2019 00:32:03 +0900\n> Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> > On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote:\n> > >\n> > > On Fri, 26 Jul 2019 18:22:25 +0200\n> > > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > >\n> > > > On Fri, 26 Jul 2019 10:02:58 +0200\n> > > > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> [...]\n> > > > Please, find in attachment a new version of the patch. It now creates two\n> > > > new fonctions:\n> > > >\n> > > > pg_current_wal_tl()\n> > > > pg_last_wal_received_tl()\n> > >\n> > > I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().\n> > > Please find the corrected patch in attachment:\n> > > 0001-v3-Add-functions-to-get-timeline.patch\n> >\n> > Thanks for the patch! Here are some comments from me.\n>\n> Thank you for your review!\n>\n> Please, find in attachment the v4 of the patch:\n> 0001-v4-Add-functions-to-get-timeline.patch\n\nThanks for updating the patch!\n\nShould we add regression tests for these functions? For example,\nwhat about using these functions to check the timeline switch case,\nin src/test/recovery/t/004_timeline_switch.pl?\n\n>\n> Answers bellow.\n>\n> > You need to write the documentation explaining the functions\n> > that you're thinking to add.\n>\n> Done.\n\nThanks!\n\n+ <entry>Get current write-ahead log timeline</entry>\n\nI'm not sure if \"current write-ahead log timeline\" is the proper wording.\nWould \"timeline ID of current write-ahead log\" be more appropriate?\n\n+ <entry><type>int</type></entry>\n+ <entry>Get last write-ahead log timeline received and sync to disk by\n+ streaming replication.\n\nSame as above. 
I think that \"timeline ID of last write-ahead log received\nand sync to disk ...\" is better here.\n\nLike pg_last_wal_receive_lsn(), something like \"If recovery has\ncompleted this will remain static at the value of the last WAL\nrecord received and synced to disk during recovery.\nIf streaming replication is disabled, or if it has not yet started,\nthe function returns NULL.\" should be in this description?\n\n>\n> > +/*\n> > + * Returns the current timeline on a production cluster\n> > + */\n> > +Datum\n> > +pg_current_wal_tl(PG_FUNCTION_ARGS)\n\nI think that \"tl\" in the function name should be \"tli\". \"tli\" is used\nfor other functions and views related to timeline, e.g.,\npg_stat_wal_receiver.received_tli. Thought?\n\n> >\n> > The timeline ID that this function returns seems almost\n> > the same as pg_control_checkpoint().timeline_id,\n> > when the server is in production. So I'm not sure\n> > if it's worth adding that new function.\n>\n> pg_control_checkpoint().timeline_id is read from the controldata file on disk\n> which is asynchronously updated with the real status of the local cluster.\n> Right after a promotion, fetching the TL from pg_control_checkpoint() is wrong\n> and can cause race conditions on client side.\n\nUnderstood.\n\n> > The timeline ID that this function returns is the same as\n> > pg_stat_wal_receiver.received_tli while walreceiver is running.\n> > But when walreceiver is not running, pg_stat_wal_receiver returns\n> > no record, and pg_last_wal_received_tl() would be useful to\n> > get the timeline only in this case. Is this my understanding right?\n>\n> Exactly.\n\nI'm just imagining that some users want to use pg_last_wal_receive_lsn() and\npg_last_wal_receive_tli() together to, e.g., get the name of WAL file received\nlast. 
But there can be a corner case where the return values of\npg_last_wal_receive_lsn() and of pg_last_wal_receive_tli() are inconsistent.\nThis can happen because those values are NOT gotten within a single lock.\nThat is, each function takes each lock to get each value.\n\nSo, to avoid that corner case and get a consistent WAL file name,\nwe might want to have a function that gets both LSN and\ntimeline ID of the last received WAL record within a single lock\n(i.e., just uses GetWalRcvWriteRecPtr()) and returns them.\nThought?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 9 Sep 2019 19:44:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Mon, 9 Sep 2019 19:44:10 +0900\nFujii Masao <masao.fujii@gmail.com> wrote:\n\n> On Sat, Sep 7, 2019 at 12:06 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > On Wed, 4 Sep 2019 00:32:03 +0900\n> > Fujii Masao <masao.fujii@gmail.com> wrote:\n> > \n[...]\n> Thanks for updating the patch!\n\nThank you for your review!\n\nPlease find in attachment a new version of the patch.\n\n 0001-v5-Add-facilities-to-fetch-real-timeline-from-SQL.patch\n\n> Should we add regression tests for these functions? For example,\n> what about using these functions to check the timeline switch case,\n> in src/test/recovery/t/004_timeline_switch.pl?\n\nIndeed, I added 6 tests to this file.\n\n> [...] \n\nThank you for all other suggestions. They all make sense for v4 of the patch.\nHowever, I removed pg_current_wal_tl() and pg_last_wal_received_tl() to explore\na patch paying attention to your next comment.\n\n> I'm just imaging that some users want to use pg_last_wal_receive_lsn() and\n> pg_last_wal_receive_tli() together to, e.g., get the name of WAL file received\n> last. But there can be a corner case where the return values of\n> pg_last_wal_receive_lsn() and of pg_last_wal_receive_tli() are inconsistent.\n> This can happen because those values are NOT gotten within single lock.\n> That is, each function takes each lock to get each value.\n> \n> So, to avoid that corner case and get consistent WAL file name,\n> we might want to have the function that gets both LSN and\n> timeline ID of the last received WAL record within single lock\n> (i.e., just uses GetWalRcvWriteRecPtr()) and returns them.\n> Thought?\n\nYou are right.\n\nSo either I add some new functions or I overload the existing ones.\n\nI was not convinced to add two new functions very close to pg_current_wal_lsn\nand pg_last_wal_receive_lsn but with a slightly different name (eg. 
suffixed\nwith _tli?).\n\nI chose to overload pg_current_wal_lsn and pg_last_wal_receive_lsn with\npg_current_wal_lsn(with_tli bool) and pg_last_wal_receive_lsn(with_tli bool).\n\nBoth functions return the record (lsn pg_lsn, timeline int4). If with_tli is\nNULL or false, the timeline field is NULL.\n\nDocumentation is updated to reflect this.\n\nThoughts?\n\nIf this solution is accepted, some other functions of the same family might be\ngood candidates as well, for the sake of homogeneity:\n\n* pg_current_wal_insert_lsn\n* pg_current_wal_flush_lsn\n* pg_last_wal_replay_lsn\n\nHowever, I'm not sure how useful this would be.\n\nThanks again for your time, suggestions and review!\n\nRegards,",
"msg_date": "Thu, 26 Sep 2019 19:20:46 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 07:20:46PM +0200, Jehan-Guillaume de Rorthais wrote:\n> If this solution is accepted, some other function of the same family might be\n> good candidates as well, for the sake of homogeneity:\n> \n> * pg_current_wal_insert_lsn\n> * pg_current_wal_flush_lsn\n> * pg_last_wal_replay_lsn\n> \n> However, I'm not sure how useful this would be.\n> \n> Thanks again for your time, suggestions and review!\n\n+{ oid => '3435', descr => 'current wal flush location',\n+ proname => 'pg_last_wal_receive_lsn', provolatile => 'v',\nproisstrict => 'f',\nThis description is incorrect.\n\nAnd please use OIDs in the range of 8000~9999 for patches in\ndevelopment. You could just use src/include/catalog/unused_oids which\nwould point out a random range.\n\n+ if (recptr == 0) {\n+ nulls[0] = 1;\n+ nulls[1] = 1;\n+ }\nThe indentation of the code is incorrect; these should use actual\nbooleans and recptr should be InvalidXLogRecPtr (note also the\nexistence of the macro XLogRecPtrIsInvalid). Just for the style.\n\nAs said in the last emails exchanged on this thread, I don't see how\nyou can avoid using multiple functions which have different meanings\ndepending on whether the cluster is a primary or a standby, knowing that we\nhave two different concepts of WAL when at recovery: the received\nLSN and the replayed LSN, and three concepts for primaries (insert,\ncurrent, flush). I agree as well with the point of Fujii-san about\nnot returning the TLI and the LSN across different functions as this\nopens the door for a risk of inconsistency for the data received by\nthe client.\n\n+ * When the first parameter (variable 'with_tli') is true, returns the current\n+ * timeline as second field. 
If false, second field is null.\nI don't see much point in having this input parameter which\ndetermines the NULL-ness of one of the result columns, and I think\nthat you had better use a completely different function name for each\none of them instead of overloading the existing functions. Let's remember that a\nlot of tools use the existing functions directly in the SELECT clause\nfor LSN calculations, which is just a 64-bit integer *without* a\ntimeline assigned to it. However, your patch mixes both concepts by\nusing pg_current_wal_lsn.\n\nSo we could do more with the introduction of five new functions which\nallow grabbing the LSN and the TLI in use for replay, received, insert,\nwrite and flush positions:\n- pg_current_wal_flush_info\n- pg_current_wal_insert_info\n- pg_current_wal_info\n- pg_last_wal_receive_info\n- pg_last_wal_replay_info\n\nI would be actually tempted to do the following: one single SRF\nfunction, say pg_wal_info which takes a text argument in input with\nthe following values: flush, write, insert, receive, replay. Thinking\nmore about it, that would be rather neat, and more extensible than the\nrest discussed until now. See for example PostgresNode::lsn.\n--\nMichael",
"msg_date": "Wed, 11 Dec 2019 14:20:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> I would be actually tempted to do the following: one single SRF\n> function, say pg_wal_info which takes a text argument in input with\n> the following values: flush, write, insert, receive, replay. Thinking\n> more about it that would be rather neat, and more extensible than the\n> rest discussed until now. See for example PostgresNode::lsn.\n\nI've not followed this discussion very closely but I agree entirely that\nit's really nice to have the timeline be able to be queried in a more\ntimely manner than asking through pg_control_checkpoint() gives you.\n\nI'm not sure about adding a text argument to such a function though, I\nwould think you'd either have multiple rows if it's an SRF that gives\nyou the information on each row and allows a user to filter with a WHERE\nclause, or do something like what pg_stat_replication has and just have\na bunch of columns.\n\nGiven that we've already gone with the \"bunch of columns\" approach\nelsewhere, it seems like that approach would be more consistent.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 10:16:29 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 10:16:29AM -0500, Stephen Frost wrote:\n> I've not followed this discussion very closely but I agree entirely that\n> it's really nice to have the timeline be able to be queried in a more\n> timely manner than asking through pg_control_checkpoint() gives you.\n> \n> I'm not sure about adding a text argument to such a function though, I\n> would think you'd either have multiple rows if it's an SRF that gives\n> you the information on each row and allows a user to filter with a WHERE\n> clause, or do something like what pg_stat_replication has and just have\n> a bunch of columns.\n\nWith a NULL added for the values which cannot be defined then, like\ntrying to use the function on a primary for the fields which can only\nshow up at recovery? That would be possible, still my heart tells me\nthat a function returning one row is a more natural approach for\nthis stuff. I may be under too much used to what we have in the TAP\ntests though.\n--\nMichael",
"msg_date": "Thu, 12 Dec 2019 00:24:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Wed, Dec 11, 2019 at 10:16:29AM -0500, Stephen Frost wrote:\n> > I've not followed this discussion very closely but I agree entirely that\n> > it's really nice to have the timeline be able to be queried in a more\n> > timely manner than asking through pg_control_checkpoint() gives you.\n> > \n> > I'm not sure about adding a text argument to such a function though, I\n> > would think you'd either have multiple rows if it's an SRF that gives\n> > you the information on each row and allows a user to filter with a WHERE\n> > clause, or do something like what pg_stat_replication has and just have\n> > a bunch of columns.\n> \n> With a NULL added for the values which cannot be defined then, like\n> trying to use the function on a primary for the fields which can only\n> show up at recovery? \n\nSure, the function would only return those values that make sense for\nthe state that the system is in.\n\n> That would be possible, still my heart tells me\n> that a function returning one row is a more natural approach for\n> this stuff. I may be under too much used to what we have in the TAP\n> tests though.\n\nI'm confused- wouldn't the above approach be a function that's returning\nonly one row, if you had a bunch of columns and then had NULL values for\nthose cases that didn't apply..? Or, if you were thinking about the SRF\napproach that you suggested, you could use a WHERE clause to make it\nonly one row... Though I can see how it's nicer to just have one row in\nsome cases which is why I was suggesting the \"bunch of columns\"\napproach.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 11 Dec 2019 10:45:25 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 10:45:25AM -0500, Stephen Frost wrote:\n> I'm confused- wouldn't the above approach be a function that's returning\n> only one row, if you had a bunch of columns and then had NULL values for\n> those cases that didn't apply..? Or, if you were thinking about the SRF\n> approach that you suggested, you could use a WHERE clause to make it\n> only one row... Though I can see how it's nicer to just have one row in\n> some cases which is why I was suggesting the \"bunch of columns\"\n> approach.\n\nOh, sorry. I see the confusion now and that's my fault. In\nhttps://www.postgresql.org/message-id/20191211052002.GK72921@paquier.xyz\nI mentioned a SRF function which takes an input argument, but that\nmakes no sense. What I would prefer having is just having one\nfunction, returning one row (LSN, TLI), using in input one argument to\nextract the WAL information the caller wants with five possible cases\n(write, insert, flush, receive, replay).\n\nThen, what you are referring to is one function which returns all\n(LSN,TLI) for the five cases (write, insert, etc.), so it would return\none row with 10 columns, with NULL mapping to the values which have no\nmeaning (like replay on a primary).\n\nAnd on top of that we have a third possibility: one SRF function\nreturning 5 rows with three attributes (mode, LSN, TLI), where mode\ncorresponds to one value in the set {write, insert, etc.}.\n\nI actually prefer the first one, and you mentioned the second. But\nthere could be a point in doing the third one. An advantage of the\nsecond and third ones is that you may be able to get a consistent view\nof all the data, but it means holding locks to look at the values a\nbit longer. Let's see what others think.\n--\nMichael",
"msg_date": "Fri, 13 Dec 2019 16:12:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Wed, 11 Dec 2019 14:20:02 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Sep 26, 2019 at 07:20:46PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > If this solution is accepted, some other function of the same family might\n> > be good candidates as well, for the sake of homogeneity:\n> > \n> > * pg_current_wal_insert_lsn\n> > * pg_current_wal_flush_lsn\n> > * pg_last_wal_replay_lsn\n> > \n> > However, I'm not sure how useful this would be.\n> > \n> > Thanks again for your time, suggestions and review! \n> \n> +{ oid => '3435', descr => 'current wal flush location',\n> + proname => 'pg_last_wal_receive_lsn', provolatile => 'v',\n> proisstrict => 'f',\n> This description is incorrect.\n\nIndeed. And the one for pg_current_wal_lsn(bool) as well.\n\n> And please use OIDs in the range of 8000~9999 for patches in\n> development. You could just use src/include/catalog/unused_oids which\n> would point out a random range.\n\nThank you for this information, I wasn't aware.\n\n> + if (recptr == 0) {\n> + nulls[0] = 1;\n> + nulls[1] = 1;\n> + }\n> The indendation of the code is incorrect, these should use actual\n> booleans and recptr should be InvalidXLogRecPtr (note also the\n> existence of the macro XLogRecPtrIsInvalid). Just for the style.\n\nFixed on my side. Thanks.\n\n> As said in the last emails exchanged on this thread, I don't see how\n> you cannot use multiple functions which have different meaning\n> depending on if the cluster is a primary or a standby knowing that we\n> have two different concepts of WAL when at recovery: the received\n> LSN and the replayed LSN, and three concepts for primaries (insert,\n> current, flush). \n\nAs I wrote in my previous email, existing functions could be overloaded\nas well for the sake of homogeneity. 
So the five of them would have similar\nbehavior/API.\n\n> I agree as well with the point of Fujii-san about\n> not returning the TLI and the LSN across different functions as this\n> opens the door for a risk of inconsistency for the data received by\n> the client.\n\nMy last patch fixed that, indeed.\n\n> + * When the first parameter (variable 'with_tli') is true, returns the\n> current\n> + * timeline as second field. If false, second field is null.\n> I don't see much the point of having this input parameter which\n> determines the NULL-ness of one of the result columns, and I think\n> that you had better use a completely different function name for each\n> one of them instead of enforcing the functions. Let's remember that a\n> lot of tools use the existing functions directly in the SELECT clause\n> for LSN calculations, which is just a 64-bit integer *without* a\n> timeline assigned to it. However your patch mixes both concepts by\n> using pg_current_wal_lsn.\n\nSorry, I realize I was not clear enough about implementation details.\nMy latest patch does **not** introduce any regression for existing tools. If you do\nnot pass any parameter, the behavior is the same, only one column:\n\n # primary\n $ cat <<EOQ|psql -XAtp 5432\n select * from pg_current_wal_lsn();\n select * from pg_current_wal_lsn(NULL);\n select * from pg_current_wal_lsn(true);\n EOQ\n 0/15D5BA0\n 0/15D5BA0|\n 0/15D5BA0|1\n\n # secondary\n $ cat <<EOQ|psql -XAtp 5433\n select * from pg_last_wal_receive_lsn();\n select * from pg_last_wal_receive_lsn(NULL);\n select * from pg_last_wal_receive_lsn(true);\n EOQ\n 0/15D5BA0\n 0/15D5BA0|\n 0/15D5BA0|1\n\nIt's kind of the same approach as when parameters were added to\neg. pg_stop_backup() to change its behavior between exclusive and\nnon-exclusive backups. But I admit I know of no function that changes its return type\nbased on the given parameter. 
I understand your concern.\n\n> So we could do more with the introduction of five new functions which \n> allow to grab the LSN and the TLI in use for replay, received, insert,\n> write and flush positions:\n> - pg_current_wal_flush_info\n> - pg_current_wal_insert_info\n> - pg_current_wal_info\n> - pg_last_wal_receive_info\n> - pg_last_wal_replay_info\n\nI could go this way if you prefer, maybe using _tli as suffix instead of _info\nas this is the only new info added. I think it feels redundant with original\nfuncs, but it might be the simplest solution.\n\n> I would be actually tempted to do the following: one single SRF\n> function, say pg_wal_info which takes a text argument in input with\n> the following values: flush, write, insert, receive, replay. Thinking\n> more about it that would be rather neat, and more extensible than the\n> rest discussed until now. See for example PostgresNode::lsn.\n\nI'll answer in your other mail that summarizes the other possibilities.\n\nThanks!\n\n\n",
"msg_date": "Thu, 19 Dec 2019 23:41:36 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 13 Dec 2019 16:12:55 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Dec 11, 2019 at 10:45:25AM -0500, Stephen Frost wrote:\n> > I'm confused- wouldn't the above approach be a function that's returning\n> > only one row, if you had a bunch of columns and then had NULL values for\n> > those cases that didn't apply..? Or, if you were thinking about the SRF\n> > approach that you suggested, you could use a WHERE clause to make it\n> > only one row... Though I can see how it's nicer to just have one row in\n> > some cases which is why I was suggesting the \"bunch of columns\"\n> > approach. \n> \n> Oh, sorry. I see the confusion now and that's my fault. In\n> https://www.postgresql.org/message-id/20191211052002.GK72921@paquier.xyz\n> I mentioned a SRF function which takes an input argument, but that\n> makes no sense. What I would prefer having is just having one\n> function, returning one row (LSN, TLI), using in input one argument to\n> extract the WAL information the caller wants with five possible cases\n> (write, insert, flush, receive, replay).\n\nIt looks odd when we look at the other five existing functions of the same family\nbut without the tli. And this user interaction with an admin function is quite\ndifferent from what we are used to with other admin funcs. But mostly, when I\nthink of such a function, I keep thinking this parameter should be a WHERE\nclause after a SRF function.\n\n-1\n\n> Then, what you are referring to is one function which returns all\n> (LSN,TLI) for the five cases (write, insert, etc.), so it would return\n> one row with 10 columns, with NULL mapping to the values which have no\n> meaning (like replay on a primary).\n\nThis would look like some other pg_stat_* functions, eg. pg_stat_get_archiver.\nI'm OK with this. This could even be turned into a catalog view.\n\nHowever, what's the point of gathering all the values eg from a production\ncluster? 
Is it really useful to compare current/insert/flush LSN from wal\nwriter?\n\nIt's easier to answer from a standby point of view as the lag between received\nand replayed might be interesting to report in various situations.\n\n> And on top of that we have a third possibility: one SRF function\n> returning 5 rows with three attributes (mode, LSN, TLI), where mode\n> corresponds to one value in the set {write, insert, etc.}.\n\nI prefer the second one. Just select the field(s) you need, no need for a WHERE\nclause, similar to some other stats functions.\n\n-1\n\n\nAs a fourth possibility, as I explained my last implementation details badly, I\nstill hope we can keep it in the loop here. Just overload existing functions\nwith ones that take a boolean as parameter and add the TLI as a second field,\neg.:\n\n Name | Result type | Argument data types\n-------------------+--------------+-------------------------------------------\npg_current_wal_lsn | pg_lsn |\npg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli int\n\n\nAnd the fifth one, implementing brand new functions:\n\n pg_current_wal_lsn_tli\n pg_current_wal_insert_lsn_tli\n pg_current_wal_flush_lsn_tli\n pg_last_wal_receive_lsn_tli\n pg_last_wal_replay_lsn_tli\n\n> I actually prefer the first one, and you mentioned the second. But\n> there could be a point in doing the third one. An advantage of the\n> second and third ones is that you may be able to get a consistent view\n> of all the data, but it means holding locks to look at the values a\n> bit longer. Let's see what others think.\n\nI like the fourth one, but I was not able to return only one field if the given\nparameter is false or NULL. Giving false as argument to these funcs has no\nmeaning compared to the original one without arg. I ended up with this solution\nbecause I was worried about adding five more funcs really close to some\nexisting one.\n\nThe fifth one is more consistent with what we already have.\n\nThanks again.\n\nRegards,",
"msg_date": "Fri, 20 Dec 2019 00:35:19 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "At Fri, 20 Dec 2019 00:35:19 +0100, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Fri, 13 Dec 2019 16:12:55 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n\nThe first one;\n\n> > I mentioned a SRF function which takes an input argument, but that\n> > makes no sense. What I would prefer having is just having one\n> > function, returning one row (LSN, TLI), using in input one argument to\n> > extract the WAL information the caller wants with five possible cases\n> > (write, insert, flush, receive, replay).\n> \n> It looks odd when we look at other five existing functions of the same family\n> but without the tli. And this user interaction with admin function is quite\n> different of what we are used to with other admin funcs. But mostly, when I\n> think of such function, I keep thinking this parameter should be a WHERE\n> clause after a SRF function.\n> \n> -1\n\nIt is related to the third one; it may be annoying that the case names\ncannot get any help from psql completion..\n\n\nThe second one;\n\n> > Then, what you are referring to is one function which returns all\n> > (LSN,TLI) for the five cases (write, insert, etc.), so it would return\n> > one row with 10 columns, with NULL mapping to the values which have no\n> > meaning (like replay on a primary).\n> \n> This would looks like some other pg_stat_* functions, eg. pg_stat_get_archiver.\n> I'm OK with this. This could even be turned as a catalog view.\n> \n> However, what's the point of gathering all the values eg from a production\n> cluster? Is it really useful to compare current/insert/flush LSN from wal\n> writer?\n\nThere is a period where pg_controldata shows the previous TLI after\npromotion. It's useful if we can read the up-to-date TLI from a live\nstandby. 
I thought that this project is for that case..\n\n> It's easier to answer from a standby point of view as the lag between received\n> and replayed might be interesting to report in various situations.\n\n\nThe third one;\n\n> > And on top of that we have a third possibility: one SRF function\n> > returning 5 rows with three attributes (mode, LSN, TLI), where mode\n> > corresponds to one value in the set {write, insert, etc.}.\n> \n> I prefer the second one. Just select the field(s) you need, no need WHERE\n> clause, similar to some other stats function.\n> \n> -1\n\nIt might be clean in a sense, but I can't come up with a case where\nthe format is useful..\n\nAnyway, as with the first one, the case names (write, insert,\nflush, receive, replay) come from two different mechanisms and\nshowing them in a row could be confusing.\n\n\n> As a fourth possibility, as I badly explained my last implementation details, I\n> still hope we can keep it in the loop here. Just overload existing functions\n> with ones that takes a boolean as parameter and add the TLI as a second field,\n> eg.:\n> \n> Name | Result type | Argument data types\n> -------------------+--------------+-------------------------------------------\n> pg_current_wal_lsn | pg_lsn |\n> pg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli int\n\nI prefer this one, in the sense of similarity with existing functions.\n\n> And the fifth one, implementing brand new functions:\n> \n> pg_current_wal_lsn_tli\n> pg_current_wal_insert_lsn_tli\n> pg_current_wal_flush_lsn_tli\n> pg_last_wal_receive_lsn_tli\n> pg_last_wal_replay_lsn_tli\n\nMmmmm.... We should remove existing ones instead? (Of course we don't,\nthough.)\n\n> > I actually prefer the first one, and you mentioned the second. But\n> > there could be a point in doing the third one. 
An advantage of the\n> > second and third ones is that you may be able to get a consistent view\n> > of all the data, but it means holding locks to look at the values a\n> > bit longer. Let's see what others think.\n> \n> I like the fourth one, but I was not able to return only one field if given\n> parameter is false or NULL. Giving false as argument to these funcs has no\n> meaning compared to the original one without arg. I end up with this solution\n> because I was worried about adding five more funcs really close to some\n> existing one.\n\nRight. It is a restriction of polymorphic functions. It is the same\nsituation as with pg_stop_backup() and pg_stop_backup(true).\n\n> Fifth one is more consistent with what we already have.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 20 Dec 2019 13:41:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 20 Dec 2019 13:41:25 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Fri, 20 Dec 2019 00:35:19 +0100, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in \n> > On Fri, 13 Dec 2019 16:12:55 +0900\n> > Michael Paquier <michael@paquier.xyz> wrote: \n> \n> The first one;\n> \n> > > I mentioned a SRF function which takes an input argument, but that\n> > > makes no sense. What I would prefer having is just having one\n> > > function, returning one row (LSN, TLI), using in input one argument to\n> > > extract the WAL information the caller wants with five possible cases\n> > > (write, insert, flush, receive, replay). \n> > \n> > It looks odd when we look at other five existing functions of the same\n> > family but without the tli. And this user interaction with admin function\n> > is quite different of what we are used to with other admin funcs. But\n> > mostly, when I think of such function, I keep thinking this parameter\n> > should be a WHERE clause after a SRF function.\n> > \n> > -1 \n> \n> It is realted to the third one, it may be annoying that the case names\n> cannot have an aid of psql-completion..\n\nindeed.\n\n> The second one;\n> \n> > > Then, what you are referring to is one function which returns all\n> > > (LSN,TLI) for the five cases (write, insert, etc.), so it would return\n> > > one row with 10 columns, with NULL mapping to the values which have no\n> > > meaning (like replay on a primary). \n> > \n> > This would looks like some other pg_stat_* functions, eg.\n> > pg_stat_get_archiver. I'm OK with this. This could even be turned as a\n> > catalog view.\n> > \n> > However, what's the point of gathering all the values eg from a production\n> > cluster? Is it really useful to compare current/insert/flush LSN from wal\n> > writer? \n> \n> There is a period where pg_controldata shows the previous TLI after\n> promotion. It's useful if we can read the up-to-date TLI from live\n> standby. 
I thought that this project is for that case..\n\nI was not asking about the usefulness of LSN+TLI itself. \nI was wondering about the use case of gathering all 6 columns current+tli,\ninsert+tli and flush+tli from a production/primary cluster.\n\n[...]\n> > As a fourth possibility, as I badly explained my last implementation\n> > details, I still hope we can keep it in the loop here. Just overload\n> > existing functions with ones that takes a boolean as parameter and add the\n> > TLI as a second field, eg.:\n> > \n> > Name | Result type | Argument data types\n> > -------------------+--------------+-------------------------------------------\n> > pg_current_wal_lsn | pg_lsn |\n> > pg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli\n> > int \n> \n> I prefer this one, in the sense of similarity with existing functions.\n\nthanks\n\n> > And the fifth one, implementing brand new functions:\n> > \n> > pg_current_wal_lsn_tli\n> > pg_current_wal_insert_lsn_tli\n> > pg_current_wal_flush_lsn_tli\n> > pg_last_wal_receive_lsn_tli\n> > pg_last_wal_replay_lsn_tli \n> \n> Mmmmm.... We should remove exiting ones instead? (Of couse we don't,\n> though.)\n\nYes, that would be great but sadly, it would introduce a regression on various\ntools relying on them. At least, the ones doing \"select *\" or most\nprobably \"select func()\".\n\nBut anyway, adding 5 funcs is not a big deal either. Too bad they are so close\nto existing ones though.\n\n> > > I actually prefer the first one, and you mentioned the second. But\n> > > there could be a point in doing the third one. An advantage of the\n> > > second and third ones is that you may be able to get a consistent view\n> > > of all the data, but it means holding locks to look at the values a\n> > > bit longer. Let's see what others think. \n> > \n> > I like the fourth one, but I was not able to return only one field if given\n> > parameter is false or NULL. 
Giving false as argument to these funcs has no\n> > meaning compared to the original one without arg. I end up with this\n> > solution because I was worried about adding five more funcs really close to\n> > some existing one. \n> \n> Right. It is a restriction of polymorphic functions. It is in the same\n> relation with pg_stop_backup() and pg_stop_backup(true).\n\nindeed.\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 11:14:28 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 11:14:28AM +0100, Jehan-Guillaume de Rorthais wrote:\n> Yes, that would be great but sadly, it would introduce a regression on various\n> tools relying on them. At least, the one doing \"select *\" or most\n> probably \"select func()\".\n> \n> But anyway, adding 5 funcs is not a big deal neither. Too bad they are so close\n> to existing ones though.\n\nConsistency of the data matters a lot if we want to build reliable\ntools on top of them in case someone would like to compare the various\nmodes, and using different functions for those fields creates locking\nissues (somewhat the point of Fujii-san upthread?). If nobody likes\nthe approach of one function, returning one row, taking in input the\nmode wanted, then I would not really object to Stephen's idea on the\nmatter about having a multi-column function returning one row.\n\n>> Right. It is a restriction of polymorphic functions. It is in the same\n>> relation with pg_stop_backup() and pg_stop_backup(true).\n\n(pg_current_wal_lsn & co talk about LSNs, not TLIs).\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 12:36:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Mon, 23 Dec 2019 12:36:56 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Dec 20, 2019 at 11:14:28AM +0100, Jehan-Guillaume de Rorthais wrote:\n> > Yes, that would be great but sadly, it would introduce a regression on\n> > various tools relying on them. At least, the one doing \"select *\" or most\n> > probably \"select func()\".\n> > \n> > But anyway, adding 5 funcs is not a big deal neither. Too bad they are so\n> > close to existing ones though. \n> \n> Consistency of the data matters a lot if we want to build reliable\n> tools on top of them in case someone would like to compare the various\n> modes, and using different functions for those fields creates locking\n> issues (somewhat the point of Fujii-san upthread?).\n\nTo sum up: the original patch was about fetching the current timeline of a\nstandby from memory without relying on the asynchronous controlfile or\npg_stat_get_wal_receiver() which only shows data when the wal_receiver is\nrunning.\n\nFujii-san was pointing out we must fetch both the received LSN and its timeline\nwith the same lock so they are consistent.\n\nMichael is now discussing fetching multiple LSNs and their timelines,\nwhile keeping them consistent, eg. received+tli and applied+tli. Thank you for\npointing this out. \n\nI thought about various ways to deal with this concern and would like to\ndiscuss/defend a new option based on the existing pg_stat_get_wal_receiver()\nfunction. The only problem I'm facing with this function is that it returns\na full NULL record if no wal receiver is active.\n\nMy idea would be to return a row from pg_stat_get_wal_receiver() as soon as\na wal receiver has been replicating during the uptime of the standby, no\nmatter if there's one currently working or not. If no wal receiver is active,\nthe \"pid\" field would be NULL and the \"status\" would report eg. 
\"inactive\".\nAll other fields would report their last known value as they are kept in\nthe shared memory WalRcv struct.\n\nFrom the monitoring and HA point of view, we are now able to know that a wal\nreceiver existed, the lsn at which it stopped, on what timeline, all consistent\nunder the same lock. That answers my original goal. We could extend this with two\nmore fields about replayed lsn and timeline to address Michael's last concern\nif we decide it's really needed (and I think it's a valid concern for eg.\nmonitoring tools).\n\nThere's some more potential discussion about the pg_stat_wal_receiver view\nwhich relies on pg_stat_get_wal_receiver(). My proposal does not introduce a\nregression with it as the view already filters out NULL data using \"WHERE s.pid\nIS NOT NULL\". But:\n\n 1. we could decide to remove this filter to expose the data even when no wal\n receiver is active. It's the same behavior as the pg_stat_subscription view.\n It could introduce a regression from the tools' point of view, but adds some\n useful information. I would vote 0 for it.\n 2. we could extend it with new replayed lsn/tli fields. I would vote +1 for\n it.\n\nOn the \"dark\" side of this proposal, we do not deal with the primary side. We\nstill have no way to fetch various lsn+tli from the WAL Writer. However, I\nincluded pg_current_wal_lsn_tl() in my original patch only for homogeneity\nreasons and the discussion slipped on this side while paying attention to the\nuser facing function logic and homogeneity. If this discussion decides this is a\nuseful feature, I think it could be addressed in another patch (and I volunteer\nto deal with it).\n\nBelow is a sum-up of this 6th proposition with examples. 
When wal receiver never\nstarted (same as today):\n\n -[ RECORD 1 ]---------+--\n pid | Ø\n status | Ø\n receive_start_lsn | Ø\n receive_start_tli | Ø\n received_lsn | Ø\n received_tli | Ø\n last_msg_send_time | Ø\n last_msg_receipt_time | Ø\n latest_end_lsn | Ø\n latest_end_time | Ø\n slot_name | Ø\n sender_host | Ø\n sender_port | Ø\n conninfo | Ø\n\nWhen wal receiver is active:\n\n $ select * from pg_stat_get_wal_receiver();\n -[ RECORD 1 ]---------+-----------------------------\n pid | 8576\n status | streaming\n receive_start_lsn | 0/4000000\n receive_start_tli | 1\n received_lsn | 0/4000148\n received_tli | 1\n last_msg_send_time | 2019-12-23 12:28:52.588738+01\n last_msg_receipt_time | 2019-12-23 12:28:52.588839+01\n latest_end_lsn | 0/4000148\n latest_end_time | 2019-12-23 11:15:43.431657+01\n slot_name | Ø\n sender_host | /tmp\n sender_port | 15441\n conninfo | port=15441 application_name=s\n\nWhen wal receiver is not running and shared memory WalRcv is reporting past\nactivity:\n\n $ select * from pg_stat_get_wal_receiver();\n -[ RECORD 1 ]---------+-----------------------------\n pid | Ø\n status | inactive\n receive_start_lsn | 0/4000000\n receive_start_tli | 1\n received_lsn | 0/4000148\n received_tli | 1\n last_msg_send_time | 2019-12-23 12:28:52.588738+01\n last_msg_receipt_time | 2019-12-23 12:28:52.588839+01\n latest_end_lsn | 0/4000148\n latest_end_time | 2019-12-23 11:15:43.431657+01\n slot_name | Ø\n sender_host | /tmp\n sender_port | 15441\n conninfo | port=15441 application_name=s\n\nI just have a doubt about including the last three fields or setting them to\nNULL. Note that the information is present and might still be useful to\nunderstand what was the original source of a standby before disconnection.\n\nRegards,\n\n\n",
"msg_date": "Mon, 23 Dec 2019 15:38:16 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "Hi,\n\nOn Mon, 23 Dec 2019 15:38:16 +0100\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> My idea would be to return a row from pg_stat_get_wal_receiver() as soon as\n> a wal receiver has been replicating during the uptime of the standby, no\n> matter if there's one currently working or not. If no wal receiver is active,\n> the \"pid\" field would be NULL and the \"status\" would reports eg. \"inactive\".\n> All other fields would report their last known value as they are kept in\n> shared memory WalRcv struct.\n\nPlease, find in attachment a patch implementing the above proposal.\n\nRegards,",
"msg_date": "Fri, 3 Jan 2020 16:11:38 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "At Fri, 3 Jan 2020 16:11:38 +0100, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> Hi,\n> \n> On Mon, 23 Dec 2019 15:38:16 +0100\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> [...]\n> > My idea would be to return a row from pg_stat_get_wal_receiver() as soon as\n> > a wal receiver has been replicating during the uptime of the standby, no\n> > matter if there's one currently working or not. If no wal receiver is active,\n> > the \"pid\" field would be NULL and the \"status\" would reports eg. \"inactive\".\n> > All other fields would report their last known value as they are kept in\n> > shared memory WalRcv struct.\n> \n> Please, find in attachment a patch implementing the above proposal.\n\nAt Mon, 23 Dec 2019 15:38:16 +0100, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> 1. we could decide to remove this filter to expose the data even when no wal\n> receiver is active. It's the same behavior than pg_stat_subscription view.\n> It could introduce regression from tools point of view, but adds some\n> useful information. I would vote 0 for it.\n\nA subscription exists from the moment it is defined, regardless of whether it is\nactive or not. It is strange that we see a line in the view if\nreplication is not configured. But it is reasonable to show one if it is\nconfigured. We could do that by checking PrimaryConnInfo. (I would\nvote +0.5 for it).\n\n> 2. we could extend it with new replayed lsn/tli fields. I would vote +1 for\n> it.\n\n+1. As of now a walreceiver lives for just one timeline, because it ends\nwith a disconnection from the walsender when the master moves to a new\ntimeline. That being said, we already have the columns for TLI for\nboth the starting and received-up-to LSN so we would need it also for\nthe replayed LSN for a consistent look.\n\nThe function is going to show \"streaming\" but conninfo is not shown\nuntil the connection is established. That state is currently hidden by the\nPID filtering of the view. 
We might need to keep the WALRCV_STARTING\nstate until the connection is established.\n\nsender_host and sender_port have bogus values until the connection is\nactually established and conninfo is updated. They, as well as\nconninfo, should be hidden until the connection is established, too, I\nthink.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Jan 2020 15:57:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Tue, 07 Jan 2020 15:57:29 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 23 Dec 2019 15:38:16 +0100, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in \n> > 1. we could decide to remove this filter to expose the data even when no\n> > wal receiver is active. It's the same behavior than pg_stat_subscription\n> > view. It could introduce regression from tools point of view, but adds some\n> > useful information. I would vote 0 for it. \n> \n> A subscription exists since it is defined and regardless whether it is\n> active or not. It is strange that we see a line in the view if\n> replication is not configured. But it is reasonable to show if it is\n> configured. We could do that by checking PrimaryConnInfo. (I would\n> vote +0.5 for it).\n\nThanks. I put this on hold for now, I'm waiting for some more opinions as\nthere's no strong position yet.\n\n> > 2. we could extend it with new replayed lsn/tli fields. I would vote +1 for\n> > it. \n> \n> +1. As of now a walsender lives for just one timeline, because it ends\n> for disconnection from walsender when the master moves to a new\n> timeline. That being said, we already have the columns for TLI for\n> both starting and received-up-to LSN so we would need it also for\n> replayed LSN for a consistent looking.\n\nI added applied_lsn and applied_tli to the pg_stat_get_wal_receiver function\noutput columns.\n\nHowever, note that applying xlog is the responsibility of the startup process,\nnot the wal receiver one. Is it OK that pg_stat_get_wal_receiver\nreturns stats not directly related to the wal receiver?\n\n> The function is going to show \"streaming\" but conninfo is not shown\n> until connection establishes. That state is currently hidden by the\n> PID filtering of the view. 
We might need to keep the WALRCV_STARTING\n> state until connection establishes.\n\nIndeed, fixed.\n\n> sender_host and sender_port have bogus values until connection is\n> actually established when conninfo is changed. They as well as\n> conninfo should be hidden until connection is established, too, I\n> think.\n\nFixed as well.\n\nPlease find the new version of the patch in attachment.\n\nThank you for your review!",
"msg_date": "Thu, 23 Jan 2020 17:54:08 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 05:54:08PM +0100, Jehan-Guillaume de Rorthais wrote:\n> Please find the new version of the patch in attachment.\n\nTo be honest, I find the concept of this patch confusing.\npg_stat_wal_receiver is just a one-one mapping with the shared memory\nstate of the WAL receiver itself and show data *if and only if* a WAL\nreceiver is running and iff it is ready to display any data, so I'd\nrather not change its nature and it has nothing to do with the state\nof WAL being applied by the startup process. So this gets a -1 from\nme.\n\n- /*\n- * No WAL receiver (or not ready yet), just return a tuple with NULL\n- * values\n- */\n- if (pid == 0 || !ready_to_display)\n- PG_RETURN_NULL();\nNote that this took a couple of attempts to get right, so I'd rather\nnot change this part of the logic on security grounds.\n\nIsn't what you are looking for here a different system view which maps\ndirectly to XLogCtl so as you can retrieve the status of the applied\nWAL at recovery anytime, say pg_stat_recovery?\n\nIt is the end of the CF, I am marking this patch as returned with\nfeedback for now.\n--\nMichael",
"msg_date": "Fri, 31 Jan 2020 15:12:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fetching timeline during recovery"
},
{
"msg_contents": "On Fri, 31 Jan 2020 15:12:30 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jan 23, 2020 at 05:54:08PM +0100, Jehan-Guillaume de Rorthais wrote:\n> > Please find the new version of the patch in attachment.\n> \n> To be honest, I find the concept of this patch confusing.\n> pg_stat_wal_receiver is just a one-one mapping with the shared memory\n> state of the WAL receiver itself and show data *if and only if* a WAL\n> receiver is running and iff it is ready to display any data, so I'd\n> rather not change its nature\n\nIf you are talking about the pg_stat_wal_receiver view, I don't have a strong\nopinion on this anyway as I voted 0 when discussing it. My current patch\ndoesn't alter its nature.\n\n> and it has nothing to do with the state of WAL being applied by the startup\n> process.\n\nIndeed, I was feeling this was a bad design to add these columns, as stated in\nmy last mail. So I withdraw this.\n\n> So this gets a -1 from me.\n\nOK.\n\n[...]\n> Isn't what you are looking for here a different system view which maps\n> directly to XLogCtl so as you can retrieve the status of the applied\n> WAL at recovery anytime\n\nMy main objective is the received LSN/TLI. This is kept by WalRcv for streaming.\nThat's why pg_stat_wal_receiver was a very good place for my need. But again,\nyou are right, I shouldn't have added the applied bits to it.\n\n> say pg_stat_recovery?\n\nI finally dug into this path. I was hoping we could find something\nsimpler and lighter, but other solutions we studied so far (thanks all for your\ntime) were all discarded [1].\n\nA new pg_stat_get_recovery() view might be useful for various monitoring\npurposes. After poking around in the code, it seems the patch would be bigger\nthan previous solutions, so I prefer discussing the specs first. 
\n\nAt first glance, I would imagine the following columns as a minimal patch:\n\n* source: stream, archive or pg_wal\n* write/flush/replayed LSN\n* write/flush/replayed TLI\n\nThis already has some heavy impact on the code. Source might be taken from\nxlog.c:currentSource, so it should probably be included in XLogCtl to be\naccessible from any backend.\n\nAs replayed LSN/TLI comes from XLogCtl too, we would probably need a new\ndedicated function to gather these fields plus currentSource under the same\ninfo_lck.\n\nNext, write lsn/tli is not accessible from WalRcv, only flush. So either we do\nnot include it, or we would probably need to replace WalRcv->receivedUpto with\nthe existing LogstreamResult.\n\nNext, there are no stats about wal shipping recovery. Restoring a WAL from\narchive does not increment anything about write/flush LSN/TLI. I wonder if both\nwal_receiver stats and WAL shipping stats might be merged together in the same\nrefactored structure in shmem, as they might share a fair number of fields?\nThis would be pretty invasive in the code, but I feel it's heavier to\nadd another new struct in shmem just to track WAL shipping stats whereas WalRcv\nalready exists there.\n\nNow, I think the following additional fields might be useful for monitoring. But\nas this is out of my original scope, I prefer discussing how useful they might\nbe:\n\n* start_time: start time of the current source\n* restored_count: total number of WALs restored. We might want to split this\n counter to track each method individually.\n* last_received_time: last time we received something from the current source\n* last_fail_time: last failure time, whatever the source\n\nThanks for reading up to here!\n\nRegards,\n\n\n[1] even if I still hope the pg_stat_get_wal_receiver might still gather some\nmore positive votes :)\n\n\n",
"msg_date": "Tue, 11 Feb 2020 19:51:10 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Fetching timeline during recovery"
}
] |
[
{
"msg_contents": "Does anyone have a stress test for parallel workers ?\n\nOn a customer's new VM, I got this several times while (trying to) migrate their DB:\n\n< 2019-07-23 10:33:51.552 CDT postgres >FATAL: postmaster exited during a parallel transaction\n< 2019-07-23 10:33:51.552 CDT postgres >STATEMENT: CREATE UNIQUE INDEX unused0_huawei_umts_nodeb_locell_201907_unique_idx ON child.unused0_huawei_umts_nodeb_locell_201907 USING btree ...\n\nThere's nothing in dmesg nor in postgres logs.\n\nAt first I thought it's maybe because of a network disconnection, then I\nthought it's because we ran out of space (wal), then they had a faulty blade.\nAfter that, I'd tried and failed to reproduce it a number of times, but it's\njust recurred during what was intended to be their final restore. I've set\nmax_parallel_workers_per_gather=0, but I'm planning to try to diagnose an issue\nin another instance. Ideally a minimal test, since I'm apparently going to\nhave to run under gdb to see how it's dying, or even what process is failing.\n\nDMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/21/2015\nCentOS release 6.9 (Final)\nLinux alextelsasrv01 2.6.32-754.17.1.el6.x86_64 #1 SMP Tue Jul 2 12:42:48 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\nversion | PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\n\nJustin\n\n\n",
"msg_date": "Tue, 23 Jul 2019 11:27:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "stress test for parallel workers"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Does anyone have a stress test for parallel workers ?\n> On a customer's new VM, I got this several times while (trying to) migrate their DB:\n\n> < 2019-07-23 10:33:51.552 CDT postgres >FATAL: postmaster exited during a parallel transaction\n\nWe've been seeing this irregularly in the buildfarm, too. I've been\nsuspicious that it's from an OOM kill on the postmaster in the\nbuildfarm cases, but ...\n\n> There's nothing in dmesg nor in postgres logs.\n\n... you'd think an OOM kill would show up in the kernel log.\n(Not necessarily in dmesg, though. Did you try syslog?)\n\n> Ideally a minimal test, since I'm apparently going to\n> have to run under gdb to see how it's dying, or even what process is failing.\n\nLike it told you, it's the postmaster that's going away.\nThat's Not Supposed To Happen, of course, but unfortunately Linux'\nOOM kill heuristic preferentially targets the postmaster when\nits children are consuming too many resources.\n\nIf that is the problem, there's some info on working around it at\n\nhttps://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 13:28:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 01:28:47PM -0400, Tom Lane wrote:\n> ... you'd think an OOM kill would show up in the kernel log.\n> (Not necessarily in dmesg, though. Did you try syslog?)\n\nNothing in /var/log/messages (nor dmesg ring).\n\nI enabled abrtd while trying to reproduce it last week. Since you asked I\nlooked again in messages, and found it'd logged 10 hours ago about this:\n\n(gdb) bt\n#0 0x000000395be32495 in raise () from /lib64/libc.so.6\n#1 0x000000395be33c75 in abort () from /lib64/libc.so.6\n#2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n#3 0x00000000006f7e94 in CheckPointReplicationOrigin () at origin.c:588\n#4 0x00000000004f6ef1 in CheckPointGuts (checkPointRedo=5507491783792, flags=128) at xlog.c:9150\n#5 0x00000000004feff6 in CreateCheckPoint (flags=128) at xlog.c:8937\n#6 0x00000000006d49e2 in CheckpointerMain () at checkpointer.c:491\n#7 0x000000000050fe75 in AuxiliaryProcessMain (argc=2, argv=0x7ffe00d56b00) at bootstrap.c:451\n#8 0x00000000006dcf54 in StartChildProcess (type=CheckpointerProcess) at postmaster.c:5337\n#9 0x00000000006de78a in reaper (postgres_signal_arg=<value optimized out>) at postmaster.c:2867\n#10 <signal handler called>\n#11 0x000000395bee1603 in __select_nocancel () from /lib64/libc.so.6\n#12 0x00000000006e1488 in ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1671\n#13 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1380\n#14 0x0000000000656420 in main (argc=3, argv=0x27ae410) at main.c:228\n\n#2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n edata = <value optimized out>\n elevel = 22\n oldcontext = 0x27e15d0\n econtext = 0x0\n __func__ = \"errfinish\"\n#3 0x00000000006f7e94 in CheckPointReplicationOrigin () at origin.c:588\n save_errno = <value optimized out>\n tmppath = 0x9c4518 \"pg_logical/replorigin_checkpoint.tmp\"\n path = 0x9c4300 
\"pg_logical/replorigin_checkpoint\"\n tmpfd = 64\n i = <value optimized out>\n magic = 307747550\n crc = 4294967295\n __func__ = \"CheckPointReplicationOrigin\"\n#4 0x00000000004f6ef1 in CheckPointGuts (checkPointRedo=5507491783792, flags=128) at xlog.c:9150\nNo locals.\n#5 0x00000000004feff6 in CreateCheckPoint (flags=128) at xlog.c:8937\n shutdown = false\n checkPoint = {redo = 5507491783792, ThisTimeLineID = 1, PrevTimeLineID = 1, fullPageWrites = true, nextXidEpoch = 0, nextXid = 2141308, nextOid = 496731439, nextMulti = 1, nextMultiOffset = 0, \n oldestXid = 561, oldestXidDB = 1, oldestMulti = 1, oldestMultiDB = 1, time = 1563781930, oldestCommitTsXid = 0, newestCommitTsXid = 0, oldestActiveXid = 2141308}\n recptr = <value optimized out>\n _logSegNo = <value optimized out>\n Insert = <value optimized out>\n freespace = <value optimized out>\n PriorRedoPtr = <value optimized out>\n curInsert = <value optimized out>\n last_important_lsn = <value optimized out>\n vxids = 0x280afb8\n nvxids = 0\n __func__ = \"CreateCheckPoint\"\n#6 0x00000000006d49e2 in CheckpointerMain () at checkpointer.c:491\n ckpt_performed = false\n do_restartpoint = <value optimized out>\n flags = 128\n do_checkpoint = <value optimized out>\n now = 1563781930\n elapsed_secs = <value optimized out>\n cur_timeout = <value optimized out>\n rc = <value optimized out>\n local_sigjmp_buf = {{__jmpbuf = {2, -1669940128760174522, 9083146, 0, 140728912407216, 140728912407224, -1669940128812603322, 1670605924426606662}, __mask_was_saved = 1, __saved_mask = {__val = {\n 18446744066192964103, 0, 246358747096, 140728912407296, 140446084917816, 140446078556040, 9083146, 0, 246346239061, 140446078556040, 140447207471460, 0, 140447207471424, 140446084917816, 0, \n 7864320}}}}\n checkpointer_context = 0x27e15d0\n __func__ = \"CheckpointerMain\"\n\nSupposedly it's trying to do this:\n\n|\tereport(PANIC, \n|\t\t\t(errcode_for_file_access(),\n|\t\t\t errmsg(\"could not write to file \\\"%s\\\": 
%m\",\n|\t\t\t\t\ttmppath)));\n\nAnd since there's consistently nothing in logs, I'm guessing there's a\nlegitimate write error (legitimate from PG perspective). Storage here is ext4\nplus zfs tablespace on top of LVM on top of vmware thin volume.\n\nJustin\n\n\n",
"msg_date": "Tue, 23 Jul 2019 12:42:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 4:27 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> < 2019-07-23 10:33:51.552 CDT postgres >FATAL: postmaster exited during a parallel transaction\n> < 2019-07-23 10:33:51.552 CDT postgres >STATEMENT: CREATE UNIQUE INDEX unused0_huawei_umts_nodeb_locell_201907_unique_idx ON child.unused0_huawei_umts_nodeb_locell_201907 USING btree ...\n\n> ... I've set\n> max_parallel_workers_per_gather=0, ...\n\nJust by the way, parallelism in CREATE INDEX is controlled by\nmax_parallel_maintenance_workers, not max_parallel_workers_per_gather.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 09:10:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 5:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> #2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n> edata = <value optimized out>\n> elevel = 22\n> oldcontext = 0x27e15d0\n> econtext = 0x0\n> __func__ = \"errfinish\"\n> #3 0x00000000006f7e94 in CheckPointReplicationOrigin () at origin.c:588\n> save_errno = <value optimized out>\n> tmppath = 0x9c4518 \"pg_logical/replorigin_checkpoint.tmp\"\n> path = 0x9c4300 \"pg_logical/replorigin_checkpoint\"\n> tmpfd = 64\n> i = <value optimized out>\n> magic = 307747550\n> crc = 4294967295\n> __func__ = \"CheckPointReplicationOrigin\"\n\n> Supposedly it's trying to do this:\n>\n> | ereport(PANIC,\n> | (errcode_for_file_access(),\n> | errmsg(\"could not write to file \\\"%s\\\": %m\",\n> | tmppath)));\n>\n> And since there's consistently nothing in logs, I'm guessing there's a\n> legitimate write error (legitimate from PG perspective). Storage here is ext4\n> plus zfs tablespace on top of LVM on top of vmware thin volume.\n\nIf you have that core, it might be interesting to go to frame 2 and\nprint *edata or edata->saved_errno. If the errno is EIO, it's a bit\nstrange if that's not showing up in some form in kernel logs or dmesg\nor something; if it's ENOSPC I guess it'd be normal that it doesn't\nshow up anywhere and there is nothing in the PostgreSQL logs if\nthey're on the same full filesystem, but then you would probably\nalready have mentioned that your filesystem was out of space. Could\nit have been fleetingly full due to some other thing happening on the\nsystem that rapidly expands and contracts?\n\nI'm confused by the evidence, though. 
If this PANIC is the origin of\nthe problem, how do we get to a postmaster-death-based exit in a\nparallel leader*, rather than quickdie() (the kind of exit that\nhappens when the postmaster sends SIGQUIT to every process and they\nsay \"terminating connection because of crash of another server\nprocess\", because some backend crashed or panicked)? Perhaps it would\nbe clearer what's going on if you could put the PostgreSQL log onto a\ndifferent filesystem, so we get a better chance of collecting\nevidence? But then... the parallel leader process was apparently able\nto log something -- maybe it was just lucky, but you said this\nhappened this way more than once. I'm wondering how it could be that\nyou got some kind of IO failure and weren't able to log the PANIC\nmessage AND your postmaster was killed, and yet you were able to log a\nmessage about that. Perhaps we're looking at evidence from two\nunrelated failures.\n\n*I suspect that the only thing implicating parallelism in this failure\nis that parallel leaders happen to print out that message if the\npostmaster dies while they are waiting for workers; most other places\n(probably every other backend in your cluster) just quietly exit.\nThat tells us something about what's happening, but on its own doesn't\ntell us that parallelism plays an important role in the failure mode.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:03:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> *I suspect that the only thing implicating parallelism in this failure\n> is that parallel leaders happen to print out that message if the\n> postmaster dies while they are waiting for workers; most other places\n> (probably every other backend in your cluster) just quietly exit.\n> That tells us something about what's happening, but on its own doesn't\n> tell us that parallelism plays an important role in the failure mode.\n\nI agree that there's little evidence implicating parallelism directly.\nThe reason I'm suspicious about a possible OOM kill is that parallel\nqueries would appear to the OOM killer to be eating more resources\nthan the same workload non-parallel, so that we might be at more\nhazard of getting OOM'd just because of that.\n\nA different theory is that there's some hard-to-hit bug in the\npostmaster's processing of parallel workers that doesn't apply to\nregular backends. I've looked for one in a desultory way but not\nreally focused on it.\n\nIn any case, the evidence from the buildfarm is pretty clear that\nthere is *some* connection. We've seen a lot of recent failures\ninvolving \"postmaster exited during a parallel transaction\", while\nthe number of postmaster failures not involving that is epsilon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 18:11:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > edata = <value optimized out>\n\n> If you have that core, it might be interesting to go to frame 2 and\n> print *edata or edata->saved_errno. ...\n\nRats. We already saw that it's optimised out so unless we can find\nthat somewhere else in a variable that's present in the core, we\nprobably can't find out what the operating system said. So my other\nidea for getting this information next time is to try putting the\nPostgreSQL logs somewhere that's more likely to be still working when\nthat thing fails.\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:40:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:03:25AM +1200, Thomas Munro wrote:\n> On Wed, Jul 24, 2019 at 5:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > #2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n> > edata = <value optimized out>\n> \n> If you have that core, it might be interesting to go to frame 2 and\n> print *edata or edata->saved_errno.\n\nAs you saw... unless you know a trick, it's \"optimized out\".\n\n> Could it have been fleetingly full due to some other thing happening on the\n> system that rapidly expands and contracts?\n\nIt's not impossible, especially while loading data, and data_dir is only 64GB;\nit may have happened that way sometimes; but it's hard to believe that's been\nthe case 5-10 times now. As long as I don't forget to drop the previously\nloaded database when loading old/historic data, it should have ~40GB free on\ndata_dir, and no clients connected other than pg_restore.\n\n$ df -h /var/lib/pgsql\nFilesystem Size Used Avail Use% Mounted on\n/dev/mapper/data-postgres\n 64G 26G 38G 41% /var/lib/pgsql\n\n> | ereport(PANIC, \n> | (errcode_for_file_access(),\n> | errmsg(\"could not write to file \\\"%s\\\": %m\",\n> | tmppath)));\n> \n> And since there's consistently nothing in logs, I'm guessing there's a\n> legitimate write error (legitimate from PG perspective). Storage here is ext4\n> plus zfs tablespace on top of LVM on top of vmware thin volume.\n\nI realized this probably is *not* an issue with zfs, since it's failing to log\n(for one reason or another) to /var/lib/pgsql (ext4).\n\n> Perhaps it would be clearer what's going on if you could put the PostgreSQL\n> log onto a different filesystem, so we get a better chance of collecting\n> evidence?\n\nI didn't mention it but last weekend I'd left a loop around the restore process\nrunning overnight, and had convinced myself the issue didn't recur since their\nfaulty blade was taken out of service... 
My plan was to leave the server\nrunning in the foreground with logging_collector=no, which I hope is enough,\nunless logging is itself somehow implicated. I'm trying to stress test that\nway now.\n\n> But then... the parallel leader process was apparently able\n> to log something -- maybe it was just lucky, but you said this\n> happened this way more than once. I'm wondering how it could be that\n> you got some kind of IO failure and weren't able to log the PANIC\n> message AND your postmaster was killed, and you were able to log a\n> message about that. Perhaps we're looking at evidence from two\n> unrelated failures.\n\nThe messages from the parallel leader (building indices) were visible to the\nclient, not via the server log. I was loading their data and the errors were\nvisible when pg_restore failed.\n\nOn Wed, Jul 24, 2019 at 09:10:41AM +1200, Thomas Munro wrote:\n> Just by the way, parallelism in CREATE INDEX is controlled by\n> max_parallel_maintenance_workers, not max_parallel_workers_per_gather.\n\nThank you.\n\nJustin\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:42:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Jul 24, 2019 at 10:03:25AM +1200, Thomas Munro wrote:\n> > On Wed, Jul 24, 2019 at 5:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > #2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n> > > edata = <value optimized out>\n> >\n> > If you have that core, it might be interesting to go to frame 2 and\n> > print *edata or edata->saved_errno.\n>\n> As you saw..unless someone you know a trick, it's \"optimized out\".\n\nHow about something like this:\n\nprint errorData[errordata_stack_depth]\n\nIf you can't find errordata_stack_depth, maybe look at the whole array\nand try to find the interesting bit?\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:46:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:46:42AM +1200, Thomas Munro wrote:\n> On Wed, Jul 24, 2019 at 10:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Wed, Jul 24, 2019 at 10:03:25AM +1200, Thomas Munro wrote:\n> > > On Wed, Jul 24, 2019 at 5:42 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > #2 0x000000000085ddff in errfinish (dummy=<value optimized out>) at elog.c:555\n> > > > edata = <value optimized out>\n> > >\n> > > If you have that core, it might be interesting to go to frame 2 and\n> > > print *edata or edata->saved_errno.\n> >\n> > As you saw..unless someone you know a trick, it's \"optimized out\".\n> \n> How about something like this:\n> \n> print errorData[errordata_stack_depth]\n\nClever.\n\n(gdb) p errordata[errordata_stack_depth]\n$2 = {elevel = 13986192, output_to_server = 254, output_to_client = 127, show_funcname = false, hide_stmt = false, hide_ctx = false, filename = 0x27b3790 \"< %m %u >\", lineno = 41745456, \n funcname = 0x3030313335 <Address 0x3030313335 out of bounds>, domain = 0x0, context_domain = 0x27cff90 \"postgres\", sqlerrcode = 0, message = 0xe8800000001 <Address 0xe8800000001 out of bounds>, \n detail = 0x297a <Address 0x297a out of bounds>, detail_log = 0x0, hint = 0xe88 <Address 0xe88 out of bounds>, context = 0x297a <Address 0x297a out of bounds>, message_id = 0x0, schema_name = 0x0, \n table_name = 0x0, column_name = 0x0, datatype_name = 0x0, constraint_name = 0x0, cursorpos = 0, internalpos = 0, internalquery = 0x0, saved_errno = 0, assoc_context = 0x0}\n(gdb) p errordata\n$3 = {{elevel = 22, output_to_server = true, output_to_client = false, show_funcname = false, hide_stmt = false, hide_ctx = false, filename = 0x9c4030 \"origin.c\", lineno = 591, \n funcname = 0x9c46e0 \"CheckPointReplicationOrigin\", domain = 0x9ac810 \"postgres-11\", context_domain = 0x9ac810 \"postgres-11\", sqlerrcode = 4293, \n message = 0x27b0e40 \"could not write to file \\\"pg_logical/replorigin_checkpoint.tmp\\\": No space 
left on device\", detail = 0x0, detail_log = 0x0, hint = 0x0, context = 0x0, \n message_id = 0x8a7aa8 \"could not write to file \\\"%s\\\": %m\", ...\n\nI ought to have remembered that it *was* in fact out of space this AM when this\ncore was dumped (I hadn't touched the VM since scheduling the transition to it\nlast week).\n\nI want to say I'm almost certain it wasn't ENOSPC in other cases, since,\nfailing to find log output, I ran df right after the failure.\n\nBut that gives me an idea: is it possible there's an issue with files being\nheld open by worker processes? Including by parallel workers? Maybe WALs,\neven after they're rotated? If there were worker processes holding open lots\nof rotated WALs, that could cause ENOSPC, but that wouldn't be obvious after\nthey die, since the space would then be freed.\n\nJustin\n\n\n",
"msg_date": "Tue, 23 Jul 2019 18:04:40 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> I want to say I'm almost certain it wasn't ENOSPC in other cases, since,\n> failing to find log output, I ran df right after the failure.\n\nThe fact that you're not finding log output matching what was reported\nto the client seems to me to be a mighty strong indication that there\n*was* an ENOSPC problem. Can you reconfigure to put the postmaster\nlog on a different volume?\n\n> But that gives me an idea: is it possible there's an issue with files being\n> held opened by worker processes ? Including by parallel workers? Probably\n> WALs, even after they're rotated ? If there were worker processes holding\n> opened lots of rotated WALs, that could cause ENOSPC, but that wouldn't be\n> obvious after they die, since the space would then be freed.\n\nParallel workers aren't ever allowed to write, in the current\nimplementation, so it's not real obvious why they'd have any\nWAL log files open at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 19:29:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I ought to have remembered that it *was* in fact out of space this AM when this\n> core was dumped (due to having not touched it since scheduling transition to\n> this VM last week).\n>\n> I want to say I'm almost certain it wasn't ENOSPC in other cases, since,\n> failing to find log output, I ran df right after the failure.\n\nOk, cool, so the ENOSPC thing we understand, and the postmaster death\nthing is probably something entirely different. Which brings us to\nthe question: what is killing your postmaster or causing it to exit\nsilently and unexpectedly, but leaving no trace in any operating\nsystem log? You mentioned that you couldn't see any signs of the OOM\nkiller. Are you in a situation to test an OOM failure so you can\nconfirm what that looks like on your system? You might try typing\nthis into Python:\n\nx = [42]\nfor i in range(1000):\n x = x + x\n\nOn my non-Linux system, it ran for a while and then was killed, and\ndmesg showed:\n\npid 15956 (python3.6), jid 0, uid 1001, was killed: out of swap space\npid 40238 (firefox), jid 0, uid 1001, was killed: out of swap space\n\nAdmittedly it is quite hard to distinguish between a web browser\nand a program designed to eat memory as fast as possible... Anyway on\nLinux you should see stuff about killed processes and/or OOM in one of\ndmesg, syslog, messages.\n\n> But that gives me an idea: is it possible there's an issue with files being\n> held opened by worker processes ? Including by parallel workers? Probably\n> WALs, even after they're rotated ? If there were worker processes holding\n> opened lots of rotated WALs, that could cause ENOSPC, but that wouldn't be\n> obvious after they die, since the space would then be freed.\n\nParallel workers don't do anything with WAL files, but they can create\ntemporary files. 
If you're building humongous indexes with parallel\nworkers, you'll get some of those, but I don't think it'd be more than\nyou'd get without parallelism. If you were using up all of your disk\nspace with temporary files, wouldn't this be reproducible? I think\nyou said you were testing this repeatedly, so if that were the problem\nI'd expect to see some non-panicky out-of-space errors when the temp\nfiles blow out your disk space, and only rarely a panic if a\ncheckpoint happens to run exactly at a moment where the create index\nhasn't yet written the byte that breaks the camel's back, but the\ncheckpoint pushes it over the edge in one of these places where it\npanics on failure.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:32:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > *I suspect that the only thing implicating parallelism in this failure\n> > is that parallel leaders happen to print out that message if the\n> > postmaster dies while they are waiting for workers; most other places\n> > (probably every other backend in your cluster) just quietly exit.\n> > That tells us something about what's happening, but on its own doesn't\n> > tell us that parallelism plays an important role in the failure mode.\n>\n> I agree that there's little evidence implicating parallelism directly.\n> The reason I'm suspicious about a possible OOM kill is that parallel\n> queries would appear to the OOM killer to be eating more resources\n> than the same workload non-parallel, so that we might be at more\n> hazard of getting OOM'd just because of that.\n>\n> A different theory is that there's some hard-to-hit bug in the\n> postmaster's processing of parallel workers that doesn't apply to\n> regular backends. I've looked for one in a desultory way but not\n> really focused on it.\n>\n> In any case, the evidence from the buildfarm is pretty clear that\n> there is *some* connection. We've seen a lot of recent failures\n> involving \"postmaster exited during a parallel transaction\", while\n> the number of postmaster failures not involving that is epsilon.\n\nI don't have access to the build farm history in searchable format\n(I'll go and ask for that). Do you have an example to hand? Is this\nfailure always happening on Linux?\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:48:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On 2019-Jul-23, Justin Pryzby wrote:\n\n> I want to say I'm almost certain it wasn't ENOSPC in other cases, since,\n> failing to find log output, I ran df right after the failure.\n\nI'm not sure that this proves much, since I expect temporary files to be\ndeleted on failure; by the time you run 'df' the condition might have\nalready been cleared. You'd need to be capturing diskspace telemetry\nwith sufficient granularity ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 19:57:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:32:30AM +1200, Thomas Munro wrote:\n> On Wed, Jul 24, 2019 at 11:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I ought to have remembered that it *was* in fact out of space this AM when this\n> > core was dumped (due to having not touched it since scheduling transition to\n> > this VM last week).\n> >\n> > I want to say I'm almost certain it wasn't ENOSPC in other cases, since,\n> > failing to find log output, I ran df right after the failure.\n\nI meant it wasn't a trivial error on my part of failing to drop the previously\nloaded DB instance. It occured to me to check inodes, which can also cause\nENOSPC. This is mkfs -T largefile, so running out of inodes is not an\nimpossibility. But seems an unlikely culprit, unless something made tens of\nthousands of (small) files. \n\n[pryzbyj@alextelsasrv01 ~]$ df -i /var/lib/pgsql\nFilesystem Inodes IUsed IFree IUse% Mounted on\n/dev/mapper/data-postgres\n 65536 5605 59931 9% /var/lib/pgsql\n\n> Ok, cool, so the ENOSPC thing we understand, and the postmaster death\n> thing is probably something entirely different. Which brings us to\n> the question: what is killing your postmaster or causing it to exit\n> silently and unexpectedly, but leaving no trace in any operating\n> system log? You mentioned that you couldn't see any signs of the OOM\n> killer. Are you in a situation to test an OOM failure so you can\n> confirm what that looks like on your system?\n\n$ command time -v python -c \"'x'*4999999999\" |wc\nTraceback (most recent call last):\n File \"<string>\", line 1, in <module>\nMemoryError\nCommand exited with non-zero status 1\n...\n Maximum resident set size (kbytes): 4276\n\n$ dmesg\n...\nOut of memory: Kill process 10665 (python) score 478 or sacrifice child\nKilled process 10665, UID 503, (python) total-vm:4024260kB, anon-rss:3845756kB, file-rss:1624kB\n\nI wouldn't burn too much more time on it until I can reproduce it. 
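(If it is a transient ENOSPC, something like the diskspace telemetry Alvaro mentioned could catch it in the act. A minimal sketch, with a hypothetical poll_free() helper; the path, interval, and sample count are arbitrary and I haven't run this against the server:)

```python
# Sketch: sample free space and free inodes at short intervals so a
# fleeting ENOSPC leaves a trace.  poll_free() and its defaults are
# illustration only, not something actually deployed here.
import os
import time

def poll_free(path='/var/lib/pgsql', interval=1.0, samples=5):
    readings = []
    for _ in range(samples):
        st = os.statvfs(path)
        readings.append({
            't': time.time(),
            'free_bytes': st.f_bavail * st.f_frsize,  # bytes usable by non-root
            'free_inodes': st.f_favail,               # inodes usable by non-root
        })
        time.sleep(interval)
    return readings
```

Logging those readings alongside the server log timestamps would show whether free space dipped to zero around the PANIC.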
The\nfailures were all during pg_restore, so checkpointer would've been very busy.\nIt seems possible for it to notice ENOSPC before the workers... which would be\nfsyncing WAL, while checkpointer is fsyncing data.\n\n> Admittedly it is quite hard for to distinguish between a web browser\n> and a program designed to eat memory as fast as possible...\n\nBrowsers are making lots of progress here but still clearly 2nd place.\n\nJustin\n\n\n",
"msg_date": "Tue, 23 Jul 2019 19:33:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 24, 2019 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In any case, the evidence from the buildfarm is pretty clear that\n>> there is *some* connection. We've seen a lot of recent failures\n>> involving \"postmaster exited during a parallel transaction\", while\n>> the number of postmaster failures not involving that is epsilon.\n\n> I don't have access to the build farm history in searchable format\n> (I'll go and ask for that).\n\nYeah, it's definitely handy to be able to do SQL searches in the\nhistory. I forget whether Dunstan or Frost is the person to ask\nfor access, but there's no reason you shouldn't have it.\n\n> Do you have an example to hand? Is this\n> failure always happening on Linux?\n\nI dug around a bit further, and while my recollection of a lot of\n\"postmaster exited during a parallel transaction\" failures is accurate,\nthere is a very strong correlation I'd not noticed: it's just a few\nbuildfarm critters that are producing those. 
To wit, I find that\nstring in these recent failures (checked all runs in the past 3 months):\n\n sysname | branch | snapshot \n-----------+---------------+---------------------\n lorikeet | HEAD | 2019-06-16 20:28:25\n lorikeet | HEAD | 2019-07-07 14:58:38\n lorikeet | HEAD | 2019-07-02 10:38:08\n lorikeet | HEAD | 2019-06-14 14:58:24\n lorikeet | HEAD | 2019-07-04 20:28:44\n lorikeet | HEAD | 2019-04-30 11:00:49\n lorikeet | HEAD | 2019-06-19 20:29:27\n lorikeet | HEAD | 2019-05-21 08:28:26\n lorikeet | REL_11_STABLE | 2019-07-11 08:29:08\n lorikeet | REL_11_STABLE | 2019-07-09 08:28:41\n lorikeet | REL_12_STABLE | 2019-07-16 08:28:37\n lorikeet | REL_12_STABLE | 2019-07-02 21:46:47\n lorikeet | REL9_6_STABLE | 2019-07-02 20:28:14\n vulpes | HEAD | 2019-06-14 09:18:18\n vulpes | HEAD | 2019-06-27 09:17:19\n vulpes | HEAD | 2019-07-21 09:01:45\n vulpes | HEAD | 2019-06-12 09:11:02\n vulpes | HEAD | 2019-07-05 08:43:29\n vulpes | HEAD | 2019-07-15 08:43:28\n vulpes | HEAD | 2019-07-19 09:28:12\n wobbegong | HEAD | 2019-06-09 20:43:22\n wobbegong | HEAD | 2019-07-02 21:17:41\n wobbegong | HEAD | 2019-06-04 21:06:07\n wobbegong | HEAD | 2019-07-14 20:43:54\n wobbegong | HEAD | 2019-06-19 21:05:04\n wobbegong | HEAD | 2019-07-08 20:55:18\n wobbegong | HEAD | 2019-06-28 21:18:46\n wobbegong | HEAD | 2019-06-02 20:43:20\n wobbegong | HEAD | 2019-07-04 21:01:37\n wobbegong | HEAD | 2019-06-14 21:20:59\n wobbegong | HEAD | 2019-06-23 21:36:51\n wobbegong | HEAD | 2019-07-18 21:31:36\n(32 rows)\n\nWe already knew that lorikeet has its own peculiar stability\nproblems, and these other two critters run different compilers\non the same Fedora 27 ppc64le platform.\n\nSo I think I've got to take back the assertion that we've got\nsome lurking generic problem. This pattern looks way more\nlike a platform-specific issue. 
Overaggressive OOM killer\nwould fit the facts on vulpes/wobbegong, perhaps, though\nit's odd that it only happens on HEAD runs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 01:15:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Wed, Jul 24, 2019 at 10:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Do you have an example to hand? Is this\n> > failure always happening on Linux?\n>\n> I dug around a bit further, and while my recollection of a lot of\n> \"postmaster exited during a parallel transaction\" failures is accurate,\n> there is a very strong correlation I'd not noticed: it's just a few\n> buildfarm critters that are producing those. To wit, I find that\n> string in these recent failures (checked all runs in the past 3 months):\n>\n> sysname | branch | snapshot\n> -----------+---------------+---------------------\n> lorikeet | HEAD | 2019-06-16 20:28:25\n> lorikeet | HEAD | 2019-07-07 14:58:38\n> lorikeet | HEAD | 2019-07-02 10:38:08\n> lorikeet | HEAD | 2019-06-14 14:58:24\n> lorikeet | HEAD | 2019-07-04 20:28:44\n> lorikeet | HEAD | 2019-04-30 11:00:49\n> lorikeet | HEAD | 2019-06-19 20:29:27\n> lorikeet | HEAD | 2019-05-21 08:28:26\n> lorikeet | REL_11_STABLE | 2019-07-11 08:29:08\n> lorikeet | REL_11_STABLE | 2019-07-09 08:28:41\n> lorikeet | REL_12_STABLE | 2019-07-16 08:28:37\n> lorikeet | REL_12_STABLE | 2019-07-02 21:46:47\n> lorikeet | REL9_6_STABLE | 2019-07-02 20:28:14\n> vulpes | HEAD | 2019-06-14 09:18:18\n> vulpes | HEAD | 2019-06-27 09:17:19\n> vulpes | HEAD | 2019-07-21 09:01:45\n> vulpes | HEAD | 2019-06-12 09:11:02\n> vulpes | HEAD | 2019-07-05 08:43:29\n> vulpes | HEAD | 2019-07-15 08:43:28\n> vulpes | HEAD | 2019-07-19 09:28:12\n> wobbegong | HEAD | 2019-06-09 20:43:22\n> wobbegong | HEAD | 2019-07-02 21:17:41\n> wobbegong | HEAD | 2019-06-04 21:06:07\n> wobbegong | HEAD | 2019-07-14 20:43:54\n> wobbegong | HEAD | 2019-06-19 21:05:04\n> wobbegong | HEAD | 2019-07-08 20:55:18\n> wobbegong | HEAD | 2019-06-28 21:18:46\n> wobbegong | HEAD | 2019-06-02 20:43:20\n> wobbegong | HEAD | 2019-07-04 21:01:37\n> wobbegong | HEAD | 
2019-06-14 21:20:59\n> wobbegong | HEAD | 2019-06-23 21:36:51\n> wobbegong | HEAD | 2019-07-18 21:31:36\n> (32 rows)\n>\n> We already knew that lorikeet has its own peculiar stability\n> problems, and these other two critters run different compilers\n> on the same Fedora 27 ppc64le platform.\n>\n> So I think I've got to take back the assertion that we've got\n> some lurking generic problem. This pattern looks way more\n> like a platform-specific issue. Overaggressive OOM killer\n> would fit the facts on vulpes/wobbegong, perhaps, though\n> it's odd that it only happens on HEAD runs.\n\nchipmunk also:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-08-06%2014:16:16\n\nI wondered if the build farm should try to report OOM kill -9 or other\nsignal activity affecting the postmaster.\n\nOn some systems (depending on sysctl kernel.dmesg_restrict on Linux,\nsecurity.bsd.unprivileged_read_msgbuf on FreeBSD, etc.) you can run\ndmesg as a non-root user, and there the OOM killer's footprints or\nsignaled exit statuses for processes under init might normally be found,\nbut that seems a bit invasive for the host system (I guess you'd\nfilter it carefully). Unfortunately it isn't enabled on many common\nsystems anyway.\n\nMaybe there is a systemd-specific way to get the info we need without\nbeing root?\n\nAnother idea: start the postmaster under a subreaper (Linux 3.4+\nprctl(PR_SET_CHILD_SUBREAPER), FreeBSD 10.2+\nprocctl(PROC_REAP_ACQUIRE)) that exists just to report on its\nchildren's exit status, so the build farm could see a \"pid XXX was\nkilled by signal 9\" message if it is nuked by the OOM killer. Perhaps\nthere is a common subreaper wrapper out there that would wait, print\nmessages like that, rinse and repeat until it has no children and then\nexit, or perhaps pg_ctl or even a perl script could do something like\nthat if requested. 
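A rough Python sketch of that idea (Linux-only, untested beyond a toy command; the hypothetical reap_and_report() helper hard-codes the PR_SET_CHILD_SUBREAPER constant from sys/prctl.h and is not a finished tool):

```python
# Sketch of a subreaper wrapper: run a command, then reap and report
# every child -- including orphans that reparent to us -- so a SIGKILL
# from the OOM killer becomes visible.  Linux 3.4+ only.
import ctypes
import os
import signal

PR_SET_CHILD_SUBREAPER = 36  # value from <sys/prctl.h>

def reap_and_report(argv):
    libc = ctypes.CDLL('libc.so.6', use_errno=True)
    if libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), 'prctl(PR_SET_CHILD_SUBREAPER) failed')
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)  # child: run the real command
    reports = []
    while True:
        try:
            dead, status = os.wait()  # also reaps reparented descendants
        except ChildProcessError:
            return reports            # nothing left to reap
        if os.WIFSIGNALED(status):
            sig = os.WTERMSIG(status)
            reports.append('pid %d was killed by signal %d (%s)'
                           % (dead, sig, signal.Signals(sig).name))
        else:
            reports.append('pid %d exited with status %d'
                           % (dead, os.WEXITSTATUS(status)))
```

pg_ctl could do the equivalent natively with prctl() and waitpid(), writing the reports somewhere the build farm client collects them.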
Another thought, not explored, is the brand new\nLinux pidfd stuff that can be used to wait and get exit status for a\nnon-child process (or the older BSD equivalent), but the paint isn't\neven dry on that stuff anyway.\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 11:57:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I wondered if the build farm should try to report OOM kill -9 or other\n> signal activity affecting the postmaster.\n\nYeah, I've been wondering whether pg_ctl could fork off a subprocess\nthat would fork the postmaster, wait for the postmaster to exit, and then\nreport the exit status. Where to report it *to* seems like the hard part,\nbut maybe an answer that worked for the buildfarm would be enough for now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 00:29:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I wondered if the build farm should try to report OOM kill -9 or other\n> > signal activity affecting the postmaster.\n>\n> Yeah, I've been wondering whether pg_ctl could fork off a subprocess\n> that would fork the postmaster, wait for the postmaster to exit, and then\n> report the exit status. Where to report it *to* seems like the hard part,\n> but maybe an answer that worked for the buildfarm would be enough for now.\n\nOh, right, you don't even need subreaper tricks (I was imagining we\nhad a double fork somewhere; we don't).\n\nAnother question is whether the build farm should be setting the Linux\noom score adjust thing.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:00:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Another question is whether the build farm should be setting the Linux\n> oom score adjust thing.\n\nAFAIK you can't do that without being root.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 01:06:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Aug 7, 2019 at 5:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Another question is whether the build farm should be setting the Linux\n> > oom score adjust thing.\n>\n> AFAIK you can't do that without being root.\n\nRats, yeah you need CAP_SYS_RESOURCE or root to lower it.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:12:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On 07/08/2019 02:57, Thomas Munro wrote:\n> On Wed, Jul 24, 2019 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I think I've got to take back the assertion that we've got\n>> some lurking generic problem. This pattern looks way more\n>> like a platform-specific issue. Overaggressive OOM killer\n>> would fit the facts on vulpes/wobbegong, perhaps, though\n>> it's odd that it only happens on HEAD runs.\n> \n> chipmunk also:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-08-06%2014:16:16\n> \n> I wondered if the build farm should try to report OOM kill -9 or other\n> signal activity affecting the postmaster.\n\nFWIW, I looked at the logs in /var/log/* on chipmunk, and found no \nevidence of OOM killings. I can see nothing unusual in the OS logs \naround the time of that failure.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 16:30:46 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 07/08/2019 02:57, Thomas Munro wrote:\n>> On Wed, Jul 24, 2019 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> So I think I've got to take back the assertion that we've got\n>>> some lurking generic problem. This pattern looks way more\n>>> like a platform-specific issue. Overaggressive OOM killer\n>>> would fit the facts on vulpes/wobbegong, perhaps, though\n>>> it's odd that it only happens on HEAD runs.\n\n>> chipmunk also:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-08-06%2014:16:16\n\n> FWIW, I looked at the logs in /var/log/* on chipmunk, and found no \n> evidence of OOM killings. I can see nothing unusual in the OS logs \n> around the time of that failure.\n\nOh, that is very useful info, thanks. That seems to mean that we\nshould be suspecting a segfault, assertion failure, etc inside\nthe postmaster. I don't see any TRAP message in chipmunk's log,\nso assertion failure seems to be ruled out, but other sorts of\nprocess-crashing errors would fit the facts.\n\nA stack trace from the crash would be mighty useful info along\nabout here. I wonder whether chipmunk has the infrastructure\nneeded to create such a thing. From memory, the buildfarm requires\ngdb for that, but not sure if there are additional requirements.\nAlso, if you're using systemd or something else that thinks it\nought to interfere with where cores get dropped, that could be\na problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 09:57:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On 07/08/2019 16:57, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 07/08/2019 02:57, Thomas Munro wrote:\n>>> On Wed, Jul 24, 2019 at 5:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> So I think I've got to take back the assertion that we've got\n>>>> some lurking generic problem. This pattern looks way more\n>>>> like a platform-specific issue. Overaggressive OOM killer\n>>>> would fit the facts on vulpes/wobbegong, perhaps, though\n>>>> it's odd that it only happens on HEAD runs.\n> \n>>> chipmunk also:\n>>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2019-08-06%2014:16:16\n> \n>> FWIW, I looked at the logs in /var/log/* on chipmunk, and found no\n>> evidence of OOM killings. I can see nothing unusual in the OS logs\n>> around the time of that failure.\n> \n> Oh, that is very useful info, thanks. That seems to mean that we\n> should be suspecting a segfault, assertion failure, etc inside\n> the postmaster. I don't see any TRAP message in chipmunk's log,\n> so assertion failure seems to be ruled out, but other sorts of\n> process-crashing errors would fit the facts.\n> \n> A stack trace from the crash would be mighty useful info along\n> about here. I wonder whether chipmunk has the infrastructure\n> needed to create such a thing. From memory, the buildfarm requires\n> gdb for that, but not sure if there are additional requirements.\n\nIt does have gdb installed.\n\n> Also, if you're using systemd or something else that thinks it\n> ought to interfere with where cores get dropped, that could be\n> a problem.\n\nI think they should just go to a file called \"core\", I don't think I've \nchanged any settings related to it, at least. I tried \"find / -name \ncore*\", but didn't find any core files, though.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 17:30:51 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 07/08/2019 16:57, Tom Lane wrote:\n>> Also, if you're using systemd or something else that thinks it\n>> ought to interfere with where cores get dropped, that could be\n>> a problem.\n\n> I think they should just go to a file called \"core\", I don't think I've \n> changed any settings related to it, at least. I tried \"find / -name \n> core*\", but didn't find any core files, though.\n\nOn Linux machines the first thing to check is\n\ncat /proc/sys/kernel/core_pattern\n\nOn a Debian machine I have handy, that just says \"core\", but Red Hat\ntends to mess with it ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Aug 2019 10:45:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On 07/08/2019 17:45, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 07/08/2019 16:57, Tom Lane wrote:\n>>> Also, if you're using systemd or something else that thinks it\n>>> ought to interfere with where cores get dropped, that could be\n>>> a problem.\n> \n>> I think they should just go to a file called \"core\", I don't think I've\n>> changed any settings related to it, at least. I tried \"find / -name\n>> core*\", but didn't find any core files, though.\n> \n> On Linux machines the first thing to check is\n> \n> cat /proc/sys/kernel/core_pattern\n> \n> On a Debian machine I have handy, that just says \"core\", but Red Hat\n> tends to mess with it ...\n\nIt's just \"core\" on chipmunk, too.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 7 Aug 2019 19:08:04 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Aug 7, 2019 at 4:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, I've been wondering whether pg_ctl could fork off a subprocess\n>> that would fork the postmaster, wait for the postmaster to exit, and then\n>> report the exit status. Where to report it *to* seems like the hard part,\n>> but maybe an answer that worked for the buildfarm would be enough for now.\n\n> Oh, right, you don't even need subreaper tricks (I was imagining we\n> had a double fork somewhere we don't).\n\nI got around to looking at how to do this. Seeing that chipmunk hasn't\nfailed again, I'm inclined to write that off as perhaps unrelated.\nThat leaves us to diagnose the pg_upgrade failures on wobbegong and\nvulpes. The pg_upgrade test uses pg_ctl to start the postmaster,\nand the only simple way to wedge this requirement into pg_ctl is as\nattached. Now, the attached is completely *not* suitable as a permanent\npatch, because it degrades or breaks a number of pg_ctl behaviors that\nrely on knowing the postmaster's actual PID rather than that of the\nparent shell. But it gets through check-world, so I think we can stick it\nin transiently to see what it can teach us about the buildfarm failures.\nGiven wobbegong's recent failure rate, I don't think we'll have to wait\nlong.\n\nSome notes about the patch:\n\n* The core idea is to change start_postmaster's shell invocation\nso that the shell doesn't just exec the postmaster, but runs a\nmini shell script that runs the postmaster and then reports its\nexit status. I found that this still needed a dummy exec to force\nthe shell to perform the I/O redirections on itself, else pg_ctl's\nTAP tests fail. (I think what was happening was that if the shell\ncontinued to hold open its original stdin, IPC::Run didn't believe\nthe command was done.)\n\n* This means that what start_postmaster returns is not the postmaster's\nown PID, but that of the parent shell. 
So we have to lobotomize\nwait_for_postmaster to handle the PID the same way as on Windows\n(where that was already true); it can't test for exact equality\nbetween the child process PID and what's in postmaster.pid.\n(trap_sigint_during_startup is also broken, but we don't need that\nto work to get through the regression tests.)\n\n* That makes recovery/t/017_shm.pl fail, because there's a race\ncondition: after killing the old postmaster, the existing\npostmaster.pid is enough to fool \"pg_ctl start\" into thinking the new\npostmaster is already running. I fixed that by making pg_ctl reject\nany PID seen in a pre-existing postmaster.pid file. That has a\nnonzero probability of false match, so I would not want to stick it\nin as a permanent thing on Unix ... but I wonder if it wouldn't be\nan improvement over the current situation on Windows.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 07 Oct 2019 00:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 7:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Parallel workers aren't ever allowed to write, in the current\n> implementation, so it's not real obvious why they'd have any\n> WAL log files open at all.\n\nParallel workers are not forbidden to write WAL, nor are they\nforbidden to modify blocks. They could legally HOT-prune, for example,\nthough I'm not positive whether they actually do.\n\nThe prohibition is at a higher level: they can't create new tuples or\ndelete existing ones.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 7 Oct 2019 09:00:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n>>> Yeah, I've been wondering whether pg_ctl could fork off a subprocess\n>>> that would fork the postmaster, wait for the postmaster to exit, and then\n>>> report the exit status.\n\n> [ pushed at 6a5084eed ]\n> Given wobbegong's recent failure rate, I don't think we'll have to wait\n> long.\n\nIndeed, we didn't:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wobbegong&dt=2019-10-10%2020%3A54%3A46\n\nThe tail end of the system log looks like\n\n2019-10-10 21:00:33.717 UTC [15127:306] pg_regress/date FATAL: postmaster exited during a parallel transaction\n2019-10-10 21:00:33.717 UTC [15127:307] pg_regress/date LOG: disconnection: session time: 0:00:02.896 user=fedora database=regression host=[local]\n/bin/sh: line 1: 14168 Segmentation fault (core dumped) \"/home/fedora/build-farm-10-clang/buildroot/HEAD/pgsql.build/tmp_install/home/fedora/build-farm-clang/buildroot/HEAD/inst/bin/postgres\" -F -c listen_addresses=\"\" -k \"/tmp/pg_upgrade_check-ZrhQ4h\"\npostmaster exit status is 139\n\nSo that's definitive proof that the postmaster is suffering a SIGSEGV.\nUnfortunately, we weren't blessed with a stack trace, even though\nwobbegong is running a buildfarm client version that is new enough\nto try to collect one. However, seeing that wobbegong is running\na pretty-recent Fedora release, the odds are that systemd-coredump\nhas commandeered the core dump and squirreled it someplace where\nwe can't find it.\n\nMuch as one could wish otherwise, systemd doesn't seem likely to\neither go away or scale back its invasiveness; so I'm afraid we\nare probably going to need to teach the buildfarm client how to\nnegotiate with systemd-coredump at some point. I don't much want\nto do that right this minute, though.\n\nA nearer-term solution would be to reproduce this manually and\ndig into the core. 
Mark, are you in a position to give somebody\nssh access to wobbegong's host, or another similarly-configured VM?\n\n(While at it, it'd be nice to investigate the infinite_recurse\nfailures we've been seeing on all those ppc64 critters ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Oct 2019 17:34:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Thu, Oct 10, 2019 at 05:34:51PM -0400, Tom Lane wrote:\n> A nearer-term solution would be to reproduce this manually and\n> dig into the core. Mark, are you in a position to give somebody\n> ssh access to wobbegong's host, or another similarly-configured VM?\n> \n> (While at it, it'd be nice to investigate the infinite_recurse\n> failures we've been seeing on all those ppc64 critters ...)\n\nYeah, whoever would like access, just send me your ssh key and login\nyou'd like to use, and I'll get you set up.\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Thu, 10 Oct 2019 14:53:51 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "\nOn 10/10/19 5:34 PM, Tom Lane wrote:\n> I wrote:\n>>>> Yeah, I've been wondering whether pg_ctl could fork off a subprocess\n>>>> that would fork the postmaster, wait for the postmaster to exit, and then\n>>>> report the exit status.\n>> [ pushed at 6a5084eed ]\n>> Given wobbegong's recent failure rate, I don't think we'll have to wait\n>> long.\n> Indeed, we didn't:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wobbegong&dt=2019-10-10%2020%3A54%3A46\n>\n> The tail end of the system log looks like\n>\n> 2019-10-10 21:00:33.717 UTC [15127:306] pg_regress/date FATAL: postmaster exited during a parallel transaction\n> 2019-10-10 21:00:33.717 UTC [15127:307] pg_regress/date LOG: disconnection: session time: 0:00:02.896 user=fedora database=regression host=[local]\n> /bin/sh: line 1: 14168 Segmentation fault (core dumped) \"/home/fedora/build-farm-10-clang/buildroot/HEAD/pgsql.build/tmp_install/home/fedora/build-farm-clang/buildroot/HEAD/inst/bin/postgres\" -F -c listen_addresses=\"\" -k \"/tmp/pg_upgrade_check-ZrhQ4h\"\n> postmaster exit status is 139\n>\n> So that's definitive proof that the postmaster is suffering a SIGSEGV.\n> Unfortunately, we weren't blessed with a stack trace, even though\n> wobbegong is running a buildfarm client version that is new enough\n> to try to collect one. However, seeing that wobbegong is running\n> a pretty-recent Fedora release, the odds are that systemd-coredump\n> has commandeered the core dump and squirreled it someplace where\n> we can't find it.\n\n\n\nAt least on F29 I have set /proc/sys/kernel/core_pattern and it works.\n\n\n\n>\n> Much as one could wish otherwise, systemd doesn't seem likely to\n> either go away or scale back its invasiveness; so I'm afraid we\n> are probably going to need to teach the buildfarm client how to\n> negotiate with systemd-coredump at some point. 
I don't much want\n> to do that right this minute, though.\n>\n> A nearer-term solution would be to reproduce this manually and\n> dig into the core. Mark, are you in a position to give somebody\n> ssh access to wobbegong's host, or another similarly-configured VM?\n\n\n\nI have given Mark my SSH key. That doesn't mean others interested shouldn't.\n\n\n>\n> (While at it, it'd be nice to investigate the infinite_recurse\n> failures we've been seeing on all those ppc64 critters ...)\n>\n> \t\t\t\n\n\n\nYeah.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 10 Oct 2019 18:01:14 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "\nOn 10/10/19 6:01 PM, Andrew Dunstan wrote:\n> On 10/10/19 5:34 PM, Tom Lane wrote:\n>> I wrote:\n>>>>> Yeah, I've been wondering whether pg_ctl could fork off a subprocess\n>>>>> that would fork the postmaster, wait for the postmaster to exit, and then\n>>>>> report the exit status.\n>>> [ pushed at 6a5084eed ]\n>>> Given wobbegong's recent failure rate, I don't think we'll have to wait\n>>> long.\n>> Indeed, we didn't:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wobbegong&dt=2019-10-10%2020%3A54%3A46\n>>\n>> The tail end of the system log looks like\n>>\n>> 2019-10-10 21:00:33.717 UTC [15127:306] pg_regress/date FATAL: postmaster exited during a parallel transaction\n>> 2019-10-10 21:00:33.717 UTC [15127:307] pg_regress/date LOG: disconnection: session time: 0:00:02.896 user=fedora database=regression host=[local]\n>> /bin/sh: line 1: 14168 Segmentation fault (core dumped) \"/home/fedora/build-farm-10-clang/buildroot/HEAD/pgsql.build/tmp_install/home/fedora/build-farm-clang/buildroot/HEAD/inst/bin/postgres\" -F -c listen_addresses=\"\" -k \"/tmp/pg_upgrade_check-ZrhQ4h\"\n>> postmaster exit status is 139\n>>\n>> So that's definitive proof that the postmaster is suffering a SIGSEGV.\n>> Unfortunately, we weren't blessed with a stack trace, even though\n>> wobbegong is running a buildfarm client version that is new enough\n>> to try to collect one. However, seeing that wobbegong is running\n>> a pretty-recent Fedora release, the odds are that systemd-coredump\n>> has commandeered the core dump and squirreled it someplace where\n>> we can't find it.\n>\n>\n> At least on F29 I have set /proc/sys/kernel/core_pattern and it works.\n\n\n\n\nI have done the same on this machine. wobbegong runs every hour, so\nlet's see what happens next. 
With any luck the buildfarm will give us a\nstack trace without needing further action.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 11 Oct 2019 11:12:28 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> At least on F29 I have set /proc/sys/kernel/core_pattern and it works.\n\nFWIW, I'm not excited about that as a permanent solution. It requires\nroot privilege, and it affects the whole machine not only the buildfarm,\nand making it persist across reboots is even more invasive.\n\n> I have done the same on this machine. wobbegong runs every hour, so\n> let's see what happens next. With any luck the buildfarm will give us a\n> stack trace without needing further action.\n\nI already collected one manually. It looks like this:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5114\n5114 {\nMissing separate debuginfos, use: dnf debuginfo-install glibc-2.26-30.fc27.ppc64le\n(gdb) bt\n#0 sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5114\n#1 <signal handler called>\n#2 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#3 0x00000000103fad08 in reaper (postgres_signal_arg=<optimized out>)\n at postmaster.c:3215\n#4 <signal handler called>\n#5 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#6 0x00000000103f9f98 in sigusr1_handler (postgres_signal_arg=<optimized out>)\n at postmaster.c:5275\n#7 <signal handler called>\n#8 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#9 0x00000000103fad08 in reaper (postgres_signal_arg=<optimized out>)\n at postmaster.c:3215\n#10 <signal handler called>\n#11 sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5114\n#12 <signal handler called>\n#13 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#14 0x00000000103f9f98 in sigusr1_handler (postgres_signal_arg=<optimized out>)\n at postmaster.c:5275\n#15 <signal handler called>\n#16 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#17 0x00000000103fad08 in reaper (postgres_signal_arg=<optimized out>)\n at postmaster.c:3215\n...\n#572 <signal handler called>\n#573 
0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#574 0x00000000103f9f98 in sigusr1_handler (\n postgres_signal_arg=<optimized out>) at postmaster.c:5275\n#575 <signal handler called>\n#576 0x00007fff93923ca4 in sigprocmask () from /lib64/libc.so.6\n#577 0x00000000103fad08 in reaper (postgres_signal_arg=<optimized out>)\n at postmaster.c:3215\n#578 <signal handler called>\n#579 sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5114\n#580 <signal handler called>\n#581 0x00007fff93a01514 in select () from /lib64/libc.so.6\n#582 0x00000000103f7ad8 in ServerLoop () at postmaster.c:1682\n#583 PostmasterMain (argc=<optimized out>, argv=<optimized out>)\n at postmaster.c:1391\n#584 0x0000000000000000 in ?? ()\n\nWhat we've apparently got here is that signals were received\nso fast that the postmaster ran out of stack space. I remember\nAndres complaining about this as a theoretical threat, but I\nhadn't seen it in the wild before.\n\nI haven't finished investigating though, as there are some things\nthat remain to be explained. The dependency on having\nforce_parallel_mode = regress makes sense now, because the extra\ntraffic to launch and reap all those parallel workers would\nincrease the stress on the postmaster (and it seems likely that\nthis stack trace corresponds exactly to alternating launch and\nreap signals). But why does it only happen during the pg_upgrade\ntest --- plain \"make check\" ought to be about the same? I also\nwant to investigate why clang builds seem more prone to this\nthan gcc builds on the same machine; that might just be down to\nmore or less stack consumption, but it bears looking into.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Oct 2019 11:45:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "\nOn 10/11/19 11:45 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> At least on F29 I have set /proc/sys/kernel/core_pattern and it works.\n> FWIW, I'm not excited about that as a permanent solution. It requires\n> root privilege, and it affects the whole machine not only the buildfarm,\n> and making it persist across reboots is even more invasive.\n\n\n\nOK, but I'm not keen to have to tussle with coredumpctl. Right now our\nlogic says: for every core file in the data directory try to get a\nbacktrace. Use of systemd-coredump means that gets blown out of the\nwater, and we no longer even have a simple test to see if our program\ncaused a core dump.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 11 Oct 2019 14:04:41 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n> What we've apparently got here is that signals were received\n> so fast that the postmaster ran out of stack space. I remember\n> Andres complaining about this as a theoretical threat, but I\n> hadn't seen it in the wild before.\n\n> I haven't finished investigating though, as there are some things\n> that remain to be explained.\n\nI still don't have a good explanation for why this only seems to\nhappen in the pg_upgrade test sequence. However, I did notice\nsomething very interesting: the postmaster crashes after consuming\nonly about 1MB of stack space. This is despite the prevailing\nsetting of \"ulimit -s\" being 8192 (8MB). I also confirmed that\nthe value of max_stack_depth within the crashed process is 2048,\nwhich implies that get_stack_depth_rlimit got some value larger\nthan 2MB from getrlimit(RLIMIT_STACK). And yet, here we have\na crash, and the process memory map confirms that only 1MB was\nallocated in the stack region. So it's really hard to explain\nthat as anything except a kernel bug: sometimes, the kernel\ndoesn't give us as much stack as it promised it would. And the\nmachine is not loaded enough for there to be any rational\nresource-exhaustion excuse for that.\n\nThis matches up with the intermittent infinite_recurse failures\nwe've been seeing in the buildfarm. Those are happening across\na range of systems, but they're (almost) all Linux-based ppc64,\nsuggesting that there's a longstanding arch-specific kernel bug\ninvolved. For reference, I scraped the attached list of such\nfailures in the last three months. I wonder whether we can get\nthe attention of any kernel hackers about that.\n\nAnyway, as to what to do about it --- it occurred to me to wonder\nwhy we are relying on having the signal handlers block and unblock\nsignals manually, when we could tell sigaction() that we'd like\nsignals blocked. 
It is reasonable to expect that the signal support\nis designed to not recursively consume stack space in the face of\na series of signals, while the way we are doing it clearly opens\nus up to recursive space consumption. The stack trace I showed\nbefore proves that the recursion happens at the points where the\nsignal handlers unblock signals.\n\nAs a quick hack I made the attached patch, and it seems to fix the\nproblem on wobbegong's host. I don't see crashes any more, and\nwatching the postmaster's stack space consumption, it stays\ncomfortably at a tad under 200KB (probably the default initial\nallocation), while without the patch it tends to blow up to 700K\nor more even in runs that don't crash.\n\nThis patch isn't committable as-is because it will (I suppose)\nbreak things on Windows; we still need the old way there for lack\nof sigaction(). But that could be fixed with a few #ifdefs.\nI'm also kind of tempted to move pqsignal_no_restart into\nbackend/libpq/pqsignal.c (where BlockSig is defined) and maybe\nrename it, but I'm not sure to what.\n\nThis issue might go away if we switched to a postmaster implementation\nthat doesn't do work in the signal handlers, but I'm not entirely\nconvinced of that. The existing handlers don't seem to consume a lot\nof stack space in themselves (there's not many local variables in them).\nThe bulk of the stack consumption is seemingly in the platform's signal\ninfrastructure, so that we might still have a stack consumption issue\neven with fairly trivial handlers, if we don't tell sigaction to block\nsignals. In any case, this fix seems potentially back-patchable,\nwhile we surely wouldn't risk back-patching a postmaster rewrite.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 11 Oct 2019 14:56:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 10/11/19 11:45 AM, Tom Lane wrote:\n>> FWIW, I'm not excited about that as a permanent solution. It requires\n>> root privilege, and it affects the whole machine not only the buildfarm,\n>> and making it persist across reboots is even more invasive.\n\n> OK, but I'm not keen to have to tussle with coredumpctl. Right now our\n> logic says: for every core file in the data directory try to get a\n> backtrace. Use of systemd-coredump means that gets blown out of the\n> water, and we no longer even have a simple test to see if our program\n> caused a core dump.\n\nI haven't played that much with this software, but it seems you can\ndo \"coredumpctl list <path-to-executable>\" to find out what it has\nfor a particular executable. You would likely need a time-based\nfilter too (to avoid regurgitating previous runs' failures),\nbut that seems do-able.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Oct 2019 15:11:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Sat, Oct 12, 2019 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This matches up with the intermittent infinite_recurse failures\n> we've been seeing in the buildfarm. Those are happening across\n> a range of systems, but they're (almost) all Linux-based ppc64,\n> suggesting that there's a longstanding arch-specific kernel bug\n> involved. For reference, I scraped the attached list of such\n> failures in the last three months. I wonder whether we can get\n> the attention of any kernel hackers about that.\n\nYeah, I don't know anything about this stuff, but I was also beginning\nto wonder if something is busted in the arch-specific fault.c code\nthat checks if stack expansion is valid[1], in a way that fails with a\nrapidly growing stack, well timed incoming signals, and perhaps\nDocker/LXC (that's on Mark's systems IIUC, not sure about the ARM\nboxes that failed or if it could be relevant here). Perhaps the\narbitrary tolerances mentioned in that comment are relevant.\n\n[1] https://github.com/torvalds/linux/blob/master/arch/powerpc/mm/fault.c#L244\n\n\n",
"msg_date": "Sat, 12 Oct 2019 08:41:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah, I don't know anything about this stuff, but I was also beginning\n> to wonder if something is busted in the arch-specific fault.c code\n> that checks if stack expansion is valid[1], in a way that fails with a\n> rapidly growing stack, well timed incoming signals, and perhaps\n> Docker/LXC (that's on Mark's systems IIUC, not sure about the ARM\n> boxes that failed or if it could be relevant here). Perhaps the\n> arbitrary tolerances mentioned in that comment are relevant.\n> [1] https://github.com/torvalds/linux/blob/master/arch/powerpc/mm/fault.c#L244\n\nHm, the bit about \"we'll allow up to 1MB unconditionally\" sure seems\nto match up with the observations here. I also wonder about the\narbitrary definition of \"a long way\" as 2KB. Could it be that that\nmisbehaves in the face of a userland function with more than 2KB of\nlocal variables?\n\nIt's not very clear how those things would lead to an intermittent\nfailure though. In the case of the postmaster crashes, we now see\nthat timing of signal receipts is relevant. For infinite_recurse,\nmaybe it only fails if an sinval interrupt happens at the wrong time?\n(This theory would predict that commit 798070ec0 made the problem\nway more prevalent than it had been ... need to go see if the\nbuildfarm history supports that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Oct 2019 16:13:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Sat, Oct 12, 2019 at 08:41:12AM +1300, Thomas Munro wrote:\n> On Sat, Oct 12, 2019 at 7:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This matches up with the intermittent infinite_recurse failures\n> > we've been seeing in the buildfarm. Those are happening across\n> > a range of systems, but they're (almost) all Linux-based ppc64,\n> > suggesting that there's a longstanding arch-specific kernel bug\n> > involved. For reference, I scraped the attached list of such\n> > failures in the last three months. I wonder whether we can get\n> > the attention of any kernel hackers about that.\n> \n> Yeah, I don't know anything about this stuff, but I was also beginning\n> to wonder if something is busted in the arch-specific fault.c code\n> that checks if stack expansion is valid[1], in a way that fails with a\n> rapidly growing stack, well timed incoming signals, and perhaps\n> Docker/LXC (that's on Mark's systems IIUC, not sure about the ARM\n> boxes that failed or if it could be relevant here). Perhaps the\n> arbitrary tolerances mentioned in that comment are relevant.\n\nThis specific one (wobbegong) is OpenStack/KVM[2], for what it's worth...\n\n\"... cluster is an OpenStack based cluster offering POWER8 & POWER9 LE\ninstances running on KVM ...\"\n\nBut to keep you on your toes, some of my ppc animals are Docker within\nanother OpenStack/KVM instance...\n\nRegards,\nMark\n\n[1] https://github.com/torvalds/linux/blob/master/arch/powerpc/mm/fault.c#L244\n[2] https://osuosl.org/services/powerdev/\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 11 Oct 2019 13:28:53 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-11 14:56:41 -0400, Tom Lane wrote:\n> I still don't have a good explanation for why this only seems to\n> happen in the pg_upgrade test sequence. However, I did notice\n> something very interesting: the postmaster crashes after consuming\n> only about 1MB of stack space. This is despite the prevailing\n> setting of \"ulimit -s\" being 8192 (8MB). I also confirmed that\n> the value of max_stack_depth within the crashed process is 2048,\n> which implies that get_stack_depth_rlimit got some value larger\n> than 2MB from getrlimit(RLIMIT_STACK). And yet, here we have\n> a crash, and the process memory map confirms that only 1MB was\n> allocated in the stack region. So it's really hard to explain\n> that as anything except a kernel bug: sometimes, the kernel\n> doesn't give us as much stack as it promised it would. And the\n> machine is not loaded enough for there to be any rational\n> resource-exhaustion excuse for that.\n\nLinux expands stack space only on demand, thus it's possible to run out\nof stack space while there ought to be stack space. Unfortunately that\nduring a stack expansion, which means there's no easy place to report\nthat. I've seen this be hit in production on busy machines.\n\nI wonder if the machine is configured with overcommit_memory=2,\ni.e. don't overcommit. cat /proc/sys/vm/overcommit_memory would tell.\nWhat does grep -E '^(Mem|Commit)' /proc/meminfo show while it's\nhappening?\n\nWhat does the signal information say? You can see it with\np $_siginfo\nafter receiving the signal. A SIGSEGV here, I assume.\n\nIIRC si_code and si_errno should indicate whether ENOMEM is the reason.\n\n\n> This matches up with the intermittent infinite_recurse failures\n> we've been seeing in the buildfarm. Those are happening across\n> a range of systems, but they're (almost) all Linux-based ppc64,\n> suggesting that there's a longstanding arch-specific kernel bug\n> involved. 
For reference, I scraped the attached list of such\n> failures in the last three months. I wonder whether we can get\n> the attention of any kernel hackers about that.\n\nMost of them are operated by Mark, right? So it could also just be high\nmemory pressure on those.\n\n> Anyway, as to what to do about it --- it occurred to me to wonder\n> why we are relying on having the signal handlers block and unblock\n> signals manually, when we could tell sigaction() that we'd like\n> signals blocked. It is reasonable to expect that the signal support\n> is designed to not recursively consume stack space in the face of\n> a series of signals, while the way we are doing it clearly opens\n> us up to recursive space consumption. The stack trace I showed\n> before proves that the recursion happens at the points where the\n> signal handlers unblock signals.\n\nYea, that seems like it might be good. But we have to be careful too, as\nthere are some things we do want to be interruptible from within a signal\nhandler. We start some processes from within one, after all...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 11 Oct 2019 13:31:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-11 14:56:41 -0400, Tom Lane wrote:\n>> ... So it's really hard to explain\n>> that as anything except a kernel bug: sometimes, the kernel\n>> doesn't give us as much stack as it promised it would. And the\n>> machine is not loaded enough for there to be any rational\n>> resource-exhaustion excuse for that.\n\n> Linux expands stack space only on demand, thus it's possible to run out\n> of stack space while there ought to be stack space. Unfortunately that\n> during a stack expansion, which means there's no easy place to report\n> that. I've seen this be hit in production on busy machines.\n\nAs I said, this machine doesn't seem busy enough for that to be a\ntenable excuse; there's nobody but me logged in, and the buildfarm\ncritter isn't running.\n\n> I wonder if the machine is configured with overcommit_memory=2,\n> i.e. don't overcommit. cat /proc/sys/vm/overcommit_memory would tell.\n\n$ cat /proc/sys/vm/overcommit_memory\n0\n\n> What does grep -E '^(Mem|Commit)' /proc/meminfo show while it's\n> happening?\n\nidle:\n\n$ grep -E '^(Mem|Commit)' /proc/meminfo \nMemTotal: 2074816 kB\nMemFree: 36864 kB\nMemAvailable: 1779584 kB\nCommitLimit: 1037376 kB\nCommitted_AS: 412480 kB\n\na few captures while regression tests are running:\n\n$ grep -E '^(Mem|Commit)' /proc/meminfo \nMemTotal: 2074816 kB\nMemFree: 8512 kB\nMemAvailable: 1819264 kB\nCommitLimit: 1037376 kB\nCommitted_AS: 371904 kB\n$ grep -E '^(Mem|Commit)' /proc/meminfo \nMemTotal: 2074816 kB\nMemFree: 32640 kB\nMemAvailable: 1753792 kB\nCommitLimit: 1037376 kB\nCommitted_AS: 585984 kB\n$ grep -E '^(Mem|Commit)' /proc/meminfo \nMemTotal: 2074816 kB\nMemFree: 56640 kB\nMemAvailable: 1695744 kB\nCommitLimit: 1037376 kB\nCommitted_AS: 568768 kB\n\n\n> What does the signal information say? You can see it with\n> p $_siginfo\n> after receiving the signal. 
A SIGSEGV here, I assume.\n\n(gdb) p $_siginfo\n$1 = {si_signo = 11, si_errno = 0, si_code = 128, _sifields = {_pad = {0 <repeats 28 times>}, _kill = {si_pid = 0, si_uid = 0}, \n _timer = {si_tid = 0, si_overrun = 0, si_sigval = {sival_int = 0, sival_ptr = 0x0}}, _rt = {si_pid = 0, si_uid = 0, si_sigval = {\n sival_int = 0, sival_ptr = 0x0}}, _sigchld = {si_pid = 0, si_uid = 0, si_status = 0, si_utime = 0, si_stime = 0}, _sigfault = {\n si_addr = 0x0}, _sigpoll = {si_band = 0, si_fd = 0}}}\n\n> Yea, that seems like it might be good. But we have to be careful too, as\n> there are some things we do want to be interruptible from within a signal\n> handler. We start some processes from within one, after all...\n\nThe proposed patch has zero effect on what the signal mask will be inside\na signal handler, only on the transient state during handler entry/exit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Oct 2019 16:40:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Sat, Oct 12, 2019 at 9:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-10-11 14:56:41 -0400, Tom Lane wrote:\n> >> ... So it's really hard to explain\n> >> that as anything except a kernel bug: sometimes, the kernel\n> >> doesn't give us as much stack as it promised it would. And the\n> >> machine is not loaded enough for there to be any rational\n> >> resource-exhaustion excuse for that.\n>\n> > Linux expands stack space only on demand, thus it's possible to run out\n> > of stack space while there ought to be stack space. Unfortunately that\n> > during a stack expansion, which means there's no easy place to report\n> > that. I've seen this be hit in production on busy machines.\n>\n> As I said, this machine doesn't seem busy enough for that to be a\n> tenable excuse; there's nobody but me logged in, and the buildfarm\n> critter isn't running.\n\nYeah. As I speculated in the other thread[1], the straightforward\ncan't-allocate-any-more-space-but-no-other-way-to-tell-you-that case,\nie, the explanation that doesn't involve a bug in Linux or PostgreSQL,\nseems unlikely unless we also see other more obvious signs of\noccasional overcommit problems (ie not during stack expansion) on\nthose hosts, doesn't it? How likely is it that this 1-2MB of stack\nspace is the straw that breaks the camel's back, every time?\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJ_MkqdEH-LmmebhNLSFeyWwvYVXfPaz3A2_p27EQfZwA%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 12 Oct 2019 10:03:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n> It's not very clear how those things would lead to an intermittent\n> failure though. In the case of the postmaster crashes, we now see\n> that timing of signal receipts is relevant. For infinite_recurse,\n> maybe it only fails if an sinval interrupt happens at the wrong time?\n> (This theory would predict that commit 798070ec0 made the problem\n> way more prevalent than it had been ... need to go see if the\n> buildfarm history supports that.)\n\nThat seems to fit, roughly: commit 798070ec0 moved errors.sql to execute\nas part of a parallel group on 2019-04-11, and the first failure of the\ninfinite_recurse test happened on 2019-04-27. Since then we've averaged\nabout one such failure every four days, which makes a sixteen-day gap a\nlittle more than you'd expect, but not a huge amount more. Anyway,\nI do not see any other commits in between that would plausibly have\naffected this.\n\nIn other news, I reproduced the problem with gcc on wobbegong's host,\nand confirmed that the gcc build uses less stack space: one recursive\ncycle of reaper() and sigusr1_handler() consumes 14768 bytes with clang,\nbut just 9296 bytes with gcc. So the evident difference in failure rate\nbetween wobbegong and vulpes is nicely explained by that. Still no\ntheory about pg_upgrade versus vanilla \"make check\" though. I did manage\nto make it happen during \"make check\" by dint of reducing the \"ulimit -s\"\nsetting, so it's *possible* for it to happen there, it just doesn't.\nWeird.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Oct 2019 17:13:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I've now also been able to reproduce the \"infinite_recurse\" segfault\non wobbegong's host (or, since I was using a gcc build, I guess I\nshould say vulpes' host). The first-order result is that it's the\nsame problem with the kernel not giving us as much stack space as\nwe expect: there's only 1179648 bytes in the stack segment in the\ncore dump, though we should certainly have been allowed at least 8MB.\n\nThe next interesting thing is that looking closely at the identified\nspot of the SIGSEGV, there's nothing there that should be touching\nthe stack at all:\n\n(gdb) x/4i $pc\n=> 0x10201df0 <core_yylex+1072>: ld r9,0(r30)\n 0x10201df4 <core_yylex+1076>: ld r8,128(r30)\n 0x10201df8 <core_yylex+1080>: ld r10,152(r30)\n 0x10201dfc <core_yylex+1084>: ld r9,0(r9)\n\n(r30 is not pointing at the stack, but at a valid heap location.)\nThis code is the start of the switch case at scan.l:1064, so the\nmost recent successfully-executed instructions were the switch jump,\nand they don't involve the stack either.\n\nThe reported sp,\n\n(gdb) i reg sp\nsp 0x7fffe6940890 140737061849232\n\nis a good 2192 bytes above the bottom of the allocated stack space,\nwhich is 0x7fffe6940000 according to gdb. So we really ought to\nhave plenty of margin here. What's going on?\n\nWhat I suspect, given the difficulty of reproducing this, is that\nwhat really happened is that the kernel tried to deliver a SIGUSR1\nsignal to us just at this point. The kernel source code that\nThomas pointed to comments that\n\n\t * The kernel signal delivery code writes up to about 1.5kB\n\t * below the stack pointer (r1) before decrementing it.\n\nThere's more than 1.5kB available below sp, but what if that comment\nis a lie? 
In particular, I'm wondering if that number dates to PPC32\nand needs to be doubled, or nearly so, to describe PPC64 reality.\nIf that were the case, then the signal code would not have been\nable to fit its requirement, and would probably have come here to\nask for more stack space, and the hard-wired 2048 test a little\nfurther down would have decided that that was a wild stack access.\n\nIn short, my current belief is that Linux PPC64 fails when trying\nto deliver a signal if there's right around 2KB of stack remaining,\neven though it should be able to expand the stack and press on.\n\nIt may well be that the reason is just that this heuristic in\nbad_stack_expansion() is out of date. Or there might be a similarly\nbogus value somewhere in the signal-delivery code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 12 Oct 2019 17:25:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n> In short, my current belief is that Linux PPC64 fails when trying\n> to deliver a signal if there's right around 2KB of stack remaining,\n> even though it should be able to expand the stack and press on.\n\nI figured I should try to remove some variables from the equation\nby demonstrating this claim without involving Postgres. The attached\ntest program eats some stack space and then waits to be sent SIGUSR1.\nFor some values of \"some stack space\", it dumps core:\n\n[tgl@postgresql-fedora ~]$ gcc -g -Wall -O1 stacktest.c\n[tgl@postgresql-fedora ~]$ ./a.out 1240000 &\n[1] 11796\n[tgl@postgresql-fedora ~]$ kill -USR1 11796\n[tgl@postgresql-fedora ~]$ signal delivered, stack base 0x7fffdc160000 top 0x7fffdc031420 (1240032 used)\n\n[1]+ Done ./a.out 1240000\n[tgl@postgresql-fedora ~]$ ./a.out 1242000 &\n[1] 11797\n[tgl@postgresql-fedora ~]$ kill -USR1 11797\n[tgl@postgresql-fedora ~]$ \n[1]+ Segmentation fault (core dumped) ./a.out 1242000\n[tgl@postgresql-fedora ~]$ uname -a\nLinux postgresql-fedora.novalocal 4.18.19-100.fc27.ppc64le #1 SMP Wed Nov 14 21:53:32 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux\n\nI don't think any further proof is required that this is\na kernel bug. Where would be a good place to file it?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 12 Oct 2019 20:06:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Sun, Oct 13, 2019 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think any further proof is required that this is\n> a kernel bug. Where would be a good place to file it?\n\nlinuxppc-dev@lists.ozlabs.org might be the right place.\n\nhttps://lists.ozlabs.org/listinfo/linuxppc-dev\n\n\n",
"msg_date": "Sun, 13 Oct 2019 13:44:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-13 13:44:59 +1300, Thomas Munro wrote:\n> On Sun, Oct 13, 2019 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think any further proof is required that this is\n> > a kernel bug. Where would be a good place to file it?\n> \n> linuxppc-dev@lists.ozlabs.org might be the right place.\n> \n> https://lists.ozlabs.org/listinfo/linuxppc-dev\n\nProbably requires reproducing on a pretty recent kernel first, to have a\ndecent chance of being investigated...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Oct 2019 06:31:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Probably requires reproducing on a pretty recent kernel first, to have a\n> decent chance of being investigated...\n\nHow recent do you think it needs to be? The machine I was testing on\nyesterday is under a year old:\n\nuname -m = ppc64le\nuname -r = 4.18.19-100.fc27.ppc64le\nuname -s = Linux\nuname -v = #1 SMP Wed Nov 14 21:53:32 UTC 2018\n\nThe latest-by-version-number ppc64 kernel I can find in the buildfarm\nis bonito,\n\nuname -m = ppc64le\nuname -r = 4.19.15-300.fc29.ppc64le\nuname -s = Linux\nuname -v = #1 SMP Mon Jan 14 16:21:04 UTC 2019\n\nand that's certainly shown it too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Oct 2019 10:29:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-13 10:29:45 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Probably requires reproducing on a pretty recent kernel first, to have a\n> > decent chance of being investigated...\n> \n> How recent do you think it needs to be? The machine I was testing on\n> yesterday is under a year old:\n>\n> uname -r = 4.18.19-100.fc27.ppc64le\n> ...\n> uname -r = 4.19.15-300.fc29.ppc64le\n\nMy experience reporting kernel bugs is that the latest released version,\nor even just the tip of the git tree, is your best bet :/. And that\nreports using distro kernels - with all their out of tree changes - are\nalso less likely to receive a response. IIRC there's a fedora repo with\nupstream kernels.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Oct 2019 07:50:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-10-13 10:29:45 -0400, Tom Lane wrote:\n>> How recent do you think it needs to be?\n\n> My experience reporting kernel bugs is that the latest released version,\n> or even just the tip of the git tree, is your best bet :/.\n\nConsidering that we're going to point them at chapter and verse in\nTorvalds' own repo, I do not think they can claim that we're worried\nabout obsolete code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Oct 2019 10:57:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Filed at\n\nhttps://bugzilla.kernel.org/show_bug.cgi?id=205183\n\nWe'll see what happens ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Oct 2019 12:07:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n> Filed at\n> https://bugzilla.kernel.org/show_bug.cgi?id=205183\n> We'll see what happens ...\n\nFurther to this --- I went back and looked at the outlier events\nwhere we saw an infinite_recurse failure on a non-Linux-PPC64\nplatform. There were only three:\n\n mereswine | ARMv7 | Linux debian-armhf | Clarence Ho | REL_11_STABLE | 2019-08-11 02:10:12 | InstallCheck-C | 2019-08-11 02:36:10.159 PDT [5004:4] DETAIL: Failed process was running: select infinite_recurse();\n mereswine | ARMv7 | Linux debian-armhf | Clarence Ho | REL_12_STABLE | 2019-08-11 09:52:46 | pg_upgradeCheck | 2019-08-11 04:21:16.756 PDT [6804:5] DETAIL: Failed process was running: select infinite_recurse();\n mereswine | ARMv7 | Linux debian-armhf | Clarence Ho | HEAD | 2019-08-11 11:29:27 | pg_upgradeCheck | 2019-08-11 07:15:28.454 PDT [9954:76] DETAIL: Failed process was running: select infinite_recurse();\n \nLooking closer at these, though, they were *not* SIGSEGV failures,\nbut SIGKILLs. Seeing that they were all on the same machine on the\nsame day, I'm thinking we can write them off as a transiently\nmisconfigured OOM killer.\n\nSo, pending some other theory emerging from the kernel hackers, we're\ndown to it's-a-PPC64-kernel-bug. That leaves me wondering what if\nanything we want to do about it. Even if it's fixed reasonably promptly\nin Linux upstream, and then we successfully nag assorted vendors to\nincorporate the fix quickly, that's still going to leave us with frequent\nbuildfarm failures on Mark's flotilla of not-the-very-shiniest Linux\nversions.\n\nShould we move the infinite_recurse test to happen alone in a parallel\ngroup just to stop these failures? That's annoying from a parallelism\nstandpoint, but I don't see any other way to avoid these failures.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Oct 2019 11:50:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I want to give some conclusion to our occurrence of this, which now I think was\nneither an instance nor indicative of any bug. Summary: postgres was being\nkill -9'd by a deployment script, after it \"timed out\". Thus no log messages.\n\nI initially experienced this while testing migration of a customer's DB using\nan ansible script, which did \"async: 2222, poll: 0\". Which I guess it takes to\nmean \"let the process run for 2222 seconds, after which send it SIGKILL\". I\nprobably made multiple attempts to migrate (for example to fix tablespaces or\ndependencies on postgis), and the process was started during one test and never\nstopped nor restarted during following tests, until it finally hit 2222sec and\nstopped ungracefully.\n\nThis just happened again, during a different migration, so I reproduced it like:\n\nansible --sudo --sudo-user pryzbyj 'database.customer.*' --background=1 --poll=1 -m command -a '/usr/pgsql-12/bin/postmaster -D /home/pryzbyj/pg12.dat -c port=5678 -c unix-socket-directories=/tmp -c client_min_messages=debug -c log_temp_files=0 -c log_lock_waits=1'\n\nConnect to customer and verify it was killed uncleanly:\n\n[pryzbyj@database ~]$ /usr/pgsql-12/bin/postmaster -D ./pg12.dat -c port=5678 -c unix-socket-directories=/tmp -c client_min_messages=debug -c log_temp_files=0 -c log_lock_waits=1 -c logging_collector=off\n2019-10-22 17:57:58.251 EDT [5895] FATAL: lock file \"postmaster.pid\" already exists\n2019-10-22 17:57:58.251 EDT [5895] HINT: Is another postmaster (PID 5608) running in data directory \"/home/pryzbyj/./pg12.dat\"?\n[pryzbyj@database ~]$ /usr/pgsql-12/bin/postmaster -D ./pg12.dat -c port=5678 -c unix-socket-directories=/tmp -c client_min_messages=debug -c log_temp_files=0 -c log_lock_waits=1 -c logging_collector=off\n2019-10-22 17:58:01.312 EDT [5962] LOG: starting PostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n2019-10-22 17:58:01.312 EDT [5962] LOG: 
listening on IPv6 address \"::1\", port 5678\n2019-10-22 17:58:01.312 EDT [5962] LOG: listening on IPv4 address \"127.0.0.1\", port 5678\n2019-10-22 17:58:01.328 EDT [5962] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5678\"\n2019-10-22 17:58:01.353 EDT [5963] LOG: database system was interrupted; last known up at 2019-10-22 17:57:48 EDT\n2019-10-22 17:58:01.460 EDT [5963] LOG: database system was not properly shut down; automatic recovery in progress\n2019-10-22 17:58:01.478 EDT [5963] LOG: invalid record length at 6/E829D128: wanted 24, got 0\n2019-10-22 17:58:01.478 EDT [5963] LOG: redo is not required\n2019-10-22 17:58:01.526 EDT [5962] LOG: database system is ready to accept connections\n\n\nOn Tue, Jul 23, 2019 at 11:27:03AM -0500, Justin Pryzby wrote:\n> Does anyone have a stress test for parallel workers ?\n> \n> On a customer's new VM, I got this several times while (trying to) migrate their DB:\n> \n> < 2019-07-23 10:33:51.552 CDT postgres >FATAL: postmaster exited during a parallel transaction\n> < 2019-07-23 10:33:51.552 CDT postgres >STATEMENT: CREATE UNIQUE INDEX unused0_huawei_umts_nodeb_locell_201907_unique_idx ON child.unused0_huawei_umts_nodeb_locell_201907 USING btree ...\n> \n> There's nothing in dmesg nor in postgres logs.\n> \n> At first I thought it's maybe because of a network disconnection, then I\n> thought it's because we ran out of space (wal), then they had a faulty blade.\n> After that, I'd tried and failed to reproduce it a number of times, but it's\n> just recurred during what was intended to be their final restore. I've set\n> max_parallel_workers_per_gather=0, but I'm planning to try to diagnose an issue\n> in another instance. Ideally a minimal test, since I'm apparently going to\n> have to run under gdb to see how it's dying, or even what process is failing.\n> \n> DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/21/2015\n> CentOS release 6.9 (Final)\n> Linux alextelsasrv01 2.6.32-754.17.1.el6.x86_64 #1 SMP Tue Jul 2 12:42:48 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\n> version | PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\n\n\n",
"msg_date": "Tue, 22 Oct 2019 17:03:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Tue, Oct 15, 2019 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Filed at\n> > https://bugzilla.kernel.org/show_bug.cgi?id=205183\n\nFor the curious-and-not-subscribed, there's now a kernel patch\nproposed for this. We guessed pretty close, but the problem wasn't\nthose dodgy looking magic numbers, it was that the bad stack expansion\ncheck only allows for user space to expand the stack\n(FAULT_FLAG_USER), and here the kernel itself wants to build a stack\nframe.\n\n\n",
"msg_date": "Wed, 11 Dec 2019 15:22:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Wed, Dec 11, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Oct 15, 2019 at 4:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Filed at\n> > > https://bugzilla.kernel.org/show_bug.cgi?id=205183\n>\n> For the curious-and-not-subscribed, there's now a kernel patch\n> proposed for this. We guessed pretty close, but the problem wasn't\n> those dodgy looking magic numbers, it was that the bad stack expansion\n> check only allows for user space to expand the stack\n> (FAULT_FLAG_USER), and here the kernel itself wants to build a stack\n> frame.\n\nHehe, the dodgy looking magic numbers *were* wrong:\n\n- * The kernel signal delivery code writes up to about 1.5kB\n+ * The kernel signal delivery code writes a bit over 4KB\n\nhttps://patchwork.ozlabs.org/project/linuxppc-dev/patch/20200724092528.1578671-2-mpe@ellerman.id.au/\n\n\n",
"msg_date": "Tue, 28 Jul 2020 13:35:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hehe, the dodgy looking magic numbers *were* wrong:\n> - * The kernel signal delivery code writes up to about 1.5kB\n> + * The kernel signal delivery code writes a bit over 4KB\n> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20200724092528.1578671-2-mpe@ellerman.id.au/\n\n... and, having seemingly not learned a thing, they just replaced\nthem with new magic numbers. Mumble sizeof() mumble.\n\nAnyway, I guess the interesting question for us is how long it\nwill take for this fix to propagate into real-world systems.\nI don't have much of a clue about the Linux kernel workflow,\nanybody want to venture a guess?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Jul 2020 23:27:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Tue, Jul 28, 2020 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Anyway, I guess the interesting question for us is how long it\n> will take for this fix to propagate into real-world systems.\n> I don't have much of a clue about the Linux kernel workflow,\n> anybody want to venture a guess?\n\nMe neither. It just hit Torvalds' tree[1] marked \"Cc:\nstable@vger.kernel.org # v2.6.27+\". I looked at the time for a couple\nof other PowerPC-related commits of similar complexity involving some\nof the same names to get from there to a Debian stable kernel package\nand it seemed to be under a couple of months.\n\n[1] https://github.com/torvalds/linux/commit/63dee5df43a31f3844efabc58972f0a206ca4534\n\n\n",
"msg_date": "Tue, 11 Aug 2020 16:38:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Jul 28, 2020 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, I guess the interesting question for us is how long it\n>> will take for this fix to propagate into real-world systems.\n>> I don't have much of a clue about the Linux kernel workflow,\n>> anybody want to venture a guess?\n\n> Me neither. It just hit Torvalds' tree[1] marked \"Cc:\n> stable@vger.kernel.org # v2.6.27+\". I looked at the time for a couple\n> of other PowerPC-related commits of similar complexity involving some\n> of the same names to get from there to a Debian stable kernel package\n> and it seemed to be under a couple of months.\n> [1] https://github.com/torvalds/linux/commit/63dee5df43a31f3844efabc58972f0a206ca4534\n\nFor our archives' sake: today I got seemingly-automated mail informing me\nthat this patch has been merged into the 4.19-stable, 5.4-stable,\n5.7-stable, and 5.8-stable kernel branches; but not 4.4-stable,\n4.9-stable, or 4.14-stable, because it failed to apply.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Aug 2020 12:14:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "I wrote:\n> For our archives' sake: today I got seemingly-automated mail informing me\n> that this patch has been merged into the 4.19-stable, 5.4-stable,\n> 5.7-stable, and 5.8-stable kernel branches; but not 4.4-stable,\n> 4.9-stable, or 4.14-stable, because it failed to apply.\n\nAnd this morning's mail brought news that the latter three branches\nare now patched as well. So I guess at this point it's down to\nplatform vendors as to whether or how fast they absorb such changes.\n\nIt might help for us to file platform-specific bug reports asking\nfor the change to be merged.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Aug 2020 09:43:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
},
{
"msg_contents": "On Tue, Aug 25, 2020 at 1:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > For our archives' sake: today I got seemingly-automated mail informing me\n> > that this patch has been merged into the 4.19-stable, 5.4-stable,\n> > 5.7-stable, and 5.8-stable kernel branches; but not 4.4-stable,\n> > 4.9-stable, or 4.14-stable, because it failed to apply.\n>\n> And this morning's mail brought news that the latter three branches\n> are now patched as well. So I guess at this point it's down to\n> platform vendors as to whether or how fast they absorb such changes.\n\nToday I upgraded a Debian buster box and saw a new kernel image roll\nin. Lo and behold:\n\n$ zgrep 'stack expansion'\n/usr/share/doc/linux-image-4.19.0-11-amd64/changelog.gz\n - [powerpc*] Allow 4224 bytes of stack expansion for the signal frame\n\n\n",
"msg_date": "Fri, 16 Oct 2020 11:11:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stress test for parallel workers"
}
] |
[
{
"msg_contents": "Hello devs,\n\nWhile doing some performance tests and reviewing patches, I needed to \ncreate partitioned tables. Given the current syntax this is time \nconsumming.\n\nThe attached patch adds two options to create a partitioned \"account\" \ntable in pgbench.\n\nIt allows to answer quickly simple questions, eg \"what is the overhead of \nhash partitioning on a simple select on my laptop\"? Answer:\n\n # N=0..?\n sh> pgench -i -s 1 --partition-number=$N --partition-type=hash\n\n # then run\n sh> pgench -S -M prepared -P 1 -T 10\n\n # and look at latency:\n # no parts = 0.071 ms\n # 1 hash = 0.071 ms (did someone optimize this case?!)\n # 2 hash ~ 0.126 ms (+ 0.055 ms)\n # 50 hash ~ 0.155 ms\n # 100 hash ~ 0.178 ms\n # 150 hash ~ 0.232 ms\n # 200 hash ~ 0.279 ms\n # overhead ~ (0.050 + [0.0005-0.0008] * nparts) ms\n\n-- \nFabien.",
"msg_date": "Tue, 23 Jul 2019 18:26:17 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, 23 Jul 2019 at 19:26, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello devs,\n>\n> While doing some performance tests and reviewing patches, I needed to\n> create partitioned tables. Given the current syntax this is time\n> consumming.\n>\n\nGood idea. I wonder why we didn't have it already.\n\n\n> The attached patch adds two options to create a partitioned \"account\"\n> table in pgbench.\n>\n> It allows to answer quickly simple questions, eg \"what is the overhead of\n> hash partitioning on a simple select on my laptop\"? Answer:\n>\n> # N=0..?\n> sh> pgench -i -s 1 --partition-number=$N --partition-type=hash\n>\n\nGiven current naming of options, I would call this\n--partitions=number-of-partitions and --partition-method=hash\n\n\n> # then run\n> sh> pgench -S -M prepared -P 1 -T 10\n>\n> # and look at latency:\n> # no parts = 0.071 ms\n> # 1 hash = 0.071 ms (did someone optimize this case?!)\n> # 2 hash ~ 0.126 ms (+ 0.055 ms)\n> # 50 hash ~ 0.155 ms\n> # 100 hash ~ 0.178 ms\n> # 150 hash ~ 0.232 ms\n> # 200 hash ~ 0.279 ms\n> # overhead ~ (0.050 + [0.0005-0.0008] * nparts) ms\n>\n\nIt is linear?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Tue, 23 Jul 2019 at 19:26, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\nHello devs,\n\nWhile doing some performance tests and reviewing patches, I needed to \ncreate partitioned tables. Given the current syntax this is time \nconsumming.Good idea. I wonder why we didn't have it already. \nThe attached patch adds two options to create a partitioned \"account\" \ntable in pgbench.\n\nIt allows to answer quickly simple questions, eg \"what is the overhead of \nhash partitioning on a simple select on my laptop\"? 
Answer:\n\n # N=0..?\n sh> pgench -i -s 1 --partition-number=$N --partition-type=hashGiven current naming of options, I would call this --partitions=number-of-partitions and --partition-method=hash \n # then run\n sh> pgench -S -M prepared -P 1 -T 10\n\n # and look at latency:\n # no parts = 0.071 ms\n # 1 hash = 0.071 ms (did someone optimize this case?!)\n # 2 hash ~ 0.126 ms (+ 0.055 ms)\n # 50 hash ~ 0.155 ms\n # 100 hash ~ 0.178 ms\n # 150 hash ~ 0.232 ms\n # 200 hash ~ 0.279 ms\n # overhead ~ (0.050 + [0.0005-0.0008] * nparts) msIt is linear? -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Tue, 23 Jul 2019 23:16:35 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Simon,\n\n>> While doing some performance tests and reviewing patches, I needed to\n>> create partitioned tables. Given the current syntax this is time\n>> consumming.\n>\n> Good idea. I wonder why we didn't have it already.\n\nProbably because I did not have to create partitioned table for some \ntesting:-)\n\n>> sh> pgench -i -s 1 --partition-number=$N --partition-type=hash\n>\n> Given current naming of options, I would call this\n> --partitions=number-of-partitions and --partition-method=hash\n\nOk.\n\n>> # then run\n>> sh> pgench -S -M prepared -P 1 -T 10\n>>\n>> # and look at latency:\n>> # no parts = 0.071 ms\n>> # 1 hash = 0.071 ms (did someone optimize this case?!)\n>> # 2 hash ~ 0.126 ms (+ 0.055 ms)\n>> # 50 hash ~ 0.155 ms\n>> # 100 hash ~ 0.178 ms\n>> # 150 hash ~ 0.232 ms\n>> # 200 hash ~ 0.279 ms\n>> # overhead ~ (0.050 + [0.0005-0.0008] * nparts) ms\n>\n> It is linear?\n\nGood question. I would have hoped affine, but this is not very clear on \nthese data, which are the median of about five runs, hence the bracket on \nthe slope factor. At least it is increasing with the number of partitions. \nMaybe it would be clearer on the minimum of five runs.\n\n-- \nFabien.",
"msg_date": "Wed, 24 Jul 2019 08:23:44 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": ">>> # and look at latency:\n>>> # no parts = 0.071 ms\n>>> # 1 hash = 0.071 ms (did someone optimize this case?!)\n>>> # 2 hash ~ 0.126 ms (+ 0.055 ms)\n>>> # 50 hash ~ 0.155 ms\n>>> # 100 hash ~ 0.178 ms\n>>> # 150 hash ~ 0.232 ms\n>>> # 200 hash ~ 0.279 ms\n>>> # overhead ~ (0.050 + [0.0005-0.0008] * nparts) ms\n>> \n>> It is linear?\n>\n> Good question. I would have hoped affine, but this is not very clear on these \n> data, which are the median of about five runs, hence the bracket on the slope \n> factor. At least it is increasing with the number of partitions. Maybe it \n> would be clearer on the minimum of five runs.\n\nHere is a fellow up.\n\nOn the minimum of all available runs the query time on hash partitions is \nabout:\n\n 0.64375 nparts + 118.30979 (in ᅵs).\n\nSo the overhead is about 47.30979 + 0.64375 nparts, and it is indeed \npretty convincingly linear as suggested by the attached figure.\n\n-- \nFabien.",
"msg_date": "Wed, 24 Jul 2019 22:26:34 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Attached v3 fixes strcasecmp non portability on windows, per postgresql \npatch tester.\n\n-- \nFabien.",
"msg_date": "Fri, 26 Jul 2019 20:52:14 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nThe patch looks good to me, Just one suggestion --partition-method option should be made dependent on --partitions, because it has no use unless used with --partitions. What do you think? \r\n \r\nRegards,\r\nAsif\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 26 Aug 2019 12:53:06 +0000",
"msg_from": "Asif Rehman <asifr.rehman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "> Just one suggestion --partition-method option should be made dependent \n> on --partitions, because it has no use unless used with --partitions. \n> What do you think?\n\nWhy not. V4 attached.\n\n-- \nFabien.",
"msg_date": "Mon, 26 Aug 2019 19:34:18 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThanks. All looks good, making it ready for committer.\r\n\r\nRegards,\r\nAsif\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Tue, 27 Aug 2019 08:19:19 +0000",
"msg_from": "Asif Rehman <asifr.rehman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Mon, Aug 26, 2019 at 11:04 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> > Just one suggestion --partition-method option should be made dependent\n> > on --partitions, because it has no use unless used with --partitions.\n> > What do you think?\n>\n\nSome comments:\n*\n+ case 11: /* partitions */\n+ initialization_option_set = true;\n+ partitions = atoi(optarg);\n+ if (partitions < 0)\n+ {\n+ fprintf(stderr, \"invalid number of partitions: \\\"%s\\\"\\n\",\n+ optarg);\n+ exit(1);\n+ }\n+ break;\n\nIs there a reason why we treat \"partitions = 0\" as a valid value?\nAlso, shouldn't we keep some max limit for this parameter as well?\nForex. how realistic it will be if the user gives the value of\npartitions the same or greater than the number of rows in\npgbench_accounts table? I understand it is not sensible to give such\na value, but I guess the API should behave sanely in such cases as\nwell. I am not sure what will be the good max value for it, but I\nthink there should be one. Anyone else have any better suggestions\nfor this?\n\n*\n@@ -3625,6 +3644,7 @@ initCreateTables(PGconn *con)\n const char *bigcols; /* column decls if accountIDs are 64 bits */\n int declare_fillfactor;\n };\n+\n static const struct ddlinfo DDLs[] = {\n\nSpurious line change.\n\n*\n+ \" --partitions=NUM partition account table in NUM parts\n(defaults: 0)\\n\"\n+ \" --partition-method=(range|hash)\\n\"\n+ \" partition account table with this\nmethod (default: range)\\n\"\n\nRefer complete table name like pgbench_accounts instead of just\naccount. It will be clear and in sync with what we display in some\nother options like --skip-some-updates.\n\n*\n+ \" --partitions=NUM partition account table in NUM parts\n(defaults: 0)\\n\"\n\n/defaults/default.\n\n*\nI think we should print the information about partitions in\nprintResults. 
It can help users while analyzing results.\n\n*\n+enum { PART_NONE, PART_RANGE, PART_HASH }\n+ partition_method = PART_NONE;\n+\n\nI think it is better to follow the style of QueryMode enum by using\ntypedef here, that will make look code in sync with nearby code.\n\n*\n- int i;\n\n fprintf(stderr, \"creating tables...\\n\");\n\n- for (i = 0; i < lengthof(DDLs); i++)\n+ for (int i = 0; i < lengthof(DDLs); i++)\n\nThis is unnecessary change as far as this patch is concerned. I\nunderstand there is no problem in writing either way, but let's not\nchange the coding pattern here as part of this patch.\n\n*\n+ if (partitions >= 1)\n+ {\n+ int64 part_size = (naccounts * (int64) scale + partitions - 1) / partitions;\n+ char ff[64];\n+ ff[0] = '\\0';\n+ append_fillfactor(ff, sizeof(ff));\n+\n+ fprintf(stderr, \"creating %d partitions...\\n\", partitions);\n+\n+ for (int p = 1; p <= partitions; p++)\n+ {\n+ char query[256];\n+\n+ if (partition_method == PART_RANGE)\n+ {\n\npart_size can be defined inside \"if (partition_method == PART_RANGE)\"\nas it is used here. In general, this part of the code can use some\ncomments.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Sep 2019 11:18:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\nThanks for the feedback.\n\n> + case 11: /* partitions */\n> + initialization_option_set = true;\n> + partitions = atoi(optarg);\n> + if (partitions < 0)\n> + {\n> + fprintf(stderr, \"invalid number of partitions: \\\"%s\\\"\\n\",\n> + optarg);\n> + exit(1);\n> + }\n> + break;\n>\n> Is there a reason why we treat \"partitions = 0\" as a valid value?\n\nYes. It is an explicit \"do not create partitioned tables\", which differ \nfrom 1 which says \"create a partitionned table with just one partition\".\n\n> Also, shouldn't we keep some max limit for this parameter as well?\n\nI do not think so. If someone wants to test how terrible it is to use \n100000 partitions, we should not prevent it.\n\n> Forex. how realistic it will be if the user gives the value of\n> partitions the same or greater than the number of rows in\n> pgbench_accounts table?\n\nAlthough I agree that it does not make much sense, for testing purposes \nwhy not, to test overheads in critical cases for instance.\n\n> I understand it is not sensible to give such a value, but I guess the \n> API should behave sanely in such cases as well.\n\nYep, it should work.\n\n> I am not sure what will be the good max value for it, but I\n> think there should be one.\n\nI disagree. Pgbench is a tool for testing performance for given \nparameters. If postgres accepts a parameter there is no reason why pgbench \nshould reject it.\n\n> @@ -3625,6 +3644,7 @@ initCreateTables(PGconn *con)\n> const char *bigcols; /* column decls if accountIDs are 64 bits */\n> int declare_fillfactor;\n> };\n> +\n> static const struct ddlinfo DDLs[] = {\n>\n> Spurious line change.\n\nIndeed.\n\n> *\n> + \" --partitions=NUM partition account table in NUM parts\n> (defaults: 0)\\n\"\n> + \" --partition-method=(range|hash)\\n\"\n> + \" partition account table with this\n> method (default: range)\\n\"\n>\n> Refer complete table name like pgbench_accounts instead of just account. 
\n> It will be clear and in sync with what we display in some other options \n> like --skip-some-updates.\n\nOk.\n\n> *\n> + \" --partitions=NUM partition account table in NUM parts\n> (defaults: 0)\\n\"\n>\n> /defaults/default.\n\nOk.\n\n> I think we should print the information about partitions in\n> printResults. It can help users while analyzing results.\n\nHmmm. Why not, with some hocus-pocus to get the information out of \npg_catalog, and trying to fail gracefully so that if pgbench is run \nagainst a no partitioning-support version.\n\n> *\n> +enum { PART_NONE, PART_RANGE, PART_HASH }\n> + partition_method = PART_NONE;\n> +\n>\n> I think it is better to follow the style of QueryMode enum by using\n> typedef here, that will make look code in sync with nearby code.\n\nHmmm. Why not. This means inventing a used-once type name for \npartition_method. My great creativity lead to partition_method_t.\n\n> *\n> - int i;\n>\n> fprintf(stderr, \"creating tables...\\n\");\n>\n> - for (i = 0; i < lengthof(DDLs); i++)\n> + for (int i = 0; i < lengthof(DDLs); i++)\n>\n> This is unnecessary change as far as this patch is concerned. I\n> understand there is no problem in writing either way, but let's not\n> change the coding pattern here as part of this patch.\n\nThe reason I did that is that I had a stupid bug in a development version \nwhich was due to an accidental reuse of this index, which would have been \nprevented by this declaration style. 
Removed anyway.\n\n> + if (partitions >= 1)\n> + {\n> + int64 part_size = (naccounts * (int64) scale + partitions - 1) / partitions;\n> + char ff[64];\n> + ff[0] = '\\0';\n> + append_fillfactor(ff, sizeof(ff));\n> +\n> + fprintf(stderr, \"creating %d partitions...\\n\", partitions);\n> +\n> + for (int p = 1; p <= partitions; p++)\n> + {\n> + char query[256];\n> +\n> + if (partition_method == PART_RANGE)\n> + {\n>\n> part_size can be defined inside \"if (partition_method == PART_RANGE)\"\n> as it is used here.\n\nI just wanted to avoid recomputing the value in the loop, but indeed it \nmay be computed needlessly. Moved.\n\n> In general, this part of the code can use some comments.\n\nOk.\n\nAttached an updated version.\n\n-- \nFabien.",
"msg_date": "Wed, 11 Sep 2019 14:38:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 6:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Attached an updated version.\nI have reviewed the patch and done some basic testing. It works as\nper the expectation\n\nI have a few cosmetic comments\n\n1.\n+ if (partitions >= 1)\n+ {\n+ char ff[64];\n+ ff[0] = '\\0';\n+ append_fillfactor(ff, sizeof(ff));\n\nGenerally, we give one blank line between the variable declaration and\nthe first statement of the block.\n\n2.\n+ if (p == 1)\n+ sprintf(minvalue, \"minvalue\");\n+ else\n+ sprintf(minvalue, INT64_FORMAT, (p-1) * part_size + 1);\n\n(p-1) -> (p - 1)\n\nI am just wondering will it be a good idea to expand it to support\nmulti-level partitioning?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 09:34:50 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 6:08 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n\nI would like to take inputs from others as well for the display part\nof this patch. After this patch, for a simple-update pgbench test,\nthe changed output will be as follows (note: partition method and\npartitions):\npgbench.exe -c 4 -j 4 -T 10 -N postgres\nstarting vacuum...end.\ntransaction type: <builtin: simple update>\nscaling factor: 1\npartition method: hash\npartitions: 3\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 4\nduration: 10 s\nnumber of transactions actually processed: 14563\nlatency average = 2.749 ms\ntps = 1454.899150 (including connections establishing)\ntps = 1466.689412 (excluding connections establishing)\n\nWhat do others think about this? This will be the case when the user\nhas used --partitions option in pgbench, otherwise, it won't change.\n\n>\n> > + case 11: /* partitions */\n> > + initialization_option_set = true;\n> > + partitions = atoi(optarg);\n> > + if (partitions < 0)\n> > + {\n> > + fprintf(stderr, \"invalid number of partitions: \\\"%s\\\"\\n\",\n> > + optarg);\n> > + exit(1);\n> > + }\n> > + break;\n> >\n> > Is there a reason why we treat \"partitions = 0\" as a valid value?\n>\n> Yes. It is an explicit \"do not create partitioned tables\", which differ\n> from 1 which says \"create a partitionned table with just one partition\".\n>\n\nWhy would anyone want to use --partitions option in the first case\n(\"do not create partitioned tables\")?\n\n>\n> > I think we should print the information about partitions in\n> > printResults. It can help users while analyzing results.\n>\n> Hmmm. 
Why not, with some hocus-pocus to get the information out of\n> pg_catalog, and trying to fail gracefully so that if pgbench is run\n> against a no partitioning-support version.\n>\n\n+ res = PQexec(con,\n+ \"select p.partstrat, count(*) \"\n+ \"from pg_catalog.pg_class as c \"\n+ \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n+ \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n+ \"where c.relname = 'pgbench_accounts' \"\n+ \"group by 1, c.oid\");\n\nCan't we write this query with inner join instead of left join? What\nadditional purpose you are trying to serve by using left join?\n\n> > *\n> > +enum { PART_NONE, PART_RANGE, PART_HASH }\n> > + partition_method = PART_NONE;\n> > +\n> >\n> > I think it is better to follow the style of QueryMode enum by using\n> > typedef here, that will make look code in sync with nearby code.\n>\n> Hmmm. Why not. This means inventing a used-once type name for\n> partition_method. My great creativity lead to partition_method_t.\n>\n\n+partition_method_t partition_method = PART_NONE;\n\nIt is better to make this static.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 12:17:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Dilip,\n\n> Generally, we give one blank line between the variable declaration and\n> the first statement of the block.\n\nOk.\n\n> (p-1) -> (p - 1)\n\nOk.\n\n> I am just wondering will it be a good idea to expand it to support \n> multi-level partitioning?\n\nISTM that how the user could specify multi-level parameters is pretty \nunclear, so I would let that as a possible extension if someone wants it \nenough.\n\nAttached v6 implements the two cosmetic changes outlined above.\n\n-- \nFabien.",
"msg_date": "Fri, 13 Sep 2019 10:05:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 1:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\nThanks for the updated version of the patch.\n\n> > Generally, we give one blank line between the variable declaration and\n> > the first statement of the block.\n>\n> Ok.\n>\n> > (p-1) -> (p - 1)\n>\n> Ok.\n>\n> > I am just wondering will it be a good idea to expand it to support\n> > multi-level partitioning?\n>\n> ISTM that how the user could specify multi-level parameters is pretty\n> unclear, so I would let that as a possible extension if someone wants it\n> enough.\nOk\n>\n> Attached v6 implements the two cosmetic changes outlined above.\n\n+ /* For RANGE, we use open-ended partitions at the beginning and end */\n+ if (p == 1)\n+ sprintf(minvalue, \"minvalue\");\n+ else\n+ sprintf(minvalue, INT64_FORMAT, (p-1) * part_size + 1);\n+\n+ if (p < partitions)\n+ sprintf(maxvalue, INT64_FORMAT, p * part_size + 1);\n+ else\n+ sprintf(maxvalue, \"maxvalue\");\n\nI do not understand the reason why first partition need to be\nopen-ended? Because we are clear that the minimum value of the aid is\n1 in pgbench_accout. So if you directly use\nsprintf(minvalue, INT64_FORMAT, (p-1) * part_size + 1); then also it\nwill give 1 as minvalue for the first partition and that will be the\nright thing to do. Am I missing something here?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 13:47:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Amit,\n\n>>> Is there a reason why we treat \"partitions = 0\" as a valid value?\n>>\n>> Yes. It is an explicit \"do not create partitioned tables\", which differ\n>> from 1 which says \"create a partitionned table with just one partition\".\n>\n> Why would anyone want to use --partitions option in the first case\n> (\"do not create partitioned tables\")?\n\nHaving an explicit value for the default is generally a good idea, eg for \na script to tests various partitioning settings:\n\n for nparts in 0 1 2 3 4 5 6 7 8 9 ; do\n pgbench -i --partitions=$nparts ... ;\n ...\n done\n\nOtherwise you would need significant kludging to add/remove the option.\nAllowing 0 does not harm anyone.\n\nNow if the consensus is to remove an explicit 0, it is simple enough to \nchange it, but my opinion is that it is better to have it.\n\n>>> I think we should print the information about partitions in\n>>> printResults. It can help users while analyzing results.\n>\n> + res = PQexec(con,\n> + \"select p.partstrat, count(*) \"\n> + \"from pg_catalog.pg_class as c \"\n> + \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n> + \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n> + \"where c.relname = 'pgbench_accounts' \"\n> + \"group by 1, c.oid\");\n>\n> Can't we write this query with inner join instead of left join? What\n> additional purpose you are trying to serve by using left join?\n\nI'm ensuring that there is always a one line answer, whether it is \npartitioned or not. Maybe the count(*) should be count(something in p) to \nget 0 instead of 1 on non partitioned tables, though, but this is hidden \nin the display anyway.\n\n> +partition_method_t partition_method = PART_NONE;\n>\n> It is better to make this static.\n\nI do agree, but this would depart from all other global variables around \nwhich are currently not static. 
Maybe a separate patch could turn them all \nas static, but ISTM that this patch should not change the current style.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 13 Sep 2019 10:20:18 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Dilip,\n\n> + /* For RANGE, we use open-ended partitions at the beginning and end */\n> + if (p == 1)\n> + sprintf(minvalue, \"minvalue\");\n> + else\n> + sprintf(minvalue, INT64_FORMAT, (p-1) * part_size + 1);\n> +\n> + if (p < partitions)\n> + sprintf(maxvalue, INT64_FORMAT, p * part_size + 1);\n> + else\n> + sprintf(maxvalue, \"maxvalue\");\n>\n> I do not understand the reason why first partition need to be \n> open-ended? Because we are clear that the minimum value of the aid is 1 \n> in pgbench_accout. So if you directly use sprintf(minvalue, \n> INT64_FORMAT, (p-1) * part_size + 1); then also it will give 1 as \n> minvalue for the first partition and that will be the right thing to do. \n> Am I missing something here?\n\nThis is simply for the principle that any value allowed for the primary \nkey type has a corresponding partition, and also that it exercices these \nspecial values.\n\nIt also probably reduces the cost of checking whether a value belongs to \nthe first partition because one test is removed, so there is a small \nadditional performance benefit beyond principle and coverage.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 13 Sep 2019 10:35:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 2:05 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Dilip,\n>\n> > + /* For RANGE, we use open-ended partitions at the beginning and end */\n> > + if (p == 1)\n> > + sprintf(minvalue, \"minvalue\");\n> > + else\n> > + sprintf(minvalue, INT64_FORMAT, (p-1) * part_size + 1);\n> > +\n> > + if (p < partitions)\n> > + sprintf(maxvalue, INT64_FORMAT, p * part_size + 1);\n> > + else\n> > + sprintf(maxvalue, \"maxvalue\");\n> >\n> > I do not understand the reason why first partition need to be\n> > open-ended? Because we are clear that the minimum value of the aid is 1\n> > in pgbench_accout. So if you directly use sprintf(minvalue,\n> > INT64_FORMAT, (p-1) * part_size + 1); then also it will give 1 as\n> > minvalue for the first partition and that will be the right thing to do.\n> > Am I missing something here?\n\n>\n> This is simply for the principle that any value allowed for the primary\n> key type has a corresponding partition, and also that it exercices these\n> special values.\n\nIMHO, the primary key values for the pgbench_accout tables are always\nwithin the defined range minvalue=1 and maxvalue=scale*100000, right?\n>\n> It also probably reduces the cost of checking whether a value belongs to\n> the first partition because one test is removed, so there is a small\n> additional performance benefit beyond principle and coverage.\n\nOk, I agree that it will slightly reduce the cost for the tuple\nfalling in the first and the last partition.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 14:30:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 1:50 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >>> Is there a reason why we treat \"partitions = 0\" as a valid value?\n> >>\n> >> Yes. It is an explicit \"do not create partitioned tables\", which differ\n> >> from 1 which says \"create a partitionned table with just one partition\".\n> >\n> > Why would anyone want to use --partitions option in the first case\n> > (\"do not create partitioned tables\")?\n>\n> Having an explicit value for the default is generally a good idea, eg for\n> a script to tests various partitioning settings:\n>\n> for nparts in 0 1 2 3 4 5 6 7 8 9 ; do\n> pgbench -i --partitions=$nparts ... ;\n> ...\n> done\n>\n> Otherwise you would need significant kludging to add/remove the option.\n> Allowing 0 does not harm anyone.\n>\n> Now if the consensus is to remove an explicit 0, it is simple enough to\n> change it, but my opinion is that it is better to have it.\n>\n\nFair enough, let us see if anyone else wants to weigh in.\n\n> >>> I think we should print the information about partitions in\n> >>> printResults. It can help users while analyzing results.\n> >\n> > + res = PQexec(con,\n> > + \"select p.partstrat, count(*) \"\n> > + \"from pg_catalog.pg_class as c \"\n> > + \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n> > + \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n> > + \"where c.relname = 'pgbench_accounts' \"\n> > + \"group by 1, c.oid\");\n> >\n> > Can't we write this query with inner join instead of left join? What\n> > additional purpose you are trying to serve by using left join?\n>\n> I'm ensuring that there is always a one line answer, whether it is\n> partitioned or not. Maybe the count(*) should be count(something in p) to\n> get 0 instead of 1 on non partitioned tables, though, but this is hidden\n> in the display anyway.\n>\n\nSure, but I feel the code will be simplified. 
I see no reason for\nusing left join here.\n\n> > +partition_method_t partition_method = PART_NONE;\n> >\n> > It is better to make this static.\n>\n> I do agree, but this would depart from all other global variables around\n> which are currently not static.\n>\n\nCheck QueryMode.\n\n> Maybe a separate patch could turn them all\n> as static, but ISTM that this patch should not change the current style.\n>\n\nNo need to change others, but we can do it for this one.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 15:37:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On 2019-Sep-13, Amit Kapila wrote:\n\n> On Fri, Sep 13, 2019 at 1:50 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > >>> Is there a reason why we treat \"partitions = 0\" as a valid value?\n> > >>\n> > >> Yes. It is an explicit \"do not create partitioned tables\", which differ\n> > >> from 1 which says \"create a partitionned table with just one partition\".\n> > >\n> > > Why would anyone want to use --partitions option in the first case\n> > > (\"do not create partitioned tables\")?\n> >\n> > Having an explicit value for the default is generally a good idea, eg for\n> > a script to tests various partitioning settings:\n> >\n> > for nparts in 0 1 2 3 4 5 6 7 8 9 ; do\n> > pgbench -i --partitions=$nparts ... ;\n> > ...\n> > done\n> >\n> > Otherwise you would need significant kludging to add/remove the option.\n> > Allowing 0 does not harm anyone.\n> >\n> > Now if the consensus is to remove an explicit 0, it is simple enough to\n> > change it, but my opinion is that it is better to have it.\n> \n> Fair enough, let us see if anyone else wants to weigh in.\n\nIt seems convenient UI -- I vote to keep it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Sep 2019 09:55:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On 2019-Sep-13, Amit Kapila wrote:\n\n> I would like to take inputs from others as well for the display part\n> of this patch. After this patch, for a simple-update pgbench test,\n> the changed output will be as follows (note: partition method and\n> partitions):\n\n> pgbench.exe -c 4 -j 4 -T 10 -N postgres\n> starting vacuum...end.\n> transaction type: <builtin: simple update>\n> scaling factor: 1\n> partition method: hash\n> partitions: 3\n> query mode: simple\n> number of clients: 4\n> number of threads: 4\n> duration: 10 s\n> number of transactions actually processed: 14563\n> latency average = 2.749 ms\n> tps = 1454.899150 (including connections establishing)\n> tps = 1466.689412 (excluding connections establishing)\n> \n> What do others think about this? This will be the case when the user\n> has used --partitions option in pgbench, otherwise, it won't change.\n\nI wonder what's the intended usage of this output ... it seems to be\ngetting a bit too long. Is this intended for machine processing? I\nwould rather have more things per line in a more compact header.\nBut then I'm not the kind of person who automates multiple pgbench runs.\nMaybe we can get some input from Tomas, who does -- how do you automate\nextracting data from collected pgbench output, or do you instead just\nredirect the output to a file whose path/name indicates the parameters\nthat were used? 
(I do the latter.)\n\nI mean, if we changed it like this (and I'm not proposing to do it in\nthis patch, this is only an example), would it bother anyone?\n\n$ pgbench -x -y -z ...\nstarting vacuum...end.\nscaling factor: 1 partition method: hash partitions: 1\ntransaction type: <builtin: simple update> query mode: simple\nnumber of clients: 4 number of threads: 4 duration: 10s\nnumber of transactions actually processed: 14563\nlatency average = 2.749 ms\ntps = 1454.899150 (including connections establishing)\ntps = 1466.689412 (excluding connections establishing)\n\n\nIf this output doesn't bother people, then I suggest that this patch\nshould put the partition information in the line together with scaling\nfactor.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Sep 2019 10:05:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
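[Editorial note: the message above asks how people machine-process pgbench output. A minimal sketch of such post-processing, assuming the line formats shown in the quoted run (key names and the regexes are illustrative, not a stable pgbench contract):]

```python
import re

def parse_pgbench_output(text):
    """Parse 'key: value' and 'tps = ...' lines from a pgbench run
    into a flat dict. A rough sketch; output wording varies by version."""
    result = {}
    for line in text.splitlines():
        # lines like "partitions: 3" or "partition method: hash"
        m = re.match(r'^([a-z ]+): (.+)$', line)
        if m:
            result[m.group(1).strip()] = m.group(2).strip()
            continue
        # lines like "tps = 1454.899150 (including connections establishing)"
        m = re.match(r'^tps = ([0-9.]+) \((including|excluding) connections establishing\)$', line)
        if m:
            result['tps_' + m.group(2)] = float(m.group(1))
    return result

sample = """\
transaction type: <builtin: simple update>
scaling factor: 1
partition method: hash
partitions: 3
query mode: simple
tps = 1454.899150 (including connections establishing)
tps = 1466.689412 (excluding connections establishing)
"""
info = parse_pgbench_output(sample)
```

With a parser like this, the one-parameter-per-line layout is no harder to consume than a compact multi-field header, which may be why the thread leans toward keeping separate lines.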
{
"msg_contents": "Hello Amit,\n\n>>> + res = PQexec(con,\n>>> + \"select p.partstrat, count(*) \"\n>>> + \"from pg_catalog.pg_class as c \"\n>>> + \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n>>> + \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n>>> + \"where c.relname = 'pgbench_accounts' \"\n>>> + \"group by 1, c.oid\");\n>>>\n>>> Can't we write this query with inner join instead of left join? What\n>>> additional purpose you are trying to serve by using left join?\n>>\n>> I'm ensuring that there is always a one line answer, whether it is\n>> partitioned or not. Maybe the count(*) should be count(something in p) to\n>> get 0 instead of 1 on non partitioned tables, though, but this is hidden\n>> in the display anyway.\n>\n> Sure, but I feel the code will be simplified. I see no reason for\n> using left join here.\n\nWithout a left join, the query result is empty if there are no partitions, \nwhereas there is one line with it. This fact simplifies managing the query \nresult afterwards because we are always expecting 1 row in the \"normal\" \ncase, whether partitioned or not.\n\n>>> +partition_method_t partition_method = PART_NONE;\n>>>\n>>> It is better to make this static.\n>>\n>> I do agree, but this would depart from all other global variables around\n>> which are currently not static.\n>\n> Check QueryMode.\n\nIndeed, there is a mix of static (about 8) and non static (29 cases). I \nthink static is better anyway, so why not.\n\nAttached a v7.\n\n-- \nFabien.",
"msg_date": "Fri, 13 Sep 2019 19:35:57 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
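[Editorial note: the disagreement above is about result shape, LEFT JOIN always returning one row versus INNER JOIN returning zero rows for an unpartitioned table. The behavior can be reproduced with toy stand-ins for the catalogs using Python's bundled sqlite3; the table names mimic pg_class/pg_partitioned_table but this is not the real PostgreSQL catalog query:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- toy stand-ins for the PostgreSQL catalogs
    CREATE TABLE pg_class(oid INTEGER, relname TEXT);
    CREATE TABLE pg_partitioned_table(partrelid INTEGER, partstrat TEXT);
    INSERT INTO pg_class VALUES (1, 'pgbench_accounts');  -- exists, not partitioned
""")

# inner join: no matching pg_partitioned_table row -> empty result
inner = con.execute("""
    SELECT p.partstrat FROM pg_class c
    JOIN pg_partitioned_table p ON p.partrelid = c.oid
    WHERE c.relname = 'pgbench_accounts'
""").fetchall()

# left join: one row with NULL partstrat -> "found, but not partitioned"
left = con.execute("""
    SELECT p.partstrat FROM pg_class c
    LEFT JOIN pg_partitioned_table p ON p.partrelid = c.oid
    WHERE c.relname = 'pgbench_accounts'
""").fetchall()
```

The left join thus lets the caller distinguish "table missing" (zero rows) from "table present but unpartitioned" (one row with NULL), which is the property Fabien is defending.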
{
"msg_contents": "On Fri, Sep 13, 2019 at 6:36 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-13, Amit Kapila wrote:\n>\n> > I would like to take inputs from others as well for the display part\n> > of this patch. After this patch, for a simple-update pgbench test,\n> > the changed output will be as follows (note: partition method and\n> > partitions):\n>\n> > pgbench.exe -c 4 -j 4 -T 10 -N postgres\n> > starting vacuum...end.\n> > transaction type: <builtin: simple update>\n> > scaling factor: 1\n> > partition method: hash\n> > partitions: 3\n> > query mode: simple\n> > number of clients: 4\n> > number of threads: 4\n> > duration: 10 s\n> > number of transactions actually processed: 14563\n> > latency average = 2.749 ms\n> > tps = 1454.899150 (including connections establishing)\n> > tps = 1466.689412 (excluding connections establishing)\n> >\n> > What do others think about this? This will be the case when the user\n> > has used --partitions option in pgbench, otherwise, it won't change.\n>\n> I wonder what's the intended usage of this output ... it seems to be\n> getting a bit too long. Is this intended for machine processing? I\n> would rather have more things per line in a more compact header.\n> But then I'm not the kind of person who automates multiple pgbench runs.\n> Maybe we can get some input from Tomas, who does -- how do you automate\n> extracting data from collected pgbench output, or do you instead just\n> redirect the output to a file whose path/name indicates the parameters\n> that were used? 
(I do the latter.)\n>\n> I mean, if we changed it like this (and I'm not proposing to do it in\n> this patch, this is only an example), would it bother anyone?\n>\n> $ pgbench -x -y -z ...\n> starting vacuum...end.\n> scaling factor: 1 partition method: hash partitions: 1\n> transaction type: <builtin: simple update> query mode: simple\n> number of clients: 4 number of threads: 4 duration: 10s\n> number of transactions actually processed: 14563\n> latency average = 2.749 ms\n> tps = 1454.899150 (including connections establishing)\n> tps = 1466.689412 (excluding connections establishing)\n>\n>\n> If this output doesn't bother people, then I suggest that this patch\n> should put the partition information in the line together with scaling\n> factor.\n>\n\nIIUC, there are two things here (a) you seem to be fine displaying\n'partitions' and 'partition method' information, (b) you would prefer\nto put it along with 'scaling factor' line.\n\nI personally prefer each parameter to be displayed in a separate line,\nbut I am fine if more people would like to see the 'multiple\nparameters information in a single line'. I think it is better to\nthat (point (b)) as a separate patch even if we agree on changing the\ndisplay format.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 14 Sep 2019 10:26:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 11:06 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> Hello Amit,\n>\n> >>> + res = PQexec(con,\n> >>> + \"select p.partstrat, count(*) \"\n> >>> + \"from pg_catalog.pg_class as c \"\n> >>> + \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n> >>> + \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n> >>> + \"where c.relname = 'pgbench_accounts' \"\n> >>> + \"group by 1, c.oid\");\n> >>>\n> >>> Can't we write this query with inner join instead of left join? What\n> >>> additional purpose you are trying to serve by using left join?\n> >>\n> >> I'm ensuring that there is always a one line answer, whether it is\n> >> partitioned or not. Maybe the count(*) should be count(something in p) to\n> >> get 0 instead of 1 on non partitioned tables, though, but this is hidden\n> >> in the display anyway.\n> >\n> > Sure, but I feel the code will be simplified. I see no reason for\n> > using left join here.\n>\n> Without a left join, the query result is empty if there are no partitions,\n> whereas there is one line with it. This fact simplifies managing the query\n> result afterwards because we are always expecting 1 row in the \"normal\"\n> case, whether partitioned or not.\n>\n\nWhy can't we change it as attached? I find using left join to always\nget one row as an ugly way to manipulate the results later. We\nshouldn't go in that direction unless we can't handle this with some\nsimple code.\n\nSome more comments:\n*\n- '--initialize --init-steps=dtpvg --scale=1 --unlogged-tables\n--fillfactor=98 --foreign-keys --quiet --tablespace=pg_default\n--index-tablespace=pg_default',\n+ '--initialize --init-steps=dtpvg --scale=1 --unlogged-tables\n--fillfactor=98 --foreign-keys --quiet\n--tablespace=regress_pgbench_tap_1_ts\n--index-tablespace=regress_pgbench_tap_1_ts --partitions=2\n--partition-method=hash',\n\nWhat is the need of using regress_pgbench_tap_1_ts in this test? 
I\nthink we don't need to change existing tests unless required for the\nnew functionality.\n\n*\n- 'pgbench scale 1 initialization');\n+ 'pgbench scale 1 initialization with options');\n\nSimilar to the above, it is not clear to me why we need to change this?\n\n*pgbench(\n-\n # given the expected rate and the 2 ms tx duration, at most one is executed\n '-t 10 --rate=100000 --latency-limit=1 -n -r',\n 0,\n\nThe above appears to be a spurious line change.\n\n* I think we need to change the docs [1] to indicate the new step for\npartitioning. See section --init-steps=init_steps\n\n[1] - https://www.postgresql.org/docs/devel/pgbench.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 14 Sep 2019 17:13:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>>>> I'm ensuring that there is always a one line answer, whether it is\n>>>> partitioned or not. Maybe the count(*) should be count(something in p) to\n>>>> get 0 instead of 1 on non partitioned tables, though, but this is hidden\n>>>> in the display anyway.\n>>>\n>>> Sure, but I feel the code will be simplified. I see no reason for\n>>> using left join here.\n>>\n>> Without a left join, the query result is empty if there are no partitions,\n>> whereas there is one line with it. This fact simplifies managing the query\n>> result afterwards because we are always expecting 1 row in the \"normal\"\n>> case, whether partitioned or not.\n>\n> Why can't we change it as attached?\n\nI think that your version works, but I do not like much the condition for \nthe normal case which is implicitely assumed. The solution I took has 3 \nclear-cut cases: 1 error against a server without partition support, \ndetect multiple pgbench_accounts table -- argh, and then the normal \nexpected case, whether partitioned or not. Your solution has 4 cases \nbecause of the last implicit zero-row select that relies on default, which \nwould need some explanations.\n\n> I find using left join to always get one row as an ugly way to \n> manipulate the results later.\n\nHmmm. It is really a matter of taste. I do not share your distate for left \njoin on principle. In the case at hand, I find that getting one row in all \ncases pretty elegant because there is just one code for handling them all.\n\n> We shouldn't go in that direction unless we can't handle this with some \n> simple code.\n\nHmmm. Left join does not strike me as over complex code. 
I wish my student \nwould remember that this thing exists:-)\n\n> What is the need of using regress_pgbench_tap_1_ts in this test?\n\nI wanted to check that tablespace options work appropriately with \npartition tables, as I changed the create table stuff significantly, and \njust using \"pg_default\" is kind of cheating.\n\n> I think we don't need to change existing tests unless required for the \n> new functionality.\n\nI do agree, but there was a motivation behind the addition.\n\n> *\n> - 'pgbench scale 1 initialization');\n> + 'pgbench scale 1 initialization with options');\n>\n> Similar to the above, it is not clear to me why we need to change this?\n\nBecause I noticed that it had the same description as the previous one, so \nI made the test name distinct and more precise, while I was adding options \non it.\n\n> *pgbench(\n> -\n> # given the expected rate and the 2 ms tx duration, at most one is executed\n> '-t 10 --rate=100000 --latency-limit=1 -n -r',\n> 0,\n>\n> The above appears to be a spurious line change.\n\nIndeed. I think that this empty line is a typo, but I can let it as it is.\n\n> * I think we need to change the docs [1] to indicate the new step for\n> partitioning. See section --init-steps=init_steps\n>\n> [1] - https://www.postgresql.org/docs/devel/pgbench.html\n\nThe partitioned table generation is integrated into the existing create \ntable step, it is not a separate step because I cannot see an interest to \ndo something in between the table creations.\n\nPatch v8 attached adds some comments around partition detection, ensures \nthat 0 is returned for the no partition case and let the spurious empty \nline where it is.\n\n-- \nFabien.",
"msg_date": "Sat, 14 Sep 2019 15:05:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 6:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >>>> I'm ensuring that there is always a one line answer, whether it is\n> >>>> partitioned or not. Maybe the count(*) should be count(something in p) to\n> >>>> get 0 instead of 1 on non partitioned tables, though, but this is hidden\n> >>>> in the display anyway.\n> >>>\n> >>> Sure, but I feel the code will be simplified. I see no reason for\n> >>> using left join here.\n> >>\n> >> Without a left join, the query result is empty if there are no partitions,\n> >> whereas there is one line with it. This fact simplifies managing the query\n> >> result afterwards because we are always expecting 1 row in the \"normal\"\n> >> case, whether partitioned or not.\n> >\n> > Why can't we change it as attached?\n>\n> I think that your version works, but I do not like much the condition for\n> the normal case which is implicitely assumed. The solution I took has 3\n> clear-cut cases: 1 error against a server without partition support,\n> detect multiple pgbench_accounts table -- argh, and then the normal\n> expected case, whether partitioned or not. Your solution has 4 cases\n> because of the last implicit zero-row select that relies on default, which\n> would need some explanations.\n>\n\nWhy? Here, we are fetching the partitioning information. If it\nexists, then we remember that to display for later, otherwise, the\ndefault should apply.\n\n> > I find using left join to always get one row as an ugly way to\n> > manipulate the results later.\n>\n> Hmmm. It is really a matter of taste. I do not share your distate for left\n> join on principle.\n>\n\nOh no, I am not generally against using left join, but here it appears\nlike using it without much need. 
If nothing else, it will consume\nmore cycles to fetch one extra row when we can avoid it.\n\nIrrespective of whether we use left join or not, I think the below\nchange from my patch is important.\n- /* only print partitioning information if some partitioning was detected */\n- if (partition_method != PART_NONE && partition_method != PART_UNKNOWN)\n+ /* print partitioning information only if there exists any partition */\n+ if (partitions > 0)\n\nBasically, it would be good if we just rely on 'partitions' to decide\nwhether we have partitions or not.\n\n> In the case at hand, I find that getting one row in all\n> cases pretty elegant because there is just one code for handling them all.\n>\n\nHmm, I would be fine if you can show some other place in code where\nsuch a method is used or if someone else also shares your viewpoint.\n\n>\n> > What is the need of using regress_pgbench_tap_1_ts in this test?\n>\n> I wanted to check that tablespace options work appropriately with\n> partition tables, as I changed the create table stuff significantly, and\n> just using \"pg_default\" is kind of cheating.\n>\n\nI think your change will be tested if there is a '--tablespace'\noption. Even if you want to test win non-default tablespace, then\nalso, adding additional test would make more sense rather than\nchanging existing one which is testing a valid thing. Also, there is\nan existing way to create tablespace location in\n\"src/bin/pg_checksums/t/002_actions\". I think we can use the same. I\ndon't find any problem with your way, but why having multiple ways of\ndoing same thing in code. 
We need to test this on windows also once\nas this involves some path creation which might vary, although I don't\nthink there should be any problem in that especially if we use\nexisting way.\n\n> > I think we don't need to change existing tests unless required for the\n> > new functionality.\n>\n> I do agree, but there was a motivation behind the addition.\n>\n> > *\n> > - 'pgbench scale 1 initialization');\n> > + 'pgbench scale 1 initialization with options');\n> >\n> > Similar to the above, it is not clear to me why we need to change this?\n>\n> Because I noticed that it had the same description as the previous one, so\n> I made the test name distinct and more precise, while I was adding options\n> on it.\n>\n\nGood observation, but better be done separately. I think in general\nthe more unrelated changes are present in patch, the more time it\ntakes to review.\n\nOne more comment:\n+typedef enum { PART_NONE, PART_RANGE, PART_HASH, PART_UNKNOWN }\n+ partition_method_t;\n\nSee, if we can eliminate PART_UNKNOWN. I don't see much use of same.\nIt is used at one place where we can set PART_NONE without much loss.\nHaving lesser invalid values makes code easier to follow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Sep 2019 16:24:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
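[Editorial note: Amit's suggestion above is to gate the report on `partitions > 0` alone rather than on the method enum. A small Python sketch of that display decision; the names mirror the patch's C identifiers but the helper itself is hypothetical:]

```python
# stand-in for the patch's partition_method_t enum
PART_NONE, PART_RANGE, PART_HASH = range(3)

def partition_report(partition_method, partitions):
    """Return the extra report lines, printing partition info only when
    at least one partition was detected (the condition Amit proposes)."""
    if partitions <= 0:
        return []
    method = {PART_RANGE: "range", PART_HASH: "hash"}.get(partition_method, "none")
    return [f"partition method: {method}", f"partitions: {partitions}"]
```

Keying on the partition count removes the need to special-case both PART_NONE and PART_UNKNOWN in the display path.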
{
"msg_contents": "On Tue, Sep 17, 2019 at 4:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Sep 14, 2019 at 6:35 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> One more comment:\n> +typedef enum { PART_NONE, PART_RANGE, PART_HASH, PART_UNKNOWN }\n> + partition_method_t;\n>\n> See, if we can eliminate PART_UNKNOWN. I don't see much use of same.\n> It is used at one place where we can set PART_NONE without much loss.\n> Having lesser invalid values makes code easier to follow.\n>\n\nLooking more closely at this case:\n+ else if (PQntuples(res) != 1)\n+ {\n+ /* unsure because multiple (or no) pgbench_accounts found */\n+ partition_method = PART_UNKNOWN;\n+ partitions = 0;\n+ }\n\nIs it ever possible to have multiple pgbench_accounts considering we\nhave unique index on (relname, relnamespace) for pg_class?\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Sep 2019 17:46:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Amit,\n\n>> One more comment:\n>> +typedef enum { PART_NONE, PART_RANGE, PART_HASH, PART_UNKNOWN }\n>> + partition_method_t;\n>>\n>> See, if we can eliminate PART_UNKNOWN.\n\nI'm not very happy with this one, but I wanted to differentiate \"we do \nknow that it is not partitioned\" from \"we do not know if it is \npartitioned\", and I did not have a better idea.\n\n> I don't see much use of same.\n\nAlthough it is not used afterwards, we could display the partitioning \ninformation differently between the two cases. This is not done because I \ndid not want to add more lines on the \"normal\" case.\n\n>> It is used at one place where we can set PART_NONE without much loss.\n>> Having lesser invalid values makes code easier to follow.\n>\n> Looking more closely at this case:\n> + else if (PQntuples(res) != 1)\n> + {\n> + /* unsure because multiple (or no) pgbench_accounts found */\n> + partition_method = PART_UNKNOWN;\n> + partitions = 0;\n> + }\n>\n> Is it ever possible to have multiple pgbench_accounts considering we\n> have unique index on (relname, relnamespace) for pg_class?\n\nThe issue is that it is not directly obvious which relnamespace will be \nused by the queries which rely on non schema qualified \"pgbench_accounts\". \nEach schema could theoretically hold a pgbench_accounts table. As this is \npretty unlikely, I did not attempt to add complexity to resolve taking \ninto account the search_path, but just skipped to unknown in this case, \nwhich I expect nobody would hit in normal circumstances.\n\nAnother possible and unlikely issue is that pgbench_accounts could have \nbeen deleted but not pgbench_branches which is used earlier to get the \ncurrent \"scale\". If so, the queries will fail later on anyway.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 17 Sep 2019 15:07:53 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>>> Why can't we change it as attached?\n>>\n>> I think that your version works, but I do not like much the condition for\n>> the normal case which is implicitely assumed. The solution I took has 3\n>> clear-cut cases: 1 error against a server without partition support,\n>> detect multiple pgbench_accounts table -- argh, and then the normal\n>> expected case, whether partitioned or not. Your solution has 4 cases\n>> because of the last implicit zero-row select that relies on default, which\n>> would need some explanations.\n>\n> Why?\n\nHmmm. This is a coding-philosophy question:-)\n\nTo be nice to the code reader?\n\nYou have several if cases, but the last one is to keep the default *which \nmeans something*. ISTM that the default is kept in two cases: when there \nis a pgbench_accounts without partitioning, and when no pgbench_accounts \nwas found, in which case the defaults are plain false. I could be okay of \nthe default say \"we do not know\", but for me having all cases explicitely \ncovered in one place helps understand the behavior of a code.\n\n> Here, we are fetching the partitioning information. If it exists, then \n> we remember that to display for later, otherwise, the default should \n> apply.\n\nYep, but the default is also kept if nothing is found, whereas the left \njoin solution would give one row when found and empty when not found, \nwhich for me are quite distinct cases.\n\n> Oh no, I am not generally against using left join, but here it appears\n> like using it without much need. 
If nothing else, it will consume\n> more cycles to fetch one extra row when we can avoid it.\n\nAs pointed out, the left join allows to distinguish \"not found\" from \"not \npartitioned\" logically, even if no explicit use of that is done \nafterwards.\n\n> Irrespective of whether we use left join or not, I think the below\n> change from my patch is important.\n> - /* only print partitioning information if some partitioning was detected */\n> - if (partition_method != PART_NONE && partition_method != PART_UNKNOWN)\n> + /* print partitioning information only if there exists any partition */\n> + if (partitions > 0)\n>\n> Basically, it would be good if we just rely on 'partitions' to decide\n> whether we have partitions or not.\n\nCould be, although I was thinking of telling the user that we do not know \non unknown. I'll think about this one.\n\n>> In the case at hand, I find that getting one row in all cases pretty \n>> elegant because there is just one code for handling them all.\n>\n> Hmm, I would be fine if you can show some other place in code where\n> such a method is used\n\nNo problem:-) Although there are no other catalog queries in \"pgbench\", \nthere are plenty in \"psql\" and \"pg_dump\", and also in some other commands, \nand they often rely on \"LEFT\" joins:\n\n sh> grep LEFT src/bin/psql/*.c | wc -l # 58\n sh> grep LEFT src/bin/pg_dump/*.c | wc -l # 54\n\nNote that there are no \"RIGHT\" nor \"FULL\" joins…\n\n>>> What is the need of using regress_pgbench_tap_1_ts in this test?\n>>\n>> I wanted to check that tablespace options work appropriately with\n>> partition tables, as I changed the create table stuff significantly, and\n>> just using \"pg_default\" is kind of cheating.\n>\n> I think your change will be tested if there is a '--tablespace'\n> option.\n\nYes. 
There is just one, really.\n\n> Even if you want to test win non-default tablespace, then also, adding \n> additional test would make more sense rather than changing existing one \n> which is testing a valid thing.\n\nTom tends to think that there are already too many tests, so I try to keep \nthem as compact/combined as possible. Moreover, the spirit of this test is \nto cover \"all possible options\", so it made also sense to add the new \noptions there, and it achieves both coverage and testing my changes with \nan explicit tablespace.\n\n> Also, there is an existing way to create tablespace location in \n> \"src/bin/pg_checksums/t/002_actions\". I think we can use the same. I \n> don't find any problem with your way, but why having multiple ways of \n> doing same thing in code. We need to test this on windows also once as \n> this involves some path creation which might vary, although I don't \n> think there should be any problem in that especially if we use existing \n> way.\n\nOk, I'll look at the pg_checksums way to do that.\n\n>>> - 'pgbench scale 1 initialization');\n>>> + 'pgbench scale 1 initialization with options');\n>>>\n>>> Similar to the above, it is not clear to me why we need to change this?\n>>\n>> Because I noticed that it had the same description as the previous one, \n>> so I made the test name distinct and more precise, while I was adding \n>> options on it.\n\nHmmm. Keeping the same name is really a copy paste error, and I wanted to \navoid a distinct commit for more than very minor thing.\n\n> Good observation, but better be done separately. I think in general\n> the more unrelated changes are present in patch, the more time it\n> takes to review.\n\nThen let's keep the same name.\n\n> One more comment:\n> +typedef enum { PART_NONE, PART_RANGE, PART_HASH, PART_UNKNOWN }\n> + partition_method_t;\n>\n> See, if we can eliminate PART_UNKNOWN. 
I don't see much use of same.\n> It is used at one place where we can set PART_NONE without much loss.\n> Having lesser invalid values makes code easier to follow.\n\nDiscussed in other mail.\n\n-- \nFabien.",
"msg_date": "Tue, 17 Sep 2019 15:33:34 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 6:38 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> >> It is used at one place where we can set PART_NONE without much loss.\n> >> Having lesser invalid values makes code easier to follow.\n> >\n> > Looking more closely at this case:\n> > + else if (PQntuples(res) != 1)\n> > + {\n> > + /* unsure because multiple (or no) pgbench_accounts found */\n> > + partition_method = PART_UNKNOWN;\n> > + partitions = 0;\n> > + }\n> >\n> > Is it ever possible to have multiple pgbench_accounts considering we\n> > have unique index on (relname, relnamespace) for pg_class?\n>\n> The issue is that it is not directly obvious which relnamespace will be\n> used by the queries which rely on non schema qualified \"pgbench_accounts\".\n>\n\nIt seems to me the patch already uses namespace in the query, so this\nshould not be a problem here. The part of query is as below:\n+ res = PQexec(con,\n+ \"select p.partstrat, count(p.partrelid) \"\n+ \"from pg_catalog.pg_class as c \"\n\nThis uses pg_catalog, so it should not have multiple entries for\n\"pgbench_accounts\".\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Sep 2019 20:08:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Attached v9:\n\n - remove the PART_UNKNOWN and use partitions = -1 to tell\n that there is an error, and partitions >= 1 to print info\n - use search_path to find at most one pgbench_accounts\n It still uses left join because I still think that it is appropriate.\n I added a lateral to avoid repeating the array_position call\n to manage the search_path, and use explicit pg_catalog everywhere.\n - let the wrongly repeated test name as is\n - somehow use pg_checksums tablespace creation method, however:\n - I kept testing that mkdir succeeds\n - I kept escaping single quotes, if the path contains a \"'\"\n so the only difference is that on some msys platform it may\n avoid some unclear issue.\n\n-- \nFabien.",
"msg_date": "Tue, 17 Sep 2019 20:49:12 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
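[Editorial note: the v9 query above uses `array_position(current_schemas(true), n.nspname)` with `order by 1 asc` to keep only the pgbench_accounts visible first in the search_path. The selection rule can be sketched in Python; this is an illustration of the ordering logic, not the actual catalog access:]

```python
def resolve_relation(search_path, tables_by_schema, relname):
    """Among all schemas that contain `relname`, return the one earliest
    in `search_path` (None if the relation is not visible at all),
    mimicking array_position(current_schemas(true), nspname)."""
    candidates = [
        (search_path.index(schema), schema)
        for schema, tables in tables_by_schema.items()
        if relname in tables and schema in search_path  # o.n is not null
    ]
    if not candidates:
        return None
    return min(candidates)[1]  # order by position asc, take the first
```

Even if two schemas each hold a pgbench_accounts, this resolves to the same table the unqualified queries will use, which answers the "multiple pgbench_accounts" concern raised earlier.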
{
"msg_contents": "On 2019-09-17 20:49, Fabien COELHO wrote:\n> Attached v9:\n> \n> [pgbench-init-partitioned-9.patch]\n\nTurns out this patch needed a dos2unix treatment.\n\nIt's easy to do but it takes time to figure it out (I'm dumb). I for \none would be happy to receive patches not so encumbered :)\n\n\nthanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Tue, 17 Sep 2019 21:54:54 +0200",
"msg_from": "Erikjan Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Erikjan,\n\n>> [pgbench-init-partitioned-9.patch]\n>\n> Turns out this patch needed a dos2unix treatment.\n\n> It's easy to do but it takes time to figure it out (I'm dumb). I for one \n> would be happy to receive patches not so encumbered :)\n\nAFAICR this is usually because your mailer does not conform to MIME spec, \nwhich *requires* that text files be sent over with \\r\\n terminations, so \nmy mailer does it for text/x-diff, and your mailer should translate back \nEOL for your platform, but it does not, so you have to do it manually.\n\nI could edit my /etc/mime.types file to switch patch files to some binary \nmime type, but it may have side effects on my system, so I refrained.\n\nHoping that mailer writers read and conform to MIME seems desperate.\n\nLast time this discussion occured there was no obvious solution beside me \nswitching to another bug-compatible mailer, but this is not really \nconvenient for me. ISTM that the \"patch\" command accepts these files with \nwarnings.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 18 Sep 2019 07:01:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hi Fabien,\n\nOn Wed, Sep 18, 2019 at 3:49 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Attached v9:\n\nThanks. This seems to work well.\n\nCouple of nitpicks on parameter error messages.\n\n+ fprintf(stderr, \"invalid partition type,\nexpecting \\\"range\\\" or \\\"hash\\\",\"\n\nHow about \"partitioning method\" instead of \"partition type\"?\n\n+ fprintf(stderr, \"--partition-method requires actual\npartitioning with --partitions\\n\");\n\nAssuming that this error message is to direct the user to fix a\nmistake they might have inadvertently made in specifying --partitions,\nI don't think the message is very clear. How about:\n\n\"--partition-method requires --partitions to be greater than zero\"\n\nbut this wording might suggest to some users that some partitioning\nmethods do allow zero partitions. So, maybe:\n\n\"specifying --partition-method requires --partitions to be greater than zero\"\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 18 Sep 2019 15:46:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> + fprintf(stderr, \"invalid partition type,\n> expecting \\\"range\\\" or \\\"hash\\\",\"\n>\n> How about \"partitioning method\" instead of \"partition type\"?\n\nIndeed, this is a leftover from a previous version.\n\n> + fprintf(stderr, \"--partition-method requires actual\n> partitioning with --partitions\\n\");\n>\n> [...] \"--partition-method requires --partitions to be greater than zero\"\n\n\nI think the first suggestion is clear enough. I've put a shorter variant \nin the same spirit:\n\n \"--partitions-method requires greater than zero --partitions\"\n\nAttached v10 fixes both messages.\n\n-- \nFabien.",
"msg_date": "Wed, 18 Sep 2019 09:31:40 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 1:02 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n*/\n+ res = PQexec(con,\n+ \"select o.n, p.partstrat, pg_catalog.count(p.partrelid) \"\n+ \"from pg_catalog.pg_class as c \"\n+ \"join pg_catalog.pg_namespace as n on (n.oid = c.relnamespace) \"\n+ \"cross join lateral (select\npg_catalog.array_position(pg_catalog.current_schemas(true),\nn.nspname)) as o(n) \"\n+ \"left join pg_catalog.pg_partitioned_table as p on (p.partrelid = c.oid) \"\n+ \"left join pg_catalog.pg_inherits as i on (c.oid = i.inhparent) \"\n+ /* right name and schema in search_path */\n+ \"where c.relname = 'pgbench_accounts' and o.n is not null \"\n+ \"group by 1, 2 \"\n+ \"order by 1 asc \"\n\nI have a question, wouldn't it be sufficient to just group by 1? Are\nyou expecting multiple pgbench_accounts tables partitioned by different\nstrategies under the same schema?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:17:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 12:19 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Attached v9:\n>\n> - remove the PART_UNKNOWN and use partitions = -1 to tell\n> that there is an error, and partitions >= 1 to print info\n> - use search_path to find at most one pgbench_accounts\n> It still uses left join because I still think that it is appropriate.\n> I added a lateral to avoid repeating the array_position call\n> to manage the search_path, and use explicit pg_catalog everywhere.\n\nIt would be good if you can add some more comments to explain the\nintent of query.\n\nFew more comments:\n*\nelse\n+ {\n+ /* PQntuples(res) == 1: normal case, extract the partition status */\n+ char *ps = PQgetvalue(res, 0, 1);\n+\n+ if (ps == NULL)\n+ partition_method = PART_NONE;\n\n\nWhen can we expect ps as NULL? If this is not a valid case, then\nprobably an Assert would be better.\n\n*\n+ else if (PQntuples(res) == 0)\n+ {\n+ /* no pgbench_accounts found, builtin script should fail later */\n+ partition_method = PART_NONE;\n+ partitions = -1;\n+ }\n\nIf we don't find pgbench_accounts, let's give error here itself rather\nthan later unless you have a valid case in mind.\n\n*\n+\n+ /*\n+ * Partition information. Assume no partitioning on any failure, so as\n+ * to avoid failing on an older version.\n+ */\n..\n+ if (PQresultStatus(res) != PGRES_TUPLES_OK)\n+ {\n+ /* probably an older version, coldly assume no partitioning */\n+ partition_method = PART_NONE;\n+ partitions = 0;\n+ }\n\nSo, here we are silently absorbing the error when pgbench is executed\nagainst older server version which doesn't support partitioning. If\nthat is the case, then I think if user gives --partitions for the old\nserver version, it will also give an error? It is not clear in\ndocumentation whether we support or not using pgbench with older\nserver versions. I guess it didn't matter, but with this feature, it\ncan matter. 
Do we need to document this?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2019 16:17:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\n> + \"group by 1, 2 \"\n>\n> I have a question, wouldn't it be sufficient to just group by 1?\n\nConceptually yes, it is what is happening in practice, but SQL requires \nthat non-aggregated columns must appear explicitly in the GROUP BY \nclause, so I have to put it even if it will not change groups.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 18 Sep 2019 15:47:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>> - use search_path to find at most one pgbench_accounts\n>> It still uses left join because I still think that it is appropriate.\n>> I added a lateral to avoid repeating the array_position call\n>> to manage the search_path, and use explicit pg_catalog everywhere.\n>\n> It would be good if you can add some more comments to explain the\n> intent of query.\n\nIndeed, I put too few comments on the query.\n\n> + if (ps == NULL)\n> + partition_method = PART_NONE;\n>\n> When can we expect ps as NULL? If this is not a valid case, then\n> probably and Assert would be better.\n\nNo, ps is really NULL if there is no partitioning, because of the LEFT \nJOIN and pg_partitioned_table is just empty in that case.\n\nThe last else where there is an unexpected entry is different, see \ncomments about v11 below.\n\n> + else if (PQntuples(res) == 0)\n> + {\n> + /* no pgbench_accounts found, builtin script should fail later */\n> + partition_method = PART_NONE;\n> + partitions = -1;\n>\n> If we don't find pgbench_accounts, let's give error here itself rather\n> than later unless you have a valid case in mind.\n\nI thought of it, but decided not to: Someone could add a builtin script \nwhich does not use pgbench_accounts, or a parallel running script could \ncreate a table dynamically, whatever, so I prefer the error to be raised \nby the script itself, rather than deciding that it will fail before even \ntrying.\n\n> + /*\n> + * Partition information. 
Assume no partitioning on any failure, so as\n> + * to avoid failing on an older version.\n> + */\n> ..\n> + if (PQresultStatus(res) != PGRES_TUPLES_OK)\n> + {\n> + /* probably an older version, coldly assume no partitioning */\n> + partition_method = PART_NONE;\n> + partitions = 0;\n> + }\n>\n> So, here we are silently absorbing the error when pgbench is executed\n> against older server version which doesn't support partitioning.\n\nYes, exactly.\n\n> If that is the case, then I think if user gives --partitions for the old \n> server version, it will also give an error?\n\nYes, on -i it will fail because the syntax will not be recognized.\n\n> It is not clear in documentation whether we support or not using pgbench \n> with older server versions.\n\nIndeed. We more or less do in practice. Command \"psql\" works back to 8 \nAFAICR, and pgbench as well.\n\n> I guess it didn't matter, but with this feature, it can matter. Do we \n> need to document this?\n\nThis has been discussed in the past, and the conclusion was that it was \nnot worth the effort. We just try not to break things if it is avoidable. \nIn this regard, the patch slightly changes FILLFACTOR output, which is \nremoved if the value is 100 (%) as it is the default, which means that \ntable creation would work on very old versions which did not support \nfillfactor, unless you specify a lower percentage.\n\nAttached v11:\n\n - add quite a few comments on the pg_catalog query\n\n - reverts the partitions >= 1 test; If some new partition method is\n added that pgbench does not know about, the failure mode will be that\n nothing is printed rather than printing something strange like\n \"method none with 2 partitions\".\n\n-- \nFabien.",
"msg_date": "Wed, 18 Sep 2019 19:03:02 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hi Fabien,\n\nOn Thu, Sep 19, 2019 at 2:03 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > If that is the case, then I think if user gives --partitions for the old\n> > server version, it will also give an error?\n>\n> Yes, on -i it will fail because the syntax will not be recognized.\n\nMaybe we should be checking the server version, which would allow to\nproduce more useful error messages when these options are used against\nolder servers, like\n\nif (sversion < 10000)\n fprintf(stderr, \"cannot use --partitions/--partitions-method\nagainst servers older than 10\");\n\nWe would also have to check that partition-method=hash is not used against v10.\n\nMaybe overkill?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:10:16 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Amit,\n\n>> Yes, on -i it will fail because the syntax will not be recognized.\n>\n> Maybe we should be checking the server version, which would allow to\n> produce more useful error messages when these options are used against\n> older servers, like\n>\n> if (sversion < 10000)\n> fprintf(stderr, \"cannot use --partitions/--partitions-method\n> against servers older than 10\");\n>\n> We would also have to check that partition-method=hash is not used against v10.\n>\n> Maybe overkill?\n\nYes, I think so: the error detection and messages would be more or less \nreplicated from the server and would vary from version to version.\n\nI do not think that it is worth going down this path because the use case is \nvirtually void as people in 99.9% of cases would use a pgbench matching \nthe server version. For those who do not, the error message should be \nclear enough to let them guess what the issue is. Also, it would be \nuntestable.\n\nOne thing we could eventually do is just to check pgbench version against \nthe server version like psql does and output a generic warning if they \ndiffer, but frankly I do not think it is worth the effort: ISTM that \nnobody ever complained about such issues. Also, that would be a matter for \nanother patch.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 19 Sep 2019 06:55:32 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Wed, Sep 18, 2019 at 10:33 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Amit,\n>\n> >> - use search_path to find at most one pgbench_accounts\n> >> It still uses left join because I still think that it is appropriate.\n> >> I added a lateral to avoid repeating the array_position call\n> >> to manage the search_path, and use explicit pg_catalog everywhere.\n> >\n> > It would be good if you can add some more comments to explain the\n> > intent of query.\n>\n> Indeed, I put too few comments on the query.\n>\n> > + if (ps == NULL)\n> > + partition_method = PART_NONE;\n> >\n> > When can we expect ps as NULL? If this is not a valid case, then\n> > probably and Assert would be better.\n>\n> No, ps is really NULL if there is no partitioning, because of the LEFT\n> JOIN and pg_partitioned_table is just empty in that case.\n>\n\n'ps' itself won't be NULL in that case, the value it contains is NULL.\nI have debugged this case as well. 'ps' itself can be NULL only when\nyou pass wrong column number or something like that to PQgetvalue.\n\n> The last else where there is an unexpected entry is different, see\n> comments about v11 below.\n>\n> > + else if (PQntuples(res) == 0)\n> > + {\n> > + /* no pgbench_accounts found, builtin script should fail later */\n> > + partition_method = PART_NONE;\n> > + partitions = -1;\n> >\n> > If we don't find pgbench_accounts, let's give error here itself rather\n> > than later unless you have a valid case in mind.\n>\n> I thought of it, but decided not to: Someone could add a builtin script\n> which does not use pgbench_accounts, or a parallel running script could\n> create a table dynamically, whatever, so I prefer the error to be raised\n> by the script itself, rather than deciding that it will fail before even\n> trying.\n>\n\nI think this is not a possibility today and I don't know of the\nfuture. I don't think it is a good idea to add code which we can't\nreach today. 
You can probably add Assert if required.\n\n> > + /*\n> > + * Partition information. Assume no partitioning on any failure, so as\n> > + * to avoid failing on an older version.\n> > + */\n> > ..\n> > + if (PQresultStatus(res) != PGRES_TUPLES_OK)\n> > + {\n> > + /* probably an older version, coldly assume no partitioning */\n> > + partition_method = PART_NONE;\n> > + partitions = 0;\n> > + }\n> >\n> > So, here we are silently absorbing the error when pgbench is executed\n> > against older server version which doesn't support partitioning.\n>\n> Yes, exactly.\n>\n> > If that is the case, then I think if user gives --partitions for the old\n> > server version, it will also give an error?\n>\n> Yes, on -i it will fail because the syntax will not be recognized.\n>\n> > It is not clear in documentation whether we support or not using pgbench\n> > with older server versions.\n>\n> Indeed. We more or less do in practice. Command \"psql\" works back to 8\n> AFAICR, and pgbench as well.\n>\n> > I guess it didn't matter, but with this feature, it can matter. Do we\n> > need to document this?\n>\n> This has been discussed in the past, and the conclusion was that it was\n> not worth the effort. We just try not to break things if it is avoidable.\n> On this regard, the patch slightly changes FILLFACTOR output, which is\n> removed if the value is 100 (%) as it is the default, which means that\n> table creation would work on very very old version which did not support\n> fillfactor, unless you specify a lower percentage.\n>\n\nHmm, why you need to change the fill factor behavior? 
If it is not\nspecifically required for the functionality of this patch, then I\nsuggest keeping that behavior as it is.\n\n> Attached v11:\n>\n> - add quite a few comments on the pg_catalog query\n>\n> - reverts the partitions >= 1 test; If some new partition method is\n> added that pgbench does not know about, the failure mode will be that\n> nothing is printed rather than printing something strange like\n> \"method none with 2 partitions\".\n>\n\nbut how will that new partition method be associated with a table\ncreated via pgbench? I think the previous check was good because it\nmakes partition checking consistent throughout the patch.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:38:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 10:25 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Hello Amit,\n>\n> >> Yes, on -i it will fail because the syntax will not be recognized.\n> >\n> > Maybe we should be checking the server version, which would allow to\n> > produce more useful error messages when these options are used against\n> > older servers, like\n> >\n> > if (sversion < 10000)\n> > fprintf(stderr, \"cannot use --partitions/--partitions-method\n> > against servers older than 10\");\n> >\n> > We would also have to check that partition-method=hash is not used against v10.\n> >\n> > Maybe overkill?\n>\n> Yes, I think so: the error detection and messages would be more or less\n> replicated from the server and would vary from version to version.\n>\n\nYeah, but I think Amit L's point is worth considering. I think it\nwould be good if a few other people can also share their suggestions on\nthis point. Alvaro, Dilip, anybody else following this thread, would\nlike to comment? It is important to know others' opinions on this\nbecause this will change how pgbench behaves with prior versions.\n\n> I do not think that it is worth going this path because the use case is\n> virtually void as people in 99.9% of cases would use a pgbench matching\n> the server version.\n\nFair enough, but there is no restriction on using it with prior\nversions. In fact some people might want to use this with v11 where\npartitioning was present. So, we shouldn't ignore this point.\n\n\n> One thing we could eventually do is just to check pgbench version against\n> the server version like psql does and output a generic warning if they\n> differ, but franckly I do not think it is worth the effort:\n>\n\nYeah and even if we want to do something like that, it should not be\npart of this patch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 11:47:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hi Fabien,\n\nOn Thu, Sep 19, 2019 at 1:55 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> Hello Amit,\n>\n> >> Yes, on -i it will fail because the syntax will not be recognized.\n> >\n> > Maybe we should be checking the server version, which would allow to\n> > produce more useful error messages when these options are used against\n> > older servers, like\n> >\n> > if (sversion < 10000)\n> > fprintf(stderr, \"cannot use --partitions/--partitions-method\n> > against servers older than 10\");\n> >\n> > We would also have to check that partition-method=hash is not used against v10.\n> >\n> > Maybe overkill?\n>\n> Yes, I think so: the error detection and messages would be more or less\n> replicated from the server and would vary from version to version.\n>\n> I do not think that it is worth going this path because the use case is\n> virtually void as people in 99.9% of cases would use a pgbench matching\n> the server version. For those who do not, the error message should be\n> clear enough to let them guess what the issue is. Also, it would be\n> untestable.\n\nOkay, I can understand the desire to not add code for rarely occurring\nsituations where the server's error is a good enough clue.\n\n> One thing we could eventually do is just to check pgbench version against\n> the server version like psql does and output a generic warning if they\n> differ, but franckly I do not think it is worth the effort: ISTM that\n> nobody ever complained about such issues.\n\nAgree.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 19 Sep 2019 17:37:13 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2019 at 10:25 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Hello Amit,\n> >\n> > >> Yes, on -i it will fail because the syntax will not be recognized.\n> > >\n> > > Maybe we should be checking the server version, which would allow to\n> > > produce more useful error messages when these options are used against\n> > > older servers, like\n> > >\n> > > if (sversion < 10000)\n> > > fprintf(stderr, \"cannot use --partitions/--partitions-method\n> > > against servers older than 10\");\n> > >\n> > > We would also have to check that partition-method=hash is not used against v10.\n> > >\n> > > Maybe overkill?\n> >\n> > Yes, I think so: the error detection and messages would be more or less\n> > replicated from the server and would vary from version to version.\n> >\n>\n> Yeah, but I think Amit L's point is worth considering. I think it\n> would be good if a few other people can also share their suggestion on\n> this point. Alvaro, Dilip, anybody else following this thread, would\n> like to comment? It is important to know others opinion on this\n> because this will change how pgbench behaves with prior versions.\n\nIMHO, we don't need to invent the error handling at the pgbench\ninstead we can rely on the server's error.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 15:54:05 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> [...] 'ps' itself won't be NULL in that case, the value it contains is \n> NULL. I have debugged this case as well. 'ps' itself can be NULL only \n> when you pass wrong column number or something like that to PQgetvalue.\n\nArgh, you are right! I mixed up C NULL and SQL NULL:-(\n\n>>> If we don't find pgbench_accounts, let's give error here itself rather\n>>> than later unless you have a valid case in mind.\n>>\n>> I thought of it, but decided not to: Someone could add a builtin script\n>> which does not use pgbench_accounts, or a parallel running script could\n>> create a table dynamically, whatever, so I prefer the error to be raised\n>> by the script itself, rather than deciding that it will fail before even\n>> trying.\n>\n> I think this is not a possibility today and I don't know of the\n> future. I don't think it is a good idea to add code which we can't\n> reach today. You can probably add Assert if required.\n\nI added a fail on an unexpected partition method, i.e. not 'r' or 'h',\nand an Assert of PQgetvalue returns NULL.\n\nI fixed the query so that it counts actual partitions, otherwise I was \ngetting one for a partitioned table without partitions attached, which \ndoes not generate an error by the way. I just figured out that pgbench \ndoes not check that UPDATE updates anything. Hmmm.\n\n> Hmm, why you need to change the fill factor behavior? If it is not\n> specifically required for the functionality of this patch, then I\n> suggest keeping that behavior as it is.\n\nThe behavior is not actually changed, but I had to move fillfactor away \nbecause it cannot be declared on partitioned tables, it must be declared \non partitions only. 
Once there is a function to handle that it is pretty \neasy to add the test.\n\nI can remove it but frankly there are only benefits: the default is now \ntested by pgbench, the create query is smaller, and it would work with \nolder versions of pg, which does not matter but is good on principle.\n\n>> added that pgbench does not know about, the failure mode will be that\n>> nothing is printed rather than printing something strange like\n>> \"method none with 2 partitions\".\n>\n> but how will that new partition method will be associated with a table\n> created via pgbench?\n\nThe user could do a -i with a version of pgbench and bench with another \none. I do that often while developing…\n\n> I think the previous check was good because it makes partition checking \n> consistent throughout the patch.\n\nThis case now generates a fail.\n\nv12:\n - fixes NULL vs NULL\n - works correctly with a partitioned table without partitions attached\n - generates an error if the partition method is unknown\n - adds an assert\n\n-- \nFabien.",
"msg_date": "Thu, 19 Sep 2019 21:11:13 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 12:41 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> This case now generates a fail.\n>\n> v12:\n> - fixes NULL vs NULL\n> - works correctly with a partitioned table without partitions attached\n> - generates an error if the partition method is unknown\n> - adds an assert\n>\n\nYou seem to have attached some previous version (v2) of this patch. I\ncould see old issues in the patch which we have sorted out in the\nreview.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 08:30:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 12:41 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Amit,\n>\n> > [...] 'ps' itself won't be NULL in that case, the value it contains is\n> > NULL. I have debugged this case as well. 'ps' itself can be NULL only\n> > when you pass wrong column number or something like that to PQgetvalue.\n>\n> Argh, you are right! I mixed up C NULL and SQL NULL:-(\n>\n> >>> If we don't find pgbench_accounts, let's give error here itself rather\n> >>> than later unless you have a valid case in mind.\n> >>\n> >> I thought of it, but decided not to: Someone could add a builtin script\n> >> which does not use pgbench_accounts, or a parallel running script could\n> >> create a table dynamically, whatever, so I prefer the error to be raised\n> >> by the script itself, rather than deciding that it will fail before even\n> >> trying.\n> >\n> > I think this is not a possibility today and I don't know of the\n> > future. I don't think it is a good idea to add code which we can't\n> > reach today. You can probably add Assert if required.\n>\n> I added a fail on an unexpected partition method, i.e. not 'r' or 'h',\n> and an Assert of PQgetvalue returns NULL.\n>\n> I fixed the query so that it counts actual partitions, otherwise I was\n> getting one for a partitioned table without partitions attached, which\n> does not generate an error by the way. I just figured out that pgbench\n> does not check that UPDATE updates anything. Hmmm.\n>\n> > Hmm, why you need to change the fill factor behavior? If it is not\n> > specifically required for the functionality of this patch, then I\n> > suggest keeping that behavior as it is.\n>\n> The behavior is not actually changed, but I had to move fillfactor away\n> because it cannot be declared on partitioned tables, it must be declared\n> on partitions only. 
Once there is a function to handle that it is pretty\n> easy to add the test.\n>\n> I can remove it but franckly there are only benefits: the default is now\n> tested by pgbench, the create query is smaller, and it would work with\n> older versions of pg, which does not matter but is good on principle.\n>\n\nI am not saying that it is a bad check on its own, rather it might be\ngood, but let's not do any unrelated change as that will delay the\nmain patch. Once, we are done with the main patch, you can propose\nthese as improvements.\n\n> >> added that pgbench does not know about, the failure mode will be that\n> >> nothing is printed rather than printing something strange like\n> >> \"method none with 2 partitions\".\n> >\n> > but how will that new partition method will be associated with a table\n> > created via pgbench?\n>\n> The user could do a -i with a version of pgbench and bench with another\n> one. I do that often while developing…\n>\n\nI am not following what you want to say here especially (\"pgbench and\nbench with another one\"). Can you explain with some example?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 20 Sep 2019 09:02:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": ">> v12:\n>> - fixes NULL vs NULL\n>> - works correctly with a partitioned table without partitions attached\n>> - generates an error if the partition method is unknown\n>> - adds an assert\n>\n> You seem to have attached some previous version (v2) of this patch. I\n> could see old issues in the patch which we have sorted out in the\n> review.\n\nIndeed. This is a change from forgetting the attachment.\n\nHere is v12. Hopefully.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Sep 2019 06:50:53 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": ">> The behavior is not actually changed, but I had to move fillfactor away\n>> because it cannot be declared on partitioned tables, it must be declared\n>> on partitions only. Once there is a function to handle that it is pretty\n>> easy to add the test.\n>>\n>> I can remove it but franckly there are only benefits: the default is now\n>> tested by pgbench, the create query is smaller, and it would work with\n>> older versions of pg, which does not matter but is good on principle.\n>\n> I am not saying that it is a bad check on its own, rather it might be\n> good, but let's not do any unrelated change as that will delay the\n> main patch. Once, we are done with the main patch, you can propose\n> these as improvements.\n\nI would not bother to create a patch for so small an improvement. This \nmakes sense in passing because the created function makes it very easy, \nbut otherwise I'll just drop it.\n\n>> The user could do a -i with a version of pgbench and bench with another\n>> one. I do that often while developing…\n>\n> I am not following what you want to say here especially (\"pgbench and\n> bench with another one\"). Can you explain with some example?\n\nWhile developing, I often run the pgbench client under development against an \nalready created set of tables on an already created cluster, and usually \nthe server side on my laptop is the last major release from pgdg (ie 11.5) \nwhile the pgbench I'm testing is from sources (ie 12dev). If I type \n\"pgbench\" I run 11.5, and in the sources \"./pgbench\" runs the dev version.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Sep 2019 06:59:25 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 10:29 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> >> The behavior is not actually changed, but I had to move fillfactor away\n> >> because it cannot be declared on partitioned tables, it must be declared\n> >> on partitions only. Once there is a function to handle that it is pretty\n> >> easy to add the test.\n> >>\n> >> I can remove it but franckly there are only benefits: the default is now\n> >> tested by pgbench, the create query is smaller, and it would work with\n> >> older versions of pg, which does not matter but is good on principle.\n> >\n> > I am not saying that it is a bad check on its own, rather it might be\n> > good, but let's not do any unrelated change as that will delay the\n> > main patch. Once, we are done with the main patch, you can propose\n> > these as improvements.\n>\n> I would not bother to create a patch for so small an improvement. This\n> makes sense in passing because the created function makes it very easy,\n> but otherwise I'll just drop it.\n>\n\nI would prefer to drop for now.\n\n> >> The user could do a -i with a version of pgbench and bench with another\n> >> one. I do that often while developing…\n> >\n> > I am not following what you want to say here especially (\"pgbench and\n> > bench with another one\"). Can you explain with some example?\n>\n> While developing, I often run pgbench under development client against an\n> already created set of tables on an already created cluster, and usually\n> the server side on my laptop is the last major release from pgdg (ie 11.5)\n> while the pgbench I'm testing is from sources (ie 12dev). If I type\n> \"pgbench\" I run 11.5, and in the sources \"./pgbench\" runs the dev version.\n>\n\nHmm, I think some such thing is possible when you are running pgbench\nof lower version with tables initialized by some higher version of\npgbench. 
Because higher version pgbench must be a superset of lower\nversion unless we drop support for one of the partitioning methods. I\nthink even if there is some unknown partition method, it should be\ndetected much earlier rather than reaching the stage of printing the\nresults like after the query for partitions in below code.\n\n+ else\n+ {\n+ fprintf(stderr, \"unexpected partition method: \\\"%s\\\"\\n\", ps);\n+ exit(1);\n+ }\n\nIf we can't catch that earlier, then it might be better to have some\nversion-specific checks rather than such obscure code which is\ndifficult to understand for others.\n\nI have made a few modifications in the attached patch.\n* move the create partitions related code into a separate function.\n* make the check related to number of partitions consistent i.e check\npartitions > 0 apart from where we print which I also want to change\nbut let us first discuss one of the above points\n* when we don't find pgbench_accounts table, error out instead of continuing\n* ensure append_fillfactor doesn't assume that it has to append\nfillfactor and removed fillfactor < 100 check from it.\n* improve the comments around query to fetch partitions\n* improve the comments in the patch and make the code look like nearby code\n* pgindent the patch\n\nI think we should try to add some note or comment about why we only\nchoose to partition pgbench_accounts table when the user has given\n--partitions option.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 20 Sep 2019 17:00:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>> I would not bother to create a patch for so small an improvement. This\n>> makes sense in passing because the created function makes it very easy,\n>> but otherwise I'll just drop it.\n>\n> I would prefer to drop for now.\n\nAttached v13 does that, I added a comment instead. I do not think that it \nis an improvement.\n\n> + else\n> + {\n> + fprintf(stderr, \"unexpected partition method: \\\"%s\\\"\\n\", ps);\n> + exit(1);\n> + }\n>\n> If we can't catch that earlier, then it might be better to have some\n> version-specific checks rather than such obscure code which is\n> difficult to understand for others.\n\nHmmm. The code simply checks for the current partitioning and fails if the \nresult is unknown, which I understood was what you asked, the previous \nversion was just ignoring the result.\n\nThe likelyhood of postgres dropping support for range or hash partitions \nseems unlikely.\n\nThis issue rather be raised if an older partition-enabled pgbench is run \nagainst a newer postgres which adds a new partition method. But then I \ncannot guess when a new partition method will be added, so I cannot put a \nguard with a version about something in the future. Possibly, if no new \nmethod is ever added, the code will never be triggered.\n\n> I have made a few modifications in the attached patch.\n> * move the create partitions related code into a separate function.\n\nWhy not. 
Not sure it is an improvement.\n\n> * make the check related to number of partitions consistent i.e check\n> partitions > 0 apart from where we print which I also want to change\n> but let us first discuss one of the above points\n\nI switched two instances of >= 1 to > 0, which had 1 instance before.\n\n> * when we don't found pgbench_accounts table, error out instead of continuing\n\nI do not think that it is a good idea, but I did it anyway to move \nthings forward.\n\n> * ensure append_fillfactor doesn't assume that it has to append\n> fillfactor and removed fillfactor < 100 check from it.\n\nDone, which is too bad.\n\n> * improve the comments around query to fetch partitions\n\nWhat? How?\n\nThere are already quite a few comments compared to the length of the \nquery.\n\n> * improve the comments in the patch and make the code look like nearby \n> code\n\nThis requirement is too fuzzy. I re-read the changes, and both code and \ncomments look okay to me.\n\n> * pgindent the patch\n\nDone.\n\n> I think we should try to add some note or comment that why we only\n> choose to partition pgbench_accounts table when the user has given\n> --partitions option.\n\nAdded as a comment on the initPartition function.\n\n-- \nFabien.",
"msg_date": "Fri, 20 Sep 2019 20:55:47 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 12:26 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> >> I would not bother to create a patch for so small an improvement. This\n> >> makes sense in passing because the created function makes it very easy,\n> >> but otherwise I'll just drop it.\n> >\n> > I would prefer to drop for now.\n>\n> Attached v13 does that, I added a comment instead. I do not think that it\n> is an improvement.\n>\n> > + else\n> > + {\n> > + fprintf(stderr, \"unexpected partition method: \\\"%s\\\"\\n\", ps);\n> > + exit(1);\n> > + }\n> >\n> > If we can't catch that earlier, then it might be better to have some\n> > version-specific checks rather than such obscure code which is\n> > difficult to understand for others.\n>\n> Hmmm. The code simply checks for the current partitioning and fails if the\n> result is unknown, which I understood was what you asked, the previous\n> version was just ignoring the result.\n>\n\nYes, this code is correct. I am not sure if you understood the point,\nso let me try again. I am bothered about below code in the patch:\n+ /* only print partitioning information if some partitioning was detected */\n+ if (partition_method != PART_NONE)\n\nThis is the only place now where we check 'whether there are any\npartitions' differently. I am suggesting to make this similar to\nother checks (if (partitions > 0)).\n\n> The likelyhood of postgres dropping support for range or hash partitions\n> seems unlikely.\n>\n> This issue rather be raised if an older partition-enabled pgbench is run\n> against a newer postgres which adds a new partition method. But then I\n> cannot guess when a new partition method will be added, so I cannot put a\n> guard with a version about something in the future. 
Possibly, if no new\n> method is ever added, the code will never be triggered.\n>\n\nSure, even in that case your older version of pgbench will be able to\ndetect by below code:\n+ else\n+ {\n+ fprintf(stderr, \"unexpected partition method: \\\"%s\\\"\\n\", ps);\n+ exit(1);\n+ }\n\n>\n> > * improve the comments around query to fetch partitions\n>\n> What? How?\n>\n> There are already quite a few comments compared to the length of the\n> query.\n>\n\nHmm, you have just written what each part of the query is doing which\nI think one can identify if we write some general comment as I have in\nthe patch to explain the overall intent. Even if we write what each\npart of the statement is doing, the comment explaining overall intent\nis required. I personally don't like writing a comment for each\nsub-part of the query as that makes reading the query difficult. See\nthe patch sent by me in my previous email.\n\n> > * improve the comments in the patch and make the code look like nearby\n> > code\n>\n> This requirement is to fuzzy. I re-read the changes, and both code and\n> comments look okay to me.\n>\n\nI have done that in some of the cases in the patch attached by me in\nmy last email. Have you looked at those changes? Try to make those\nchanges in the next version unless you see something wrong is written\nin comments.\n\n> > * pgindent the patch\n>\n> Done.\n>\n> > I think we should try to add some note or comment that why we only\n> > choose to partition pgbench_accounts table when the user has given\n> > --partitions option.\n>\n> Added as a comment on the initPartition function.\n>\n\nI am not sure if something like that is required in the docs, but we\ncan leave it for now.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 21 Sep 2019 09:02:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> Yes, this code is correct. I am not sure if you understood the point,\n> so let me try again. I am bothered about below code in the patch:\n> + /* only print partitioning information if some partitioning was detected */\n> + if (partition_method != PART_NONE)\n>\n> This is the only place now where we check 'whether there are any\n> partitions' differently. I am suggesting to make this similar to\n> other checks (if (partitions > 0)).\n\nAs I said somewhere up thread, you can have a partitioned table with zero \npartitions, and it works fine (yep! the update just does not do anything…) \nso partitions > 0 is not a good way to know whether there is a partitioned \ntable when running a bench. It is a good way for initialization, though, \nbecause we are creating them.\n\n sh> pgbench -i --partitions=1\n sh> psql -c 'DROP TABLE pgbench_accounts_1'\n sh> pgbench -T 10\n ...\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 1\n partition method: hash\n partitions: 0\n query mode: simple\n number of clients: 1\n number of threads: 1\n duration: 10 s\n number of transactions actually processed: 2314\n latency average = 4.323 ms\n tps = 231.297122 (including connections establishing)\n tps = 231.549125 (excluding connections establishing)\n\nAs postgres does not break, there is no good reason to forbid it.\n\n> [...] Sure, even in that case your older version of pgbench will be able \n> to detect by below code [...] \"unexpected partition method: \" [...].\n\nYes, that is what I was saying.\n\n> Hmm, you have just written what each part of the query is doing which I \n> think one can identify if we write some general comment as I have in the \n> patch to explain the overall intent. 
Even if we write what each part of \n> the statement is doing, the comment explaining overall intent is \n> required.\n\nThere were some comments.\n\n> I personally don't like writing a comment for each sub-part of the query \n> as that makes reading the query difficult. See the patch sent by me in \n> my previous email.\n\nI did not notice there was an attachment.\n\n> I have done that in some of the cases in the patch attached by me in\n> my last email. Have you looked at those changes?\n\nNope, as I was not expecting one.\n\n> Try to make those changes in the next version unless you see something \n> wrong is written in comments.\n\nI incorporated most of them, although I made them terser, and fixed them \nwhen inaccurate.\n\nI did not buy moving the condition inside the fillfactor function.\n\nSee attached v14.\n\n-- \nFabien.",
"msg_date": "Sat, 21 Sep 2019 09:48:31 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Sat, Sep 21, 2019 at 1:18 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Yes, this code is correct. I am not sure if you understood the point,\n> > so let me try again. I am bothered about below code in the patch:\n> > + /* only print partitioning information if some partitioning was detected */\n> > + if (partition_method != PART_NONE)\n> >\n> > This is the only place now where we check 'whether there are any\n> > partitions' differently. I am suggesting to make this similar to\n> > other checks (if (partitions > 0)).\n>\n> As I said somewhere up thread, you can have a partitioned table with zero\n> partitions, and it works fine (yep! the update just does not do anything…)\n> so partitions > 0 is not a good way to know whether there is a partitioned\n> table when running a bench. It is a good way for initialization, though,\n> because we are creating them.\n>\n> sh> pgbench -i --partitions=1\n> sh> psql -c 'DROP TABLE pgbench_accounts_1'\n> sh> pgbench -T 10\n> ...\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 1\n> partition method: hash\n> partitions: 0\n>\n\nI am not sure how many users would be able to make out that it is a\nrun where actual partitions are not present unless they beforehand\nknow and detect such a condition in their scripts. What is the use of\nsuch a run which completes without actual updates? I think it is\nbetter if give an error for such a case rather than allowing to\nexecute it and then give some information which doesn't make much\nsense.\n\n>\n> I incorporated most of them, although I made them terser, and fixed them\n> when inaccurate.\n>\n> I did not buy moving the condition inside the fillfactor function.\n>\n\nI also don't agree with your position. My main concern here is that\nwe can't implicitly assume that fillfactor need to be appended. 
At\nthe very least we should have a comment saying why we are always\nappending the fillfactor for partitions, something like I had in my\npatch.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 22 Sep 2019 10:37:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Amit,\n\n>> sh> pgbench -T 10\n>> ...\n>> partitions: 0\n>\n> I am not sure how many users would be able to make out that it is a\n> run where actual partitions are not present unless they beforehand\n> know and detect such a condition in their scripts.\n\n> What is the use of such a run which completes without actual updates?\n\nWhy should we decide that they cannot do that?\n\nThe user could be testing the overhead of no-op updates, which is \nsomething interesting, and check what happens with partitioning in this \ncase. For that, they may delete pgbench_accounts contents or its \npartitions for partitioned version, or only some partitions, or whatever.\n\nA valid (future) case is that hopefully dynamic partitioning could be \nimplemented, thus no partitions would be a perfectly legal state even with \nthe standard benchmarking practice. Maybe the user just wrote a clever \nextension to do that with a trigger and wants to test the performance \noverhead with pgbench. Fine!\n\nIMHO we should not babysit the user by preventing them to run a bench \nwhich would not generate any error, so is fundamentaly legal. If running a \nbench should fail, it should fail while running it, not before even \nstarting. 
I have already added, at your request, early failure modes to the \npatch, about which I'm not very happy.\n\nNote that I'm mostly okay with warnings, but I know that I do not know \nwhat use may be done with pgbench, and I do not want to decide for users.\n\nIn this case, frankly I would not bother to issue a warning which has a \nvery low probability of ever being raised.\n\n> I think it is better if give an error for such a case rather than \n> allowing to execute it and then give some information which doesn't make \n> much sense.\n\nI strongly disagree, as explained above.\n\n>> I incorporated most of them, although I made them terser, and fixed them\n>> when inaccurate.\n>>\n>> I did not buy moving the condition inside the fillfactor function.\n>\n> I also don't agree with your position. My main concern here is that\n> we can't implicitly assume that fillfactor need to be appended.\n\nSure.\n\n> At the very least we should have a comment saying why we are always \n> appending the fillfactor for partitions\n\nThe patch does not do that, the condition is just before the call, not \ninside it with a boolean passed as an argument. AFAICS the behavior of v14 \nis exactly the same as your version and as the initial code.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 22 Sep 2019 08:52:23 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Sun, Sep 22, 2019 at 12:22 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >> sh> pgbench -T 10\n> >> ...\n> >> partitions: 0\n> >\n> > I am not sure how many users would be able to make out that it is a\n> > run where actual partitions are not present unless they beforehand\n> > know and detect such a condition in their scripts.\n>\n> > What is the use of such a run which completes without actual updates?\n>\n> Why should we decide that they cannot do that?\n>\n> The user could be testing the overhead of no-op updates, which is\n> something interesting, and check what happens with partitioning in this\n> case. For that, they may delete pgbench_accounts contents or its\n> partitions for partitioned version, or only some partitions, or whatever.\n>\n> A valid (future) case is that hopefully dynamic partitioning could be\n> implemented, thus no partitions would be a perfectly legal state even with\n> the standard benchmarking practice. Maybe the user just wrote a clever\n> extension to do that with a trigger and wants to test the performance\n> overhead with pgbench. Fine!\n>\n\nIt is better for a user to write a custom script for such cases.\nBecause after that \"select-only\" or \"simple-update\" script doesn't\nmake any sense. In the \"select-only\" case why would anyone like test\nfetching zero rows, similarly in \"simple-update\" case, 2 out of 3\nstatements will be a no-op. In \"tpcb-like\" script, 2 out of 5 queries\nwill be no-op and it won't be completely no-op updates as you are\ntelling. Having said that, I see your point and don't mind allowing\nsuch cases until we don't have to write special checks in the code to\nsupport such cases. 
Now, we can have a detailed comment in\nprintResults to explain why we have a different check there as compare\nto other code paths or change other code paths to have a similar check\nas printResults, but I am not convinced of any of those options.\n\n\n> >>\n> >> I did not buy moving the condition inside the fillfactor function.\n> >\n> > I also don't agree with your position. My main concern here is that\n> > we can't implicitly assume that fillfactor need to be appended.\n>\n> Sure.\n>\n> > At the very least we should have a comment saying why we are always\n> > appending the fillfactor for partitions\n>\n> The patch does not do that, the condition is just before the call, not\n> inside it with a boolean passed as an argument. AFAICS the behavior of v14\n> is exactly the same as your version and as the initial code.\n>\n\nHere, I am talking about the call to append_fillfactor in\ncreatePartitions() function. See, in my version, there are some\ncomments.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Sep 2019 08:23:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> It is better for a user to write a custom script for such cases.\n\nI kind-of agree, but IMHO this is not for pgbench to decide what is better \nfor the user and to fail on a script that would not fail.\n\n> Because after that \"select-only\" or \"simple-update\" script doesn't\n> make any sense. [...].\n\nWhat make sense in a benchmarking context may not be what you think. For \ninstance, AFAICR, I already removed benevolent but misplaced guards which \nwere preventing running scripts without queries: if one wants to look at \npgbench overheads because they are warry that it may be too high, they \nreally need to be allowed to run such scripts.\n\nThis not for us to decide, and as I already said they do if you want to \ntest no-op overheads. Moreover the problem pre-exists: if the user deletes \nthe contents of pgbench_accounts these scripts are no-op, and we do not \ncomplain. The no partition attached is just a particular case.\n\n> Having said that, I see your point and don't mind allowing such cases \n> until we don't have to write special checks in the code to support such \n> cases.\n\nIndeed, it is also simpler to not care about such issues in the code.\n\n> [...] Now, we can have a detailed comment in printResults to explain why \n> we have a different check there as compare to other code paths or change \n> other code paths to have a similar check as printResults, but I am not \n> convinced of any of those options.\n\nYep. ISTM that the current version is reasonable.\n\n> [...] I am talking about the call to append_fillfactor in \n> createPartitions() function. See, in my version, there are some \n> comments.\n\nOk, I understand that you want a comment. Patch v15 does that.\n\n-- \nFabien.",
"msg_date": "Mon, 23 Sep 2019 08:28:49 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 11:58 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Amit,\n>\n> > It is better for a user to write a custom script for such cases.\n>\n> I kind-of agree, but IMHO this is not for pgbench to decide what is better\n> for the user and to fail on a script that would not fail.\n>\n> > Because after that \"select-only\" or \"simple-update\" script doesn't\n> > make any sense. [...].\n>\n> What make sense in a benchmarking context may not be what you think. For\n> instance, AFAICR, I already removed benevolent but misplaced guards which\n> were preventing running scripts without queries: if one wants to look at\n> pgbench overheads because they are warry that it may be too high, they\n> really need to be allowed to run such scripts.\n>\n> This not for us to decide, and as I already said they do if you want to\n> test no-op overheads. Moreover the problem pre-exists: if the user deletes\n> the contents of pgbench_accounts these scripts are no-op, and we do not\n> complain. The no partition attached is just a particular case.\n>\n> > Having said that, I see your point and don't mind allowing such cases\n> > until we don't have to write special checks in the code to support such\n> > cases.\n>\n> Indeed, it is also simpler to not care about such issues in the code.\n>\n\nIf you agree with this, then why haven't you changed below check in patch:\n+ if (partition_method != PART_NONE)\n+ printf(\"partition method: %s\\npartitions: %d\\n\",\n+ PARTITION_METHOD[partition_method], partitions);\n\nThis is exactly the thing bothering me. It won't be easy for others\nto understand why this check for partitioning information is different\nfrom other checks. For you or me, it might be okay as we have\ndiscussed this case, but it won't be apparent to others. This doesn't\nbuy us much, so it is better to keep this code consistent with other\nplaces that check for partitions.\n\n> > [...] 
Now, we can have a detailed comment in printResults to explain why\n> > we have a different check there as compare to other code paths or change\n> > other code paths to have a similar check as printResults, but I am not\n> > convinced of any of those options.\n>\n> Yep. ISTM that the current version is reasonable.\n>\n> > [...] I am talking about the call to append_fillfactor in\n> > createPartitions() function. See, in my version, there are some\n> > comments.\n>\n> Ok, I understand that you want a comment. Patch v15 does that.\n>\n\nThanks!\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Sep 2019 14:46:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> {...]\n> If you agree with this, then why haven't you changed below check in patch:\n>\n> + if (partition_method != PART_NONE)\n> + printf(\"partition method: %s\\npartitions: %d\\n\",\n> + PARTITION_METHOD[partition_method], partitions);\n>\n> This is exactly the thing bothering me. It won't be easy for others\n> to understand why this check for partitioning information is different\n> from other checks.\n\nAs I tried to explain with an example, using \"partitions > 0\" does not \nwork in this case because you can have a partitioned table with zero \npartitions attached while benchmarking, but this cannot happen while \ncreating.\n\n> For you or me, it might be okay as we have discussed this case, but it \n> won't be apparent to others. This doesn't buy us much, so it is better \n> to keep this code consistent with other places that check for \n> partitions.\n\nAttached uses \"partition_method != PART_NONE\" consistently, plus an assert \non \"partitions > 0\" for checking and for triggering the default method at \nthe end of option processing.\n\n-- \nFabien.",
"msg_date": "Tue, 24 Sep 2019 15:29:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 6:59 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > For you or me, it might be okay as we have discussed this case, but it\n> > won't be apparent to others. This doesn't buy us much, so it is better\n> > to keep this code consistent with other places that check for\n> > partitions.\n>\n> Attached uses \"partition_method != PART_NONE\" consistently, plus an assert\n> on \"partitions > 0\" for checking and for triggering the default method at\n> the end of option processing.\n>\n\nOkay, I think making the check consistent is a step forward. The\nlatest patch is not compiling for me. You have used the wrong\nvariable name in below line:\n+ /* Partition pgbench_accounts table */\n+ if (partitions_method != PART_NONE && strcmp(ddl->table,\n\"pgbench_accounts\") == 0)\n\nAnother point is:\n+ else if (PQntuples(res) == 0)\n+ {\n+ /*\n+ * This case is unlikely as pgbench already found \"pgbench_branches\"\n+ * above to compute the scale.\n+ */\n+ fprintf(stderr,\n+ \"No pgbench_accounts table found in search_path. \"\n+ \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n\\\"%s\\\"\\n\", PQdb(con));\n\nWe don't recommend to start messages with a capital letter. See\n\"Upper Case vs. Lower Case\" section in docs [1]. It is not that we\nhave not used it anywhere else, but I think we should try to avoid it.\n\n[1] - https://www.postgresql.org/docs/devel/error-style-guide.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Sep 2019 09:17:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "> Okay, I think making the check consistent is a step forward. The\n> latest patch is not compiling for me.\n\nArgh, shame on me!\n\n> [...] We don't recommend to start messages with a capital letter. See \n> \"Upper Case vs. Lower Case\" section in docs [1]. It is not that we have \n> not used it anywhere else, but I think we should try to avoid it.\n\nOk.\n\nPatch v17 makes both above changes, compiles and passes pgbench TAP tests \non my laptop.\n\n-- \nFabien.",
"msg_date": "Thu, 26 Sep 2019 08:41:33 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "pgbench's main() is overly long already, and the new code being added\nseems to pollute it even more. Can we split it out into a static\nfunction that gets placed, say, just below disconnect_all() or maybe\nafter runInitSteps?\n\n(Also, we seem to be afraid of function prototypes. Why not move the\nappend_fillfactor() to *below* the functions that use it?)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 10:22:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Alvaro,\n\n> pgbench's main() is overly long already, and the new code being added\n> seems to pollute it even more. Can we split it out into a static\n> function that gets placed, say, just below disconnect_all() or maybe\n> after runInitSteps?\n\nI agree that refactoring is a good idea, but I do not think it belongs to \nthis patch. The file is pretty long too, probably some functions could be \nmoved to distinct files (eg expression evaluation, variable management, \n...).\n\n> (Also, we seem to be afraid of function prototypes. Why not move the \n> append_fillfactor() to *below* the functions that use it?)\n\nBecause we avoid one more line for the function prototype? I try to put \nfunctions in def/use order if possible, especially for small functions \nlike this one.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 26 Sep 2019 22:57:54 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On 2019-Sep-26, Fabien COELHO wrote:\n\n> \n> Hello Alvaro,\n> \n> > pgbench's main() is overly long already, and the new code being added\n> > seems to pollute it even more. Can we split it out into a static\n> > function that gets placed, say, just below disconnect_all() or maybe\n> > after runInitSteps?\n> \n> I agree that refactoring is a good idea, but I do not think it belongs to\n> this patch. The file is pretty long too, probably some functions could be\n> moved to distinct files (eg expression evaluation, variable management,\n> ...).\n\nI'm not suggesting to refactor anything as part of this patch -- just\nthat instead of adding that new code to main(), you create a new\nfunction for it.\n\n> > (Also, we seem to be afraid of function prototypes. Why not move the\n> > append_fillfactor() to *below* the functions that use it?)\n> \n> Because we avoid one more line for the function prototype? I try to put\n> functions in def/use order if possible, especially for small functions like\n> this one.\n\nI can see that ... I used to do that too. But nowadays I think it's\nless messy to put important stuff first, secondary uninteresting stuff\nlater. So I suggest to move that new function so that it appears below\nthe code that uses it. Not a big deal anyhow.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 18:06:06 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 2:36 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Sep-26, Fabien COELHO wrote:\n> > > pgbench's main() is overly long already, and the new code being added\n> > > seems to pollute it even more. Can we split it out into a static\n> > > function that gets placed, say, just below disconnect_all() or maybe\n> > > after runInitSteps?\n> >\n> > I agree that refactoring is a good idea, but I do not think it belongs to\n> > this patch. The file is pretty long too, probably some functions could be\n> > moved to distinct files (eg expression evaluation, variable management,\n> > ...).\n>\n> I'm not suggesting to refactor anything as part of this patch -- just\n> that instead of adding that new code to main(), you create a new\n> function for it.\n>\n> > > (Also, we seem to be afraid of function prototypes. Why not move the\n> > > append_fillfactor() to *below* the functions that use it?)\n> >\n> > Because we avoid one more line for the function prototype? I try to put\n> > functions in def/use order if possible, especially for small functions like\n> > this one.\n>\n> I can see that ... I used to do that too. But nowadays I think it's\n> less messy to put important stuff first, secondary uninteresting stuff\n> later. So I suggest to move that new function so that it appears below\n> the code that uses it. Not a big deal anyhow.\n>\n\nThanks, Alvaro, both seem like good suggestions to me. However, there\nare a few more things where your feedback can help:\na. With new options, we will partition pgbench_accounts and the\nreason is that because that is the largest table. Do we need to be\nexplicit about the reason in docs?\nb. I am not comfortable with test modification in\n001_pgbench_with_server.pl. Basically, it doesn't seem like we should\nmodify the existing test to use non-default tablespaces as part of\nthis patch. 
It might be a good idea in general, but I am not sure\ndoing as part of this patch is a good idea as there is no big value\naddition with that modification as far as this patch is concerned.\nOTOH, as such there is no harm in testing with non-default\ntablespaces.\n\nThe other thing is that the query used in patch to fetch partition\ninformation seems correct to me, but maybe there is a better way to\nget that information.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Sep 2019 07:51:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On 2019-Sep-27, Amit Kapila wrote:\n\n> Thanks, Alvaro, both seem like good suggestions to me. However, there\n> are a few more things where your feedback can help:\n> a. With new options, we will partition pgbench_accounts and the\n> reason is that because that is the largest table. Do we need to be\n> explicit about the reason in docs?\n\nHmm, I would document what is it that we do, and stop there without\nexplaining why. Unless you have concrete reasons to want the reason\ndocumented?\n\n> b. I am not comfortable with test modification in\n> 001_pgbench_with_server.pl. Basically, it doesn't seem like we should\n> modify the existing test to use non-default tablespaces as part of\n> this patch. It might be a good idea in general, but I am not sure\n> doing as part of this patch is a good idea as there is no big value\n> addition with that modification as far as this patch is concerned.\n> OTOH, as such there is no harm in testing with non-default\n> tablespaces.\n\nYeah, this change certainly is out of place in this patch.\n\n> The other thing is that the query used in patch to fetch partition\n> information seems correct to me, but maybe there is a better way to\n> get that information.\n\nI hadn't looked at that, but yeah it seems that it should be using\npg_partition_tree().\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 09:33:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 7:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-27, Amit Kapila wrote:\n>\n> > The other thing is that the query used in patch to fetch partition\n> > information seems correct to me, but maybe there is a better way to\n> > get that information.\n>\n> I hadn't looked at that, but yeah it seems that it should be using\n> pg_partition_tree().\n>\n\nI think we might also need to use pg_get_partkeydef along with\npg_partition_tree to fetch the partition method information. However,\nI think to find reloid of pgbench_accounts in the current search path,\nwe might need to use some part of query constructed by Fabien.\n\nFabien, what do you think about Alvaro's suggestion?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Sep 2019 10:49:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n> I think we might also need to use pg_get_partkeydef along with\n> pg_partition_tree to fetch the partition method information. However,\n> I think to find reloid of pgbench_accounts in the current search path,\n> we might need to use some part of query constructed by Fabien.\n>\n> Fabien, what do you think about Alvaro's suggestion?\n\nI think that the current straightforward SQL query is and works fine, and \nI find it pretty elegant. No doubt other solutions could be implemented to \nthe same effect, with SQL or possibly through introspection functions.\n\nIncidentally, ISTM that \"pg_partition_tree\" appears in v12, while \npartitions exist in v11, so it would break uselessly backward \ncompatibility of the feature which currently work with v11, which I do not \nfind desirable.\n\nAttached v18:\n - remove the test tablespace\n I had to work around a strange issue around partitioned tables and\n the default tablespace.\n - creates a separate function for setting scale, partitions and\n partition_method\n\n-- \nFabien.",
"msg_date": "Sat, 28 Sep 2019 08:11:14 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Sat, Sep 28, 2019 at 11:41 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Amit,\n>\n> > I think we might also need to use pg_get_partkeydef along with\n> > pg_partition_tree to fetch the partition method information. However,\n> > I think to find reloid of pgbench_accounts in the current search path,\n> > we might need to use some part of query constructed by Fabien.\n> >\n> > Fabien, what do you think about Alvaro's suggestion?\n>\n> I think that the current straightforward SQL query is and works fine, and\n> I find it pretty elegant. No doubt other solutions could be implemented to\n> the same effect, with SQL or possibly through introspection functions.\n>\n> Incidentally, ISTM that \"pg_partition_tree\" appears in v12, while\n> partitions exist in v11, so it would break uselessly backward\n> compatibility of the feature which currently work with v11, which I do not\n> find desirable.\n>\n\nFair enough. Alvaro, do let us know if you think this can be\nsimplified? I think even if we find some better way to get that\ninformation as compare to what Fabien has done here, we can change it\nlater without any impact.\n\n> Attached v18:\n> - remove the test tablespace\n> I had to work around a strange issue around partitioned tables and\n> the default tablespace.\n\n- if (tablespace != NULL)\n+\n+ if (tablespace != NULL && strcmp(tablespace, \"pg_default\") != 0)\n {\n\n- if (index_tablespace != NULL)\n+ if (index_tablespace != NULL && strcmp(index_tablespace, \"pg_default\") != 0)\n\nI don't think such a workaround is a good idea for two reasons (a)\nhaving check on the name (\"pg_default\") is not advisable, we should\nget the tablespace oid and then check if it is same as\nDEFAULTTABLESPACE_OID, (b) this will change something which was\npreviously allowed i.e. 
to append default tablespace name for the\nnon-partitioned tables.\n\nI don't think we need any such check, rather if the user gives\ndefault_tablespace with 'partitions' option, then let it fail with an\nerror \"cannot specify default tablespace for partitioned relations\".\nIf we do that then one of the modified pgbench tests will start\nfailing. I think we have two options here:\n\n(a) Don't test partitions with \"all possible options\" test and add a\ncomment on why we are not testing it there.\n(b) Create a non-default tablespace to test partitions with \"all\npossible options\" test as you have in your previous version. Also,\nadd a comment explaining why in that test we are using non-default\ntablespace.\n\nI am leaning towards approach (b) unless you and or Alvaro feels (a)\nis good for now or if you have some other idea.\n\nIf we want to go with option (b), I have small comment in your previous test:\n+# tablespace for testing\n+my $ts = $node->basedir . '/regress_pgbench_tap_1_ts_dir';\n+mkdir $ts or die \"cannot create directory $ts\";\n+my $ets = TestLib::perl2host($ts);\n+# add needed escaping!\n+$ets =~ s/'/''/;\n\nI am not sure if we really need this quote skipping stuff. Why can't\nwe write the test as below:\n\n# tablespace for testing\nmy $basedir = $node->basedir;\nmy $ts = \"$basedir/regress_pgbench_tap_1_ts_dir\";\nmkdir $ts or die \"cannot create directory $ts\";\n$ts = TestLib::perl2host($ts);\n$node->safe_psql('postgres',\n \"CREATE TABLESPACE regress_pgbench_tap_1_ts LOCATION '$ets';\"\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Sep 2019 11:36:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>> Attached v18:\n>> - remove the test tablespace\n>> I had to work around a strange issue around partitioned tables and\n>> the default tablespace.\n>\n> - if (tablespace != NULL)\n> + if (tablespace != NULL && strcmp(tablespace, \"pg_default\") != 0)\n>\n> [...]\n>\n> I don't think we need any such check, rather if the user gives\n> default_tablespace with 'partitions' option, then let it fail with an\n> error \"cannot specify default tablespace for partitioned relations\".\n\nThat is the one I wanted to avoid, which is triggered by TAP tests, but \nI'm fine with putting back a tablespace. Given partitioned table strange \nconstraints, ISTM desirable to check that it works with options such as \ntablespace and fillfactor.\n\n> (b) Create a non-default tablespace to test partitions with \"all\n> possible options\" test as you have in your previous version.\n\n> Also, add a comment explaining why in that test we are using non-default \n> tablespace.\n\n> I am leaning towards approach (b) unless you and or Alvaro feels (a)\n> is good for now or if you have some other idea.\n\nNo other idea. I put back the tablespace creation which I just removed, \nwith comments about why it is there.\n\n> If we want to go with option (b), I have small comment in your previous test:\n> +# tablespace for testing\n> +my $ts = $node->basedir . '/regress_pgbench_tap_1_ts_dir';\n> +mkdir $ts or die \"cannot create directory $ts\";\n> +my $ets = TestLib::perl2host($ts);\n> +# add needed escaping!\n> +$ets =~ s/'/''/;\n>\n> I am not sure if we really need this quote skipping stuff. 
Why can't\n> we write the test as below:\n>\n> # tablespace for testing\n> my $basedir = $node->basedir;\n> my $ts = \"$basedir/regress_pgbench_tap_1_ts_dir\";\n> mkdir $ts or die \"cannot create directory $ts\";\n> $ts = TestLib::perl2host($ts);\n> $node->safe_psql('postgres',\n> \"CREATE TABLESPACE regress_pgbench_tap_1_ts LOCATION '$ets';\"\n\nI think that this last command fails if the path contains a \"'\", so the \n'-escaping is necessary. I had to make changes in TAP tests before because \nit was not working when the path was a little bit strange, so now I'm \ncareful.\n\nAttached v19:\n - put back a local tablespace plus comments\n - remove the pg_default doubtful workaround.\n\n-- \nFabien.",
"msg_date": "Mon, 30 Sep 2019 10:56:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 2:26 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I am leaning towards approach (b) unless you and or Alvaro feels (a)\n> > is good for now or if you have some other idea.\n>\n> No other idea. I put back the tablespace creation which I just removed,\n> with comments about why it is there.\n>\n> > If we want to go with option (b), I have small comment in your previous test:\n> > +# tablespace for testing\n> > +my $ts = $node->basedir . '/regress_pgbench_tap_1_ts_dir';\n> > +mkdir $ts or die \"cannot create directory $ts\";\n> > +my $ets = TestLib::perl2host($ts);\n> > +# add needed escaping!\n> > +$ets =~ s/'/''/;\n> >\n> > I am not sure if we really need this quote skipping stuff. Why can't\n> > we write the test as below:\n> >\n> > # tablespace for testing\n> > my $basedir = $node->basedir;\n> > my $ts = \"$basedir/regress_pgbench_tap_1_ts_dir\";\n> > mkdir $ts or die \"cannot create directory $ts\";\n> > $ts = TestLib::perl2host($ts);\n> > $node->safe_psql('postgres',\n> > \"CREATE TABLESPACE regress_pgbench_tap_1_ts LOCATION '$ets';\"\n>\n> I think that this last command fails if the path contains a \"'\", so the\n> '-escaping is necessary. I had to make changes in TAP tests before because\n> it was not working when the path was a little bit strange, so now I'm\n> careful.\n>\n\nHmm, I don't know what kind of issues you have earlier faced, but\ntablespace creation doesn't allow quotes. See the message \"tablespace\nlocation cannot contain single quotes\" in CreateTableSpace. Also,\nthere are other places in tests like\nsrc/bin/pg_checksums/t/002_actions.pl which uses the way I have\nmentioned. I don't think there is any need for escaping single-quotes\nhere and I am not seeing the use of same. I don't want to introduce a\nnew pattern in tests which people can then tomorrow copy at other\nplaces even though such code is not required. 
OTOH, if there is a\ngenuine need for the same, then I am fine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Sep 2019 16:12:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Amit,\n\n>>> $node->safe_psql('postgres',\n>>> \"CREATE TABLESPACE regress_pgbench_tap_1_ts LOCATION '$ets';\"\n>>\n>> I think that this last command fails if the path contains a \"'\", so the\n>> '-escaping is necessary. I had to make changes in TAP tests before because\n>> it was not working when the path was a little bit strange, so now I'm\n>> careful.\n>\n> Hmm, I don't know what kind of issues you have earlier faced,\n\nAFAICR, path with shell-sensitive characters ($ ? * ...) which was \nbreaking something somewhere.\n\n> but tablespace creation doesn't allow quotes. See the message \n> \"tablespace location cannot contain single quotes\" in CreateTableSpace.\n\nHmmm. That is the problem of CreateTableSpace. From an SQL perspective, \nescaping is required. If the command fails later, that is the problem of \nthe command implementation, but otherwise this is just a plain syntax \nerror at the SQL level.\n\n> Also, there are other places in tests like \n> src/bin/pg_checksums/t/002_actions.pl which uses the way I have \n> mentioned.\n\nYes, I looked at it and imported the window-specific function to handle \nthe path. It does not do anything about escaping.\n\n> I don't think there is any need for escaping single-quotes\n> here\n\nAs said, this is required for SQL, or you must know that there are no \nsingle quotes in the string.\n\n> and I am not seeing the use of same.\n\nSure. It is probably buggy there too.\n\n> I don't want to introduce a new pattern in tests which people can then \n> tomorrow copy at other places even though such code is not required. \n> OTOH, if there is a genuine need for the same, then I am fine.\n\nHmmm. The committer is right by definition. Here is a version without \nescaping but with a comment instead.\n\n-- \nFabien.",
"msg_date": "Mon, 30 Sep 2019 13:47:04 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 5:17 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> > I don't want to introduce a new pattern in tests which people can then\n> > tomorrow copy at other places even though such code is not required.\n> > OTOH, if there is a genuine need for the same, then I am fine.\n>\n> Hmmm. The committer is right by definition. Here is a version without\n> escaping but with a comment instead.\n>\n\nThanks, attached is a patch with minor modifications which I am\nplanning to push after one more round of review on Thursday morning\nIST unless there are more comments by anyone else.\n\nThe changes include:\n1. ran pgindent\n2. As per Alvaro's suggestions move few function definitions.\n3. Changed one or two comments and fixed spelling at one place.\n\nThe one place where some suggestion might help:\n+ else if (PQntuples(res) == 0)\n+ {\n+ /*\n+ * This case is unlikely as pgbench already found \"pgbench_branches\"\n+ * above to compute the scale.\n+ */\n+ fprintf(stderr,\n+ \"no pgbench_accounts table found in search_path\\n\"\n+ \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n\\\"%s\\\"\\n\", PQdb(con));\n+ exit(1);\n+ }\n\nCan anyone else think of a better error message either in wording or\nstyle for above case?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 1 Oct 2019 10:20:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello Amit,\n\n> 1. ran pgindent\n> 2. As per Alvaro's suggestions move few function definitions.\n> 3. Changed one or two comments and fixed spelling at one place.\n\nThanks for the improvements.\n\nNot sure why you put \"XXX - \" in front of \"append_fillfactor\" comment, \nthough.\n\n> + fprintf(stderr,\n> + \"no pgbench_accounts table found in search_path\\n\"\n> + \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n> \\\"%s\\\"\\n\", PQdb(con));\n\n> Can anyone else think of a better error message either in wording or\n> style for above case?\n\nNo better idea from me. The second part is a duplicate from a earlier \ncomment, when getting the scale fails.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 1 Oct 2019 08:20:59 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, Oct 1, 2019 at 11:51 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Amit,\n>\n> > 1. ran pgindent\n> > 2. As per Alvaro's suggestions move few function definitions.\n> > 3. Changed one or two comments and fixed spelling at one place.\n>\n> Thanks for the improvements.\n>\n> Not sure why you put \"XXX - \" in front of \"append_fillfactor\" comment,\n> though.\n>\n>\nIt is to indicate that we can do this after further consideration.\n\n\n> > + fprintf(stderr,\n> > + \"no pgbench_accounts table found in search_path\\n\"\n> > + \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n> > \\\"%s\\\"\\n\", PQdb(con));\n>\n> > Can anyone else think of a better error message either in wording or\n> > style for above case?\n>\n> No better idea from me. The second part is a duplicate from a earlier\n> comment, when getting the scale fails.\n>\n\nYeah, I know that, but this doesn't look quite right. I mean to say\nwhatever we want to say via this message is correct, but I am not\ncompletely happy with the display part. How about something like:\n\"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n-i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\nsimilar lines? If possible, I want to convey this information in a single\nsentence.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Oct 1, 2019 at 11:51 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\nHello Amit,\n\n> 1. ran pgindent\n> 2. As per Alvaro's suggestions move few function definitions.\n> 3. Changed one or two comments and fixed spelling at one place.\n\nThanks for the improvements.\n\nNot sure why you put \"XXX - \" in front of \"append_fillfactor\" comment, \nthough.\nIt is to indicate that we can do this after further consideration. 
\n> + fprintf(stderr,\n> + \"no pgbench_accounts table found in search_path\\n\"\n> + \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n> \\\"%s\\\"\\n\", PQdb(con));\n\n> Can anyone else think of a better error message either in wording or\n> style for above case?\n\nNo better idea from me. The second part is a duplicate from a earlier \ncomment, when getting the scale fails.Yeah, I know that, but this doesn't look quite right. I mean to say whatever we want to say via this message is correct, but I am not completely happy with the display part. How about something like: \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on similar lines? If possible, I want to convey this information in a single sentence.-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 1 Oct 2019 19:09:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, 1 Oct 2019 at 15:39, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Oct 1, 2019 at 11:51 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>>\n>> Hello Amit,\n>>\n>> > 1. ran pgindent\n>> > 2. As per Alvaro's suggestions move few function definitions.\n>> > 3. Changed one or two comments and fixed spelling at one place.\n>>\n>> Thanks for the improvements.\n>>\n>> Not sure why you put \"XXX - \" in front of \"append_fillfactor\" comment,\n>> though.\n>>\n>>\n> It is to indicate that we can do this after further consideration.\n>\n>\n>> > + fprintf(stderr,\n>> > + \"no pgbench_accounts table found in search_path\\n\"\n>> > + \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n>> > \\\"%s\\\"\\n\", PQdb(con));\n>>\n>> > Can anyone else think of a better error message either in wording or\n>> > style for above case?\n>>\n>> No better idea from me. The second part is a duplicate from a earlier\n>> comment, when getting the scale fails.\n>>\n>\n> Yeah, I know that, but this doesn't look quite right. I mean to say\n> whatever we want to say via this message is correct, but I am not\n> completely happy with the display part. How about something like:\n> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n> similar lines? If possible, I want to convey this information in a single\n> sentence.\n>\n> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\ndatabase \\\"%s\\\"\\n\"?\n\n>\n-- \nRegards,\nRafia Sabih\n\nOn Tue, 1 Oct 2019 at 15:39, Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Oct 1, 2019 at 11:51 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\nHello Amit,\n\n> 1. ran pgindent\n> 2. As per Alvaro's suggestions move few function definitions.\n> 3. 
Changed one or two comments and fixed spelling at one place.\n\nThanks for the improvements.\n\nNot sure why you put \"XXX - \" in front of \"append_fillfactor\" comment, \nthough.\nIt is to indicate that we can do this after further consideration. \n> + fprintf(stderr,\n> + \"no pgbench_accounts table found in search_path\\n\"\n> + \"Perhaps you need to do initialization (\\\"pgbench -i\\\") in database\n> \\\"%s\\\"\\n\", PQdb(con));\n\n> Can anyone else think of a better error message either in wording or\n> style for above case?\n\nNo better idea from me. The second part is a duplicate from a earlier \ncomment, when getting the scale fails.Yeah, I know that, but this doesn't look quite right. I mean to say whatever we want to say via this message is correct, but I am not completely happy with the display part. How about something like: \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on similar lines? If possible, I want to convey this information in a single sentence.How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in database \\\"%s\\\"\\n\"? -- Regards,Rafia Sabih",
"msg_date": "Tue, 1 Oct 2019 16:21:00 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\n>> Yeah, I know that, but this doesn't look quite right. I mean to say\n>> whatever we want to say via this message is correct, but I am not\n>> completely happy with the display part. How about something like:\n>> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n>> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n>> similar lines? If possible, I want to convey this information in a single\n>> sentence.\n>>\n>> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\n> database \\\"%s\\\"\\n\"?\n\nI think that we should not presume too much about the solution: perhaps \nthe user did not specify the right database or host and it has nothing to \ndo with initialization.\n\nMaybe something like:\n\n\"pgbench_accounts is missing, perhaps you need to initialize (\\\"pgbench \n-i\\\") in database \\\"%s\\\"\\n\"\n\nThe two sentences approach has the logic of \"error\" and a separate \"hint\" \nwhich is often used.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 1 Oct 2019 16:48:33 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, 1 Oct 2019 at 16:48, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> >> Yeah, I know that, but this doesn't look quite right. I mean to say\n> >> whatever we want to say via this message is correct, but I am not\n> >> completely happy with the display part. How about something like:\n> >> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n> >> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n> >> similar lines? If possible, I want to convey this information in a\n> single\n> >> sentence.\n> >>\n> >> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\n> > database \\\"%s\\\"\\n\"?\n>\n> I think that we should not presume too much about the solution: perhaps\n> the user did not specify the right database or host and it has nothing to\n> do with initialization.\n>\n> Maybe something like:\n>\n> \"pgbench_accounts is missing, perhaps you need to initialize (\\\"pgbench\n> -i\\\") in database \\\"%s\\\"\\n\"\n>\n> The two sentences approach has the logic of \"error\" and a separate \"hint\"\n> which is often used.\n>\n\n+1 for error and hint separation.\n\n\n-- \nRegards,\nRafia Sabih\n\nOn Tue, 1 Oct 2019 at 16:48, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> Yeah, I know that, but this doesn't look quite right. I mean to say\n>> whatever we want to say via this message is correct, but I am not\n>> completely happy with the display part. How about something like:\n>> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n>> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n>> similar lines? 
If possible, I want to convey this information in a single\n>> sentence.\n>>\n>> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\n> database \\\"%s\\\"\\n\"?\n\nI think that we should not presume too much about the solution: perhaps \nthe user did not specify the right database or host and it has nothing to \ndo with initialization.\n\nMaybe something like:\n\n\"pgbench_accounts is missing, perhaps you need to initialize (\\\"pgbench \n-i\\\") in database \\\"%s\\\"\\n\"\n\nThe two sentences approach has the logic of \"error\" and a separate \"hint\" \nwhich is often used.+1 for error and hint separation.-- Regards,Rafia Sabih",
"msg_date": "Tue, 1 Oct 2019 17:15:21 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, Oct 1, 2019 at 8:45 PM Rafia Sabih <rafia.pghackers@gmail.com>\nwrote:\n\n> On Tue, 1 Oct 2019 at 16:48, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>>\n>> >> Yeah, I know that, but this doesn't look quite right. I mean to say\n>> >> whatever we want to say via this message is correct, but I am not\n>> >> completely happy with the display part. How about something like:\n>> >> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n>> >> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n>> >> similar lines? If possible, I want to convey this information in a\n>> single\n>> >> sentence.\n>> >>\n>> >> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\n>> > database \\\"%s\\\"\\n\"?\n>>\n>> I think that we should not presume too much about the solution: perhaps\n>> the user did not specify the right database or host and it has nothing to\n>> do with initialization.\n>>\n>> Maybe something like:\n>>\n>> \"pgbench_accounts is missing, perhaps you need to initialize (\\\"pgbench\n>> -i\\\") in database \\\"%s\\\"\\n\"\n>>\n>> The two sentences approach has the logic of \"error\" and a separate \"hint\"\n>> which is often used.\n>>\n>\n> +1 for error and hint separation.\n>\n\nOkay, if you people like the approach of two sentences for the separation\nof \"hint\" and \"error\", then I think the second line should end with a\nperiod. See below note in docs[1]:\n\"Grammar and Punctuation\n\nThe rules are different for primary error messages and for detail/hint\nmessages:\n\nPrimary error messages: Do not capitalize the first letter. Do not end a\nmessage with a period. Do not even think about ending a message with an\nexclamation point.\n\nDetail and hint messages: Use complete sentences, and end each with a\nperiod. Capitalize the first word of sentences. 
Put two spaces after the\nperiod if another sentence follows (for English text; might be\ninappropriate in other languages).\"\n\nAlso, the similar style is used in other places in code, see\ncontrib/oid2name/oid2name.c, contrib/pg_standby/pg_standby.c for similar\nusage.\n\nI shall modify this before commit unless you disagree.\n\n[1] - https://www.postgresql.org/docs/devel/error-style-guide.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Oct 1, 2019 at 8:45 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:On Tue, 1 Oct 2019 at 16:48, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>> Yeah, I know that, but this doesn't look quite right. I mean to say\n>> whatever we want to say via this message is correct, but I am not\n>> completely happy with the display part. How about something like:\n>> \"pgbench_accounts is missing, you need to do initialization (\\\"pgbench\n>> -i\\\") in database \\\"%s\\\"\\n\"? Feel free to propose something else on\n>> similar lines? If possible, I want to convey this information in a single\n>> sentence.\n>>\n>> How about, \"pgbench_accounts is missing, initialize (\\\"pgbench -i\\\") in\n> database \\\"%s\\\"\\n\"?\n\nI think that we should not presume too much about the solution: perhaps \nthe user did not specify the right database or host and it has nothing to \ndo with initialization.\n\nMaybe something like:\n\n\"pgbench_accounts is missing, perhaps you need to initialize (\\\"pgbench \n-i\\\") in database \\\"%s\\\"\\n\"\n\nThe two sentences approach has the logic of \"error\" and a separate \"hint\" \nwhich is often used.+1 for error and hint separation.Okay, if you people like the approach of two sentences for the separation of \"hint\" and \"error\", then I think the second line should end with a period. 
See below note in docs[1]:\"Grammar and PunctuationThe rules are different for primary error messages and for detail/hint messages:Primary error messages: Do not capitalize the first letter. Do not end a message with a period. Do not even think about ending a message with an exclamation point.Detail and hint messages: Use complete sentences, and end each with a period. Capitalize the first word of sentences. Put two spaces after the period if another sentence follows (for English text; might be inappropriate in other languages).\"Also, the similar style is used in other places in code, see contrib/oid2name/oid2name.c, contrib/pg_standby/pg_standby.c for similar usage.I shall modify this before commit unless you disagree.[1] - https://www.postgresql.org/docs/devel/error-style-guide.html-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 2 Oct 2019 04:53:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Tue, Oct 1, 2019 at 10:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Sep 30, 2019 at 5:17 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >\n> > > I don't want to introduce a new pattern in tests which people can then\n> > > tomorrow copy at other places even though such code is not required.\n> > > OTOH, if there is a genuine need for the same, then I am fine.\n> >\n> > Hmmm. The committer is right by definition. Here is a version without\n> > escaping but with a comment instead.\n> >\n>\n> Thanks, attached is a patch with minor modifications which I am\n> planning to push after one more round of review on Thursday morning\n> IST unless there are more comments by anyone else.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Oct 2019 08:37:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\n>> Thanks, attached is a patch with minor modifications which I am\n>> planning to push after one more round of review on Thursday morning\n>> IST unless there are more comments by anyone else.\n>\n> Pushed.\n\nThanks!\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 3 Oct 2019 07:00:22 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hi Fabien, Amit,\n\nI could see that when an invalid number of partitions is specified,\nsometimes pgbench fails with an error \"invalid number of partitions:\n...\" whereas many times it doesn't; instead it creates a number of\npartitions that wasn't specified by the user.\n\nAs partitions is an integer type variable, the maximum value it can\nhold is \"2147483647\". But if I specify partitions as \"3147483647\",\nthe atoi function returns a value less than zero and pgbench terminates\nwith an error. However, if the value for number of partitions\nspecified is something like \"5147483647\", atoi returns a non-negative\nnumber and pgbench creates as many partitions as the value\nreturned by the atoi function.\n\nHave a look at the below examples,\n\n[ashu@localhost bin]$ ./pgbench -i -s 10 --partitions=2147483647 postgres\ndropping old tables...\ncreating tables...\ncreating 2147483647 partitions...\n^C\n[ashu@localhost bin]$ ./pgbench -i -s 10 --partitions=3147483647 postgres\ninvalid number of partitions: \"3147483647\"\n\n[ashu@localhost bin]$ ./pgbench -i -s 10 --partitions=5147483647 postgres\ndropping old tables...\ncreating tables...\ncreating 852516351 partitions...\n^C\n\nThis seems like a problem with the atoi function, doesn't it?\n\nThe atoi function has been used in several places in the pgbench script and I\ncan see similar behaviour in all of them. For example, it has been used with\nthe scale factor and the above observation is true for that as well. So, is\nthis a bug, or do you feel that it isn't and can be ignored? Please\nlet me know your thoughts on this. 
Thank you.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Oct 3, 2019 at 10:30 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> >> Thanks, attached is a patch with minor modifications which I am\n> >> planning to push after one more round of review on Thursday morning\n> >> IST unless there are more comments by anyone else.\n> >\n> > Pushed.\n>\n> Thanks!\n>\n> --\n> Fabien.\n>\n>\n\n\n",
"msg_date": "Thu, 3 Oct 2019 13:26:35 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "\nHello,\n\n> As partitions is an integer type variable, the maximum value it can\n> hold is \"2147483647\". But if I specify partitions as \"3147483647\",\n> atoi function returns a value lesser than zero and pgbench terminates\n> with an error. However, if the value for number of partitions\n> specified is something like \"5147483647\", atoi returns a non-negative\n> number and pgbench creates as many number of partitions as the value\n> returned by atoi function.\n>\n> This seems like a problem with atoi function, isn't it?\n\nYes.\n\n> atoi functions has been used at several places in pgbench script and I\n> can see similar behaviour for all. For e.g. it has been used with\n> scale factor and above observation is true for that as well. So, is\n> this a bug or you guys feel that it isn't and can be ignored? Please\n> let me know your thoughts on this. Thank you.\n\nI think that it is a known bug (as you noted atoi is used more or less \neverywhere in pgbench and other commands) which should be addressed \nseparately: all integer user inputs should be validated for syntax and \noverflow, everywhere, really. This is not currently the case, so I simply \nreplicated the current bad practice when developing this feature.\n\nThere is/was a current patch/discussion to improve integer parsing, which \ncould address this.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 3 Oct 2019 10:22:56 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Thu, Oct 3, 2019 at 1:53 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello,\n>\n> > As partitions is an integer type variable, the maximum value it can\n> > hold is \"2147483647\". But if I specify partitions as \"3147483647\",\n> > atoi function returns a value lesser than zero and pgbench terminates\n> > with an error. However, if the value for number of partitions\n> > specified is something like \"5147483647\", atoi returns a non-negative\n> > number and pgbench creates as many number of partitions as the value\n> > returned by atoi function.\n> >\n> > This seems like a problem with atoi function, isn't it?\n>\n> Yes.\n>\n> > atoi functions has been used at several places in pgbench script and I\n> > can see similar behaviour for all. For e.g. it has been used with\n> > scale factor and above observation is true for that as well. So, is\n> > this a bug or you guys feel that it isn't and can be ignored? Please\n> > let me know your thoughts on this. Thank you.\n>\n> I think that it is a known bug (as you noted atoi is used more or less\n> everywhere in pgbench and other commands) which shoud be addressed\n> separately: all integer user inputs should be validated for syntax and\n> overflow, everywhere, really. This is not currently the case, so I simply\n> replicated the current bad practice when developing this feature.\n>\n\nOkay, I think we should possibly replace atoi with strtol function\ncall for better error handling. It handles the erroneous inputs better\nthan atoi.\n\n> There is/was a current patch/discussion to improve integer parsing, which\n> could address this.\n>\n\nIt seems like you are trying to point out the following discussion on hackers,\n\nhttps://www.postgresql.org/message-id/flat/20190724040237.GB64205%40begriffs.com#5677c361d3863518b0db5d5baae72bbe\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Oct 2019 14:28:25 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
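The overflow behaviour described in this exchange is inherent to atoi(), which has no error reporting at all. Below is a minimal sketch of the strtol()-based validation proposed in the thread; `parse_int_arg` is a made-up name for illustration, not pgbench's actual code:

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical strtol-based replacement for pgbench's atoi() calls.
 * Accepts the string only if it is entirely numeric and fits in an int,
 * so inputs like "5147483647" are rejected instead of silently wrapping. */
static bool
parse_int_arg(const char *str, int *result)
{
    char *end;
    long  val;

    errno = 0;
    val = strtol(str, &end, 10);
    if (errno == ERANGE || end == str || *end != '\0')
        return false;           /* overflow, empty string, or trailing junk */
    if (val < INT_MIN || val > INT_MAX)
        return false;           /* long may be wider than int */
    *result = (int) val;
    return true;
}
```

With a check like this, `--partitions=5147483647` would fail cleanly rather than silently creating 852516351 partitions.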
{
"msg_contents": "The documentation and pgbench --help output that accompanied this patch \nclaims that the argument to pgbench --partition-method is optional and \ndefaults to \"range\", but that is not actually the case, as the \nimplementation requires an argument. Could you please sort this out?\n\nPersonally, I think making the argument optional is unnecessary and \nconfusing, so I'd just change the documentation.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 10:54:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 3:24 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> The documentation and pgbench --help output that accompanied this patch\n> claims that the argument to pgbench --partition-method is optional and\n> defaults to \"range\", but that is not actually the case, as the\n> implementation requires an argument. Could you please sort this out?\n>\n\nAFAICS, if the user omits this argument, then the default is range as\nspecified in docs. I tried by using something like 'pgbench.exe -i -s\n1 --partitions=2 postgres' and then run 'pgbench -S postgres'.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 15:34:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "On 2020-01-03 11:04, Amit Kapila wrote:\n> On Fri, Jan 3, 2020 at 3:24 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> The documentation and pgbench --help output that accompanied this patch\n>> claims that the argument to pgbench --partition-method is optional and\n>> defaults to \"range\", but that is not actually the case, as the\n>> implementation requires an argument. Could you please sort this out?\n>>\n> \n> AFAICS, if the user omits this argument, then the default is range as\n> specified in docs. I tried by using something like 'pgbench.exe -i -s\n> 1 --partitions=2 postgres' and then run 'pgbench -S postgres'.\n\nAh, the way I interpreted this is that the argument to \n--partition-method itself is optional.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 11:51:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
},
{
"msg_contents": "Hello Peter,\n\n>>> The documentation and pgbench --help output that accompanied this patch\n>>> claims that the argument to pgbench --partition-method is optional and\n>>> defaults to \"range\", but that is not actually the case, as the\n>>> implementation requires an argument. Could you please sort this out?\n>> \n>> AFAICS, if the user omits this argument, then the default is range as\n>> specified in docs. I tried by using something like 'pgbench.exe -i -s\n>> 1 --partitions=2 postgres' and then run 'pgbench -S postgres'.\n>\n> Ah, the way I interpreted this is that the argument to --partition-method \n> itself is optional.\n\nYep. Optional stuff would be in [], where () is used for choices.\n\nWould the attached have improved your understanding? It is somehow more \nconsistent with other help lines.\n\n-- \nFabien.",
"msg_date": "Fri, 3 Jan 2020 13:07:33 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - allow to create partitioned tables"
}
] |
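The ambiguity in that last thread comes down to how the option is declared to getopt_long(): an option declared with a required argument must always carry a value, so the help text should not bracket the value as if it were optional. A hedged sketch of the distinction — the option table and function below are made up for illustration, not pgbench's actual declarations:

```c
#include <getopt.h>
#include <stddef.h>

/* Hypothetical option table: with required_argument,
 * "--partition-method" must be given a value, so help text should read
 * "--partition-method=(range|hash)" rather than bracketing the value. */
static struct option demo_options[] = {
    {"partition-method", required_argument, NULL, 'm'},
    {NULL, 0, NULL, 0}
};

/* Returns the value attached to --partition-method, or NULL if the
 * option is missing its required argument. */
static const char *
parse_partition_method(int argc, char **argv)
{
    int c;

    optind = 1;                 /* reset getopt state for repeated calls */
    opterr = 0;                 /* suppress getopt's own stderr messages */
    while ((c = getopt_long(argc, argv, "", demo_options, NULL)) != -1)
    {
        if (c == 'm')
            return optarg;      /* value supplied after '=' or as next arg */
    }
    return NULL;
}
```

Running this with `--partition-method` alone makes getopt_long report an error, which matches Peter's reading that the value is not, in fact, optional.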
[
{
"msg_contents": "\nIn commit ed8a7c6fcf9 we added some extra tests to pgbench, including\nthis snippet:\n\n\n \\setshell two\\\n expr \\\n 1 + :one\n\nUnfortunately, this isn't portable, as I've just discovered at the cost\nof quite a bit of time. In particular, you can't assume expr is present\nand in the path on Windows. The Windows equivalent would be something like:\n\n\n \\setshell two\\\n @set /a c = 1 + :one && echo %c%\n\n\nI propose to prepare a patch along these lines. Alternatively we could\njust drop it - I don't think the test matters all that hugely.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 23 Jul 2019 18:25:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pgbench tests vs Windows"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> In commit ed8a7c6fcf9 we added some extra tests to pgbench, including\n> this snippet:\n> \\setshell two\\\n> expr \\\n> 1 + :one\n> Unfortunately, this isn't portable, as I've just discovered at the cost\n> of quite a bit of time. In particular, you can't assume expr is present\n> and in the path on Windows.\n\nUgh.\n\n> The Windows equivalent would be something like:\n> \\setshell two\\\n> @set /a c = 1 + :one && echo %c%\n\nI wonder how universal that is, either.\n\n> I propose to prepare a patch along these lines. Alternatively we could\n> just drop it - I don't think the test matters all that hugely.\n\nI'd say try that, but if it doesn't work right away, just skip the\ntest on Windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 19:13:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench tests vs Windows"
},
{
"msg_contents": "On Tue, Jul 23, 2019 at 07:13:51PM -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> I propose to prepare a patch along these lines. Alternatively we could\n>> just drop it - I don't think the test matters all that hugely.\n> \n> I'd say try that, but if it doesn't work right away, just skip the\n> test on Windows.\n\n+1. I don't see exactly why we should drop it either.\n--\nMichael",
"msg_date": "Wed, 24 Jul 2019 11:36:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench tests vs Windows"
},
{
"msg_contents": "Hello Andrew,\n\n> Unfortunately, this isn't portable, as I've just discovered at the cost\n> of quite a bit of time. In particular, you can't assume expr is present\n> and in the path on Windows. The Windows equivalent would be something like:\n>\n> \\setshell two\\\n> @set /a c = 1 + :one && echo %c%\n\nHmmm... Can we assume that echo is really always there on Windows? If so, \nthe attached patch does something only with \"echo\".\n\n> I propose to prepare a patch along these lines. Alternatively we could\n> just drop it - I don't think the test matters all that hugely.\n\nThe point is to have some minimal coverage so that unexpected changes are \ncaught. This is the only call to a working \\setshell.\n\n-- \nFabien.",
"msg_date": "Wed, 24 Jul 2019 07:56:29 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench tests vs Windows"
},
{
"msg_contents": "\nOn 7/24/19 3:56 AM, Fabien COELHO wrote:\n>\n> Hello Andrew,\n>\n>> Unfortunately, this isn't portable, as I've just discovered at the cost\n>> of quite a bit of time. In particular, you can't assume expr is present\n>> and in the path on Windows. The Windows equivalent would be something\n>> like:\n>>\n>> \\setshell two\\\n>> @set /a c = 1 + :one && echo %c%\n>\n> Hmmm... Can we assume that echo is really always there on Windows? If\n> so, the attached patch does something only with \"echo\".\n\n\nYes, it's built into the cmd processor (as is \"set /a\", to answer Tom's\nearlier question about portability - I tested the above back to XP.)\n\n\necho is much more universal, and I can confirm that the suggested fix\nworks on the Windows test box I'm using.\n\n\nI'll apply and backpatch that. Thanks\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 09:08:59 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench tests vs Windows"
}
] |
[
{
"msg_contents": "Ever since pg_basebackup was created, it had a comment like this:\n\n * End of chunk. If requested, and this is the base tablespace\n * write configuration file into the tarfile. When done, close the\n * file (but not stdout).\n\nBut, why make the exception for output going to stdout? If we are done\nwith it, why not close it?\n\nAfter a massive maintenance operation, I want to re-seed a streaming\nstandby, which I start to do by:\n\npg_basebackup -D - -Ft -P -X none | pxz > base.tar.xz\n\nBut the archiver is way behind, so when it finishes the basebackup part, I\nget:\n\nNOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to\nbe archived\nWARNING: pg_stop_backup still waiting for all required WAL segments to be\narchived (60 seconds elapsed)\n...\n\nThe base backup file is not finalized, because pg_basebackup has not closed\nits stdout while waiting for the WAL segment to be archived. The file is\nincomplete due to data stuck in buffers, so I can't copy it to where I want\nand bring up a new streaming replica (which bypasses the WAL archive, so\nwould otherwise work). Also, if pg_basebackup gets interrupted somehow while\nit is waiting for WAL archiving, the backup will be invalid, as it won't\nflush the last bit of data. Of course if it gets interrupted, I would have\nto test the backup to make sure it is valid. But testing it and finding\nthat it is valid is better than testing it and finding that it is not.\n\nI think it makes sense for pg_basebackup to wait for the WAL to be\narchived, but there is no reason for it to hold the base.tar.xz file\nhostage while it does so.\n\nIf I simply remove the test for strcmp(basedir, \"-\"), as in the attached, I\nget the behavior I desire, and nothing bad seems to happen. Meaning, \"make\ncheck -C src/bin/pg_basebackup/\" still passes (but only tested on Linux).\n\nIs there a reason not to do this?\n\nCheers,\n\nJeff",
"msg_date": "Tue, 23 Jul 2019 22:16:26 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup delays closing of stdout"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-23 22:16:26 -0400, Jeff Janes wrote:\n> Ever since pg_basebackup was created, it had a comment like this:\n> \n> * End of chunk. If requested, and this is the base tablespace\n> * write configuration file into the tarfile. When done, close the\n> * file (but not stdout).\n> \n> But, why make the exception for output going to stdout? If we are done\n> with it, why not close it?\n\nI think closing stdout is a bad idea that can cause issues in a lot of\nsituations. E.g. because a later open() will then use that fd (the\nlowest unused fd always gets used), and then the next time somebody\nwants to write something to stdout, there's normal output interspersed\nwith some random file. You'd at the least have to reopen /dev/null into\nits place or such.\n\nIt also seems likely to be a trap for some future feature additions that\nwant to write another file to stdout or such - in contrast to the normal\nfiles it can't be reopened.\n\n\n> After a massive maintenance operation, I want to re-seed a streaming\n> standby, which I start to do by:\n> \n> pg_basebackup -D - -Ft -P -X none | pxz > base.tar.xz\n> \n> But the archiver is way behind, so when it finishes the basebackup part, I\n> get:\n> \n> NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to\n> be archived\n> WARNING: pg_stop_backup still waiting for all required WAL segments to be\n> archived (60 seconds elapsed)\n> ...\n> \n> The base backup file is not finalized, because pg_basebackup has not closed\n> its stdout while waiting for the WAL segment to be archived. The file is\n> incomplete due to data stuck in buffers, so I can't copy it to where I want\n> and bring up a new streaming replica (which bypasses the WAL archive, so\n> would otherwise work).\n\nThat seems more like an argument for sticking a fflush() there, than\nclosing stdout.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Jul 2019 21:54:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup delays closing of stdout"
}
] |
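The fd-reuse hazard Andres describes can be made concrete: POSIX guarantees that open() returns the lowest free descriptor, so a closed stdout slot gets recycled by the very next open. The sketch below is mine (the function name and scratch path are invented); the descriptor is saved and restored so the demo has no lasting side effects:

```c
#include <fcntl.h>
#include <unistd.h>

/* After close(1), the next open() claims fd 1, so anything later
 * written to "stdout" would silently land in that unrelated file.
 * This is why fflush(stdout) is the safer fix than closing stdout. */
static int
fd_claimed_after_closing_stdout(const char *scratch_path)
{
    int saved = dup(1);        /* keep a handle on the real stdout */
    int fd, claimed;

    close(1);                  /* fd 1 is now the lowest free descriptor */
    fd = open(scratch_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    claimed = fd;
    close(fd);
    dup2(saved, 1);            /* put the real stdout back */
    close(saved);
    return claimed;
}
```

On a POSIX system this returns 1: the scratch file has quietly taken over stdout's descriptor slot, which is exactly the interleaving hazard a later "write to stdout" would hit.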
[
{
"msg_contents": "Sorry in advance for a link-breaking message, but the original mail was\ntoo old and gmail doesn't allow me to craft the required headers to link\nto it.\n\nhttps://www.postgresql.org/message-id/CAKm4Xs71Ma8bV1fY6Gfz9Mg3AKmiHuoJNpxeDVF_KTVOKoy1WQ%40mail.gmail.com\n\n> Please find the proposed patch for review. I will attach it to\n> commitfest as well\n\nPacemaker suffers the same thing. We suggest that our customers \"start the\nserver alone to perform recovery, then start pacemaker\" if recovery is\nexpected to take long enough to reach the timeout.\n\nI don't think it is a good thing to report the status SERVICE_RUNNING although\nit actually is not (yet). I think the right direction here is that, if\npg_ctl returns by timeout, pgwin32_ServiceMain kills the starting\nserver and then reports something like \"timed out and the server was stopped;\nplease make sure the server does not take a long time to perform\nrecovery.\".\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Jul 2019 15:17:36 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Problem during Windows service start"
},
{
"msg_contents": "On 2019-Jul-24, Kyotaro Horiguchi wrote:\n\n> > Please find the proposed patch for review. I will attach it to\n> > commitfest as well\n> \n> Pacemaker suffers the same thing. We suggest our customers that \"start\n> server alone to perform recovery then start pacemaker if it is\n> expected to take a long time for recovery so that reaches time out\".\n> \n> I don't think it is good think to let status SERVICE_RUNNING although\n> it actually is not (yet). I think the right direction here is that, if\n> pg_ctl returns by timeout, pgwin32_ServiceMain kills the starting\n> server then report something like \"timedout and server was stopped,\n> please make sure the server not to take a long time to perform\n> recovery.\".\n\nI'm not sure that's a great reaction; it makes total recovery time\neven longer. How would the user ensure that recovery takes a shorter\ntime? We'd be forcing them to start the service over and over, until\nrecovery completes.\n\nCan't we have pg_ctl just continue to wait indefinitely? So we'd set\nSERVICE_START_PENDING when wait_for_postmaster is out of patience, then\nloop again -- until recovery completes. Exiting pg_ctl on timeout seems\nreasonable for interactive use, but maybe for service use it's not\nreasonable.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 5 Sep 2019 19:09:45 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Problem during Windows service start"
},
{
"msg_contents": "On Thu, Sep 05, 2019 at 07:09:45PM -0400, Alvaro Herrera from 2ndQuadrant wrote:\n> Can't we have pg_ctl just continue to wait indefinitely? So we'd set\n> SERVICE_START_PENDING when wait_for_postmaster is out of patience, then\n> loop again -- until recovery completes. Exiting pg_ctl on timeout seems\n> reasonable for interactive use, but maybe for service use it's not\n> reasonable.\n\nThe root of the problem here is that the time recovery takes is not\nsomething that can be guessed, and that service registering happens in\nthe background. It depends on the time the last checkpoint occurred,\nthe load on the machine involved and the WAL operations done. So it\nseems to me that Alvaro's idea is something which we could work on for\nat least HEAD. There is also the path of providing a longer timeout;\nstill, that's just a workaround.\n\nMy understanding is that this could be qualified as a bug because we\nhave to run pg_ctl again after starting the service from the Windows\nservice control center.\n\nSo, are there plans to move on with this patch? It has been waiting on\nthe author for some time now.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 12:55:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Problem during Windows service start"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 12:55:13PM +0900, Michael Paquier wrote:\n> So, are there plans to move on with this patch? It is waiting on\n> author for some time now.\n\nSeeing no activity from the author or even the reviewer, I have marked\nthe patch as returned with feedback for now. I am not actually fully\nconvinced that this should be backpatched either, so it could be done\nas a future improvement.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 16:06:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Problem during Windows service start"
}
] |
[
{
"msg_contents": "Hi all,\n\nRecently, one of the test beds we use has blown up once when doing\nstreaming replication like that:\nFATAL: could not seek to end of file \"base/16386/19817_fsm\": No such\n file or directory \nCONTEXT: WAL redo at 60/8DA22448 for Heap2/CLEAN: remxid 65751197\nLOG: startup process (PID 44886) exited with exit code 1\n\nAll the WAL records have been wiped out since, so I don't know exactly\nwhat happened, but I could track down that this FSM file got removed\na couple of hours before as I got my hands on some FS-level logs which\nshowed a deletion.\n\nThis happens in the context of a WAL record XLOG_HEAP2_CLEAN, and the\nredo logic is in heap_xlog_clean(), where there are FSM lookups within\nXLogRecordPageWithFreeSpace() -> XLogReadBufferExtended(). At the\nsubsequent restart, recovery has been able to move on after the\nfailing record, so the FSM has been rebuilt correctly, still that\ncaused an HA setup to be less... Available. However, we are rather\ncareful in those code paths to call smgrcreate() so as the file gets\ncreated at redo if it is not around. Before blaming a lower level of\nthe application stack, I am wondering if we have some issues with\nmdfd_vfd meaning that the file has been removed but that it is still\ntracked as opened. A quick lookup of the code does not show any\nissues, has anyone seen this particular error recently?\n\nThe last commit on REL_11_STABLE which touched this area is this one\nFWIW:\ncommit: 6872c2be6a97057aa736110e31f0390a53305c9e\nauthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\ndate: Wed, 15 Aug 2018 18:09:29 -0300\nUpdate FSM on WAL replay of page all-visible/frozen\n\nAlso, this setup was using 11.2 (I know this one lags behind a bit,\nanyway...).\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 24 Jul 2019 15:45:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Seek failure at end of FSM file during WAL replay (in 11)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Recently, one of the test beds we use has blown up once when doing\n> streaming replication like that:\n> FATAL: could not seek to end of file \"base/16386/19817_fsm\": No such\n> file or directory \n> CONTEXT: WAL redo at 60/8DA22448 for Heap2/CLEAN: remxid 65751197\n> LOG: startup process (PID 44886) exited with exit code 1\n\n> All the WAL records have been wiped out since, so I don't know exactly\n> what happened, but I could track down that this FSM file got removed\n> a couple of hours before as I got my hands on some FS-level logs which\n> showed a deletion.\n\nHm. AFAICS the immediate issuer of the error must have been\n_mdnblocks(); there are other matches to that error string but\nthey are in places where we can tell which file the seek must\nhave been applied to, and it wasn't a FSM file.\n\n> Before blaming a lower level of\n> the application stack, I am wondering if we have some issues with\n> mdfd_vfd meaning that the file has been removed but that it is still\n> tracked as opened.\n\nlseek() per se presumably would never return ENOENT. A more likely\ntheory is that the file wasn't actually open but only had a leftover\nVFD entry, and when FileSize() -> FileAccess() tried to open it,\nthe open failed with ENOENT --- but _mdnblocks() would still call it\na seek failure.\n\nSo I'd opine that this is a pretty high-level failure --- what are\nwe doing trying to replay WAL against a table that's been dropped?\nOr if it wasn't dropped, why was the FSM removed?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 13:30:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Seek failure at end of FSM file during WAL replay (in 11)"
},
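Tom's observation that lseek() per se never returns ENOENT is easy to demonstrate: a descriptor remains fully usable after the file is unlinked, so the ENOENT must come from a fresh open() of the vanished file, as in the stale-VFD theory above. A sketch under that assumption (the function name and path below are made up):

```c
#include <fcntl.h>
#include <unistd.h>

/* An open descriptor survives unlink(): seeking to the end still works
 * and reports the true size.  ENOENT can therefore only surface when a
 * closed-but-tracked VFD forces a re-open of the removed file. */
static long
size_of_unlinked_open_file(const char *path)
{
    int  fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    long end;

    (void) write(fd, "12345678", 8);
    unlink(path);                          /* gone from the directory... */
    end = (long) lseek(fd, 0, SEEK_END);   /* ...but still seekable */
    close(fd);
    return end;
}
```

The seek succeeds and returns the file's 8 bytes even though the name is gone, which is consistent with the error arising in FileAccess() rather than in lseek() itself.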
{
"msg_contents": "On Wed, Jul 24, 2019 at 01:30:42PM -0400, Tom Lane wrote:\n> Hm. AFAICS the immediate issuer of the error must have been\n> _mdnblocks(); there are other matches to that error string but\n> they are in places where we can tell which file the seek must\n> have been applied to, and it wasn't a FSM file.\n\nYep, that matches my guesses. _mdnblocks() is the only match among\nthe three places with this error string.\n\n> lseek() per se presumably would never return ENOENT. A more likely\n> theory is that the file wasn't actually open but only had a leftover\n> VFD entry, and when FileSize() -> FileAccess() tried to open it,\n> the open failed with ENOENT --- but _mdnblocks() would still call it\n> a seek failure.\n> \n> So I'd opine that this is a pretty high-level failure --- what are\n> we doing trying to replay WAL against a table that's been dropped?\n> Or if it wasn't dropped, why was the FSM removed?\n\nInteresting theory. In this particular workload, all DDLs are run\nwhen the product runs firstboot and the schema does not change\nafterwards, so the stuff does not drop any tables. Actually I have\nbeen able to extract more information from the log bundles I have, as\nthis stuff does a lookup of all the files of the data folder at the\nmoment a log bundle is taken. For this relation the main fork exists\non the primary and there is no FSM or VM. On the standby, the main\nfork also exists but there is an FSM, which I guess has been rebuilt\nduring the follow-up recovery. So could it be that the FSM was removed\non the primary, its removal was replayed on the standby because of\nthat, and the stale VFD entry was never properly cleaned up?\n--\nMichael",
"msg_date": "Fri, 26 Jul 2019 09:59:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Seek failure at end of FSM file during WAL replay (in 11)"
}
] |
[
{
"msg_contents": "Sorry in advance for link-breaking message force by gmail..\n\nhttps://www.postgresql.org/message-id/flat/CY4PR2101MB0804CE9836E582C0702214E8AAD30@CY4PR2101MB0804.namprd21.prod.outlook.com\n\nI assume that we are in a consensus about the problem we are to fix\nhere.\n\n> 0a 00000004`8080cc30 00000004`80dcf917 postgres!PGSemaphoreLock+0x65 [d:\\orcasqlagsea10\\14\\s\\src\\backend\\port\\win32_sema.c @ 158]\n> 0b 00000004`8080cc90 00000004`80db025c postgres!LWLockAcquire+0x137 [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\lmgr\\lwlock.c @ 1234]\n> 0c 00000004`8080ccd0 00000004`80db25db postgres!AbortBufferIO+0x2c [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\buffer\\bufmgr.c @ 3995]\n> 0d 00000004`8080cd20 00000004`80dbce36 postgres!AtProcExit_Buffers+0xb [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\buffer\\bufmgr.c @ 2479]\n> 0e 00000004`8080cd50 00000004`80dbd1bd postgres!shmem_exit+0xf6 [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\ipc\\ipc.c @ 262]\n> 0f 00000004`8080cd80 00000004`80dbccfd postgres!proc_exit_prepare+0x4d [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\ipc\\ipc.c @ 188]\n> 10 00000004`8080cdb0 00000004`80ef9e74 postgres!proc_exit+0xd [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\ipc\\ipc.c @ 141]\n> 11 00000004`8080cde0 00000004`80ddb6ef postgres!errfinish+0x204 [d:\\orcasqlagsea10\\14\\s\\src\\backend\\utils\\error\\elog.c @ 624]\n> 12 00000004`8080ce50 00000004`80db0f59 postgres!mdread+0x12f [d:\\orcasqlagsea10\\14\\s\\src\\backend\\storage\\smgr\\md.c @ 806]\n\nOk, we are fixing this. The proposed patch lets LWLockReleaseAll()\ncalled before InitBufferPoolBackend() by registering the former after\nthe latter into on_shmem_exit list. 
Even if it works, I think it's\nneither clean nor safe to register multiple order-sensitive callbacks.\n\nAtProcExit_Buffers has the following comment:\n\n> * During backend exit, ensure that we released all shared-buffer locks and\n> * assert that we have no remaining pins.\n\nAnd the only caller of it is shmem_exit. More than that, all other\ncall sites call LWLockReleaseAll() just before calling it. If\nthat's the case, why don't we just release all LWLocks in shmem_exit\nor in AtProcExit_Buffers before calling AbortBufferIO()? I think it's\nsufficient that AtProcExit_Buffers calls it at the beginning. (The\ncomment for the function needs editing).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Jul 2019 16:08:54 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Fix Proposal - Deadlock Issue in Single User Mode When IO\n Failure Occurs"
}
] |
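The order-sensitivity objected to above comes from `shmem_exit()` running its callback list in reverse registration order (LIFO), so the patch's fix of registering the lock-release callback after the buffer cleanup makes it run first. A minimal Python model of that mechanism (the names are labels only, not the real callbacks):

```python
class ExitCallbackList:
    """A minimal LIFO registry in the style of on_shmem_exit():
    the callback registered last runs first at shutdown."""

    def __init__(self):
        self._callbacks = []

    def register(self, func):
        self._callbacks.append(func)

    def run_all(self):
        # shmem_exit() walks its list in reverse registration order.
        while self._callbacks:
            self._callbacks.pop()()

def shutdown_order(register_locks_last):
    """Return the order cleanup steps run in, given registration order."""
    order = []
    cbs = ExitCallbackList()
    if register_locks_last:
        cbs.register(lambda: order.append("AtProcExit_Buffers"))
        cbs.register(lambda: order.append("LWLockReleaseAll"))
    else:
        cbs.register(lambda: order.append("LWLockReleaseAll"))
        cbs.register(lambda: order.append("AtProcExit_Buffers"))
    cbs.run_all()
    return order
```

With the locks callback registered last, locks are released before the buffer-pool cleanup runs — which is what the deadlock fix depends on, and why depending on registration order at all is fragile.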
[
{
"msg_contents": "Sorry in advance for link-breaking message forced by gmail..\n\nhttps://www.postgresql.org/message-id/flat/20190202083822.GC32531@gust.leadboat.com\n\n> 1. The result of the test is valid only until we release the SLRU ControlLock,\n> which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate\n> segments for deletion. Once we release that lock, latest_page_number can\n> advance. This creates a TOCTOU race condition, allowing excess deletion:\n>\n>\n> [local] test=# table trunc_clog_concurrency ;\n> ERROR: could not access status of transaction 2149484247\n> DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n\nIt seems like some other vacuum process saw larger cutoff page? If I'm\nnot missing something, the missing page is no longer the\n\"recently-populated\" page at the time (As I understand it as the last\npage that holds valid data). Couldn't we just ignore ENOENT there?\n\n> 2. By the time the \"apparent wraparound\" test fires, we've already WAL-logged\n> the truncation. clog_redo() suppresses the \"apparent wraparound\" test,\n> then deletes too much. Startup then fails:\n\nI agree that if truncation is skipped after issuing log, it will\nlead to data-loss at the next recovery. But the follwoing log..:\n\n> 881997 2019-02-10 02:53:32.105 GMT FATAL: could not access status of transaction 708112327\n> 881997 2019-02-10 02:53:32.105 GMT DETAIL: Could not open file \"pg_xact/02A3\": No such file or directory.\n> 881855 2019-02-10 02:53:32.107 GMT LOG: startup process (PID 881997) exited with exit code 1\n\nIf it came from the same reason as 1, the log is simply ignorable, so\nrecovery stopping by the error is unacceptable, but the ENOENT is just\nignorable for the same reason.\n\nAs the result, I agree to (a) (fix rounding), and (c) (test\nwrap-around before writing WAL) but I'm not sure for others. 
And an\nadditional fix for ignorable ENOENT is needed.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Jul 2019 17:27:18 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Spurious \"apparent wraparound\" via SimpleLruTruncate() rounding"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 05:27:18PM +0900, Kyotaro Horiguchi wrote:\n> Sorry in advance for link-breaking message forced by gmail..\n\nUsing the archives page \"Resend email\" link avoids that.\n\n> https://www.postgresql.org/message-id/flat/20190202083822.GC32531@gust.leadboat.com\n> \n> > 1. The result of the test is valid only until we release the SLRU ControlLock,\n> > which we do before SlruScanDirCbDeleteCutoff() uses the cutoff to evaluate\n> > segments for deletion. Once we release that lock, latest_page_number can\n> > advance. This creates a TOCTOU race condition, allowing excess deletion:\n> >\n> >\n> > [local] test=# table trunc_clog_concurrency ;\n> > ERROR: could not access status of transaction 2149484247\n> > DETAIL: Could not open file \"pg_xact/0801\": No such file or directory.\n> \n> It seems like some other vacuum process saw larger cutoff page?\n\nNo, just one VACUUM suffices.\n\n> If I'm\n> not missing something, the missing page is no longer the\n> \"recently-populated\" page at the time (As I understand it as the last\n> page that holds valid data). Couldn't we just ignore ENOENT there?\n\nThe server reported this error while attempting to read CLOG to determine\nwhether a tuple's xmin committed or aborted. That ENOENT means the relevant\nCLOG page is not available. To ignore that ENOENT, the server would need to\nguess whether to consider the xmin committed or consider it aborted. So, no,\nwe can't just ignore the ENOENT.\n\n\n",
"msg_date": "Wed, 24 Jul 2019 20:45:48 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Spurious \"apparent wraparound\" via SimpleLruTruncate() rounding"
}
] |
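The excess-deletion and "apparent wraparound" questions above both hinge on wraparound-aware page comparisons. A hedged sketch of the modular "precedes" test, in the spirit of TransactionIdPrecedes() — the real SLRU code uses per-SLRU PagePrecedes callbacks and different page-number widths, so this is illustrative only:

```python
PAGE_MASK = 0xFFFFFFFF  # assume 32-bit page numbers for illustration

def page_precedes(a, b):
    """Wraparound-aware 'a is logically before b' test via a signed
    32-bit difference: half the number space is 'before' b, half 'after'."""
    diff = (a - b) & PAGE_MASK
    if diff & 0x80000000:      # high bit set: interpret as negative
        diff -= PAGE_MASK + 1
    return diff < 0
```

Near the wraparound boundary the naive comparison `0xFFFFFFFE < 1` gives the wrong answer, while the signed-difference test does not — which is why off-by-one rounding of the cutoff page, and re-checking it under the lock, matter so much here.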
[
{
"msg_contents": "Hi,\n\nI noticed the issue in to_timestamp()/to_date() while handling the double\nquote literal string. If any double quote literal characters found in\nformat, we generate the NODE_TYPE_CHAR in parse format and store that\nactual character in FormatNode->character. n DCH_from_char, we just\nincrement the input string by length of character for NODE_TYPE_CHAR.\nWe are actually not matching these characters in input string and because\nof this, date values get changed if quoted literal string is not identical\nin input and format string.\n\ne.g:\n\npostgres@78619=#select to_timestamp('2019-05-24T23:12:45',\n'yyyy-mm-dd\"TT\"hh24:mi:ss');\n to_timestamp\n---------------------------\n 2019-05-24 03:12:45+05:30\n(1 row)\n\n\nIn above example, the quoted string is 'TT', so it just increment the input\nstring by 2 while handling these characters and returned the wrong hour\nvalue.\n\nMy suggestion is to match the exact characters from quoted literal string\nin input string and if doesn't match then throw an error.\n\nAttached is the POC patch which almost works for all scenarios except for\nwhitespace - as a quote character.\n\nSuggestions?\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Wed, 24 Jul 2019 17:08:15 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Issue in to_timestamp/to_date while handling the quoted literal\n string"
},
{
"msg_contents": "Hi Suraj,\n\nI think the documentation is reasonably clear about this behaviour, quote:\n\n\" In to_date, to_number, and to_timestamp, literal text and double-quoted\nstrings result in skipping the number of characters contained in the\nstring; for example \"XX\" skips two input characters (whether or not they\nare XX).\"\n\nI can appreciate that this isn't the behaviour you intuitively expected\nfrom to_timestamp, and I don't think you'd be the first or the last. The\npurpose of these functions was never to validate that your input string\nprecisely matches the non-coding parts of your format pattern. For that, I\nthink you'd be better served by using regular expressions.\n\nJust as an aside, in the example you gave, the string '2019-05-24T23:12:45'\nwill cast directly to timestamp just fine, so it isn't the kind of\nsituation to_timestamp was really intended for. It's more for when your\ninput string is in an obscure (or ambiguous) format that is known to you in\nadvance.\n\nI hope that helps.\n\nCheers\nBrendan\n\nOn Wed, 24 Jul 2019 at 21:38, Suraj Kharage <suraj.kharage@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> I noticed the issue in to_timestamp()/to_date() while handling the double\n> quote literal string. If any double quote literal characters found in\n> format, we generate the NODE_TYPE_CHAR in parse format and store that\n> actual character in FormatNode->character. 
In DCH_from_char, we just\n> increment the input string by length of character for NODE_TYPE_CHAR.\n> We are actually not matching these characters in input string and because\n> of this, date values get changed if quoted literal string is not identical\n> in input and format string.\n>\n> e.g:\n>\n> postgres@78619=#select to_timestamp('2019-05-24T23:12:45',\n> 'yyyy-mm-dd\"TT\"hh24:mi:ss');\n> to_timestamp\n> ---------------------------\n> 2019-05-24 03:12:45+05:30\n> (1 row)\n>\n>\n> In above example, the quoted string is 'TT', so it just increment the\n> input string by 2 while handling these characters and returned the wrong\n> hour value.\n>\n> My suggestion is to match the exact characters from quoted literal string\n> in input string and if doesn't match then throw an error.\n>\n> Attached is the POC patch which almost works for all scenarios except for\n> whitespace - as a quote character.\n>\n> Suggestions?\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n> EnterpriseDB Corporation,\n> The Postgres Database Company.\n>\n",
"msg_date": "Wed, 24 Jul 2019 22:54:39 +1000",
"msg_from": "Brendan Jurd <direvus@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Issue in to_timestamp/to_date while handling the quoted literal\n string"
},
{
"msg_contents": "Thanks for the clarification Brendan, that really helps.\n\nOn Wed, Jul 24, 2019 at 6:24 PM Brendan Jurd <direvus@gmail.com> wrote:\n\n> Hi Suraj,\n>\n> I think the documentation is reasonably clear about this behaviour, quote:\n>\n> \" In to_date, to_number, and to_timestamp, literal text and double-quoted\n> strings result in skipping the number of characters contained in the\n> string; for example \"XX\" skips two input characters (whether or not they\n> are XX).\"\n>\n> I can appreciate that this isn't the behaviour you intuitively expected\n> from to_timestamp, and I don't think you'd be the first or the last. The\n> purpose of these functions was never to validate that your input string\n> precisely matches the non-coding parts of your format pattern. For that, I\n> think you'd be better served by using regular expressions.\n>\n> Just as an aside, in the example you gave, the string\n> '2019-05-24T23:12:45' will cast directly to timestamp just fine, so it\n> isn't the kind of situation to_timestamp was really intended for. It's\n> more for when your input string is in an obscure (or ambiguous) format that\n> is known to you in advance.\n>\n> I hope that helps.\n>\n> Cheers\n> Brendan\n>\n> On Wed, 24 Jul 2019 at 21:38, Suraj Kharage <\n> suraj.kharage@enterprisedb.com> wrote:\n>\n>> Hi,\n>>\n>> I noticed the issue in to_timestamp()/to_date() while handling the double\n>> quote literal string. If any double quote literal characters found in\n>> format, we generate the NODE_TYPE_CHAR in parse format and store that\n>> actual character in FormatNode->character. 
In DCH_from_char, we just\n>> increment the input string by length of character for NODE_TYPE_CHAR.\n>> We are actually not matching these characters in input string and because\n>> of this, date values get changed if quoted literal string is not identical\n>> in input and format string.\n>>\n>> e.g:\n>>\n>> postgres@78619=#select to_timestamp('2019-05-24T23:12:45',\n>> 'yyyy-mm-dd\"TT\"hh24:mi:ss');\n>> to_timestamp\n>> ---------------------------\n>> 2019-05-24 03:12:45+05:30\n>> (1 row)\n>>\n>>\n>> In above example, the quoted string is 'TT', so it just increment the\n>> input string by 2 while handling these characters and returned the wrong\n>> hour value.\n>>\n>> My suggestion is to match the exact characters from quoted literal string\n>> in input string and if doesn't match then throw an error.\n>>\n>> Attached is the POC patch which almost works for all scenarios except for\n>> whitespace - as a quote character.\n>>\n>> Suggestions?\n>> --\n>> --\n>>\n>> Thanks & Regards,\n>> Suraj kharage,\n>> EnterpriseDB Corporation,\n>> The Postgres Database Company.\n>>\n>\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\nEnterpriseDB Corporation,\nThe Postgres Database Company.",
"msg_date": "Thu, 25 Jul 2019 12:17:29 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Issue in to_timestamp/to_date while handling the quoted literal\n string"
}
] |
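The documented skip semantics Brendan cites, and the strict matching Suraj proposes, can be contrasted in a few lines of Python. This is an illustrative sketch only — the real logic lives in DCH_from_char in formatting.c, and `consume_literal` is a hypothetical helper:

```python
def consume_literal(input_str, literal, strict=False):
    """Documented to_timestamp() behaviour for a double-quoted format
    literal: skip len(literal) input characters whether or not they
    match. With strict=True, sketch the thread's proposal instead:
    require an exact match and complain otherwise."""
    if strict and not input_str.startswith(literal):
        raise ValueError('unexpected input for literal "%s"' % literal)
    return input_str[len(literal):]
```

Against the thread's example, the remaining input after the date is `T23:12:45`; skipping two characters for `"TT"` eats the hour's leading digit, so `hh24` then sees `3`, reproducing the reported `03:12:45` result.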
[
{
"msg_contents": "Server is generally running pretty well, and is high volume. This\nquery is not new and is also medium volume. Database rebooted in\nabout 4 seconds with no damage; fast enough we didn't even trip alarms\n(I noticed this troubleshooting another issue). We are a couple of\nbug fixes releases behind but I didn't see anything obvious in the\nrelease notes suggesting a resolved issue. Anyone have any ideas?\nthanks in advance.\n\nmerlin\n\n*** glibc detected *** postgres: rms ysconfig 10.33.190.21(36788)\nSELECT: double free or corruption (!prev): 0x0000000001fb2140 ***\n======= Backtrace: =========\n/lib64/libc.so.6(+0x75dee)[0x7f4fde053dee]\n/lib64/libc.so.6(+0x78c80)[0x7f4fde056c80]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT(ExecHashJoin+0x5a2)[0x5e2d32]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT(ExecProcNode+0x208)[0x5cf728]\npostgres: rms ysconfig 10.33.190.21(36788)\nSELECT(standard_ExecutorRun+0x18a)[0x5cd1ca]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT[0x6e5607]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT(PortalRun+0x188)[0x6e67d8]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT[0x6e2af3]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT(PostgresMain+0x75a)[0x6e456a]\npostgres: rms ysconfig 10.33.190.21(36788)\nSELECT(PostmasterMain+0x1875)[0x6840b5]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT(main+0x7a8)[0x60b528]\n/lib64/libc.so.6(__libc_start_main+0xfd)[0x7f4fddffcd1d]\npostgres: rms ysconfig 10.33.190.21(36788) SELECT[0x46c589]\n\n2019-07-23 09:41:41 CDT[:@]:LOG: server process (PID 18057) was\nterminated by signal 6: Aborted\n2019-07-23 09:41:41 CDT[:@]:DETAIL: Failed process was running:\nSELECT JR.job_id as jobId,JR.job_execution_id as\njobResultId,JR.created as lastRunDate, JR.status as status,\nJR.status_message as statusMessage, JR.output_format as outputFormat,\nJS.schedule_name as scheduleName, JS.job_name as reportName,\nJS.created_by as scheduledBy, JS.product as source FROM 
(SELECT\nJR.job_id, MAX(JR.created) AS MaxCreated FROM job_schedule JS JOIN\njob_result JR ON JR.job_id=JS.job_id WHERE (lower(JS.recepients) like\nlower($1) OR lower(JS.created_by) = lower($2)) GROUP BY JR.job_id) TMP\nJOIN job_result JR ON JR.job_id = TMP.job_id AND JR.created =\nTMP.MaxCreated JOIN job_schedule JS ON JS.job_id = JR.job_id AND\nJS.job_type='CRON'\n\nmerlin\n\n\n",
"msg_date": "Wed, 24 Jul 2019 09:40:06 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": true,
"msg_subject": "double free in ExecHashJoin, 9.6.12"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 2:39 AM Merlin Moncure <mmoncure@gmail.com> wrote:\n> Server is generally running pretty well, and is high volume. This\n> query is not new and is also medium volume. Database rebooted in\n> about 4 seconds with no damage; fast enough we didn't even trip alarms\n> (I noticed this troubleshooting another issue). We are a couple of\n> bug fixes releases behind but I didn't see anything obvious in the\n> release notes suggesting a resolved issue. Anyone have any ideas?\n> thanks in advance.\n\n> postgres: rms ysconfig 10.33.190.21(36788) SELECT(ExecHashJoin+0x5a2)[0x5e2d32]\n\nHi Merlin,\n\nWhere's the binary from (exact package name, if installed with a\npackage)? Not sure if this is going to help, but is there any chance\nyou could disassemble that function so we can try to see what it's\ndoing at that offset? For example on Debian if you have\npostgresql-9.6 and postgresql-9.6-dbg installed you could run \"gdb\n/usr/lib/postgresql/9.6/bin/postgres\" and then \"disassemble\nExecHashJoin\". The code at \"<+1442>\" (0x5a2) is presumably calling\nfree or some other libc thing (though I'm surprised not to see an\nintervening palloc thing).\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Jul 2019 16:00:28 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: double free in ExecHashJoin, 9.6.12"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Jul 25, 2019 at 2:39 AM Merlin Moncure <mmoncure@gmail.com> wrote:\n> > Server is generally running pretty well, and is high volume. This\n> > query is not new and is also medium volume. Database rebooted in\n> > about 4 seconds with no damage; fast enough we didn't even trip alarms\n> > (I noticed this troubleshooting another issue). We are a couple of\n> > bug fixes releases behind but I didn't see anything obvious in the\n> > release notes suggesting a resolved issue. Anyone have any ideas?\n> > thanks in advance.\n>\n> > postgres: rms ysconfig 10.33.190.21(36788) SELECT(ExecHashJoin+0x5a2)[0x5e2d32]\n>\n> Hi Merlin,\n>\n> Where's the binary from (exact package name, if installed with a\n> package)? Not sure if this is going to help, but is there any chance\n> you could disassemble that function so we can try to see what it's\n> doing at that offset? For example on Debian if you have\n> postgresql-9.6 and postgresql-9.6-dbg installed you could run \"gdb\n> /usr/lib/postgresql/9.6/bin/postgres\" and then \"disassemble\n> ExecHashJoin\". The code at \"<+1442>\" (0x5a2) is presumably calling\n> free or some other libc thing (though I'm surprised not to see an\n> intervening palloc thing).\n\nThanks -- great suggestion. I'll report back with any interesting findings.\n\nmerlin\n\n\n",
"msg_date": "Fri, 26 Jul 2019 16:32:45 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: double free in ExecHashJoin, 9.6.12"
}
] |
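Thomas's suggestion amounts to translating the backtrace frame into a symbol-relative offset to feed to `disassemble` or addr2line. A small helper shows the arithmetic — a hypothetical aid, not part of any PostgreSQL tooling:

```python
import re

def locate_fault(frame):
    """Parse a glibc backtrace frame like
    'SELECT(ExecHashJoin+0x5a2)[0x5e2d32]' and return the symbol, the
    offset of the faulting instruction within it, and the symbol's
    load address (absolute address minus offset)."""
    m = re.search(r"\((\w+)\+0x([0-9a-f]+)\)\[0x([0-9a-f]+)\]", frame)
    if m is None:
        return None
    symbol = m.group(1)
    offset = int(m.group(2), 16)
    address = int(m.group(3), 16)
    return symbol, offset, address - offset
```

For the frame in this thread, the symbol base comes out as 0x5e2790, so the instruction at ExecHashJoin+0x5a2 is what `disassemble ExecHashJoin` should be inspected for.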
[
{
"msg_contents": "Greetings everyone.\n\nIn (certain) out-of-the-box PostgreSQL installations, the timezone GUC is\nset to \"localtime\", which seems to mean to query the OS for the value.\nUnless I'm mistaken, the issue with this is that it doesn't allow clients\ninspecting the TimeZone GUC to actually know what timezone the server is\nin, making the GUC largely useless (and creates friction as the GUC can't\nbe expected to always contain valid IANA/Olson values). It would be more\nuseful if PostgreSQL exposed the actual timezone provided by the OS.\n\nDoes this make sense?\n\nAs a side note, there doesn't seem to be any specific documentation on the\nspecial \"localtime\" value of this GUC (e.g.\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES\n).\n\nShay\n\nGreetings everyone.In (certain) out-of-the-box PostgreSQL installations, the timezone GUC is set to \"localtime\", which seems to mean to query the OS for the value. Unless I'm mistaken, the issue with this is that it doesn't allow clients inspecting the TimeZone GUC to actually know what timezone the server is in, making the GUC largely useless (and creates friction as the GUC can't be expected to always contain valid IANA/Olson values). It would be more useful if PostgreSQL exposed the actual timezone provided by the OS.Does this make sense?As a side note, there doesn't seem to be any specific documentation on the special \"localtime\" value of this GUC (e.g. https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES).Shay",
"msg_date": "Wed, 24 Jul 2019 17:19:40 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": true,
"msg_subject": "\"localtime\" value in TimeZone"
},
{
"msg_contents": "Shay Rojansky <roji@roji.org> writes:\n> In (certain) out-of-the-box PostgreSQL installations, the timezone GUC is\n> set to \"localtime\", which seems to mean to query the OS for the value.\n> Unless I'm mistaken, the issue with this is that it doesn't allow clients\n> inspecting the TimeZone GUC to actually know what timezone the server is\n> in, making the GUC largely useless (and creates friction as the GUC can't\n> be expected to always contain valid IANA/Olson values). It would be more\n> useful if PostgreSQL exposed the actual timezone provided by the OS.\n\n> Does this make sense?\n\nYeah, this is something that some tzdb packagers do --- they put a\n\"localtime\" file into /usr/share/zoneinfo that is a symlink or hard link\nto the active zone file, and then initdb tends to seize on that as being\nthe shortest available spelling of the active zone.\n\nI opined in\nhttps://www.postgresql.org/message-id/27991.1560984458@sss.pgh.pa.us\nthat we should avoid choosing \"localtime\", but that thread seems\nstalled on larger disagreements about how complicated we want that\nmechanism to be.\n\n> As a side note, there doesn't seem to be any specific documentation on the\n> special \"localtime\" value of this GUC\n\nThat's because it's nonstandard and platform-specific. It's also\nnot special from our standpoint --- it's jsut another zone file.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:50:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"localtime\" value in TimeZone"
},
{
"msg_contents": "> Yeah, this is something that some tzdb packagers do --- they put a\n> \"localtime\" file into /usr/share/zoneinfo that is a symlink or hard link\n> to the active zone file, and then initdb tends to seize on that as being\n> the shortest available spelling of the active zone.\n\nI see, I wasn't aware that this was a distribution-level mechanism - I\nunderstand the situation better now.\n\nI followed the conversations you linked to, and disagreements seem to be\nmostly about other aspects of timezone selection. Does it make sense to\nhave a limited, restricted conversation specifically about avoiding\n\"localtime\"?\n\n> Yeah, this is something that some tzdb packagers do --- they put a> \"localtime\" file into /usr/share/zoneinfo that is a symlink or hard link> to the active zone file, and then initdb tends to seize on that as being> the shortest available spelling of the active zone.I see, I wasn't aware that this was a distribution-level mechanism - I understand the situation better now.I followed the conversations you linked to, and disagreements seem to be mostly about other aspects of timezone selection. Does it make sense to have a limited, restricted conversation specifically about avoiding \"localtime\"?",
"msg_date": "Thu, 25 Jul 2019 18:51:25 +0200",
"msg_from": "Shay Rojansky <roji@roji.org>",
"msg_from_op": true,
"msg_subject": "Re: \"localtime\" value in TimeZone"
},
{
"msg_contents": "Shay Rojansky <roji@roji.org> writes:\n> I followed the conversations you linked to, and disagreements seem to be\n> mostly about other aspects of timezone selection. Does it make sense to\n> have a limited, restricted conversation specifically about avoiding\n> \"localtime\"?\n\nI've tried to kick-start the other thread...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Jul 2019 16:37:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \"localtime\" value in TimeZone"
}
] |
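Since the distro "localtime" entry is a symlink into the zoneinfo tree, one way a client or tool could recover a real IANA name is to resolve that link. The helper below is a hypothetical sketch of that idea, not anything PostgreSQL or initdb actually does:

```python
import os

def resolve_localtime(zone_name, zoneinfo_dir="/usr/share/zoneinfo"):
    """If the configured zone file (e.g. 'localtime') is a symlink,
    follow it and report the IANA name of its target relative to the
    zoneinfo directory; otherwise return the name unchanged.
    Hard links (which some packagers use instead) cannot be resolved
    this way."""
    path = os.path.join(zoneinfo_dir, zone_name)
    if os.path.islink(path):
        target = os.path.realpath(path)
        return os.path.relpath(target, os.path.realpath(zoneinfo_dir))
    return zone_name
```

On a system where /usr/share/zoneinfo/localtime -> Europe/Paris, this would report "Europe/Paris" instead of the opaque "localtime" spelling the thread complains about.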
[
{
"msg_contents": "Hackers,\n\nSince all DAC checks should have corresponding MAC, this patch adds a\nhook to allow extensions to implement a MAC check on TRUNCATE. I have\nalso implemented this access check in the sepgsql extension.\n\nOne important thing to note is that refpolicy [1] and Redhat based\ndistributions do not have the SELinux permission for db_table {truncate}\nimplemented. This patch is the first step to add this permission to the\nupstream SELinux policy. If this permission does not exist in the\npolicy, sepgsql is being used, and `deny_unknown` is set to 1, the\nTRUNCATE will be denied.\n\nAs a workaround for this behavior, the SELinux aware system would need\nto have `/sys/fs/selinux/deny_unknown` set to 0 until the permission has\nbeen added to refpolicy/Redhat SELinux policy.\n\nThe deny_unknown behavior can be set using CIL [2] by extracting the\nbase SELinux module, and setting how the kernel handles unknown\npermissions. The dependencies for overriding handle_unknown are\npolicycoreutils, selinux-policy-targeted, and a libsemanage version that\nsupports CIL (CentOS 7+).\n\n$ sudo semodule -cE base\n$ sed -Ei 's/(handleunknown )deny/\\1allow/g' base.cil\n$ sudo semodule -i base.cil\n\nThanks,\n\nYuli\n\n[1] https://github.com/SELinuxProject/refpolicy/blob/master/policy/flask/access_vectors#L794\n[2] https://github.com/SELinuxProject/selinux/blob/master/secilc/docs/cil_policy_config_statements.md#handleunknown\n0001-Use-MAC-in-addition-to-DAC-for-TRUNCATE.patch",
"msg_date": "Wed, 24 Jul 2019 14:51:37 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "add a MAC check for TRUNCATE"
},
{
"msg_contents": "Hello Yuli,\n\n2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> Since all DAC checks should have corresponding MAC, this patch adds a\n> hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> also implemented this access check in the sepgsql extension.\n>\n> One important thing to note is that refpolicy [1] and Redhat based\n> distributions do not have the SELinux permission for db_table {truncate}\n> implemented.\n>\nHow db_table:{delete} permission is different from truncate?\n From the standpoint of data access, TRUNCATE is equivalent to DELETE\nwithout WHERE, isn't it?\nOf course, there are some differences between them. TRUNCATE takes\nexclusive locks and eliminates underlying data blocks, on the other hands,\nDELETE removes rows under MVCC manner. However, both of them\neventually removes data from the target table.\n\nI like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\nHow about your opinions?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Mon, 2 Sep 2019 23:58:08 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Greetings,\n\n* Kohei KaiGai (kaigai@heterodb.com) wrote:\n> 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > Since all DAC checks should have corresponding MAC, this patch adds a\n> > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > also implemented this access check in the sepgsql extension.\n> >\n> > One important thing to note is that refpolicy [1] and Redhat based\n> > distributions do not have the SELinux permission for db_table {truncate}\n> > implemented.\n> >\n> How db_table:{delete} permission is different from truncate?\n> >From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> without WHERE, isn't it?\n> Of course, there are some differences between them. TRUNCATE takes\n> exclusive locks and eliminates underlying data blocks, on the other hands,\n> DELETE removes rows under MVCC manner. However, both of them\n> eventually removes data from the target table.\n> \n> I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> How about your opinions?\n\nThere's been much discussion and justifcation for adding an independent\nTRUNCATE privilege to GRANT (which actually took many years to be\nallowed). I don't see why we wouldn't represent that as a different\nprivilege to external MAC systems. If the external MAC system wishes to\nuse db_table:{delete} to decide if the privilege is allowed or not, they\ncan, but I don't think core should force that when we have them as\nindependent permissions.\n\nSo, perhaps we can argue about what the sepgsql extension should do, but\nit's clear that we should have an independent hook for this in core.\n\nIsn't there a way to allow an admin to control if db_table:{truncate} is\nallowed for users with db_table:{delete}, or not? 
Ideally, this could\nbe managed at the SELinux level instead of having to have something\ndifferent in sepgsql or core, but if it needs to be configurable there\ntoo then hopefully we can come up with a good solution.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 3 Sep 2019 15:25:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Mon, Sep 2, 2019 at 10:58 AM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> Hello Yuli,\n\nHello KaiGai,\n\n>\n> 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > Since all DAC checks should have corresponding MAC, this patch adds a\n> > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > also implemented this access check in the sepgsql extension.\n> >\n> > One important thing to note is that refpolicy [1] and Redhat based\n> > distributions do not have the SELinux permission for db_table {truncate}\n> > implemented.\n> >\n> How db_table:{delete} permission is different from truncate?\n> From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> without WHERE, isn't it?\n\nTo echo Stephen's reply, since TRUNCATE has a dedicated privilege in\nthe GRANT system, there should be a MAC based permission as well.\nIncreased granularity for an integrator to add least privileged policy\nis a good idea in my view.\n\n> Of course, there are some differences between them. TRUNCATE takes\n> exclusive locks and eliminates underlying data blocks, on the other hands,\n> DELETE removes rows under MVCC manner. However, both of them\n> eventually removes data from the target table.\n>\n> I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> How about your opinions?\n\nNow that I think about it, using \"db_table { delete }\" would be fine,\nand that would remove the CIL requirement that I stated earlier. Thank\nyou for the suggestion. I'll send a v2 patch using the delete\npermission.\n\nThank you,\n\nYuli\n\n\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 5 Sep 2019 15:36:02 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Tue, Sep 3, 2019 at 3:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Kohei KaiGai (kaigai@heterodb.com) wrote:\n> > 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > > Since all DAC checks should have corresponding MAC, this patch adds a\n> > > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > > also implemented this access check in the sepgsql extension.\n> > >\n> > > One important thing to note is that refpolicy [1] and Redhat based\n> > > distributions do not have the SELinux permission for db_table {truncate}\n> > > implemented.\n> > >\n> > How db_table:{delete} permission is different from truncate?\n> > >From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> > without WHERE, isn't it?\n> > Of course, there are some differences between them. TRUNCATE takes\n> > exclusive locks and eliminates underlying data blocks, on the other hands,\n> > DELETE removes rows under MVCC manner. However, both of them\n> > eventually removes data from the target table.\n> >\n> > I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> > How about your opinions?\n>\n> There's been much discussion and justifcation for adding an independent\n> TRUNCATE privilege to GRANT (which actually took many years to be\n> allowed). I don't see why we wouldn't represent that as a different\n> privilege to external MAC systems. If the external MAC system wishes to\n> use db_table:{delete} to decide if the privilege is allowed or not, they\n> can, but I don't think core should force that when we have them as\n> independent permissions.\n>\n> So, perhaps we can argue about what the sepgsql extension should do, but\n> it's clear that we should have an independent hook for this in core.\n>\n> Isn't there a way to allow an admin to control if db_table:{truncate} is\n> allowed for users with db_table:{delete}, or not? 
Ideally, this could\n> be managed at the SELinux level instead of having to have something\n> different in sepgsql or core, but if it needs to be configurable there\n> too then hopefully we can come up with a good solution.\n\nIf I understand you correctly, you are asking if an SELinux domain can\nhave the db_table:{truncate} permission but not db_table:{delete}\nusing SELinux policy? This would only work if the userspace object\nmanager, sepgsql in this case, reaches out to the policy server to\ncheck if db_table:{truncate} is allowed for a subject accessing an\nobject.\n\nI think it should be okay to use db_table:{delete} as the permission\nto check for TRUNCATE in the object manager. I have attached a second\nversion of the hook and sepgsql changes to demonstrate this.\n\nThank you.\n\n>\n> Thanks,\n>\n> Stephen",
"msg_date": "Fri, 6 Sep 2019 09:51:17 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Greetings,\n\n* Yuli Khodorkovskiy (yuli.khodorkovskiy@crunchydata.com) wrote:\n> On Tue, Sep 3, 2019 at 3:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Kohei KaiGai (kaigai@heterodb.com) wrote:\n> > > 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > > > Since all DAC checks should have corresponding MAC, this patch adds a\n> > > > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > > > also implemented this access check in the sepgsql extension.\n> > > >\n> > > > One important thing to note is that refpolicy [1] and Redhat based\n> > > > distributions do not have the SELinux permission for db_table {truncate}\n> > > > implemented.\n> > > >\n> > > How db_table:{delete} permission is different from truncate?\n> > > >From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> > > without WHERE, isn't it?\n> > > Of course, there are some differences between them. TRUNCATE takes\n> > > exclusive locks and eliminates underlying data blocks, on the other hands,\n> > > DELETE removes rows under MVCC manner. However, both of them\n> > > eventually removes data from the target table.\n> > >\n> > > I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> > > How about your opinions?\n> >\n> > There's been much discussion and justifcation for adding an independent\n> > TRUNCATE privilege to GRANT (which actually took many years to be\n> > allowed). I don't see why we wouldn't represent that as a different\n> > privilege to external MAC systems. 
If the external MAC system wishes to\n> > use db_table:{delete} to decide if the privilege is allowed or not, they\n> > can, but I don't think core should force that when we have them as\n> > independent permissions.\n> >\n> > So, perhaps we can argue about what the sepgsql extension should do, but\n> > it's clear that we should have an independent hook for this in core.\n> >\n> > Isn't there a way to allow an admin to control if db_table:{truncate} is\n> > allowed for users with db_table:{delete}, or not? Ideally, this could\n> > be managed at the SELinux level instead of having to have something\n> > different in sepgsql or core, but if it needs to be configurable there\n> > too then hopefully we can come up with a good solution.\n> \n> If I understand you correctly, you are asking if an SELinux domain can\n> have the db_table:{truncate} permission but not db_table:{delete}\n> using SELinux policy? This would only work if the userspace object\n> manager, sepgsql in this case, reaches out to the policy server to\n> check if db_table:{truncate} is allowed for a subject accessing an\n> object.\n\nI was saying that, I believe, it would be pretty straight-forward for an\nSELinux admin to add db_table:{truncate} to whatever set of individuals\nare allowed to use db_table:{delete}.\n\n> I think it should be okay to use db_table:{delete} as the permission\n> to check for TRUNCATE in the object manager. I have attached a second\n> version of the hook and sepgsql changes to demonstrate this.\n\nThere are actual reasons why the 'DELETE' privilege is *not* the same as\n'TRUNCATE' in PostgreSQL and I'm really not convinced that we should\njust be tossing that distinction out the window for users of SELinux. 
A\npretty obvious one is that DELETE triggers don't get fired for a\nTRUNCATE command, but TRUNCATE also doesn't follow the same MVCC rules\nthat the rest of the system does.\n\nIf TRUNCATE and DELETE were the same, we'd only have DELETE and we would\njust make it super-fast by implementing it the way we implement\nTRUNCATE.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 6 Sep 2019 10:40:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 10:40 AM Stephen Frost <sfrost@snowman.net> wrote:\n> There are actual reasons why the 'DELETE' privilege is *not* the same as\n> 'TRUNCATE' in PostgreSQL and I'm really not convinced that we should\n> just be tossing that distinction out the window for users of SELinux.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:21:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 10:40 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n\nHello Stephen,\n\n>\n> * Yuli Khodorkovskiy (yuli.khodorkovskiy@crunchydata.com) wrote:\n> > On Tue, Sep 3, 2019 at 3:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > * Kohei KaiGai (kaigai@heterodb.com) wrote:\n> > > > 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > > > > Since all DAC checks should have corresponding MAC, this patch adds a\n> > > > > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > > > > also implemented this access check in the sepgsql extension.\n> > > > >\n> > > > > One important thing to note is that refpolicy [1] and Redhat based\n> > > > > distributions do not have the SELinux permission for db_table {truncate}\n> > > > > implemented.\n> > > > >\n> > > > How db_table:{delete} permission is different from truncate?\n> > > > >From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> > > > without WHERE, isn't it?\n> > > > Of course, there are some differences between them. TRUNCATE takes\n> > > > exclusive locks and eliminates underlying data blocks, on the other hands,\n> > > > DELETE removes rows under MVCC manner. However, both of them\n> > > > eventually removes data from the target table.\n> > > >\n> > > > I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> > > > How about your opinions?\n> > >\n> > > There's been much discussion and justifcation for adding an independent\n> > > TRUNCATE privilege to GRANT (which actually took many years to be\n> > > allowed). I don't see why we wouldn't represent that as a different\n> > > privilege to external MAC systems. 
If the external MAC system wishes to\n> > > use db_table:{delete} to decide if the privilege is allowed or not, they\n> > > can, but I don't think core should force that when we have them as\n> > > independent permissions.\n> > >\n> > > So, perhaps we can argue about what the sepgsql extension should do, but\n> > > it's clear that we should have an independent hook for this in core.\n> > >\n> > > Isn't there a way to allow an admin to control if db_table:{truncate} is\n> > > allowed for users with db_table:{delete}, or not? Ideally, this could\n> > > be managed at the SELinux level instead of having to have something\n> > > different in sepgsql or core, but if it needs to be configurable there\n> > > too then hopefully we can come up with a good solution.\n> >\n> > If I understand you correctly, you are asking if an SELinux domain can\n> > have the db_table:{truncate} permission but not db_table:{delete}\n> > using SELinux policy? This would only work if the userspace object\n> > manager, sepgsql in this case, reaches out to the policy server to\n> > check if db_table:{truncate} is allowed for a subject accessing an\n> > object.\n>\n> I was saying that, I believe, it would be pretty straight-forward for an\n> SELinux admin to add db_table:{truncate} to whatever set of individuals\n> are allowed to use db_table:{delete}.\n\nOkay that makes sense. Yes that can definitely be done, and the\noriginal sepgsql patch accomplished what you are describing. I did not\nadd tests or SELinux policy granting `db_table: { truncate }` in the\nregressions of the original patch. If the community decides a new\nSELinux permission in sepgsql for TRUNCATE is the correct path, I will\ngladly update the original patch.\n\n>\n> > I think it should be okay to use db_table:{delete} as the permission\n> > to check for TRUNCATE in the object manager. 
I have attached a second\n> > version of the hook and sepgsql changes to demonstrate this.\n>\n> There are actual reasons why the 'DELETE' privilege is *not* the same as\n> 'TRUNCATE' in PostgreSQL and I'm really not convinced that we should\n> just be tossing that distinction out the window for users of SELinux. A\n> pretty obvious one is that DELETE triggers don't get fired for a\n> TRUNCATE command, but TRUNCATE also doesn't follow the same MVCC rules\n> that the rest of the system does.\n\nI do agree with you there should be a distinction between TRUNCATE and\nDELETE in the SELinux perms. I'll wait a few days for more discussion\nand send an updated patch.\n\nThank you.\n\n>\n> If TRUNCATE and DELETE were the same, we'd only have DELETE and we would\n> just make it super-fast by implementing it the way we implement\n> TRUNCATE.\n>\n> Thanks,\n>\n> Stephen\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:26:38 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/6/19 11:26 AM, Yuli Khodorkovskiy wrote:\n> On Fri, Sep 6, 2019 at 10:40 AM Stephen Frost <sfrost@snowman.net> wrote:\n>> There are actual reasons why the 'DELETE' privilege is *not* the same as\n>> 'TRUNCATE' in PostgreSQL and I'm really not convinced that we should\n>> just be tossing that distinction out the window for users of SELinux. A\n>> pretty obvious one is that DELETE triggers don't get fired for a\n>> TRUNCATE command, but TRUNCATE also doesn't follow the same MVCC rules\n>> that the rest of the system does.\n> \n> I do agree with you there should be a distinction between TRUNCATE and\n> DELETE in the SELinux perms. I'll wait a few days for more discussion\n> and send an updated patch.\n\n\n+1 - I don't think there is any question about it.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 6 Sep 2019 11:38:48 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 11:26 AM Yuli Khodorkovskiy\n<yuli.khodorkovskiy@crunchydata.com> wrote:\n>\n> On Fri, Sep 6, 2019 at 10:40 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > Greetings,\n>\n> Hello Stephen,\n>\n> >\n> > * Yuli Khodorkovskiy (yuli.khodorkovskiy@crunchydata.com) wrote:\n> > > On Tue, Sep 3, 2019 at 3:25 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > > * Kohei KaiGai (kaigai@heterodb.com) wrote:\n> > > > > 2019年7月25日(木) 3:52 Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>:\n> > > > > > Since all DAC checks should have corresponding MAC, this patch adds a\n> > > > > > hook to allow extensions to implement a MAC check on TRUNCATE. I have\n> > > > > > also implemented this access check in the sepgsql extension.\n> > > > > >\n> > > > > > One important thing to note is that refpolicy [1] and Redhat based\n> > > > > > distributions do not have the SELinux permission for db_table {truncate}\n> > > > > > implemented.\n> > > > > >\n> > > > > How db_table:{delete} permission is different from truncate?\n> > > > > >From the standpoint of data access, TRUNCATE is equivalent to DELETE\n> > > > > without WHERE, isn't it?\n> > > > > Of course, there are some differences between them. TRUNCATE takes\n> > > > > exclusive locks and eliminates underlying data blocks, on the other hands,\n> > > > > DELETE removes rows under MVCC manner. However, both of them\n> > > > > eventually removes data from the target table.\n> > > > >\n> > > > > I like to recommend to reuse \"db_table:{delete}\" permission for TRUNCATE.\n> > > > > How about your opinions?\n> > > >\n> > > > There's been much discussion and justifcation for adding an independent\n> > > > TRUNCATE privilege to GRANT (which actually took many years to be\n> > > > allowed). I don't see why we wouldn't represent that as a different\n> > > > privilege to external MAC systems. 
If the external MAC system wishes to\n> > > > use db_table:{delete} to decide if the privilege is allowed or not, they\n> > > > can, but I don't think core should force that when we have them as\n> > > > independent permissions.\n> > > >\n> > > > So, perhaps we can argue about what the sepgsql extension should do, but\n> > > > it's clear that we should have an independent hook for this in core.\n> > > >\n> > > > Isn't there a way to allow an admin to control if db_table:{truncate} is\n> > > > allowed for users with db_table:{delete}, or not? Ideally, this could\n> > > > be managed at the SELinux level instead of having to have something\n> > > > different in sepgsql or core, but if it needs to be configurable there\n> > > > too then hopefully we can come up with a good solution.\n> > >\n> > > If I understand you correctly, you are asking if an SELinux domain can\n> > > have the db_table:{truncate} permission but not db_table:{delete}\n> > > using SELinux policy? This would only work if the userspace object\n> > > manager, sepgsql in this case, reaches out to the policy server to\n> > > check if db_table:{truncate} is allowed for a subject accessing an\n> > > object.\n> >\n> > I was saying that, I believe, it would be pretty straight-forward for an\n> > SELinux admin to add db_table:{truncate} to whatever set of individuals\n> > are allowed to use db_table:{delete}.\n>\n> Okay that makes sense. Yes that can definitely be done, and the\n> original sepgsql patch accomplished what you are describing. I did not\n> add tests or SELinux policy granting `db_table: { truncate }` in the\n> regressions of the original patch. If the community decides a new\n> SELinux permission in sepgsql for TRUNCATE is the correct path, I will\n> gladly update the original patch.\n\nAh, now I remember why I didn't add regressions to the original patch.\nAs stated at the top of the thread, the \"db_table: { truncate }\"\npermission does not currently exist in refpolicy. 
A workaround would\nbe to add the policy with CIL, but that adds unneeded complexity to\nthe regressions. I think the correct path forward is:\n\n1) Get the sepgsql changes in without policy/regressions\n2) Send a patch to refpolicy for the new permission\n3) Once Redhat updates the selinux-policy-targeted RPM to include the\nnew permissions, I will send an update to the sepgsql regressions and\npolicy.\n\nThank you.\n\n>\n> >\n> > > I think it should be okay to use db_table:{delete} as the permission\n> > > to check for TRUNCATE in the object manager. I have attached a second\n> > > version of the hook and sepgsql changes to demonstrate this.\n> >\n> > There are actual reasons why the 'DELETE' privilege is *not* the same as\n> > 'TRUNCATE' in PostgreSQL and I'm really not convinced that we should\n> > just be tossing that distinction out the window for users of SELinux. A\n> > pretty obvious one is that DELETE triggers don't get fired for a\n> > TRUNCATE command, but TRUNCATE also doesn't follow the same MVCC rules\n> > that the rest of the system does.\n>\n> I do agree with you there should be a distinction between TRUNCATE and\n> DELETE in the SELinux perms. I'll wait a few days for more discussion\n> and send an updated patch.\n>\n> Thank you.\n>\n> >\n> > If TRUNCATE and DELETE were the same, we'd only have DELETE and we would\n> > just make it super-fast by implementing it the way we implement\n> > TRUNCATE.\n> >\n> > Thanks,\n> >\n> > Stephen\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:40:48 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> Ah, now I remember why I didn't add regressions to the original patch.\n> As stated at the top of the thread, the \"db_table: { truncate }\"\n> permission does not currently exist in refpolicy. A workaround would\n> be to add the policy with CIL, but that adds unneeded complexity to\n> the regressions. I think the correct path forward is:\n\n> 1) Get the sepgsql changes in without policy/regressions\n> 2) Send a patch to refpolicy for the new permission\n> 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n> new permissions, I will send an update to the sepgsql regressions and\n> policy.\n\nThat's going to be a problem. I do not think it will be acceptable\nto commit tests that fail on less-than-bleeding-edge SELinux.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 11:47:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> > Ah, now I remember why I didn't add regressions to the original patch.\n> > As stated at the top of the thread, the \"db_table: { truncate }\"\n> > permission does not currently exist in refpolicy. A workaround would\n> > be to add the policy with CIL, but that adds unneeded complexity to\n> > the regressions. I think the correct path forward is:\n> \n> > 1) Get the sepgsql changes in without policy/regressions\n> > 2) Send a patch to refpolicy for the new permission\n> > 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n> > new permissions, I will send an update to the sepgsql regressions and\n> > policy.\n> \n> That's going to be a problem. I do not think it will be acceptable\n> to commit tests that fail on less-than-bleeding-edge SELinux.\n\nThis is why I was suggesting up-thread that it'd be neat if we made this\nsomehow optional, though I don't quite see a way to do that sensibly.\n\nWe could though, of course, make running the regression test optional\nand then have a buildfarm member that's got the bleeding-edge SELinux\n(or is just configured with the additional control) and then have it\nenabled there.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 6 Sep 2019 11:50:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 11:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> > Ah, now I remember why I didn't add regressions to the original patch.\n> > As stated at the top of the thread, the \"db_table: { truncate }\"\n> > permission does not currently exist in refpolicy. A workaround would\n> > be to add the policy with CIL, but that adds unneeded complexity to\n> > the regressions. I think the correct path forward is:\n>\n> > 1) Get the sepgsql changes in without policy/regressions\n> > 2) Send a patch to refpolicy for the new permission\n> > 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n> > new permissions, I will send an update to the sepgsql regressions and\n> > policy.\n>\n> That's going to be a problem. I do not think it will be acceptable\n> to commit tests that fail on less-than-bleeding-edge SELinux.\n>\n> regards, tom lane\n\nThe tests pass as long as deny_unknown is set to 0, which is the\ndefault on fedora 30.\n\n\n",
"msg_date": "Fri, 6 Sep 2019 11:52:32 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n>>> 1) Get the sepgsql changes in without policy/regressions\n>>> 2) Send a patch to refpolicy for the new permission\n>>> 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n>>> new permissions, I will send an update to the sepgsql regressions and\n>>> policy.\n\n>> That's going to be a problem. I do not think it will be acceptable\n>> to commit tests that fail on less-than-bleeding-edge SELinux.\n\n> This is why I was suggesting up-thread that it'd be neat if we made this\n> somehow optional, though I don't quite see a way to do that sensibly.\n> We could though, of course, make running the regression test optional\n> and then have a buildfarm member that's got the bleeding-edge SELinux\n> (or is just configured with the additional control) and then have it\n> enabled there.\n\nWell, the larger question, independent of the regression tests, is\nwill the new policy work at all on older SELinux? If not, that\ndoesn't seem very acceptable. Worse, it implies we're going to\nhave another flag day anytime we want to add any new element\nto sepgsql's view of the universe. I think we need some hard\nthought about upgrade paths here --- at least, if we want to\nbelieve that sepgsql is anything but a toy for demonstration\npurposes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 11:57:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> >>> 1) Get the sepgsql changes in without policy/regressions\n> >>> 2) Send a patch to refpolicy for the new permission\n> >>> 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n> >>> new permissions, I will send an update to the sepgsql regressions and\n> >>> policy.\n>\n> >> That's going to be a problem. I do not think it will be acceptable\n> >> to commit tests that fail on less-than-bleeding-edge SELinux.\n>\n> > This is why I was suggesting up-thread that it'd be neat if we made this\n> > somehow optional, though I don't quite see a way to do that sensibly.\n> > We could though, of course, make running the regression test optional\n> > and then have a buildfarm member that's got the bleeding-edge SELinux\n> > (or is just configured with the additional control) and then have it\n> > enabled there.\n>\n> Well, the larger question, independent of the regression tests, is\n> will the new policy work at all on older SELinux? If not, that\n> doesn't seem very acceptable. Worse, it implies we're going to\n> have another flag day anytime we want to add any new element\n> to sepgsql's view of the universe. I think we need some hard\n> thought about upgrade paths here --- at least, if we want to\n> believe that sepgsql is anything but a toy for demonstration\n> purposes.\n>\n> regards, tom lane\n\nThe default SELinux policy on Fedora ships with deny_unknown set to 0.\nDeny_unknown was added to the kernel in 2.6.24, so unless someone is\nusing RHEL 5.x, which is in ELS, they will have the ability to\noverride the default behavior on CentOS/RHEL.\n\nCIL was added to RHEL starting with RHEL 7. 
As stated before, an\nintegrator can export the base module and override the deny_unknown\nbehavior.\n\nOn RHEL 6, which goes into ELS in 2020, it's a bit more complicated\nand requires rebuilding the base SELinux module from source.\n\nHope this helps,\n\nYuli\n\n\n",
"msg_date": "Fri, 6 Sep 2019 13:00:39 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "As Joe Conway pointed out to me out of band, the build animal for RHEL\n7 has handle_unknown set to `0`. Are there any other concerns with\nthis approach?\n\nOn Fri, Sep 6, 2019 at 1:00 PM Yuli Khodorkovskiy\n<yuli.khodorkovskiy@crunchydata.com> wrote:\n>\n> On Fri, Sep 6, 2019 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > >> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> > >>> 1) Get the sepgsql changes in without policy/regressions\n> > >>> 2) Send a patch to refpolicy for the new permission\n> > >>> 3) Once Redhat updates the selinux-policy-targeted RPM to include the\n> > >>> new permissions, I will send an update to the sepgsql regressions and\n> > >>> policy.\n> >\n> > >> That's going to be a problem. I do not think it will be acceptable\n> > >> to commit tests that fail on less-than-bleeding-edge SELinux.\n> >\n> > > This is why I was suggesting up-thread that it'd be neat if we made this\n> > > somehow optional, though I don't quite see a way to do that sensibly.\n> > > We could though, of course, make running the regression test optional\n> > > and then have a buildfarm member that's got the bleeding-edge SELinux\n> > > (or is just configured with the additional control) and then have it\n> > > enabled there.\n> >\n> > Well, the larger question, independent of the regression tests, is\n> > will the new policy work at all on older SELinux? If not, that\n> > doesn't seem very acceptable. Worse, it implies we're going to\n> > have another flag day anytime we want to add any new element\n> > to sepgsql's view of the universe. 
I think we need some hard\n> > thought about upgrade paths here --- at least, if we want to\n> > believe that sepgsql is anything but a toy for demonstration\n> > purposes.\n> >\n> > regards, tom lane\n>\n> The default SELinux policy on Fedora ships with deny_unknown set to 0.\n> Deny_unknown was added to the kernel in 2.6.24, so unless someone is\n> using RHEL 5.x, which is in ELS, they will have the ability to\n> override the default behavior on CentOS/RHEL.\n>\n> CIL was added to RHEL starting with RHEL 7. As stated before, an\n> integrator can export the base module and override the deny_unknown\n> behavior.\n>\n> On RHEL 6, which goes into ELS in 2020, it's a bit more complicated\n> and requires rebuilding the base SELinux module from source.\n>\n> Hope this helps,\n>\n> Yuli\n\n\n",
"msg_date": "Fri, 6 Sep 2019 14:13:01 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n> On Fri, Sep 6, 2019 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Well, the larger question, independent of the regression tests, is\n>> will the new policy work at all on older SELinux? If not, that\n>> doesn't seem very acceptable.\n\n> The default SELinux policy on Fedora ships with deny_unknown set to 0.\n> Deny_unknown was added to the kernel in 2.6.24, so unless someone is\n> using RHEL 5.x, which is in ELS, they will have the ability to\n> override the default behavior on CentOS/RHEL.\n\nOK, that sounds like it will work.\n\n> On RHEL 6, which goes into ELS in 2020, it's a bit more complicated\n> and requires rebuilding the base SELinux module from source.\n\nsepgsql hasn't worked on RHEL6 in a long time, if ever; it requires\na newer version of libselinux than what ships in RHEL6. So I'm not\nconcerned about that. We do need to worry about RHEL7, and whatever\nis the oldest version of Fedora that is running the sepgsql tests\nin the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 14:18:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/6/19 2:18 PM, Tom Lane wrote:\n> Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com> writes:\n>> On Fri, Sep 6, 2019 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Well, the larger question, independent of the regression tests, is\n>>> will the new policy work at all on older SELinux? If not, that\n>>> doesn't seem very acceptable.\n> \n>> The default SELinux policy on Fedora ships with deny_unknown set to 0.\n>> Deny_unknown was added to the kernel in 2.6.24, so unless someone is\n>> using RHEL 5.x, which is in ELS, they will have the ability to\n>> override the default behavior on CentOS/RHEL.\n> \n> OK, that sounds like it will work.\n> \n>> On RHEL 6, which goes into ELS in 2020, it's a bit more complicated\n>> and requires rebuilding the base SELinux module from source.\n> \n> sepgsql hasn't worked on RHEL6 in a long time, if ever; it requires\n> a newer version of libselinux than what ships in RHEL6. So I'm not\n> concerned about that. We do need to worry about RHEL7, and whatever\n> is the oldest version of Fedora that is running the sepgsql tests\n> in the buildfarm.\n\n\nI could be wrong, but as far as I know rhinoceros is the only buildfarm\nanimal running sepgsql tests.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 6 Sep 2019 15:50:09 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/6/19 2:13 PM, Yuli Khodorkovskiy wrote:\n> As Joe Conway pointed out to me out of band, the build animal for RHEL\n> 7 has handle_unknown set to `0`. Are there any other concerns with\n> this approach?\n\n\nYou mean deny_unknown I believe.\n\n\"Allow unknown object class / permissions. This will set the returned AV\n with all 1's.\"\n\nAs I understand it, this would make the sepgsql behavior unchanged from\nbefore if the policy does not support the new permission.\n\nJoe\n\n> On Fri, Sep 6, 2019 at 1:00 PM Yuli Khodorkovskiy wrote:\n>> The default SELinux policy on Fedora ships with deny_unknown set to 0.\n>> Deny_unknown was added to the kernel in 2.6.24, so unless someone is\n>> using RHEL 5.x, which is in ELS, they will have the ability to\n>> override the default behavior on CentOS/RHEL.\n\n\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 6 Sep 2019 16:31:46 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 4:31 PM Joe Conway <mail@joeconway.com> wrote:\n>\n> On 9/6/19 2:13 PM, Yuli Khodorkovskiy wrote:\n> > As Joe Conway pointed out to me out of band, the build animal for RHEL\n> > 7 has handle_unknown set to `0`. Are there any other concerns with\n> > this approach?\n>\n>\n> You mean deny_unknown I believe.\n\nI do, thanks. Not sure where I pulled handle_unknown from.\n\n>\n> \"Allow unknown object class / permissions. This will set the returned AV\n> with all 1's.\"\n>\n> As I understand it, this would make the sepgsql behavior unchanged from\n> before if the policy does not support the new permission.\n>\n> Joe\n>\n> > On Fri, Sep 6, 2019 at 1:00 PM Yuli Khodorkovskiy wrote:\n> >> The default SELinux policy on Fedora ships with deny_unknown set to 0.\n> >> Deny_unknown was added to the kernel in 2.6.24, so unless someone is\n> >> using RHEL 5.x, which is in ELS, they will have the ability to\n> >> override the default behavior on CentOS/RHEL.\n>\n>\n>\n> --\n> Crunchy Data - http://crunchydata.com\n> PostgreSQL Support for Secure Enterprises\n> Consulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 6 Sep 2019 16:51:33 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 9/6/19 2:18 PM, Tom Lane wrote:\n>> sepgsql hasn't worked on RHEL6 in a long time, if ever; it requires\n>> a newer version of libselinux than what ships in RHEL6. So I'm not\n>> concerned about that. We do need to worry about RHEL7, and whatever\n>> is the oldest version of Fedora that is running the sepgsql tests\n>> in the buildfarm.\n\n> I could be wrong, but as far as I know rhinoceros is the only buildfarm\n> animal running sepgsql tests.\n\nIt seems reasonable to define RHEL7 as the oldest SELinux version we\nstill care about. But it'd be a good idea for somebody to be running\na fairly bleeding-edge Fedora animal with sepgsql enabled, so we get\ncoverage of the other end of the scale.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2019 20:07:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/6/19 8:07 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 9/6/19 2:18 PM, Tom Lane wrote:\n>>> sepgsql hasn't worked on RHEL6 in a long time, if ever; it requires\n>>> a newer version of libselinux than what ships in RHEL6. So I'm not\n>>> concerned about that. We do need to worry about RHEL7, and whatever\n>>> is the oldest version of Fedora that is running the sepgsql tests\n>>> in the buildfarm.\n> \n>> I could be wrong, but as far as I know rhinoceros is the only buildfarm\n>> animal running sepgsql tests.\n> \n> It seems reasonable to define RHEL7 as the oldest SELinux version we\n> still care about. But it'd be a good idea for somebody to be running\n> a fairly bleeding-edge Fedora animal with sepgsql enabled, so we get\n> coverage of the other end of the scale.\n\n\nYeah -- I was planning to eventually register a RHEL8 animal, but I\nshould probably do one for Fedora as well. I'll bump the priority for\nthat on my personal TODO.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 6 Sep 2019 21:09:36 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Fri, Sep 6, 2019 at 9:09 PM Joe Conway <mail@joeconway.com> wrote:\n>\n> On 9/6/19 8:07 PM, Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> >> On 9/6/19 2:18 PM, Tom Lane wrote:\n> >>> sepgsql hasn't worked on RHEL6 in a long time, if ever; it requires\n> >>> a newer version of libselinux than what ships in RHEL6. So I'm not\n> >>> concerned about that. We do need to worry about RHEL7, and whatever\n> >>> is the oldest version of Fedora that is running the sepgsql tests\n> >>> in the buildfarm.\n> >\n> >> I could be wrong, but as far as I know rhinoceros is the only buildfarm\n> >> animal running sepgsql tests.\n> >\n> > It seems reasonable to define RHEL7 as the oldest SELinux version we\n> > still care about. But it'd be a good idea for somebody to be running\n> > a fairly bleeding-edge Fedora animal with sepgsql enabled, so we get\n> > coverage of the other end of the scale.\n>\n>\n> Yeah -- I was planning to eventually register a RHEL8 animal, but I\n> should probably do one for Fedora as well. I'll bump the priority for\n> that on my personal TODO.\n>\n> Joe\n> --\n> Crunchy Data - http://crunchydata.com\n> PostgreSQL Support for Secure Enterprises\n> Consulting, Training, & Open Source Development\n\nHello,\n\nI have included an updated version of the sepgql patch. The\nTruncate-Hook patch is unchanged from the last version.\n\nThe sepgsql changes now check if the db_table:{ truncate } permission\nexists in the loaded SELinux policy before running the truncate\nregression test. If the permission does not exist, then the new\nregression test will not run.\n\nTesting the TRUNCATE regression test can be done by manually adding\nthe permission with CIL:\n\n```\nsudo semodule -cE base\nsudo sed -i -E 's/(class db_table.*?) \\)/\\1 truncate\\)/' base.cil\nsudo semodule -i base.cil\n```\n\nThanks,\n\nYuli",
"msg_date": "Mon, 9 Sep 2019 15:27:01 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Hello,\n\nI moved this patch from \"Bug Fixes\" to Security.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Sep 2019 18:20:39 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Hello\n\nOn 2019-Sep-09, Yuli Khodorkovskiy wrote:\n\n> I have included an updated version of the sepgql patch. The\n> Truncate-Hook patch is unchanged from the last version.\n\nThis patch no longer applies. Can you please rebase?\n\nJoe, do you plan on being committer for this patch? There seems to be\nsubstantial agreement on it.\n\nThanks\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 16:56:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/25/19 3:56 PM, Alvaro Herrera wrote:\n> Hello\n> \n> On 2019-Sep-09, Yuli Khodorkovskiy wrote:\n> \n>> I have included an updated version of the sepgql patch. The\n>> Truncate-Hook patch is unchanged from the last version.\n> \n> This patch no longer applies. Can you please rebase?\n> \n> Joe, do you plan on being committer for this patch? There seems to be\n> substantial agreement on it.\n\n\nI should be able to do that.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Wed, 25 Sep 2019 16:47:06 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 3:57 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Hello\n>\n> On 2019-Sep-09, Yuli Khodorkovskiy wrote:\n>\n> > I have included an updated version of the sepgql patch. The\n> > Truncate-Hook patch is unchanged from the last version.\n>\n> This patch no longer applies. Can you please rebase?\n\nHi Alvaro,\n\nI have attached the updated patches which should rebase.\n\nSince all existing DAC checks should have MAC, should these patches be\nconsidered a bug fix and therefore back patched?\n\nThank you",
"msg_date": "Wed, 25 Sep 2019 17:40:35 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 2019-Sep-25, Yuli Khodorkovskiy wrote:\n\n> Hi Alvaro,\n> \n> I have attached the updated patches which should rebase.\n\nGreat, thanks.\n\n> Since all existing DAC checks should have MAC, should these patches be\n> considered a bug fix and therefore back patched?\n\nI don't know the answer to that. My impression from earlier discussion\nis that this was seen as a non-backpatchable change, but I defer to Joe\non that as committer. If it were up to me, the ultimate question would\nbe: would such a change adversely affect existing running systems?\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 18:49:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Sep-25, Yuli Khodorkovskiy wrote:\n>> Since all existing DAC checks should have MAC, should these patches be\n>> considered a bug fix and therefore back patched?\n\n> I don't know the answer to that. My impression from earlier discussion\n> is that this was seen as a non-backpatchable change, but I defer to Joe\n> on that as committer. If it were up to me, the ultimate question would\n> be: would such a change adversely affect existing running systems?\n\nI don't see how the addition of a new permissions check could sanely\nbe back-patched unless it were to default to \"allow\", which seems like\nan odd choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:57:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 5:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n<snip>\n\n> I don't see how the addition of a new permissions check could sanely\n> be back-patched unless it were to default to \"allow\", which seems like\n> an odd choice.\n>\n> regards, tom lane\n\nThat makes sense. Alternatively, we could back patch just the hook to\nat least allow the option for an integrator to implement MAC using an\nextension. Then the sepgsql changes could be back patched once the\nSELinux policy has been merged into Fedora.\n\nThank you\n\n\n",
"msg_date": "Thu, 26 Sep 2019 09:45:03 -0400",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 9/25/19 4:47 PM, Joe Conway wrote:\n> On 9/25/19 3:56 PM, Alvaro Herrera wrote:\n>> Hello\n>> \n>> On 2019-Sep-09, Yuli Khodorkovskiy wrote:\n>> \n>>> I have included an updated version of the sepgql patch. The\n>>> Truncate-Hook patch is unchanged from the last version.\n>> \n>> This patch no longer applies. Can you please rebase?\n>> \n>> Joe, do you plan on being committer for this patch? There seems to be\n>> substantial agreement on it.\n> \n> \n> I should be able to do that.\n\n\nI am not sure I will get to this today. I assume it is ok for me to move\nit forward e.g. next weekend, or is that not in line with commitfest rules?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Mon, 30 Sep 2019 10:28:40 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 2019-Sep-30, Joe Conway wrote:\n\n> I am not sure I will get to this today. I assume it is ok for me to move\n> it forward e.g. next weekend, or is that not in line with commitfest rules?\n\nYou can commit whatever patch whenever you feel like it. I will\nprobably move this patch to the next commitfest before that, but you can\nmark it committed there as soon as you commit it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Sep 2019 11:38:05 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 11:38:05AM -0300, Alvaro Herrera wrote:\n> On 2019-Sep-30, Joe Conway wrote:\n> \n> > I am not sure I will get to this today. I assume it is ok for me to move\n> > it forward e.g. next weekend, or is that not in line with commitfest rules?\n> \n> You can commit whatever patch whenever you feel like it. I will\n> probably move this patch to the next commitfest before that, but you can\n> mark it committed there as soon as you commit it.\n\nOne month later, nothing has happened here. Joe, are you planning to\nlook at this patch?\n\nThe last patch I found does not apply properly, so please provide a\nrebase. I am switching the patch as waiting on author.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 09:46:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 7:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 30, 2019 at 11:38:05AM -0300, Alvaro Herrera wrote:\n> > On 2019-Sep-30, Joe Conway wrote:\n> >\n> > > I am not sure I will get to this today. I assume it is ok for me to move\n> > > it forward e.g. next weekend, or is that not in line with commitfest rules?\n> >\n> > You can commit whatever patch whenever you feel like it. I will\n> > probably move this patch to the next commitfest before that, but you can\n> > mark it committed there as soon as you commit it.\n>\n> One month later, nothing has happened here. Joe, are you planning to\n> look at this patch?\n>\n> The last patch I found does not apply properly, so please provide a\n> rebase. I am switching the patch as waiting on author.\n\nMichael,\n\nI was able to apply the latest patches in the thread (9/25/19) on top\nof master. I have attached them for convenience.\n\n⇒ git rev-parse HEAD\n879c1176157175e0a83742b810f137aebccef4a4\n⇒ md5sum Truncate-Hook.patch v3-Sepgsql-Truncate.patch\n3b8c2b03e30f519f32ebb9fcbc943c70 Truncate-Hook.patch\n728e90596b99cfb8eef74dc1effce46d v3-Sepgsql-Truncate.patch\n⇒ git am Truncate-Hook.patch\nApplying: Add a hook to allow MAC check for TRUNCATE\n⇒ git am v3-Sepgsql-Truncate.patch\nApplying: Update sepgsql to add MAC for TRUNCATE\n\nThank you,\n\nYuli",
"msg_date": "Fri, 8 Nov 2019 09:02:44 -0500",
"msg_from": "Yuli Khodorkovskiy <yuli.khodorkovskiy@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 11/8/19 9:02 AM, Yuli Khodorkovskiy wrote:\n> On Thu, Nov 7, 2019 at 7:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Mon, Sep 30, 2019 at 11:38:05AM -0300, Alvaro Herrera wrote:\n>> > On 2019-Sep-30, Joe Conway wrote:\n>> >\n>> > > I am not sure I will get to this today. I assume it is ok for me to move\n>> > > it forward e.g. next weekend, or is that not in line with commitfest rules?\n>> >\n>> > You can commit whatever patch whenever you feel like it. I will\n>> > probably move this patch to the next commitfest before that, but you can\n>> > mark it committed there as soon as you commit it.\n>>\n>> One month later, nothing has happened here. Joe, are you planning to\n>> look at this patch?\n>>\n>> The last patch I found does not apply properly, so please provide a\n>> rebase. I am switching the patch as waiting on author.\n> \n> Michael,\n> \n> I was able to apply the latest patches in the thread (9/25/19) on top\n> of master. I have attached them for convenience.\n\n\nYes, I will look when I am able. Hopefully this weekend, almost\ncertainly before the end of this commitfest.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 8 Nov 2019 09:16:37 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 11/8/19 9:16 AM, Joe Conway wrote:\n> On 11/8/19 9:02 AM, Yuli Khodorkovskiy wrote:\n>> On Thu, Nov 7, 2019 at 7:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>>\n>>> On Mon, Sep 30, 2019 at 11:38:05AM -0300, Alvaro Herrera wrote:\n>>> > On 2019-Sep-30, Joe Conway wrote:\n>>> >\n>>> > > I am not sure I will get to this today. I assume it is ok for me to move\n>>> > > it forward e.g. next weekend, or is that not in line with commitfest rules?\n>>> >\n>>> > You can commit whatever patch whenever you feel like it. I will\n>>> > probably move this patch to the next commitfest before that, but you can\n>>> > mark it committed there as soon as you commit it.\n>>>\n>>> One month later, nothing has happened here. Joe, are you planning to\n>>> look at this patch?\n>>>\n>>> The last patch I found does not apply properly, so please provide a\n>>> rebase. I am switching the patch as waiting on author.\n>> \n>> Michael,\n>> \n>> I was able to apply the latest patches in the thread (9/25/19) on top\n>> of master. I have attached them for convenience.\n> \n> Yes, I will look when I am able. Hopefully this weekend, almost\n> certainly before the end of this commitfest.\n\nI tested this successfully on Rhinoceros, both with and without\n\"db_table: { truncate }\" loaded in the policy. Updated patches attached\nhere with some editorialization. If there are no objections I will\ncommit/push both in about a day or two.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Nov 2019 14:30:12 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 11/20/19 2:30 PM, Joe Conway wrote:\n> On 11/8/19 9:16 AM, Joe Conway wrote:\n>> On 11/8/19 9:02 AM, Yuli Khodorkovskiy wrote:\n>>> On Thu, Nov 7, 2019 at 7:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>>>\n>>>> On Mon, Sep 30, 2019 at 11:38:05AM -0300, Alvaro Herrera wrote:\n>>>> > On 2019-Sep-30, Joe Conway wrote:\n>>>> >\n>>>> > > I am not sure I will get to this today. I assume it is ok for me to move\n>>>> > > it forward e.g. next weekend, or is that not in line with commitfest rules?\n>>>> >\n>>>> > You can commit whatever patch whenever you feel like it. I will\n>>>> > probably move this patch to the next commitfest before that, but you can\n>>>> > mark it committed there as soon as you commit it.\n>>>>\n>>>> One month later, nothing has happened here. Joe, are you planning to\n>>>> look at this patch?\n>>>>\n>>>> The last patch I found does not apply properly, so please provide a\n>>>> rebase. I am switching the patch as waiting on author.\n>>> \n>>> Michael,\n>>> \n>>> I was able to apply the latest patches in the thread (9/25/19) on top\n>>> of master. I have attached them for convenience.\n>> \n>> Yes, I will look when I am able. Hopefully this weekend, almost\n>> certainly before the end of this commitfest.\n> \n> I tested this successfully on Rhinoceros, both with and without\n> \"db_table: { truncate }\" loaded in the policy. Updated patches attached\n> here with some editorialization. If there are no objections I will\n> commit/push both in about a day or two.\n\n\n...and I managed to drop the new files from the sepgsql patch. Complete\nversion attached.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Wed, 20 Nov 2019 16:19:32 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On Wed, Nov 20, 2019 at 02:30:12PM -0500, Joe Conway wrote:\n> I tested this successfully on Rhinoceros, both with and without\n> \"db_table: { truncate }\" loaded in the policy. Updated patches attached\n> here with some editorialization. If there are no objections I will\n> commit/push both in about a day or two.\n\nThanks for the update, Joe. I have switched the patch as ready for\ncommitter, with your name as committer of the entry to reflect this\nstatus.\n--\nMichael",
"msg_date": "Fri, 22 Nov 2019 17:07:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
},
{
"msg_contents": "On 11/22/19 3:07 AM, Michael Paquier wrote:\n> On Wed, Nov 20, 2019 at 02:30:12PM -0500, Joe Conway wrote:\n>> I tested this successfully on Rhinoceros, both with and without\n>> \"db_table: { truncate }\" loaded in the policy. Updated patches attached\n>> here with some editorialization. If there are no objections I will\n>> commit/push both in about a day or two.\n> \n> Thanks for the update, Joe. I have switched the patch as ready for\n> committer, with your name as committer of the entry to reflect this\n> status.\n\nPushed.\n\nThanks,\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Sat, 23 Nov 2019 10:51:54 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: add a MAC check for TRUNCATE"
}
] |
[
{
"msg_contents": "Hi,\n\n\nScenario is a very plain upsert:\n\nCREATE TABLE upsert(key int primary key);\nINSERT INTO upsert VALUES(1) ON CONFLICT (key) DO UPDATE SET key = excluded.key;\nINSERT INTO upsert VALUES(1) ON CONFLICT (key) DO UPDATE SET key = excluded.key;\nINSERT 0 1\nINSERT 0 1\n\npostgres[8755][1]=# SELECT page_items.* FROM generate_series(0, pg_relation_size('upsert'::regclass::text) / 8192 - 1) blkno, get_raw_page('upsert'::regclass::text, blkno::int4) AS raw_page, heap_page_items(raw_page) as page_items;\n┌────┬────────┬──────────┬────────┬──────────┬──────────┬──────────┬────────┬─────────────┬────────────┬────────┬────────┬────────┬────────────┐\n│ lp │ lp_off │ lp_flags │ lp_len │ t_xmin │ t_xmax │ t_field3 │ t_ctid │ t_infomask2 │ t_infomask │ t_hoff │ t_bits │ t_oid │ t_data │\n├────┼────────┼──────────┼────────┼──────────┼──────────┼──────────┼────────┼─────────────┼────────────┼────────┼────────┼────────┼────────────┤\n│ 1 │ 8160 │ 1 │ 28 │ 19742591 │ 19742592 │ 0 │ (0,2) │ 24577 │ 256 │ 24 │ (null) │ (null) │ \\x01000000 │\n│ 2 │ 8128 │ 1 │ 28 │ 19742592 │ 19742592 │ 0 │ (0,2) │ 32769 │ 8336 │ 24 │ (null) │ (null) │ \\x01000000 │\n└────┴────────┴──────────┴────────┴──────────┴──────────┴──────────┴────────┴─────────────┴────────────┴────────┴────────┴────────┴────────────┘\n(2 rows)\n\nas you can see the same xmax is set for both row version, with the new\ninfomask being HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY | HEAP_UPDATED.\n\n\nThe reason that happens is that ExecOnConflictUpdate() needs to lock the\nrow to be able to reliably compute a new tuple version based on that\nrow. heap_update() then decides it needs to carry that xmax forward, as\nit's a valid lock:\n\n\n /*\n * If the tuple we're updating is locked, we need to preserve the locking\n * info in the old tuple's Xmax. 
Prepare a new Xmax value for this.\n */\n compute_new_xmax_infomask(HeapTupleHeaderGetRawXmax(oldtup.t_data),\n oldtup.t_data->t_infomask,\n oldtup.t_data->t_infomask2,\n xid, *lockmode, true,\n &xmax_old_tuple, &infomask_old_tuple,\n &infomask2_old_tuple);\n\n /*\n * And also prepare an Xmax value for the new copy of the tuple. If there\n * was no xmax previously, or there was one but all lockers are now gone,\n * then use InvalidXid; otherwise, get the xmax from the old tuple. (In\n * rare cases that might also be InvalidXid and yet not have the\n * HEAP_XMAX_INVALID bit set; that's fine.)\n */\n if ((oldtup.t_data->t_infomask & HEAP_XMAX_INVALID) ||\n HEAP_LOCKED_UPGRADED(oldtup.t_data->t_infomask) ||\n (checked_lockers && !locker_remains))\n xmax_new_tuple = InvalidTransactionId;\n else\n xmax_new_tuple = HeapTupleHeaderGetRawXmax(oldtup.t_data);\n\nbut we really don't need to do any of that in this case - the only\nlocker is the current backend, after all.\n\nI think this isn't great, because it'll later will cause unnecessary\nhint bit writes (although ones somewhat likely combined with setting\nXMIN_COMMITTED), and even extra work for freezing.\n\nBased on a quick look this wasn't the case before the finer grained\ntuple locking - which makes sense, there was no cases where locks would\nneed to be carried forward.\n\n\nIt's worthwhile to note that this *nearly* already works, because of the\nfollowing codepath in compute_new_xmax_infomask():\n\n\t\t * If the lock to be acquired is for the same TransactionId as the\n\t\t * existing lock, there's an optimization possible: consider only the\n\t\t * strongest of both locks as the only one present, and restart.\n\t\t */\n\t\tif (xmax == add_to_xmax)\n\t\t{\n\t\t\t/*\n\t\t\t * Note that it's not possible for the original tuple to be updated:\n\t\t\t * we wouldn't be here because the tuple would have been invisible and\n\t\t\t * we wouldn't try to update it. 
As a subtlety, this code can also\n\t\t\t * run when traversing an update chain to lock future versions of a\n\t\t\t * tuple. But we wouldn't be here either, because the add_to_xmax\n\t\t\t * would be different from the original updater.\n\t\t\t */\n\t\t\tAssert(HEAP_XMAX_IS_LOCKED_ONLY(old_infomask));\n\n\t\t\t/* acquire the strongest of both */\n\t\t\tif (mode < old_mode)\n\t\t\t\tmode = old_mode;\n\t\t\t/* mustn't touch is_update */\n\n\t\t\told_infomask |= HEAP_XMAX_INVALID;\n\t\t\tgoto l5;\n\t\t}\n\nwhich set HEAP_XMAX_INVALID for old_infomask, which would then trigger\nthe code above not carrying forward xmax to the new tuple - but\ncompute_new_xmax_infomask() operates on a copy of old_infomask.\n\n\nNote that this contradict comments in heap_update() itself:\n\n\t\t * If we found a valid Xmax for the new tuple, then the infomask bits\n\t\t * to use on the new tuple depend on what was there on the old one.\n\t\t * Note that since we're doing an update, the only possibility is that\n\t\t * the lockers had FOR KEY SHARE lock.\n\t\t */\n\nwhich seems to indicate that this behaviour wasn't forseen.\n\n\nI find the separation of concerns (and variable naming) between\ncomputations happening in heap_update() itself, and\ncompute_new_xmax_infomask() fairly confusing and redundant.\n\nI mean, compute_new_xmax_infomask() expands multis, builds a new one\nwith the updater added. Then re-expands it via GetMultiXactIdHintBits(),\nto compute infomask bits for the old tuple. Then returns. Just for\nheap_update() to re-re-expand the multi to compute the infomask bits for\nthe new tuple, again with GetMultiXactIdHintBits()? I know that there's\na cache, but still. That's some seriously repetitive work.\n\nOh, and the things GetMultiXactIdHintBits() returns imo aren't really\nwell described with hint bits.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Jul 2019 16:24:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "ON CONFLICT (and manual row locks) cause xmax of updated tuple to\n unnecessarily be set"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 4:24 PM Andres Freund <andres@anarazel.de> wrote:\n> as you can see the same xmax is set for both row version, with the new\n> infomask being HEAP_XMAX_KEYSHR_LOCK | HEAP_XMAX_LOCK_ONLY | HEAP_UPDATED.\n\nMeta remark about your test case: I am a big fan of microbenchmarks\nlike this, which execute simple DML queries using a single connection,\nand then consider if the on-disk state looks as good as expected, for\nsome value of \"good\". I had a lot of success with this approach while\ndeveloping the v12 work on nbtree, where I went to the trouble of\nautomating everything. The same test suite also helped with the nbtree\ncompression/deduplication patch just today.\n\nI like to call these tests \"wind tunnel tests\". It's far from obvious\nthat you can take a totally synthetic, serial test, and use it to\nmeasure something that is important to real workloads. It seems to\nwork well when there is a narrow, specific thing that you're\ninterested in. This is especially true when there is a real risk of\nregressing performance in some way.\n\n> but we really don't need to do any of that in this case - the only\n> locker is the current backend, after all.\n>\n> I think this isn't great, because it'll later will cause unnecessary\n> hint bit writes (although ones somewhat likely combined with setting\n> XMIN_COMMITTED), and even extra work for freezing.\n>\n> Based on a quick look this wasn't the case before the finer grained\n> tuple locking - which makes sense, there was no cases where locks would\n> need to be carried forward.\n\nI agree that this is unfortunate. Are you planning on working on it?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 24 Jul 2019 17:14:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ON CONFLICT (and manual row locks) cause xmax of updated tuple to\n unnecessarily be set"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-24 17:14:39 -0700, Peter Geoghegan wrote:\n> On Wed, Jul 24, 2019 at 4:24 PM Andres Freund <andres@anarazel.de> wrote:\n> > but we really don't need to do any of that in this case - the only\n> > locker is the current backend, after all.\n> >\n> > I think this isn't great, because it'll later cause unnecessary\n> > hint bit writes (although ones somewhat likely combined with setting\n> > XMIN_COMMITTED), and even extra work for freezing.\n> >\n> > Based on a quick look this wasn't the case before the finer grained\n> > tuple locking - which makes sense, there were no cases where locks would\n> > need to be carried forward.\n> \n> I agree that this is unfortunate. Are you planning on working on it?\n\nNot at the moment, no. Are you planning / hoping to take a stab at it?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Jul 2019 15:10:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ON CONFLICT (and manual row locks) cause xmax of updated tuple\n to unnecessarily be set"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 3:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > I agree that this is unfortunate. Are you planning on working on it?\n>\n> Not at the moment, no. Are you planning / hoping to take a stab at it?\n\nThe current behavior ought to be fixed, and it seems like it falls to\nme to do that. OTOH, anything that's MultiXact adjacent makes my eyes\nwater. I don't consider myself to be particularly well qualified.\n\nI'm sure that I could quickly find a way of making the behavior you\nhave pointed out match what is expected without causing any regression\ntests to fail, but that's the easy part.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Jul 2019 17:02:21 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: ON CONFLICT (and manual row locks) cause xmax of updated tuple to\n unnecessarily be set"
}
]