[ { "msg_contents": "Hi:\n\nWhen I run make -C subscription check, then I see the following logs\nin ./tmp_check/log/013_partition_publisher.log\n\n2020-05-11 09:37:40.778 CST [69541] sub_viaroot WARNING: terminating\nconnection because of crash of another server process\n\n2020-05-11 09:37:40.778 CST [69541] sub_viaroot DETAIL: The postmaster\nhas commanded this server process to roll back the current transaction and\nexit,\nbecause another server process exited abnormally and possibly corrupted\nshared memory.\n\nHowever there is no core file generated. In my other cases(like start pg\nmanually with bin/postgres xxx) can generate core file successfully at\nthe same machine. What might be the problem for PostgresNode case?\n\nI tried this modification, but it doesn't help.\n\n--- a/src/test/perl/PostgresNode.pm\n+++ b/src/test/perl/PostgresNode.pm\n@@ -766,7 +766,7 @@ sub start\n\n # Note: We set the cluster_name here, not in\npostgresql.conf (in\n # sub init) so that it does not get copied to standbys.\n- $ret = TestLib::system_log('pg_ctl', '-D', $self->data_dir,\n'-l',\n+ $ret = TestLib::system_log('pg_ctl', \"-c\", '-D',\n$self->data_dir, '-l',\n $self->logfile, '-o', \"--cluster-name=$name\",\n'start');\n }\n\nBest Regards\nAndy Fan", "msg_date": "Mon, 11 May 2020 09:48:34 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "No core file generated after PostgresNode->start" },
{ "msg_contents": "On Mon, May 11, 2020 at 9:48 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n>\n> 2020-05-11 09:37:40.778 CST [69541] sub_viaroot WARNING: terminating\n> connection because of crash of another server process\n>\n> Looks this doesn't mean a crash. If the test case(subscription/t/\n013_partition.pl)\nfailed, test framework kill some process, which leads the above message.\nSo you can\nignore this issue now. Thanks\n\nBest Regards\nAndy Fan", "msg_date": "Mon, 11 May 2020 11:21:01 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "On Sun, May 10, 2020 at 11:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Looks this doesn't mean a crash. 
If the test case(subscription/t/013_partition.pl)\n> failed, test framework kill some process, which leads the above message. So you can\n> ignore this issue now. Thanks\n\nI think there might be a real issue here someplace, though, because I\ncouldn't get a core dump last week when I did have a crash happening\nlocally. I didn't poke into it very hard though so I never figured out\nexactly why not, but ulimit -c unlimited didn't help.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 15:35:51 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, May 10, 2020 at 11:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Looks this doesn't mean a crash. If the test case(subscription/t/013_partition.pl)\n> > failed, test framework kill some process, which leads the above message. So you can\n> > ignore this issue now. Thanks\n> \n> I think there might be a real issue here someplace, though, because I\n> couldn't get a core dump last week when I did have a crash happening\n> locally. I didn't poke into it very hard though so I never figured out\n> exactly why not, but ulimit -c unlimited didn't help.\n\nCould \"sysctl kernel.core_pattern\" be the problem? I discovered this setting\nsometime when I also couldn't find the core dump on linux.\n\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 11 May 2020 22:26:23 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "On Mon, May 11, 2020 at 4:24 PM Antonin Houska <ah@cybertec.at> wrote:\n> Could \"sysctl kernel.core_pattern\" be the problem? 
I discovered this setting\n> sometime when I also couldn't find the core dump on linux.\n\nWell, I'm running on macOS and the core files normally show up in\n/cores, but in this case they didn't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 May 2020 22:37:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, May 11, 2020 at 4:24 PM Antonin Houska <ah@cybertec.at> wrote:\n>> Could \"sysctl kernel.core_pattern\" be the problem? I discovered this setting\n>> sometime when I also couldn't find the core dump on linux.\n\n> Well, I'm running on macOS and the core files normally show up in\n> /cores, but in this case they didn't.\n\nI have a standing note to check the permissions on /cores after any macOS\nupgrade, because every so often Apple decides that that directory ought to\nbe read-only.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 22:48:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "On Tue, May 12, 2020 at 3:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, May 10, 2020 at 11:21 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > Looks this doesn't mean a crash. If the test case(subscription/t/\n> 013_partition.pl)\n> > failed, test framework kill some process, which leads the above\n> message. So you can\n> > ignore this issue now. Thanks\n>\n> I think there might be a real issue here someplace, though, because I\n> couldn't get a core dump last week when I did have a crash happening\n> locally.\n\n\nI forget to say the failure happens on my modified version, I guess this is\nwhat\nhappened in my case (subscription/t/013_partition.pl ).\n\n1. 
It need to read data from slave, however it get ERROR, elog(ERROR, ..)\nrather crash.\n2. The test framework knows the case failed, so it kill the primary in\nsome way.\n3. The primary raises the error below.\n\n2020-05-11 09:37:40.778 CST [69541] sub_viaroot WARNING: terminating\nconnection because of crash of another server process\n\n2020-05-11 09:37:40.778 CST [69541] sub_viaroot DETAIL: The postmaster\nhas commanded this server process to roll back the current transaction and\nexit,\nbecause another server process exited abnormally and possibly corrupted\nshared memory.\n\nFinally I get the root cause by looking into the error log in slave.\nAfter I fix\nmy bug, the issue gone.\n\nBest Regards\nAndy Fan", "msg_date": "Tue, 12 May 2020 18:14:09 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: No core file generated after PostgresNode->start" },
{ "msg_contents": "On Mon, May 11, 2020 at 10:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I have a standing note to check the permissions on /cores after any macOS\n> upgrade, because every so often Apple decides that that directory ought to\n> be read-only.\n\nThanks, that was my problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:15:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "On Tue, May 12, 2020 at 04:15:26PM -0400, Robert Haas wrote:\n> On Mon, May 11, 2020 at 10:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I have a standing note to check the permissions on /cores after any macOS\n>> upgrade, because every so often Apple decides that that directory ought to\n>> be read-only.\n> \n> Thanks, that was my problem.\n\nWas that a recent problem with Catalina and/or Mojave? 
I have never\nseen an actual problem up to 10.13.\n--\nMichael", "msg_date": "Wed, 13 May 2020 16:30:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, May 12, 2020 at 04:15:26PM -0400, Robert Haas wrote:\n>> On Mon, May 11, 2020 at 10:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I have a standing note to check the permissions on /cores after any macOS\n>>> upgrade, because every so often Apple decides that that directory ought to\n>>> be read-only.\n\n>> Thanks, that was my problem.\n\n> Was that a recent problem with Catalina and/or Mojave? I have never\n> seen an actual problem up to 10.13.\n\nI don't recall exactly when I started seeing this, but it was at least\na couple years back, so maybe Mojave. I think it's related to Apple's\nefforts to make the root filesystem read-only. (It's not apparent to\nme how come I can write in /cores when \"mount\" clearly reports\n\n/dev/disk1s1 on / (apfs, local, read-only, journaled)\n\nbut nonetheless it works, as long as the directory permissions permit.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 09:49:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: No core file generated after PostgresNode->start" } ]
[ { "msg_contents": "Hi,\n\nAttached is a draft of the press release for the 2020-05-14 cumulative\nupdate. Please let me know your feedback by 2020-05-13 :)\n\nThanks,\n\nJonathan", "msg_date": "Sun, 10 May 2020 22:08:46 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2020-05-14 Press Release Draft" }, { "msg_contents": "At Sun, 10 May 2020 22:08:46 -0400, \"Jonathan S. Katz\" <jkatz@postgresql.org> wrote in \n> Attached is a draft of the press release for the 2020-05-14 cumulative\n> update. Please let me know your feedback by 2020-05-13 :)\n\nThank you. I found a typo in it.\n\n> * Ensure that a detatched partition has triggers that come from its former\n> parent removed.\n\ns/detatched/detached/ ?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 11 May 2020 13:38:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2020-05-14 Press Release Draft" }, { "msg_contents": "On Mon, 11 May 2020 at 14:09, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is a draft of the press release for the 2020-05-14 cumulative\n> update. 
Please let me know your feedback by 2020-05-13 :)\n\nHi,\n\nThanks for drafting those up.\n\nFor:\n\n* Several fixes for GENERATED columns, including an issue where it was possible\nto crash or corrupt data in a table when the output of the generated column was\nthe exact copy of a physical column on the table.\n\nI think it's important to include the \"or if the expression called a\nfunction which could, in certain cases, return its own input\".\n\nThe reason I think that's important is because there's likely no\nlegitimate case for having the expression an exact copy of the column.\n\nDavid\n\n\n", "msg_date": "Mon, 11 May 2020 21:45:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 2020-05-14 Press Release Draft" }, { "msg_contents": "On Sun, May 10, 2020 at 10:08:46PM -0400, Jonathan S. Katz wrote:\n> * Ensure that a detatched partition has triggers that come from its former\n> parent removed.\n\nI would have said: \"fix for issue which prevented/precluded detaching\npartitions which have inherited ROW triggers\"\n\n> * Several fixes for `REINDEX CONCURRENTLY`, particular with dealing with issue\n> when a `REINDEX CONCURRENTLY` operation fails.\n\n\".. in particular relating to an issue ...\"\n\n> * Avoid scanning irrelevant timelines during archive recovery, which can\n> eliminate attempts to fetch nonexistent WAL files from archive storage.\n\nI feel like this is phrased backwards. The goal is to avoid (attempting to)\nfetch nonextant WALs, and the mechanism is by skipping timelines. Maybe:\n\n * Avoid attempting to fetch nonexistent WAL files from archive storage during\n * recovery by skipping irrelevant timelines.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 11 May 2020 13:01:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: 2020-05-14 Press Release Draft" }, { "msg_contents": "Hi,\n\nOn 5/10/20 10:08 PM, Jonathan S. 
Katz wrote:\n> Hi,\n> \n> Attached is a draft of the press release for the 2020-05-14 cumulative\n> update. Please let me know your feedback by 2020-05-13 :)\n\nThank you for the feedback. As per usual, I applied some combination of\n{all, some, none}.\n\nPlease see v2.\n\nThanks again for the review!\n\nJonathan", "msg_date": "Tue, 12 May 2020 09:01:27 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2020-05-14 Press Release Draft" } ]
[ { "msg_contents": "Hello hackers,\r\n\r\nI am researching about 'origin' in PostgreSQL, mainly it used in logical\r\ndecoding to filter transaction from non-local source. I notice that the\r\n'origin' is stored in commit_ts so that I think we are possible to get 'origin'\r\nof a transaction from commit_ts.\r\n\r\nBut I can not fond any code to get 'origin' from commit_ts, just like it is\r\nproducing data which nobody cares about. Can I know what's the purpose\r\nof the 'origin' in commit_ts? Do you think we should add some support\r\nto the careless data?\r\n\r\nFor example, I add a function to get 'origin' from commit_ts:\r\n=======================================\r\npostgres=# select pg_xact_commit_origin('490');\r\n pg_xact_commit_origin \r\n-----------------------\r\n test_origin\r\n(1 row)\r\n\r\npostgres=# select pg_xact_commit_origin('491');\r\n pg_xact_commit_origin \r\n-----------------------\r\n test_origin1\r\n(1 row)\r\n\r\npostgres=#\r\n=======================================\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 11 May 2020 16:43:11 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "A patch for get origin from commit_ts." }, { "msg_contents": "On Mon, May 11, 2020 at 04:43:11PM +0800, movead.li@highgo.ca wrote:\n> But I can not fond any code to get 'origin' from commit_ts, just like it is\n> producing data which nobody cares about. Can I know what's the purpose\n> of the 'origin' in commit_ts? 
Do you think we should add some support\n> to the careless data?\n\nI have not thought about this matter, but it seems to me that you\nshould add this patch to the upcoming commit fest for evaluation:\nhttps://commitfest.postgresql.org/28/\n\nThis is going to take a couple of months though as the main focus\nlately is the stability of 13.\n--\nMichael", "msg_date": "Tue, 12 May 2020 10:45:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">I have not thought about this matter, but it seems to me that you\r\n>should add this patch to the upcoming commit fest for evaluation:\r\n>https://commitfest.postgresql.org/28/ \r\nThanks.\r\n\r\nI think about it more detailed, and find it's better to show the 'roident'\r\nother than 'roname'. Because an old 'roident' value will be used\r\nimmediately after dropped, and a new patch attached with test case\r\nand documentation.\r\n\r\n============================================\r\nSELECT pg_xact_commit_origin('490');\r\n pg_xact_commit_origin \r\n-----------------------\r\n 1\r\n(1 row)\r\n\r\nSELECT pg_xact_commit_origin('491');\r\n pg_xact_commit_origin \r\n-----------------------\r\n 2\r\n(1 row)\r\n============================================\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 13 May 2020 16:29:19 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "Hello hackers,\n\nWe already have pg_xact_commit_timestamp() that returns the timestamp of\nthe commit. It may be better to have one single function returning both\ntimestamp and origin for a given transaction ID.\n\nA second thing is that TransactionIdGetCommitTsData() was introdued in\ncore(73c986add). 
It has only one caller pg_xact_commit_timestamp() which\npasses RepOriginId as NULL, making last argument to the\nTransactionIdGetCommitTsData() a dead code in core.\n\nQuick code search shows that it is getting used by pglogical (caller:\nhttps://sources.debian.org/src/pglogical/2.3.2-1/pglogical_conflict.c/?hl=509#L509).\nCCing Craig Ringer and Petr Jelinek for the inputs.\n\nWarm Regards,\nMadan Kumar K\n\"There is no Elevator to Success. You have to take the Stairs\"\n\n\n", "msg_date": "Mon, 29 Jun 2020 18:17:27 -0700", "msg_from": "Madan Kumar <madankumar1993@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Mon, Jun 29, 2020 at 06:17:27PM -0700, Madan Kumar wrote:\n> We already have pg_xact_commit_timestamp() that returns the timestamp of\n> the commit. It may be better to have one single function returning both\n> timestamp and origin for a given transaction ID.\n> \n> A second thing is that TransactionIdGetCommitTsData() was introdued in\n> core(73c986add). It has only one caller pg_xact_commit_timestamp() which\n> passes RepOriginId as NULL, making last argument to the\n> TransactionIdGetCommitTsData() a dead code in core.\n> \n> Quick code search shows that it is getting used by pglogical (caller:\n> https://sources.debian.org/src/pglogical/2.3.2-1/pglogical_conflict.c/?hl=509#L509).\n> CCing Craig Ringer and Petr Jelinek for the inputs.\n\nAnother question that has popped up when doing this review is what\nwould be the use-case of adding this information at SQL level knowing\nthat logical replication exists since 10? Having dead code in the\nbackend tree is not a good idea of course, so we can also have as\nargument to simplify TransactionIdGetCommitTsData(). 
Now, pglogical\nhas pglogical_xact_commit_timestamp_origin() to get the replication\norigin with its own function so providing an extending equivalent\nreturning one row with two fields would be nice for pglogical so as\nthis function is not necessary. As mentioned by Madan, the portion of\nthe code using TransactionIdGetCommitTsData() relies on it for\nconflicts of updates (the first win, last win logic at quick glance).\n\nI am adding Peter E in CC for an opinion, the last commits of\npglogical are from him.\n--\nMichael", "msg_date": "Tue, 30 Jun 2020 13:41:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">> A second thing is that TransactionIdGetCommitTsData() was introdued in\r\n>> core(73c986add). It has only one caller pg_xact_commit_timestamp() which\r\n>> passes RepOriginId as NULL, making last argument to the\r\n>> TransactionIdGetCommitTsData() a dead code in core.\r\n>>\r\n>> Quick code search shows that it is getting used by pglogical (caller:\r\n>> https://sources.debian.org/src/pglogical/2.3.2-1/pglogical_conflict.c/?hl=509#L509).\r\n>> CCing Craig Ringer and Petr Jelinek for the inputs.\r\n \r\n>Another question that has popped up when doing this review is what\r\n>would be the use-case of adding this information at SQL level knowing\r\n>that logical replication exists since 10? Having dead code in the\r\n>backend tree is not a good idea of course, so we can also have as\r\n>argument to simplify TransactionIdGetCommitTsData(). Now, pglogical\r\n>has pglogical_xact_commit_timestamp_origin() to get the replication\r\n>origin with its own function so providing an extending equivalent\r\n>returning one row with two fields would be nice for pglogical so as\r\n>this function is not necessary. 
As mentioned by Madan, the portion of\r\n>the code using TransactionIdGetCommitTsData() relies on it for\r\n>conflicts of updates (the first win, last win logic at quick glance).\r\n\r\nThanks for the explanation, the origin in commit_ts seems useless, I am just\r\nwant to know why it appears there. It's ok to close this issue if we do not\r\nwant to touch it now.\r\n\r\nAnd I am more interest in origin in wal, if data from a logical replicate or a \r\nmanual origin then many wal records will get a 'RepOriginId', 'RepOriginId'\r\nin 'xact' wal record may help to do some filter, the other same dead code\r\ntoo. So can you help me to understand why or the historical reason for that?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Tue, 30 Jun 2020 13:57:40 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." },
{ "msg_contents": "On Tue, 30 Jun 2020 at 02:17, Madan Kumar <madankumar1993@gmail.com> wrote:\n\n\n> We already have pg_xact_commit_timestamp() that returns the timestamp of\n> the commit.\n\n\nYes, pg_xact_commit_origin() is a good name for an additional function. +1\nfor this.\n\n\n> It may be better to have one single function returning both\n> timestamp and origin for a given transaction ID.\n>\n\nNo need to change existing APIs.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases", "msg_date": "Tue, 30 Jun 2020 13:58:17 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." },
{ "msg_contents": "On 2020-Jun-30, Michael Paquier wrote:\n\n> Another question that has popped up when doing this review is what\n> would be the use-case of adding this information at SQL level knowing\n> that logical replication exists since 10?\n\nLogical replication in core is a far cry from a fully featured\nreplication solution. Kindly do not claim that we can now remove\nfeatures just because in-core logical replication does not use them;\nthis argument is ignoring the fact that we're still a long way from\ndeveloping actually powerful logical replication capabilities.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 14:32:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Tue, Jun 30, 2020 at 02:32:47PM -0400, Alvaro Herrera wrote:\n> On 2020-Jun-30, Michael Paquier wrote:\n>> Another question that has popped up when doing this review is what\n>> would be the use-case of adding this information at SQL level knowing\n>> that logical replication exists since 10?\n> \n> Logical replication in core is a far cry from a fully featured\n> replication solution. 
Kindly do not claim that we can now remove\n> features just because in-core logical replication does not use them;\n> this argument is ignoring the fact that we're still a long way from\n> developing actually powerful logical replication capabilities.\n\nThanks for the feedback. If that sounded aggressive in some way, this\nwas not my intention, so my apologies for that. Now, I have to admit\nthat I am worried to see in core code that stands as dead without any\nactual way to test it directly. Somebody hacking this code cannot be\nsure if they are breaking it or not, except if they test it with\npglogical. So it is good to close the gap here. It also brings a\nsecond point IMO, could the documentation be improved to describe more \nuse-cases where these functions would be useful? The documentation\ngap is not a problem this patch has to deal with, though.\n--\nMichael", "msg_date": "Thu, 2 Jul 2020 10:50:46 +0900", "msg_from": "michael@paquier.xyz", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Tue, Jun 30, 2020 at 01:58:17PM +0100, Simon Riggs wrote:\n> On Tue, 30 Jun 2020 at 02:17, Madan Kumar <madankumar1993@gmail.com> wrote:\n>> It may be better to have one single function returning both\n>> timestamp and origin for a given transaction ID.\n> \n> No need to change existing APIs.\n\nAdding a new function able to return both fields at the same time does\nnot imply that we'd remove the original one, it just implies that we\nwould be able to retrieve both fields with a single call of\nTransactionIdGetCommitTsData(), saving from an extra CommitTsSLRULock \ntaken, etc. That's actually what pglogical does with\nits pglogical_xact_commit_timestamp_origin() in\npglogical_functions.c. 
So adding one function able to return one\ntuple with the two fields, without removing the existing\npg_xact_commit_timestamp() makes the most sense, no?\n--\nMichael", "msg_date": "Thu, 2 Jul 2020 10:58:52 +0900", "msg_from": "michael@paquier.xyz", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Thu, 2 Jul 2020 at 02:58, <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 30, 2020 at 01:58:17PM +0100, Simon Riggs wrote:\n> > On Tue, 30 Jun 2020 at 02:17, Madan Kumar <madankumar1993@gmail.com>\n> wrote:\n> >> It may be better to have one single function returning both\n> >> timestamp and origin for a given transaction ID.\n> >\n> > No need to change existing APIs.\n>\n> Adding a new function able to return both fields at the same time does\n> not imply that we'd remove the original one, it just implies that we\n> would be able to retrieve both fields with a single call of\n> TransactionIdGetCommitTsData(), saving from an extra CommitTsSLRULock\n> taken, etc. That's actually what pglogical does with\n> its pglogical_xact_commit_timestamp_origin() in\n> pglogical_functions.c. 
So adding one function able to return one\n> tuple with the two fields, without removing the existing\n> pg_xact_commit_timestamp() makes the most sense, no?\n>\n\nOK\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases", "msg_date": "Thu, 2 Jul 2020 08:52:45 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." 
}, { "msg_contents": "On 02/07/2020 03:58, michael@paquier.xyz wrote:\n> On Tue, Jun 30, 2020 at 01:58:17PM +0100, Simon Riggs wrote:\n>> On Tue, 30 Jun 2020 at 02:17, Madan Kumar <madankumar1993@gmail.com> wrote:\n>>> It may be better to have one single function returning both\n>>> timestamp and origin for a given transaction ID.\n>>\n>> No need to change existing APIs.\n> \n> Adding a new function able to return both fields at the same time does\n> not imply that we'd remove the original one, it just implies that we\n> would be able to retrieve both fields with a single call of\n> TransactionIdGetCommitTsData(), saving from an extra CommitTsSLRULock\n> taken, etc. That's actually what pglogical does with\n> its pglogical_xact_commit_timestamp_origin() in\n> pglogical_functions.c. So adding one function able to return one\n> tuple with the two fields, without removing the existing\n> pg_xact_commit_timestamp() makes the most sense, no?\n\n\nAgreed, sounds reasonable.\n\nI also (I suspect like �lvaro) parsed your original message as wanting \nto remove origin from the record completely.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n", "msg_date": "Thu, 2 Jul 2020 10:12:02 +0200", "msg_from": "Petr Jelinek <petr@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Thu, Jul 02, 2020 at 10:12:02AM +0200, Petr Jelinek wrote:\n> On 02/07/2020 03:58, michael@paquier.xyz wrote:\n>> Adding a new function able to return both fields at the same time does\n>> not imply that we'd remove the original one, it just implies that we\n>> would be able to retrieve both fields with a single call of\n>> TransactionIdGetCommitTsData(), saving from an extra CommitTsSLRULock\n>> taken, etc. That's actually what pglogical does with\n>> its pglogical_xact_commit_timestamp_origin() in\n>> pglogical_functions.c. 
So adding one function able to return one\n>> tuple with the two fields, without removing the existing\n>> pg_xact_commit_timestamp() makes the most sense, no?\n> \n> Agreed, sounds reasonable.\n\nThanks. Movead, please note that the patch is waiting on author?\nCould you send an update if you think that those changes make sense?\n--\nMichael", "msg_date": "Fri, 3 Jul 2020 14:10:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">Thanks. Movead, please note that the patch is waiting on author? \n\n>Could you send an update if you think that those changes make sense? \n\nThanks for approval the issue, I will send a patch at Monday. \nRegards,\n\nHighgo Software (Canada/China/Pakistan) \n\nURL : http://www.highgo.ca/ \n\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n>Thanks. Movead, please note that the patch is waiting on author? >Could you send an update if you think that those changes make sense? Thanks for approval the issue, I will send a patch at Monday. Regards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Sat, 04 Jul 2020 13:44:49 +0800", "msg_from": "Movead Li <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">Thanks. 
Movead, please note that the patch is waiting on author?\r\n>Could you send an update if you think that those changes make sense?\r\n\r\nI make a patch as Michael Paquier described that use a new function to\r\nreturn transactionid and origin, and I add a origin version to \r\npg_last_committed_xact() too, now it looks like below:\r\n\r\n============================================\r\npostgres=# SELECT txid_current() as txid \\gset\r\npostgres=# SELECT * FROM pg_xact_commit_timestamp_origin(:'txid');\r\n timestamp | origin \r\n-------------------------------------+--------\r\n 2020-07-04 17:52:10.199623+08 | 1\r\n(1 row)\r\n\r\npostgres=# SELECT * FROM pg_last_committed_xact_with_origin();\r\n xid | timestamp | origin \r\n-----+------------------------------------+--------\r\n 506 | 2020-07-04 17:52:10.199623+08 | 1\r\n(1 row)\r\n\r\npostgres=#\r\n============================================\r\n\r\n---\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Sat, 4 Jul 2020 18:01:28 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Sat, Jul 04, 2020 at 06:01:28PM +0800, movead.li@highgo.ca wrote:\n> I make a patch as Michael Paquier described that use a new function to\n> return transactionid and origin, and I add a origin version to \n> pg_last_committed_xact() too, now it looks like below:\n\n+SELECT pg_replication_origin_create('test_commit_ts: get_origin_1');\n+SELECT pg_replication_origin_create('test_commit_ts: get_origin_2');\n+SELECT pg_replication_origin_create('test_commit_ts: get_origin_3');\n\nWhy do you need three replication origins to test three times the same\npattern? Wouldn't one be enough and why don't you check after the\ntimestamp? 
I would also two extra tests: one with a NULL input and an\nextra one where the data could not be found.\n\n+ found = TransactionIdGetCommitTsData(xid, &ts, &nodeid);\n+\n+ if (!found)\n+ PG_RETURN_NULL();\n\nThis part also looks incorrect to me, I think that you should still\nreturn two tuples, both marked as NULL. You can do that just by\nswitching the nulls flags to true for the two values if nothing can be\nfound.\n--\nMichael", "msg_date": "Mon, 6 Jul 2020 09:36:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">+SELECT pg_replication_origin_create('test_commit_ts: get_origin_1');\r\n>+SELECT pg_replication_origin_create('test_commit_ts: get_origin_2');\r\n>+SELECT pg_replication_origin_create('test_commit_ts: get_origin_3');\r\n>\r\n>Why do you need three replication origins to test three times the same\r\n>pattern? Wouldn't one be enough and why don't you check after the\r\n>timestamp? I would also two extra tests: one with a NULL input and an\r\n>extra one where the data could not be found.\r\n> \r\n>+ found = TransactionIdGetCommitTsData(xid, &ts, &nodeid);\r\n>+\r\n>+ if (!found)\r\n>+ PG_RETURN_NULL();\r\n> \r\n>This part also looks incorrect to me, I think that you should still\r\n>return two tuples, both marked as NULL. You can do that just by\r\n>switching the nulls flags to true for the two values if nothing can be\r\n>found.\r\nThanks for the points and follow them, new patch attached.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 6 Jul 2020 11:12:30 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." 
}, { "msg_contents": "On Mon, Jul 06, 2020 at 11:12:30AM +0800, movead.li@highgo.ca wrote:\n> Thanks for the points and follow them, new patch attached.\n\nThat was fast, thanks. I have not tested the patch, but there are\ntwo things I missed a couple of hours back. Why do you need\npg_last_committed_xact_with_origin() to begin with? Wouldn't it be\nmore simple to just add a new column to pg_last_committed_xact() for\nthe replication origin? Contrary to pg_xact_commit_timestamp() that\nshould not be broken for compatibility reasons because it returns only\none value, we don't have this problem with pg_last_committed_xact() as\nit already returns one tuple with two values.\n\n+{ oid => '4179', descr => 'get commit origin of a transaction',\n\nA second thing is that the OID of the new function should be in the \nrange 8000..9999, as per the policy introduced in commit a6417078.\nsrc/include/catalog/unused_oids can be used to pick up a value.\n--\nMichael", "msg_date": "Mon, 6 Jul 2020 17:01:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">That was fast, thanks. I have not tested the patch, but there are\r\n>two things I missed a couple of hours back. Why do you need\r\n>pg_last_committed_xact_with_origin() to begin with? Wouldn't it be\r\n>more simple to just add a new column to pg_last_committed_xact() for\r\n>the replication origin? 
Contrary to pg_xact_commit_timestamp() that\r\n>should not be broken for compatibility reasons because it returns only\r\n>one value, we don't have this problem with pg_last_committed_xact() as\r\n>it already returns one tuple with two values.\r\nYes make sense, changed in new patch.\r\n \r\n>+{ oid => '4179', descr => 'get commit origin of a transaction',\r\n>A second thing is that the OID of the new function should be in the\r\n>range 8000..9999, as per the policy introduced in commit a6417078.\r\n>src/include/catalog/unused_oids can be used to pick up a value.\r\nThanks, very helpful information and I have follow that.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Tue, 7 Jul 2020 10:02:29 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Tue, Jul 07, 2020 at 10:02:29AM +0800, movead.li@highgo.ca wrote:\n> Thanks, very helpful information and I have followed that.\n\nCool, thanks. I have gone through your patch in details, and updated\nit as the attached. Here are some comments.\n\n'8000' as OID for the new function was not really random, so to be\nfair with the other patches, I picked up the first random value\nunused_oids has given me instead.\n\nThere were some indentation issues, and pgindent got that fixed.\n\nI think that it is better to use \"replication origin\" in the docs\ninstead of just origin. 
I have kept \"origin\" in the functions for\nnow as that sounded cleaner to me, but we may consider using something\nlike \"reporigin\" as well as attribute name.\n\nThe tests could just use tstzrange() to make sure that the timestamps\nhave valid values, so I have switched to that, and did not resist to\ndo the same in the existing tests.\n\n+-- Test when it can not find the transaction\n+SELECT * FROM pg_xact_commit_timestamp_origin((:'txid_set_origin'::text::int +\n10)::text::xid) x;\nThis test could become unstable, particularly if it gets used in a\nparallel environment, so I have removed it. Perhaps I am just\nover-pessimistic here though..\n\nAs a side note, I think that we could just remove the alternate output\nof commit_ts/, as it does not really get used because of the\nNO_INSTALLCHECK present in the module's Makefile. That would be the\njob of a different patch, so I have updated it accordingly. Glad to\nsee that you did not forget to adapt it in your own patch.\n\n(The change in catversion.h is a self-reminder...)\n--\nMichael", "msg_date": "Tue, 7 Jul 2020 14:35:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">Cool, thanks. I have gone through your patch in details, and updated\r\n>it as the attached. Here are some comments.\r\n \r\n>'8000' as OID for the new function was not really random, so to be\r\n>fair with the other patches, I picked up the first random value\r\n>unused_oids has given me instead.\r\n> \r\n>There were some indentation issues, and pgindent got that fixed.\r\n \r\n>I think that it is better to use \"replication origin\" in the docs\r\n>instead of just origin. 
I have kept \"origin\" in the functions for\r\n>now as that sounded cleaner to me, but we may consider using something\r\n>like \"reporigin\" as well as attribute name.\r\n> \r\n>The tests could just use tstzrange() to make sure that the timestamps\r\n>have valid values, so I have switched to that, and did not resist to\r\n>do the same in the existing tests.\r\n> \r\n>+-- Test when it can not find the transaction\r\n>+SELECT * FROM pg_xact_commit_timestamp_origin((:'txid_set_origin'::text::int +\r\n>10)::text::xid) x;\r\n>This test could become unstable, particularly if it gets used in a\r\n>parallel environment, so I have removed it. Perhaps I am just\r\n>over-pessimistic here though..\r\n> \r\n>As a side note, I think that we could just remove the alternate output\r\n>of commit_ts/, as it does not really get used because of the\r\n>NO_INSTALLCHECK present in the module's Makefile. That would be the\r\n>job of a different patch, so I have updated it accordingly. Glad to\r\n>see that you did not forget to adapt it in your own patch.\r\n> \r\n>(The change in catversion.h is a self-reminder...)\r\nThanks for all of that, so many details I still need to pay attention when \r\nsubmit a patch.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>Cool, thanks.  I have gone through your patch in details, and updated>it as the attached.  Here are some comments. >'8000' as OID for the new function was not really random, so to be>fair with the other patches, I picked up the first random value>unused_oids has given me instead.> >There were some indentation issues, and pgindent got that fixed. >I think that it is better to use \"replication origin\" in the docs>instead of just origin.  
I have kept \"origin\" in the functions for>now as that sounded cleaner to me, but we may consider using something>like \"reporigin\" as well as attribute name.> >The tests could just use tstzrange() to make sure that the timestamps>have valid values, so I have switched to that, and did not resist to>do the same in the existing tests.> >+-- Test when it can not find the transaction>+SELECT * FROM pg_xact_commit_timestamp_origin((:'txid_set_origin'::text::int +>10)::text::xid) x;>This test could become unstable, particularly if it gets used in a>parallel environment, so I have removed it.  Perhaps I am just>over-pessimistic here though..> >As a side note, I think that we could just remove the alternate output>of commit_ts/, as it does not really get used because of the>NO_INSTALLCHECK present in the module's Makefile.  That would be the>job of a different patch, so I have updated it accordingly.  Glad to>see that you did not forget to adapt it in your own patch.> >(The change in catversion.h is a self-reminder...)Thanks for all of that, so many details I still need to pay attention when submit a patch.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 8 Jul 2020 09:31:24 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Wed, Jul 08, 2020 at 09:31:24AM +0800, movead.li@highgo.ca wrote:\n> Thanks for all of that, so many details I still need to pay attention when \n> submit a patch.\n\nNo problem. We are all here to learn, and nothing can be perfect, if\nperfection is even possible :)\n\nRegarding the attribute name, I was actually considering to just use\n\"roident\" instead. This is more consistent with pglogical, and that's\nalso the field name we use in ReplicationState[OnDisk]. 
What do you\nthink?\n--\nMichael", "msg_date": "Wed, 8 Jul 2020 10:51:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">Regarding the attribute name, I was actually considering to just use\r\n>\"roident\" instead. This is more consistent with pglogical, and that's\r\n>also the field name we use in ReplicationState[OnDisk]. What do you\r\n>think?\r\nYes that's is the right way, I can see it's 'roident' in pg_replication_origin\r\ncatalog too.\r\nWhat's your v6 patch based on, I can not apply it.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\r\n\n\n>Regarding the attribute name, I was actually considering to just use>\"roident\" instead.  This is more consistent with pglogical, and that's>also the field name we use in ReplicationState[OnDisk].  What do you>think?Yes that's is the right way, I can see it's 'roident' in pg_replication_origincatalog too.What's your v6 patch based on, I can not apply it.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 8 Jul 2020 10:11:28 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Wed, Jul 08, 2020 at 10:11:28AM +0800, movead.li@highgo.ca wrote:\n> Yes that's is the right way, I can see it's 'roident' in pg_replication_origin\n> catalog too.\n> What's your v6 patch based on, I can not apply it.\n\nThere is a conflict in catversion.h. 
If you wish to test the patch,\nplease feel free to use the attached where I have updated the\nattribute name to roident.\n--\nMichael", "msg_date": "Wed, 8 Jul 2020 15:08:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">There is a conflict in catversion.h. If you wish to test the patch,\r\n>please feel free to use the attached where I have updated the\r\n>attribute name to roident.\r\nI think everything is ok, but be careful the new patch is in Windows\r\nformat now.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\r\n\n\n>There is a conflict in catversion.h.  If you wish to test the patch,>please feel free to use the attached where I have updated the>attribute name to roident.\nI think everything is ok, but be careful the new patch is in Windowsformat now.\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 9 Jul 2020 10:04:23 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Thu, Jul 09, 2020 at 10:04:23AM +0800, movead.li@highgo.ca wrote:\n> but be careful the new patch is in Windows format now.\n\nThat would be surprising. Why do you think that?\n--\nMichael", "msg_date": "Thu, 9 Jul 2020 14:37:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">> but be careful the new patch is in Windows format now. \r\n>That would be surprising. 
Why do you think that?\r\nI am not sure why, I can not apply your patch, and I open it\r\nwith vscode and shows a CRLF format, if I change the patch to\r\nLF then nothing wrong.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n \r\n\n\n>> but be careful the new patch is in Windows format now. >That would be surprising.  Why do you think that?I am not sure why, I can not apply your patch, and I open itwith vscode and shows a CRLF format, if I change the patch toLF then nothing wrong.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 9 Jul 2020 14:19:46 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Thu, Jul 09, 2020 at 02:19:46PM +0800, movead.li@highgo.ca wrote:\n> I am not sure why, I can not apply your patch, and I open it\n> with vscode and shows a CRLF format, if I change the patch to\n> LF then nothing wrong.\n\nThis looks like an issue in your environment, like with git's autocrlf\nor such? rep-origin-superuser-v7.patch has no CRLF.\n--\nMichael", "msg_date": "Thu, 9 Jul 2020 15:33:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": ">> I am not sure why, I can not apply your patch, and I open it\r\n>> with vscode and shows a CRLF format, if I change the patch to\r\n>> LF then nothing wrong.\r\n \r\n>This looks like an issue in your environment, like with git's autocrlf\r\n>or such? 
rep-origin-superuser-v7.patch has no CRLF.\r\nYes thanks, that's my environment problem.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n>> I am not sure why, I can not apply your patch, and I open it>> with vscode and shows a CRLF format, if I change the patch to>> LF then nothing wrong. >This looks like an issue in your environment, like with git's autocrlf>or such?  rep-origin-superuser-v7.patch has no CRLF.Yes thanks, that's my environment problem.\n\nRegards,Highgo Software (Canada/China/Pakistan) URL : www.highgo.ca EMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 9 Jul 2020 14:42:30 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Wed, Jul 08, 2020 at 03:08:24PM +0900, Michael Paquier wrote:\n> There is a conflict in catversion.h. If you wish to test the patch,\n> please feel free to use the attached where I have updated the\n> attribute name to roident.\n\nPlease note that I have switched the patch as ready for committer. So\nI'll try to get that done, with roident as attribute name. If\nsomebody prefers a different name or has an objection, please feel\nfree to chime in.\n--\nMichael", "msg_date": "Fri, 10 Jul 2020 10:06:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." }, { "msg_contents": "On Fri, Jul 10, 2020 at 10:06:06AM +0900, Michael Paquier wrote:\n> Please note that I have switched the patch as ready for committer. So\n> I'll try to get that done, with roident as attribute name. 
If\n> somebody prefers a different name or has an objection, please feel\n> free to chime in.\n\nHearing nothing, committed after fixing few things:\n- The docs reversed <parameter> and <type>.\n- The comment on top of GetLatestCommitTsData() mentioned \"extra\"\ninstead of \"nodeid\". Not an issue of this patch but I have just fixed\nit.\n- We could just have used memset for nulls when the data could not be\nfound in pg_xact_commit_timestamp_origin().\n- Added some casts to Oid for the new ObjectIdGetDatum() calls.\n- Changed the tests to not show numerical values for roident, checking\ninstead that the values are non-zero for the cases where we don't\nexpect a valid replication origin. For the valid cases, I have just\nused a join with pg_replication_origin to grab roname. This makes the\ntests more portable.\n\nAfter applying the patch as of b1e48bb, longfin has also complained\nthat regression tests should prefix replication origins with\n\"regress_\". This has been fixed with ea3e15d.\n--\nMichael", "msg_date": "Sun, 12 Jul 2020 21:42:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A patch for get origin from commit_ts." } ]
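A note on the interface that the thread above converged on: pg_xact_commit_timestamp_origin() returns a (timestamp, roident) pair, pg_last_committed_xact() gained a roident column, and the committed regression tests resolve roident to a readable name by joining against pg_replication_origin. The session below is an illustrative sketch only: it assumes a server built with the committed patch and track_commit_timestamp = on, and the origin name regress_demo_origin is invented for the example.

```sql
-- Create and attach a replication origin, then commit a transaction
-- whose commit record will carry that origin's identifier.
SELECT pg_replication_origin_create('regress_demo_origin');
SELECT pg_replication_origin_session_setup('regress_demo_origin');
SELECT txid_current() AS txid \gset

-- Fetch the commit timestamp and origin in a single call, resolving
-- the numeric roident to its name via pg_replication_origin.
SELECT c.timestamp, r.roname
  FROM pg_xact_commit_timestamp_origin(:'txid') AS c
  JOIN pg_replication_origin AS r ON r.roident = c.roident;

-- The last-committed-transaction view of the same information.
SELECT * FROM pg_last_committed_xact();
```

As discussed upthread, the point of the combined function is that both values come from a single TransactionIdGetCommitTsData() call, so CommitTsSLRULock is only taken once.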
[ { "msg_contents": "Hello\n\nPer discussion in thread [1], I propose the following patch to give\nanother adjustment to the xlogreader API.  This results in a small but\nnot insignificant net reduction of lines of code.  What this patch does\nis adjust the signature of these new xlogreader callbacks, making the\nAPI simpler.  The changes are:\n\n* the segment_open callback installs the FD in xlogreader state itself,\n  instead of passing the FD back.  This was suggested by Kyotaro\n  Horiguchi in that thread[2].\n\n* We no longer pass segcxt to segment_open; it's in XLogReaderState, \n  which is already an argument.\n\n* We no longer pass seg/segcxt to WALRead; instead, that function takes\n  them from XLogReaderState, which is already an argument.\n  (This means XLogSendPhysical has to drink more of the fake_xlogreader\n  kool-aid.)\n\nI claim the reason to do it now instead of pg14 is to make it simpler\nfor third-party xlogreader callers to adjust.\n\n(Some might be thinking that I do this to avoid an API change later, but\nmy guts tell me that we'll adjust xlogreader again in pg14 for the\nencryption stuff and other reasons, so.)\n\n\n[1] https://postgr.es/m/20200406025651.fpzdb5yyb7qyhqko@alap3.anarazel.de\n[2] https://postgr.es/m/20200508.114228.963995144765118400.horikyota.ntt@gmail.com\n\n-- \nÁlvaro Herrera                Developer, https://www.PostgreSQL.org/", "msg_date": "Mon, 11 May 2020 16:33:36 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg13: xlogreader API adjust" }, { "msg_contents": "At Mon, 11 May 2020 16:33:36 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> Hello\n> \n> Per discussion in thread [1], I propose the following patch to give\n> another adjustment to the xlogreader API.  This results in a small but\n> not insignificant net reduction of lines of code.  What this patch does\n> is adjust the signature of these new xlogreader callbacks, making the\n> API simpler. 
The changes are:\n> \n> * the segment_open callback installs the FD in xlogreader state itself,\n> instead of passing the FD back. This was suggested by Kyotaro\n> Horiguchi in that thread[2].\n> \n> * We no longer pass segcxt to segment_open; it's in XLogReaderState, \n> which is already an argument.\n> \n> * We no longer pass seg/segcxt to WALRead; instead, that function takes\n> them from XLogReaderState, which is already an argument.\n> (This means XLogSendPhysical has to drink more of the fake_xlogreader\n> kool-aid.)\n> \n> I claim the reason to do it now instead of pg14 is to make it simpler\n> for third-party xlogreader callers to adjust.\n> \n> (Some might be thinking that I do this to avoid an API change later, but\n> my guts tell me that we'll adjust xlogreader again in pg14 for the\n> encryption stuff and other reasons, so.)\n> \n> \n> [1] https://postgr.es/m/20200406025651.fpzdb5yyb7qyhqko@alap3.anarazel.de\n> [2] https://postgr.es/m/20200508.114228.963995144765118400.horikyota.ntt@gmail.com\n\nThe simplified interface of WALRead looks far better to me since it no\nlonger has unreasonable duplicates of parameters. I agree to the\ndiscussion about third-party xlogreader callers but not sure about\nback-patching burden.\n\nI'm not sure the reason for wal_segment_open and WalSndSegmentOpen\nbeing modified different way about error handling of BasicOpenFile, I\nprefer the WalSndSegmentOpen way. However, that difference doesn't\nharm anything so I'm fine with the current patch.\n\n\n+\tfake_xlogreader.seg = *sendSeg;\n+\tfake_xlogreader.segcxt = *sendCxt;\n\nfake_xlogreader.seg is a different instance from *sendSeg. WALRead\nmodifies fake_xlogreader.seg but does not modify *sendSeg. Thus the\nchange doesn't persist. 
On the other hand WalSndSegmentOpen reads\n*sendSeg, which is not under control of WALRead.\n\nMaybe we had better to make fake_xlogreader be a global variable of\nwalsender.c that covers the current sendSeg and sendCxt.\n\nregards.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 12 May 2020 11:01:03 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "On 2020-May-12, Kyotaro Horiguchi wrote:\n\n> I'm not sure the reason for wal_segment_open and WalSndSegmentOpen\n> being modified different way about error handling of BasicOpenFile, I\n> prefer the WalSndSegmentOpen way. However, that difference doesn't\n> harm anything so I'm fine with the current patch.\n\nYeah, I couldn't decide which style I liked the most. I used the one\nyou suggested.\n\n> +\tfake_xlogreader.seg = *sendSeg;\n> +\tfake_xlogreader.segcxt = *sendCxt;\n> \n> fake_xlogreader.seg is a different instance from *sendSeg. WALRead\n> modifies fake_xlogreader.seg but does not modify *sendSeg. Thus the\n> change doesn't persist. On the other hand WalSndSegmentOpen reads\n> *sendSeg, which is not under control of WALRead.\n> \n> Maybe we had better to make fake_xlogreader be a global variable of\n> walsender.c that covers the current sendSeg and sendCxt.\n\nI tried that. I was about to leave it at just modifying physical\nwalsender (simple enough, and it passed tests), but I noticed that\nWalSndErrorCleanup() would be a problem because we don't know if it's\nphysical or logical walsender. So in the end I added a global\n'xlogreader' pointer in walsender.c -- logical walsender sets it to the\ntrue xlogreader it has inside the logical decoding context, and physical\nwalsender sets it to its fake xlogreader. That seems to work nicely.\nsendSeg/sendCxt are gone entirely. 
Logical walsender was doing\nWALOpenSegmentInit() uselessly during InitWalSender(), since it was\nusing the separate sendSeg/sendCxt structs instead of the ones in its\nxlogreader.  (Some mysteries become clearer!) \n\nIt's slightly disquieting that the segment_close call in\nWalSndErrorCleanup is not covered, but in any case this should work well\nAFAICS.  I think this is simpler to understand than formerly.\n\nNow the only silliness remaining is the fact that different users of the\nxlogreader interface are doing different things about the TLI.\nHopefully we can unify everything to something sensible one day .. but\nthat's not going to happen in pg13.\n\nI'll get this pushed tomorrow, unless there are further objections.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 12 May 2020 20:16:52 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "Pushed.  Thanks for the help!\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 12:35:27 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Pushed.  Thanks for the help!\n\nThis seems to have fixed bowerbird.  Were you expecting that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 19:30:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "On 2020-May-13, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Pushed.  Thanks for the help!\n> \n> This seems to have fixed bowerbird. 
Were you expecting that?\n\nHm, not really.\n\n-- \nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 13 May 2020 19:33:33 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "I think I've discovered a problem with 850196b6. The following steps\r\ncan be used to trigger a segfault:\r\n\r\n    # wal_level = logical\r\n    psql postgres -c \"create database testdb;\"\r\n    psql testdb -c \"select pg_create_logical_replication_slot('slot', 'test_decoding');\"\r\n    psql \"dbname=postgres replication=database\" -c \"START_REPLICATION SLOT slot LOGICAL 0/0;\"\r\n\r\nFrom a quick glance, I think the problem starts in\r\nStartLogicalReplication() in walsender.c.  The call to\r\nCreateDecodingContext() may ERROR before xlogreader is initialized in\r\nthe next line, so the subsequent call to WalSndErrorCleanup()\r\nsegfaults when it attempts to access xlogreader.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 14 May 2020 01:03:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "At Thu, 14 May 2020 01:03:48 +0000, \"Bossart, Nathan\" <bossartn@amazon.com> wrote in \n> I think I've discovered a problem with 850196b6. The following steps\n> can be used to trigger a segfault:\n> \n>     # wal_level = logical\n>     psql postgres -c \"create database testdb;\"\n>     psql testdb -c \"select pg_create_logical_replication_slot('slot', 'test_decoding');\"\n>     psql \"dbname=postgres replication=database\" -c \"START_REPLICATION SLOT slot LOGICAL 0/0;\"\n> \n> From a quick glance, I think the problem starts in\n> StartLogicalReplication() in walsender.c. 
The call to\n> CreateDecodingContext() may ERROR before xlogreader is initialized in\n> the next line, so the subsequent call to WalSndErrorCleanup()\n> segfaults when it attempts to access xlogreader.\n\nGood catch! That's not only for CreateDecodingContet. That happens\neverywhere in the query loop in PostgresMain() until logreader is\ninitialized. So that also happens, for example, by starting logical\nreplication using invalidated slot. Checking xlogreader != NULL in\nWalSndErrorCleanup is sufficient. It doesn't make actual difference,\nbut the attached explicitly initialize the pointer with NULL.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 14 May 2020 14:12:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "On Thu, May 14, 2020 at 02:12:25PM +0900, Kyotaro Horiguchi wrote:\n> Good catch! That's not only for CreateDecodingContet. That happens\n> everywhere in the query loop in PostgresMain() until logreader is\n> initialized. So that also happens, for example, by starting logical\n> replication using invalidated slot. Checking xlogreader != NULL in\n> WalSndErrorCleanup is sufficient. It doesn't make actual difference,\n> but the attached explicitly initialize the pointer with NULL.\n\nAlvaro, are you planning to look at that? Should we have an open item\nfor this matter?\n--\nMichael", "msg_date": "Fri, 15 May 2020 20:18:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "On 2020-May-15, Michael Paquier wrote:\n\n> On Thu, May 14, 2020 at 02:12:25PM +0900, Kyotaro Horiguchi wrote:\n> > Good catch! That's not only for CreateDecodingContet. That happens\n> > everywhere in the query loop in PostgresMain() until logreader is\n> > initialized. 
So that also happens, for example, by starting logical\n> replication using invalidated slot. Checking xlogreader != NULL in\n> WalSndErrorCleanup is sufficient. It doesn't make actual difference,\n> but the attached explicitly initialize the pointer with NULL.\n> \n> Alvaro, are you planning to look at that? Should we have an open item\n> for this matter?\n\nOn it now. I'm trying to add a test for this (needs a small change to\nPostgresNode->psql), but I'm probably doing something stupid in the Perl\nside, because it doesn't detect things as well as I'd like. Still\ntrying, but I may be asked to evict the office soon ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 19:24:28 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg13: xlogreader API adjust" }, { "msg_contents": "At Fri, 15 May 2020 19:24:28 -0400, Alvaro Herrera <alvherre@2ndquadrant.com> wrote in \n> On 2020-May-15, Michael Paquier wrote:\n> \n> > On Thu, May 14, 2020 at 02:12:25PM +0900, Kyotaro Horiguchi wrote:\n> > > Good catch! That's not only for CreateDecodingContet. That happens\n> > > everywhere in the query loop in PostgresMain() until logreader is\n> > > initialized. So that also happens, for example, by starting logical\n> > > replication using invalidated slot. Checking xlogreader != NULL in\n> > > WalSndErrorCleanup is sufficient. It doesn't make actual difference,\n> > > but the attached explicitly initialize the pointer with NULL.\n> > \n> > Alvaro, are you planning to look at that? Should we have an open item\n> > for this matter?\n> \n> On it now. I'm trying to add a test for this (needs a small change to\n> PostgresNode->psql), but I'm probably doing something stupid in the Perl\n> side, because it doesn't detect things as well as I'd like. 
Still\n> trying, but I may be asked to evict the office soon ...\n\nFWIW, and I'm not sure which of the mail and the commit 1d3743023e was\nearlier, but I confirmed that the committed test in\n006_logical_decoding.pl causes a crash, and the crash is fixed by the\nchange of code.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 21 May 2020 17:50:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg13: xlogreader API adjust" } ]
[ { "msg_contents": "There are a couple of things left to do yet:\n\n* Collapse temporary OID assignments down to the permanent range.\nAFAICS there's no reason to delay this step any longer, since there\naren't any open issues that seem likely to touch the catalog data.\n\n* In recent years we've usually done a preliminary pgindent run before\nbeta1 (and then another right before making the stable branch). I've\nbeen holding off on this to avoid breaking WIP patches for open issues,\nbut we really ought to be close to done on those.\n\nBarring objections, I'll do the OID renumbering tomorrow (Tuesday)\nand a pgindent run on Thursday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 11 May 2020 18:04:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Remaining routine pre-beta tasks" } ]
[ { "msg_contents": "Hello.\n\nThere is a recent commit about changes in way read-only commands are\nprevented to be executed [1].\n\nIt seems like hs_standby_disallowed test is broken now.\n\nSo, a simple patch to fix the test is attached.\n\nThanks,\nMichail.\n\n[1] https://www.postgresql.org/message-id/flat/154701965766.11631.2240747476287499810.pgcf%40coridan.postgresql.org#168075c6e89267e11b862aa0c55b1910", "msg_date": "Tue, 12 May 2020 02:03:22 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] hs_standby_disallowed test fix" }, { "msg_contents": "\n\nOn 2020/05/12 8:03, Michail Nikolaev wrote:\n> Hello.\n> \n> There is a recent commit about changes in way read-only commands are\n> prevented to be executed [1].\n> \n> It seems like hs_standby_disallowed test is broken now.\n> \n> So, a simple patch to fix the test is attached.\n\nThanks for the report and patch! LGTM.\n\nI just wonder why standbycheck regression test doesn't run by default\nin buildfarm. Which caused us not to notice this issue long time. Maybe\nbecause it's difficult to set up hot-standby environment in the\nregression test? If so, we might need to merge standbycheck test into\nTAP tests for recovery.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 12 May 2020 12:05:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] hs_standby_disallowed test fix" }, { "msg_contents": "\n\nOn 2020/05/12 12:05, Fujii Masao wrote:\n> \n> \n> On 2020/05/12 8:03, Michail Nikolaev wrote:\n>> Hello.\n>>\n>> There is a recent commit about changes in way read-only commands are\n>> prevented to be executed [1].\n>>\n>> It seems like hs_standby_disallowed test is broken now.\n>>\n>> So, a simple patch to fix the test is attached.\n> \n> Thanks for the report and patch! 
LGTM.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 12 May 2020 13:57:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] hs_standby_disallowed test fix" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> I just wonder why standbycheck regression test doesn't run by default\n> in buildfarm. Which caused us not to notice this issue long time. Maybe\n> because it's difficult to set up hot-standby environment in the\n> regression test? If so, we might need to merge standbycheck test into\n> TAP tests for recovery.\n\nIt seems likely to me that the standbycheck stuff has been completely\nobsoleted by the TAP-based recovery tests. We should get rid of it,\nafter adding any missing coverage to the TAP tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 13:35:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] hs_standby_disallowed test fix" }, { "msg_contents": "On 2020-05-12 19:35, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> I just wonder why standbycheck regression test doesn't run by default\n>> in buildfarm. Which caused us not to notice this issue long time. Maybe\n>> because it's difficult to set up hot-standby environment in the\n>> regression test? If so, we might need to merge standbycheck test into\n>> TAP tests for recovery.\n> \n> It seems likely to me that the standbycheck stuff has been completely\n> obsoleted by the TAP-based recovery tests. We should get rid of it,\n> after adding any missing coverage to the TAP tests.\n\nI have looked into this a few times. 
It should definitely be done, but \nthere is actually a fair amount of coverage in the standbycheck that is \nnot in a TAP test, so it would be a fair amount of careful leg work to \nget this all moved over.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 16:23:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] hs_standby_disallowed test fix" } ]
[ { "msg_contents": "Skimming through the code in event_trigger.c and noticed that while most of\nthe stanzas that reference IsUnderPostmaster refer back to the code comment\nbeginning on line 675 the block for table rewrite copied it in\nverbatim starting at line 842. The currentEventTriggerState comment at\nlines 730 and 861 seem to be the same too.\n\nhttps://github.com/postgres/postgres/blob/60c90c16c1885bb9aa2047b66f958b865c5d397e/src/backend/commands/event_trigger.c#L675\nhttps://github.com/postgres/postgres/blob/60c90c16c1885bb9aa2047b66f958b865c5d397e/src/backend/commands/event_trigger.c#L730\n\nhttps://github.com/postgres/postgres/blob/60c90c16c1885bb9aa2047b66f958b865c5d397e/src/backend/commands/event_trigger.c#L842\nhttps://github.com/postgres/postgres/blob/60c90c16c1885bb9aa2047b66f958b865c5d397e/src/backend/commands/event_trigger.c#L861\n\nhttps://github.com/postgres/postgres/blob/4d1563717fb1860168a40b852e1d61a33ecdd62f/src/backend/commands/event_trigger.c#L785\n\nI did also notice a difference with the test on line 861 compared to line\n785 though I unable to evaluate whether the absence of a \"rewriteList\" is\nexpected (there is a \"dropList\" at 785 ...).\n\nDavid J.", "msg_date": "Mon, 11 May 2020 17:13:38 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Event trigger code comment duplication" },
{ "msg_contents": "On Mon, May 11, 2020 at 05:13:38PM -0700, David G. Johnston wrote:\n> Skimming through the code in event_trigger.c and noticed that while most of\n> the stanzas that reference IsUnderPostmaster refer back to the code comment\n> beginning on line 675 the block for table rewrite copied it in\n> verbatim starting at line 842. The currentEventTriggerState comment at\n> lines 730 and 861 seem to be the same too.\n\nAn even more interesting part here is that EventTriggerDDLCommandEnd()\nand Drop() have basically the same comments, but they tell to refer\nback toEventTriggerDDLCommandStart(). 
So let's just do the same for\nall the exact duplicate in EventTriggerTableRewrite().\n\nThe second point about the check with (!currentEventTriggerState) in\nEventTriggerTableRewrite() and EventTriggerDDLCommandEnd() shows that\nboth comments share the same first sentence, but there is enough\ndifferent context to just keep them as separate IMO.\n\n> I did also notice a difference with the test on line 861 compared to line\n> 785 though I unable to evaluate whether the absence of a \"rewriteList\" is\n> expected (there is a \"dropList\" at 785 ...).\n\nAn event table rewrite happens only for one relation at a time.\n\nIn short, something like the attached sounds enough to me. What do\nyou think?\n--\nMichael", "msg_date": "Tue, 12 May 2020 15:30:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Event trigger code comment duplication" }, { "msg_contents": "On Mon, May 11, 2020 at 11:30 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> The second point about the check with (!currentEventTriggerState) in\n> EventTriggerTableRewrite() and EventTriggerDDLCommandEnd() shows that\n> both comments share the same first sentence, but there is enough\n> different context to just keep them as separate IMO.\n>\n\nWent back and looked this over - the comment differences in the check for\ncurrentEventTriggerState boil down to:\n\nthe word \"required\" vs \"important\" - either works for both.\n\nthe fact that the DDLCommandEnd function probably wouldn't crash absent the\ncheck - which while I do not know whether DDLTriggerRewrite would crash for\ncertain (hence the \"required\") as a practical matter it is required (and\nbesides if keeping note of which of these would crash or not is deemed\nimportant that can be commented upon specifically in each - both\nDDLCommandStart (which lacks the check altogether...) 
and SQLDrop both\nchoose not to elaborate on that point at all.\n\nWhether its a style thing, or some requirement of the C-language, I found\nit odd that the four nearly identical checks were left inline in the\nfunctions instead of being pulled out into a function. I've attached a\nconceptual patch that does just this and more clearly presents on my\nthoughts on the topic. In particular I tried to cleanup the quadruple\nnegative sentence (and worse for the whole paragraph) as part of the\nrefactoring of the currentEventTriggerState comment.\n\nDavid J.", "msg_date": "Tue, 12 May 2020 18:48:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Event trigger code comment duplication" }, { "msg_contents": "On Tue, May 12, 2020 at 06:48:51PM -0700, David G. Johnston wrote:\n> Whether its a style thing, or some requirement of the C-language, I found\n> it odd that the four nearly identical checks were left inline in the\n> functions instead of being pulled out into a function. I've attached a\n> conceptual patch that does just this and more clearly presents on my\n> thoughts on the topic. In particular I tried to cleanup the quadruple\n> negative sentence (and worse for the whole paragraph) as part of the\n> refactoring of the currentEventTriggerState comment.\n\nYou may want to check that your code compiles first :)\n\n+bool\n+EventTriggerValidContext(bool requireState)\n+{\n[...]\n- if (!IsUnderPostmaster)\n- return;\n+ if (!EventTriggerValidContext(true))\n+ return\nEventTriggerValidContext() should be static, and the code as written\nsimply would not compile.\n\n+ if (requireState) {\n+ /*\n+ * Only move forward if our state is set up. 
This is required\n+ * to handle concurrency - if we proceed, without state already set up,\n+ * and allow EventTriggerCommonSetup to run it may find triggers that\n+ * didn't exist when the command started.\n+ */\n+ if (!currentEventTriggerState)\n+ return false;\n+ }\nComment format and the if block don't have a format consistent with\nthe project.\n\n+ /*\n+ * See EventTriggerDDLCommandStart for a discussion about why event\n+ * triggers are disabled in single user mode.\n+ */\n+ if (!IsUnderPostmaster)\n+ return false;\nAnd here I am pretty sure that you don't want to remove the\nexplanation why event triggers are disabled in standalone mode.\n\nNote the reason why we don't expect a state being set for\nddl_command_start is present in EventTriggerBeginCompleteQuery():\n /*\n * Currently, sql_drop, table_rewrite, ddl_command_end events are the only\n * reason to have event trigger state at all; so if there are none, don't\n * install one.\n */\n\nEven with all that, I am not sure that we need to complicate further\nwhat we have here. An empty currentEventTriggerState gets checks in\nthree places, and each one of them has a slight different of the\nreason why we cannot process further, so I would prefer applying my\nprevious, simple patch if there are no objections to remove the\nduplication about event triggers with standalone mode, keeping the\nexplanations local to each event trigger type, and call it a day.\n--\nMichael", "msg_date": "Wed, 13 May 2020 14:15:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Event trigger code comment duplication" }, { "msg_contents": "On Tuesday, May 12, 2020, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 12, 2020 at 06:48:51PM -0700, David G. 
Johnston wrote:\n> > Whether its a style thing, or some requirement of the C-language, I found\n> > it odd that the four nearly identical checks were left inline in the\n> > functions instead of being pulled out into a function. I've attached a\n> > conceptual patch that does just this and more clearly presents on my\n> > thoughts on the topic. In particular I tried to cleanup the quadruple\n> > negative sentence (and worse for the whole paragraph) as part of the\n> > refactoring of the currentEventTriggerState comment.\n>\n> You may want to check that your code compiles first :)\n\n\nI said It was a conceptual patch...the inability to write correct C code\ndoesn’t wholly impact opinions of general code form.\n\n\n> + /*\n> + * See EventTriggerDDLCommandStart for a discussion about why event\n> + * triggers are disabled in single user mode.\n> + */\n> + if (!IsUnderPostmaster)\n> + return false;\n> And here I am pretty sure that you don't want to remove the\n> explanation why event triggers are disabled in standalone mode.\n\n\nThe full comment should have remained in the common function...so it moved\nbut wasn’t added or removed so not visible...in hindsight diff mode may\nhave been a less than ideal choice here. Or I may have fat-fingered the\ncopy-paste...\n\n\n>\n> Note the reason why we don't expect a state being set for\n> ddl_command_start is present in EventTriggerBeginCompleteQuery():\n> /*\n> * Currently, sql_drop, table_rewrite, ddl_command_end events are the\n> only\n> * reason to have event trigger state at all; so if there are none,\n> don't\n> * install one.\n> */\n\n\nThanks\n\n\n>\n> Even with all that, I am not sure that we need to complicate further\n> what we have here. 
An empty currentEventTriggerState gets checks in\n> three places, and each one of them has a slight different of the\n> reason why we cannot process further, so I would prefer applying my\n> previous, simple patch if there are no objections to remove the\n> duplication about event triggers with standalone mode, keeping the\n> explanations local to each event trigger type, and call it a day.\n>\n\nI'll defer at this point - though maybe keep/improve the fix for the\nquadruple negative and related commentary.\n\nDavid J.", "msg_date": "Tue, 12 May 2020 22:26:46 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Event trigger code comment duplication" },
{ "msg_contents": "On Tue, May 12, 2020 at 10:26:46PM -0700, David G. Johnston wrote:\n> On Tuesday, May 12, 2020, Michael Paquier <michael@paquier.xyz> wrote:\n>> Even with all that, I am not sure that we need to complicate further\n>> what we have here. An empty currentEventTriggerState gets checks in\n>> three places, and each one of them has a slight different of the\n>> reason why we cannot process further, so I would prefer applying my\n>> previous, simple patch if there are no objections to remove the\n>> duplication about event triggers with standalone mode, keeping the\n>> explanations local to each event trigger type, and call it a day.\n> \n> I’ll defer at this point - though maybe keep/improve the fix for the\n> quadruple negative and related commentary.\n\nStill not sure that's worth bothering. 
So, let's wait a couple of\ndays first to see if anybody has any comments, though I'd like to just\ngo with the simplest solution at hand and remove only the duplicated\ncomment about the standalone business with event triggers.\n--\nMichael", "msg_date": "Wed, 13 May 2020 16:48:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Event trigger code comment duplication" }, { "msg_contents": "On Wed, May 13, 2020 at 04:48:59PM +0900, Michael Paquier wrote:\n> Still not sure that's worth bothering. So, let's wait a couple of\n> days first to see if anybody has any comments, though I'd like to just\n> go with the simplest solution at hand and remove only the duplicated\n> comment about the standalone business with event triggers.\n\nI have gone through this one again this morning, and applied the\nsimple version as merging the checks on the current event trigger\nstate would make us lose some context about why each event gets\nskipped.\n--\nMichael", "msg_date": "Fri, 15 May 2020 08:25:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Event trigger code comment duplication" }, { "msg_contents": "On Thu, May 14, 2020 at 4:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, May 13, 2020 at 04:48:59PM +0900, Michael Paquier wrote:\n> > Still not sure that's worth bothering. So, let's wait a couple of\n> > days first to see if anybody has any comments, though I'd like to just\n> > go with the simplest solution at hand and remove only the duplicated\n> > comment about the standalone business with event triggers.\n>\n> I have gone through this one again this morning, and applied the\n> simple version as merging the checks on the current event trigger\n> state would make us lose some context about why each event gets\n> skipped.\n>\n>\nOk. 
Thanks!\n\nDavid J.", "msg_date": "Thu, 14 May 2020 16:27:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Event trigger code comment duplication" } ]
[ { "msg_contents": "Hello hackers,\n\n1. StartupXLOG() does fsync on the whole data directory early in the crash\nrecovery. I'm wondering if we could skip some directories (at least the\npg_log/, table directories) since wal, etc could ensure consistency. Here\nis the related code.\n\n if (ControlFile->state != DB_SHUTDOWNED &&\n ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n {\n RemoveTempXlogFiles();\n SyncDataDirectory();\n }\n\nI have this concern since I saw an issue in a real product environment that\nthe startup process needs 10+ seconds to start wal replay after relaunch\ndue to elog(PANIC) (it was seen on postgres based product Greenplum but it\nis a common issue in postgres also). I highly suspect the delay was mostly\ndue to this. Also it is noticed that on public clouds fsync is much slower\nthan that on local storage so the slowness should be more severe on cloud.\nIf we at least disable fsync on the table directories we could skip a lot\nof file fsync - this may save a lot of seconds during crash recovery.\n\n2. CheckPointTwoPhase()\n\nThis may be a small issue.\n\nSee the code below,\n\nfor (i = 0; i < TwoPhaseState->numPrepXacts; i++)\n RecreateTwoPhaseFile(gxact->xid, buf, len);\n\nRecreateTwoPhaseFile() writes a state file for a prepared transaction and\ndoes fsync. It might be good to do fsync for all files once after writing\nthem, given the kernel is able to do asynchronous flush when writing those\nfile contents. If the TwoPhaseState->numPrepXacts is large we could do\nbatching to avoid the fd resource limit. I did not test them yet but this\nshould be able to speed up checkpoint/restartpoint a bit.\n\nAny thoughts?\n\nRegards.", "msg_date": "Tue, 12 May 2020 08:42:23 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Two fsync related performance issues?" },
{ "msg_contents": "\n\nOn 2020/05/12 9:42, Paul Guo wrote:\n> Hello hackers,\n> \n> 1. StartupXLOG() does fsync on the whole data directory early in the crash recovery. I'm wondering if we could skip some directories (at least the pg_log/, table directories) since wal, etc could ensure consistency.\n\nI agree that we can skip log directory but I'm not sure if skipping\ntable directory is really safe. 
Also ISTM that we can skip the directories\nthat those contents are removed or zeroed during recovery,\nfor example, pg_snapshots, pg_substrans, etc.\n\n> Here is the related code.\n> \n>       if (ControlFile->state != DB_SHUTDOWNED &&\n>           ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n>       {\n>           RemoveTempXlogFiles();\n>           SyncDataDirectory();\n>       }\n> \n> I have this concern since I saw an issue in a real product environment that the startup process needs 10+ seconds to start wal replay after relaunch due to elog(PANIC) (it was seen on postgres based product Greenplum but it is a common issue in postgres also). I highly suspect the delay was mostly due to this. Also it is noticed that on public clouds fsync is much slower than that on local storage so the slowness should be more severe on cloud. If we at least disable fsync on the table directories we could skip a lot of file fsync - this may save a lot of seconds during crash recovery.\n> \n> 2.  CheckPointTwoPhase()\n> \n> This may be a small issue.\n> \n> See the code below,\n> \n> for (i = 0; i < TwoPhaseState->numPrepXacts; i++)\n>     RecreateTwoPhaseFile(gxact->xid, buf, len);\n> \n> RecreateTwoPhaseFile() writes a state file for a prepared transaction and does fsync. It might be good to do fsync for all files once after writing them, given the kernel is able to do asynchronous flush when writing those file contents. If the TwoPhaseState->numPrepXacts is large we could do batching to avoid the fd resource limit. 
I did not test them yet but this should be able to speed up checkpoint/restartpoint a bit.\n> \n> Any thoughts?\n\nIt seems worth making the patch and measuring the performance improvement.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 12 May 2020 12:55:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Tue, May 12, 2020 at 12:55:37PM +0900, Fujii Masao wrote:\n> On 2020/05/12 9:42, Paul Guo wrote:\n>> 1. StartupXLOG() does fsync on the whole data directory early in\n>> the crash recovery. I'm wondering if we could skip some\n>> directories (at least the pg_log/, table directories) since wal,\n>> etc could ensure consistency.\n> \n> I agree that we can skip log directory but I'm not sure if skipping\n> table directory is really safe. Also ISTM that we can skip the directories\n> that those contents are removed or zeroed during recovery,\n> for example, pg_snapshots, pg_substrans, etc.\n\nBasically excludeDirContents[] as of basebackup.c.\n\n>> RecreateTwoPhaseFile() writes a state file for a prepared\n>> transaction and does fsync. It might be good to do fsync for all\n>> files once after writing them, given the kernel is able to do\n>> asynchronous flush when writing those file contents. If\n>> the TwoPhaseState->numPrepXacts is large we could do batching to\n>> avoid the fd resource limit. I did not test them yet but this\n>> should be able to speed up checkpoint/restartpoint a bit.\n> \n> It seems worth making the patch and measuring the performance improvement.\n\nYou would need to do some micro-benchmarking here, so you could\nplug-in some pg_rusage_init() & co within this code path with many 2PC\nfiles present at the same time. 
However, I would believe that this is\nnot really worth the potential code complications.\n--\nMichael", "msg_date": "Tue, 12 May 2020 15:04:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "Thanks for the replies.\n\nOn Tue, May 12, 2020 at 2:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 12, 2020 at 12:55:37PM +0900, Fujii Masao wrote:\n> > On 2020/05/12 9:42, Paul Guo wrote:\n> >> 1. StartupXLOG() does fsync on the whole data directory early in\n> >> the crash recovery. I'm wondering if we could skip some\n> >> directories (at least the pg_log/, table directories) since wal,\n> >> etc could ensure consistency.\n> >\n> > I agree that we can skip log directory but I'm not sure if skipping\n> > table directory is really safe. Also ISTM that we can skip the\n> directories\n> > that those contents are removed or zeroed during recovery,\n> > for example, pg_snapshots, pg_substrans, etc.\n>\n> Basically excludeDirContents[] as of basebackup.c.\n>\n\ntable directories & wal fsync probably dominates the fsync time. Do we\nknow any possible real scenario that requires table directory fsync?", "msg_date": "Mon, 18 May 2020 23:12:32 +0800", "msg_from": "Paul Guo <pguo@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "Paul Guo <pguo@pivotal.io> writes:\n> table directories & wal fsync probably dominates the fsync time. Do we\n> know any possible real scenario that requires table directory fsync?\n\nYes, there are filesystems where that's absolutely required. See\npast discussions that led to putting in those fsyncs (we did not\nalways have them).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 May 2020 14:26:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Mon, May 11, 2020 at 8:43 PM Paul Guo <pguo@pivotal.io> wrote:\n> I have this concern since I saw an issue in a real product environment that the startup process needs 10+ seconds to start wal replay after relaunch due to elog(PANIC) (it was seen on postgres based product Greenplum but it is a common issue in postgres also). I highly suspect the delay was mostly due to this. Also it is noticed that on public clouds fsync is much slower than that on local storage so the slowness should be more severe on cloud. If we at least disable fsync on the table directories we could skip a lot of file fsync - this may save a lot of seconds during crash recovery.\n\nI've seen this problem be way worse than that. Running fsync() on all\nthe files and performing the unlogged table cleanup steps can together\ntake minutes or, I think, even tens of minutes. 
What I think sucks\nmost in this area is that we don't even emit any log messages if the\nprocess takes a long time, so the user has no idea why things are\napparently hanging. I think we really ought to try to figure out some\nway to give the user a periodic progress indication when this kind of\nthing is underway, so that they at least have some idea what's\nhappening.\n\nAs Tom says, I don't think there's any realistic way that we can\ndisable it altogether, but maybe there's some way we could make it\nquicker, like some kind of parallelism, or by overlapping it with\nother things. It seems to me that we have to complete the fsync pass\nbefore we can safely checkpoint, but I don't know that it needs to be\ndone any sooner than that... not sure though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 08:50:42 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, May 20, 2020 at 12:51 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, May 11, 2020 at 8:43 PM Paul Guo <pguo@pivotal.io> wrote:\n> > I have this concern since I saw an issue in a real product environment that the startup process needs 10+ seconds to start wal replay after relaunch due to elog(PANIC) (it was seen on postgres based product Greenplum but it is a common issue in postgres also). I highly suspect the delay was mostly due to this. Also it is noticed that on public clouds fsync is much slower than that on local storage so the slowness should be more severe on cloud. If we at least disable fsync on the table directories we could skip a lot of file fsync - this may save a lot of seconds during crash recovery.\n>\n> I've seen this problem be way worse than that. 
Running fsync() on all\n> the files and performing the unlogged table cleanup steps can together\n> take minutes or, I think, even tens of minutes. What I think sucks\n> most in this area is that we don't even emit any log messages if the\n> process takes a long time, so the user has no idea why things are\n> apparently hanging. I think we really ought to try to figure out some\n> way to give the user a periodic progress indication when this kind of\n> thing is underway, so that they at least have some idea what's\n> happening.\n>\n> As Tom says, I don't think there's any realistic way that we can\n> disable it altogether, but maybe there's some way we could make it\n> quicker, like some kind of parallelism, or by overlapping it with\n> other things. It seems to me that we have to complete the fsync pass\n> before we can safely checkpoint, but I don't know that it needs to be\n> done any sooner than that... not sure though.\n\nI suppose you could with the whole directory tree what\nregister_dirty_segment() does, for the pathnames that you recognise as\nregular md.c segment names. Then it'd be done as part of the next\ncheckpoint, though you might want to bring the pre_sync_fname() stuff\nback into it somehow to get more I/O parallelism on Linux (elsewhere\nit does nothing). Of course that syscall could block, and the\ncheckpointer queue can fill up and then you have to do it\nsynchronously anyway, so you'd have to look into whether that's\nenough.\n\nThe whole concept of SyncDataDirectory() bothers me a bit though,\nbecause although it's apparently trying to be safe by being\nconservative, it assumes a model of write back error handling that we\nnow know to be false on Linux. And then it thrashes the inode cache\nto make it more likely that error state is forgotten, just for good\nmeasure.\n\nWhat would a precise version of this look like? 
Maybe we really only\nneed to fsync relation files that recovery modifies (as we already\ndo), plus those that it would have touched but didn't because of the\npage LSN (a new behaviour to catch segment files that were written by\nthe last run but not yet flushed, which I guess in practice would only\nhappen with full_page_writes=off)? (If you were paranoid enough to\nbelieve that the buffer cache were actively trying to trick you and\nmarked dirty pages clean and lost the error state so you'll never hear\nabout it, you might even want to rewrite such pages once.)\n\n\n", "msg_date": "Wed, 20 May 2020 08:30:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Tue, May 19, 2020 at 4:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> What would a precise version of this look like? Maybe we really only\n> need to fsync relation files that recovery modifies (as we already\n> do), plus those that it would have touched but didn't because of the\n> page LSN (a new behaviour to catch segment files that were written by\n> the last run but not yet flushed, which I guess in practice would only\n> happen with full_page_writes=off)? (If you were paranoid enough to\n> believe that the buffer cache were actively trying to trick you and\n> marked dirty pages clean and lost the error state so you'll never hear\n> about it, you might even want to rewrite such pages once.)\n\nI suspect there was also a worry that perhaps we'd been running before\nwith fsync=off, or that maybe we'd just created this data directory\nwith a non-fsyncing tool like 'cp' or 'tar -xvf'. 
In normal cases we\nshouldn't need to be nearly that conservative, but it's unclear how we\ncan know when it's needed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 May 2020 13:51:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Tue, 12 May 2020, 08:42 Paul Guo, <pguo@pivotal.io> wrote:\n\n> Hello hackers,\n>\n> 1. StartupXLOG() does fsync on the whole data directory early in the crash\n> recovery. I'm wondering if we could skip some directories (at least the\n> pg_log/, table directories) since wal, etc could ensure consistency. Here\n> is the related code.\n>\n> if (ControlFile->state != DB_SHUTDOWNED &&\n> ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n> {\n> RemoveTempXlogFiles();\n> SyncDataDirectory();\n> }\n>\n\nThis would actually be a good candidate for a thread pool. Dispatch sync\nrequests and don't wait. Come back later when they're done.\n\nUnsure if that's at all feasible given that pretty much all the Pg APIs\naren't thread safe though. No palloc, no elog/ereport, etc. However I don't\nthink we're ready to run bgworkers or use shm_mq etc at that stage.\n\nOf course if OSes would provide asynchronous IO interfaces that weren't\nutterly vile and broken, we wouldn't have to worry...\n\n\n>\n> RecreateTwoPhaseFile() writes a state file for a prepared transaction and\n> does fsync. It might be good to do fsync for all files once after writing\n> them, given the kernel is able to do asynchronous flush when writing those\n> file contents. If the TwoPhaseState->numPrepXacts is large we could do\n> batching to avoid the fd resource limit. 
I did not test them yet but this\n> should be able to speed up checkpoint/restartpoint a bit.\n>\n\nI seem to recall some hints we can set on a FD or mmapped range that\nencourage dirty buffers to be written without blocking us, too. I'll have\nto look them up...\n\n\n>\n", "msg_date": "Tue, 26 May 2020 20:30:49 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, May 27, 2020 at 12:31 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n> On Tue, 12 May 2020, 08:42 Paul Guo, <pguo@pivotal.io> wrote:\n>> 1. StartupXLOG() does fsync on the whole data directory early in the crash recovery. I'm wondering if we could skip some directories (at least the pg_log/, table directories) since wal, etc could ensure consistency. Here is the related code.\n>>\n>> if (ControlFile->state != DB_SHUTDOWNED &&\n>> ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n>> {\n>> RemoveTempXlogFiles();\n>> SyncDataDirectory();\n>> }\n>\n> This would actually be a good candidate for a thread pool. Dispatch sync requests and don't wait. Come back later when they're done.\n>\n> Unsure if that's at all feasible given that pretty much all the Pg APIs aren't thread safe though. No palloc, no elog/ereport, etc. However I don't think we're ready to run bgworkers or use shm_mq etc at that stage.\n\nWe could run auxiliary processes. I think it'd be very useful if we\nhad a general purpose worker pool that could perform arbitrary tasks\nand could even replace current and future dedicated launcher and\nworker processes, but in this case I think the problem is quite\nclosely linked to infrastructure that we already have. I think we\nshould:\n\n1. Run the checkpointer during crash recovery (see\nhttps://commitfest.postgresql.org/29/2706/).\n2. In walkdir(), don't call stat() on all the files, so that we don't\nimmediately fault in all 42 bazillion inodes synchronously on a cold\nsystem (see https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BFzxupGGN4GpUdbzZN%2Btn6FQPHo8w0Q%2BAPH5Wz8RG%2Bww%40mail.gmail.com).\n3. In src/common/relpath.c, add a function ParseRelationPath() that\ndoes the opposite of GetRelationPath().\n4. 
In datadir_fsync_fname(), if ParseRelationPath() is able to\nrecognise a file as being a relation file, build a FileTag and call\nRegisterSyncRequest() to tell the checkpointer to sync this file as\npart of the end checkpoint (currently the end-of-recovery checkpoint,\nbut that could also be relaxed).\n\nThere are a couple of things I'm not sure about though, which is why I\ndon't have a patch for 3 and 4:\n1. You have to move a few things around to avoid hideous modularity\nviolations: it'd be weird if fd.c knew how to make md.c's sync\nrequests. So you'd need to find a new place to put the crash-recovery\ndata-dir-sync routine, but then it can't use walkdir().\n2. I don't know how to fit the pre_sync_fname() part into this\nscheme. Or even if you still need it: if recovery is short, it\nprobably doesn't help much, and if it's long then the kernel is likely\nto have started writing back before the checkpoint anyway due to dirty\nwriteback timer policies. On the other hand, it'd be nice to start\nthe work of *opening* the files sooner than the the start of the\ncheckpoint, if cold inode access is slow, so perhaps a little bit more\ninfrastructure is needed; a new way to queue a message for the\ncheckpointer that says \"hey, why don't you start presyncing stuff\".\nOn the third hand, it's arguably better to wait for more pages to get\ndirtied by recovery before doing any pre-sync work anyway, because the\nWAL will likely be redirtying the same pages again we'd ideally not\nlike to write our data out twice, which is one of the reasons to want\nto collapse the work into the next checkpoint. So I'm not sure what\nthe best plan is here.\n\nAs I mentioned earlier, I think it's also possible to do smarter\nanalysis based on WAL information so that we don't even need to open\nall 42 bazillion files, but just the ones touched since the last\ncheckpoint, if you're prepared to ignore the\npreviously-running-with-fsync=off scenario Robert mentioned. 
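As a rough illustration of the ParseRelationPath() idea, the inverse mapping could look like this (a simplified Python sketch; the real function would be written in C in src/common/relpath.c, and this pattern deliberately ignores tablespaces and other corner cases, so treat the regex and the names here as assumptions rather than PostgreSQL code):

```python
import re

# Recognise paths like "base/<dbOid>/<relNode>[_fork][.segno]" or
# "global/<relNode>[_fork][.segno]" and recover the components;
# anything else (pg_wal, pg_xact, ...) is reported as not a relation file.
_REL_PATH = re.compile(
    r"^(?:base/(?P<db>\d+)|(?P<global>global))/"
    r"(?P<rel>\d+)(?:_(?P<fork>vm|fsm|init))?(?:\.(?P<seg>\d+))?$")

def parse_relation_path(path):
    m = _REL_PATH.match(path)
    if not m:
        return None  # not a recognisable relation file
    return {
        "dboid": 0 if m.group("global") else int(m.group("db")),
        "relnode": int(m.group("rel")),
        "fork": m.group("fork") or "main",
        "segno": int(m.group("seg") or 0),
    }
```

With something like this, a data-directory walk could hand only the recognised files to the sync-request machinery and leave everything else to the existing code paths.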
I'm not\ntoo sure about that. But something like the above scheme would at\nleast get some more concurrency going, without changing the set of\nhazards we believe our scheme protects against (I mean, you could\nargue that SyncDataDirectory() is a bit like using a sledgehammer to\ncrack an unspecified nut, and then not even quite hitting it if it's a\nLinux nut, but I'm trying to practise kai-zen here).\n\n\n", "msg_date": "Thu, 3 Sep 2020 11:30:50 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Tue, May 12, 2020 at 12:43 PM Paul Guo <pguo@pivotal.io> wrote:\n> 2. CheckPointTwoPhase()\n>\n> This may be a small issue.\n>\n> See the code below,\n>\n> for (i = 0; i < TwoPhaseState->numPrepXacts; i++)\n> RecreateTwoPhaseFile(gxact->xid, buf, len);\n>\n> RecreateTwoPhaseFile() writes a state file for a prepared transaction and does fsync. It might be good to do fsync for all files once after writing them, given the kernel is able to do asynchronous flush when writing those file contents. If the TwoPhaseState->numPrepXacts is large we could do batching to avoid the fd resource limit. I did not test them yet but this should be able to speed up checkpoint/restartpoint a bit.\n\nHi Paul,\n\nI hadn't previously focused on this second part of your email. I\nthink the fsync() call in RecreateTwoPhaseFile() might be a candidate\nfor processing by the checkpoint code through the new facilities in\nsync.c, which effectively does something like what you describe. Take\na look at the code I'm proposing for slru.c in\nhttps://commitfest.postgresql.org/29/2669/ and also in md.c. You'd\nneed a way to describe the path of these files in a FileTag struct, so\nyou can defer the work; it will invoke your callback to sync the file\nas part of the next checkpoint (or panic if it can't). 
You also need\nto make sure to tell it to forget the sync request before you unlink\nthe file. Although it still has to call fsync one-by-one on the files\nat checkpoint time, by deferring it until then there is a good chance\nthe kernel has already done the work so you don't have to go off-CPU\nat all. I hope that means we'd often never have to perform the fsync\nat all, because the file is usually gone before we reach the\ncheckpoint. Do I have that right?\n\n\n", "msg_date": "Thu, 3 Sep 2020 12:09:29 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Thu, Sep 3, 2020 at 11:30 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, May 27, 2020 at 12:31 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n> > On Tue, 12 May 2020, 08:42 Paul Guo, <pguo@pivotal.io> wrote:\n> >> 1. StartupXLOG() does fsync on the whole data directory early in the crash recovery. I'm wondering if we could skip some directories (at least the pg_log/, table directories) since wal, etc could ensure consistency. Here is the related code.\n> >>\n> >> if (ControlFile->state != DB_SHUTDOWNED &&\n> >> ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n> >> {\n> >> RemoveTempXlogFiles();\n> >> SyncDataDirectory();\n> >> }\n\n> 4. 
In datadir_fsync_fname(), if ParseRelationPath() is able to\n> recognise a file as being a relation file, build a FileTag and call\n> RegisterSyncRequest() to tell the checkpointer to sync this file as\n> part of the end checkpoint (currently the end-of-recovery checkpoint,\n> but that could also be relaxed).\n\nFor the record, Andres Freund mentioned a few problems with this\noff-list and suggested we consider calling Linux syncfs() for each top\nlevel directory that could potentially be on a different filesystem.\nThat seems like a nice idea to look into.\n\n\n", "msg_date": "Wed, 9 Sep 2020 15:49:55 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Thu, Sep 3, 2020 at 12:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, May 12, 2020 at 12:43 PM Paul Guo <pguo@pivotal.io> wrote:\n> > RecreateTwoPhaseFile(gxact->xid, buf, len);\n\n> I hadn't previously focused on this second part of your email. I\n> think the fsync() call in RecreateTwoPhaseFile() might be a candidate\n> for processing by the checkpoint code through the new facilities in\n> sync.c, which effectively does something like what you describe. Take\n\nI looked at this more closely and realised that I misunderstood; I was\nthinking of a problem like the one that was already solved years ago\nwith commit 728bd991c3c4389fb39c45dcb0fe57e4a1dccd71. Sorry for the\nnoise.\n\n\n", "msg_date": "Thu, 10 Sep 2020 23:40:22 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" 
}, { "msg_contents": "On Wed, Sep 9, 2020 at 3:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Sep 3, 2020 at 11:30 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, May 27, 2020 at 12:31 AM Craig Ringer <craig@2ndquadrant.com> wrote:\n> > > On Tue, 12 May 2020, 08:42 Paul Guo, <pguo@pivotal.io> wrote:\n> > >> 1. StartupXLOG() does fsync on the whole data directory early in the crash recovery. I'm wondering if we could skip some directories (at least the pg_log/, table directories) since wal, etc could ensure consistency. Here is the related code.\n> > >>\n> > >> if (ControlFile->state != DB_SHUTDOWNED &&\n> > >> ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n> > >> {\n> > >> RemoveTempXlogFiles();\n> > >> SyncDataDirectory();\n> > >> }\n>\n> > 4. In datadir_fsync_fname(), if ParseRelationPath() is able to\n> > recognise a file as being a relation file, build a FileTag and call\n> > RegisterSyncRequest() to tell the checkpointer to sync this file as\n> > part of the end checkpoint (currently the end-of-recovery checkpoint,\n> > but that could also be relaxed).\n>\n> For the record, Andres Freund mentioned a few problems with this\n> off-list and suggested we consider calling Linux syncfs() for each top\n> level directory that could potentially be on a different filesystem.\n> That seems like a nice idea to look into.\n\nHere's an experimental patch to try that. One problem is that before\nLinux 5.8, syncfs() doesn't report failures[1]. I'm not sure what to\nthink about that; in the current coding we just log them and carry on\nanyway, but any theoretical problems that causes for BLK_DONE should\nbe moot anyway because of FPIs which result in more writes and syncs.\nAnother is that it may affect other files that aren't under pgdata as\ncollateral damage, but that seems acceptable. 
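For what it's worth, the per-filesystem flush this approach relies on can be exercised from userspace like so (a Python sketch of the idea rather than the patch itself; the fallback to plain sync() is my addition for non-Linux systems, and as noted above syncfs() only started reporting write-back errors in Linux 5.8):

```python
import ctypes
import os

def sync_filesystem(path):
    """Flush the whole filesystem containing `path` with one system call,
    instead of fsync()ing every file beneath it individually."""
    fd = os.open(path, os.O_RDONLY)
    try:
        try:
            libc = ctypes.CDLL(None, use_errno=True)
            if libc.syncfs(fd) == 0:  # Linux-only; returns 0 on success
                return "syncfs"
        except (AttributeError, OSError):
            pass  # libc without syncfs() (non-Linux)
        os.sync()  # portable but coarser: flushes *all* filesystems
        return "sync"
    finally:
        os.close(fd)
```

Calling this once per top-level directory that might sit on a different filesystem is the shape of the experiment, versus one fsync() per file in SyncDataDirectory().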
It also makes me a bit\nsad that this wouldn't help other OSes.\n\n(Archeological note: The improved syncfs() error reporting is linked\nto 2018 PostgreSQL/Linux hacker discussions[2], because it was thought\nthat syncfs() might be useful for checkpointing, though I believe\nsince then things have moved on and the new thinking is that we'd use\na new proposed interface to read per-filesystem I/O error counters\nwhile checkpointing.)\n\n[1] https://man7.org/linux/man-pages/man2/sync.2.html\n[2] https://lwn.net/Articles/752063/", "msg_date": "Mon, 5 Oct 2020 14:38:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Mon, Oct 5, 2020 at 2:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Sep 9, 2020 at 3:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > For the record, Andres Freund mentioned a few problems with this\n> > off-list and suggested we consider calling Linux syncfs() for each top\n> > level directory that could potentially be on a different filesystem.\n> > That seems like a nice idea to look into.\n>\n> Here's an experimental patch to try that. One problem is that before\n> Linux 5.8, syncfs() doesn't report failures[1]. I'm not sure what to\n> think about that; in the current coding we just log them and carry on\n> anyway, but any theoretical problems that causes for BLK_DONE should\n> be moot anyway because of FPIs which result in more writes and syncs.\n> Another is that it may affect other files that aren't under pgdata as\n> collateral damage, but that seems acceptable. It also makes me a bit\n> sad that this wouldn't help other OSes.\n\n... and for comparison/discussion, here is an alternative patch that\nfigures out precisely which files need to be fsync'd using information\nin the WAL. 
On a system with full_page_writes=on, this effectively\nmeans that we don't have to do anything at all for relation files, as\ndescribed in more detail in the commit message. You still need to\nfsync the WAL files to make sure you can't replay records from the log\nthat were written but not yet fdatasync'd, addressed in the patch.\nI'm not yet sure which other kinds of special files might need special\ntreatment.\n\nSome thoughts:\n1. Both patches avoid the need to open many files. With 1 million\ntables this can take over a minute even on a fast system with warm\ncaches and/or fast local storage, before replay begins, and for a cold\nsystem with high latency storage it can be a serious problem.\n2. The syncfs() one is comparatively simple, but it only works on\nLinux. If you used sync() (or sync(); sync()) instead, you'd be\nrelying on non-POSIX behaviour, because POSIX says it doesn't wait for\ncompletion and indeed many non-Linux systems really are like that.\n3. Though we know of kernel/filesystem pairs that can mark buffers\nclean while retaining the dirty contents (which would cause corruption\nwith the current code in master, or either of these patches),\nfortunately those systems can't possibly run with full_page_writes=off\nso such problems are handled the same way torn pages are fixed.\n4. Perhaps you could set a flag in the buffer to say 'needs sync' as\na way to avoid repeatedly requesting syncs for the same page, but it\nmight not be worth the hassle involved.\n\nSome other considerations that have been mentioned to me by colleagues\nI talked to about this:\n1. The ResetUnloggedRelations() code still has to visit all relation\nfiles looking for init forks after a crash. But that turns out to be\nOK, it only reads directory entries in a straight line. It doesn't\nstat() or open() files with non-matching names, so unless you have\nvery many unlogged tables, the problem is already avoided. 
(It also\ncalls fsync() on the result, which could perhaps be replaced with a\ndeferred request too, not sure, for another day.)\n2. There may be some more directories that need special fsync()\ntreatment. SLRUs are already covered (either handled by checkpoint or\nthey hold ephemeral data), and I think pg_tblspc changes will be\nredone. pg_logical? I am not sure.", "msg_date": "Wed, 7 Oct 2020 18:17:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, Oct 7, 2020 at 6:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Oct 5, 2020 at 2:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Sep 9, 2020 at 3:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > For the record, Andres Freund mentioned a few problems with this\n> > > off-list and suggested we consider calling Linux syncfs() for each top\n> > > level directory that could potentially be on a different filesystem.\n> > > That seems like a nice idea to look into.\n\n> ... and for comparison/discussion, here is an alternative patch that\n> figures out precisely which files need to be fsync'd using information\n> in the WAL. [...]\n\nMichael Banck reported[1] a system that spent 20 minute in\nSyncDataDirectory(). His summary caused me to go back and read the\ndiscussions[1][2] that produced the current behaviour via commits\n2ce439f3 and d8179b00, and I wanted to add a couple more observations\nabout the two draft patches mentioned above.\n\nAbout the need to sync files that were dirtied during a previous run:\n\n1. The syncfs() patch has the same ignore errors-and-press-on\nbehaviour as d8179b00 gave us, though on Linux < 5.8 it would not even\nreport them at LOG level.\n\n2. The \"precise\" fsync() patch defers the work until after redo, but\nif you get errors while processing the queued syncs, you would not be\nable to complete the end-of-recovery checkpoint. 
This is correct\nbehaviour in my opinion; any such checkpoint that is allowed to\ncomplete would be a lie, and would make the corruption permanent,\nreleasing the WAL that was our only hope of recovering. If you want\nto run a so-damaged system for forensic purposes, you could always\nbring it up with fsync=off, or consider the idea from a nearby thread\nto allow the end-of-recovery checkpoint to be disabled (then the\neventual first checkpoint will likely take down your system, but\nthat's the case with the current\nignore-errors-and-hope-for-the-best-after-crash coding for the *next*\ncheckpoint, assuming your damaged filesystem continues to produce\nerrors, it's just more principled IMHO).\n\nI recognise that this sounds an absolutist argument that might attract\nsome complaints on practical grounds, but I don't think it really\nmakes too much difference in practice. Consider a typical Linux\nfilesystem: individual errors aren't going to be reported more than\nonce, and full_page_writes must be on on such a system so we'll be\nwriting out all affected pages again and then trying to fsync again in\nthe end-of-recovery checkpoint, so despite our attempt at creating a\nplease-ignore-errors-and-corrupt-my-database-and-play-on mode, you'll\nlikely panic again if I/O errors persist, or survive and continue\nwithout corruption if the error-producing-conditions were fleeting and\ntransient (as in the thin provisioning EIO or NFS ENOSPC conditions\ndiscussed in other threads).\n\nAbout the need to fsync everything in sight on a system that\npreviously ran with fsync=off, as discussed in the earlier threads:\n\n1. The syncfs() patch provides about the same weak guarantee as the\ncurrent coding. Something like: it can convert all checkpoints that\nwere logged in the time since the kernel started from fiction to fact,\nexcept those corrupted by (unlikely) I/O errors, which may only be\nreported in the kernel logs if at all.\n\n2. 
The \"precise\" fsync() patch provides no such weak guarantee. It\ntakes the last checkpoint at face value, and can't help you with\nanything that happened before that.\n\nThe problem I have with this is that the current coding *only does it\nfor crash scenarios*. So, if you're moving a system from fsync=off to\nfsync=on with a clean shutdown in between, we already don't help you.\nEffectively, you need to run sync(1) (but see man page for caveats and\nkernel logs for errors) to convert your earlier checkpoints from\nfiction to fact. So all we're discussing here is what we do with a\nsystem that crashed. Why is that a sane time to transition from\nfsync=off to fsync=on, and, supposing someone does that, why should we\noffer any more guarantees about that than we do when you make the same\ntransition on a system that shut down cleanly?\n\n[1] https://www.postgresql.org/message-id/flat/4a5d233fcd28b5f1008aec79119b02b5a9357600.camel%40credativ.de\n[2] https://www.postgresql.org/message-id/flat/20150114105908.GK5245%40awork2.anarazel.de#1525fab691dbaaef35108016f0b99467\n[3] https://www.postgresql.org/message-id/flat/20150523172627.GA24277%40msg.df7cb.de\n\n\n", "msg_date": "Thu, 8 Oct 2020 10:51:20 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "Hi,\n\nAm Mittwoch, den 07.10.2020, 18:17 +1300 schrieb Thomas Munro:\n> On Mon, Oct 5, 2020 at 2:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Wed, Sep 9, 2020 at 3:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > For the record, Andres Freund mentioned a few problems with this\n> > > off-list and suggested we consider calling Linux syncfs() for each top\n> > > level directory that could potentially be on a different filesystem.\n> > > That seems like a nice idea to look into.\n> > \n> > Here's an experimental patch to try that. 
One problem is that before\n> > Linux 5.8, syncfs() doesn't report failures[1]. I'm not sure what to\n> > think about that; in the current coding we just log them and carry on\n> > anyway, but any theoretical problems that causes for BLK_DONE should\n> > be moot anyway because of FPIs which result in more writes and syncs.\n> > Another is that it may affect other files that aren't under pgdata as\n> > collateral damage, but that seems acceptable. It also makes me a bit\n> > sad that this wouldn't help other OSes.\n> \n> ... and for comparison/discussion, here is an alternative patch that\n> figures out precisely which files need to be fsync'd using information\n> in the WAL. On a system with full_page_writes=on, this effectively\n> means that we don't have to do anything at all for relation files, as\n> described in more detail in the commit message. You still need to\n> fsync the WAL files to make sure you can't replay records from the log\n> that were written but not yet fdatasync'd, addressed in the patch.\n> I'm not yet sure which other kinds of special files might need special\n> treatment.\n> \n> Some thoughts:\n> 1. Both patches avoid the need to open many files. With 1 million\n> tables this can take over a minute even on a fast system with warm\n> caches and/or fast local storage, before replay begins, and for a cold\n> system with high latency storage it can be a serious problem.\n\nYou mention \"serious problem\" and \"over a minute\", but I don't recall\nyou mentioning how long it takes with those two patches (or maybe I\nmissed it), so here goes:\n\nI created an instance with 250 databases on 250 tablespaces on a SAN\nstorage. This is on 12.4, the patches can be backpatched with minimal\nchanges.\n\nAfter pg_ctl stop -m immediate, a pg_ctl start -w (or rather the time\nbetween the two log messages \"database system was interrupted; last\nknown up at %s\" and \"database system was not properly shut down;\nautomatic recovery in progress\" took\n\n1. 
12-13 seconds on vanilla\n2. usually < 10 ms, sometimes 70-80 ms with the syncfs patch\n3. 4 ms with the optimized sync patch\n\nThat's a dramatic improvement, but maybe also a best case scenario as no\ntraffic happened since the last checkpoint. I did some light pgbench\nbefore killing the server again, but couldn't get the optimized sync\npatch to take any longer, might try harder at some point but won't have\nmuch more time to test today.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Fri, 09 Oct 2020 15:49:56 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "Hi,\n\nAm Mittwoch, den 07.10.2020, 18:17 +1300 schrieb Thomas Munro:\n> ... and for comparison/discussion, here is an alternative patch that\n> figures out precisely which files need to be fsync'd using information\n> in the WAL. \n\nOne question about this: Did you consider the case of a basebackup being\ncopied/restored somewhere and the restore/PITR being started? Shouldn't\nPostgres then sync the whole data directory first in order to assure\ndurability, or do we consider this to be on the tool that does the\ncopying? 
Or is this not needed somehow?\n\nMy understanding is that Postgres would go through the same code path\nduring PITR:\n\n \tif (ControlFile->state != DB_SHUTDOWNED &&\n \t\tControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n\t{\n\t\tRemoveTempXlogFiles();\n\t\tSyncDataDirectory();\n\nIf I didn't miss anything, that would be a point for the syncfs patch?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Tue, 13 Oct 2020 13:53:49 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, Oct 14, 2020 at 12:53 AM Michael Banck\n<michael.banck@credativ.de> wrote:\n> Am Mittwoch, den 07.10.2020, 18:17 +1300 schrieb Thomas Munro:\n> > ... and for comparison/discussion, here is an alternative patch that\n> > figures out precisely which files need to be fsync'd using information\n> > in the WAL.\n>\n> One question about this: Did you consider the case of a basebackup being\n> copied/restored somewhere and the restore/PITR being started? Shouldn't\n> Postgres then sync the whole data directory first in order to assure\n> durability, or do we consider this to be on the tool that does the\n> copying? Or is this not needed somehow?\n\nTo go with precise fsyncs, we'd have to say that it's the job of the\ncreator of the secondary copy. 
Unfortunately that's not a terribly\nconvenient thing to do (or at least the details vary).\n\n[The devil's advocate enters the chat]\n\nLet me ask you this: If you copy the pgdata directory of a system\nthat has shut down cleanly, for example with cp or rsync as described\nin the manual, who will sync the files before you start up the\ncluster? Not us, anyway, because SyncDataDirectory() only runs after\na crash. A checkpoint is, after all, a promise that all changes up to\nsome LSN are durably on disk, and we leave it up to people who are\ncopying files around underneath us while we're not running to worry\nabout what happens if you make that untrue. Now, why is a database\nthat crashed any different?\n\n> My understanding is that Postgres would go through the same code path\n> during PITR:\n>\n> if (ControlFile->state != DB_SHUTDOWNED &&\n> ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n> {\n> RemoveTempXlogFiles();\n> SyncDataDirectory();\n>\n> If I didn't miss anything, that would be a point for the syncfs patch?\n\nYeah.\n\n\n", "msg_date": "Wed, 14 Oct 2020 14:48:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, Oct 14, 2020 at 02:48:18PM +1300, Thomas Munro wrote:\n> On Wed, Oct 14, 2020 at 12:53 AM Michael Banck\n> <michael.banck@credativ.de> wrote:\n>> One question about this: Did you consider the case of a basebackup being\n>> copied/restored somewhere and the restore/PITR being started? Shouldn't\n>> Postgres then sync the whole data directory first in order to assure\n>> durability, or do we consider this to be on the tool that does the\n>> copying? Or is this not needed somehow?\n> \n> To go with precise fsyncs, we'd have to say that it's the job of the\n> creator of the secondary copy. 
Unfortunately that's not a terribly\n> convenient thing to do (or at least the details vary).\n\nYeah, it is safer to assume that it is the responsibility of the\nbackup tool to ensure that because it could be possible that a host is\nunplugged just after taking a backup, and having Postgres do this work\nat the beginning of archive recovery would not help in most cases.\nIMO this comes back to the point where we usually should not care much\nhow long a backup takes as long as it is done right.  Users care much\nmore about how long a restore takes until consistency is reached.  And\nthis is in line with things that have been done via bc34223b or\n96a7128.\n--\nMichael", "msg_date": "Wed, 14 Oct 2020 14:06:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "Hi,\n\nAm Mittwoch, den 14.10.2020, 14:06 +0900 schrieb Michael Paquier:\n> On Wed, Oct 14, 2020 at 02:48:18PM +1300, Thomas Munro wrote:\n> > On Wed, Oct 14, 2020 at 12:53 AM Michael Banck\n> > <michael.banck@credativ.de> wrote:\n> > > One question about this: Did you consider the case of a basebackup being\n> > > copied/restored somewhere and the restore/PITR being started? Shouldn't\n> > > Postgres then sync the whole data directory first in order to assure\n> > > durability, or do we consider this to be on the tool that does the\n> > > copying? Or is this not needed somehow?\n> > \n> > To go with precise fsyncs, we'd have to say that it's the job of the\n> > creator of the secondary copy. 
Unfortunately that's not a terribly\n> > convenient thing to do (or at least the details vary).\n> \n> Yeah, it is safer to assume that it is the responsability of the\n> backup tool to ensure that because it could be possible that a host is\n> unplugged just after taking a backup, and having Postgres do this work\n> at the beginning of archive recovery would not help in most cases.\n> IMO this comes back to the point where we usually should not care much\n> how long a backup takes as long as it is done right. Users care much\n> more about how long a restore takes until consistency is reached. And\n> this is in line with things that have been done via bc34223b or\n> 96a7128.\n\nI agree that the backup tool should make sure the backup is durable and\nthis is out of scope.\n\nI was worried more about the restore part, right now, \nhttps://www.postgresql.org/docs/13/continuous-archiving.html#BACKUP-PITR-RECOVERY\nsays this in point 4:\n\n|Restore the database files from your file system backup. Be sure that\n|they are restored with the right ownership (the database system user,\n|not root!) and with the right permissions. If you are using\n|tablespaces, you should verify that the symbolic links in pg_tblspc/\n|were correctly restored.\n\nThere's no word of running sync afterwards or making otherwise sure that\neverything is back in a durable fashion. Currently, we run fsync on all\nthe files on startup anyway during a PITR, but with the second patch,\nthat will no longer happen. 
Maybe that is not a problem, but if that's\nthe case, it's at least not clear to me.\n\nAlso, Tom seemed to imply up-thread in 3750.1589826415@sss.pgh.pa.us\nthat syncing the files was necessary, but maybe I'm reading too much into his rather short mail.\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n", "msg_date": "Wed, 14 Oct 2020 10:14:26 +0200", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Wed, 14 Oct 2020, 13:06 Michael Paquier, <michael@paquier.xyz> wrote:\n\n> On Wed, Oct 14, 2020 at 02:48:18PM +1300, Thomas Munro wrote:\n> > On Wed, Oct 14, 2020 at 12:53 AM Michael Banck\n> > <michael.banck@credativ.de> wrote:\n> >> One question about this: Did you consider the case of a basebackup being\n> >> copied/restored somewhere and the restore/PITR being started? Shouldn't\n> >> Postgres then sync the whole data directory first in order to assure\n> >> durability, or do we consider this to be on the tool that does the\n> >> copying? Or is this not needed somehow?\n> >\n> > To go with precise fsyncs, we'd have to say that it's the job of the\n> > creator of the secondary copy. 
Unfortunately that's not a terribly\n> convenient thing to do (or at least the details vary).\n>\n> Yeah, it is safer to assume that it is the responsability of the\n> backup tool to ensure that because it could be possible that a host is\n> unplugged just after taking a backup, and having Postgres do this work\n> at the beginning of archive recovery would not help in most cases.\n>\n\nLet's document that assumption in the docs for pg_basebackup and the file\nsystem copy based replica creation docs. With a reference to initdb's\ndatadir sync option.\n\nIMO this comes back to the point where we usually should not care much\n> how long a backup takes as long as it is done right. Users care much\n> more about how long a restore takes until consistency is reached. And\n> this is in line with things that have been done via bc34223b or\n> 96a7128.\n> --\n> Michael\n>\n", "msg_date": "Tue, 1 Dec 2020 19:39:30 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" }, { "msg_contents": "On Tue, Dec 01, 2020 at 07:39:30PM +0800, Craig Ringer wrote:\n> On Wed, 14 Oct 2020, 13:06 Michael Paquier, <michael@paquier.xyz> wrote:\n>> Yeah, it is safer to assume that it is the responsability of the\n>> backup tool to ensure that because it could be possible that a host is\n>> unplugged just after taking a backup, and having Postgres do this work\n>> at the beginning of archive recovery would not help in most cases.\n> \n> Let's document that assumption in the docs for pg_basebackup and the file\n> system copy based replica creation docs. With a reference to initdb's\n> datadir sync option.\n\nDo you have any suggestion about that, in the shape of a patch perhaps?\n--\nMichael", "msg_date": "Wed, 2 Dec 2020 16:41:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" 
}, { "msg_contents": "On Wed, 2 Dec 2020 at 15:41, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 01, 2020 at 07:39:30PM +0800, Craig Ringer wrote:\n> > On Wed, 14 Oct 2020, 13:06 Michael Paquier, <michael@paquier.xyz> wrote:\n> >> Yeah, it is safer to assume that it is the responsability of the\n> >> backup tool to ensure that because it could be possible that a host is\n> >> unplugged just after taking a backup, and having Postgres do this work\n> >> at the beginning of archive recovery would not help in most cases.\n> >\n> > Let's document that assumption in the docs for pg_basebackup and the file\n> > system copy based replica creation docs. With a reference to initdb's\n> > datadir sync option.\n>\n> Do you have any suggestion about that, in the shape of a patch perhaps?\n>\n\nI'll try to get to that, but I have some other docs patches outstanding\nthat I'd like to resolve first.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n", "msg_date": "Thu, 3 Dec 2020 14:10:38 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Two fsync related performance issues?" } ]
[ { "msg_contents": "Hi,\n\nWhen we run prepared statements, as far as I know, running\nEXPLAIN is the only way to know whether executed plans are\ngeneric or custom.\nThere seems no way to know how many times a prepared\nstatement was executed as generic and custom.\n\nI think it may be useful to record the number of generic\nand custom plans mainly to ensure the prepared statements\nare executed as expected plan type.\nIf people also feel it's useful, I'm going to think about adding\ncolumns such as 'generic plans' and 'custom plans' to\npg_stat_statements.\n\nAs you know, pg_stat_statements can now track not only\n'calls' but 'plans', so we can presume which plan type\nwas executed from them.\nWhen both 'calls' and 'plans' were incremented, plan\ntype would be custom. When only 'calls' was incremented,\nit would be generic.\nBut considering the case such as only the plan phase has\nsucceeded and the execution phase has failed, this\npresumption can be wrong.\n\nThoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n", "msg_date": "Tue, 12 May 2020 13:53:15 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Hello,\n\nyes this can be useful, under the condition of differentiating all the\ncounters\nfor a queryid using a generic plan and the one using a custom one.\n\nFor me one way to do that is adding a generic_plan column to \npg_stat_statements key, something like:\n- userid,\n- dbid,\n- queryid,\n- generic_plan\n\nI don't know if generic/custom plan types are available during planning\nand/or \nexecution hooks ...\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Wed, 13 May 2020 10:28:12 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On Thu, May 14, 2020 at 2:28 AM legrand legrand <legrand_legrand@hotmail.com>\nwrote:\n\n> Hello,\n>\n> yes this can be usefull, under the condition of differentiating all the\n> counters\n> for a queryid using a generic plan and the one using a custom one.\n>\n> For me one way to do that is adding a generic_plan column to\n> pg_stat_statements key, someting like:\n> - userid,\n> - dbid,\n> - queryid,\n> - generic_plan\n>\n\nThanks for your kind advice!\n\n\n> I don't know if generic/custom plan types are available during planning\n> and/or\n> execution hooks ...\n\n\nYeah, that's what I'm concerned about.\n\nAs far as I can see, there are no variables tracking executed\nplan types so we may need to add variables in\nCachedPlanSource or somewhere.\n\nCachedPlanSource.num_custom_plans looked like what is needed,\nbut it is the number of PLANNING so it also increments when\nthe planner calculates both plans and decides to take generic\nplan.\n\n\nTo track executed plan types, I think execution layer hooks\nare appropriate.\nThese hooks, however, take QueryDesc as a param and it does\nnot include cached plan information.\nPortal includes CachedPlanSource but there seem no hooks to\ntake Portal.\n\nSo I'm wondering it's necessary to add a hook to get Portal\nor CachedPlanSource.\nAre these too much change for getting plan types?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n", "msg_date": "Fri, 15 May 2020 17:47:41 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "> To track executed plan types, I think execution layer hooks\n> are appropriate.\n> These hooks, however, take QueryDesc as a param and it does\n> not include cached plan information.\n\nIt seems that the same QueryDesc entry is reused when executing\na generic plan.\nFor example marking queryDesc->plannedstmt->queryId (outside \npg_stat_statements) with a pseudo tag during ExecutorStart \nreappears in later executions with generic plans ...\n\nIs this QueryDesc reusable by a custom plan ? 
If not maybe a solution\ncould be to add a flag in queryDesc->plannedstmt ?\n\n(sorry, I haven't understood yet how and when this generic plan is \nmanaged during planning)\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Sat, 16 May 2020 02:01:35 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On Sat, May 16, 2020 at 6:01 PM legrand legrand <legrand_legrand@hotmail.com>\nwrote:\n\n> > To track executed plan types, I think execution layer hooks\n> > are appropriate.\n> > These hooks, however, take QueryDesc as a param and it does\n> > not include cached plan information.\n>\n> It seems that the same QueryDesc entry is reused when executing\n> a generic plan.\n> For exemple marking queryDesc->plannedstmt->queryId (outside\n> pg_stat_statements) with a pseudo tag during ExecutorStart\n> reappears in later executions with generic plans ...\n>\n> Is this QueryDesc reusable by a custom plan ? 
If not maybe a solution\n> could be to add a flag in queryDesc->plannedstmt ?\n>\n\nThanks for your proposal!\n\nI first thought it was a good idea and tried to add a flag to QueryDesc,\nbut the comments on QueryDesc say it encapsulates everything that\nthe executor needs to execute a query.\n\nWhether a plan is generic or custom is not what executor needs to\nknow for running queries, so now I hesitate to do so.\n\nInstead, I'm now considering using a static hash for prepared queries\n(static HTAB *prepared_queries).\n\n\nBTW, I'd also appreciate other opinions about recording the number\nof generic and custom plans on pg_stat_statements.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n", "msg_date": "Tue, 19 May 2020 22:56:17 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "At Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi <atorik@gmail.com> wrote in \n> On Sat, May 16, 2020 at 6:01 PM legrand legrand <legrand_legrand@hotmail.com>\n> wrote:\n> \n> > > To track executed plan types, I think execution layer hooks\n> > > are appropriate.\n> > > These hooks, however, take QueryDesc as a param and it does\n> > > not include cached plan information.\n> >\n> > It seems that the same QueryDesc entry is reused when executing\n> > a generic plan.\n> > For exemple marking queryDesc->plannedstmt->queryId (outside\n> > pg_stat_statements) with a pseudo tag during ExecutorStart\n> > reappears in later executions with generic plans ...\n> >\n> > Is this QueryDesc reusable by a custom plan ? If not maybe a solution\n> > could be to add a flag in queryDesc->plannedstmt ?\n> >\n> \n> Thanks for your proposal!\n> \n> I first thought it was a good idea and tried to add a flag to QueryDesc,\n> but the comments on QueryDesc say it encapsulates everything that\n> the executor needs to execute a query.\n> \n> Whether a plan is generic or custom is not what executor needs to\n> know for running queries, so now I hesitate to do so.\n> \n> Instead, I'm now considering using a static hash for prepared queries\n> (static HTAB *prepared_queries).\n> \n> \n> BTW, I'd also appreciate other opinions about recording the number\n> of generic and custom plans on pg_stat_statemtents.\n\nIf you/we just want to know how a prepared statement is executed,\ncouldn't we show that information in pg_prepared_statements view?\n\n=# select * from pg_prepared_statements;\n-[ RECORD 1 ]---+----------------------------------------------------\nname | stmt1\nstatement | prepare stmt1 as select * from t where b = $1;\nprepare_time | 
2020-05-20 12:01:55.733469+09\nparameter_types | {text}\nfrom_sql | t\nexec_custom | 5 <- existing num_custom_plans\nexec_total\t | 40 <- new member of CachedPlanSource\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 May 2020 13:32:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On Wed, May 20, 2020 at 1:32 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi <atorik@gmail.com>\n> wrote in\n> > On Sat, May 16, 2020 at 6:01 PM legrand legrand <\n> legrand_legrand@hotmail.com>\n> > wrote:\n> >\n> > BTW, I'd also appreciate other opinions about recording the number\n> > of generic and custom plans on pg_stat_statemtents.\n>\n> If you/we just want to know how a prepared statement is executed,\n> couldn't we show that information in pg_prepared_statements view?\n>\n> =# select * from pg_prepared_statements;\n> -[ RECORD 1 ]---+----------------------------------------------------\n> name | stmt1\n> statement | prepare stmt1 as select * from t where b = $1;\n> prepare_time | 2020-05-20 12:01:55.733469+09\n> parameter_types | {text}\n> from_sql | t\n> exec_custom | 5 <- existing num_custom_plans\n> exec_total | 40 <- new member of CachedPlanSource\n>\n\nThanks, Horiguchi-san!\n\nAdding counters to pg_prepared_statements seems useful when we want\nto know the way prepared statements executed in the current session.\n\nAnd I also feel adding counters to pg_stat_statements will be convenient\nespecially in production environments because it enables us to get\ninformation about not only the current session but all sessions of a\nPostgreSQL instance.\n\nIf both changes are worthwhile, considering implementation complexity,\nit may be reasonable to firstly add columns to pg_prepared_statements\nand then work on 
pg_stat_statements.\n\nRegards,\n\n--\nAtsushi Torikoshi\n", "msg_date": "Wed, 20 May 2020 21:56:04 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/05/20 21:56, Atsushi Torikoshi wrote:\n> \n> On Wed, May 20, 2020 at 1:32 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com <mailto:horikyota.ntt@gmail.com>> wrote:\n> \n> At Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi <atorik@gmail.com <mailto:atorik@gmail.com>> wrote in\n> > On Sat, May 16, 2020 at 6:01 PM legrand legrand <legrand_legrand@hotmail.com <mailto:legrand_legrand@hotmail.com>>\n> > wrote:\n> >\n> > BTW, I'd also appreciate other opinions about recording the number\n> > of generic and custom plans on pg_stat_statemtents.\n> \n> If you/we just want to know how a prepared statement is executed,\n> couldn't we show that information in pg_prepared_statements view?\n> \n> =# select * from pg_prepared_statements;\n> -[ RECORD 1 ]---+----------------------------------------------------\n> name            | stmt1\n> statement       | prepare stmt1 as select * from t where b = $1;\n> prepare_time    | 2020-05-20 12:01:55.733469+09\n> parameter_types | {text}\n> from_sql        | t\n> exec_custom     | 5    <- existing num_custom_plans\n> exec_total          | 40   <- new member of CachedPlanSource\n> \n> \n> Thanks, Horiguchi-san!\n> \n> Adding counters to pg_prepared_statements seems useful when we want\n> to know the way prepared statements executed in the current session.\n\nI like the idea exposing more CachedPlanSource fields in\npg_prepared_statements. 
I agree it's useful, e.g., for the debug purpose.\nThis is why I implemented the similar feature in my extension.\nPlease see [1] for details.\n\n> And I also feel adding counters to pg_stat_statements will be convenient\n> especially in production environments because it enables us to get\n> information about not only the current session but all sessions of a\n> PostgreSQL instance.\n\n+1\n\nRegards,\n\n[1]\nhttps://github.com/MasaoFujii/pg_cheat_funcs#record-pg_cached_plan_sourcestmt-text\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 21 May 2020 12:18:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "At Thu, 21 May 2020 12:18:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/05/20 21:56, Atsushi Torikoshi wrote:\n> > On Wed, May 20, 2020 at 1:32 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com <mailto:horikyota.ntt@gmail.com>> wrote:\n> > At Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi\n> > <atorik@gmail.com <mailto:atorik@gmail.com>> wrote in\n> > > On Sat, May 16, 2020 at 6:01 PM legrand legrand\n> > > <legrand_legrand@hotmail.com <mailto:legrand_legrand@hotmail.com>>\n> > > wrote:\n> > >\n> > > BTW, I'd also appreciate other opinions about recording the number\n> > > of generic and custom plans on pg_stat_statemtents.\n> > If you/we just want to know how a prepared statement is executed,\n> > couldn't we show that information in pg_prepared_statements view?\n> > =# select * from pg_prepared_statements;\n> > -[ RECORD 1 ]---+----------------------------------------------------\n> > name            | stmt1\n> > statement       | prepare stmt1 as select * from t where b = $1;\n> > prepare_time    | 2020-05-20 12:01:55.733469+09\n> > parameter_types | {text}\n> > from_sql        | t\n> 
> exec_custom     | 5    <- existing num_custom_plans\n> > exec_total          | 40   <- new member of CachedPlanSource\n> > Thanks, Horiguchi-san!\n> > Adding counters to pg_prepared_statements seems useful when we want\n> > to know the way prepared statements executed in the current session.\n> \n> I like the idea exposing more CachedPlanSource fields in\n> pg_prepared_statements. I agree it's useful, e.g., for the debug\n> purpose.\n> This is why I implemented the similar feature in my extension.\n> Please see [1] for details.\n\nThanks. I'm not sure plan_cache_mode should be a part of the view.\nCost numbers would look better if it is cooked a bit. Is it worth\nbeing in core?\n\n=# select * from pg_prepared_statements;\n-[ RECORD 1 ]---+--------------------------------------------\nname | p1\nstatement | prepare p1 as select a from t where a = $1;\nprepare_time | 2020-05-21 15:41:50.419578+09\nparameter_types | {integer}\nfrom_sql | t\ncalls | 7\ncustom_calls | 5\nplan_generation | 6\ngeneric_cost | 4.3100000000000005\ncustom_cost | 9.31\n\nPerhaps plan_generation is not needed there.\n\n> > And I also feel adding counters to pg_stat_statements will be\n> > convenient\n> > especially in production environments because it enables us to get\n> > information about not only the current session but all sessions of a\n> > PostgreSQL instance.\n> \n> +1\n\nAgreed. It is global and persistent.\n\nAt Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi <atorik@gmail.com> wrote in \n> Instead, I'm now considering using a static hash for prepared queries\n> (static HTAB *prepared_queries).\n\nThat might be complex and fragile considering nested query and SPI\ncalls. 
I'm not sure, but could we use ActivePortal?\nActivePortal->cplan is a CachedPlan, which can hold the generic/custom\ninformation.\n\nInstead, \n\n\n> [1]\n> https://github.com/MasaoFujii/pg_cheat_funcs#record-pg_cached_plan_sourcestmt-text\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 21 May 2020 17:10:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Hi Torikoshi-san!\n\nOn 2020/05/21 17:10, Kyotaro Horiguchi wrote:\n> At Thu, 21 May 2020 12:18:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>>\n>> On 2020/05/20 21:56, Atsushi Torikoshi wrote:\n>>> On Wed, May 20, 2020 at 1:32 PM Kyotaro Horiguchi\n>>> <horikyota.ntt@gmail.com <mailto:horikyota.ntt@gmail.com>> wrote:\n>>> At Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi\n>>> <atorik@gmail.com <mailto:atorik@gmail.com>> wrote in\n>>> > On Sat, May 16, 2020 at 6:01 PM legrand legrand\n>>> > <legrand_legrand@hotmail.com <mailto:legrand_legrand@hotmail.com>>\n>>> > wrote:\n>>> >\n>>> > BTW, I'd also appreciate other opinions about recording the number\n>>> > of generic and custom plans on pg_stat_statemtents.\n>>> If you/we just want to know how a prepared statement is executed,\n>>> couldn't we show that information in pg_prepared_statements view?\n>>> =# select * from pg_prepared_statements;\n>>> -[ RECORD 1 ]---+----------------------------------------------------\n>>> name | stmt1\n>>> statement | prepare stmt1 as select * from t where b = $1;\n>>> prepare_time | 2020-05-20 12:01:55.733469+09\n>>> parameter_types | {text}\n>>> from_sql | t\n>>> exec_custom | 5 <- existing num_custom_plans\n>>> exec_total | 40 <- new member of CachedPlanSource\n>>> Thanks, Horiguchi-san!\n>>> Adding counters to pg_prepared_statements seems useful when we want\n>>> to know the way prepared statements executed in the 
current session.\n>>\n>> I like the idea exposing more CachedPlanSource fields in\n>> pg_prepared_statements. I agree it's useful, e.g., for the debug\n>> purpose.\n>> This is why I implemented the similar feature in my extension.\n>> Please see [1] for details.\n> \n> Thanks. I'm not sure plan_cache_mode should be a part of the view.\n> Cost numbers would look better if it is cooked a bit. Is it worth\n> being in core?\n> \n> =# select * from pg_prepared_statements;\n> -[ RECORD 1 ]---+--------------------------------------------\n> name | p1\n> statement | prepare p1 as select a from t where a = $1;\n> prepare_time | 2020-05-21 15:41:50.419578+09\n> parameter_types | {integer}\n> from_sql | t\n> calls | 7\n> custom_calls | 5\n> plan_generation | 6\n> generic_cost | 4.3100000000000005\n> custom_cost | 9.31\n> \n> Perhaps plan_generation is not needed there.\n\nI tried to creating PoC patch too, so I share it.\nPlease find attached file.\n\n# Test case\nprepare count as select count(*) from pg_class where oid >$1;\nexecute count(1); select * from pg_prepared_statements;\n\n-[ RECORD 1 ]---+--------------------------------------------------------------\nname | count\nstatement | prepare count as select count(*) from pg_class where oid >$1;\nprepare_time | 2020-05-21 17:41:16.134362+09\nparameter_types | {oid}\nfrom_sql | t\nis_generic_plan | f <= False\n\nYou can see the following result, when you execute it 6 times.\n\n-[ RECORD 1 ]---+--------------------------------------------------------------\nname | count\nstatement | prepare count as select count(*) from pg_class where oid >$1;\nprepare_time | 2020-05-21 17:41:16.134362+09\nparameter_types | {oid}\nfrom_sql | t\nis_generic_plan | t <= True\n\n\nThanks,\nTatsuro Yamada", "msg_date": "Thu, 21 May 2020 17:43:01 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
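[Editor's note] The flip from is_generic_plan = f to t on the sixth execution in Yamada-san's test follows from the plancache cost heuristic (custom plans for the first few executions, then generic once it is no more expensive than the average custom plan). A simplified, hypothetical model of that bookkeeping — the struct and function names below are illustrative, not the real plancache.c API:

```c
#include <assert.h>

/* Hypothetical, simplified model of the plan-choice bookkeeping discussed
 * in this thread.  Field names mirror the proposed counters
 * (generic_plans / custom_plans); this is NOT the real CachedPlanSource. */
typedef struct PlanSourceStats
{
	long long	generic_plans;		/* times generic plan was chosen */
	long long	custom_plans;		/* times custom plan was chosen */
	double		generic_cost;		/* cost of the cached generic plan */
	double		total_custom_cost;	/* accumulated custom plan costs */
} PlanSourceStats;

/*
 * Rough sketch of the heuristic: use a custom plan for the first five
 * executions, then keep using custom plans only while the generic plan
 * is not cheaper than the average custom plan.
 */
static int
choose_custom(const PlanSourceStats *s)
{
	double		avg_custom;

	if (s->custom_plans < 5)
		return 1;
	avg_custom = s->total_custom_cost / (double) s->custom_plans;
	return s->generic_cost >= avg_custom;
}

/* Record one execution, bumping the counter for the plan type chosen. */
static void
record_execution(PlanSourceStats *s, double custom_cost)
{
	if (choose_custom(s))
	{
		s->custom_plans++;
		s->total_custom_cost += custom_cost;
	}
	else
		s->generic_plans++;
}
```

With Horiguchi-san's example costs (generic 4.31, custom 9.31), the model reproduces the observed behavior: five custom executions, then generic from the sixth on.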
}, { "msg_contents": "Thanks for writing a patch!\n\nOn Thu, May 21, 2020 at 5:10 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 21 May 2020 12:18:16 +0900, Fujii Masao <\n> masao.fujii@oss.nttdata.com> wrote in\n> > I like the idea exposing more CachedPlanSource fields in\n> > pg_prepared_statements. I agree it's useful, e.g., for the debug\n> > purpose.\n> > This is why I implemented the similar feature in my extension.\n> > Please see [1] for details.\n>\n> Thanks. I'm not sure plan_cache_mode should be a part of the view.\n> Cost numbers would look better if it is cooked a bit. Is it worth\n> being in core?\n\n\nI didn't come up with ideas about how to use them.\nIMHO they might not so necessary..\n\n\n> =# select * from pg_prepared_statements;\n> -[ RECORD 1 ]---+--------------------------------------------\n> name | p1\n> statement | prepare p1 as select a from t where a = $1;\n> prepare_time | 2020-05-21 15:41:50.419578+09\n> parameter_types | {integer}\n> from_sql | t\n> calls | 7\n> custom_calls | 5\n> plan_generation | 6\n> generic_cost | 4.3100000000000005\n> custom_cost | 9.31\n>\n> Perhaps plan_generation is not needed there.\n>\n\n+1.\nNow 'calls' is sufficient and 'plan_generation' may confuse users.\n\nBTW, considering 'calls' in pg_stat_statements is the number of times\nstatements were EXECUTED and 'plans' is the number of times\nstatements were PLANNED, how about substituting 'calls' for 'plans'?\n\n\nAt Tue, 19 May 2020 22:56:17 +0900, Atsushi Torikoshi <atorik@gmail.com>\n> wrote in\n> > Instead, I'm now considering using a static hash for prepared queries\n> > (static HTAB *prepared_queries).\n>\n> That might be complex and fragile considering nested query and SPI\n> calls. 
I'm not sure, but could we use ActivePortal?\n> ActivePortal->cplan is a CachedPlan, which can hold the generic/custom\n> information.\n>\n\nYes, I once looked for hooks which can get Portal, I couldn't find them.\nAnd it also seems difficult getting keys for HTAB *prepared_queries\nin existing executor hooks.\nThere may be oversights, but I'm now feeling returning to the idea\nhook additions.\n\n| Portal includes CachedPlanSource but there seem no hooks to\n| take Portal.\n| So I'm wondering it's necessary to add a hook to get Portal\n| or CachedPlanSource.\n| Are these too much change for getting plan types?\n\n\nOn Thu, May 21, 2020 at 5:43 PM Tatsuro Yamada <\ntatsuro.yamada.tf@nttcom.co.jp> wrote:\n\n> I tried to creating PoC patch too, so I share it.\n> Please find attached file.\n>\n\nThanks!\n\nI agree with your idea showing the latest plan is generic or custom.\n\nThis patch judges whether the lastest plan was generic based on\nplansource->gplan existence, but plansource->gplan can exist even\nwhen the planner chooses custom.\nFor example, a prepared statement was executed first 6 times and\na generic plan was generated for comparison but the custom plan\nwon.\n\nAttached another patch showing latest plan based on\n'0001-Expose-counters-of-plancache-to-pg_prepared_statemen.patch'.\n\nAs I wrote above, I suppose some of the columns might not necessary\nand it'd better change some column and variable names, but I left them\nfor other opinions.\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Mon, 25 May 2020 10:54:05 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On Mon, May 25, 2020 at 10:54 AM Atsushi Torikoshi <atorik@gmail.com> wrote:\n\n> On Thu, May 21, 2020 at 5:10 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n>\n>> Cost numbers would look better if it is cooked a bit. 
Is it worth\n>> being in core?\n>\n>\n> I didn't come up with ideas about how to use them.\n> IMHO they might not so necessary..\n>\n\n\n> Perhaps plan_generation is not needed there.\n>>\n>\n> +1.\n> Now 'calls' is sufficient and 'plan_generation' may confuse users.\n>\n> BTW, considering 'calls' in pg_stat_statements is the number of times\n> statements were EXECUTED and 'plans' is the number of times\n> statements were PLANNED, how about substituting 'calls' for 'plans'?\n>\n\nI've modified the above points and also exposed the numbers of each\n generic plans and custom plans.\n\nI'm now a little bit worried about the below change which removed\nthe overflow checking for num_custom_plans, which was introduced\nin 0001-Expose-counters-of-plancache-to-pg_prepared_statement.patch,\nbut I've left it because the maximum of int64 seems enough large\nfor counters.\nAlso referencing other counters in pg_stat_user_tables, they don't\nseem to care about it.\n\n```\n- /* Accumulate total costs of custom plans, but 'ware\noverflow */\n- if (plansource->num_custom_plans < INT_MAX)\n- {\n- plansource->total_custom_cost +=\ncached_plan_cost(plan, true);\n- plansource->num_custom_plans++;\n- }\n```\n\nRegards,\n\nAtsushi Torikoshi\n\n>", "msg_date": "Thu, 4 Jun 2020 17:04:36 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-06-04 17:04, Atsushi Torikoshi wrote:\n> On Mon, May 25, 2020 at 10:54 AM Atsushi Torikoshi <atorik@gmail.com>\n> wrote:\n> \n>> On Thu, May 21, 2020 at 5:10 PM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> \n>>> Cost numbers would look better if it is cooked a bit. 
Is it worth\n>>> being in core?\n>> \n>> I didn't come up with ideas about how to use them.\n>> IMHO they might not so necessary..\n> \n>>> Perhaps plan_generation is not needed there.\n>> \n>> +1.\n>> Now 'calls' is sufficient and 'plan_generation' may confuse users.\n>> \n>> BTW, considering 'calls' in pg_stat_statements is the number of\n>> times\n>> statements were EXECUTED and 'plans' is the number of times\n>> statements were PLANNED, how about substituting 'calls' for\n>> 'plans'?\n> \n> I've modified the above points and also exposed the numbers of each\n> generic plans and custom plans.\n> \n> I'm now a little bit worried about the below change which removed\n> the overflow checking for num_custom_plans, which was introduced\n> in 0001-Expose-counters-of-plancache-to-pg_prepared_statement.patch,\n> but I've left it because the maximum of int64 seems enough large\n> for counters.\n> Also referencing other counters in pg_stat_user_tables, they don't\n> seem to care about it.\n> \n> ```\n> - /* Accumulate total costs of custom plans, but 'ware\n> overflow */\n> - if (plansource->num_custom_plans < INT_MAX)\n> - {\n> - plansource->total_custom_cost +=\n> cached_plan_cost(plan, true);\n> - plansource->num_custom_plans++;\n> - }\n> \n> ```\n> \n> Regards,\n> \n> Atsushi Torikoshi\n> \n>> \n\nAs a user, I think this feature is useful for performance analysis.\nI hope that this feature is merged.\n\nBTW, I found that the dependency between function's comments and\nthe modified code is broken at latest patch. 
Before this is\ncommitted, please fix it.\n\n```\ndiff --git a/src/backend/commands/prepare.c \nb/src/backend/commands/prepare.c\nindex 990782e77f..b63d3214df 100644\n--- a/src/backend/commands/prepare.c\n+++ b/src/backend/commands/prepare.c\n@@ -694,7 +694,8 @@ ExplainExecuteQuery(ExecuteStmt *execstmt, \nIntoClause *into, ExplainState *es,\n\n /*\n * This set returning function reads all the prepared statements and\n- * returns a set of (name, statement, prepare_time, param_types, \nfrom_sql).\n+ * returns a set of (name, statement, prepare_time, param_types, \nfrom_sql,\n+ * generic_plans, custom_plans, last_plan).\n */\n Datum\n pg_prepared_statement(PG_FUNCTION_ARGS)\n```\n\nRegards,\n\n-- \nMasahiro Ikeda\n\n\n", "msg_date": "Mon, 08 Jun 2020 20:45:15 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-06-08 20:45, Masahiro Ikeda wrote:\n\n> BTW, I found that the dependency between function's comments and\n> the modified code is broken at latest patch. 
Before this is\n> committed, please fix it.\n> \n> ```\n> diff --git a/src/backend/commands/prepare.c \n> b/src/backend/commands/prepare.c\n> index 990782e77f..b63d3214df 100644\n> --- a/src/backend/commands/prepare.c\n> +++ b/src/backend/commands/prepare.c\n> @@ -694,7 +694,8 @@ ExplainExecuteQuery(ExecuteStmt *execstmt,\n> IntoClause *into, ExplainState *es,\n> \n> /*\n> * This set returning function reads all the prepared statements and\n> - * returns a set of (name, statement, prepare_time, param_types, \n> from_sql).\n> + * returns a set of (name, statement, prepare_time, param_types, \n> from_sql,\n> + * generic_plans, custom_plans, last_plan).\n> */\n> Datum\n> pg_prepared_statement(PG_FUNCTION_ARGS)\n> ```\n\nThanks for reviewing!\n\nI've fixed it.\n\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Wed, 10 Jun 2020 10:50:58 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "At Wed, 10 Jun 2020 10:50:58 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> On 2020-06-08 20:45, Masahiro Ikeda wrote:\n> \n> > BTW, I found that the dependency between function's comments and\n> > the modified code is broken at latest patch. 
Before this is\n> > committed, please fix it.\n> > ```\n> > diff --git a/src/backend/commands/prepare.c\n> > b/src/backend/commands/prepare.c\n> > index 990782e77f..b63d3214df 100644\n> > --- a/src/backend/commands/prepare.c\n> > +++ b/src/backend/commands/prepare.c\n> > @@ -694,7 +694,8 @@ ExplainExecuteQuery(ExecuteStmt *execstmt,\n> > IntoClause *into, ExplainState *es,\n> > /*\n> > * This set returning function reads all the prepared statements and\n> > - * returns a set of (name, statement, prepare_time, param_types,\n> > - * from_sql).\n> > + * returns a set of (name, statement, prepare_time, param_types,\n> > from_sql,\n> > + * generic_plans, custom_plans, last_plan).\n> > */\n> > Datum\n> > pg_prepared_statement(PG_FUNCTION_ARGS)\n> > ```\n> \n> Thanks for reviewing!\n> \n> I've fixed it.\n\n+\tTupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n\nThis could be a problem if we showed the last plan in this view. I\nthink \"last_plan_type\" would be better.\n\n+\t\t\tif (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n+\t\t\t\tvalues[7] = CStringGetTextDatum(\"custom\");\n+\t\t\telse if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n+\t\t\t\tvalues[7] = CStringGetTextDatum(\"generic\");\n+\t\t\telse\n+\t\t\t\tnulls[7] = true;\n\nUsing swith-case prevents future additional type (if any) from being\nunhandled. I think we are recommending that as a convension.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 10 Jun 2020 18:00:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n\n> \n> +\tTupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n> \n> This could be a problem if we showed the last plan in this view. 
I\n> think \"last_plan_type\" would be better.\n> \n> +\t\t\tif (prep_stmt->plansource->last_plan_type == \n> PLAN_CACHE_TYPE_CUSTOM)\n> +\t\t\t\tvalues[7] = CStringGetTextDatum(\"custom\");\n> +\t\t\telse if (prep_stmt->plansource->last_plan_type == \n> PLAN_CACHE_TYPE_GENERIC)\n> +\t\t\t\tvalues[7] = CStringGetTextDatum(\"generic\");\n> +\t\t\telse\n> +\t\t\t\tnulls[7] = true;\n> \n> Using swith-case prevents future additional type (if any) from being\n> unhandled. I think we are recommending that as a convension.\n\nThanks for your reviewing!\n\nI've attached a patch that reflects your comments.\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Thu, 11 Jun 2020 14:59:22 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/06/11 14:59, torikoshia wrote:\n> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n> \n>>\n>> +��� TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>\n>> This could be a problem if we showed the last plan in this view.� I\n>> think \"last_plan_type\" would be better.\n>>\n>> +����������� if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n>> +��������������� values[7] = CStringGetTextDatum(\"custom\");\n>> +����������� else if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n>> +��������������� values[7] = CStringGetTextDatum(\"generic\");\n>> +����������� else\n>> +��������������� nulls[7] = true;\n>>\n>> Using swith-case prevents future additional type (if any) from being\n>> unhandled.� I think we are recommending that as a convension.\n> \n> Thanks for your reviewing!\n> \n> I've attached a patch that reflects your comments.\n\nThanks for the patch! 
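[Editor's note] The switch-case convention recommended above can be sketched as follows. This is a hypothetical standalone version of the patch's mapping: PLAN_CACHE_TYPE_CUSTOM / PLAN_CACHE_TYPE_GENERIC come from the patch, while the enum wrapper, PLAN_CACHE_TYPE_NONE (the not-yet-executed state shown as NULL in the view), and plan_type_label() are assumed names for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical enum for the patch's PLAN_CACHE_TYPE_* constants. */
typedef enum PlanCacheType
{
	PLAN_CACHE_TYPE_NONE,		/* assumed: statement not executed yet */
	PLAN_CACHE_TYPE_CUSTOM,
	PLAN_CACHE_TYPE_GENERIC
} PlanCacheType;

/*
 * Map a plan type to the text shown in the view.  The switch deliberately
 * has no default branch: if a new PlanCacheType value is added later,
 * compilers (e.g. gcc with -Wswitch) warn about the unhandled case,
 * which is the point of the convention.
 */
static const char *
plan_type_label(PlanCacheType t)
{
	switch (t)
	{
		case PLAN_CACHE_TYPE_CUSTOM:
			return "custom";
		case PLAN_CACHE_TYPE_GENERIC:
			return "generic";
		case PLAN_CACHE_TYPE_NONE:
			return NULL;		/* caller sets nulls[i] = true */
	}
	return NULL;				/* unreachable; keeps all compilers quiet */
}
```

The if/else chain in the original patch compiles identically, but silently falls through to NULL for any future value, which is why the enumerated switch is preferred.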
Here are the comments.\n\n\n+ Number of times generic plan was choosen\n+ Number of times custom plan was choosen\n\nTypo: \"choosen\" should be \"chosen\"?\n\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>last_plan_type</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Tells the last plan type was generic or custom. If the prepared\n+ statement has not executed yet, this field is null\n+ </para></entry>\n\nCould you tell me how this information is expected to be used?\nI think that generic_plans and custom_plans are useful when investigating\nthe cause of performance drop by cached plan mode. But I failed to get\nhow much useful last_plan_type is.\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 6 Jul 2020 22:16:18 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-07-06 22:16, Fujii Masao wrote:\n> On 2020/06/11 14:59, torikoshia wrote:\n>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>> \n>>> \n>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>> \n>>> This could be a problem if we showed the last plan in this view.  I\n>>> think \"last_plan_type\" would be better.\n>>> \n>>> +            if (prep_stmt->plansource->last_plan_type == \n>>> PLAN_CACHE_TYPE_CUSTOM)\n>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>> +            else if (prep_stmt->plansource->last_plan_type == \n>>> PLAN_CACHE_TYPE_GENERIC)\n>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>> +            else\n>>> +                nulls[7] = true;\n>>> \n>>> Using swith-case prevents future additional type (if any) from being\n>>> unhandled.  
I think we are recommending that as a convension.\n>> \n>> Thanks for your reviewing!\n>> \n>> I've attached a patch that reflects your comments.\n> \n> Thanks for the patch! Here are the comments.\n\nThanks for your review!\n\n> + Number of times generic plan was choosen\n> + Number of times custom plan was choosen\n> \n> Typo: \"choosen\" should be \"chosen\"?\n\nThanks, fixed them.\n\n> + <entry role=\"catalog_table_entry\"><para \n> role=\"column_definition\">\n> + <structfield>last_plan_type</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Tells the last plan type was generic or custom. If the \n> prepared\n> + statement has not executed yet, this field is null\n> + </para></entry>\n> \n> Could you tell me how this information is expected to be used?\n> I think that generic_plans and custom_plans are useful when \n> investigating\n> the cause of performance drop by cached plan mode. But I failed to get\n> how much useful last_plan_type is.\n\nThis may be an exceptional case, but I once had a case needed\nto ensure whether generic or custom plan was chosen for specific\nqueries in a development environment.\n\nOf course, we can know it from adding EXPLAIN and ensuring whether $n\nis contained in the plan, but I feel using the view is easier to use\nand understand.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 08 Jul 2020 10:14:42 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/07/08 10:14, torikoshia wrote:\n> On 2020-07-06 22:16, Fujii Masao wrote:\n>> On 2020/06/11 14:59, torikoshia wrote:\n>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>\n>>>>\n>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>\n>>>> This could be a problem if we showed the last plan in this view.  
I\n>>>> think \"last_plan_type\" would be better.\n>>>>\n>>>> +            if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>> +            else if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>> +            else\n>>>> +                nulls[7] = true;\n>>>>\n>>>> Using swith-case prevents future additional type (if any) from being\n>>>> unhandled.  I think we are recommending that as a convension.\n>>>\n>>> Thanks for your reviewing!\n>>>\n>>> I've attached a patch that reflects your comments.\n>>\n>> Thanks for the patch! Here are the comments.\n> \n> Thanks for your review!\n> \n>> +        Number of times generic plan was choosen\n>> +        Number of times custom plan was choosen\n>>\n>> Typo: \"choosen\" should be \"chosen\"?\n> \n> Thanks, fixed them.\n> \n>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>> +      </para>\n>> +      <para>\n>> +        Tells the last plan type was generic or custom. If the prepared\n>> +        statement has not executed yet, this field is null\n>> +      </para></entry>\n>>\n>> Could you tell me how this information is expected to be used?\n>> I think that generic_plans and custom_plans are useful when investigating\n>> the cause of performance drop by cached plan mode. But I failed to get\n>> how much useful last_plan_type is.\n> \n> This may be an exceptional case, but I once had a case needed\n> to ensure whether generic or custom plan was chosen for specific\n> queries in a development environment.\n\nIn your case, probably you had to ensure that the last multiple (or every)\nexecutions chose generic or custom plan? If yes, I'm afraid that displaying\nonly the last plan mode is not enough for your case. 
No?\nSo it seems better to check generic_plans or custom_plans columns in the\nview rather than last_plan_type even in your case. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Jul 2020 16:41:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-07-08 16:41, Fujii Masao wrote:\n> On 2020/07/08 10:14, torikoshia wrote:\n>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>> \n>>>>> \n>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>> \n>>>>> This could be a problem if we showed the last plan in this view.  I\n>>>>> think \"last_plan_type\" would be better.\n>>>>> \n>>>>> +            if (prep_stmt->plansource->last_plan_type == \n>>>>> PLAN_CACHE_TYPE_CUSTOM)\n>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>> +            else if (prep_stmt->plansource->last_plan_type == \n>>>>> PLAN_CACHE_TYPE_GENERIC)\n>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>> +            else\n>>>>> +                nulls[7] = true;\n>>>>> \n>>>>> Using swith-case prevents future additional type (if any) from \n>>>>> being\n>>>>> unhandled.  I think we are recommending that as a convension.\n>>>> \n>>>> Thanks for your reviewing!\n>>>> \n>>>> I've attached a patch that reflects your comments.\n>>> \n>>> Thanks for the patch! 
Here are the comments.\n>> \n>> Thanks for your review!\n>> \n>>> +        Number of times generic plan was choosen\n>>> +        Number of times custom plan was choosen\n>>> \n>>> Typo: \"choosen\" should be \"chosen\"?\n>> \n>> Thanks, fixed them.\n>> \n>>> +      <entry role=\"catalog_table_entry\"><para \n>>> role=\"column_definition\">\n>>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>>> +      </para>\n>>> +      <para>\n>>> +        Tells the last plan type was generic or custom. If the \n>>> prepared\n>>> +        statement has not executed yet, this field is null\n>>> +      </para></entry>\n>>> \n>>> Could you tell me how this information is expected to be used?\n>>> I think that generic_plans and custom_plans are useful when \n>>> investigating\n>>> the cause of performance drop by cached plan mode. But I failed to \n>>> get\n>>> how much useful last_plan_type is.\n>> \n>> This may be an exceptional case, but I once had a case needed\n>> to ensure whether generic or custom plan was chosen for specific\n>> queries in a development environment.\n> \n> In your case, probably you had to ensure that the last multiple (or \n> every)\n> executions chose generic or custom plan? If yes, I'm afraid that \n> displaying\n> only the last plan mode is not enough for your case. No?\n> So it seems better to check generic_plans or custom_plans columns in \n> the\n> view rather than last_plan_type even in your case. Thought?\n\nYeah, I now feel last_plan is not so necessary and only the numbers of\ngeneric/custom plan is enough.\n\nIf there are no objections, I'm going to remove this column and related \ncodes.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:49:11 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On 2020-07-10 10:49, torikoshia wrote:\n> On 2020-07-08 16:41, Fujii Masao wrote:\n>> On 2020/07/08 10:14, torikoshia wrote:\n>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>> \n>>>>>> \n>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>> \n>>>>>> This could be a problem if we showed the last plan in this view.  \n>>>>>> I\n>>>>>> think \"last_plan_type\" would be better.\n>>>>>> \n>>>>>> +            if (prep_stmt->plansource->last_plan_type == \n>>>>>> PLAN_CACHE_TYPE_CUSTOM)\n>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>> +            else if (prep_stmt->plansource->last_plan_type == \n>>>>>> PLAN_CACHE_TYPE_GENERIC)\n>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>> +            else\n>>>>>> +                nulls[7] = true;\n>>>>>> \n>>>>>> Using swith-case prevents future additional type (if any) from \n>>>>>> being\n>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>> \n>>>>> Thanks for your reviewing!\n>>>>> \n>>>>> I've attached a patch that reflects your comments.\n>>>> \n>>>> Thanks for the patch! Here are the comments.\n>>> \n>>> Thanks for your review!\n>>> \n>>>> +        Number of times generic plan was choosen\n>>>> +        Number of times custom plan was choosen\n>>>> \n>>>> Typo: \"choosen\" should be \"chosen\"?\n>>> \n>>> Thanks, fixed them.\n>>> \n>>>> +      <entry role=\"catalog_table_entry\"><para \n>>>> role=\"column_definition\">\n>>>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>>>> +      </para>\n>>>> +      <para>\n>>>> +        Tells the last plan type was generic or custom. 
If the \n>>>> prepared\n>>>> +        statement has not executed yet, this field is null\n>>>> +      </para></entry>\n>>>> \n>>>> Could you tell me how this information is expected to be used?\n>>>> I think that generic_plans and custom_plans are useful when \n>>>> investigating\n>>>> the cause of performance drop by cached plan mode. But I failed to \n>>>> get\n>>>> how much useful last_plan_type is.\n>>> \n>>> This may be an exceptional case, but I once had a case needed\n>>> to ensure whether generic or custom plan was chosen for specific\n>>> queries in a development environment.\n>> \n>> In your case, probably you had to ensure that the last multiple (or \n>> every)\n>> executions chose generic or custom plan? If yes, I'm afraid that \n>> displaying\n>> only the last plan mode is not enough for your case. No?\n>> So it seems better to check generic_plans or custom_plans columns in \n>> the\n>> view rather than last_plan_type even in your case. Thought?\n> \n> Yeah, I now feel last_plan is not so necessary and only the numbers of\n> generic/custom plan is enough.\n> \n> If there are no objections, I'm going to remove this column and related \n> codes.\n\nAs mentioned, I removed last_plan column.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 14 Jul 2020 21:24:25 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "\n\nOn 2020/07/14 21:24, torikoshia wrote:\n> On 2020-07-10 10:49, torikoshia wrote:\n>> On 2020-07-08 16:41, Fujii Masao wrote:\n>>> On 2020/07/08 10:14, torikoshia wrote:\n>>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>>>\n>>>>>>>\n>>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>>>\n>>>>>>> This could be a problem if we showed the last plan in this view. I\n>>>>>>> think \"last_plan_type\" would be better.\n>>>>>>>\n>>>>>>> +            if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n>>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>>> +            else if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n>>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>>> +            else\n>>>>>>> +                nulls[7] = true;\n>>>>>>>\n>>>>>>> Using swith-case prevents future additional type (if any) from being\n>>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>>>\n>>>>>> Thanks for your reviewing!\n>>>>>>\n>>>>>> I've attached a patch that reflects your comments.\n>>>>>\n>>>>> Thanks for the patch! Here are the comments.\n>>>>\n>>>> Thanks for your review!\n>>>>\n>>>>> +        Number of times generic plan was choosen\n>>>>> +        Number of times custom plan was choosen\n>>>>>\n>>>>> Typo: \"choosen\" should be \"chosen\"?\n>>>>\n>>>> Thanks, fixed them.\n>>>>\n>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>>>>> +      </para>\n>>>>> +      <para>\n>>>>> +        Tells the last plan type was generic or custom. 
If the prepared\n>>>>> +        statement has not executed yet, this field is null\n>>>>> +      </para></entry>\n>>>>>\n>>>>> Could you tell me how this information is expected to be used?\n>>>>> I think that generic_plans and custom_plans are useful when investigating\n>>>>> the cause of performance drop by cached plan mode. But I failed to get\n>>>>> how much useful last_plan_type is.\n>>>>\n>>>> This may be an exceptional case, but I once had a case needed\n>>>> to ensure whether generic or custom plan was chosen for specific\n>>>> queries in a development environment.\n>>>\n>>> In your case, probably you had to ensure that the last multiple (or every)\n>>> executions chose generic or custom plan? If yes, I'm afraid that displaying\n>>> only the last plan mode is not enough for your case. No?\n>>> So it seems better to check generic_plans or custom_plans columns in the\n>>> view rather than last_plan_type even in your case. Thought?\n>>\n>> Yeah, I now feel last_plan is not so necessary and only the numbers of\n>> generic/custom plan is enough.\n>>\n>> If there are no objections, I'm going to remove this column and related codes.\n> \n> As mentioned, I removed last_plan column.\n\nThanks for updating the patch! It basically looks good to me.\n\nI have one comment; you added the regression tests for generic and\ncustom plans into prepare.sql. But the similar tests already exist in\nplancache.sql. So isn't it better to add the tests for generic_plans and\ncustom_plans columns, into plancache.sql?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 15 Jul 2020 11:44:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On 2020-07-15 11:44, Fujii Masao wrote:\n> On 2020/07/14 21:24, torikoshia wrote:\n>> On 2020-07-10 10:49, torikoshia wrote:\n>>> On 2020-07-08 16:41, Fujii Masao wrote:\n>>>> On 2020/07/08 10:14, torikoshia wrote:\n>>>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>>>> \n>>>>>>>> \n>>>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>>>> \n>>>>>>>> This could be a problem if we showed the last plan in this view. \n>>>>>>>> I\n>>>>>>>> think \"last_plan_type\" would be better.\n>>>>>>>> \n>>>>>>>> +            if (prep_stmt->plansource->last_plan_type == \n>>>>>>>> PLAN_CACHE_TYPE_CUSTOM)\n>>>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>>>> +            else if (prep_stmt->plansource->last_plan_type == \n>>>>>>>> PLAN_CACHE_TYPE_GENERIC)\n>>>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>>>> +            else\n>>>>>>>> +                nulls[7] = true;\n>>>>>>>> \n>>>>>>>> Using swith-case prevents future additional type (if any) from \n>>>>>>>> being\n>>>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>>>> \n>>>>>>> Thanks for your reviewing!\n>>>>>>> \n>>>>>>> I've attached a patch that reflects your comments.\n>>>>>> \n>>>>>> Thanks for the patch! Here are the comments.\n>>>>> \n>>>>> Thanks for your review!\n>>>>> \n>>>>>> +        Number of times generic plan was choosen\n>>>>>> +        Number of times custom plan was choosen\n>>>>>> \n>>>>>> Typo: \"choosen\" should be \"chosen\"?\n>>>>> \n>>>>> Thanks, fixed them.\n>>>>> \n>>>>>> +      <entry role=\"catalog_table_entry\"><para \n>>>>>> role=\"column_definition\">\n>>>>>> +       <structfield>last_plan_type</structfield> \n>>>>>> <type>text</type>\n>>>>>> +      </para>\n>>>>>> +      <para>\n>>>>>> +        Tells the last plan type was generic or custom. 
If the \n>>>>>> prepared\n>>>>>> +        statement has not executed yet, this field is null\n>>>>>> +      </para></entry>\n>>>>>> \n>>>>>> Could you tell me how this information is expected to be used?\n>>>>>> I think that generic_plans and custom_plans are useful when \n>>>>>> investigating\n>>>>>> the cause of performance drop by cached plan mode. But I failed to \n>>>>>> get\n>>>>>> how much useful last_plan_type is.\n>>>>> \n>>>>> This may be an exceptional case, but I once had a case needed\n>>>>> to ensure whether generic or custom plan was chosen for specific\n>>>>> queries in a development environment.\n>>>> \n>>>> In your case, probably you had to ensure that the last multiple (or \n>>>> every)\n>>>> executions chose generic or custom plan? If yes, I'm afraid that \n>>>> displaying\n>>>> only the last plan mode is not enough for your case. No?\n>>>> So it seems better to check generic_plans or custom_plans columns in \n>>>> the\n>>>> view rather than last_plan_type even in your case. Thought?\n>>> \n>>> Yeah, I now feel last_plan is not so necessary and only the numbers \n>>> of\n>>> generic/custom plan is enough.\n>>> \n>>> If there are no objections, I'm going to remove this column and \n>>> related codes.\n>> \n>> As mentioned, I removed last_plan column.\n> \n> Thanks for updating the patch! It basically looks good to me.\n> \n> I have one comment; you added the regression tests for generic and\n> custom plans into prepare.sql. But the similar tests already exist in\n> plancache.sql. 
So isn't it better to add the tests for generic_plans \n> and\n> custom_plans columns, into plancache.sql?\n\n\nThanks for your comments!\n\nAgreed.\nI removed tests on prepare.sql and added them to plancache.sql.\nThoughts?\n\n\nRegards,\n\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Thu, 16 Jul 2020 11:50:20 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020/07/16 11:50, torikoshia wrote:\n> On 2020-07-15 11:44, Fujii Masao wrote:\n>> On 2020/07/14 21:24, torikoshia wrote:\n>>> On 2020-07-10 10:49, torikoshia wrote:\n>>>> On 2020-07-08 16:41, Fujii Masao wrote:\n>>>>> On 2020/07/08 10:14, torikoshia wrote:\n>>>>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>>>>>\n>>>>>>>>>\n>>>>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>>>>>\n>>>>>>>>> This could be a problem if we showed the last plan in this view. I\n>>>>>>>>> think \"last_plan_type\" would be better.\n>>>>>>>>>\n>>>>>>>>> +            if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n>>>>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>>>>> +            else if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n>>>>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>>>>> +            else\n>>>>>>>>> +                nulls[7] = true;\n>>>>>>>>>\n>>>>>>>>> Using swith-case prevents future additional type (if any) from being\n>>>>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>>>>>\n>>>>>>>> Thanks for your reviewing!\n>>>>>>>>\n>>>>>>>> I've attached a patch that reflects your comments.\n>>>>>>>\n>>>>>>> Thanks for the patch! 
Here are the comments.\n>>>>>>\n>>>>>> Thanks for your review!\n>>>>>>\n>>>>>>> +        Number of times generic plan was choosen\n>>>>>>> +        Number of times custom plan was choosen\n>>>>>>>\n>>>>>>> Typo: \"choosen\" should be \"chosen\"?\n>>>>>>\n>>>>>> Thanks, fixed them.\n>>>>>>\n>>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>>>>>>> +      </para>\n>>>>>>> +      <para>\n>>>>>>> +        Tells the last plan type was generic or custom. If the prepared\n>>>>>>> +        statement has not executed yet, this field is null\n>>>>>>> +      </para></entry>\n>>>>>>>\n>>>>>>> Could you tell me how this information is expected to be used?\n>>>>>>> I think that generic_plans and custom_plans are useful when investigating\n>>>>>>> the cause of performance drop by cached plan mode. But I failed to get\n>>>>>>> how much useful last_plan_type is.\n>>>>>>\n>>>>>> This may be an exceptional case, but I once had a case needed\n>>>>>> to ensure whether generic or custom plan was chosen for specific\n>>>>>> queries in a development environment.\n>>>>>\n>>>>> In your case, probably you had to ensure that the last multiple (or every)\n>>>>> executions chose generic or custom plan? If yes, I'm afraid that displaying\n>>>>> only the last plan mode is not enough for your case. No?\n>>>>> So it seems better to check generic_plans or custom_plans columns in the\n>>>>> view rather than last_plan_type even in your case. Thought?\n>>>>\n>>>> Yeah, I now feel last_plan is not so necessary and only the numbers of\n>>>> generic/custom plan is enough.\n>>>>\n>>>> If there are no objections, I'm going to remove this column and related codes.\n>>>\n>>> As mentioned, I removed last_plan column.\n>>\n>> Thanks for updating the patch! It basically looks good to me.\n>>\n>> I have one comment; you added the regression tests for generic and\n>> custom plans into prepare.sql. 
But the similar tests already exist in\n>> plancache.sql. So isn't it better to add the tests for generic_plans and\n>> custom_plans columns, into plancache.sql?\n> \n> \n> Thanks for your comments!\n> \n> Agreed.\n> I removed tests on prepare.sql and added them to plancache.sql.\n> Thoughts?\n\nThanks for updating the patch!\nI also applied the following minor changes to the patch.\n\n- Number of times generic plan was chosen\n+ Number of times generic plan was chosen\n- Number of times custom plan was chosen\n+ Number of times custom plan was chosen\n\nI got rid of one space character before those descriptions because\nthey should start from the position of 7th character.\n\n -- but we can force a custom plan\n set plan_cache_mode to force_custom_plan;\n explain (costs off) execute test_mode_pp(2);\n+select name, generic_plans, custom_plans from pg_prepared_statements\n+ where name = 'test_mode_pp';\n\nIn the regression test, I added the execution of pg_prepared_statements\nafter the last execution of test query, to confirm that custom plan is used\nwhen force_custom_plan is set, by checking from pg_prepared_statements.\n\nI changed the status of this patch to \"Ready for Committer\" in CF.\n\nBarring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 17 Jul 2020 16:25:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "\n\nOn 2020/07/17 16:25, Fujii Masao wrote:\n> \n> \n> On 2020/07/16 11:50, torikoshia wrote:\n>> On 2020-07-15 11:44, Fujii Masao wrote:\n>>> On 2020/07/14 21:24, torikoshia wrote:\n>>>> On 2020-07-10 10:49, torikoshia wrote:\n>>>>> On 2020-07-08 16:41, Fujii Masao wrote:\n>>>>>> On 2020/07/08 10:14, torikoshia wrote:\n>>>>>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>>>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>>>>>>\n>>>>>>>>>>\n>>>>>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>>>>>>\n>>>>>>>>>> This could be a problem if we showed the last plan in this view. I\n>>>>>>>>>> think \"last_plan_type\" would be better.\n>>>>>>>>>>\n>>>>>>>>>> +            if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_CUSTOM)\n>>>>>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>>>>>> +            else if (prep_stmt->plansource->last_plan_type == PLAN_CACHE_TYPE_GENERIC)\n>>>>>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>>>>>> +            else\n>>>>>>>>>> +                nulls[7] = true;\n>>>>>>>>>>\n>>>>>>>>>> Using swith-case prevents future additional type (if any) from being\n>>>>>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>>>>>>\n>>>>>>>>> Thanks for your reviewing!\n>>>>>>>>>\n>>>>>>>>> I've attached a patch that reflects your comments.\n>>>>>>>>\n>>>>>>>> Thanks for the patch! 
Here are the comments.\n>>>>>>>\n>>>>>>> Thanks for your review!\n>>>>>>>\n>>>>>>>> +        Number of times generic plan was choosen\n>>>>>>>> +        Number of times custom plan was choosen\n>>>>>>>>\n>>>>>>>> Typo: \"choosen\" should be \"chosen\"?\n>>>>>>>\n>>>>>>> Thanks, fixed them.\n>>>>>>>\n>>>>>>>> +      <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>>>>>>>> +       <structfield>last_plan_type</structfield> <type>text</type>\n>>>>>>>> +      </para>\n>>>>>>>> +      <para>\n>>>>>>>> +        Tells the last plan type was generic or custom. If the prepared\n>>>>>>>> +        statement has not executed yet, this field is null\n>>>>>>>> +      </para></entry>\n>>>>>>>>\n>>>>>>>> Could you tell me how this information is expected to be used?\n>>>>>>>> I think that generic_plans and custom_plans are useful when investigating\n>>>>>>>> the cause of performance drop by cached plan mode. But I failed to get\n>>>>>>>> how much useful last_plan_type is.\n>>>>>>>\n>>>>>>> This may be an exceptional case, but I once had a case needed\n>>>>>>> to ensure whether generic or custom plan was chosen for specific\n>>>>>>> queries in a development environment.\n>>>>>>\n>>>>>> In your case, probably you had to ensure that the last multiple (or every)\n>>>>>> executions chose generic or custom plan? If yes, I'm afraid that displaying\n>>>>>> only the last plan mode is not enough for your case. No?\n>>>>>> So it seems better to check generic_plans or custom_plans columns in the\n>>>>>> view rather than last_plan_type even in your case. Thought?\n>>>>>\n>>>>> Yeah, I now feel last_plan is not so necessary and only the numbers of\n>>>>> generic/custom plan is enough.\n>>>>>\n>>>>> If there are no objections, I'm going to remove this column and related codes.\n>>>>\n>>>> As mentioned, I removed last_plan column.\n>>>\n>>> Thanks for updating the patch! 
It basically looks good to me.\n>>>\n>>> I have one comment; you added the regression tests for generic and\n>>> custom plans into prepare.sql. But the similar tests already exist in\n>>> plancache.sql. So isn't it better to add the tests for generic_plans and\n>>> custom_plans columns, into plancache.sql?\n>>\n>>\n>> Thanks for your comments!\n>>\n>> Agreed.\n>> I removed tests on prepare.sql and added them to plancache.sql.\n>> Thoughts?\n> \n> Thanks for updating the patch!\n> I also applied the following minor changes to the patch.\n> \n> -        Number of times generic plan was chosen\n> +       Number of times generic plan was chosen\n> -        Number of times custom plan was chosen\n> +       Number of times custom plan was chosen\n> \n> I got rid of one space character before those descriptions because\n> they should start from the position of 7th character.\n> \n>  -- but we can force a custom plan\n>  set plan_cache_mode to force_custom_plan;\n>  explain (costs off) execute test_mode_pp(2);\n> +select name, generic_plans, custom_plans from pg_prepared_statements\n> +  where  name = 'test_mode_pp';\n> \n> In the regression test, I added the execution of pg_prepared_statements\n> after the last execution of test query, to confirm that custom plan is used\n> when force_custom_plan is set, by checking from pg_prepared_statements.\n> \n> I changed the status of this patch to \"Ready for Committer\" in CF.\n> \n> Barring any objection, I will commit this patch.\n\nCommitted. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Jul 2020 11:57:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On 2020-07-20 11:57, Fujii Masao wrote:\n> On 2020/07/17 16:25, Fujii Masao wrote:\n>> \n>> \n>> On 2020/07/16 11:50, torikoshia wrote:\n>>> On 2020-07-15 11:44, Fujii Masao wrote:\n>>>> On 2020/07/14 21:24, torikoshia wrote:\n>>>>> On 2020-07-10 10:49, torikoshia wrote:\n>>>>>> On 2020-07-08 16:41, Fujii Masao wrote:\n>>>>>>> On 2020/07/08 10:14, torikoshia wrote:\n>>>>>>>> On 2020-07-06 22:16, Fujii Masao wrote:\n>>>>>>>>> On 2020/06/11 14:59, torikoshia wrote:\n>>>>>>>>>> On 2020-06-10 18:00, Kyotaro Horiguchi wrote:\n>>>>>>>>>> \n>>>>>>>>>>> \n>>>>>>>>>>> +    TupleDescInitEntry(tupdesc, (AttrNumber) 8, \"last_plan\",\n>>>>>>>>>>> \n>>>>>>>>>>> This could be a problem if we showed the last plan in this \n>>>>>>>>>>> view. I\n>>>>>>>>>>> think \"last_plan_type\" would be better.\n>>>>>>>>>>> \n>>>>>>>>>>> +            if (prep_stmt->plansource->last_plan_type == \n>>>>>>>>>>> PLAN_CACHE_TYPE_CUSTOM)\n>>>>>>>>>>> +                values[7] = CStringGetTextDatum(\"custom\");\n>>>>>>>>>>> +            else if (prep_stmt->plansource->last_plan_type \n>>>>>>>>>>> == PLAN_CACHE_TYPE_GENERIC)\n>>>>>>>>>>> +                values[7] = CStringGetTextDatum(\"generic\");\n>>>>>>>>>>> +            else\n>>>>>>>>>>> +                nulls[7] = true;\n>>>>>>>>>>> \n>>>>>>>>>>> Using swith-case prevents future additional type (if any) \n>>>>>>>>>>> from being\n>>>>>>>>>>> unhandled.  I think we are recommending that as a convension.\n>>>>>>>>>> \n>>>>>>>>>> Thanks for your reviewing!\n>>>>>>>>>> \n>>>>>>>>>> I've attached a patch that reflects your comments.\n>>>>>>>>> \n>>>>>>>>> Thanks for the patch! 
Here are the comments.\n>>>>>>>> \n>>>>>>>> Thanks for your review!\n>>>>>>>> \n>>>>>>>>> +        Number of times generic plan was choosen\n>>>>>>>>> +        Number of times custom plan was choosen\n>>>>>>>>> \n>>>>>>>>> Typo: \"choosen\" should be \"chosen\"?\n>>>>>>>> \n>>>>>>>> Thanks, fixed them.\n>>>>>>>> \n>>>>>>>>> +      <entry role=\"catalog_table_entry\"><para \n>>>>>>>>> role=\"column_definition\">\n>>>>>>>>> +       <structfield>last_plan_type</structfield> \n>>>>>>>>> <type>text</type>\n>>>>>>>>> +      </para>\n>>>>>>>>> +      <para>\n>>>>>>>>> +        Tells the last plan type was generic or custom. If the \n>>>>>>>>> prepared\n>>>>>>>>> +        statement has not executed yet, this field is null\n>>>>>>>>> +      </para></entry>\n>>>>>>>>> \n>>>>>>>>> Could you tell me how this information is expected to be used?\n>>>>>>>>> I think that generic_plans and custom_plans are useful when \n>>>>>>>>> investigating\n>>>>>>>>> the cause of performance drop by cached plan mode. But I failed \n>>>>>>>>> to get\n>>>>>>>>> how much useful last_plan_type is.\n>>>>>>>> \n>>>>>>>> This may be an exceptional case, but I once had a case needed\n>>>>>>>> to ensure whether generic or custom plan was chosen for specific\n>>>>>>>> queries in a development environment.\n>>>>>>> \n>>>>>>> In your case, probably you had to ensure that the last multiple \n>>>>>>> (or every)\n>>>>>>> executions chose generic or custom plan? If yes, I'm afraid that \n>>>>>>> displaying\n>>>>>>> only the last plan mode is not enough for your case. No?\n>>>>>>> So it seems better to check generic_plans or custom_plans columns \n>>>>>>> in the\n>>>>>>> view rather than last_plan_type even in your case. 
Thought?\n>>>>>> \n>>>>>> Yeah, I now feel last_plan is not so necessary and only the \n>>>>>> numbers of\n>>>>>> generic/custom plan is enough.\n>>>>>> \n>>>>>> If there are no objections, I'm going to remove this column and \n>>>>>> related codes.\n>>>>> \n>>>>> As mentioned, I removed last_plan column.\n>>>> \n>>>> Thanks for updating the patch! It basically looks good to me.\n>>>> \n>>>> I have one comment; you added the regression tests for generic and\n>>>> custom plans into prepare.sql. But the similar tests already exist \n>>>> in\n>>>> plancache.sql. So isn't it better to add the tests for generic_plans \n>>>> and\n>>>> custom_plans columns, into plancache.sql?\n>>> \n>>> \n>>> Thanks for your comments!\n>>> \n>>> Agreed.\n>>> I removed tests on prepare.sql and added them to plancache.sql.\n>>> Thoughts?\n>> \n>> Thanks for updating the patch!\n>> I also applied the following minor changes to the patch.\n>> \n>> -        Number of times generic plan was chosen\n>> +       Number of times generic plan was chosen\n>> -        Number of times custom plan was chosen\n>> +       Number of times custom plan was chosen\n>> \n>> I got rid of one space character before those descriptions because\n>> they should start from the position of 7th character.\n>> \n>>  -- but we can force a custom plan\n>>  set plan_cache_mode to force_custom_plan;\n>>  explain (costs off) execute test_mode_pp(2);\n>> +select name, generic_plans, custom_plans from pg_prepared_statements\n>> +  where  name = 'test_mode_pp';\n>> \n>> In the regression test, I added the execution of \n>> pg_prepared_statements\n>> after the last execution of test query, to confirm that custom plan is \n>> used\n>> when force_custom_plan is set, by checking from \n>> pg_prepared_statements.\n>> \n>> I changed the status of this patch to \"Ready for Committer\" in CF.\n>> \n>> Barring any objection, I will commit this patch.\n> \n> Committed. 
Thanks!\n\nThanks!\n\nAs I proposed earlier in this thread, I'm now trying to add information\nabout generic/cudstom plan to pg_stat_statements.\nI'll share the idea and the poc patch soon.\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Jul 2020 13:57:42 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-07-20 13:57, torikoshia wrote:\n\n> As I proposed earlier in this thread, I'm now trying to add information\n> about generic/cudstom plan to pg_stat_statements.\n> I'll share the idea and the poc patch soon.\n\nAttached a poc patch.\n\nMain purpose is to decide (1) the user interface and (2) the\nway to get the plan type from pg_stat_statements.\n\n(1) the user interface\nI added a new boolean column 'generic_plan' to both\npg_stat_statements view and the member of the hash key of\npg_stat_statements.\n\nThis is because as Legrand pointed out the feature seems\nuseful under the condition of differentiating all the\ncounters for a queryid using a generic plan and the one\nusing a custom one.\n\nI thought it might be preferable to make a GUC to enable\nor disable this feature, but changing the hash key makes\nit harder.\n\n(2) way to get the plan type from pg_stat_statements\nTo know whether the plan is generic or not, I added a\nmember to CachedPlan and get it in the ExecutorStart_hook\nfrom ActivePortal.\nI wished to do it in the ExecutorEnd_hook, but the\nActivePortal is not available on executorEnd, so I keep\nit on a global variable newly defined in pg_stat_statements.\n\n\nAny thoughts?\n\nThis is a poc patch and I'm going to do below things later:\n\n- update pg_stat_statements version\n- change default value for the newly added parameter in\n pg_stat_statements_reset() from -1 to 0(since default for\n other parameters are all 0)\n- add regression tests and update 
docs\n\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Wed, 22 Jul 2020 16:49:53 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/07/22 16:49, torikoshia wrote:\n> On 2020-07-20 13:57, torikoshia wrote:\n> \n>> As I proposed earlier in this thread, I'm now trying to add information\n>> about generic/cudstom plan to pg_stat_statements.\n>> I'll share the idea and the poc patch soon.\n> \n> Attached a poc patch.\n\nThanks for the POC patch!\n\nWith the patch, when I ran \"CREATE EXTENSION pg_stat_statements\",\nI got the following error.\n\nERROR: function pg_stat_statements_reset(oid, oid, bigint) does not exist\n\n\n> \n> Main purpose is to decide (1) the user interface and (2) the\n> way to get the plan type from pg_stat_statements.\n> \n> (1) the user interface\n> I added a new boolean column 'generic_plan' to both\n> pg_stat_statements view and the member of the hash key of\n> pg_stat_statements.\n> \n> This is because as Legrand pointed out the feature seems\n> useful under the condition of differentiating all the\n> counters for a queryid using a generic plan and the one\n> using a custom one.\n\nI don't like this because this may double the number of entries in pgss.\nWhich means that the number of entries can more easily reach\npg_stat_statements.max and some entries will be discarded.\n\n \n> I thought it might be preferable to make a GUC to enable\n> or disable this feature, but changing the hash key makes\n> it harder.\n\nWhat happens if the server was running with this option enabled and then\nrestarted with the option disabled? Firstly two entries for the same query\nwere stored in pgss because the option was enabled. But when it's disabled\nand the server is restarted, those two entries should be merged into one\nat the startup of server? 
If so, that's problematic because it may take\na long time.\n\nTherefore I think that it's better and simple to just expose the number of\ntimes generic/custom plan was chosen for each query.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 30 Jul 2020 14:31:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": ">> Main purpose is to decide (1) the user interface and (2) the\n>> way to get the plan type from pg_stat_statements.\n>>\n>> (1) the user interface\n>> I added a new boolean column 'generic_plan' to both\n>> pg_stat_statements view and the member of the hash key of\n>> pg_stat_statements.\n>>\n>> This is because as Legrand pointed out the feature seems\n>> useful under the condition of differentiating all the\n>> counters for a queryid using a generic plan and the one\n>> using a custom one.\n\n> I don't like this because this may double the number of entries in pgss.\n> Which means that the number of entries can more easily reach\n> pg_stat_statements.max and some entries will be discarded.\n\nNot all the entries will be doubled, only the ones that are prepared.\nAnd even if auto prepare was implemented, having 5000, 10000 or 20000\nmax entries seems not a problem to me.\n\n>> I thought it might be preferable to make a GUC to enable\n>> or disable this feature, but changing the hash key makes\n>> it harder.\n\n> What happens if the server was running with this option enabled and then\n> restarted with the option disabled? Firstly two entries for the same query\n> were stored in pgss because the option was enabled. But when it's disabled\n> and the server is restarted, those two entries should be merged into one\n> at the startup of server? 
If so, that's problematic because it may take\n> a long time.\n\nWhat would you think about a third value for this flag to handle the disabled\ncase (no need to merge entries in this situation)?\n\n> Therefore I think that it's better and simple to just expose the number of\n> times generic/custom plan was chosen for each query.\n\nI thought this feature was mainly needed to identify \"under optimal generic\nplans\". Without differentiating execution counters, this will be simpler but\nuseless in this case ... isn't it?\n\n> Regards,\n\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\nRegards\nPAscal", "msg_date": "Thu, 30 Jul 2020 07:34:35 +0000", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "RE: Is it useful to record whether plans are generic or custom?" 
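legrand's point above, that "not all the entries will be doubled, only the ones that are prepared", can be sketched with a toy model. This is illustrative Python, not the pg_stat_statements C code: the `FakePgss` class and its `record()` helper are invented for the example, and the real hash key carries additional fields.

```python
from collections import defaultdict

GENERIC, CUSTOM, UNKNOWN = "generic", "custom", "unknown"

class FakePgss:
    """Toy model of a stats hashtable keyed by an extra plan_type field."""

    def __init__(self):
        # key: (userid, dbid, queryid, plan_type) -> [calls, total_time]
        self.entries = defaultdict(lambda: [0, 0.0])

    def record(self, userid, dbid, queryid, exec_time, plan_type=UNKNOWN):
        entry = self.entries[(userid, dbid, queryid, plan_type)]
        entry[0] += 1
        entry[1] += exec_time

pgss = FakePgss()
# A plain (never prepared) statement always stays in one entry.
for t in (1.0, 1.25):
    pgss.record(10, 1, 111, t)
# Only a prepared statement gets two entries: custom runs, then a generic run.
for t in (0.5, 0.25):
    pgss.record(10, 1, 222, t, CUSTOM)
pgss.record(10, 1, 222, 2.0, GENERIC)
```

Under this model the entry count grows only for queries that actually execute under both plan types, which is the basis of the "doubling" concern discussed in the thread.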
}, { "msg_contents": "On 2020-07-30 14:31, Fujii Masao wrote:\n> On 2020/07/22 16:49, torikoshia wrote:\n>> On 2020-07-20 13:57, torikoshia wrote:\n>> \n>>> As I proposed earlier in this thread, I'm now trying to add \n>>> information\n>>> about generic/cudstom plan to pg_stat_statements.\n>>> I'll share the idea and the poc patch soon.\n>> \n>> Attached a poc patch.\n> \n> Thanks for the POC patch!\n> \n> With the patch, when I ran \"CREATE EXTENSION pg_stat_statements\",\n> I got the following error.\n> \n> ERROR: function pg_stat_statements_reset(oid, oid, bigint) does not \n> exist\n\nOops, sorry about that.\nI just fixed it there for now.\n\n>> \n>> Main purpose is to decide (1) the user interface and (2) the\n>> way to get the plan type from pg_stat_statements.\n>> \n>> (1) the user interface\n>> I added a new boolean column 'generic_plan' to both\n>> pg_stat_statements view and the member of the hash key of\n>> pg_stat_statements.\n>> \n>> This is because as Legrand pointed out the feature seems\n>> useful under the condition of differentiating all the\n>> counters for a queryid using a generic plan and the one\n>> using a custom one.\n> \n> I don't like this because this may double the number of entries in \n> pgss.\n> Which means that the number of entries can more easily reach\n> pg_stat_statements.max and some entries will be discarded.\n> \n> \n>> I thought it might be preferable to make a GUC to enable\n>> or disable this feature, but changing the hash key makes\n>> it harder.\n> \n> What happens if the server was running with this option enabled and \n> then\n> restarted with the option disabled? Firstly two entries for the same \n> query\n> were stored in pgss because the option was enabled. But when it's \n> disabled\n> and the server is restarted, those two entries should be merged into \n> one\n> at the startup of server? 
If so, that's problematic because it may take\n> a long time.\n> \n> Therefore I think that it's better and simple to just expose the number \n> of\n> times generic/custom plan was chosen for each query.\n> \n> Regards,\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Fri, 31 Jul 2020 18:47:48 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/07/30 16:34, legrand legrand wrote:\n> >> Main purpose is to decide (1) the user interface and (2) the\n>>> way to get the plan type from pg_stat_statements.\n>>> \n>>> (1) the user interface\n>>> I added a new boolean column 'generic_plan' to both\n>>> pg_stat_statements view and the member of the hash key of\n>>> pg_stat_statements.\n>>> \n>>> This is because as Legrand pointed out the feature seems\n>>> useful under the condition of differentiating all the\n>>> counters for a queryid using a generic plan and the one\n>>> using a custom one.\n> \n>> I don't like this because this may double the number of entries in pgss.\n>> Which means that the number of entries can more easily reach\n>> pg_stat_statements.max and some entries will be discarded.\n> \n> Not all the entries will be doubled, only the ones that are prepared.\n> And even if auto prepare was implemented, having 5000, 10000 or 20000\n> max entries seems not a problem to me.\n> \n>>> I thought it might be preferable to make a GUC to enable\n>>> or disable this feature, but changing the hash key makes\n>>> it harder.\n> \n>> What happens if the server was running with this option enabled and then\n>> restarted with the option disabled? Firstly two entries for the same query\n>> were stored in pgss because the option was enabled. But when it's disabled\n>> and the server is restarted, those two entries should be merged into one\n>> at the startup of server? 
If so, that's problematic because it may take\n>> a long time.\n> \n> What would you think about a third value for this flag to handle the disabled\n> case (no need to merge entries in this situation)?\n\nSorry I failed to understand your point. You mean that we can have another flag\nto specify whether to merge the entries for the same query in that case or not?\n\nIf those entries are not merged, what does pg_stat_statements return?\nIt returns the sum of those entries? Or either the generic or the custom entry is\nreturned?\n\n\n> \n>> Therefore I think that it's better and simple to just expose the number of\n>> times generic/custom plan was chosen for each query.\n> \n> I thought this feature was mainly needed to identify \"under optimal generic\n> plans\". Without differentiating execution counters, this will be simpler but\n> useless in this case ... isn't it ?\n\nCould you elaborate on \"under optimal generic plans\"? Sorry, I failed to\nunderstand your point. But maybe you're thinking of using this feature to\ncheck which of the generic or custom plan is better for each query?\n\nI was just thinking that this feature was useful to detect the case where\nthe query was executed with an unexpected plan mode. That is, we can detect\nthe unexpected case where the query that should be executed with generic\nplan is actually executed with custom plan, and vice versa.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 17 Aug 2020 23:00:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": ">>>> I thought it might be preferable to make a GUC to enable\n>>>> or disable this feature, but changing the hash key makes\n>>>> it harder.\n>>\n>>> What happens if the server was running with this option enabled and then\n>>> restarted with the option disabled? 
Firstly two entries for the same query\n>>> were stored in pgss because the option was enabled. But when it's disabled\n>>> and the server is restarted, those two entries should be merged into one\n>>> at the startup of server? If so, that's problematic because it may take\n>>> a long time.\n>>\n>> What would you think about a third value for this flag to handle disabled\n>> case (no need to merge entries in this situation) ?\n>\n> Sorry I failed to understand your point. You mean that we can have another flag\n> to specify whether to merge the entries for the same query at that case or not?\n>\n> If those entries are not merged, what does pg_stat_statements return?\n> It returns the sum of those entries? Or either generic or custom entry is\n> returned?\n\nThe idea is to use a plan_type enum with 'generic','custom','unknown' or 'unset'.\nif tracking plan_type is disabled, then plan_type='unknown',\nelse plan_type can take 'generic' or 'custom' value.\n\nI'm not proposing to merge results for plan_type when disabling or enabling its tracking.\n\n\n>>> Therefore I think that it's better and simple to just expose the number of\n>>> times generic/custom plan was chosen for each query.\n>>\n>> I thought this feature was mainly needed to identifiy \"under optimal generic\n>> plans\". Without differentiating execution counters, this will be simpler but\n>> useless in this case ... isn't it ?\n\n> Could you elaborate \"under optimal generic plans\"? Sorry, I failed to\n> understand your point.. But maybe you're thinking to use this feature to\n> check which generic or custom plan is better for each query?\n\n> I was just thinking that this feature was useful to detect the case where\n> the query was executed with unpected plan mode. 
That is, we can detect\n> the unexpected case where the query that should be executed with generic\n> plan is actually executed with custom plan, and vice versa.\n\nThere are many examples in the pg lists where users come saying that sometimes\ntheir query takes a (much) longer time than before, without understanding why.\nIn some of these examples, the cause is that there is a switch from custom to generic after\nn executions, and it takes a longer time because the generic plan is not as good as\nthe custom one (I call them under optimal generic plans). If pgss keeps counters\naggregated for both plan_types, I don't see how this (under optimal) can be shown.\nIf there is a line in pgss for custom and another for generic, then it would be easier\nto compare.\n\nDoes this make sense?\n\nRegards\nPAscal\n\n> Regards,\n\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION", "msg_date": "Mon, 17 Aug 2020 16:35:39 +0000", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "RE: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On Fri, Jul 31, 2020 at 06:47:48PM +0900, torikoshia wrote:\n> Oops, sorry about that.\n> I just fixed it there for now.\n\nThe regression tests of the patch look unstable, and the CF bot is\nreporting a failure here:\nhttps://travis-ci.org/github/postgresql-cfbot/postgresql/builds/727833416\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 13:46:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On 2020-09-17 13:46, Michael Paquier wrote:\n> On Fri, Jul 31, 2020 at 06:47:48PM +0900, torikoshia wrote:\n>> Oops, sorry about that.\n>> I just fixed it there for now.\n> \n> The regression tests of the patch look unstable, and the CF bot is\n> reporting a failure here:\n> https://travis-ci.org/github/postgresql-cfbot/postgresql/builds/727833416\n> --\n> Michael\n\n\nThank you for letting me know!\n\n\nI'd like to reach a basic agreement on how we expose the\ngeneric/custom plan information in pgss first.\n\nGiven the discussion so far, adding a new attribute to the pgss key\nis not appropriate since it can easily increase the number of\nentries in pgss.\n\nOTOH, just exposing the number of times generic/custom plan was\nchosen seems not enough to know whether performance is degraded.\n\nI'm now thinking about exposing not only the number of times\ngeneric/custom plan was chosen but also some performance\nmetrics like 'total_time' for both generic and custom plans.\n\nAttached a poc patch which exposes total, min, max, mean and\nstddev time for both generic and custom plans.\n\n\n =# SELECT * FROM pg_stat_statements;\n -[ RECORD 1 \n]-------+---------------------------------------------------------\n userid | 10\n dbid | 12878\n queryid | 4617094108938234366\n query | PREPARE pr1 AS SELECT * FROM pg_class WHERE \nrelname = $1\n plans | 0\n total_plan_time | 0\n min_plan_time | 0\n max_plan_time | 0\n mean_plan_time | 0\n stddev_plan_time | 0\n calls | 6\n total_exec_time | 0.46600699999999995\n min_exec_time | 0.029376000000000003\n max_exec_time | 0.237413\n mean_exec_time | 0.07766783333333334\n stddev_exec_time | 0.07254973134206326\n generic_calls | 1\n total_generic_time | 0.045334000000000006\n min_generic_time | 0.045334000000000006\n max_generic_time | 0.045334000000000006\n mean_generic_time | 0.045334000000000006\n stddev_generic_time | 0\n custom_calls | 5\n total_custom_time | 0.42067299999999996\n min_custom_time | 
0.029376000000000003\n max_custom_time | 0.237413\n mean_custom_time | 0.0841346\n stddev_custom_time | 0.07787966226583164\n ...\n\nIn this patch, exposing new columns is mandatory, but I think\nit's better to make it optional by adding a GUC something\nlike 'pgss.track_general_custom_plans'.\n\nI also feel it makes the number of columns too many.\nJust adding the total time may be sufficient.\n\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Mon, 28 Sep 2020 22:14:01 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Hi Atsushi,\n\n+1: Your proposal is a good answer for time-based performance analysis \n(even if parsing duration or blks are not differentiated).\n\nAs it makes the pgss number of columns wider, maybe another solution \nwould be to create a pg_stat_statements_xxx view with the same key \nas pgss (dbid,userid,queryid) and all those new counters.\n\nAnd a last solution would be to display only generic counters, \nbecause in most cases (and by default) executions are custom ones \n(and generic counters = 0).\nIf not (when generic counters != 0), custom ones could be deduced from \ntotal_exec_time - total_generic_time and calls - generic_calls.\n\nGood luck with this feature development\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 28 Sep 2020 10:39:39 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
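The total/min/max/mean/stddev columns shown in the PoC output above can be maintained with O(1) state per (query, plan type) entry. Below is a hedged sketch in Python of the standard online (Welford-style) update; pg_stat_statements derives its stddev from a running sum of squared deviations in the same spirit, but this `TimeStats` class is an illustration, not the patch's C code.

```python
import math

class TimeStats:
    """Running total/min/max/mean/stddev, updated once per execution."""

    def __init__(self):
        self.calls = 0
        self.total = 0.0
        self.min = math.inf
        self.max = -math.inf
        self.mean = 0.0
        self.sum_var = 0.0  # running sum of squared distances from the mean

    def add(self, t):
        self.calls += 1
        self.total += t
        self.min = min(self.min, t)
        self.max = max(self.max, t)
        old_mean = self.mean
        self.mean += (t - old_mean) / self.calls   # Welford mean update
        self.sum_var += (t - old_mean) * (t - self.mean)

    @property
    def stddev(self):
        # population stddev; defined as 0 for fewer than two samples
        return math.sqrt(self.sum_var / self.calls) if self.calls > 1 else 0.0

# One accumulator per plan type, as in the proposed generic_/custom_ columns.
generic, custom = TimeStats(), TimeStats()
for t in (2.0, 4.0):
    custom.add(t)
generic.add(3.0)
```

Keeping two such accumulators per entry is what makes the column count grow, which is the memory trade-off discussed later in the thread.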
}, { "msg_contents": "On 2020-09-29 02:39, legrand legrand wrote:\n> Hi Atsushi,\n> \n> +1: Your proposal is a good answer for time-based performance analysis\n> (even if parsing duration or blks are not differentiated).\n> \n> As it makes the pgss number of columns wider, maybe another solution\n> would be to create a pg_stat_statements_xxx view with the same key\n> as pgss (dbid,userid,queryid) and all those new counters.\n\nThanks for your ideas and sorry for my late reply.\n\nIt seems creating pg_stat_statements_xxx views both for generic and\ncustom plans is better than my PoC patch.\n\nHowever, I also began to wonder how effective it would be to just\ndistinguish between generic and custom plans. Custom plans can\ninclude all sorts of plans, and, thinking about plan cache invalidation, generic\nplans can also include various plans.\n\nConsidering this, I'm starting to feel that it would be better to\nnot just keep whether generic or custom but the plan itself as\ndiscussed in the below thread.\n\nhttps://www.postgresql.org/message-id/flat/CAKU4AWq5_jx1Vyai0_Sumgn-Ks0R%2BN80cf%2Bt170%2BzQs8x6%3DHew%40mail.gmail.com#f57e64b8d37697c808e4385009340871\n\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n", "msg_date": "Thu, 12 Nov 2020 10:49:53 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "čt 12. 11. 
2020 v 2:50 odesílatel torikoshia <torikoshia@oss.nttdata.com>\nnapsal:\n\n> On 2020-09-29 02:39, legrand legrand wrote:\n> > Hi Atsushi,\n> >\n> > +1: Your proposal is a good answer for time based performance analysis\n> > (even if parsing durationor blks are not differentiated) .\n> >\n> > As it makes pgss number of columns wilder, may be an other solution\n> > would be to create a pg_stat_statements_xxx view with the same key\n> > as pgss (dbid,userid,queryid) and all thoses new counters.\n>\n> Thanks for your ideas and sorry for my late reply.\n>\n> It seems creating pg_stat_statements_xxx views both for generic and\n> custom plans is better than my PoC patch.\n>\n> However, I also began to wonder how effective it would be to just\n> distinguish between generic and custom plans. Custom plans can\n> include all sorts of plans. and thinking cache validation, generic\n> plans can also include various plans.\n>\n> Considering this, I'm starting to feel that it would be better to\n> not just keeping whether generic or cutom but the plan itself as\n> discussed in the below thread.\n>\n>\n> https://www.postgresql.org/message-id/flat/CAKU4AWq5_jx1Vyai0_Sumgn-Ks0R%2BN80cf%2Bt170%2BzQs8x6%3DHew%40mail.gmail.com#f57e64b8d37697c808e4385009340871\n>\n>\n> Any thoughts?\n>\n\nyes, the plan self is very interesting information - and information if\nplan was generic or not is interesting too. It is other dimension of query\n- maybe there can be rule - for any query store max 100 most slows plans\nwith all attributes. The next issue is fact so first first 5 execution of\ngeneric plans are not generic in real. This fact should be visible too.\n\nRegards\n\nPavel\n\n\n\n>\n> Regards,\n>\n> --\n> Atsushi Torikoshi\n>\n>\n>", "msg_date": "Thu, 12 Nov 2020 06:23:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020-11-12 14:23, Pavel Stehule wrote:\n\n> yes, the plan self is very interesting information - and information\n> if plan was generic or not is interesting too. It is other dimension\n> of query - maybe there can be rule - for any query store max 100 most\n> slows plans with all attributes. The next issue is fact so first first\n> 5 execution of generic plans are not generic in real. This fact should\n> be visible too.\n\nThanks!\nHowever, AFAIU, we can know whether the plan type is generic or custom\nfrom the plan information as described in the manual.\n\n-- https://www.postgresql.org/docs/devel/sql-prepare.html\n> If a generic plan is in use, it will contain parameter symbols $n, \n> while a custom plan will have the supplied parameter values substituted \n> into it.\n\nIf we can get the plan information, the case like 'first 5 execution\nof generic plans are not generic in real' does not happen, doesn't it?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n", "msg_date": "Tue, 17 Nov 2020 23:21:53 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
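The "first 5 executions" behavior referenced above comes from the plan-choice heuristic PostgreSQL applies to prepared statements. The following is a simplified, assumption-laden Python model of that decision (a sketch in the spirit of plancache.c's choose_custom_plan(), not the actual source; the cost figures are made-up inputs for illustration).

```python
def choose_custom_plan(num_custom_plans, generic_cost, avg_custom_cost,
                       plan_cache_mode="auto"):
    """Return True to build a fresh custom plan, False to reuse the generic one."""
    if plan_cache_mode == "force_generic_plan":
        return False
    if plan_cache_mode == "force_custom_plan":
        return True
    if num_custom_plans < 5:
        # Always replan for the first five executions to sample custom costs.
        return True
    # Afterwards, keep replanning only while the generic plan looks worse than
    # the average custom plan (which also carries replanning overhead).
    return generic_cost >= avg_custom_cost

# A session like the EXPLAIN EXECUTE example in this thread:
# five custom plans, then the generic plan from the sixth execution on.
history = [choose_custom_plan(n, generic_cost=41.88, avg_custom_cost=50.0)
           for n in range(7)]
```

Under this model, whether the sixth execution switches to the generic plan depends entirely on the cost comparison, which is why a query can suddenly get slower when an "under optimal" generic plan wins the comparison.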
}, { "msg_contents": "On 2020-11-12 14:23, Pavel Stehule wrote:\n\n> yes, the plan self is very interesting information - and information\n> if plan was generic or not is interesting too. It is other dimension\n> of query - maybe there can be rule - for any query store max 100 most\n> slows plans with all attributes. The next issue is fact so first first\n> 5 execution of generic plans are not generic in real. This fact should\n> be visible too.\n\nThanks!\nHowever, AFAIU, we can know whether the plan type is generic or custom\nfrom the plan information as described in the manual.\n\n-- https://www.postgresql.org/docs/devel/sql-prepare.html\n> If a generic plan is in use, it will contain parameter symbols $n, \n> while a custom plan will have the supplied parameter values substituted \n> into it.\n\nIf we can get the plan information, the case like 'first 5 execution\nof generic plans are not generic in real' does not happen, doesn't it?\n\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n", "msg_date": "Tue, 17 Nov 2020 23:21:53 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "út 17. 11. 2020 v 15:21 odesílatel torikoshia <torikoshia@oss.nttdata.com>\r\nnapsal:\r\n\r\n> On 2020-11-12 14:23, Pavel Stehule wrote:\r\n>\r\n> > yes, the plan self is very interesting information - and information\r\n> > if plan was generic or not is interesting too. It is other dimension\r\n> > of query - maybe there can be rule - for any query store max 100 most\r\n> > slows plans with all attributes. The next issue is fact so first first\r\n> > 5 execution of generic plans are not generic in real. 
This fact should\r\n> > be visible too.\r\n>\r\n> Thanks!\r\n> However, AFAIU, we can know whether the plan type is generic or custom\r\n> from the plan information as described in the manual.\r\n>\r\n> -- https://www.postgresql.org/docs/devel/sql-prepare.html\r\n> > If a generic plan is in use, it will contain parameter symbols $n,\r\n> > while a custom plan will have the supplied parameter values substituted\r\n> > into it.\r\n>\r\n> If we can get the plan information, the case like 'first 5 execution\r\n> of generic plans are not generic in real' does not happen, doesn't it?\r\n>\r\n\r\nyes\r\n\r\npostgres=# create table foo(a int);\r\nCREATE TABLE\r\npostgres=# prepare x as select * from foo where a = $1;\r\nPREPARE\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = 10) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = 10) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = 10) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = 10) 
│\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = 10) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = $1) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\npostgres=# explain execute x(10);\r\n┌─────────────────────────────────────────────────────┐\r\n│ QUERY PLAN │\r\n╞═════════════════════════════════════════════════════╡\r\n│ Seq Scan on foo (cost=0.00..41.88 rows=13 width=4) │\r\n│ Filter: (a = $1) │\r\n└─────────────────────────────────────────────────────┘\r\n(2 rows)\r\n\r\n\r\n>\r\n> Regards,\r\n>\r\n> --\r\n> Atsushi Torikoshi\r\n>", "msg_date": "Tue, 17 Nov 2020 15:31:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Hi Torikoshi-san,\n\n\n> In this patch, exposing new columns is mandatory, but I think\n> it's better to make it optional by adding a GUC something\n> like 'pgss.track_general_custom_plans.\n> \n> I also feel it makes the number of columns too many.\n> Just adding the total time may be sufficient.\n\n\nI think this feature is useful for DBA. So I hope that it gets\ncommitted to PG14. 
IMHO, many columns are Okay because DBA can\nselect specific columns by their query.\nTherefore, it would be better to go with the current design.\n\nI did the regression test using your patch on 7e5e1bba03, and\nit failed unfortunately. See below:\n\n=======================================================\n 122 of 201 tests failed, 1 of these failures ignored.\n=======================================================\n...\n2020-11-30 09:45:18.160 JST [12977] LOG: database system was not\nproperly shut down; automatic recovery in progress\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 15:24:23 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2020/11/30 15:24, Tatsuro Yamada wrote:\n> Hi Torikoshi-san,\n> \n> \n>> In this patch, exposing new columns is mandatory, but I think\n>> it's better to make it optional by adding a GUC something\n>> like 'pgss.track_general_custom_plans.\n>>\n>> I also feel it makes the number of columns too many.\n>> Just adding the total time may be sufficient.\n> \n> \n> I think this feature is useful for DBA. So I hope that it gets\n> committed to PG14. IMHO, many columns are Okay because DBA can\n> select specific columns by their query.\n> Therefore, it would be better to go with the current design.\n\nBut that design may waste lots of memory. No? For example, when\nplan_cache_mode=force_custom_plan, the memory used for the columns\nfor generic plans is not used.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 4 Dec 2020 14:29:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "On 2020-12-04 14:29, Fujii Masao wrote:\n> On 2020/11/30 15:24, Tatsuro Yamada wrote:\n>> Hi Torikoshi-san,\n>> \n>> \n>>> In this patch, exposing new columns is mandatory, but I think\n>>> it's better to make it optional by adding a GUC something\n>>> like 'pgss.track_general_custom_plans.\n>>> \n>>> I also feel it makes the number of columns too many.\n>>> Just adding the total time may be sufficient.\n>> \n>> \n>> I think this feature is useful for DBA. So I hope that it gets\n>> committed to PG14. IMHO, many columns are Okay because DBA can\n>> select specific columns by their query.\n>> Therefore, it would be better to go with the current design.\n> \n> But that design may waste lots of memory. No? For example, when\n> plan_cache_mode=force_custom_plan, the memory used for the columns\n> for generic plans is not used.\n> \n\nYeah.\n\nISTM now that creating pg_stat_statements_xxx views\nboth for generic and custom plans is better than my PoC patch.\n\nAnd I'm also struggling with the following.\n\n| However, I also began to wonder how effective it would be to just\n| distinguish between generic and custom plans. Custom plans can\n| include all sorts of plans. and thinking cache validation, generic\n| plans can also include various plans.\n\n| Considering this, I'm starting to feel that it would be better to\n| not just keeping whether generic or custom but the plan itself as\n| discussed in the below thread.\n\n\nYamada-san,\n\nDo you think it's effective to just distinguish between generic\nand custom plans?\n\nRegards,\n\n\n", "msg_date": "Fri, 04 Dec 2020 15:03:25 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "At Fri, 04 Dec 2020 15:03:25 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> On 2020-12-04 14:29, Fujii Masao wrote:\n> > On 2020/11/30 15:24, Tatsuro Yamada wrote:\n> >> Hi Torikoshi-san,\n> >> \n> >>> In this patch, exposing new columns is mandatory, but I think\n> >>> it's better to make it optional by adding a GUC something\n> >>> like 'pgss.track_general_custom_plans.\n> >>> I also feel it makes the number of columns too many.\n> >>> Just adding the total time may be sufficient.\n> >> I think this feature is useful for DBA. So I hope that it gets\n> >> committed to PG14. IMHO, many columns are Okay because DBA can\n> >> select specific columns by their query.\n> >> Therefore, it would be better to go with the current design.\n> > But that design may waste lots of memory. No? For example, when\n> > plan_cache_mode=force_custom_plan, the memory used for the columns\n> > for generic plans is not used.\n> > \n> \n> Yeah.\n> \n> ISTM now that creating pg_stat_statements_xxx views\n> both for generic andcustom plans is better than my PoC patch.\n> \n> And I'm also struggling with the following.\n> \n> | However, I also began to wonder how effective it would be to just\n> | distinguish between generic and custom plans. Custom plans can\n> | include all sorts of plans. and thinking cache validation, generic\n> | plans can also include various plans.\n> \n> | Considering this, I'm starting to feel that it would be better to\n> | not just keeping whether generic or cutom but the plan itself as\n> | discussed in the below thread.\n\nFWIW, that seems to me to be like some existing extension modules,\npg_stat_plans or pg_store_plans.. The former is faster but may lose\nplans, the latter doesn't lose plans but slower. 
I feel that we'd\nbetter consider a simpler feature if we are intending it to be a part of\na contrib module,\n\n> Yamada-san,\n> \n> Do you think it's effective to just distinguish between generic\n> and custom plans?\n> \n> Regards,\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 04 Dec 2020 15:37:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "> <torikoshia@oss.nttdata.com> wrote in\n\n>> ISTM now that creating pg_stat_statements_xxx views\n>> both for generic and custom plans is better than my PoC patch.\n\nOn my second thought, it also makes pg_stat_statements too complicated\ncompared to what it makes possible..\n\nI'm also worrying that whether taking generic and custom plan execution\ntime or not would be controlled by a GUC variable, and the default\nwould be not taking them.\nNot many people will change the default.\n\nSince the same queryid can contain various queries (different plan,\ndifferent parameter $n, etc.), I also started to feel that it is not\nappropriate to get the execution time of only generic/custom queries\nseparately.\n\nI suppose it would be normal practice to store past results of\npg_stat_statements for future comparisons.\nIf this is the case, I think that if we only add the number of\ngeneric plan execution, it will give us a hint to notice the cause\nof performance degradation due to changes in the plan between\ngeneric and custom.\n\nFor example, if there is a clear difference in the number of times\nthe generic plan is executed between before and after performance\ndegradation as below, it would be natural to check if there is a\nproblem with the generic plan.\n\n [after performance degradation]\n =# SELECT query, calls, generic_calls FROM pg_stat_statements where \nquery like '%t1%';\n query | calls | generic_calls\n 
---------------------------------------------+-------+---------------\n PREPARE p1 as select * from t1 where i = $1 | 1100 | 50\n\n [before performance degradation]\n =# SELECT query, calls, generic_calls FROM pg_stat_statements where \nquery like '%t1%';\n query | calls | generic_calls\n ---------------------------------------------+-------+---------------\n PREPARE p1 as select * from t1 where i = $1 | 1000 | 0\n\n\nAttached a patch that just adds a generic call counter to\npg_stat_statements.\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Tue, 12 Jan 2021 20:36:58 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi Atsushi,\r\n\r\nI just run a few test on your latest patch. It works well. I agree with you, I think just tracking generic_calls is enough to get the reason of performance change from pg_stat_statements. I mean if you need more detailed information about the plan and execution of prepared statements, it is better to store this information in a separate view. It seems more reasonable to me.\r\n\r\nRegards,", "msg_date": "Fri, 22 Jan 2021 11:25:40 +0000", "msg_from": "Chengxi Sun <sunchengxi@highgo.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" 
}, { "msg_contents": "At Tue, 12 Jan 2021 20:36:58 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> I suppose it would be normal practice to store past results of\n> pg_stat_statements for future comparisons.\n> If this is the case, I think that if we only add the number of\n> generic plan execution, it will give us a hint to notice the cause\n> of performance degradation due to changes in the plan between\n> generic and custom.\n\nAgreed.\n\n> Attached a patch that just adds a generic call counter to\n> pg_stat_statements.\n> \n> Any thoughts?\n\nNote that ActivePortal is the closest nested portal. So it gives the\nwrong result for nested portals.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 Jan 2021 14:10:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2020/12/04 14:29, Fujii Masao wrote:\n> \n> On 2020/11/30 15:24, Tatsuro Yamada wrote:\n>> Hi Torikoshi-san,\n>>\n>>\n>>> In this patch, exposing new columns is mandatory, but I think\n>>> it's better to make it optional by adding a GUC something\n>>> like 'pgss.track_general_custom_plans.\n>>>\n>>> I also feel it makes the number of columns too many.\n>>> Just adding the total time may be sufficient.\n>>\n>>\n>> I think this feature is useful for DBA. So I hope that it gets\n>> committed to PG14. IMHO, many columns are Okay because DBA can\n>> select specific columns by their query.\n>> Therefore, it would be better to go with the current design.\n> \n> But that design may waste lots of memory. No? For example, when\n> plan_cache_mode=force_custom_plan, the memory used for the columns\n> for generic plans is not used.\n> \n> Regards,\n\n\nSorry for the super delayed replay.\nI don't think that because I suppose that DBA uses plan_cache_mode if\nthey faced an inefficient execution plan. 
And the parameter will be used\nas a session-level GUC parameter, not a database-level.\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 07:53:18 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Torikoshi-san,\n\nOn 2020/12/04 15:03, torikoshia wrote:\n>\n> ISTM now that creating pg_stat_statements_xxx views\n> both for generic andcustom plans is better than my PoC patch.\n> \n> And I'm also struggling with the following.\n> \n> | However, I also began to wonder how effective it would be to just\n> | distinguish between generic and custom plans.  Custom plans can\n> | include all sorts of plans. and thinking cache validation, generic\n> | plans can also include various plans.\n> \n> | Considering this, I'm starting to feel that it would be better to\n> | not just keeping whether generic or cutom but the plan itself as\n> | discussed in the below thread.\n> \n> \n> Yamada-san,\n> \n> Do you think it's effective just distinguish between generic\n> and custom plans?\n\nSorry for the super delayed replay.\n\nAh, it's okay.\nIt would be better to check both info by using a single view from the\nperspective of usability. However, it's okay to divide both information\ninto two views to use memory effectively.\n\nRegards,\nTatsuro Yamada\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 07:56:22 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Horiguchi-san,\n\nOn 2020/12/04 15:37, Kyotaro Horiguchi wrote:\n>> And I'm also struggling with the following.\n>>\n>> | However, I also began to wonder how effective it would be to just\n>> | distinguish between generic and custom plans. Custom plans can\n>> | include all sorts of plans. 
and thinking cache validation, generic\n>> | plans can also include various plans.\n>>\n>> | Considering this, I'm starting to feel that it would be better to\n>> | not just keeping whether generic or cutom but the plan itself as\n>> | discussed in the below thread.\n> \n> FWIW, that seems to me to be like some existing extension modules,\n> pg_stat_plans or pg_store_plans.. The former is faster but may lose\n> plans, the latter doesn't lose plans but slower. I feel that we'd\n> beter consider simpler feature if we are intendeng it to be a part of\n> a contrib module,\n\nThere is also pg_show_plans.\nIdeally, it would be better to able to track all of the plan changes by\nchecking something view since Plan Stability is important for DBA when\nthey use PostgreSQL in Mission-critical systems.\nI prefer that the feature will be released as a contrib module. :-D\n\nRegards,\nTatsuro Yamada\n \n\n\n\n", "msg_date": "Thu, 28 Jan 2021 08:02:02 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Hi Toricoshi-san,\n\nOn 2021/01/12 20:36, torikoshia wrote:\n> I suppose it would be normal practice to store past results of\n> pg_stat_statements for future comparisons.\n> If this is the case, I think that if we only add the number of\n> generic plan execution, it will give us a hint to notice the cause\n> of performance degradation due to changes in the plan between\n> generic and custom.\n> \n> For example, if there is a clear difference in the number of times\n> the generic plan is executed between before and after performance\n> degradation as below, it would be natural to check if there is a\n> problem with the generic plan.\n...\n> Attached a patch that just adds a generic call counter to\n> pg_stat_statements.\n\n\nI think that I'd like to use the view when we faced a performance\nproblem and find the reason. 
If we did the fixed-point observation\n(should I say time-series analysis?) of generic_calls, it allows us to\nrealize the counter changes, and we can know whether the suspect is\ngeneric_plan or not. So the patch helps DBA, I believe.\n\nRegards,\nTatsuro Yamada\n\n\n\n", "msg_date": "Thu, 28 Jan 2021 08:11:08 +0900", "msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "Chengxi Sun, Yamada-san, Horiguchi-san,\n\nThanks for all your comments.\nAdding only the number of generic plan execution seems acceptable.\n\nOn Mon, Jan 25, 2021 at 2:10 PM Kyotaro Horiguchi \n<horikyota.ntt@gmail.com> wrote:\n> Note that ActivePortal is the closest nested portal. So it gives the\n> wrong result for nested portals.\n\nI may be wrong, but I thought it was ok since the closest nested portal \nis the portal to be executed.\n\nActivePortal is used in ExecutorStart hook in the patch.\nAnd as far as I read PortalStart(), ActivePortal is changed to the \nportal to be executed before ExecutorStart().\n\nIf possible, could you tell me the specific case which causes wrong \nresults?\n\nRegards,\n\n--\nAtsushi Torikoshi\n\n\n", "msg_date": "Thu, 04 Feb 2021 10:16:47 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "At Thu, 04 Feb 2021 10:16:47 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> Chengxi Sun, Yamada-san, Horiguchi-san,\n> \n> Thanks for all your comments.\n> Adding only the number of generic plan execution seems acceptable.\n> \n> On Mon, Jan 25, 2021 at 2:10 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Note that ActivePortal is the closest nested portal. 
So it gives the\n> > wrong result for nested portals.\n> \n> I may be wrong, but I thought it was ok since the closest nested\n> portal is the portal to be executed.\n\nAfter executing the inner-most portal, is_plan_type_generic has a\nvalue for the inner-most portal and it won't be changed ever after. At\nthe ExecutorEnd of all the upper-portals see the value for the\ninner-most portal left behind is_plan_type_generic nevertheless the\nportals at every nest level are indenpendent.\n\n> ActivePortal is used in ExecutorStart hook in the patch.\n> And as far as I read PortalStart(), ActivePortal is changed to the\n> portal to be executed before ExecutorStart().\n> \n> If possible, could you tell me the specific case which causes wrong\n> results?\n\nRunning a plpgsql function that does PREPRE in a query that does\nPREPARE?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 04 Feb 2021 11:19:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2021-02-04 11:19, Kyotaro Horiguchi wrote:\n> At Thu, 04 Feb 2021 10:16:47 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> Chengxi Sun, Yamada-san, Horiguchi-san,\n>> \n>> Thanks for all your comments.\n>> Adding only the number of generic plan execution seems acceptable.\n>> \n>> On Mon, Jan 25, 2021 at 2:10 PM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>> > Note that ActivePortal is the closest nested portal. So it gives the\n>> > wrong result for nested portals.\n>> \n>> I may be wrong, but I thought it was ok since the closest nested\n>> portal is the portal to be executed.\n> \n> After executing the inner-most portal, is_plan_type_generic has a\n> value for the inner-most portal and it won't be changed ever after. 
At\n> the ExecutorEnd of all the upper-portals see the value for the\n> inner-most portal left behind is_plan_type_generic nevertheless the\n> portals at every nest level are independent.\n> \n>> ActivePortal is used in ExecutorStart hook in the patch.\n>> And as far as I read PortalStart(), ActivePortal is changed to the\n>> portal to be executed before ExecutorStart().\n>> \n>> If possible, could you tell me the specific case which causes wrong\n>> results?\n> \n> Running a plpgsql function that does PREPRE in a query that does\n> PREPARE?\n\nThanks for your explanation!\n\nI confirmed that it in fact happened.\n\nTo avoid it, attached patch preserves the is_plan_type_generic before \nchanging it and sets it back at the end of pgss_ExecutorEnd().\n\nAny thoughts?\n\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Mon, 08 Feb 2021 14:02:23 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2021/01/28 8:11, Tatsuro Yamada wrote:\n> Hi Toricoshi-san,\n> \n> On 2021/01/12 20:36, torikoshia wrote:\n>> I suppose it would be normal practice to store past results of\n>> pg_stat_statements for future comparisons.\n>> If this is the case, I think that if we only add the number of\n>> generic plan execution, it will give us a hint to notice the cause\n>> of performance degradation due to changes in the plan between\n>> generic and custom.\n>>\n>> For example, if there is a clear difference in the number of times\n>> the generic plan is executed between before and after performance\n>> degradation as below, it would be natural to check if there is a\n>> problem with the generic plan.\n> ...\n>> Attached a patch that just adds a generic call counter to\n>> pg_stat_statements.\n> \n> \n> I think that I'd like to use the view when we faced a performance\n> problem and find the reason. 
If we did the fixed-point observation\n> (should I say time-series analysis?) of generic_calls, it allows us to\n> realize the counter changes, and we can know whether the suspect is\n> generic_plan or not. So the patch helps DBA, I believe.\n\nIn that use case maybe what you actually want to see is whether the plan was\nchanged or not, rather than whether generic plan or custom plan is used?\nIf so, it's better to expose seq_scan (num of sequential scans processed by\nthe query) and idx_scan (num of index scans processed by the query) like\npg_stat_all_tables, per query in pg_stat_statements?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 5 Mar 2021 17:46:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2021/02/08 14:02, torikoshia wrote:\n> On 2021-02-04 11:19, Kyotaro Horiguchi wrote:\n>> At Thu, 04 Feb 2021 10:16:47 +0900, torikoshia\n>> <torikoshia@oss.nttdata.com> wrote in\n>>> Chengxi Sun, Yamada-san, Horiguchi-san,\n>>>\n>>> Thanks for all your comments.\n>>> Adding only the number of generic plan execution seems acceptable.\n>>>\n>>> On Mon, Jan 25, 2021 at 2:10 PM Kyotaro Horiguchi\n>>> <horikyota.ntt@gmail.com> wrote:\n>>> > Note that ActivePortal is the closest nested portal. So it gives the\n>>> > wrong result for nested portals.\n>>>\n>>> I may be wrong, but I thought it was ok since the closest nested\n>>> portal is the portal to be executed.\n>>\n>> After executing the inner-most portal, is_plan_type_generic has a\n>> value for the inner-most portal and it won't be changed ever after. 
At\n>> the ExecutorEnd of all the upper-portals see the value for the\n>> inner-most portal left behind is_plan_type_generic nevertheless the\n>> portals at every nest level are independent.\n>>\n>>> ActivePortal is used in ExecutorStart hook in the patch.\n>>> And as far as I read PortalStart(), ActivePortal is changed to the\n>>> portal to be executed before ExecutorStart().\n>>>\n>>> If possible, could you tell me the specific case which causes wrong\n>>> results?\n>>\n>> Running a plpgsql function that does PREPRE in a query that does\n>> PREPARE?\n> \n> Thanks for your explanation!\n> \n> I confirmed that it in fact happened.\n> \n> To avoid it, attached patch preserves the is_plan_type_generic before changing it and sets it back at the end of pgss_ExecutorEnd().\n> \n> Any thoughts?\n\nI just tried this feature. When I set plan_cache_mode to force_generic_plan\nand executed the following queries, I found that pg_stat_statements.generic_calls\nand pg_prepared_statements.generic_plans were not the same.\nIs this behavior expected? I was thinking that they are basically the same.\n\n\nDEALLOCATE ALL;\nSELECT pg_stat_statements_reset();\nPREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\nEXECUTE hoge(1);\nEXECUTE hoge(1);\nEXECUTE hoge(1);\n\nSELECT generic_plans, statement FROM pg_prepared_statements WHERE statement LIKE '%hoge%';\n generic_plans | statement\n---------------+----------------------------------------------------------------\n 3 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\n\nSELECT calls, generic_calls, query FROM pg_stat_statements WHERE query LIKE '%hoge%';\n calls | generic_calls | query\n-------+---------------+---------------------------------------------------------------\n 3 | 2 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1\n\n\n\n\nWhen I executed the prepared statements via EXPLAIN ANALYZE, I found\npg_stat_statements.generic_calls was not incremented. 
Is this behavior expected?\nOr we should count generic_calls even when executing the queries via ProcessUtility()?\n\nDEALLOCATE ALL;\nSELECT pg_stat_statements_reset();\nPREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\nEXPLAIN ANALYZE EXECUTE hoge(1);\nEXPLAIN ANALYZE EXECUTE hoge(1);\nEXPLAIN ANALYZE EXECUTE hoge(1);\n\nSELECT generic_plans, statement FROM pg_prepared_statements WHERE statement LIKE '%hoge%';\n generic_plans | statement\n---------------+----------------------------------------------------------------\n 3 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\n\nSELECT calls, generic_calls, query FROM pg_stat_statements WHERE query LIKE '%hoge%';\n calls | generic_calls | query\n-------+---------------+---------------------------------------------------------------\n 3 | 0 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1\n 3 | 0 | EXPLAIN ANALYZE EXECUTE hoge(1)\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 5 Mar 2021 17:47:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2021-03-05 17:47, Fujii Masao wrote:\n\nThanks for your comments!\n\n> I just tried this feature. When I set plan_cache_mode to \n> force_generic_plan\n> and executed the following queries, I found that\n> pg_stat_statements.generic_calls\n> and pg_prepared_statements.generic_plans were not the same.\n> Is this behavior expected? 
I was thinking that they are basically the \n> same.\n\nIt's not expected behavior, fixed.\n\n> \n> DEALLOCATE ALL;\n> SELECT pg_stat_statements_reset();\n> PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\n> EXECUTE hoge(1);\n> EXECUTE hoge(1);\n> EXECUTE hoge(1);\n> \n> SELECT generic_plans, statement FROM pg_prepared_statements WHERE\n> statement LIKE '%hoge%';\n> generic_plans | statement\n> ---------------+----------------------------------------------------------------\n> 3 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE \n> aid = $1;\n> \n> SELECT calls, generic_calls, query FROM pg_stat_statements WHERE query\n> LIKE '%hoge%';\n> calls | generic_calls | query\n> -------+---------------+---------------------------------------------------------------\n> 3 | 2 | PREPARE hoge AS SELECT * FROM\n> pgbench_accounts WHERE aid = $1\n> \n> \n> \n> \n> When I executed the prepared statements via EXPLAIN ANALYZE, I found\n> pg_stat_statements.generic_calls was not incremented. 
Is this behavior \n> expected?\n> Or we should count generic_calls even when executing the queries via\n> ProcessUtility()?\n\nI think prepared statements via EXPLAIN ANALYZE also should be counted\nfor consistency with pg_prepared_statements.\n\nSince ActivePortal did not keep the plan type in the \nProcessUtility_hook,\nI moved the global variables 'is_plan_type_generic' and\n'is_prev_plan_type_generic' from pg_stat_statements to plancache.c.\n\n> \n> DEALLOCATE ALL;\n> SELECT pg_stat_statements_reset();\n> PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE aid = $1;\n> EXPLAIN ANALYZE EXECUTE hoge(1);\n> EXPLAIN ANALYZE EXECUTE hoge(1);\n> EXPLAIN ANALYZE EXECUTE hoge(1);\n> \n> SELECT generic_plans, statement FROM pg_prepared_statements WHERE\n> statement LIKE '%hoge%';\n> generic_plans | statement\n> ---------------+----------------------------------------------------------------\n> 3 | PREPARE hoge AS SELECT * FROM pgbench_accounts WHERE \n> aid = $1;\n> \n> SELECT calls, generic_calls, query FROM pg_stat_statements WHERE query\n> LIKE '%hoge%';\n> calls | generic_calls | query\n> -------+---------------+---------------------------------------------------------------\n> 3 | 0 | PREPARE hoge AS SELECT * FROM\n> pgbench_accounts WHERE aid = $1\n> 3 | 0 | EXPLAIN ANALYZE EXECUTE hoge(1)\n> \n> \n\nRegards,", "msg_date": "Tue, 23 Mar 2021 16:32:40 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2021/03/23 16:32, torikoshia wrote:\n> On 2021-03-05 17:47, Fujii Masao wrote:\n> \n> Thanks for your comments!\n\nThanks for updating the patch!\n\nPostgreSQL Patch Tester reported that the patched version failed to be compiled\nat Windows. 
Could you fix this issue?\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.131238\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 22:14:51 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2021-03-25 22:14, Fujii Masao wrote:\n> On 2021/03/23 16:32, torikoshia wrote:\n>> On 2021-03-05 17:47, Fujii Masao wrote:\n>> \n>> Thanks for your comments!\n> \n> Thanks for updating the patch!\n> \n> PostgreSQL Patch Tester reported that the patched version failed to be \n> compiled\n> at Windows. Could you fix this issue?\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.131238\n> \n\nIt seems PGDLLIMPORT was necessary..\nAttached a new one.\n\nRegards.", "msg_date": "Fri, 26 Mar 2021 00:33:08 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "\n\nOn 2021/03/26 0:33, torikoshia wrote:\n> On 2021-03-25 22:14, Fujii Masao wrote:\n>> On 2021/03/23 16:32, torikoshia wrote:\n>>> On 2021-03-05 17:47, Fujii Masao wrote:\n>>>\n>>> Thanks for your comments!\n>>\n>> Thanks for updating the patch!\n>>\n>> PostgreSQL Patch Tester reported that the patched version failed to be compiled\n>> at Windows. Could you fix this issue?\n>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.131238\n>>\n> \n> It seems PGDLLIMPORT was necessary..\n> Attached a new one.\n\nThanks for updating the patch!\n\nIn my test, generic_calls for a utility command was not incremented\nbefore PL/pgSQL function was executed. Maybe this is expected behavior.\nBut it was incremented after the function was executed. 
Is this a bug?\nPlease see the following example.\n\n-------------------------------------------\nSELECT pg_stat_statements_reset();\nSET enable_seqscan TO on;\nSELECT calls, generic_calls, query FROM pg_stat_statements WHERE query LIKE '%seqscan%';\nCREATE OR REPLACE FUNCTION do_ckpt() RETURNS VOID AS $$\nBEGIN\n EXECUTE 'CHECKPOINT';\nEND $$ LANGUAGE plpgsql;\nSET enable_seqscan TO on;\nSET enable_seqscan TO on;\n\n-- SET commands were executed three times before do_ckpt() was called.\n-- generic_calls for SET command is zero in this case.\nSELECT calls, generic_calls, query FROM pg_stat_statements WHERE query LIKE '%seqscan%';\n\nSELECT do_ckpt();\nSET enable_seqscan TO on;\nSET enable_seqscan TO on;\nSET enable_seqscan TO on;\n\n-- SET commands were executed additionally three times after do_ckpt() was called.\n-- generic_calls for SET command is three in this case.\nSELECT calls, generic_calls, query FROM pg_stat_statements WHERE query LIKE '%seqscan%';\n-------------------------------------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 26 Mar 2021 17:46:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" }, { "msg_contents": "On 2021-03-26 17:46, Fujii Masao wrote:\n> On 2021/03/26 0:33, torikoshia wrote:\n>> On 2021-03-25 22:14, Fujii Masao wrote:\n>>> On 2021/03/23 16:32, torikoshia wrote:\n>>>> On 2021-03-05 17:47, Fujii Masao wrote:\n>>>> \n>>>> Thanks for your comments!\n>>> \n>>> Thanks for updating the patch!\n>>> \n>>> PostgreSQL Patch Tester reported that the patched version failed to \n>>> be compiled\n>>> at Windows. 
Could you fix this issue?\n>>> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.131238\n>>> \n>> \n>> It seems PGDLLIMPORT was necessary..\n>> Attached a new one.\n> \n> Thanks for updating the patch!\n> \n> In my test, generic_calls for a utility command was not incremented\n> before PL/pgSQL function was executed. Maybe this is expected behavior.\n> But it was incremented after the function was executed. Is this a bug?\n> Please see the following example.\n\nThanks for reviewing!\n\nIt's a bug and regrettably it seems difficult to fix it during this\ncommitfest.\n\nMarked the patch as \"Withdrawn\".\n\n\nRegards,\n\n\n", "msg_date": "Mon, 05 Apr 2021 18:01:48 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Is it useful to record whether plans are generic or custom?" } ]
[ { "msg_contents": "Rebased onto current master (fb544735f1).\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 12 May 2020 15:24:18 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "\n\nOn 2020/05/12 19:24, Andrey Lepikhov wrote:\n> Rebased onto current master (fb544735f1).\n\nThanks for the patches!\n\nThese patches are no longer applied cleanly and caused the compilation failure.\nSo could you rebase and update them?\n\nThe patches seem not to be registered in CommitFest yet.\nAre you planning to do that?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Jun 2020 15:41:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 09.06.2020 11:41, Fujii Masao wrote:\n>\n>\n> On 2020/05/12 19:24, Andrey Lepikhov wrote:\n>> Rebased onto current master (fb544735f1).\n>\n> Thanks for the patches!\n>\n> These patches are no longer applied cleanly and caused the compilation \n> failure.\n> So could you rebase and update them?\nRebased onto 57cb806308 (see attachment).\n>\n> The patches seem not to be registered in CommitFest yet.\n> Are you planning to do that?\nNot now. It is a sharding-related feature. I'm not sure that this \napproach is fully consistent with the sharding way now.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com", "msg_date": "Wed, 10 Jun 2020 08:05:47 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Wed, Jun 10, 2020 at 8:36 AM Andrey V. 
Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n>\n> On 09.06.2020 11:41, Fujii Masao wrote:\n> >\n> >\n> > The patches seem not to be registered in CommitFest yet.\n> > Are you planning to do that?\n> Not now. It is a sharding-related feature. I'm not sure that this\n> approach is fully consistent with the sharding way now.\n>\n\nCan you please explain in detail, why you think so? There is no\ncommit message explaining what each patch does so it is difficult to\nunderstand why you said so? Also, can you let us know if this\nsupports 2PC in some way and if so how is it different from what the\nother thread on the same topic [1] is trying to achieve? Also, I\nwould like to know if the patch related to CSN based snapshot [2] is a\nprecursor for this, if not, then is it any way related to this patch\nbecause I see the latest reply on that thread [2] which says it is an\ninfrastructure of sharding feature but I don't understand completely\nwhether these patches are related?\n\nBasically, there seem to be three threads, first, this one and then\n[1] and [2] which seems to be doing the work for sharding feature but\nthere is no clear explanation anywhere if these are anyway related or\nwhether combining all these three we are aiming for a solution for\natomic commit and atomic visibility.\n\nI am not sure if you know answers to all these questions so I added\nthe people who seem to be working on the other two patches. 
I am also\nafraid that if there is any duplicate or conflicting work going on in\nthese threads so we should try to find that as well.\n\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k4v%2BKdofMyN%2BjnOia8-7rto8tsh9Zs3dd7kncvHp12WYw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/2020061911294657960322%40highgo.ca\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jun 2020 12:18:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 6/19/20 11:48 AM, Amit Kapila wrote:\n> On Wed, Jun 10, 2020 at 8:36 AM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 09.06.2020 11:41, Fujii Masao wrote:\n>>> The patches seem not to be registered in CommitFest yet.\n>>> Are you planning to do that?\n>> Not now. It is a sharding-related feature. I'm not sure that this\n>> approach is fully consistent with the sharding way now.\n> Can you please explain in detail, why you think so? There is no\n> commit message explaining what each patch does so it is difficult to\n> understand why you said so?\nFor now I used this patch set for providing correct visibility in the \ncase of access to the table with foreign partitions from many nodes in \nparallel. So I saw at this patch set as a sharding-related feature, but \n[1] shows another useful application.\nCSN-based approach has weak points such as:\n1. Dependency on clocks synchronization\n2. Needs guarantees of monotonically increasing of the CSN in the case \nof an instance restart/crash etc.\n3. 
We need to delay increasing of OldestXmin because it can be needed \nfor a transaction snapshot at another node.\nSo I do not have full conviction that it will be better than a single \ndistributed transaction manager.\n Also, can you let us know if this\n> supports 2PC in some way and if so how is it different from what the\n> other thread on the same topic [1] is trying to achieve?\nYes, the patch '0003-postgres_fdw-support-for-global-snapshots' contains \n2PC machinery. Now I'd not judge which approach is better.\n Also, I\n> would like to know if the patch related to CSN based snapshot [2] is a\n> precursor for this, if not, then is it any way related to this patch\n> because I see the latest reply on that thread [2] which says it is an\n> infrastructure of sharding feature but I don't understand completely\n> whether these patches are related?\nI need some time to study this patch. At first sight it is different.\n> \n> Basically, there seem to be three threads, first, this one and then\n> [1] and [2] which seems to be doing the work for sharding feature but\n> there is no clear explanation anywhere if these are anyway related or\n> whether combining all these three we are aiming for a solution for\n> atomic commit and atomic visibility.\nIt can be useful to study all approaches.\n> \n> I am not sure if you know answers to all these questions so I added\n> the people who seem to be working on the other two patches. 
I am also\n> afraid that if there is any duplicate or conflicting work going on in\n> these threads so we should try to find that as well.\nOk\n> \n> \n> [1] - https://www.postgresql.org/message-id/CA%2Bfd4k4v%2BKdofMyN%2BjnOia8-7rto8tsh9Zs3dd7kncvHp12WYw%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/2020061911294657960322%40highgo.ca\n> \n\n[1] \nhttps://www.postgresql.org/message-id/flat/20200301083601.ews6hz5dduc3w2se%40alap3.anarazel.de\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\n\n\n", "msg_date": "Fri, 19 Jun 2020 13:11:59 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": ">> would like to know if the patch related to CSN based snapshot [2] is a\r\n>> precursor for this, if not, then is it any way related to this patch\r\n>> because I see the latest reply on that thread [2] which says it is an\r\n>> infrastructure of sharding feature but I don't understand completely\r\n>> whether these patches are related?\r\n>I need some time to study this patch. At first sight it is different.\r\n\r\nThis patch[2] is almost base on [3], because I think [1] is talking about 2PC\r\nand FDW, so this patch focus on CSN only and I detach the global snapshot\r\npart and FDW part from the [1] patch. 
\r\n\r\nI notice CSN will not survival after a restart in [1] patch, I think it may not the\r\nright way, may be it is what in last mail \"Needs guarantees of monotonically\r\nincreasing of the CSN in the case of an instance restart/crash etc\" so I try to\r\nadd wal support for CSN on this patch.\r\n\r\nThat's why this thread exist.\r\n\r\n> [1] - https://www.postgresql.org/message-id/CA%2Bfd4k4v%2BKdofMyN%2BjnOia8-7rto8tsh9Zs3dd7kncvHp12WYw%40mail.gmail.com\r\n> [2] - https://www.postgresql.org/message-id/2020061911294657960322%40highgo.ca\r\n[3]https://www.postgresql.org/message-id/21BC916B-80A1-43BF-8650-3363CCDAE09C%40postgrespro.ru \r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Fri, 19 Jun 2020 17:03:20 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Fri, Jun 19, 2020 at 05:03:20PM +0800, movead.li@highgo.ca wrote:\n> \n> >> would like to know if the patch related to CSN based snapshot [2] is a\n> >> precursor for this, if not, then is it any way related to this patch\n> >> because I see the latest reply on that thread [2] which says it is an\n> >> infrastructure of sharding feature but I don't understand completely\n> >> whether these patches are related?\n> >I need some time to study this patch.. At first sight it is different.\n> \n> This patch[2] is almost base on [3], because I think [1] is talking about 2PC\n> and FDW, so this patch focus on CSN only and I detach the global snapshot\n> part and FDW part from the [1] patch. \n> \n> I notice CSN will not survival after a restart in [1] patch, I think it may not\n> the\n> right way, may be it is what in last mail \"Needs guarantees of monotonically\n> increasing of the CSN in the case of an instance restart/crash etc\" so I try to\n> add wal support for CSN on this patch.\n> \n> That's why this thread exist.\n\nI was certainly missing how these items fit together. 
Sharding needs\nparallel FDWs, atomic commits, and atomic snapshots. To get atomic\nsnapshots, we need CSN. This new sharding wiki pages has more details:\n\n\thttps://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n\nAfter all that is done, we will need optimizer improvements and shard\nmanagement tooling.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 19 Jun 2020 09:02:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Fri, Jun 19, 2020 at 1:42 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> On 6/19/20 11:48 AM, Amit Kapila wrote:\n> > On Wed, Jun 10, 2020 at 8:36 AM Andrey V. Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >> On 09.06.2020 11:41, Fujii Masao wrote:\n> >>> The patches seem not to be registered in CommitFest yet.\n> >>> Are you planning to do that?\n> >> Not now. It is a sharding-related feature. I'm not sure that this\n> >> approach is fully consistent with the sharding way now.\n> > Can you please explain in detail, why you think so? There is no\n> > commit message explaining what each patch does so it is difficult to\n> > understand why you said so?\n> For now I used this patch set for providing correct visibility in the\n> case of access to the table with foreign partitions from many nodes in\n> parallel. So I saw at this patch set as a sharding-related feature, but\n> [1] shows another useful application.\n> CSN-based approach has weak points such as:\n> 1. Dependency on clocks synchronization\n> 2. Needs guarantees of monotonically increasing of the CSN in the case\n> of an instance restart/crash etc.\n> 3. We need to delay increasing of OldestXmin because it can be needed\n> for a transaction snapshot at another node.\n>\n\nSo, is anyone working on improving these parts of the patch. 
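Just to make weak point 2 above concrete (a CSN source that must keep increasing across an instance restart/crash): the usual shape of a fix is to durably record an upper bound for handed-out CSNs (e.g. in WAL, which seems to be what the CSN-snapshot thread is adding) and clamp the clock against it at startup. This is only an illustrative sketch of that idea, not code from any of these patches; the persisted_max value stands in for whatever is durably logged:

```python
import time

class CsnGenerator:
    """Illustrative monotonic CSN source that survives restarts.

    persisted_max stands in for a value durably recorded (e.g. WAL-logged
    before CSNs above it are handed out); it is an assumption of this
    sketch, not a structure from the patch.
    """
    def __init__(self, persisted_max, clock=time.time_ns):
        self.last = persisted_max
        self.clock = clock

    def next_csn(self):
        # Take the wall clock, but never go backwards -- even if the
        # clock jumped back, or the instance restarted on a slow clock.
        self.last = max(self.last + 1, self.clock())
        return self.last
```

Note this only gives local monotonicity; it does nothing for weak point 1, the dependency on clock synchronization between nodes.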
AFAICS\nfrom what Bruce has shared [1], some people from HighGo are working on\nit but I don't see any discussion of that yet.\n\n> So I do not have full conviction that it will be better than a single\n> distributed transaction manager.\n>\n\nWhen you say \"single distributed transaction manager\" do you mean\nsomething like pg_dtm which is inspired by Postgres-XL?\n\n> Also, can you let us know if this\n> > supports 2PC in some way and if so how is it different from what the\n> > other thread on the same topic [1] is trying to achieve?\n> Yes, the patch '0003-postgres_fdw-support-for-global-snapshots' contains\n> 2PC machinery. Now I'd not judge which approach is better.\n>\n\nYeah, I have studied both the approaches a little and I feel the main\ndifference seems to be that in this patch atomicity is tightly coupled\nwith how we achieve global visibility, basically in this patch \"all\nrunning transactions are marked as InDoubt on all nodes in prepare\nphase, and after that, each node commit it and stamps each xid with a\ngiven GlobalCSN.\". There are no separate APIs for\nprepare/commit/rollback exposed by postgres_fdw as we do it in the\napproach followed by Sawada-San's patch. It seems to me in the patch\nin this email one of postgres_fdw node can be a sort of coordinator\nwhich prepares and commit the transaction on all other nodes whereas\nthat is not true in Sawada-San's patch (where the coordinator is a\nlocal Postgres node, am I right Sawada-San?). OTOH, Sawada-San's\npatch has advanced concepts like a resolver process that can\ncommit/abort the transactions later. I couldn't still get a complete\ngrip of both patches so difficult to say which is better approach but\nI think at the least we should have some discussion.\n\nI feel if Sawada-San or someone involved in another patch also once\nstudies this approach and try to come up with some form of comparison\nthen we might be able to make better decision. 
It is possible that\nthere are few good things in each approach which we can use.\n\n> Also, I\n> > would like to know if the patch related to CSN based snapshot [2] is a\n> > precursor for this, if not, then is it any way related to this patch\n> > because I see the latest reply on that thread [2] which says it is an\n> > infrastructure of sharding feature but I don't understand completely\n> > whether these patches are related?\n> I need some time to study this patch. At first sight it is different.\n>\n\nI feel the opposite. I think it has extracted some stuff from this\npatch series and extended the same.\n\nThanks for the inputs. I feel inputs from you and others who were\ninvolved in this project will be really helpful to move this project\nforward.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Jun 2020 17:51:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Sat, Jun 20, 2020 at 5:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> So, is anyone working on improving these parts of the patch. 
AFAICS\n> from what Bruce has shared [1],\n>\n\noops, forgot to share the link [1] -\nhttps://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Jun 2020 17:52:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Fri, Jun 19, 2020 at 6:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Jun 19, 2020 at 05:03:20PM +0800, movead.li@highgo.ca wrote:\n> >\n> > >> would like to know if the patch related to CSN based snapshot [2] is a\n> > >> precursor for this, if not, then is it any way related to this patch\n> > >> because I see the latest reply on that thread [2] which says it is an\n> > >> infrastructure of sharding feature but I don't understand completely\n> > >> whether these patches are related?\n> > >I need some time to study this patch.. At first sight it is different.\n> >\n> > This patch[2] is almost base on [3], because I think [1] is talking about 2PC\n> > and FDW, so this patch focus on CSN only and I detach the global snapshot\n> > part and FDW part from the [1] patch.\n> >\n> > I notice CSN will not survival after a restart in [1] patch, I think it may not\n> > the\n> > right way, may be it is what in last mail \"Needs guarantees of monotonically\n> > increasing of the CSN in the case of an instance restart/crash etc\" so I try to\n> > add wal support for CSN on this patch.\n> >\n> > That's why this thread exist.\n>\n> I was certainly missing how these items fit together. Sharding needs\n> parallel FDWs, atomic commits, and atomic snapshots. To get atomic\n> snapshots, we need CSN. This new sharding wiki pages has more details:\n>\n> https://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n>\n\nThanks for maintaining this page. 
It is quite helpful!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Jun 2020 17:54:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Sat, Jun 20, 2020 at 05:54:18PM +0530, Amit Kapila wrote:\n> On Fri, Jun 19, 2020 at 6:33 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Jun 19, 2020 at 05:03:20PM +0800, movead.li@highgo.ca wrote:\n> > >\n> > > >> would like to know if the patch related to CSN based snapshot [2] is a\n> > > >> precursor for this, if not, then is it any way related to this patch\n> > > >> because I see the latest reply on that thread [2] which says it is an\n> > > >> infrastructure of sharding feature but I don't understand completely\n> > > >> whether these patches are related?\n> > > >I need some time to study this patch.. At first sight it is different.\n> > >\n> > > This patch[2] is almost base on [3], because I think [1] is talking about 2PC\n> > > and FDW, so this patch focus on CSN only and I detach the global snapshot\n> > > part and FDW part from the [1] patch.\n> > >\n> > > I notice CSN will not survival after a restart in [1] patch, I think it may not\n> > > the\n> > > right way, may be it is what in last mail \"Needs guarantees of monotonically\n> > > increasing of the CSN in the case of an instance restart/crash etc\" so I try to\n> > > add wal support for CSN on this patch.\n> > >\n> > > That's why this thread exist.\n> >\n> > I was certainly missing how these items fit together. Sharding needs\n> > parallel FDWs, atomic commits, and atomic snapshots. To get atomic\n> > snapshots, we need CSN. This new sharding wiki pages has more details:\n> >\n> > https://wiki.postgresql.org/wiki/WIP_PostgreSQL_Sharding\n> >\n> \n> Thanks for maintaining this page. It is quite helpful!\n\nAhsan Hadi <ahsan.hadi@highgo.ca> created that page, and I just made a\nfew wording edits. 
Ahsan is copying information from this older\nsharding wiki page:\n\n\thttps://wiki.postgresql.org/wiki/Built-in_Sharding\n\nto the new one you listed above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 11:00:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Sat, Jun 20, 2020 at 05:51:21PM +0530, Amit Kapila wrote:\n> I feel if Sawada-San or someone involved in another patch also once\n> studies this approach and try to come up with some form of comparison\n> then we might be able to make better decision. It is possible that\n> there are few good things in each approach which we can use.\n\nAgreed. Postgres-XL code is under the Postgres license:\n\n\tPostgres-XL is released under the PostgreSQL License, a liberal Open\n\tSource license, similar to the BSD or MIT licenses.\n\nand even says they want it moved into Postgres core:\n\n\thttps://www.postgres-xl.org/2017/08/postgres-xl-9-5-r1-6-announced/\n\n\tPostgres-XL is a massively parallel database built on top of,\n\tand very closely compatible with PostgreSQL 9.5 and its set of advanced\n\tfeatures. 
Postgres-XL is fully open source and many parts of it will\n\tfeed back directly or indirectly into later releases of PostgreSQL, as\n\twe begin to move towards a fully parallel sharded version of core PostgreSQL.\n\nso we should understand what can be used from it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 22 Jun 2020 11:06:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Mon, Jun 22, 2020 at 8:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, Jun 20, 2020 at 05:51:21PM +0530, Amit Kapila wrote:\n> > I feel if Sawada-San or someone involved in another patch also once\n> > studies this approach and try to come up with some form of comparison\n> > then we might be able to make better decision. It is possible that\n> > there are few good things in each approach which we can use.\n>\n> Agreed. Postgres-XL code is under the Postgres license:\n>\n> Postgres-XL is released under the PostgreSQL License, a liberal Open\n> Source license, similar to the BSD or MIT licenses.\n>\n> and even says they want it moved into Postgres core:\n>\n> https://www.postgres-xl.org/2017/08/postgres-xl-9-5-r1-6-announced/\n>\n> Postgres-XL is a massively parallel database built on top of,\n> and very closely compatible with PostgreSQL 9.5 and its set of advanced\n> features. Postgres-XL is fully open source and many parts of it will\n> feed back directly or indirectly into later releases of PostgreSQL, as\n> we begin to move towards a fully parallel sharded version of core PostgreSQL.\n>\n> so we should understand what can be used from it.\n>\n\n+1. 
I think that will be quite useful.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:42:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Sat, 20 Jun 2020 at 21:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 19, 2020 at 1:42 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> >\n> > On 6/19/20 11:48 AM, Amit Kapila wrote:\n> > > On Wed, Jun 10, 2020 at 8:36 AM Andrey V. Lepikhov\n> > > <a.lepikhov@postgrespro.ru> wrote:\n> > >> On 09.06.2020 11:41, Fujii Masao wrote:\n> > >>> The patches seem not to be registered in CommitFest yet.\n> > >>> Are you planning to do that?\n> > >> Not now. It is a sharding-related feature. I'm not sure that this\n> > >> approach is fully consistent with the sharding way now.\n> > > Can you please explain in detail, why you think so? There is no\n> > > commit message explaining what each patch does so it is difficult to\n> > > understand why you said so?\n> > For now I used this patch set for providing correct visibility in the\n> > case of access to the table with foreign partitions from many nodes in\n> > parallel. So I saw at this patch set as a sharding-related feature, but\n> > [1] shows another useful application.\n> > CSN-based approach has weak points such as:\n> > 1. Dependency on clocks synchronization\n> > 2. Needs guarantees of monotonically increasing of the CSN in the case\n> > of an instance restart/crash etc.\n> > 3. We need to delay increasing of OldestXmin because it can be needed\n> > for a transaction snapshot at another node.\n> >\n>\n> So, is anyone working on improving these parts of the patch. 
AFAICS\n> from what Bruce has shared [1], some people from HighGo are working on\n> it but I don't see any discussion of that yet.\n>\n> > So I do not have full conviction that it will be better than a single\n> > distributed transaction manager.\n> >\n>\n> When you say \"single distributed transaction manager\" do you mean\n> something like pg_dtm which is inspired by Postgres-XL?\n>\n> > Also, can you let us know if this\n> > > supports 2PC in some way and if so how is it different from what the\n> > > other thread on the same topic [1] is trying to achieve?\n> > Yes, the patch '0003-postgres_fdw-support-for-global-snapshots' contains\n> > 2PC machinery. Now I'd not judge which approach is better.\n> >\n>\n\nSorry for being late.\n\n> Yeah, I have studied both the approaches a little and I feel the main\n> difference seems to be that in this patch atomicity is tightly coupled\n> with how we achieve global visibility, basically in this patch \"all\n> running transactions are marked as InDoubt on all nodes in prepare\n> phase, and after that, each node commit it and stamps each xid with a\n> given GlobalCSN.\". There are no separate APIs for\n> prepare/commit/rollback exposed by postgres_fdw as we do it in the\n> approach followed by Sawada-San's patch. It seems to me in the patch\n> in this email one of postgres_fdw node can be a sort of coordinator\n> which prepares and commit the transaction on all other nodes whereas\n> that is not true in Sawada-San's patch (where the coordinator is a\n> local Postgres node, am I right Sawada-San?).\n\nYeah, where to manage foreign transactions is different: postgres_fdw\nmanages foreign transactions in this patch whereas the PostgreSQL core\ndoes that in that 2PC patch.\n\n>\n> I feel if Sawada-San or someone involved in another patch also once\n> studies this approach and try to come up with some form of comparison\n> then we might be able to make better decision. 
It is possible that\n> there are few good things in each approach which we can use.\n>\n\nI studied this patch and did a simple comparison between this patch\n(0002 patch) and my 2PC patch.\n\nIn terms of atomic commit, the features that are not implemented in\nthis patch but in the 2PC patch are:\n\n* Crash safe.\n* PREPARE TRANSACTION command support.\n* Query cancel during waiting for the commit.\n* Automatically in-doubt transaction resolution.\n\nOn the other hand, the feature that is implemented in this patch but\nnot in the 2PC patch is:\n\n* Executing PREPARE TRANSACTION (and other commands) in parallel\n\nWhen the 2PC patch was proposed, IIRC it was like this patch (0002\npatch). I mean, it changed only postgres_fdw to support 2PC. But after\ndiscussion, we changed the approach to have the core manage foreign\ntransaction for crash-safe. From my perspective, this patch has a\nminimum implementation of 2PC to work the global snapshot feature and\nhas some missing features important for supporting crash-safe atomic\ncommit. So I personally think we should consider how to integrate this\nglobal snapshot feature with the 2PC patch, rather than improving this\npatch if we want crash-safe atomic commit.\n\nLooking at the commit procedure with this patch:\n\nWhen starting a new transaction on a foreign server, postgres_fdw\nexecutes pg_global_snapshot_import() to import the global snapshot.\nAfter some work, in pre-commit phase we do:\n\n1. generate global transaction id, say 'gid'\n2. execute PREPARE TRANSACTION 'gid' on all participants.\n3. prepare global snapshot locally, if the local node also involves\nthe transaction\n4. execute pg_global_snapshot_prepare('gid') for all participants\n\nDuring step 2 to 4, we calculate the maximum CSN from the CSNs\nreturned from each pg_global_snapshot_prepare() executions.\n\n5. assign global snapshot locally, if the local node also involves the\ntransaction\n6. 
execute pg_global_snapshot_assign('gid', max-csn) on all participants.\n\nThen, we commit locally (i.e. 
Another idea would be to change 2PC patch so that the\ncore passes a bunch of participants grouped by FDW.\n\nI’ve not read this patch deeply yet and have considered it without any\ncoding but my first feeling is not hard to integrate this feature with\nthe 2PC patch.\n\nRegards,\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 3 Jul 2020 15:48:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Fri, Jul 3, 2020 at 12:18 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 20 Jun 2020 at 21:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 19, 2020 at 1:42 PM Andrey V. Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >\n> > > Also, can you let us know if this\n> > > > supports 2PC in some way and if so how is it different from what the\n> > > > other thread on the same topic [1] is trying to achieve?\n> > > Yes, the patch '0003-postgres_fdw-support-for-global-snapshots' contains\n> > > 2PC machinery. Now I'd not judge which approach is better.\n> > >\n> >\n>\n> Sorry for being late.\n>\n\nNo problem, your summarization, and comparisons of both approaches are\nquite helpful.\n\n>\n> I studied this patch and did a simple comparison between this patch\n> (0002 patch) and my 2PC patch.\n>\n> In terms of atomic commit, the features that are not implemented in\n> this patch but in the 2PC patch are:\n>\n> * Crash safe.\n> * PREPARE TRANSACTION command support.\n> * Query cancel during waiting for the commit.\n> * Automatically in-doubt transaction resolution.\n>\n> On the other hand, the feature that is implemented in this patch but\n> not in the 2PC patch is:\n>\n> * Executing PREPARE TRANSACTION (and other commands) in parallel\n>\n> When the 2PC patch was proposed, IIRC it was like this patch (0002\n> patch). 
I mean, it changed only postgres_fdw to support 2PC. But after\n> discussion, we changed the approach to have the core manage foreign\n> transaction for crash-safe. From my perspective, this patch has a\n> minimum implementation of 2PC to work the global snapshot feature and\n> has some missing features important for supporting crash-safe atomic\n> commit. So I personally think we should consider how to integrate this\n> global snapshot feature with the 2PC patch, rather than improving this\n> patch if we want crash-safe atomic commit.\n>\n\nOkay, but isn't there some advantage with this approach (manage 2PC at\npostgres_fdw level) as well which is that any node will be capable of\nhandling global transactions rather than doing them via central\ncoordinator? I mean any node can do writes or reads rather than\nprobably routing them (at least writes) via coordinator node. Now, I\nagree that even if this advantage is there in the current approach, we\ncan't lose the crash-safety aspect of other approach. Will you be\nable to summarize what was the problem w.r.t crash-safety and how your\npatch has dealt it?\n\n> Looking at the commit procedure with this patch:\n>\n> When starting a new transaction on a foreign server, postgres_fdw\n> executes pg_global_snapshot_import() to import the global snapshot.\n> After some work, in pre-commit phase we do:\n>\n> 1. generate global transaction id, say 'gid'\n> 2. execute PREPARE TRANSACTION 'gid' on all participants.\n> 3. prepare global snapshot locally, if the local node also involves\n> the transaction\n> 4. execute pg_global_snapshot_prepare('gid') for all participants\n>\n> During step 2 to 4, we calculate the maximum CSN from the CSNs\n> returned from each pg_global_snapshot_prepare() executions.\n>\n> 5. assign global snapshot locally, if the local node also involves the\n> transaction\n> 6. execute pg_global_snapshot_assign('gid', max-csn) on all participants.\n>\n> Then, we commit locally (i.g. 
mark the current transaction as\n> committed in clog).\n>\n> After that, in post-commit phase, execute COMMIT PREPARED 'gid' on all\n> participants.\n>\n\nAs per my current understanding, the overall idea is as follows. For\nglobal transactions, pg_global_snapshot_prepare('gid') will set the\ntransaction status as InDoubt and generate CSN (let's call it NodeCSN)\nat the node where that function is executed, it also returns the\nNodeCSN to the coordinator. Then the coordinator (the current\npostgres_fdw node on which write transaction is being executed)\ncomputes MaxCSN based on the return value (NodeCSN) of prepare\n(pg_global_snapshot_prepare) from all nodes. It then assigns MaxCSN\nto each node. Finally, when Commit Prepared is issued for each node\nthat MaxCSN will be written to each node including the current node.\nSo, with this idea, each node will have the same view of CSN value\ncorresponding to any particular transaction.\n\nFor Snapshot management, the node which receives the query generates a\nCSN (CurrentCSN) and follows the simple rule that the tuple having a\nxid with CSN lesser than CurrentCSN will be visible. Now, it is\npossible that when we are examining a tuple, the CSN corresponding to\nxid that has written the tuple has a value as INDOUBT which will\nindicate that the transaction is yet not committed on all nodes. And\nwe wait till we get the valid CSN value corresponding to xid and then\nuse it to check if the tuple is visible.\n\nNow, one thing to note here is that for global transactions we\nprimarily rely on CSN value corresponding to a transaction for its\nvisibility even though we still maintain CLOG for local transaction\nstatus.\n\nLeaving aside the incomplete parts and or flaws of the current patch,\ndoes the above match the top-level idea of this patch? 
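To make the summarized sequence concrete, here is a toy simulation of it. Everything in it is hypothetical illustration, not code from the patch: plain integers stand in for the CSN timestamps, and prepare()/assign()/commit_prepared() stand in for PREPARE TRANSACTION plus pg_global_snapshot_prepare(), pg_global_snapshot_assign(), and COMMIT PREPARED:

```python
# Toy model of the coordinator-side commit protocol summarized above.
# CSNs are plain integers here; in the patch they are timestamps.

class Node:
    def __init__(self, clock):
        self.clock = clock          # node-local CSN source (clocks may differ)
        self.status = None
        self.pending_csn = None
        self.csn = None

    def prepare(self, gid):
        # pg_global_snapshot_prepare('gid'): mark InDoubt, return NodeCSN
        self.status = "InDoubt"
        return self.clock

    def assign(self, gid, max_csn):
        # pg_global_snapshot_assign('gid', max-csn): remember the agreed CSN
        self.pending_csn = max_csn

    def commit_prepared(self, gid):
        # COMMIT PREPARED 'gid': the agreed CSN becomes the visible commit CSN
        self.status = "Committed"
        self.csn = self.pending_csn

def global_commit(gid, participants):
    # steps 2-4: prepare on every participant, collecting NodeCSNs
    max_csn = max(node.prepare(gid) for node in participants)
    # step 6: hand the agreed MaxCSN back to every participant
    for node in participants:
        node.assign(gid, max_csn)
    # post-commit phase: COMMIT PREPARED everywhere
    for node in participants:
        node.commit_prepared(gid)
    return max_csn

nodes = [Node(clock=100), Node(clock=105), Node(clock=97)]
agreed = global_commit("gid-1", nodes)
```

The point the sketch tries to make visible is the one in the summary: after COMMIT PREPARED, every node holds the identical MaxCSN for the transaction, so all nodes agree on where it falls in the CSN order.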
I am not sure\nif my understanding of this patch at this stage is completely correct\nor whether we want to follow the approach of this patch but I think at\nleast lets first be sure if such a top-level idea can achieve what we\nwant to do here.\n\n> Considering how to integrate this global snapshot feature with the 2PC\n> patch, what the 2PC patch needs to at least change is to allow FDW to\n> store an FDW-private data that is passed to subsequent FDW transaction\n> API calls. Currently, in the current 2PC patch, we call Prepare API\n> for each participant servers one by one, and the core pass only\n> metadata such as ForeignServer, UserMapping, and global transaction\n> identifier. So it's not easy to calculate the maximum CSN across\n> multiple transaction API calls. I think we can change the 2PC patch to\n> add a void pointer into FdwXactRslvState, struct passed from the core,\n> in order to store FDW-private data. It's going to be the maximum CSN\n> in this case. That way, at the first Prepare API calls postgres_fdw\n> allocates the space and stores CSN to that space. And at subsequent\n> Prepare API calls it can calculate the maximum of csn, and then is\n> able to the step 3 to 6 when preparing the transaction on the last\n> participant. Another idea would be to change 2PC patch so that the\n> core passes a bunch of participants grouped by FDW.\n>\n\nIIUC with this the coordinator needs the communication with the nodes\ntwice at the prepare stage, once to prepare the transaction in each\nnode and get CSN from each node and then to communicate MaxCSN to each\nnode? 
Also, we probably need InDoubt CSN status at prepare phase to\nmake snapshots and global visibility work.\n\n> I’ve not read this patch deeply yet and have considered it without any\n> coding but my first feeling is not hard to integrate this feature with\n> the 2PC patch.\n>\n\nOkay.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jul 2020 12:10:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Tue, 7 Jul 2020 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 3, 2020 at 12:18 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sat, 20 Jun 2020 at 21:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 19, 2020 at 1:42 PM Andrey V. Lepikhov\n> > > <a.lepikhov@postgrespro.ru> wrote:\n> > >\n> > > > Also, can you let us know if this\n> > > > > supports 2PC in some way and if so how is it different from what the\n> > > > > other thread on the same topic [1] is trying to achieve?\n> > > > Yes, the patch '0003-postgres_fdw-support-for-global-snapshots' contains\n> > > > 2PC machinery. 
Now I'd not judge which approach is better.\n> > > >\n> > >\n> >\n> > Sorry for being late.\n> >\n>\n> No problem, your summarization, and comparisons of both approaches are\n> quite helpful.\n>\n> >\n> > I studied this patch and did a simple comparison between this patch\n> > (0002 patch) and my 2PC patch.\n> >\n> > In terms of atomic commit, the features that are not implemented in\n> > this patch but in the 2PC patch are:\n> >\n> > * Crash safe.\n> > * PREPARE TRANSACTION command support.\n> > * Query cancel during waiting for the commit.\n> > * Automatically in-doubt transaction resolution.\n> >\n> > On the other hand, the feature that is implemented in this patch but\n> > not in the 2PC patch is:\n> >\n> > * Executing PREPARE TRANSACTION (and other commands) in parallel\n> >\n> > When the 2PC patch was proposed, IIRC it was like this patch (0002\n> > patch). I mean, it changed only postgres_fdw to support 2PC. But after\n> > discussion, we changed the approach to have the core manage foreign\n> > transaction for crash-safe. From my perspective, this patch has a\n> > minimum implementation of 2PC to work the global snapshot feature and\n> > has some missing features important for supporting crash-safe atomic\n> > commit. So I personally think we should consider how to integrate this\n> > global snapshot feature with the 2PC patch, rather than improving this\n> > patch if we want crash-safe atomic commit.\n> >\n>\n> Okay, but isn't there some advantage with this approach (manage 2PC at\n> postgres_fdw level) as well which is that any node will be capable of\n> handling global transactions rather than doing them via central\n> coordinator? I mean any node can do writes or reads rather than\n> probably routing them (at least writes) via coordinator node.\n\nThe postgres server where the client started the transaction works as\nthe coordinator node. I think this is true for both this patch and\nthat 2PC patch. 
From the perspective of atomic commit, any node will\nbe capable of handling global transactions in both approaches.\n\n> Now, I\n> agree that even if this advantage is there in the current approach, we\n> can't lose the crash-safety aspect of other approach. Will you be\n> able to summarize what was the problem w.r.t crash-safety and how your\n> patch has dealt it?\n\nSince this patch proceeds 2PC without any logging, foreign\ntransactions prepared on foreign servers are left over without any\nclues if the coordinator crashes during commit. Therefore, after\nrestart, the user will need to find and resolve in-doubt foreign\ntransactions manually.\n\nIn that 2PC patch, the information of foreign transactions is WAL\nlogged before PREPARE TRANSACTION. So even if the coordinator crashes\nafter preparing some foreign transactions, the prepared foreign\ntransactions are recovered during crash recovery, and then the\ntransaction resolver resolves them automatically or the user also can\nresolve them. The user doesn't need to check other participants node\nto resolve in-doubt foreign transactions. Also, since the foreign\ntransaction information is replicated to physical standbys the new\nmaster can take over resolving in-doubt transactions.\n\n>\n> > Looking at the commit procedure with this patch:\n> >\n> > When starting a new transaction on a foreign server, postgres_fdw\n> > executes pg_global_snapshot_import() to import the global snapshot.\n> > After some work, in pre-commit phase we do:\n> >\n> > 1. generate global transaction id, say 'gid'\n> > 2. execute PREPARE TRANSACTION 'gid' on all participants.\n> > 3. prepare global snapshot locally, if the local node also involves\n> > the transaction\n> > 4. execute pg_global_snapshot_prepare('gid') for all participants\n> >\n> > During step 2 to 4, we calculate the maximum CSN from the CSNs\n> > returned from each pg_global_snapshot_prepare() executions.\n> >\n> > 5. 
assign global snapshot locally, if the local node also involves the\n> > transaction\n> > 6. execute pg_global_snapshot_assign('gid', max-csn) on all participants.\n> >\n> > Then, we commit locally (i.g. mark the current transaction as\n> > committed in clog).\n> >\n> > After that, in post-commit phase, execute COMMIT PREPARED 'gid' on all\n> > participants.\n> >\n>\n> As per my current understanding, the overall idea is as follows. For\n> global transactions, pg_global_snapshot_prepare('gid') will set the\n> transaction status as InDoubt and generate CSN (let's call it NodeCSN)\n> at the node where that function is executed, it also returns the\n> NodeCSN to the coordinator. Then the coordinator (the current\n> postgres_fdw node on which write transaction is being executed)\n> computes MaxCSN based on the return value (NodeCSN) of prepare\n> (pg_global_snapshot_prepare) from all nodes. It then assigns MaxCSN\n> to each node. Finally, when Commit Prepared is issued for each node\n> that MaxCSN will be written to each node including the current node.\n> So, with this idea, each node will have the same view of CSN value\n> corresponding to any particular transaction.\n>\n> For Snapshot management, the node which receives the query generates a\n> CSN (CurrentCSN) and follows the simple rule that the tuple having a\n> xid with CSN lesser than CurrentCSN will be visible. Now, it is\n> possible that when we are examining a tuple, the CSN corresponding to\n> xid that has written the tuple has a value as INDOUBT which will\n> indicate that the transaction is yet not committed on all nodes. 
And\n> we wait till we get the valid CSN value corresponding to xid and then\n> use it to check if the tuple is visible.\n>\n> Now, one thing to note here is that for global transactions we\n> primarily rely on CSN value corresponding to a transaction for its\n> visibility even though we still maintain CLOG for local transaction\n> status.\n>\n> Leaving aside the incomplete parts and or flaws of the current patch,\n> does the above match the top-level idea of this patch?\n\nI'm still studying this patch but your understanding seems right to me.\n\n> I am not sure\n> if my understanding of this patch at this stage is completely correct\n> or whether we want to follow the approach of this patch but I think at\n> least lets first be sure if such a top-level idea can achieve what we\n> want to do here.\n>\n> > Considering how to integrate this global snapshot feature with the 2PC\n> > patch, what the 2PC patch needs to at least change is to allow FDW to\n> > store an FDW-private data that is passed to subsequent FDW transaction\n> > API calls. Currently, in the current 2PC patch, we call Prepare API\n> > for each participant servers one by one, and the core pass only\n> > metadata such as ForeignServer, UserMapping, and global transaction\n> > identifier. So it's not easy to calculate the maximum CSN across\n> > multiple transaction API calls. I think we can change the 2PC patch to\n> > add a void pointer into FdwXactRslvState, struct passed from the core,\n> > in order to store FDW-private data. It's going to be the maximum CSN\n> > in this case. That way, at the first Prepare API calls postgres_fdw\n> > allocates the space and stores CSN to that space. And at subsequent\n> > Prepare API calls it can calculate the maximum of csn, and then is\n> > able to the step 3 to 6 when preparing the transaction on the last\n> > participant. 
Another idea would be to change 2PC patch so that the\n> > core passes a bunch of participants grouped by FDW.\n> >\n>\n> IIUC with this the coordinator needs the communication with the nodes\n> twice at the prepare stage, once to prepare the transaction in each\n> node and get CSN from each node and then to communicate MaxCSN to each\n> node?\n\nYes, I think so too.\n\n> Also, we probably need InDoubt CSN status at prepare phase to\n> make snapshots and global visibility work.\n\nI think it depends on how global CSN feature works.\n\nFor instance, in that 2PC patch, if the coordinator crashes during\npreparing a foreign transaction, the global transaction manager\nrecovers and regards it as \"prepared\" regardless of the foreign\ntransaction actually having been prepared. And it sends ROLLBACK\nPREPARED after recovery completed. With global CSN patch, as you\nmentioned, at prepare phase the coordinator needs to communicate\nparticipants twice other than sending PREPARE TRANSACTION:\npg_global_snapshot_prepare() and pg_global_snapshot_assign().\n\nIf global CSN patch needs different cleanup work depending on the CSN\nstatus, we will need InDoubt CSN status so that the global transaction\nmanager can distinguish between a foreign transaction that has\nexecuted pg_global_snapshot_prepare() and the one that has executed\npg_global_snapshot_assign().\n\nOn the other hand, if it's enough to just send ROLLBACK or ROLLBACK\nPREPARED in that case, I think we don't need InDoubt CSN status. There\nis no difference between those foreign transactions from the global\ntransaction manager perspective.\n\nAs far as I read the patch, on failure postgres_fdw simply send\nROLLBACK PREPARED to participants, and there seems no additional work\nother than that. 
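That uniform-cleanup observation can be sketched as follows: a resolver that treats every leftover transaction the same way regardless of how far the pg_global_snapshot_* steps got needs no per-CSN-status bookkeeping. All names here are hypothetical, not from either patch:

```python
# Toy resolver: if cleanup is the same ROLLBACK PREPARED no matter whether
# pg_global_snapshot_prepare()/assign() already ran, the resolver does not
# need to distinguish the in-doubt CSN states.

class Participant:
    def __init__(self, prepared_gids):
        self.prepared = set(prepared_gids)

    def rollback_prepared(self, gid):
        # Tolerate "no such prepared transaction": this participant may have
        # crashed (or been unreachable) before PREPARE TRANSACTION completed.
        self.prepared.discard(gid)

def resolve_in_doubt(gid, participants):
    for p in participants:
        p.rollback_prepared(gid)
    return all(gid not in p.prepared for p in participants)

# one participant prepared the transaction, one never got that far
parts = [Participant({"gid-1"}), Participant(set())]
ok = resolve_in_doubt("gid-1", parts)
```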
I might be missing something.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 14:46:02 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Wed, Jul 8, 2020 at 11:16 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Jul 2020 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Okay, but isn't there some advantage with this approach (manage 2PC at\n> > postgres_fdw level) as well which is that any node will be capable of\n> > handling global transactions rather than doing them via central\n> > coordinator? I mean any node can do writes or reads rather than\n> > probably routing them (at least writes) via coordinator node.\n>\n> The postgres server where the client started the transaction works as\n> the coordinator node. I think this is true for both this patch and\n> that 2PC patch. From the perspective of atomic commit, any node will\n> be capable of handling global transactions in both approaches.\n>\n\nOkay, but then probably we need to ensure that GID has to be unique\neven if that gets generated on different nodes? I don't know if that\nis ensured.\n\n> > Now, I\n> > agree that even if this advantage is there in the current approach, we\n> > can't lose the crash-safety aspect of other approach. Will you be\n> > able to summarize what was the problem w.r.t crash-safety and how your\n> > patch has dealt it?\n>\n> Since this patch proceeds 2PC without any logging, foreign\n> transactions prepared on foreign servers are left over without any\n> clues if the coordinator crashes during commit. 
Therefore, after\n> restart, the user will need to find and resolve in-doubt foreign\n> transactions manually.\n>\n\nOkay, but is it because we can't directly WAL log in postgres_fdw or\nthere is some other reason for not doing so?\n\n>\n> >\n> > > Looking at the commit procedure with this patch:\n> > >\n> > > When starting a new transaction on a foreign server, postgres_fdw\n> > > executes pg_global_snapshot_import() to import the global snapshot.\n> > > After some work, in pre-commit phase we do:\n> > >\n> > > 1. generate global transaction id, say 'gid'\n> > > 2. execute PREPARE TRANSACTION 'gid' on all participants.\n> > > 3. prepare global snapshot locally, if the local node also involves\n> > > the transaction\n> > > 4. execute pg_global_snapshot_prepare('gid') for all participants\n> > >\n> > > During step 2 to 4, we calculate the maximum CSN from the CSNs\n> > > returned from each pg_global_snapshot_prepare() executions.\n> > >\n> > > 5. assign global snapshot locally, if the local node also involves the\n> > > transaction\n> > > 6. execute pg_global_snapshot_assign('gid', max-csn) on all participants.\n> > >\n> > > Then, we commit locally (i.g. mark the current transaction as\n> > > committed in clog).\n> > >\n> > > After that, in post-commit phase, execute COMMIT PREPARED 'gid' on all\n> > > participants.\n> > >\n> >\n> > As per my current understanding, the overall idea is as follows. For\n> > global transactions, pg_global_snapshot_prepare('gid') will set the\n> > transaction status as InDoubt and generate CSN (let's call it NodeCSN)\n> > at the node where that function is executed, it also returns the\n> > NodeCSN to the coordinator. Then the coordinator (the current\n> > postgres_fdw node on which write transaction is being executed)\n> > computes MaxCSN based on the return value (NodeCSN) of prepare\n> > (pg_global_snapshot_prepare) from all nodes. It then assigns MaxCSN\n> > to each node. 
Finally, when Commit Prepared is issued for each node\n> > that MaxCSN will be written to each node including the current node.\n> > So, with this idea, each node will have the same view of CSN value\n> > corresponding to any particular transaction.\n> >\n> > For Snapshot management, the node which receives the query generates a\n> > CSN (CurrentCSN) and follows the simple rule that the tuple having a\n> > xid with CSN lesser than CurrentCSN will be visible. Now, it is\n> > possible that when we are examining a tuple, the CSN corresponding to\n> > xid that has written the tuple has a value as INDOUBT which will\n> > indicate that the transaction is yet not committed on all nodes. And\n> > we wait till we get the valid CSN value corresponding to xid and then\n> > use it to check if the tuple is visible.\n> >\n> > Now, one thing to note here is that for global transactions we\n> > primarily rely on CSN value corresponding to a transaction for its\n> > visibility even though we still maintain CLOG for local transaction\n> > status.\n> >\n> > Leaving aside the incomplete parts and or flaws of the current patch,\n> > does the above match the top-level idea of this patch?\n>\n> I'm still studying this patch but your understanding seems right to me.\n>\n\nCool. While studying, if you can try to think whether this approach is\ndifferent from the global coordinator based approach then it would be\ngreat. Here is my initial thought apart from other reasons the global\ncoordinator based design can help us to do the global transaction\nmanagement and snapshots. It can allocate xids for each transaction\nand then collect the list of running xacts (or CSN) from each node and\nthen prepare a global snapshot that can be used to perform any\ntransaction.\n\nOTOH, in the design proposed in this patch, we don't need any\ncoordinator to manage transactions and snapshots because each node's\ncurrent CSN will be sufficient for snapshot and visibility as\nexplained above. 
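A minimal sketch of that decentralized rule, with integer CSNs standing in for the patch's timestamps (hypothetical code, not from the patch):

```python
# In this scheme a snapshot is just the CSN the receiving node generates
# locally at transaction start; no central coordinator is consulted.

def take_snapshot(node_current_csn):
    return node_current_csn

def tuple_visible(tuple_commit_csn, snapshot_csn):
    # Visible iff the writing transaction committed with a CSN earlier
    # than the snapshot's CSN; None models a not-yet-committed writer.
    return tuple_commit_csn is not None and tuple_commit_csn < snapshot_csn

snap = take_snapshot(200)
```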
Now, sure this assumes that there is no clock skew\non different nodes or somehow we take care of the same (Note that in\nthe proposed patch the CSN is a timestamp.).\n\n> > I am not sure\n> > if my understanding of this patch at this stage is completely correct\n> > or whether we want to follow the approach of this patch but I think at\n> > least lets first be sure if such a top-level idea can achieve what we\n> > want to do here.\n> >\n> > > Considering how to integrate this global snapshot feature with the 2PC\n> > > patch, what the 2PC patch needs to at least change is to allow FDW to\n> > > store an FDW-private data that is passed to subsequent FDW transaction\n> > > API calls. Currently, in the current 2PC patch, we call Prepare API\n> > > for each participant servers one by one, and the core pass only\n> > > metadata such as ForeignServer, UserMapping, and global transaction\n> > > identifier. So it's not easy to calculate the maximum CSN across\n> > > multiple transaction API calls. I think we can change the 2PC patch to\n> > > add a void pointer into FdwXactRslvState, struct passed from the core,\n> > > in order to store FDW-private data. It's going to be the maximum CSN\n> > > in this case. That way, at the first Prepare API calls postgres_fdw\n> > > allocates the space and stores CSN to that space. And at subsequent\n> > > Prepare API calls it can calculate the maximum of csn, and then is\n> > > able to the step 3 to 6 when preparing the transaction on the last\n> > > participant. 
Another idea would be to change 2PC patch so that the\n> > > core passes a bunch of participants grouped by FDW.\n> > >\n> >\n> > IIUC with this the coordinator needs the communication with the nodes\n> > twice at the prepare stage, once to prepare the transaction in each\n> > node and get CSN from each node and then to communicate MaxCSN to each\n> > node?\n>\n> Yes, I think so too.\n>\n> > Also, we probably need InDoubt CSN status at prepare phase to\n> > make snapshots and global visibility work.\n>\n> I think it depends on how global CSN feature works.\n>\n> For instance, in that 2PC patch, if the coordinator crashes during\n> preparing a foreign transaction, the global transaction manager\n> recovers and regards it as \"prepared\" regardless of the foreign\n> transaction actually having been prepared. And it sends ROLLBACK\n> PREPARED after recovery completed. With global CSN patch, as you\n> mentioned, at prepare phase the coordinator needs to communicate\n> participants twice other than sending PREPARE TRANSACTION:\n> pg_global_snapshot_prepare() and pg_global_snapshot_assign().\n>\n> If global CSN patch needs different cleanup work depending on the CSN\n> status, we will need InDoubt CSN status so that the global transaction\n> manager can distinguish between a foreign transaction that has\n> executed pg_global_snapshot_prepare() and the one that has executed\n> pg_global_snapshot_assign().\n>\n> On the other hand, if it's enough to just send ROLLBACK or ROLLBACK\n> PREPARED in that case, I think we don't need InDoubt CSN status. There\n> is no difference between those foreign transactions from the global\n> transaction manager perspective.\n>\n\nI think InDoubt status helps in checking visibility in the proposed\npatch wherein if we find the status of the transaction as InDoubt, we\nwait till we get some valid CSN for it as explained in my previous\nemail. 
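Roughly, the visibility check with that wait could look like the following sketch. The names are hypothetical, and a real implementation would sleep on a latch rather than poll:

```python
import time

INDOUBT = object()   # sentinel: prepared, but final CSN not yet assigned

def wait_for_csn(get_csn, poll_interval=0.001, timeout=1.0):
    # Block until the writing transaction's CSN is no longer InDoubt,
    # then return it (None models an aborted writer).
    deadline = time.monotonic() + timeout
    while True:
        csn = get_csn()
        if csn is not INDOUBT:
            return csn
        if time.monotonic() >= deadline:
            raise TimeoutError("transaction still InDoubt")
        time.sleep(poll_interval)

def visible(get_csn, snapshot_csn):
    csn = wait_for_csn(get_csn)
    return csn is not None and csn < snapshot_csn

# simulate a writer whose CSN flips from InDoubt to 120 after a few polls
answers = iter([INDOUBT, INDOUBT, 120])
result = visible(lambda: next(answers), snapshot_csn=130)
```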
So whether we use it for Rollback/Rollback Prepared, it is\nrequired for this design.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 18:04:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Wed, 8 Jul 2020 at 21:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 8, 2020 at 11:16 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Jul 2020 at 15:40, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Okay, but isn't there some advantage with this approach (manage 2PC at\n> > > postgres_fdw level) as well which is that any node will be capable of\n> > > handling global transactions rather than doing them via central\n> > > coordinator? I mean any node can do writes or reads rather than\n> > > probably routing them (at least writes) via coordinator node.\n> >\n> > The postgres server where the client started the transaction works as\n> > the coordinator node. I think this is true for both this patch and\n> > that 2PC patch. From the perspective of atomic commit, any node will\n> > be capable of handling global transactions in both approaches.\n> >\n>\n> Okay, but then probably we need to ensure that GID has to be unique\n> even if that gets generated on different nodes? I don't know if that\n> is ensured.\n\nYes, if you mean GID is global transaction id specified to PREPARE\nTRANSACTION, it has to be unique. In that 2PC patch, GID is generated\nin form of 'fx_<random string>_<server oid>_<user oid>'. I believe it\ncan ensure uniqueness in most cases. In addition, there is FDW API to\ngenerate an arbitrary identifier.\n\n>\n> > > Now, I\n> > > agree that even if this advantage is there in the current approach, we\n> > > can't lose the crash-safety aspect of other approach. 
Will you be\n> > > able to summarize what was the problem w.r.t crash-safety and how your\n> > > patch has dealt it?\n> >\n> > Since this patch proceeds 2PC without any logging, foreign\n> > transactions prepared on foreign servers are left over without any\n> > clues if the coordinator crashes during commit. Therefore, after\n> > restart, the user will need to find and resolve in-doubt foreign\n> > transactions manually.\n> >\n>\n> Okay, but is it because we can't directly WAL log in postgres_fdw or\n> there is some other reason for not doing so?\n\nYes, I think it is because we cannot WAL log in postgres_fdw. Maybe I\nmissed the point in your question. Please correct me if I missed\nsomething.\n\n>\n> >\n> > >\n> > > > Looking at the commit procedure with this patch:\n> > > >\n> > > > When starting a new transaction on a foreign server, postgres_fdw\n> > > > executes pg_global_snapshot_import() to import the global snapshot.\n> > > > After some work, in pre-commit phase we do:\n> > > >\n> > > > 1. generate global transaction id, say 'gid'\n> > > > 2. execute PREPARE TRANSACTION 'gid' on all participants.\n> > > > 3. prepare global snapshot locally, if the local node also involves\n> > > > the transaction\n> > > > 4. execute pg_global_snapshot_prepare('gid') for all participants\n> > > >\n> > > > During step 2 to 4, we calculate the maximum CSN from the CSNs\n> > > > returned from each pg_global_snapshot_prepare() executions.\n> > > >\n> > > > 5. assign global snapshot locally, if the local node also involves the\n> > > > transaction\n> > > > 6. execute pg_global_snapshot_assign('gid', max-csn) on all participants.\n> > > >\n> > > > Then, we commit locally (i.g. mark the current transaction as\n> > > > committed in clog).\n> > > >\n> > > > After that, in post-commit phase, execute COMMIT PREPARED 'gid' on all\n> > > > participants.\n> > > >\n> > >\n> > > As per my current understanding, the overall idea is as follows. 
For\n> > > global transactions, pg_global_snapshot_prepare('gid') will set the\n> > > transaction status as InDoubt and generate CSN (let's call it NodeCSN)\n> > > at the node where that function is executed, it also returns the\n> > > NodeCSN to the coordinator. Then the coordinator (the current\n> > > postgres_fdw node on which write transaction is being executed)\n> > > computes MaxCSN based on the return value (NodeCSN) of prepare\n> > > (pg_global_snapshot_prepare) from all nodes. It then assigns MaxCSN\n> > > to each node. Finally, when Commit Prepared is issued for each node\n> > > that MaxCSN will be written to each node including the current node.\n> > > So, with this idea, each node will have the same view of CSN value\n> > > corresponding to any particular transaction.\n> > >\n> > > For Snapshot management, the node which receives the query generates a\n> > > CSN (CurrentCSN) and follows the simple rule that the tuple having a\n> > > xid with CSN lesser than CurrentCSN will be visible. Now, it is\n> > > possible that when we are examining a tuple, the CSN corresponding to\n> > > xid that has written the tuple has a value as INDOUBT which will\n> > > indicate that the transaction is yet not committed on all nodes. And\n> > > we wait till we get the valid CSN value corresponding to xid and then\n> > > use it to check if the tuple is visible.\n> > >\n> > > Now, one thing to note here is that for global transactions we\n> > > primarily rely on CSN value corresponding to a transaction for its\n> > > visibility even though we still maintain CLOG for local transaction\n> > > status.\n> > >\n> > > Leaving aside the incomplete parts and or flaws of the current patch,\n> > > does the above match the top-level idea of this patch?\n> >\n> > I'm still studying this patch but your understanding seems right to me.\n> >\n>\n> Cool. 
While studying, if you can try to think whether this approach is\n> different from the global coordinator based approach then it would be\n> great. Here is my initial thought apart from other reasons the global\n> coordinator based design can help us to do the global transaction\n> management and snapshots. It can allocate xids for each transaction\n> and then collect the list of running xacts (or CSN) from each node and\n> then prepare a global snapshot that can be used to perform any\n> transaction. OTOH, in the design proposed in this patch, we don't need any\n> coordinator to manage transactions and snapshots because each node's\n> current CSN will be sufficient for snapshot and visibility as\n> explained above.\n\nYeah, my thought is the same as you. Since both approaches have strong\npoints and weak points I cannot mention which is a better approach,\nbut that 2PC patch would go well together with the design proposed in\nthis patch.\n\n> Now, sure this assumes that there is no clock skew\n> on different nodes or somehow we take care of the same (Note that in\n> the proposed patch the CSN is a timestamp.).\n\nAs far as I read Clock-SI paper, we take care of the clock skew by\nputting some waits on the transaction start and reading tuples on the\nremote node.\n\n>\n> > > I am not sure\n> > > if my understanding of this patch at this stage is completely correct\n> > > or whether we want to follow the approach of this patch but I think at\n> > > least lets first be sure if such a top-level idea can achieve what we\n> > > want to do here.\n> > >\n> > > > Considering how to integrate this global snapshot feature with the 2PC\n> > > > patch, what the 2PC patch needs to at least change is to allow FDW to\n> > > > store an FDW-private data that is passed to subsequent FDW transaction\n> > > > API calls. 
Currently, in the current 2PC patch, we call Prepare API\n> > > > for each participant servers one by one, and the core pass only\n> > > > metadata such as ForeignServer, UserMapping, and global transaction\n> > > > identifier. So it's not easy to calculate the maximum CSN across\n> > > > multiple transaction API calls. I think we can change the 2PC patch to\n> > > > add a void pointer into FdwXactRslvState, struct passed from the core,\n> > > > in order to store FDW-private data. It's going to be the maximum CSN\n> > > > in this case. That way, at the first Prepare API calls postgres_fdw\n> > > > allocates the space and stores CSN to that space. And at subsequent\n> > > > Prepare API calls it can calculate the maximum of csn, and then is\n> > > > able to the step 3 to 6 when preparing the transaction on the last\n> > > > participant. Another idea would be to change 2PC patch so that the\n> > > > core passes a bunch of participants grouped by FDW.\n> > > >\n> > >\n> > > IIUC with this the coordinator needs the communication with the nodes\n> > > twice at the prepare stage, once to prepare the transaction in each\n> > > node and get CSN from each node and then to communicate MaxCSN to each\n> > > node?\n> >\n> > Yes, I think so too.\n> >\n> > > Also, we probably need InDoubt CSN status at prepare phase to\n> > > make snapshots and global visibility work.\n> >\n> > I think it depends on how global CSN feature works.\n> >\n> > For instance, in that 2PC patch, if the coordinator crashes during\n> > preparing a foreign transaction, the global transaction manager\n> > recovers and regards it as \"prepared\" regardless of the foreign\n> > transaction actually having been prepared. And it sends ROLLBACK\n> > PREPARED after recovery completed. 
With global CSN patch, as you\n> > mentioned, at prepare phase the coordinator needs to communicate\n> > participants twice other than sending PREPARE TRANSACTION:\n> > pg_global_snapshot_prepare() and pg_global_snapshot_assign().\n> >\n> > If global CSN patch needs different cleanup work depending on the CSN\n> > status, we will need InDoubt CSN status so that the global transaction\n> > manager can distinguish between a foreign transaction that has\n> > executed pg_global_snapshot_prepare() and the one that has executed\n> > pg_global_snapshot_assign().\n> >\n> > On the other hand, if it's enough to just send ROLLBACK or ROLLBACK\n> > PREPARED in that case, I think we don't need InDoubt CSN status. There\n> > is no difference between those foreign transactions from the global\n> > transaction manager perspective.\n> >\n>\n> I think InDoubt status helps in checking visibility in the proposed\n> patch wherein if we find the status of the transaction as InDoubt, we\n> wait till we get some valid CSN for it as explained in my previous\n> email. So whether we use it for Rollback/Rollback Prepared, it is\n> required for this design.\n\nYes, InDoubt status is required for checking visibility. My comment\nwas it's not necessary from the perspective of atomic commit.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 12:16:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Fri, Jul 10, 2020 at 8:46 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 8 Jul 2020 at 21:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Cool. While studying, if you can try to think whether this approach is\n> > different from the global coordinator based approach then it would be\n> > great. 
Here is my initial thought apart from other reasons the global\n> > coordinator based design can help us to do the global transaction\n> > management and snapshots. It can allocate xids for each transaction\n> > and then collect the list of running xacts (or CSN) from each node and\n> > then prepare a global snapshot that can be used to perform any\n> > transaction. OTOH, in the design proposed in this patch, we don't need any\n> > coordinator to manage transactions and snapshots because each node's\n> > current CSN will be sufficient for snapshot and visibility as\n> > explained above.\n>\n> Yeah, my thought is the same as you. Since both approaches have strong\n> points and weak points I cannot mention which is a better approach,\n> but that 2PC patch would go well together with the design proposed in\n> this patch.\n>\n\nI also think with some modifications we might be able to integrate\nyour 2PC patch with the patches proposed here. However, if we decide\nnot to pursue this approach then it is uncertain whether your proposed\npatch can be further enhanced for global visibility. 
Does it make\nsense to dig the design of this approach a bit further so that we can\nbe somewhat more sure that pursuing your 2PC patch would be a good\nidea and we can, in fact, enhance it later for global visibility?\nAFAICS, Andrey has mentioned couple of problems with this approach\n[1], the details of which I am also not sure at this stage but if we\ncan dig those it would be really great.\n\n> > Now, sure this assumes that there is no clock skew\n> > on different nodes or somehow we take care of the same (Note that in\n> > the proposed patch the CSN is a timestamp.).\n>\n> As far as I read Clock-SI paper, we take care of the clock skew by\n> putting some waits on the transaction start and reading tuples on the\n> remote node.\n>\n\nOh, but I am not sure if this patch is able to solve that, and if so, how?\n\n> >\n> > I think InDoubt status helps in checking visibility in the proposed\n> > patch wherein if we find the status of the transaction as InDoubt, we\n> > wait till we get some valid CSN for it as explained in my previous\n> > email. So whether we use it for Rollback/Rollback Prepared, it is\n> > required for this design.\n>\n> Yes, InDoubt status is required for checking visibility. My comment\n> was it's not necessary from the perspective of atomic commit.\n>\n\nTrue and probably we can enhance your patch for InDoubt status if required.\n\nThanks for moving this work forward. 
I know the progress is a bit\nslow due to various reasons but I think it is important to keep making\nsome progress.\n\n[1] - https://www.postgresql.org/message-id/f23083b9-38d0-6126-eb6e-091816a78585%40postgrespro.ru\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:48:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hello,\r\n\r\nWhile I'm thinking of the following issues of the current approach Andrey raised, I'm getting puzzled and can't help asking certain things. Please forgive me if I'm missing some discussions in the past.\r\n\r\n> 1. Dependency on clocks synchronization\r\n> 2. Needs guarantees of monotonically increasing of the CSN in the case \r\n> of an instance restart/crash etc.\r\n> 3. We need to delay increasing of OldestXmin because it can be needed \r\n> for a transaction snapshot at another node.\r\n\r\nWhile Clock-SI seems to be considered the best promising for global serializability here,\r\n\r\n* Why does Clock-SI gets so much attention? How did Clock-SI become the only choice?\r\n\r\n* Clock-SI was devised in Microsoft Research. Does Microsoft or some other organization use Clock-SI?\r\n\r\n\r\nHave anyone examined the following Multiversion Commitment Ordering (MVCO)? Although I haven't understood this yet, it insists that no concurrency control information including timestamps needs to be exchanged among the cluster nodes. I'd appreciate it if someone could give an opinion.\r\n\r\nCommitment Ordering Based Distributed Concurrency Control for Bridging Single and Multi Version Resources.\r\n Proceedings of the Third IEEE International Workshop on Research Issues on Data Engineering: Interoperability in Multidatabase Systems (RIDE-IMS), Vienna, Austria, pp. 189-198, April 1993. 
(also DEC-TR 853, July 1992)\r\nhttps://ieeexplore.ieee.org/document/281924?arnumber=281924\r\n\r\n\r\nThe author of the above paper, Yoav Raz, seems to have had strong passion at least until 2011 about making people believe the mightiness of Commitment Ordering (CO) for global serializability. However, he complains (sadly) that almost all researchers ignore his theory, as written in his following site and wikipedia page for Commitment Ordering. Does anyone know why CO is ignored?\r\n\r\nCommitment ordering (CO) - yoavraz2\r\nhttps://sites.google.com/site/yoavraz2/the_principle_of_co\r\n\r\n\r\nFWIW, some researchers including Michael Stonebraker evaluated the performance of various distributed concurrency control methods in 2017. Have anyone looked at this? (I don't mean there was some promising method that we might want to adopt.)\r\n\r\nAn Evaluation of Distributed Concurrency Control\r\nRachael Harding, Dana Van Aken, Andrew Pavlo, and Michael Stonebraker. 2017.\r\nProc. VLDB Endow. 10, 5 (January 2017), 553-564. \r\nhttps://doi.org/10.14778/3055540.3055548\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 23 Jul 2020 04:46:30 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "On Mon, 13 Jul 2020 at 20:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 10, 2020 at 8:46 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 8 Jul 2020 at 21:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Cool. While studying, if you can try to think whether this approach is\n> > > different from the global coordinator based approach then it would be\n> > > great. Here is my initial thought apart from other reasons the global\n> > > coordinator based design can help us to do the global transaction\n> > > management and snapshots. 
It can allocate xids for each transaction\n> > > and then collect the list of running xacts (or CSN) from each node and\n> > > then prepare a global snapshot that can be used to perform any\n> > > transaction. OTOH, in the design proposed in this patch, we don't need any\n> > > coordinator to manage transactions and snapshots because each node's\n> > > current CSN will be sufficient for snapshot and visibility as\n> > > explained above.\n> >\n> > Yeah, my thought is the same as you. Since both approaches have strong\n> > points and weak points I cannot mention which is a better approach,\n> > but that 2PC patch would go well together with the design proposed in\n> > this patch.\n> >\n>\n> I also think with some modifications we might be able to integrate\n> your 2PC patch with the patches proposed here. However, if we decide\n> not to pursue this approach then it is uncertain whether your proposed\n> patch can be further enhanced for global visibility.\n\nYes. I think even if we decide not to pursue this approach it's not\nthe reason for not pursuing the 2PC patch. 
If so, we would need to\nconsider the design of 2PC patch again so it generically resolves the\natomic commit problem.\n\n> Does it make\n> sense to dig the design of this approach a bit further so that we can\n> be somewhat more sure that pursuing your 2PC patch would be a good\n> idea and we can, in fact, enhance it later for global visibility?\n\nAgreed.\n\n> AFAICS, Andrey has mentioned couple of problems with this approach\n> [1], the details of which I am also not sure at this stage but if we\n> can dig those it would be really great.\n>\n> > > Now, sure this assumes that there is no clock skew\n> > > on different nodes or somehow we take care of the same (Note that in\n> > > the proposed patch the CSN is a timestamp.).\n> >\n> > As far as I read Clock-SI paper, we take care of the clock skew by\n> > putting some waits on the transaction start and reading tuples on the\n> > remote node.\n> >\n>\n> Oh, but I am not sure if this patch is able to solve that, and if so, how?\n\nI'm not sure of the details, but as far as I read the patch I guess the\ntransaction will sleep at GlobalSnapshotSync() when the received\nglobal csn is greater than the local global csn.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 23 Jul 2020 15:13:12 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hello,\r\n\r\nWhile I'm thinking of the following issues of the current approach Andrey raised, I'm getting puzzled and can't help asking certain things. Please forgive me if I'm missing some discussions in the past.\r\n\r\n> 1. Dependency on clocks synchronization\r\n> 2. Needs guarantees of monotonically increasing of the CSN in the case \r\n> of an instance restart/crash etc.\r\n> 3. We need to delay increasing of OldestXmin because it can be needed \r\n> for a transaction snapshot at another node.\r\n\r\nWhile Clock-SI seems to be considered the best promising for global serializability here,\r\n\r\n* Why does Clock-SI gets so much attention? How did Clock-SI become the only choice?\r\n\r\n* Clock-SI was devised in Microsoft Research. Does Microsoft or some other organization use Clock-SI?\r\n\r\n\r\nHave anyone examined the following Multiversion Commitment Ordering (MVCO)? Although I haven't understood this yet, it insists that no concurrency control information including timestamps needs to be exchanged among the cluster nodes. I'd appreciate it if someone could give an opinion.\r\n\r\nCommitment Ordering Based Distributed Concurrency Control for Bridging Single and Multi Version Resources.\r\n Proceedings of the Third IEEE International Workshop on Research Issues on Data Engineering: Interoperability in Multidatabase Systems (RIDE-IMS), Vienna, Austria, pp. 189-198, April 1993. 
I'm afraid this is the Clock-SI for MVCC. Microsoft holds this until 2031. I couldn't find this with the keyword \"Clock-SI.\"\"\r\n\r\n\r\nUS8356007B2 - Distributed transaction management for database systems with multiversioning - Google Patents\r\nhttps://patents.google.com/patent/US8356007\r\n\r\n\r\nIf it is, can we circumvent this patent?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Mon, 27 Jul 2020 06:22:45 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "On 7/27/20 11:22 AM, tsunakawa.takay@fujitsu.com wrote:\n> Hi Andrey san, Movead san,\n> \n> \n> From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\n>> While Clock-SI seems to be considered the best promising for global\n>> serializability here,\n>>\n>> * Why does Clock-SI gets so much attention? How did Clock-SI become the\n>> only choice?\n>>\n>> * Clock-SI was devised in Microsoft Research. Does Microsoft or some other\n>> organization use Clock-SI?\n> \n> Could you take a look at this patent? I'm afraid this is the Clock-SI for MVCC. Microsoft holds this until 2031. I couldn't find this with the keyword \"Clock-SI.\"\"\n> \n> \n> US8356007B2 - Distributed transaction management for database systems with multiversioning - Google Patents\n> https://patents.google.com/patent/US8356007\n> \n> \n> If it is, can we circumvent this patent?\n> \n> \n> Regards\n> Takayuki Tsunakawa\n> \n> \n\nThank you for the research (and previous links too).\nI haven't seen this patent before. This should be carefully studied.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 27 Jul 2020 11:44:54 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hi,\n\nOn 2020-07-27 09:44, Andrey V. 
Lepikhov wrote:\n> On 7/27/20 11:22 AM, tsunakawa.takay@fujitsu.com wrote:\n>> \n>> US8356007B2 - Distributed transaction management for database systems \n>> with multiversioning - Google Patents\n>> https://patents.google.com/patent/US8356007\n>> \n>> \n>> If it is, can we circumvent this patent?\n>> \n> \n> Thank you for the research (and previous links too).\n> I haven't seen this patent before. This should be carefully studied.\n\nI had a look on the patch set, although it is quite outdated, especially \non 0003.\n\nTwo thoughts about 0003:\n\nFirst, IIUC atomicity of the distributed transaction in the postgres_fdw \nis achieved by the usage of 2PC. I think that this postgres_fdw 2PC \nsupport should be separated from global snapshots. It could be useful to \nhave such atomic distributed transactions even without a proper \nvisibility, which is guaranteed by the global snapshot. Especially \ntaking into account the doubts about Clock-SI and general questions \nabout algorithm choosing criteria above in the thread.\n\nThus, I propose to split 0003 into two parts and add a separate GUC \n'postgres_fdw.use_twophase', which could be turned on independently from \n'postgres_fdw.use_global_snapshots'. 
Of course if the latter is enabled, \nthen 2PC should be forcedly turned on as well.\n\nSecond, there are some problems with errors handling in the 0003 (thanks \nto Arseny Sher for review).\n\n+error:\n+\t\t\tif (!res)\n+\t\t\t{\n+\t\t\t\tsql = psprintf(\"ABORT PREPARED '%s'\", fdwTransState->gid);\n+\t\t\t\tBroadcastCmd(sql);\n+\t\t\t\telog(ERROR, \"Failed to PREPARE transaction on remote node\");\n+\t\t\t}\n\nIt seems that we should never reach this point, just because \nBroadcastStmt will throw an ERROR if it fails to prepare transaction on \nthe foreign server:\n\n+\t\t\tif (PQresultStatus(result) != expectedStatus ||\n+\t\t\t\t(handler && !handler(result, arg)))\n+\t\t\t{\n+\t\t\t\telog(WARNING, \"Failed command %s: status=%d, expected status=%d\", \nsql, PQresultStatus(result), expectedStatus);\n+\t\t\t\tpgfdw_report_error(ERROR, result, entry->conn, true, sql);\n+\t\t\t\tallOk = false;\n+\t\t\t}\n\nMoreover, It doesn't make much sense to try to abort prepared xacts, \nsince if we failed to prepare it somewhere, then some foreign servers \nmay become unavailable already and this doesn't provide us a 100% \nguarantee of clean up.\n\n+\t/* COMMIT open transaction of we were doing 2PC */\n+\tif (fdwTransState->two_phase_commit &&\n+\t\t(event == XACT_EVENT_PARALLEL_COMMIT || event == XACT_EVENT_COMMIT))\n+\t{\n+\t\tBroadcastCmd(psprintf(\"COMMIT PREPARED '%s'\", fdwTransState->gid));\n+\t}\n\nAt this point, the host (local) transaction is already committed and \nthere is no way to abort it gracefully. However, BroadcastCmd may rise \nan ERROR that will cause a PANIC, since it is non-recoverable state:\n\nPANIC: cannot abort transaction 487, it was already committed\n\nAttached is a patch, which implements a plain 2PC in the postgres_fdw \nand adds a GUC 'postgres_fdw.use_twophase'. Also it solves these errors \nhandling issues above and tries to add proper comments everywhere. 
I \nthink, that 0003 should be rebased on the top of it, or it could be a \nfirst patch in the set, since it may be used independently. What do you \nthink?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Fri, 04 Sep 2020 21:31:14 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/05 3:31, Alexey Kondratov wrote:\n> Hi,\n> \n> On 2020-07-27 09:44, Andrey V. Lepikhov wrote:\n>> On 7/27/20 11:22 AM, tsunakawa.takay@fujitsu.com wrote:\n>>>\n>>> US8356007B2 - Distributed transaction management for database systems with multiversioning - Google Patents\n>>> https://patents.google.com/patent/US8356007\n>>>\n>>>\n>>> If it is, can we circumvent this patent?\n>>>\n>>\n>> Thank you for the research (and previous links too).\n>> I haven't seen this patent before. This should be carefully studied.\n> \n> I had a look on the patch set, although it is quite outdated, especially on 0003.\n> \n> Two thoughts about 0003:\n> \n> First, IIUC atomicity of the distributed transaction in the postgres_fdw is achieved by the usage of 2PC. I think that this postgres_fdw 2PC support should be separated from global snapshots.\n\nAgreed.\n\n\n> It could be useful to have such atomic distributed transactions even without a proper visibility, which is guaranteed by the global snapshot. Especially taking into account the doubts about Clock-SI and general questions about algorithm choosing criteria above in the thread.\n> \n> Thus, I propose to split 0003 into two parts and add a separate GUC 'postgres_fdw.use_twophase', which could be turned on independently from 'postgres_fdw.use_global_snapshots'. 
Of course if the latter is enabled, then 2PC should be forcedly turned on as well.\n> \n> Second, there are some problems with errors handling in the 0003 (thanks to Arseny Sher for review).\n> \n> +error:\n> +\t\t\tif (!res)\n> +\t\t\t{\n> +\t\t\t\tsql = psprintf(\"ABORT PREPARED '%s'\", fdwTransState->gid);\n> +\t\t\t\tBroadcastCmd(sql);\n> +\t\t\t\telog(ERROR, \"Failed to PREPARE transaction on remote node\");\n> +\t\t\t}\n> \n> It seems that we should never reach this point, just because BroadcastStmt will throw an ERROR if it fails to prepare transaction on the foreign server:\n> \n> +\t\t\tif (PQresultStatus(result) != expectedStatus ||\n> +\t\t\t\t(handler && !handler(result, arg)))\n> +\t\t\t{\n> +\t\t\t\telog(WARNING, \"Failed command %s: status=%d, expected status=%d\", sql, PQresultStatus(result), expectedStatus);\n> +\t\t\t\tpgfdw_report_error(ERROR, result, entry->conn, true, sql);\n> +\t\t\t\tallOk = false;\n> +\t\t\t}\n> \n> Moreover, It doesn't make much sense to try to abort prepared xacts, since if we failed to prepare it somewhere, then some foreign servers may become unavailable already and this doesn't provide us a 100% guarantee of clean up.\n> \n> +\t/* COMMIT open transaction of we were doing 2PC */\n> +\tif (fdwTransState->two_phase_commit &&\n> +\t\t(event == XACT_EVENT_PARALLEL_COMMIT || event == XACT_EVENT_COMMIT))\n> +\t{\n> +\t\tBroadcastCmd(psprintf(\"COMMIT PREPARED '%s'\", fdwTransState->gid));\n> +\t}\n> \n> At this point, the host (local) transaction is already committed and there is no way to abort it gracefully. However, BroadcastCmd may rise an ERROR that will cause a PANIC, since it is non-recoverable state:\n> \n> PANIC:  cannot abort transaction 487, it was already committed\n> \n> Attached is a patch, which implements a plain 2PC in the postgres_fdw and adds a GUC 'postgres_fdw.use_twophase'. 
Also it solves these errors handling issues above and tries to add proper comments everywhere. I think, that 0003 should be rebased on the top of it, or it could be a first patch in the set, since it may be used independently. What do you think?\n\nThanks for the patch!\n\nSawada-san was proposing another 2PC patch at [1]. Do you have any thoughts\nabout pros and cons between your patch and Sawada-san's?\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Sep 2020 11:49:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 2020-09-08 05:49, Fujii Masao wrote:\n> On 2020/09/05 3:31, Alexey Kondratov wrote:\n>> \n>> Attached is a patch, which implements a plain 2PC in the postgres_fdw \n>> and adds a GUC 'postgres_fdw.use_twophase'. Also it solves these \n>> errors handling issues above and tries to add proper comments \n>> everywhere. I think, that 0003 should be rebased on the top of it, or \n>> it could be a first patch in the set, since it may be used \n>> independently. What do you think?\n> \n> Thanks for the patch!\n> \n> Sawada-san was proposing another 2PC patch at [1]. Do you have any \n> thoughts\n> about pros and cons between your patch and Sawada-san's?\n> \n> [1]\n> https://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n\nThank you for the link!\n\nAfter a quick look on the Sawada-san's patch set I think that there are \ntwo major differences:\n\n1. There is a built-in foreign xacts resolver in the [1], which should \nbe much more convenient from the end-user perspective. 
It involves huge \nin-core changes and additional complexity that is of course worth of.\n\nHowever, it's still not clear for me that it is possible to resolve all \nforeign prepared xacts on the Postgres' own side with a 100% guarantee. \nImagine a situation when the coordinator node is actually a HA cluster \ngroup (primary + sync + async replica) and it failed just after PREPARE \nstage of after local COMMIT. In that case all foreign xacts will be left \nin the prepared state. After failover process complete synchronous \nreplica will become a new primary. Would it have all required info to \nproperly resolve orphan prepared xacts?\n\nProbably, this situation is handled properly in the [1], but I've not \nyet finished a thorough reading of the patch set, though it has a great \ndoc!\n\nOn the other hand, previous 0003 and my proposed patch rely on either \nmanual resolution of hung prepared xacts or usage of external \nmonitor/resolver. This approach is much simpler from the in-core \nperspective, but doesn't look as complete as [1] though.\n\n2. In the patch from this thread all 2PC logic sit in the postgres_fdw, \nwhile [1] tries to put it into the generic fdw core, which also feels \nlike a more general and architecturally correct way. However, how many \nfrom the currently available dozens of various FDWs are capable to \nperform 2PC? And how many of them are maintained well enough to adopt \nthis new API? 
This is not an argument against [1] actually, since \npostgres_fdw is known to be the most advanced FDW and an early adopter \nof new feature, just a little doubt about a usefulness of this \npreliminary generalisation.\n\nAnyway, I think that [1] is a great work and really hope to find more \ntime to investigate it deeper later this year.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 08 Sep 2020 13:36:16 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/08 19:36, Alexey Kondratov wrote:\n> On 2020-09-08 05:49, Fujii Masao wrote:\n>> On 2020/09/05 3:31, Alexey Kondratov wrote:\n>>>\n>>> Attached is a patch, which implements a plain 2PC in the postgres_fdw and adds a GUC 'postgres_fdw.use_twophase'. Also it solves these errors handling issues above and tries to add proper comments everywhere. I think, that 0003 should be rebased on the top of it, or it could be a first patch in the set, since it may be used independently. What do you think?\n>>\n>> Thanks for the patch!\n>>\n>> Sawada-san was proposing another 2PC patch at [1]. Do you have any thoughts\n>> about pros and cons between your patch and Sawada-san's?\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n> \n> Thank you for the link!\n> \n> After a quick look on the Sawada-san's patch set I think that there are two major differences:\n\nThanks for sharing your thought! As far as I read your patch quickly,\nI basically agree with your this view.\n\n\n> \n> 1. There is a built-in foreign xacts resolver in the [1], which should be much more convenient from the end-user perspective. 
It involves huge in-core changes and additional complexity that is of course worth of.\n> \n> However, it's still not clear for me that it is possible to resolve all foreign prepared xacts on the Postgres' own side with a 100% guarantee. Imagine a situation when the coordinator node is actually a HA cluster group (primary + sync + async replica) and it failed just after PREPARE stage of after local COMMIT. In that case all foreign xacts will be left in the prepared state. After failover process complete synchronous replica will become a new primary. Would it have all required info to properly resolve orphan prepared xacts?\n\nIIUC, yes, the information required for automatic resolution is\nWAL-logged and the standby tries to resolve those orphan transactions\nfrom WAL after the failover. But Sawada-san's patch provides\nthe special function for manual resolution, so there may be some cases\nwhere manual resolution is necessary.\n\n\n> \n> Probably, this situation is handled properly in the [1], but I've not yet finished a thorough reading of the patch set, though it has a great doc!\n> \n> On the other hand, previous 0003 and my proposed patch rely on either manual resolution of hung prepared xacts or usage of external monitor/resolver. This approach is much simpler from the in-core perspective, but doesn't look as complete as [1] though.\n> \n> 2. In the patch from this thread all 2PC logic sit in the postgres_fdw, while [1] tries to put it into the generic fdw core, which also feels like a more general and architecturally correct way. However, how many from the currently available dozens of various FDWs are capable to perform 2PC? And how many of them are maintained well enough to adopt this new API? 
This is not an argument against [1] actually, since postgres_fdw is known to be the most advanced FDW and an early adopter of new feature, just a little doubt about a usefulness of this preliminary generalisation.\n\nIf we implement 2PC feature only for PostgreSQL sharding using\npostgres_fdw, IMO it's ok to support only postgres_fdw.\nBut if we implement 2PC as the improvement on FDW independently\nfrom PostgreSQL sharding and global visibility, I think that it's\nnecessary to support other FDW. I'm not sure how many FDW\nactually will support this new 2PC interface. But if the interface is\nnot so complicated, I *guess* some FDW will support it in the near future.\n\nImplementing 2PC feature only inside postgres_fdw seems to cause\nanother issue; COMMIT PREPARED is issued to the remote servers\nafter marking the local transaction as committed\n(i.e., ProcArrayEndTransaction()). Is this safe? This issue happens\nbecause COMMIT PREPARED is issued via\nCallXactCallbacks(XACT_EVENT_COMMIT) and that CallXactCallbacks()\nis called after ProcArrayEndTransaction().\n\n\n> \n> Anyway, I think that [1] is a great work and really hope to find more time to investigate it deeper later this year.\n\nI'm sure your work is also great! I hope we can discuss the design\nof 2PC feature together!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Sep 2020 20:48:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 2020-09-08 14:48, Fujii Masao wrote:\n> On 2020/09/08 19:36, Alexey Kondratov wrote:\n>> On 2020-09-08 05:49, Fujii Masao wrote:\n>>> On 2020/09/05 3:31, Alexey Kondratov wrote:\n>>>> \n>>>> Attached is a patch, which implements a plain 2PC in the \n>>>> postgres_fdw and adds a GUC 'postgres_fdw.use_twophase'. 
Also it \n>>>> solves these errors handling issues above and tries to add proper \n>>>> comments everywhere. I think, that 0003 should be rebased on the top \n>>>> of it, or it could be a first patch in the set, since it may be used \n>>>> independently. What do you think?\n>>> \n>>> Thanks for the patch!\n>>> \n>>> Sawada-san was proposing another 2PC patch at [1]. Do you have any \n>>> thoughts\n>>> about pros and cons between your patch and Sawada-san's?\n>>> \n>>> [1]\n>>> https://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n>> \n>> Thank you for the link!\n>> \n>> After a quick look on the Sawada-san's patch set I think that there \n>> are two major differences:\n> \n> Thanks for sharing your thought! As far as I read your patch quickly,\n> I basically agree with your this view.\n> \n> \n>> \n>> 1. There is a built-in foreign xacts resolver in the [1], which should \n>> be much more convenient from the end-user perspective. It involves \n>> huge in-core changes and additional complexity that is of course worth \n>> of.\n>> \n>> However, it's still not clear for me that it is possible to resolve \n>> all foreign prepared xacts on the Postgres' own side with a 100% \n>> guarantee. Imagine a situation when the coordinator node is actually a \n>> HA cluster group (primary + sync + async replica) and it failed just \n>> after PREPARE stage of after local COMMIT. In that case all foreign \n>> xacts will be left in the prepared state. After failover process \n>> complete synchronous replica will become a new primary. Would it have \n>> all required info to properly resolve orphan prepared xacts?\n> \n> IIUC, yes, the information required for automatic resolution is\n> WAL-logged and the standby tries to resolve those orphan transactions\n> from WAL after the failover. 
But Sawada-san's patch provides\n> the special function for manual resolution, so there may be some cases\n> where manual resolution is necessary.\n> \n\nI've found a note about manual resolution in the v25 0002:\n\n+After that we prepare all foreign transactions by calling\n+PrepareForeignTransaction() API. If we failed on any of them we change \nto\n+rollback, therefore at this time some participants might be prepared \nwhereas\n+some are not prepared. The former foreign transactions need to be \nresolved\n+using pg_resolve_foreign_xact() manually and the latter ends \ntransaction\n+in one-phase by calling RollbackForeignTransaction() API.\n\nbut it's not yet clear for me.\n\n> \n> Implementing 2PC feature only inside postgres_fdw seems to cause\n> another issue; COMMIT PREPARED is issued to the remote servers\n> after marking the local transaction as committed\n> (i.e., ProcArrayEndTransaction()).\n> \n\nAccording to the Sawada-san's v25 0002 the logic is pretty much the same \nthere:\n\n+2. Pre-Commit phase (1st phase of two-phase commit)\n\n+3. Commit locally\n+Once we've prepared all of them, commit the transaction locally.\n\n+4. Post-Commit Phase (2nd phase of two-phase commit)\n\nBrief look at the code confirms this scheme. IIUC, AtEOXact_FdwXact / \nFdwXactParticipantEndTransaction happens after ProcArrayEndTransaction() \nin the CommitTransaction(). Thus, I don't see many difference between \nthese approach and CallXactCallbacks() usage regarding this point.\n\n> Is this safe? This issue happens\n> because COMMIT PREPARED is issued via\n> CallXactCallbacks(XACT_EVENT_COMMIT) and that CallXactCallbacks()\n> is called after ProcArrayEndTransaction().\n> \n\nOnce the transaction is committed locally any ERROR (or higher level \nmessage) will be escalated to PANIC. 
And I do see possible ERROR level \nmessages in the postgresCommitForeignTransaction() for example:\n\n+\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n+\t\tereport(ERROR, (errmsg(\"could not commit transaction on server %s\",\n+\t\t\t\t\t\t\t frstate->server->servername)));\n\nI don't think that it's very convenient to get a PANIC every time we \nfailed to commit one of the prepared foreign xacts, since it could be \nnot so rare in the distributed system. That's why I tried to get rid of \npossible ERRORs as far as possible in my proposed patch.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 08 Sep 2020 20:00:44 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Wed, 9 Sep 2020 at 02:00, Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> On 2020-09-08 14:48, Fujii Masao wrote:\n> > On 2020/09/08 19:36, Alexey Kondratov wrote:\n> >> On 2020-09-08 05:49, Fujii Masao wrote:\n> >>> On 2020/09/05 3:31, Alexey Kondratov wrote:\n> >>>>\n> >>>> Attached is a patch, which implements a plain 2PC in the\n> >>>> postgres_fdw and adds a GUC 'postgres_fdw.use_twophase'. Also it\n> >>>> solves these errors handling issues above and tries to add proper\n> >>>> comments everywhere. I think, that 0003 should be rebased on the top\n> >>>> of it, or it could be a first patch in the set, since it may be used\n> >>>> independently. What do you think?\n> >>>\n> >>> Thanks for the patch!\n> >>>\n> >>> Sawada-san was proposing another 2PC patch at [1]. 
Do you have any\n> >>> thoughts\n> >>> about pros and cons between your patch and Sawada-san's?\n> >>>\n> >>> [1]\n> >>> https://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n> >>\n> >> Thank you for the link!\n> >>\n> >> After a quick look on the Sawada-san's patch set I think that there\n> >> are two major differences:\n> >\n> > Thanks for sharing your thought! As far as I read your patch quickly,\n> > I basically agree with your this view.\n> >\n> >\n> >>\n> >> 1. There is a built-in foreign xacts resolver in the [1], which should\n> >> be much more convenient from the end-user perspective. It involves\n> >> huge in-core changes and additional complexity that is of course worth\n> >> of.\n> >>\n> >> However, it's still not clear for me that it is possible to resolve\n> >> all foreign prepared xacts on the Postgres' own side with a 100%\n> >> guarantee. Imagine a situation when the coordinator node is actually a\n> >> HA cluster group (primary + sync + async replica) and it failed just\n> >> after PREPARE stage of after local COMMIT. In that case all foreign\n> >> xacts will be left in the prepared state. After failover process\n> >> complete synchronous replica will become a new primary. Would it have\n> >> all required info to properly resolve orphan prepared xacts?\n> >\n> > IIUC, yes, the information required for automatic resolution is\n> > WAL-logged and the standby tries to resolve those orphan transactions\n> > from WAL after the failover. But Sawada-san's patch provides\n> > the special function for manual resolution, so there may be some cases\n> > where manual resolution is necessary.\n> >\n>\n> I've found a note about manual resolution in the v25 0002:\n>\n> +After that we prepare all foreign transactions by calling\n> +PrepareForeignTransaction() API. 
If we failed on any of them we change\n> to\n> +rollback, therefore at this time some participants might be prepared\n> whereas\n> +some are not prepared. The former foreign transactions need to be\n> resolved\n> +using pg_resolve_foreign_xact() manually and the latter ends\n> transaction\n> +in one-phase by calling RollbackForeignTransaction() API.\n>\n> but it's not yet clear for me.\n\nSorry, the above description in README is out of date. In the v25\npatch, it's true that if a backend fails to prepare a transaction on a\nforeign server, it’s possible that some foreign transactions are\nprepared whereas others are not. But at the end of the transaction\nafter changing to rollback, the process does rollback (or rollback\nprepared) all of them. So the use case of pg_resolve_foreign_xact() is\nto resolve orphaned foreign prepared transactions or to resolve a\nforeign transaction that is not resolved for some reasons, bugs etc.\n\n>\n> >\n> > Implementing 2PC feature only inside postgres_fdw seems to cause\n> > another issue; COMMIT PREPARED is issued to the remote servers\n> > after marking the local transaction as committed\n> > (i.e., ProcArrayEndTransaction()).\n> >\n>\n> According to the Sawada-san's v25 0002 the logic is pretty much the same\n> there:\n>\n> +2. Pre-Commit phase (1st phase of two-phase commit)\n>\n> +3. Commit locally\n> +Once we've prepared all of them, commit the transaction locally.\n>\n> +4. Post-Commit Phase (2nd phase of two-phase commit)\n>\n> Brief look at the code confirms this scheme. IIUC, AtEOXact_FdwXact /\n> FdwXactParticipantEndTransaction happens after ProcArrayEndTransaction()\n> in the CommitTransaction(). Thus, I don't see many difference between\n> these approach and CallXactCallbacks() usage regarding this point.\n>\n> > Is this safe? 
This issue happens\n> > because COMMIT PREPARED is issued via\n> > CallXactCallbacks(XACT_EVENT_COMMIT) and that CallXactCallbacks()\n> > is called after ProcArrayEndTransaction().\n> >\n>\n> Once the transaction is committed locally any ERROR (or higher level\n> message) will be escalated to PANIC.\n\nI think this is true only inside the critical section and it's not\nnecessarily true for all errors happening after the local commit,\nright?\n\n> And I do see possible ERROR level\n> messages in the postgresCommitForeignTransaction() for example:\n>\n> + if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> + ereport(ERROR, (errmsg(\"could not commit transaction on server %s\",\n> + frstate->server->servername)));\n>\n> I don't think that it's very convenient to get a PANIC every time we\n> failed to commit one of the prepared foreign xacts, since it could be\n> not so rare in the distributed system. That's why I tried to get rid of\n> possible ERRORs as far as possible in my proposed patch.\n>\n\nIn my patch, the second phase of 2PC is executed only by the resolver\nprocess. Therefore, even if an error would happen during committing a\nforeign prepared transaction, we just need to relaunch the resolver\nprocess and trying again. During that, the backend process will be\njust waiting. If a backend process raises an error after the local\ncommit, the client will see transaction failure despite the local\ntransaction having been committed. An error could happen even by\npalloc. 
So the patch uses a background worker to commit prepared\nforeign transactions, not by backend itself.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 9 Sep 2020 14:35:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 2020-09-09 08:35, Masahiko Sawada wrote:\n> On Wed, 9 Sep 2020 at 02:00, Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> \n>> On 2020-09-08 14:48, Fujii Masao wrote:\n>> >\n>> > IIUC, yes, the information required for automatic resolution is\n>> > WAL-logged and the standby tries to resolve those orphan transactions\n>> > from WAL after the failover. But Sawada-san's patch provides\n>> > the special function for manual resolution, so there may be some cases\n>> > where manual resolution is necessary.\n>> >\n>> \n>> I've found a note about manual resolution in the v25 0002:\n>> \n>> +After that we prepare all foreign transactions by calling\n>> +PrepareForeignTransaction() API. If we failed on any of them we \n>> change\n>> to\n>> +rollback, therefore at this time some participants might be prepared\n>> whereas\n>> +some are not prepared. The former foreign transactions need to be\n>> resolved\n>> +using pg_resolve_foreign_xact() manually and the latter ends\n>> transaction\n>> +in one-phase by calling RollbackForeignTransaction() API.\n>> \n>> but it's not yet clear for me.\n> \n> Sorry, the above description in README is out of date. In the v25\n> patch, it's true that if a backend fails to prepare a transaction on a\n> foreign server, it’s possible that some foreign transactions are\n> prepared whereas others are not. But at the end of the transaction\n> after changing to rollback, the process does rollback (or rollback\n> prepared) all of them. 
So the use case of pg_resolve_foreign_xact() is\n> to resolve orphaned foreign prepared transactions or to resolve a\n> foreign transaction that is not resolved for some reasons, bugs etc.\n> \n\nOK, thank you for the explanation!\n\n>> \n>> Once the transaction is committed locally any ERROR (or higher level\n>> message) will be escalated to PANIC.\n> \n> I think this is true only inside the critical section and it's not\n> necessarily true for all errors happening after the local commit,\n> right?\n> \n\nIt's not actually related to error escalation in critical sections. Any \nerror in the backend after the local commit and \nProcArrayEndTransaction() will try to abort the current transaction and \ndo RecordTransactionAbort(), but it's too late to do so and PANIC will \nbe raised:\n\n\t/*\n\t * Check that we haven't aborted halfway through \nRecordTransactionCommit.\n\t */\n\tif (TransactionIdDidCommit(xid))\n\t\telog(PANIC, \"cannot abort transaction %u, it was already committed\",\n\t\t\t xid);\n\nAt least that's how I understand it.\n\n\n>> And I do see possible ERROR level\n>> messages in the postgresCommitForeignTransaction() for example:\n>> \n>> +\tif (PQresultStatus(res) != PGRES_COMMAND_OK)\n>> +\t\tereport(ERROR, (errmsg(\"could not commit transaction \n>> on server %s\",\n>> +\t\t\t\t\t\t\t \n>> frstate->server->servername)));\n>> \n>> I don't think that it's very convenient to get a PANIC every time we\n>> failed to commit one of the prepared foreign xacts, since it could be\n>> not so rare in the distributed system. That's why I tried to get rid \n>> of\n>> possible ERRORs as far as possible in my proposed patch.\n>> \n> \n> In my patch, the second phase of 2PC is executed only by the resolver\n> process. Therefore, even if an error would happen during committing a\n> foreign prepared transaction, we just need to relaunch the resolver\n> process and trying again. During that, the backend process will be\n> just waiting. 
If a backend process raises an error after the local\n> commit, the client will see transaction failure despite the local\n> transaction having been committed. An error could happen even by\n> palloc. So the patch uses a background worker to commit prepared\n> foreign transactions, not by backend itself.\n> \n\nYes, if it's a background process, then it seems to be safe.\n\nBTW, it seems that I've chosen a wrong thread for posting my patch and \nstarting a discussion :) Activity from this thread moved to [1] and your \nsolution with built-in resolver is discussed [2]. I'll try to take a \nlook at v25 closely and write to [2] instead.\n\n\n[1] \nhttps://www.postgresql.org/message-id/2020081009525213277261%40highgo.ca\n\n[2] \nhttps://www.postgresql.org/message-id/CAExHW5uBy9QwjdSO4j82WC4aeW-Q4n2ouoZ1z70o%3D8Vb0skqYQ%40mail.gmail.com\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Wed, 09 Sep 2020 13:45:02 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/09 2:00, Alexey Kondratov wrote:\n> On 2020-09-08 14:48, Fujii Masao wrote:\n>> On 2020/09/08 19:36, Alexey Kondratov wrote:\n>>> On 2020-09-08 05:49, Fujii Masao wrote:\n>>>> On 2020/09/05 3:31, Alexey Kondratov wrote:\n>>>>>\n>>>>> Attached is a patch, which implements a plain 2PC in the postgres_fdw and adds a GUC 'postgres_fdw.use_twophase'. Also it solves these errors handling issues above and tries to add proper comments everywhere. I think, that 0003 should be rebased on the top of it, or it could be a first patch in the set, since it may be used independently. What do you think?\n>>>>\n>>>> Thanks for the patch!\n>>>>\n>>>> Sawada-san was proposing another 2PC patch at [1]. 
Do you have any thoughts\n>>>> about pros and cons between your patch and Sawada-san's?\n>>>>\n>>>> [1]\n>>>> https://www.postgresql.org/message-id/CA+fd4k4z6_B1ETEvQamwQhu4RX7XsrN5ORL7OhJ4B5B6sW-RgQ@mail.gmail.com\n>>>\n>>> Thank you for the link!\n>>>\n>>> After a quick look on the Sawada-san's patch set I think that there are two major differences:\n>>\n>> Thanks for sharing your thought! As far as I read your patch quickly,\n>> I basically agree with your this view.\n>>\n>>\n>>>\n>>> 1. There is a built-in foreign xacts resolver in the [1], which should be much more convenient from the end-user perspective. It involves huge in-core changes and additional complexity that is of course worth of.\n>>>\n>>> However, it's still not clear for me that it is possible to resolve all foreign prepared xacts on the Postgres' own side with a 100% guarantee. Imagine a situation when the coordinator node is actually a HA cluster group (primary + sync + async replica) and it failed just after PREPARE stage of after local COMMIT. In that case all foreign xacts will be left in the prepared state. After failover process complete synchronous replica will become a new primary. Would it have all required info to properly resolve orphan prepared xacts?\n>>\n>> IIUC, yes, the information required for automatic resolution is\n>> WAL-logged and the standby tries to resolve those orphan transactions\n>> from WAL after the failover. But Sawada-san's patch provides\n>> the special function for manual resolution, so there may be some cases\n>> where manual resolution is necessary.\n>>\n> \n> I've found a note about manual resolution in the v25 0002:\n> \n> +After that we prepare all foreign transactions by calling\n> +PrepareForeignTransaction() API. If we failed on any of them we change to\n> +rollback, therefore at this time some participants might be prepared whereas\n> +some are not prepared. 
The former foreign transactions need to be resolved\n> +using pg_resolve_foreign_xact() manually and the latter ends transaction\n> +in one-phase by calling RollbackForeignTransaction() API.\n> \n> but it's not yet clear for me.\n> \n>>\n>> Implementing 2PC feature only inside postgres_fdw seems to cause\n>> another issue; COMMIT PREPARED is issued to the remote servers\n>> after marking the local transaction as committed\n>> (i.e., ProcArrayEndTransaction()).\n>>\n> \n> According to the Sawada-san's v25 0002 the logic is pretty much the same there:\n> \n> +2. Pre-Commit phase (1st phase of two-phase commit)\n> \n> +3. Commit locally\n> +Once we've prepared all of them, commit the transaction locally.\n> \n> +4. Post-Commit Phase (2nd phase of two-phase commit)\n> \n> Brief look at the code confirms this scheme. IIUC, AtEOXact_FdwXact / FdwXactParticipantEndTransaction happens after ProcArrayEndTransaction() in the CommitTransaction(). Thus, I don't see many difference between these approach and CallXactCallbacks() usage regarding this point.\n\nIIUC the commit logic in Sawada-san's patch looks like\n\n1. PreCommit_FdwXact()\n PREPARE TRANSACTION command is issued\n\n2. RecordTransactionCommit()\n 2-1. WAL-log the commit record\n 2-2. Update CLOG\n 2-3. Wait for sync rep\n 2-4. FdwXactWaitForResolution()\n Wait until COMMIT PREPARED commands are issued to the remote servers and completed.\n\n3. ProcArrayEndTransaction()\n4. AtEOXact_FdwXact(true)\n\nSo ISTM that the timing of when COMMIT PREPARED is issued\nto the remote server is different between the patches.\nAm I missing something?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 10 Sep 2020 02:29:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hi Andrey san,\r\n\r\nFrom: Andrey V. 
Lepikhov <a.lepikhov@postgrespro.ru>> > From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\r\n> >> While Clock-SI seems to be considered the best promising for global\r\n> >>> > Could you take a look at this patent? I'm afraid this is the Clock-SI for MVCC.\r\n> Microsoft holds this until 2031. I couldn't find this with the keyword\r\n> \"Clock-SI.\"\"\r\n> >\r\n> >\r\n> > US8356007B2 - Distributed transaction management for database systems\r\n> with multiversioning - Google Patents\r\n> > https://patents.google.com/patent/US8356007\r\n> >\r\n> >\r\n> > If it is, can we circumvent this patent?\r\n> >> \r\n> Thank you for the research (and previous links too).\r\n> I haven't seen this patent before. This should be carefully studied.\r\n\r\nI wanted to ask about this after I've published the revised scale-out design wiki, but I'm taking too long, so could you share your study results? I think we need to make it clear about the patent before discussing the code. After we hear your opinion, we also have to check to see if Clock-SI is patented or avoid it by modifying part of the algorithm. Just in case we cannot use it, we have to proceed with thinking about alternatives.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 10 Sep 2020 01:38:15 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/10 10:38, tsunakawa.takay@fujitsu.com wrote:\n> Hi Andrey san,\n> \n> From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>> > From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\n>>>> While Clock-SI seems to be considered the best promising for global\n>>>>>> Could you take a look at this patent? I'm afraid this is the Clock-SI for MVCC.\n>> Microsoft holds this until 2031. 
I couldn't find this with the keyword\n>> \"Clock-SI.\"\"\n>>>\n>>>\n>>> US8356007B2 - Distributed transaction management for database systems\n>> with multiversioning - Google Patents\n>>> https://patents.google.com/patent/US8356007\n>>>\n>>>\n>>> If it is, can we circumvent this patent?\n>>>>\n>> Thank you for the research (and previous links too).\n>> I haven't seen this patent before. This should be carefully studied.\n> \n> I wanted to ask about this after I've published the revised scale-out design wiki, but I'm taking too long, so could you share your study results? I think we need to make it clear about the patent before discussing the code.\n\nYes.\n\nBut I'm concerned about that it's really hard to say there is no patent risk\naround that. I'm not sure who can judge there is no patent risk,\nin the community. Maybe no one? Anyway, I was thinking that Google Spanner,\nYugabyteDB, etc use the global transaction approach based on the clock\nsimilar to Clock-SI. Since I've never heard they have the patent issues,\nI was just thinking Clock-SI doesn't have. No? This type of *guess* is not\nsafe, though...\n\n\n> After we hear your opinion, we also have to check to see if Clock-SI is patented or avoid it by modifying part of the algorithm. Just in case we cannot use it, we have to proceed with thinking about alternatives.\n\nOne alternative is to add only hooks into PostgreSQL core so that we can\nimplement the global transaction management outside. 
This idea was\ndiscussed before as the title \"eXtensible Transaction Manager API\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 10 Sep 2020 17:16:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "From: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> But I'm concerned about that it's really hard to say there is no patent risk\r\n> around that. I'm not sure who can judge there is no patent risk,\r\n> in the community. Maybe no one? Anyway, I was thinking that Google Spanner,\r\n> YugabyteDB, etc use the global transaction approach based on the clock\r\n> similar to Clock-SI. Since I've never heard they have the patent issues,\r\n> I was just thinking Clock-SI doesn't have. No? This type of *guess* is not\r\n> safe, though...\r\n\r\nHm, it may be difficult to be sure that the algorithm does not violate a patent. But it may not be difficult to know if the algorithm apparently violates a patent or is highly likely (for those who know Clock-SI well.) At least, Andrey-san seems to have felt that it needs careful study, so I guess he had some hunch.\r\n\r\nI understand this community is sensitive to patents. After the discussions at and after PGCon 2018, the community concluded that it won't accept patented technology. In the distant past, the community released Postgres 8.0 that contains an IBM's pending patent ARC, and removed it in 8.0.2. I wonder how could this could be detected, and how hard to cope with the patent issue. Bruce warned that we should be careful not to violate Greenplum's patents.\r\n\r\nE.25. Release 8.0.2\r\nhttps://www.postgresql.org/docs/8.0/release-8-0-2.html\r\n--------------------------------------------------\r\nNew cache management algorithm 2Q replaces ARC (Tom)\r\nThis was done to avoid a pending US patent on ARC. 
The 2Q code might be a few percentage points slower than ARC for some work loads. A better cache management algorithm will appear in 8.1.\r\n--------------------------------------------------\r\n\r\n\r\nI think I'll try to contact the people listed in Clock-SI paper and the Microsoft patent to ask about this. I'm going to have a late summer vacation next week, so this is my summer homework?\r\n\r\n\r\n> One alternative is to add only hooks into PostgreSQL core so that we can\r\n> implement the global transaction management outside. This idea was\r\n> discussed before as the title \"eXtensible Transaction Manager API\".\r\n\r\nYeah, I read that discussion. And I remember Robert Haas and Postgres Pro people said it's not good...\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 10 Sep 2020 09:01:44 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/10 18:01, tsunakawa.takay@fujitsu.com wrote:\n> From: Fujii Masao <masao.fujii@oss.nttdata.com>\n>> But I'm concerned about that it's really hard to say there is no patent risk\n>> around that. I'm not sure who can judge there is no patent risk,\n>> in the community. Maybe no one? Anyway, I was thinking that Google Spanner,\n>> YugabyteDB, etc use the global transaction approach based on the clock\n>> similar to Clock-SI. Since I've never heard they have the patent issues,\n>> I was just thinking Clock-SI doesn't have. No? This type of *guess* is not\n>> safe, though...\n> \n> Hm, it may be difficult to be sure that the algorithm does not violate a patent. But it may not be difficult to know if the algorithm apparently violates a patent or is highly likely (for those who know Clock-SI well.) At least, Andrey-san seems to have felt that it needs careful study, so I guess he had some hunch.\n> \n> I understand this community is sensitive to patents. 
After the discussions at and after PGCon 2018, the community concluded that it won't accept patented technology. In the distant past, the community released Postgres 8.0 that contains an IBM's pending patent ARC, and removed it in 8.0.2. I wonder how could this could be detected, and how hard to cope with the patent issue. Bruce warned that we should be careful not to violate Greenplum's patents.\n> \n> E.25. Release 8.0.2\n> https://www.postgresql.org/docs/8.0/release-8-0-2.html\n> --------------------------------------------------\n> New cache management algorithm 2Q replaces ARC (Tom)\n> This was done to avoid a pending US patent on ARC. The 2Q code might be a few percentage points slower than ARC for some work loads. A better cache management algorithm will appear in 8.1.\n> --------------------------------------------------\n> \n> \n> I think I'll try to contact the people listed in Clock-SI paper and the Microsoft patent to ask about this.\n\nThanks!\n\n\n> I'm going to have a late summer vacation next week, so this is my summer homework?\n> \n> \n>> One alternative is to add only hooks into PostgreSQL core so that we can\n>> implement the global transaction management outside. This idea was\n>> discussed before as the title \"eXtensible Transaction Manager API\".\n> \n> Yeah, I read that discussion. 
And I remember Robert Haas and Postgres Pro people said it's not good...\n\nBut it may be worth revisiting this idea if we cannot avoid the patent issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 10 Sep 2020 19:50:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 2020-09-09 20:29, Fujii Masao wrote:\n> On 2020/09/09 2:00, Alexey Kondratov wrote:\n>> \n>> According to the Sawada-san's v25 0002 the logic is pretty much the \n>> same there:\n>> \n>> +2. Pre-Commit phase (1st phase of two-phase commit)\n>> \n>> +3. Commit locally\n>> +Once we've prepared all of them, commit the transaction locally.\n>> \n>> +4. Post-Commit Phase (2nd phase of two-phase commit)\n>> \n>> Brief look at the code confirms this scheme. IIUC, AtEOXact_FdwXact / \n>> FdwXactParticipantEndTransaction happens after \n>> ProcArrayEndTransaction() in the CommitTransaction(). Thus, I don't \n>> see many difference between these approach and CallXactCallbacks() \n>> usage regarding this point.\n> \n> IIUC the commit logic in Sawada-san's patch looks like\n> \n> 1. PreCommit_FdwXact()\n> PREPARE TRANSACTION command is issued\n> \n> 2. RecordTransactionCommit()\n> 2-1. WAL-log the commit record\n> 2-2. Update CLOG\n> 2-3. Wait for sync rep\n> 2-4. FdwXactWaitForResolution()\n> Wait until COMMIT PREPARED commands are issued to the\n> remote servers and completed.\n> \n> 3. ProcArrayEndTransaction()\n> 4. AtEOXact_FdwXact(true)\n> \n> So ISTM that the timing of when COMMIT PREPARED is issued\n> to the remote server is different between the patches.\n> Am I missing something?\n> \n\nNo, you are right, sorry. 
At first glance I thought that \nAtEOXact_FdwXact is responsible for COMMIT PREPARED as well, but it is \nonly calling FdwXactParticipantEndTransaction in the abort case.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Thu, 10 Sep 2020 17:22:27 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Thu, Sep 10, 2020 at 4:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> >> One alternative is to add only hooks into PostgreSQL core so that we can\n> >> implement the global transaction management outside. This idea was\n> >> discussed before as the title \"eXtensible Transaction Manager API\".\n> >\n> > Yeah, I read that discussion. And I remember Robert Haas and Postgres Pro people said it's not good...\n>\n> But it may be worth revisiting this idea if we cannot avoid the patent issue.\n>\n\nIt is not very clear what exactly we can do about the point raised by\nTsunakawa-San related to patents in this technology as I haven't seen\nthat discussed during other development but maybe we can try to study\na bit. One more thing I would like to bring here is that it seems that\nthere have been some concerns about this idea when originally\ndiscussed [1]. It is not very clear to me if all the concerns are\naddressed or not. If one can summarize the concerns discussed and how\nthe latest patch is able to address those then it will be great.\n\nAlso, I am not sure but maybe global deadlock detection also needs to\nbe considered as that also seems to be related because it depends on\nhow we manage global transactions. We need to prevent deadlock among\ntransaction operations spanned across multiple nodes. Say a\ntransaction T-1 has updated row r-1 of tbl-1 on node-1 and tries to\nupdate row r-1 of tbl-2 on node n-2. 
Similarly, a transaction T-2\ntries to perform those two operations in reverse order. Now, this will\nlead to the deadlock that spans across multiple nodes and our current\ndeadlock detector doesn't have that capability. Having some form of\nglobal/distributed transaction id might help to resolve it but not\nsure how it can be solved with this clock-si based algorithm.\n\nAs all these problems are related, that is why I am insisting on this\nthread and other thread \"Transactions involving multiple postgres\nforeign servers\" [2] to have a high-level idea on how the distributed\ntransaction management will work before we decide on a particular\napproach and commit one part of that patch.\n\n[1] - https://www.postgresql.org/message-id/21BC916B-80A1-43BF-8650-3363CCDAE09C%40postgrespro.ru\n[2] - https://www.postgresql.org/message-id/CAA4eK1J86S%3DmeivVsH%2Boy%3DTwUC%2Byr9jj2VtmmqMfYRmgs2JzUA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Sep 2020 12:26:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Tue, Sep 8, 2020 at 01:36:16PM +0300, Alexey Kondratov wrote:\n> Thank you for the link!\n> \n> After a quick look on the Sawada-san's patch set I think that there are two\n> major differences:\n> \n> 1. There is a built-in foreign xacts resolver in the [1], which should be\n> much more convenient from the end-user perspective. It involves huge in-core\n> changes and additional complexity that is of course worth of.\n> \n> However, it's still not clear for me that it is possible to resolve all\n> foreign prepared xacts on the Postgres' own side with a 100% guarantee.\n> Imagine a situation when the coordinator node is actually a HA cluster group\n> (primary + sync + async replica) and it failed just after PREPARE stage of\n> after local COMMIT. In that case all foreign xacts will be left in the\n> prepared state. 
After failover process complete synchronous replica will\n> become a new primary. Would it have all required info to properly resolve\n> orphan prepared xacts?\n> \n> Probably, this situation is handled properly in the [1], but I've not yet\n> finished a thorough reading of the patch set, though it has a great doc!\n> \n> On the other hand, previous 0003 and my proposed patch rely on either manual\n> resolution of hung prepared xacts or usage of external monitor/resolver.\n> This approach is much simpler from the in-core perspective, but doesn't look\n> as complete as [1] though.\n\nHave we considered how someone would clean up foreign transactions if the\ncoordinating server dies? Could it be done manually? Would an external\nresolver, rather than an internal one, make this easier?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 17 Sep 2020 17:54:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On 2020-09-18 00:54, Bruce Momjian wrote:\n> On Tue, Sep 8, 2020 at 01:36:16PM +0300, Alexey Kondratov wrote:\n>> Thank you for the link!\n>> \n>> After a quick look on the Sawada-san's patch set I think that there \n>> are two\n>> major differences:\n>> \n>> 1. There is a built-in foreign xacts resolver in the [1], which should \n>> be\n>> much more convenient from the end-user perspective. It involves huge \n>> in-core\n>> changes and additional complexity that is of course worth of.\n>> \n>> However, it's still not clear for me that it is possible to resolve \n>> all\n>> foreign prepared xacts on the Postgres' own side with a 100% \n>> guarantee.\n>> Imagine a situation when the coordinator node is actually a HA cluster \n>> group\n>> (primary + sync + async replica) and it failed just after PREPARE \n>> stage of\n>> after local COMMIT. 
In that case all foreign xacts will be left in the\n>> prepared state. After failover process complete synchronous replica \n>> will\n>> become a new primary. Would it have all required info to properly \n>> resolve\n>> orphan prepared xacts?\n>> \n>> Probably, this situation is handled properly in the [1], but I've not \n>> yet\n>> finished a thorough reading of the patch set, though it has a great \n>> doc!\n>> \n>> On the other hand, previous 0003 and my proposed patch rely on either \n>> manual\n>> resolution of hung prepared xacts or usage of external \n>> monitor/resolver.\n>> This approach is much simpler from the in-core perspective, but \n>> doesn't look\n>> as complete as [1] though.\n> \n> Have we considered how someone would clean up foreign transactions if \n> the\n> coordinating server dies? Could it be done manually? Would an \n> external\n> resolver, rather than an internal one, make this easier?\n\nBoth Sawada-san's patch [1] and in this thread (e.g. mine [2]) use 2PC \nwith a special gid format including a xid + server identification info. \nThus, one can select from pg_prepared_xacts, get xid and coordinator \ninfo, then use txid_status() on the coordinator (or ex-coordinator) to \nget transaction status and finally either commit or abort these stale \nprepared xacts. 
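The manual resolution procedure described above can be condensed into a small, self-contained sketch. The gid layout `gtx:<xid>:<coordinator>` and the exact status strings are illustrative assumptions only (the actual patches define their own gid format), and `txid_status_fn` stands in for running `SELECT txid_status(<xid>)` on the coordinator node:

```python
# Sketch of resolving stale prepared xacts from a gid that embeds
# the xid and the coordinator name. Gid layout "gtx:<xid>:<coordinator>"
# is an assumption for illustration, not the format used by the patches.

def parse_gid(gid):
    """Extract the xid and coordinator name embedded in a 2PC gid."""
    tag, xid, coordinator = gid.split(":")
    assert tag == "gtx", "unexpected gid format"
    return int(xid), coordinator

def resolution_sql(gid, txid_status_fn):
    """Decide how to resolve one stale prepared transaction.

    txid_status_fn(coordinator, xid) stands in for running
    "SELECT txid_status(<xid>)" on the coordinator node.
    """
    xid, coordinator = parse_gid(gid)
    status = txid_status_fn(coordinator, xid)
    if status == "committed":
        return f"COMMIT PREPARED '{gid}'"
    if status == "aborted":
        return f"ROLLBACK PREPARED '{gid}'"
    return None  # still in progress on the coordinator: leave it alone

# Example: pretend the coordinator reports xid 100 committed, 101 aborted.
statuses = {("node-A", 100): "committed", ("node-A", 101): "aborted"}
lookup = lambda coord, xid: statuses.get((coord, xid), "in progress")

print(resolution_sql("gtx:100:node-A", lookup))  # COMMIT PREPARED 'gtx:100:node-A'
print(resolution_sql("gtx:101:node-A", lookup))  # ROLLBACK PREPARED 'gtx:101:node-A'
```

In practice the gids would come from `SELECT gid FROM pg_prepared_xacts` on each participant node, and the two returned statements would be executed there.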
Of course this could be wrapped into some user-level \nsupport routines as it is done in [1].\n\nAs for the benefits of using an external resolver, I think that there \nare some of them from the whole system perspective:\n\n1) If one follows the logic above, then this resolver could be \nstateless, as it takes all the required info from the Postgres nodes \nthemselves.\n\n2) Then you can easily put it into a container, which makes it easier to \ndeploy to all this 'cloud' stuff like Kubernetes.\n\n3) Also you can scale resolvers independently from Postgres nodes.\n\nI do not think that either of these points is a game changer, but we use \na very simple external resolver together with [2] in our sharding \nprototype and it works just fine so far.\n\n\n[1] \nhttps://www.postgresql.org/message-id/CA%2Bfd4k4HOVqqC5QR4H984qvD0Ca9g%3D1oLYdrJT_18zP9t%2BUsJg%40mail.gmail.com\n\n[2] \nhttps://www.postgresql.org/message-id/3ef7877bfed0582019eab3d462a43275%40postgrespro.ru\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Mon, 21 Sep 2020 17:24:22 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hi Andrey-san, all,\r\n\r\nFrom: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\r\n> On 7/27/20 11:22 AM, tsunakawa.takay@fujitsu.com wrote:\r\n> > Could you take a look at this patent? I'm afraid this is the Clock-SI for MVCC.\r\n> Microsoft holds this until 2031. I couldn't find this with the keyword\r\n> \"Clock-SI.\"\r\n> >\r\n> >\r\n> > US8356007B2 - Distributed transaction management for database systems\r\n> with multiversioning - Google Patents\r\n> > https://patents.google.com/patent/US8356007\r\n> >\r\n> >\r\n> > If it is, can we circumvent this patent?\r\n\r\n> I haven't seen this patent before. 
This should be carefully studied.\r\n\r\n\r\nI contacted 6 people individually, 3 holders of the patent and 3 different authors of the Clock-SI paper. I got replies from two people. (It's a pity I couldn't get a reply from the main author of the Clock-SI paper.)\r\n\r\n[Reply from the patent holder Per-Ake Larson]\r\n--------------------------------------------------\r\nThanks for your interest in my patent. \r\n\r\nThe answer to your question is: No, Clock-SI is not based on the patent - it was an entirely independent development. The two approaches are similar in the sense that there is no global clock, the commit time of a distributed transaction is the same in every partition where it modified data, and a transaction gets it snapshot timestamp from a local clock. The difference is whether a distributed transaction gets its commit timestamp before or after the prepare phase in 2PC.\r\n\r\nHope this helpful.\r\n\r\nBest regards,\r\nPer-Ake\r\n--------------------------------------------------\r\n\r\n\r\n[Reply from the Clock-SI author Willy Zwaenepoel]\r\n--------------------------------------------------\r\nThank you for your kind words about our work.\r\n\r\nI was unaware of this patent at the time I wrote the paper. The two came out more or less at the same time.\r\n\r\nI am not a lawyer, so I cannot tell you if something based on Clock-SI would infringe on the Microsoft patent. The main distinction to me seems to be that Clock-SI is based on physical clocks, while the Microsoft patent talks about logical clocks, but again I am not a lawyer.\r\n\r\nBest regards,\r\n\r\nWilly.\r\n--------------------------------------------------\r\n\r\n\r\nDoes this make sense from your viewpoint, and can we think that we can use Clock-SI without infringing on the patent? 
According to the patent holder, the differences between Clock-SI and the patent seem to be fewer than the similarities.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 22 Sep 2020 00:47:52 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "\n\n22.09.2020 03:47, tsunakawa.takay@fujitsu.com wrote:\n> Does this make sense from your viewpoint, and can we think that we can use Clock-SI without infringing on the patent? According to the patent holder, the differences between Clock-SI and the patent seem to be fewer than the similarities.\nThank you for this work!\nAs I can see, the main development difficulties are placed in other areas: CSN, \nresolver, global deadlocks, 2PC commit... I'm not a lawyer either. But if we \nget remarks from the patent holders, we can rewrite our Clock-SI \nimplementation.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 22 Sep 2020 10:41:11 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\r\n> Thank you for this work!\r\n> As I can see, the main development difficulties are placed in other areas: CSN, resolver,\r\n> global deadlocks, 2PC commit... I'm not a lawyer either. But if we get remarks from\r\n> the patent holders, we can rewrite our Clock-SI implementation.\r\n\r\nYeah, I understand your feeling. I personally don't like patents, and don't want to be disturbed by them. But the world is not friendly... We are not lawyers, but we have to do our best to make sure PostgreSQL will be patent-free by checking the technologies as engineers.\r\n\r\nAmong the above items, CSN is the only concerning one. Other items are written in textbooks, well-known, and used in other DBMSs, so they should be free from patents. 
However, CSN is not (at least to me.) Have you checked if CSN is not related to some patent? Or is CSN or similar technology already widely used in famous software and we can regard it as patent-free?\r\n\r\nAnd please wait. As below, the patent holder just says that Clock-SI is not based on the patent and an independent development. He doesn't say Clock-SI does not overlap with the patent or implementing Clock-SI does not infringe on the patent. Rather, he suggests that Clock-SI has many similarities and thus those may match the claims of the patent (unintentionally?) I felt this is a sign of risking infringement.\r\n\r\n\"The answer to your question is: No, Clock-SI is not based on the patent - it was an entirely independent development. The two approaches are similar in the sense that there is no global clock, the commit time of a distributed transaction is the same in every partition where it modified data, and a transaction gets it snapshot timestamp from a local clock. The difference is whether a distributed transaction gets its commit timestamp before or after the prepare phase in 2PC.\"\r\n\r\nThe timeline of events also worries me. It seems unnatural to consider that Clock-SI and the patent are independent.\r\n\r\n 2010/6 - 2010/9 One Clock-SI author worked for Microsoft Research as an research intern\r\n 2010/10 Microsoft filed the patent\r\n 2011/9 - 2011/12 The same Clock-SI author worked for Microsoft Research as an research intern\r\n 2013 The same author moved to EPFL and published the Clock-SI paper with another author who has worked for Microsoft Research since then.\r\n\r\nSo, could you give your opinion whether we can use Clock-SI without overlapping with the patent claims? I also will try to check and see, so that I can understand your technical analysis.\r\n\r\nAnd I've just noticed that I got in touch with another author of Clock-SI via SNS, and sent an inquiry to him. 
I'll report again when I have a reply.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Wed, 23 Sep 2020 00:44:22 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "Hi Andrey san, all,\r\n\r\nFrom: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\r\n> And please wait. As below, the patent holder just says that Clock-SI is not\r\n> based on the patent and an independent development. He doesn't say\r\n> Clock-SI does not overlap with the patent or implementing Clock-SI does not\r\n> infringe on the patent. Rather, he suggests that Clock-SI has many\r\n> similarities and thus those may match the claims of the patent\r\n> (unintentionally?) I felt this is a sign of risking infringement.\r\n> \r\n> \"The answer to your question is: No, Clock-SI is not based on the patent - it\r\n> was an entirely independent development. The two approaches are similar in\r\n> the sense that there is no global clock, the commit time of a distributed\r\n> transaction is the same in every partition where it modified data, and a\r\n> transaction gets it snapshot timestamp from a local clock. The difference is\r\n> whether a distributed transaction gets its commit timestamp before or after the\r\n> prepare phase in 2PC.\"\r\n> \r\n> The timeline of events also worries me. 
It seems unnatural to consider that\r\n> Clock-SI and the patent are independent.\r\n> \r\n> 2010/6 - 2010/9 One Clock-SI author worked for Microsoft Research as\r\n> an research intern\r\n> 2010/10 Microsoft filed the patent\r\n> 2011/9 - 2011/12 The same Clock-SI author worked for Microsoft\r\n> Research as an research intern\r\n> 2013 The same author moved to EPFL and published the Clock-SI paper\r\n> with another author who has worked for Microsoft Research since then.\r\n> \r\n> So, could you give your opinion whether we can use Clock-SI without\r\n> overlapping with the patent claims? I also will try to check and see, so that I\r\n> can understand your technical analysis.\r\n> \r\n> And I've just noticed that I got in touch with another author of Clock-SI via SNS,\r\n> and sent an inquiry to him. I'll report again when I have a reply.\r\n\r\nI got a reply from the main author of the Clock-SI paper:\r\n\r\n[Reply from the Clock-SI author Jiaqing Du]\r\n--------------------------------------------------\r\nThanks for reaching out.\r\n\r\nI actually did not know that Microsoft wrote a patent which is similar to the ideas in my paper. I worked there as an intern. My Clock-SI paper was done at my school (EPFL) after my internships at Microsoft. The paper was very loosely related to my internship project at Microsoft. In a sense, the internship project at Microsoft inspired me to work on Clock-SI after I finished the internship. As you see in the paper, my coauthor, who is my internship host, is also from Microsoft, but interestingly he is not on the patent :)\r\n\r\nCheers,\r\nJiaqing\r\n--------------------------------------------------\r\n\r\n\r\nUnfortunately, he also did not assert that Clock-SI does not infringe on the patent. Rather, worrying words are mixed: \"similar to my ideas\", \"loosely related\", \"inspired\".\r\n\r\nAlso, his internship host is the co-author of the Clock-SI paper. 
That person should be Sameh Elnikety, who has been working for Microsoft Research. I also asked him about the same question, but he has been silent for about 10 days.\r\n\r\nWhen I had a quick look, the patent appeared to be broader than Clock-SI, and Clock-SI is a concrete application of the patent. This is just my guess, but Sameh Elnikety had known the patent and set an internship theme at Microsoft or the research subject at EPFL based on it, whether he was aware or not.\r\n\r\nAs of now, it seems that the Clock-SI needs to be evaluated against the patent claims by two or more persons -- one from someone who knows Clock-SI well and implemented it for Postgres (Andrey-san?), and someone else who shares little benefit with the former person and can see it objectively.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 28 Sep 2020 01:36:19 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "\n\nOn 2020/09/17 15:56, Amit Kapila wrote:\n> On Thu, Sep 10, 2020 at 4:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>>> One alternative is to add only hooks into PostgreSQL core so that we can\n>>>> implement the global transaction management outside. This idea was\n>>>> discussed before as the title \"eXtensible Transaction Manager API\".\n>>>\n>>> Yeah, I read that discussion. And I remember Robert Haas and Postgres Pro people said it's not good...\n>>\n>> But it may be worth revisiting this idea if we cannot avoid the patent issue.\n>>\n> \n> It is not very clear what exactly we can do about the point raised by\n> Tsunakawa-San related to patent in this technology as I haven't seen\n> that discussed during other development but maybe we can try to study\n> a bit. One more thing I would like to bring here is that it seems to\n> be there have been some concerns about this idea when originally\n> discussed [1]. 
It is not very clear to me if all the concerns are\n> addressed or not. If one can summarize the concerns discussed and how\n> the latest patch is able to address those then it will be great.\n\nI have one concern about Clock-SI (sorry if this concern was already\ndiscussed in the past). As far as I read the paper about Clock-SI, ISTM that\nTx2 that starts after Tx1's commit can fail to see the results by Tx1,\ndue to the clock skew. Please see the following example;\n\n1. Tx1 starts at the server A.\n\n2. Tx1 writes some records at the server A.\n\n3. Tx1 gets the local clock 20, uses 20 as CommitTime, then completes\n the commit at the server A.\n This means that Tx1 is the local transaction, not distributed one.\n\n4. Tx2 starts at the server B, i.e., the server B works as\n the coordinator node for Tx2.\n\n5. Tx2 gets the local clock 10 (i.e., it's delayed behind the server A\n due to clock skew) and uses 10 as SnapshotTime at the server B.\n\n6. Tx2 starts the remote transaction at the server A with SnapshotTime 10.\n\n7. Tx2 doesn't need to wait due to clock skew because the imported\n SnapshotTime 10 is smaller than the local clock at the server A.\n\n8. Tx2 fails to see the records written by Tx1 at the server A because\n Tx1's CommitTime 20 is larger than SnapshotTime 10.\n\nSo Tx1 was successfully committed before Tx2 starts. But, at the above example,\nthe subsequent transaction Tx2 fails to see the committed results.\n\nThe single PostgreSQL instance seems to guarantee that linearizability of\nthe transactions, but Clock-SI doesn't in the distributed env. Is this my\nunderstanding right? Or am I missing something?\n\nIf my understanding is right, shouldn't we address that issue when using\nClock-SI? 
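The eight-step scenario above can be condensed into a small self-contained simulation. The `Server` class and its visibility rule are a deliberate simplification of the algorithm in the Clock-SI paper, kept only to show the arithmetic of the skewed clocks:

```python
# Simulation of the scenario above: server A's clock is ahead of server B's.
# Tx1 commits locally on A at time 20, then Tx2 (coordinated by B) takes
# snapshot time 10 and misses Tx1's committed record.

class Server:
    def __init__(self, name, clock):
        self.name = name
        self.clock = clock          # skewed local clock
        self.records = []           # list of (value, commit_time)

    def commit(self, value):
        # Local transaction: CommitTime comes from the local clock (step 3).
        self.records.append((value, self.clock))

    def read(self, snapshot_time):
        # Clock-SI only delays a remote read when snapshot_time is AHEAD of
        # the local clock; here 10 < 20, so no wait happens (step 7).
        if snapshot_time > self.clock:
            pass  # would wait until the local clock reaches snapshot_time
        # Visibility rule: only versions committed at or before the snapshot.
        return [v for v, t in self.records if t <= snapshot_time]

server_a = Server("A", clock=20)
server_b = Server("B", clock=10)   # delayed behind A due to clock skew

server_a.commit("row-by-Tx1")              # Tx1 commits at A, CommitTime 20
snapshot_time = server_b.clock             # Tx2 starts at B, SnapshotTime 10
visible = server_a.read(snapshot_time)     # remote read at A (steps 6-8)

print(visible)  # [] -- Tx2 misses the record Tx1 committed before it started
```

With a snapshot time of 25 or more the record becomes visible, which is exactly why the size of the clock skew bounds how stale such a read can be.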
Or the patch has already addressed the issue?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 15 Oct 2020 01:41:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "On Thu, 15 Oct 2020 at 01:41, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/09/17 15:56, Amit Kapila wrote:\n> > On Thu, Sep 10, 2020 at 4:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>>> One alternative is to add only hooks into PostgreSQL core so that we can\n> >>>> implement the global transaction management outside. This idea was\n> >>>> discussed before as the title \"eXtensible Transaction Manager API\".\n> >>>\n> >>> Yeah, I read that discussion. And I remember Robert Haas and Postgres Pro people said it's not good...\n> >>\n> >> But it may be worth revisiting this idea if we cannot avoid the patent issue.\n> >>\n> >\n> > It is not very clear what exactly we can do about the point raised by\n> > Tsunakawa-San related to patent in this technology as I haven't seen\n> > that discussed during other development but maybe we can try to study\n> > a bit. One more thing I would like to bring here is that it seems to\n> > be there have been some concerns about this idea when originally\n> > discussed [1]. It is not very clear to me if all the concerns are\n> > addressed or not. If one can summarize the concerns discussed and how\n> > the latest patch is able to address those then it will be great.\n>\n> I have one concern about Clock-SI (sorry if this concern was already\n> discussed in the past). As far as I read the paper about Clock-SI, ISTM that\n> Tx2 that starts after Tx1's commit can fail to see the results by Tx1,\n> due to the clock skew. Please see the following example;\n>\n> 1. Tx1 starts at the server A.\n>\n> 2. 
Tx1 writes some records at the server A.\n>\n> 3. Tx1 gets the local clock 20, uses 20 as CommitTime, then completes\n> the commit at the server A.\n> This means that Tx1 is the local transaction, not distributed one.\n>\n> 4. Tx2 starts at the server B, i.e., the server B works as\n> the coordinator node for Tx2.\n>\n> 5. Tx2 gets the local clock 10 (i.e., it's delayed behind the server A\n> due to clock skew) and uses 10 as SnapshotTime at the server B.\n>\n> 6. Tx2 starts the remote transaction at the server A with SnapshotTime 10.\n>\n> 7. Tx2 doesn't need to wait due to clock skew because the imported\n> SnapshotTime 10 is smaller than the local clock at the server A.\n>\n> 8. Tx2 fails to see the records written by Tx1 at the server A because\n> Tx1's CommitTime 20 is larger than SnapshotTime 10.\n>\n> So Tx1 was successfully committed before Tx2 starts. But, at the above example,\n> the subsequent transaction Tx2 fails to see the committed results.\n>\n> The single PostgreSQL instance seems to guarantee that linearizability of\n> the transactions, but Clock-SI doesn't in the distributed env. Is this my\n> understanding right? Or am I missing something?\n>\n> If my understanding is right, shouldn't we address that issue when using\n> Clock-SI? Or the patch has already addressed the issue?\n\nAs far as I read the paper, the above scenario can happen. I could\nreproduce the above scenario with the patch. Moreover, a stale read\ncould happen even if Tx1 was initiated at server B (i.e., both\ntransactions started at the same server in sequence). 
In this case,\nTx1's commit timestamp would be 20 taken from server A's local clock\nwhereas Tx2's snapshot timestamp would be 10, the same as in the above case.\nTherefore, even though both transactions were initiated at the same\nserver, linearizability is not provided.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:58:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "\n\nOn 2020/10/23 11:58, Masahiko Sawada wrote:\n> On Thu, 15 Oct 2020 at 01:41, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2020/09/17 15:56, Amit Kapila wrote:\n>>> On Thu, Sep 10, 2020 at 4:20 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>>> One alternative is to add only hooks into PostgreSQL core so that we can\n>>>>>> implement the global transaction management outside. This idea was\n>>>>>> discussed before as the title \"eXtensible Transaction Manager API\".\n>>>>>\n>>>>> Yeah, I read that discussion. And I remember Robert Haas and Postgres Pro people said it's not good...\n>>>>\n>>>> But it may be worth revisiting this idea if we cannot avoid the patent issue.\n>>>>\n>>>\n>>> It is not very clear what exactly we can do about the point raised by\n>>> Tsunakawa-San related to patent in this technology as I haven't seen\n>>> that discussed during other development but maybe we can try to study\n>>> a bit. One more thing I would like to bring here is that it seems to\n>>> be there have been some concerns about this idea when originally\n>>> discussed [1]. It is not very clear to me if all the concerns are\n>>> addressed or not. 
If one can summarize the concerns discussed and how\n>>> the latest patch is able to address those then it will be great.\n>>\n>> I have one concern about Clock-SI (sorry if this concern was already\n>> discussed in the past). As far as I read the paper about Clock-SI, ISTM that\n>> Tx2 that starts after Tx1's commit can fail to see the results by Tx1,\n>> due to the clock skew. Please see the following example;\n>>\n>> 1. Tx1 starts at the server A.\n>>\n>> 2. Tx1 writes some records at the server A.\n>>\n>> 3. Tx1 gets the local clock 20, uses 20 as CommitTime, then completes\n>> the commit at the server A.\n>> This means that Tx1 is the local transaction, not distributed one.\n>>\n>> 4. Tx2 starts at the server B, i.e., the server B works as\n>> the coordinator node for Tx2.\n>>\n>> 5. Tx2 gets the local clock 10 (i.e., it's delayed behind the server A\n>> due to clock skew) and uses 10 as SnapshotTime at the server B.\n>>\n>> 6. Tx2 starts the remote transaction at the server A with SnapshotTime 10.\n>>\n>> 7. Tx2 doesn't need to wait due to clock skew because the imported\n>> SnapshotTime 10 is smaller than the local clock at the server A.\n>>\n>> 8. Tx2 fails to see the records written by Tx1 at the server A because\n>> Tx1's CommitTime 20 is larger than SnapshotTime 10.\n>>\n>> So Tx1 was successfully committed before Tx2 starts. But, at the above example,\n>> the subsequent transaction Tx2 fails to see the committed results.\n>>\n>> The single PostgreSQL instance seems to guarantee that linearizability of\n>> the transactions, but Clock-SI doesn't in the distributed env. Is this my\n>> understanding right? Or am I missing something?\n>>\n>> If my understanding is right, shouldn't we address that issue when using\n>> Clock-SI? Or the patch has already addressed the issue?\n> \n> As far as I read the paper, the above scenario can happen. I could\n> reproduce the above scenario with the patch. 
Moreover, a stale read\n> could happen even if Tx1 was initiated at server B (i.g., both\n> transactions started at the same server in sequence). In this case,\n> Tx1's commit timestamp would be 20 taken from server A's local clock\n> whereas Tx2's snapshot timestamp would be 10 same as the above case.\n> Therefore, in spite of both transactions were initiated at the same\n> server the linearizability is not provided.\n\nYeah, so if we need to guarantee the transaction linearizability even\nin distributed env (probably this is yes. Right?), using only Clock-SI\nis not enough. We would need to implement something more\nin addition to Clock-SI or adopt the different approach other than Clock-SI.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Oct 2020 16:00:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Fujii-san, Sawada-san, all,\r\n\r\nFrom: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n> Yeah, so if we need to guarantee the transaction linearizability even\r\n> in distributed env (probably this is yes. Right?), using only Clock-SI\r\n> is not enough. We would need to implement something more\r\n> in addition to Clock-SI or adopt the different approach other than Clock-SI.\r\n> Thought?\r\n\r\nCould you please try interpreting MVCO and see if we have any hope in this? This doesn't fit in my small brain. 
I'll catch up with understanding this when I have time.\r\n\r\nMVCO - Technical report - IEEE RIDE-IMS 93 (PDF; revised version of DEC-TR 853)\r\nhttps://sites.google.com/site/yoavraz2/MVCO-WDE.pdf\r\n\r\n\r\nMVCO is a multiversion member of Commitment Ordering algorithms described below:\r\n\r\nCommitment ordering (CO) - yoavraz2\r\nhttps://sites.google.com/site/yoavraz2/the_principle_of_co\r\n\r\nCommitment ordering - Wikipedia\r\nhttps://en.wikipedia.org/wiki/Commitment_ordering\r\n\r\n\r\nRelated patents are as follows. The last one is MVCO.\r\n\r\nUS5504900A - Commitment ordering for guaranteeing serializability across distributed transactions\r\nhttps://patents.google.com/patent/US5504900A/en?oq=US5504900\r\n\r\nUS5504899A - Guaranteeing global serializability by applying commitment ordering selectively to global transactions\r\nhttps://patents.google.com/patent/US5504899A/en?oq=US5504899\r\n\r\nUS5701480A - Distributed multi-version commitment ordering protocols for guaranteeing serializability during transaction processing\r\nhttps://patents.google.com/patent/US5701480A/en?oq=US5701480\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Wed, 28 Oct 2020 07:20:16 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "Hello,\r\n\r\n\r\nFujii-san and I discussed how to move the scale-out development forward. We are both worried that Clock-SI is (highly?) likely to infringe the said Microsoft's patent. So we agreed we are going to investigate the Clock-SI and the patent, and if we have to conclude that we cannot embrace Clock-SI, we will explore other possibilities.\r\n\r\nIMO, it seems that Clock-SI overlaps with the patent and we can't use it. First, looking back how to interpret the patent document, patent \"claims\" are what we should pay our greatest attention. 
According to the following citation from the IP guide by Software Freedom Law Center (SFLC) [1], software infringes a patent if it implements everything of any claim, not all claims.\r\n\r\n\r\n--------------------------------------------------\r\n4.2 Patent Infringement\r\nTo prove that you infringe a patent, the patent holder must show that you make, use, offer to sell, or sell the invention as it is defined in at least one claim of the patent.\r\n\r\nFor software to infringe a patent, the software essentially must implement everything recited in one of the patent's claims. It is crucial to recognize that infringement is based directly on the claims of the patent, and not on what is stated or described in other parts of the patent document. \r\n--------------------------------------------------\r\n\r\n\r\nAnd, Clock-SI implements at least claims 11 and 20 cited below. It doesn't matter whether Clock-SI uses a physical clock or logical one.\r\n\r\n\r\n--------------------------------------------------\r\n11. 
A method comprising:\r\nreceiving information relating to a distributed database transaction operating on data in data stores associated with respective participating nodes associated with the distributed database transaction;\r\nrequesting commit time votes from the respective participating nodes, the commit time votes reflecting local clock values of the respective participating nodes;\r\nreceiving the commit time votes from the respective participating nodes in response to the requesting;\r\ncomputing a global commit timestamp for the distributed database transaction based at least in part on the commit time votes, the global commit timestamp reflecting a maximum value of the commit time votes received from the respective participating nodes; and\r\nsynchronizing commitment of the distributed database transaction at the respective participating nodes to the global commit timestamp,\r\nwherein at least the computing is performed by a computing device.\r\n\r\n20. A method for managing a distributed database transaction, the method comprising:\r\nreceiving information relating to the distributed database transaction from a transaction coordinator associated with the distributed database transaction;\r\ndetermining a commit time vote for the distributed database transaction based at least in part on a local clock;\r\ncommunicating the commit time vote for the distributed database transaction to the transaction coordinator;\r\nreceiving a global commit timestamp from the transaction coordinator;\r\nsynchronizing commitment of the distributed database transaction to the global commit timestamp;\r\nreceiving a remote request from a requesting database node corresponding to the distributed database transaction;\r\ncreating a local transaction corresponding to the distributed database transaction;\r\ncompiling a list of database nodes involved in generating a result of the local transaction and access types utilized by respective database nodes in the list of database 
nodes; and\r\nreturning the list of database nodes and the access types to the requesting database node in response to the remote request,\r\nwherein at least the compiling is performed by a computing device.\r\n--------------------------------------------------\r\n\r\n\r\nMy concern is that the above claims appear to cover a somewhat broad range. I wonder if other patents or unpatented technologies overlap with this kind of description.\r\n\r\nThoughts?\r\n\r\n\r\n[1]\r\nA Legal Issues Primer for Open Source and Free Software Projects\r\nhttps://www.softwarefreedom.org/resources/2008/foss-primer.pdf\r\n\r\n[2]\r\nUS8356007B2 - Distributed transaction management for database systems with multiversioning - Google Patents\r\nhttps://patents.google.com/patent/US8356007\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 1 Jan 2021 03:14:07 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "\n\nOn 2021/01/01 12:14, tsunakawa.takay@fujitsu.com wrote:\n> Hello,\n> \n> \n> Fujii-san and I discussed how to move the scale-out development forward. We are both worried that Clock-SI is (highly?) likely to infringe the said Microsoft's patent. So we agreed we are going to investigate the Clock-SI and the patent, and if we have to conclude that we cannot embrace Clock-SI, we will explore other possibilities.\n\nYes.\n\n\n> \n> IMO, it seems that Clock-SI overlaps with the patent and we can't use it. First, looking back how to interpret the patent document, patent \"claims\" are what we should pay our greatest attention. 
According to the following citation from the IP guide by Software Freedom Law Center (SFLC) [1], software infringes a patent if it implements everything of any claim, not all claims.\n> \n> \n> --------------------------------------------------\n> 4.2 Patent Infringement\n> To prove that you infringe a patent, the patent holder must show that you make, use, offer to sell, or sell the invention as it is defined in at least one claim of the patent.\n> \n> For software to infringe a patent, the software essentially must implement everything recited in one of the patent's claims. It is crucial to recognize that infringement is based directly on the claims of the patent, and not on what is stated or described in other parts of the patent document.\n> --------------------------------------------------\n> \n> \n> And, Clock-SI implements at least claims 11 and 20 cited below. It doesn't matter whether Clock-SI uses a physical clock or logical one.\n\nThanks for sharing the result of your investigation!\n\nRegarding at least claim 11, I reached the same conclusion. As far as\nI understand correctly, Clock-SI actually does the method described\nin claim 11 when determining the commit time and doing the commit\non each node.\n\nI don't intend to offend Clock-SI or any activities based on that. OTOH,\nI'm now wondering if it's worth considering another approach for global\ntransaction support, while I'm still interested in Clock-SI technically.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 8 Jan 2021 21:21:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Hello, Andrey-san, all,\r\n\r\n\r\nBased on the request at HighGo's sharding meeting, I'm re-sending the information on Commitment Ordering that could be used for global visibility. 
Their patents have already expired.\r\n\r\n\r\n\r\n--------------------------------------------------\r\nHas anyone examined the following Multiversion Commitment Ordering (MVCO)? Although I haven't understood this yet, it insists that no concurrency control information including timestamps needs to be exchanged among the cluster nodes. I'd appreciate it if someone could give an opinion.\r\n\r\n\r\n\r\nCommitment Ordering Based Distributed Concurrency Control for Bridging Single and Multi Version Resources.\r\n Proceedings of the Third IEEE International Workshop on Research Issues on Data Engineering: Interoperability in Multidatabase Systems (RIDE-IMS), Vienna, Austria, pp. 189-198, April 1993. (also DEC-TR 853, July 1992)\r\nhttps://ieeexplore.ieee.org/document/281924?arnumber=281924\r\n\r\n\r\n\r\nThe author of the above paper, Yoav Raz, seems to have had strong passion at least until 2011 about making people believe the mightiness of Commitment Ordering (CO) for global serializability. However, he complains (sadly) that almost all researchers ignore his theory, as written in his following site and Wikipedia page for Commitment Ordering. Does anyone know why CO is ignored?\r\n\r\n\r\n--------------------------------------------------\r\n* Or, maybe we can use the following Commitment ordering that doesn't require the timestamp or any other information to be transferred among the cluster nodes. However, this seems to have to track the order of read and write operations among concurrent transactions to ensure the correct commit order, so I'm not sure about the performance. The MVCO paper seems to present the information we need, but I haven't understood it well yet (it's difficult.) 
Could anybody kindly interpret this?\r\n\r\n\r\n\r\nCommitment ordering (CO) - yoavraz2\r\nhttps://sites.google.com/site/yoavraz2/the_principle_of_co\r\n\r\n\r\n\r\n--------------------------------------------------\r\nCould you please try interpreting MVCO and see if we have any hope in this? This doesn't fit in my small brain. I'll catch up with understanding this when I have time.\r\n\r\n\r\n\r\nMVCO - Technical report - IEEE RIDE-IMS 93 (PDF; revised version of DEC-TR 853)\r\nhttps://sites.google.com/site/yoavraz2/MVCO-WDE.pdf\r\n\r\n\r\n\r\nMVCO is a multiversion member of Commitment Ordering algorithms described below:\r\n\r\n\r\n\r\nCommitment ordering (CO) - yoavraz2\r\nhttps://sites.google.com/site/yoavraz2/the_principle_of_co\r\n\r\n\r\n\r\nCommitment ordering - Wikipedia\r\nhttps://en.wikipedia.org/wiki/Commitment_ordering\r\n\r\n\r\n\r\nRelated patents are as follows. The last one is MVCO.\r\n\r\n\r\n\r\nUS5504900A - Commitment ordering for guaranteeing serializability across distributed transactions\r\nhttps://patents.google.com/patent/US5504900A/en?oq=US5504900\r\n\r\n\r\n\r\nUS5504899A - Guaranteeing global serializability by applying commitment ordering selectively to global transactions\r\nhttps://patents.google.com/patent/US5504899A/en?oq=US5504899\r\n\r\n\r\n\r\nUS5701480A - Distributed multi-version commitment ordering protocols for guaranteeing serializability during transaction processing\r\nhttps://patents.google.com/patent/US5701480A/en?oq=US5701480\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 19 Jan 2021 06:32:56 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "On 1/1/21 8:14 AM, tsunakawa.takay@fujitsu.com wrote:\n> --------------------------------------------------\n> 11. 
A method comprising:\n> receiving information relating to a distributed database transaction operating on data in data stores associated with respective participating nodes associated with the distributed database transaction;\n> requesting commit time votes from the respective participating nodes, the commit time votes reflecting local clock values of the respective participating nodes;\n> receiving the commit time votes from the respective participating nodes in response to the requesting;\n> computing a global commit timestamp for the distributed database transaction based at least in part on the commit time votes, the global commit timestamp reflecting a maximum value of the commit time votes received from the respective participating nodes; and\n> synchronizing commitment of the distributed database transaction at the respective participating nodes to the global commit timestamp,\n> wherein at least the computing is performed by a computing device.\n\nThank you for this analysis of the patent.\nAfter researching in depth, I think this is the real problem.\nMy idea was that we are not using real clocks, we only use clock ticks \nto measure time intervals. It can also be interpreted as a kind of clock.\n\nThat we can do:\n1. Use global clocks at the start of transaction.\n2. Use CSN-based snapshot as a machinery and create an extension to \nallow user defined commit protocols.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 26 Feb 2021 11:20:49 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\r\n> After researching in depth, I think this is the real problem.\r\n> My idea was that we are not using real clocks, we only use clock ticks to\r\n> measure time intervals. 
It can also be interpreted as a kind of clock.\r\n\r\nYes, patent claims tend to be written to cover broad interpretation. That's too sad.\r\n\r\n\r\n> That we can do:\r\n> 1. Use global clocks at the start of transaction.\r\n> 2. Use CSN-based snapshot as a machinery and create an extension to allow\r\n> user defined commit protocols.\r\n\r\nIs this your suggestion to circumvent the patent? Sorry, I'm afraid I can't understand it yet (I have to study more.) I hope others will comment on this.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 26 Feb 2021 06:34:09 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "Current state of the patch set rebased on master, 5aed6a1fc2.\n\nIt is development version. Here some problems with visibility still \ndetected in two tests:\n1. CSN Snapshot module - TAP test on time skew.\n2. Clock SI implementation - TAP test on emulation of bank transaction.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 23 Mar 2021 12:54:57 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\r\n> Current state of the patch set rebased on master, 5aed6a1fc2.\r\n> \r\n> It is development version. Here some problems with visibility still detected in\r\n> two tests:\r\n> 1. CSN Snapshot module - TAP test on time skew.\r\n> 2. Clock SI implementation - TAP test on emulation of bank transaction.\r\n\r\nI'm sorry to be late to respond. Thank you for the update.\r\n\r\nAs discussed at the HighGo meeting, what do you think we should do about this patch set, now that we agreed that Clock-SI is covered by Microsoft's patent? 
I'd appreciate it if you could share some idea to change part of the algorithm and circumvent the patent.\r\n\r\nOtherwise, why don't we discuss alternatives, such as the Commitment Ordering?\r\n\r\nI have a hunch that YugabyteDB's method seems promising, which I wrote in the following wiki. Of course, we should make efforts to see if it's patented before diving deeper into the design or implementation.\r\n\r\nScaleout Design - PostgreSQL wiki\r\nhttps://wiki.postgresql.org/wiki/Scaleout_Design\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 25 Mar 2021 05:06:11 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Global snapshots" }, { "msg_contents": "Next version of CSN implementation in snapshots to achieve a proper \nsnapshot isolation in the case of a cross-instance distributed transaction.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 18 Nov 2021 09:04:29 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" }, { "msg_contents": "Patch in the previous letter is full of faulties. Please, use new version.\nAlso, here we fixed the problem with loosing CSN value in a parallel \nworker (TAP test 003_parallel_safe.pl). Thanks for a.pyhalov for the \nproblem detection and a bugfix.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 19 Nov 2021 16:10:59 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Global snapshots" } ]
[ { "msg_contents": "I was doing some memory testing under fractional CPU allocations and it became\npainfully obvious that the repeat() function needs CHECK_FOR_INTERRUPTS().\n\nI exchanged a few emails offlist with Tom about it, and (at the risk of putting\nwords in his mouth) he agreed and felt it was a candidate for backpatching.\n\nVery small patch attached. Quick and dirty performance test:\n\nexplain analyze SELECT repeat('A', 300000000);\nexplain analyze SELECT repeat('A', 300000000);\nexplain analyze SELECT repeat('A', 300000000);\n\nWith an -O2 optimized build:\n\nWithout CHECK_FOR_INTERRUPTS\n\n Planning Time: 1077.238 ms\n Execution Time: 0.016 ms\n\n Planning Time: 1080.381 ms\n Execution Time: 0.013 ms\n\n Planning Time: 1072.049 ms\n Execution Time: 0.013 ms\n\nWith CHECK_FOR_INTERRUPTS\n\n Planning Time: 1078.703 ms\n Execution Time: 0.013 ms\n\n Planning Time: 1077.495 ms\n Execution Time: 0.013 ms\n\n Planning Time: 1076.793 ms\n Execution Time: 0.013 ms\n\n\nWhile discussing the above, Tom also wondered whether we should add unlikely()\nto the CHECK_FOR_INTERRUPTS() macro.\n\nSmall patch for that also attached. 
I was not sure about the WIN32 stanza on\nthat (to do it or not; if so, what about the UNBLOCKED_SIGNAL_QUEUE() test).\n\nI tested as above with unlikely() and did not see any discernible difference,\nbut the added check might improve other code paths.\n\nComments or objections?\n\nThanks,\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Tue, 12 May 2020 08:06:58 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 5/12/20 8:06 AM, Joe Conway wrote:\n> I was doing some memory testing under fractional CPU allocations and it became\n> painfully obvious that the repeat() function needs CHECK_FOR_INTERRUPTS().\n> \n> I exchanged a few emails offlist with Tom about it, and (at the risk of putting\n> words in his mouth) he agreed and felt it was a candidate for backpatching.\n> \n> Very small patch attached. Quick and dirty performance test:\n\n<snip>\n\n> While discussing the above, Tom also wondered whether we should add unlikely()\n> to the CHECK_FOR_INTERRUPTS() macro.\n> \n> Small patch for that also attached. I was not sure about the WIN32 stanza on\n> that (to do it or not; if so, what about the UNBLOCKED_SIGNAL_QUEUE() test).\n> \n> I tested as above with unlikely() and did not see any discernible difference,\n> but the added check might improve other code paths.\n> \n> Comments or objections?\n\nSeeing none ... 
I intend to backpatch and push these two patches in the next day\nor so.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Mon, 25 May 2020 09:02:04 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> Comments or objections?\n\n> Seeing none ... I intend to backpatch and push these two patches in the next day\n> or so.\n\nThere was some question as to what (if anything) to do with the Windows\nversion of CHECK_FOR_INTERRUPTS. Have you resolved that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 09:52:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 5/25/20 9:52 AM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>> Comments or objections?\n> \n>> Seeing none ... I intend to backpatch and push these two patches in the next day\n>> or so.\n> \n> There was some question as to what (if anything) to do with the Windows\n> version of CHECK_FOR_INTERRUPTS. Have you resolved that?\n\n\nRelevant hunk:\n\n*************** do { \\\n*** 107,113 ****\n do { \\\n \tif (UNBLOCKED_SIGNAL_QUEUE()) \\\n \t\tpgwin32_dispatch_queued_signals(); \\\n! \tif (InterruptPending) \\\n \t\tProcessInterrupts(); \\\n } while(0)\n #endif\t\t\t\t\t\t\t/* WIN32 */\n--- 107,113 ----\n do { \\\n \tif (UNBLOCKED_SIGNAL_QUEUE()) \\\n \t\tpgwin32_dispatch_queued_signals(); \\\n! \tif (unlikely(InterruptPending)) \\\n \t\tProcessInterrupts(); \\\n } while(0)\n #endif\t\t\t\t\t\t\t/* WIN32 */\n\n\nTwo questions.\n\nFirst, as I understand it, unlikely() is a gcc thing, so it does nothing at all\nfor MSVC builds of Windows, which presumably are the predominate ones. 
The\nquestion here is whether it is worth doing at all for Windows builds. On the\nother hand it seems unlikely to harm anything, so I think it is reasonable to\nleave the patch as is in that respect.\n\nThe second question is whether UNBLOCKED_SIGNAL_QUEUE() warrants its own\nlikely() or unlikely() wrapper. I have no idea, but we could always add that\nlater if someone deems it worthwhile.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Mon, 25 May 2020 10:10:43 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 5/25/20 9:52 AM, Tom Lane wrote:\n>> There was some question as to what (if anything) to do with the Windows\n>> version of CHECK_FOR_INTERRUPTS. Have you resolved that?\n\n> Two questions.\n\n> First, as I understand it, unlikely() is a gcc thing, so it does nothing at all\n> for MSVC builds of Windows, which presumably are the predominate ones. The\n> question here is whether it is worth doing at all for Windows builds. On the\n> other hand it seems unlikely to harm anything, so I think it is reasonable to\n> leave the patch as is in that respect.\n\nPerhaps I'm an optimist, but I think that eventually we will figure out\nhow to make unlikely() work for MSVC. In the meantime we might as well\nlet it work for gcc-on-Windows builds.\n\n> The second question is whether UNBLOCKED_SIGNAL_QUEUE() warrants its own\n> likely() or unlikely() wrapper. 
I have no idea, but we could always add that\n> later if someone deems it worthwhile.\n\nI think that each of those tests should have a separate unlikely() marker,\nsince the whole point here is that we don't expect either of those tests\nto yield true in the huge majority of CHECK_FOR_INTERRUPTS executions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 11:14:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On Mon, May 25, 2020 at 11:14:39AM -0400, Tom Lane wrote:\n> Perhaps I'm an optimist, but I think that eventually we will figure out\n> how to make unlikely() work for MSVC. In the meantime we might as well\n> let it work for gcc-on-Windows builds.\n\nI am less optimistic than that, but there is hope. This was mentioned\nas something considered for implementation in April 2019:\nhttps://developercommunity.visualstudio.com/idea/488669/please-add-likelyunlikely-builtins.html\n\n> I think that each of those tests should have a separate unlikely() marker,\n> since the whole point here is that we don't expect either of those tests\n> to yield true in the huge majority of CHECK_FOR_INTERRUPTS executions.\n\n+1. I am not sure that the addition of unlikely() should be\nbackpatched though, that's not something usually done.\n--\nMichael", "msg_date": "Wed, 27 May 2020 16:29:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 5/27/20 3:29 AM, Michael Paquier wrote:\n>> I think that each of those tests should have a separate unlikely() marker,\n>> since the whole point here is that we don't expect either of those tests\n>> to yield true in the huge majority of CHECK_FOR_INTERRUPTS executions.\n> \n> +1. 
I am not sure that the addition of unlikely() should be\n> backpatched though, that's not something usually done.\n\n\nI backpatched and pushed the changes to the repeat() function. Any other\nopinions regarding backpatch of the unlikely() addition to CHECK_FOR_INTERRUPTS()?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 28 May 2020 13:23:46 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 5/28/20 1:23 PM, Joe Conway wrote:\n> On 5/27/20 3:29 AM, Michael Paquier wrote:\n>>> I think that each of those tests should have a separate unlikely() marker,\n>>> since the whole point here is that we don't expect either of those tests\n>>> to yield true in the huge majority of CHECK_FOR_INTERRUPTS executions.\n>> \n>> +1. I am not sure that the addition of unlikely() should be\n>> backpatched though, that's not something usually done.\n> \n> I backpatched and pushed the changes to the repeat() function. Any other\n> opinions regarding backpatch of the unlikely() addition to CHECK_FOR_INTERRUPTS()?\n\nSo far I have\n\n Tom +1\n Michael -1\n me +0\n\non backpatching the addition of unlikely() to CHECK_FOR_INTERRUPTS().\n\nAssuming no one else chimes in I will push the attached to all supported\nbranches sometime before Tom creates the REL_13_STABLE branch on Sunday.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 4 Jun 2020 16:48:19 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 2020-May-28, Joe Conway wrote:\n\n> I backpatched and pushed the changes to the repeat() function. 
Any other\n> opinions regarding backpatch of the unlikely() addition to CHECK_FOR_INTERRUPTS()?\n\nWe don't use unlikely() in 9.6 at all, so I would stop that backpatching\nat 10 anyhow. (We did backpatch unlikely()'s definition afterwards.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 4 Jun 2020 17:20:57 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" }, { "msg_contents": "On 6/4/20 5:20 PM, Alvaro Herrera wrote:\n> On 2020-May-28, Joe Conway wrote:\n> \n>> I backpatched and pushed the changes to the repeat() function. Any other\n>> opinions regarding backpatch of the unlikely() addition to CHECK_FOR_INTERRUPTS()?\n> \n> We don't use unlikely() in 9.6 at all, so I would stop that backpatching\n> at 10 anyhow. (We did backpatch unlikely()'s definition afterwards.)\n\n\nCorrect you are -- thanks for the heads up! Pushed to REL_10_STABLE and later.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Fri, 5 Jun 2020 16:53:01 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: repeat() function, CHECK_FOR_INTERRUPTS(), and unlikely()" } ]
[ { "msg_contents": "I've been trying to reformat table 27.4 (wait events) to fit\ninto PDF output, which has caused me to study its contents\nmore than I ever had before. The lack of consistency, or\neven any weak attempt at consistency, is not just glaring;\nit's embarrassing.\n\nWe have a lot of wait event names like these:\n\n\tXidGenLock\n\tProcArrayLock\n\tSInvalReadLock\n\tSInvalWriteLock\n\tWALBufMappingLock\n\tWALWriteLock\n\nwhich are more or less fine; maybe one could wish for having\njust one way of capitalizing acronyms not two, but I'll let\nthat pass. But could we be satisfied with handling all multi\nword names in that style? Nope:\n\n\tcommit_timestamp\n\tmultixact_offset\n\tmultixact_member\n\twal_insert\n\n(and in case you are wondering, yes, \"WAL\" is also spelled \"Wal\"\nin yet other places.)\n\nAnd then somebody else, unwilling to use either of those styles,\nthought it would be cute to do\n\n\tHash/Batch/Allocating\n\tHash/Batch/Electing\n\tHash/Batch/Loading\n\tHash/GrowBatches/Allocating\n\nand all alone in the remotest stretch of left field, we've got\n\n\tspeculative token\n\n(yes, with a space in it).\n\nAlso, while the average length of these names exceeds 16 characters,\nwith such gems as SerializablePredicateLockListLock, think not that\nprolixity is the uniform rule:\n\n\toldserxid\n\tproc\n\ttbm\n\nIs it unreasonable of me to think that there should be *some*\namount of editorial control over these user-visible names?\nAt the rock bottom minimum, shouldn't we insist that they all\nbe legal identifiers?\n\nI'm not sure what our stance is on version-to-version consistency\nof these names, but I'd like to think that we are not stuck for\nall time with the results of these random coin tosses.\n\nMy inclination is to propose that we settle on the first style\nshown above, which is the majority case now, and rename the\nother events to fit that. 
As long as we're breaking compatibility\nanyway, I'd also like to shorten one or two of the very longest\nnames, because they're just giving me fits in fixing the PDF\nrendering. (They would make a mess of the display of\npg_stat_activity, too, anytime they come up in the field.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 11:16:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, May 12, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> My inclination is to propose that we settle on the first style\n> shown above, which is the majority case now, and rename the\n> other events to fit that. As long as we're breaking compatibility\n> anyway, I'd also like to shorten one or two of the very longest\n> names, because they're just giving me fits in fixing the PDF\n> rendering. (They would make a mess of the display of\n> pg_stat_activity, too, anytime they come up in the field.)\n>\n> Thoughts?\n>\n\n+1\n\n-- \nJonah H. Harris\n", "msg_date": "Tue, 12 May 2020 11:19:11 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "\n\n> On 12 May 2020, at 20:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Thoughts?\n> \n\nI've been coping with cognitive load of these names recently. 
2 cents of my impressions:\n1. Names are somewhat recognisable and seem to have some meaning. But there is not so much information about them in the Internet. But I did not try to Google them all, just a small subset.\n2. Anyway, names should be grepable and googlable, i.e. unique amid identifiers.\n3. I think names observed in wait_event and wait_event_type should not duplicate information. i.e. \"XidGenLock\" is already \"LWLock\".\n4. It's hard to tell the difference between \"buffer_content\", \"buffer_io\", \"buffer_mapping\", \"BufferPin\", \"BufFileRead\", \"BufFileWrite\" and some others. \"CLogControlLock\" vs \"clog\"? I'm not sure good DBA can tell the difference without looking up into the code.\nI hope some thoughts will be useful.\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 12 May 2020 22:51:23 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru> writes:\n> 3. I think names observed in wait_event and wait_event_type should not duplicate information. i.e. \"XidGenLock\" is already \"LWLock\".\n\nYeah, I'd been wondering about that too: we could strip the \"Lock\" suffix\nfrom all the names in the LWLock category, and make pg_stat_activity\noutput a bit narrower.\n\nThere are a lot of other things that seem inconsistent, but I'm not sure\nhow much patience people would have for judgment-call renamings. An\nexample is that \"ProcSignalBarrier\" is under IO, but why? Shouldn't it\nbe reclassified as IPC? Other than that, *almost* all the IO events\nare named SomethingRead, SomethingWrite, or SomethingSync, which\nmakes sense to me ... should we insist they all follow that pattern?\n\nAnyway, I was just throwing this idea out to see if there would be\nhowls of \"you can't rename anything\" anguish. 
Since there haven't\nbeen so far, I'll spend a bit more time and try to create a concrete\nlist of possible changes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 14:11:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, 12 May 2020 at 19:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> Anyway, I was just throwing this idea out to see if there would be\n> howls of \"you can't rename anything\" anguish. Since there haven't\n> been so far, I'll spend a bit more time and try to create a concrete\n> list of possible changes.\n>\n\nIf we add in extensions and lwlocks, will they show up as well?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases\n", "msg_date": "Tue, 12 May 2020 20:30:20 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> On Tue, 12 May 2020 at 19:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, I was just throwing this idea out to see if there would be\n>> howls of \"you can't rename anything\" anguish. Since there haven't\n>> been so far, I'll spend a bit more time and try to create a concrete\n>> list of possible changes.\n\n> If we add in extensions and lwlocks, will they show up as well?\n\nYeah, I was just looking into that. 
Part of the reason for the\ninconsistency is that we've exposed names that are passed to,\neg, SimpleLruInit that previously were strictly internal debugging\nidentifiers, so that approximately zero thought was put into them.\n\nWe're going to have to document SimpleLruInit and similar functions\nalong the lines of \"The name you give here will be user-visible as\na wait event. Choose it with an eye to consistency with existing\nwait event names, and add it to the user-facing documentation.\"\nBut that requirement isn't something I just invented, it was\neffectively created by whoever implemented things this way.\n\nSaid user-facing documentation largely fails to explain that the\nset of wait events can be enlarged by extensions; that needs to\nbe fixed, too.\n\nThere isn't a lot we can do to force extensions to pick consistent\nnames, but on the other hand we won't be documenting such names\nanyway, so for my immediate purposes it doesn't matter ;-)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 16:00:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "> There are a lot of other things that seem inconsistent, but I'm not sure\n> how much patience people would have for judgment-call renamings. An\n> example is that \"ProcSignalBarrier\" is under IO, but why? Shouldn't it\n> be reclassified as IPC?\n\nHmm, that seems like a goof.\n\n> Other than that, *almost* all the IO events\n> are named SomethingRead, SomethingWrite, or SomethingSync, which\n> makes sense to me ... should we insist they all follow that pattern?\n\nMaybe, but sometimes module X does more than one kind of\nread/write/sync, and I'm not necessarily keen on merging things\ntogether. 
The whole point of this is to be able to tell where you're\nstuck in the code, and the more you merge related things together, the\nless you can actually tell that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:08:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, May 12, 2020 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Said user-facing documentation largely fails to explain that the\n> set of wait events can be enlarged by extensions; that needs to\n> be fixed, too.\n\nIs that true? How can they do that? I thought they were stuck with\nPG_WAIT_EXTENSION.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:10:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, May 12, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've been trying to reformat table 27.4 (wait events) to fit\n> into PDF output, which has caused me to study its contents\n> more than I ever had before.\n\nThat reminds me that it might be easier to maintain that table if we\nbroke it up into one table per major category - that is, one table for\nlwlocks, one table for IPC, one table for IO, etc. - instead of a\nsingle table with a row-span number that is large and frequently\nupdated incorrectly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 12 May 2020 16:12:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." 
}, { "msg_contents": "On 2020-May-12, Robert Haas wrote:\n\n> That reminds me that it might be easier to maintain that table if we\n> broke it up into one table per major category - that is, one table for\n> lwlocks, one table for IPC, one table for IO, etc. - instead of a\n> single table with a row-span number that is large and frequently\n> updated incorrectly.\n\n(Didn't we have a patch to generate the table programmatically?)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 12 May 2020 16:27:26 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, May 12, 2020 at 8:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm not sure what our stance is on version-to-version consistency\n> of these names, but I'd like to think that we are not stuck for\n> all time with the results of these random coin tosses.\n\nThese names are fundamentally implementation details, and\nimplementation details are subject to change without too much warning.\nI think it's okay to change the names for consistency along the lines\nyou propose. ISTM that it's worth going to a little bit of effort to\npreserve any existing names. But not too much.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 12 May 2020 13:28:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster."
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 12, 2020 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've been trying to reformat table 27.4 (wait events) to fit\n>> into PDF output, which has caused me to study its contents\n>> more than I ever had before.\n\n> That reminds me that it might be easier to maintain that table if we\n> broke it up into one table per major category - that is, one table for\n> lwlocks, one table for IPC, one table for IO, etc. - instead of a\n> single table with a row-span number that is large and frequently\n> updated incorrectly.\n\nYeah, see my last attempt at\n\nhttps://www.postgresql.org/message-id/26961.1589260206%40sss.pgh.pa.us\n\nI'm probably going to go with that, but as given, that patch conflicts\nwith my other pending patch to change the catalog description tables,\nso I want to push that other one first and then clean up the\nwait-event one. In the meantime, I'm going to look at these naming issues,\nwhich will also mean changing that patch ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 16:54:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 12, 2020 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Said user-facing documentation largely fails to explain that the\n>> set of wait events can be enlarged by extensions; that needs to\n>> be fixed, too.\n\n> Is that true? How can they do that? I thought they were stuck with\n> PG_WAIT_EXTENSION.\n\nExtensions can definitely add new LWLock tranches, and thereby\nenlarge the set of names in that category. 
I haven't figured out\nwhether there are other avenues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 17:17:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, 12 May 2020 at 21:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > On Tue, 12 May 2020 at 19:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Anyway, I was just throwing this idea out to see if there would be\n> >> howls of \"you can't rename anything\" anguish. Since there haven't\n> >> been so far, I'll spend a bit more time and try to create a concrete\n> >> list of possible changes.\n>\n> > If we add in extensions and lwlocks, will they show up as well?\n>\n> Yeah, I was just looking into that. Part of the reason for the\n> inconsistency is that we've exposed names that are passed to,\n> eg, SimpleLruInit that previously were strictly internal debugging\n> identifiers, so that approximately zero thought was put into them.\n>\n> We're going to have to document SimpleLruInit and similar functions\n> along the lines of \"The name you give here will be user-visible as\n> a wait event. 
Choose it with an eye to consistency with existing\n> wait event names, and add it to the user-facing documentation.\"\n> But that requirement isn't something I just invented, it was\n> effectively created by whoever implemented things this way.\n>\n> Said user-facing documentation largely fails to explain that the\n> set of wait events can be enlarged by extensions; that needs to\n> be fixed, too.\n>\n> There isn't a lot we can do to force extensions to pick consistent\n> names, but on the other hand we won't be documenting such names\n> anyway, so for my immediate purposes it doesn't matter ;-)\n>\n\n I think we need to plan the namespace with extensions in mind.\n\nThere are now dozens; some of them even help you view wait events...\n\nWe don't want the equivalent of the Dewey decimal system: 300 categories of\nExaggeration and one small corner for Science.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases\n\n", "msg_date": "Tue, 12 May 2020 22:32:45 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> (Didn't we have a patch to generate the table programmatically?)\n\nHaving now looked around a bit at where the names come from, I think\nsuch a patch would be impossible as things stand, which is a pity\nbecause as-things-stand is a totally unmaintainable situation.\nAnybody at all can call LWLockRegisterTranche and thereby create a new\nname that ought to be listed in the SGML table. Apparently, somebody\ngrepped for such calls and put all the ones that existed at the time\ninto the table; unsurprisingly, the results are already out of date.\nSeveral of the hard-wired calls in RegisterLWLockTranches() are not\nreflected in the SGML table AFAICS.\n\nOr, if you don't want to call LWLockRegisterTranche, you can instead\ncall RequestNamedLWLockTranche. Whee. At least there are none of\nthose in the core code. 
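[For reference, the extension-side registration being discussed looks roughly like the sketch below. This is against the v13-era API; the tranche name "MyExtArea" and the startup-hook wiring are invented for illustration, and the code only builds as part of a PostgreSQL extension loaded via shared_preload_libraries.]

```c
#include "postgres.h"
#include "miscadmin.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

static LWLock *my_ext_lock = NULL;

void
_PG_init(void)
{
	/* Named-tranche requests only work under shared_preload_libraries. */
	if (!process_shared_preload_libraries_in_progress)
		return;

	/*
	 * The string given here becomes a user-visible wait event name in
	 * pg_stat_activity -- which is exactly why consistent naming matters.
	 */
	RequestNamedLWLockTranche("MyExtArea", 1);
}

/* Typically run from shmem_startup_hook, once shared memory exists. */
static void
my_ext_shmem_startup(void)
{
	my_ext_lock = &(GetNamedLWLockTranche("MyExtArea")->lock);
}
```

[Nothing constrains the string passed to RequestNamedLWLockTranche to follow any convention, which is the documentation problem being described here.]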
However, we do have both autoprewarm and\npg_stat_statements calling these things from contrib.\n\nThat raises a policy question, or really two of them: should contrib\ncode be held to the standards we're going to set forth for tranche\nnames chosen by core code? And should contrib-created tranche names\nbe listed in chapter 27's table? (If not, should the individual\ncontrib modules document their additions?)\n\nWe could make things a little better perhaps if we got rid of all the\ncowboy calls to LWLockRegisterTranche and had RegisterLWLockTranches\nmake all of them using a single table of names, as it already does\nwith MainLWLockNames[] but not the other ones. Then it'd be possible\nto have documentation beside that table warning people to add entries\nto the SGML docs; and even for the people who can't be bothered to\nread comments, at least they'd be adding names to a list of names that\nwould give them some precedent and context for how to choose a new name.\nI think 50% of the problem right now is that if you just write a\nrandom new call to LWLockRegisterTranche in a random new place, you\nhave no context about what the tranche name should look like.\n\nEven with all the names declared in some reasonably centralized\nplace(s), we'd be a long way from making the SGML tables programmatically,\nbecause we'd not have text descriptions for the wait events. I can\nimagine extending the source-code conventions a bit to include those\nstrings there, but I'm doubtful that it's worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 18:11:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "For the specific category of the heavyweight lock types, I'm\nnow thinking that we can't change the event names very much, because\nthose are also exposed in pg_locks' locktype column. 
We can be\ndarn certain, for example, that changing the spelling of \"relation\"\nin that column would break a lot of user queries. Conceivably we\ncould decouple the wait event names from the locktype column, but\non the whole that doesn't seem like a great plan.\n\nHowever, having said that, I remain on the warpath about \"speculative\ntoken\". That's an utterly horrid choice for both locktype and wait\nevent. I also notice, with no amusement, that \"speculative token\"\nis not documented in the pg_locks documentation. So I think we should\nchange it ... but to what, exactly? Looking at the other existing names:\n\nconst char *const LockTagTypeNames[] = {\n\t\"relation\",\n\t\"extend\",\n\t\"page\",\n\t\"tuple\",\n\t\"transactionid\",\n\t\"virtualxid\",\n\t\"speculative token\",\n\t\"object\",\n\t\"userlock\",\n\t\"advisory\"\n};\n\nI'm inclined to propose \"spectoken\". I'd be okay with \"spec_token\" as\nwell, but there are not underscores in the longer-established names.\n\n(Needless to say, this array is going to gain a comment noting that\nthere are two places to document any changes. Also, if we split up\nthe wait_event table as discussed earlier, it might make sense for\npg_locks' documentation to cross-reference the sub-table for heavyweight\nlock events, since that has some explanation of what the codes mean.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 19:41:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." 
}, { "msg_contents": "On Tue, 12 May 2020 at 18:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > (Didn't we have a patch to generate the table programmatically?)\n>\n> Having now looked around a bit at where the names come from, I think\n> such a patch would be impossible as things stand, which is a pity\n> because as-things-stand is a totally unmaintainable situation.\n> Anybody at all can call LWLockRegisterTranche and thereby create a new\n> name that ought to be listed in the SGML table. Apparently, somebody\n> grepped for such calls and put all the ones that existed at the time\n> into the table; unsurprisingly, the results are already out of date.\n> Several of the hard-wired calls in RegisterLWLockTranches() are not\n> reflected in the SGML table AFAICS.\n>\n\nI expect there is a reason why this hasn’t been suggested, but just in case\nit is at all helpful:\n\nWhen do these names get created? That is, do all the wait types get created\nand registered on startup, or is it more like whenever something needs to\ndo something the name gets passed in ad hoc? I'm wondering because it seems\nlike it might be helpful to have a system view which gives all the wait\nevent types, names, and descriptions. Maybe even add a column for which\nextension (or core) it came from. The documentation could then just explain\nthe general situation and point people at the system view to see exactly\nwhich wait types exist in their system. This would require every instance\nwhere a type is registered to pass an additional parameter — the\ndescription, as currently seen in the table in the documentation.\n\nOf course if the names get passed in ad hoc then such a view could only\nshow the types that happen to have been created up to the moment it is\nqueried, which would defeat the purpose. 
And I can think of a few potential\nreasons why this might not work at all, starting with the need to re-write\nevery example of registering a new type to provide the documentation string\nfor the view.\n\nInspiration due to pg_setting, pg_config, pg_available_extensions and\npg_get_keywords ().\n", "msg_date": "Tue, 12 May 2020 21:16:40 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Wed, May 13, 2020 at 3:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> And then somebody else, unwilling to use either of those styles,\n> thought it would be cute to do\n>\n> Hash/Batch/Allocating\n> Hash/Batch/Electing\n> Hash/Batch/Loading\n> Hash/GrowBatches/Allocating\n\nPerhaps we should also drop the 'ing' from the verbs, to be more like\n...Read etc.\n\n\n", "msg_date": "Wed, 13 May 2020 14:30:09 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, May 13, 2020 at 3:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hash/Batch/Allocating\n>> Hash/Batch/Electing\n>> Hash/Batch/Loading\n>> Hash/GrowBatches/Allocating\n\n> Perhaps we should also drop the 'ing' from the verbs, to be more like\n> ...Read etc.\n\nYeah, that aspect was bothering me too. 
Comparing these to other\nwait event names, you could make a case for either \"Allocate\" or\n\"Allocation\"; but there are no other names with -ing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 22:35:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> ... I'm wondering because it seems\n> like it might be helpful to have a system view which gives all the wait\n> event types, names, and descriptions. Maybe even add a column for which\n> extension (or core) it came from. The documentation could then just explain\n> the general situation and point people at the system view to see exactly\n> which wait types exist in their system.\n\nThere's certainly an argument for doing things like that, but I think it'd\nbe a net negative in terms of quality and consistency of documentation.\nWe'd basically be deciding those are non-goals.\n\nOf course, ripping out table 27.4 altogether would be a simple solution\nto the formatting problem I started with ;-). But it doesn't really\nseem like the answer we want.\n\n> Of course if the names get passed in ad hoc then such a view could only\n> show the types that happen to have been created up to the moment it is\n> queried, which would defeat the purpose.\n\nYes, exactly.\n\nI don't actually understand why the LWLock tranche mechanism is designed\nthe way it is. It seems to be intended to support different backends\nhaving different sets of LWLocks, but I fail to see why that's a good idea,\nor even useful at all. In any case, dynamically-created LWLocks are\nclearly out of scope for the documentation. The problem that I'm trying\nto deal with right now is that even LWLocks that are hard-wired into the\nbackend code are difficult to enumerate. 
That wasn't a problem before\nwe decided we needed to expose them all to user view; but now it is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 22:54:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "I looked through the naming situation for LWLocks, and SLRUs which turn\nout to be intimately connected to that, because most of the dubious\nLWLock names we're currently exposing actually are derived from SLRUs.\nHere's some ideas (not quite a full proposal yet) for changing that.\n\nNote that because of the SLRU connection, I feel justified in treating\nthis business as an open item for v13, not something to fix later.\nWhatever your position on the mutability of wait event names, we have\nas of v13 exposed SLRU names in other places besides that: the\npg_stat_slru view shows them, and the pg_stat_reset_slru() function\nacccepts them as arguments. So there will certainly be a larger\ncompatibility penalty to be paid if we don't fix this now.\n\nFor each SLRU, there is a named \"control lock\" that guards the SLRU's\ncontrol info, and a tranche of per-buffer locks. The control locks\nare mostly named as the SLRU name followed by \"ControlLock\", though\nit should surprise nobody to hear that we haven't quite been 100% on\nthat. 
As of right now, the per-buffer tranche just has the same name\nas the SLRU, which is not great, not least because it gives the wrong\nimpression about the scopes of those locks.\n\nDigging through the existing callers of SimpleLruInit, we have\n\nname control lock subdir\n\n\"async\" AsyncCtlLock \"pg_notify\"\n\n\"clog\" CLogControlLock \"pg_xact\"\n\n\"commit_timestamp\" CommitTsControlLock \"pg_commit_ts\"\n\n\"multixact_member\" MultiXactMemberControlLock \"pg_multixact/members\"\n\n\"multixact_offset\" MultiXactOffsetControlLock \"pg_multixact/offsets\"\n\n\"oldserxid\" OldSerXidLock \"pg_serial\"\n\n\"subtrans\" SubtransControlLock \"pg_subtrans\"\n\nAfter studying that list for awhile, it seems to me that we could\ndo worse than to name the SLRUs to match their on-disk subdirectories,\nwhich are names that are already user-visible. So I propose these\nbase names for the SLRUs:\n\nNotify\nXact\nCommitTs\nMultiXactMember (or MultiXactMembers)\nMultiXactOffset (or MultiXactOffsets)\nSerial\nSubtrans\n\nI could go either way on whether to include \"s\" in the two mxact SLRU\nnames --- using \"s\" matches the on-disk names, but the other SLRU\nnames are not pluralized.\n\nI think we should expose exactly these names in the pg_stat_slru view\nand as pg_stat_reset_slru() arguments. (Maybe pg_stat_reset_slru\nshould match its argument case-insensitively ... it does not now.)\n\nAs for the control locks, they should all be named in a directly\npredictable way from their SLRUs. We could stick with the\n\"ControlLock\" suffix, or we could change to a less generic term\nlike \"SLRULock\". There are currently two locks that are named\nsomethingControlLock but are not SLRU guards:\n\nDynamicSharedMemoryControlLock\nReplicationSlotControlLock\n\nso I'd kind of like to either rename those two, or stop using\n\"ControlLock\" as the SLRU suffix, or arguably both, because \"Control\"\nis practically a noise word in this context. 
(Any renaming here will\nimply source code adjustments, but I don't think any of these locks\nare touched widely enough for that to be problematic.)\n\nAs for the per-buffer locks, maybe name those tranches as SLRU name\nplus \"BufferLock\"? Or \"BufferLocks\", to emphasize that there's not\njust one?\n\nMoving on to the other tranches that don't correspond to single\npredefined locks, I propose renaming as follows:\n\nexisting name proposed name\n\nbuffer_content BufferContent\nbuffer_io BufferIO\nbuffer_mapping BufferMapping\nlock_manager LockManager\nparallel_append ParallelAppend\nparallel_hash_join ParallelHashJoin\nparallel_query_dsa ParallelQueryDSA\npredicate_lock_manager PredicateLockManager\nproc FastPath (see below)\nreplication_origin ReplicationOrigin\nreplication_slot_io ReplicationSlotIO\nserializable_xact PerXactPredicateList (see below)\nsession_dsa PerSessionDSA\nsession_record_table PerSessionRecordType\nsession_typmod_table PerSessionRecordTypmod\nshared_tuplestore SharedTupleStore\ntbm SharedTidBitmap\nwal_insert WALInsert\n\nThese are mostly just adjusting the names to correspond to the new\nrule about spelling of multi-word names, but there are two that\nperhaps require extra discussion:\n\n\"proc\": it hardly needs pointing out that this name utterly sucks.\nI looked into the code, and the tranche corresponds to the PGPROC\n\"backendLock\" fields; that name also fails to explain anything\nat all, as does its comment. Further research shows that what those\nlocks actually guard is the \"fast path lock\" data within each PGPROC,\nthat is the \"fpXXX\" fields. I propose renaming the PGPROC fields to\n\"fpInfoLock\" and the tranche to FastPath.\n\n\"serializable_xact\": this is pretty awful as well, seeing that we\nhave half a dozen other kinds of locks related to the serializable\nmachinery, and these locks are not nearly as widely scoped as\nthe name might make you think. 
In reality, per predicate.c:\n\n * SERIALIZABLEXACT's member 'predicateLockListLock'\n * - Protects the linked list of locks held by a transaction. Only\n * needed for parallel mode, where multiple backends share the\n * same SERIALIZABLEXACT object. Not needed if\n * SerializablePredicateLockListLock is held exclusively.\n\nSo my tentative proposal for the tranche name is PerXactPredicateList,\nand the field member ought to get some name derived from that. It\nmight be better if this name included something about \"Parallel\", but\nI couldn't squeeze that in without making the name longer than I'd like.\n\nFinally, of the individually-named lwlocks (see lwlocknames.txt),\nthe only ones not related to SLRUs that I feel a need to rename\nare\n\nAsyncQueueLock => NotifyQueueLock for consistency with SLRU rename\nCLogTruncationLock => XactTruncationLock for consistency with SLRU\nSerializablePredicateLockListLock => shorten to SerializablePredicateListLock\nDynamicSharedMemoryControlLock => drop \"Control\"?\nReplicationSlotControlLock => drop \"Control\"?\n\nLastly there's the issue of whether we want to drop the \"Lock\" suffix\nin the names of these locks as presented in the wait_event data.\nI'm kind of inclined to do so, just for brevity. Also, if we don't\ndo that, then it seems like the tranche names for the\nnot-individually-named locks ought to gain a suffix, like \"Locks\".\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 17:29:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "The attached patch doesn't actually change any LWLock names, but it\nis a useful start on that project. What it does is to get rid of the\ncurrent scheme of dynamically registering the names of built-in\nLWLocks in favor of having a constant array of those names. 
It's\ncompletely foolish to expend process-startup cycles on constructing\nan array of constant data; moreover, the way things are done now\nresults in the tranche names being defined all over creation. I draw\na short straight line between that technique and the lack of consistency\nin the tranche names. Given that we have an enum in lwlock.h enumerating\nthe built-in tranches, there's certainly no expectation that somebody's\ngoing to create a new one without letting the lwlock module know about\nit, so this gives up no flexibility. In fact, it adds some, because\nwe can now name an SLRU's buffer-locks tranche whatever we want ---\nit's not hard-wired as being the same as the SLRU's base name.\n\nThe dynamic registration mechanism is still there, but it's now\n*only* used if you load an extension that creates dynamic LWLocks.\n\nAt some point it might be interesting to generate the enum\nBuiltinTrancheIds and the BuiltinTrancheNames array from a common\nsource file, as we do for lwlocknames.h/.c. I didn't feel a need\nto make that happen today, though.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 13 May 2020 21:29:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Tue, May 12, 2020 at 10:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't actually understand why the LWLock tranche mechanism is designed\n> the way it is. It seems to be intended to support different backends\n> having different sets of LWLocks, but I fail to see why that's a good idea,\n> or even useful at all. In any case, dynamically-created LWLocks are\n> clearly out of scope for the documentation. The problem that I'm trying\n> to deal with right now is that even LWLocks that are hard-wired into the\n> backend code are difficult to enumerate. 
That wasn't a problem before\n> we decided we needed to expose them all to user view; but now it is.\n\nIf you are using parallel query, your backend might have a DSM segment\nthat contains LWLocks. Anyone who is not attached to that DSM segment\nwon't know about them, though possibly they have a different DSM\nsegment containing a tranche with the same name.\n\nAlso, if you are using extensions that use LWLocks, a particular\nextension may be loaded into backend A but not backend B. Suppose\nbackend A has an extension loaded but backend B does not. Then suppose\nthat A waits for an LWLock and B meanwhile examines pg_stat_activity.\n\nIt's hard to come up with totally satisfying solutions to problems\nlike these, but I think the important thing to remember is that, at\nleast in the extension case, this is really an obscure corner case.\nNormally the same extensions will be loaded everywhere and anything\nknown to one backend will be known also the others. However, it's not\nguaranteed.\n\nI tend to prefer that modules register their own tranches rather than\nhaving a central table someplace, because I like the idea that the\nthings that a particular module knows about are contained within its\nown source files and not spread all over the code base. I think that\nwe've done rather badly with this in a number of places, lwlock.c\namong them, and I don't like it much. It tends to lead to layering\nviolations, and it also tends to create interfaces that extensions\ncan't really use. I admit that it's not ideal as far as this\nparticular problem is concerned, but I don't think that makes it a bad\nidea in general. In some sense, the lack of naming consistency here is\na manifestation of an underlying chaos in the code: we've created many\ndifferent ways of waiting for things with many different\ncharacteristics and little thought to consistency, and this mechanism\nhas exposed that underlying problem. 
The wait state interface is\nsurely not to blame for the fact that LWLock names and heavyweight\nlock types are capitalized inconsistently. In fact, there seem to be\nfew things that PostgreSQL hackers love more than inconsistent\ncapitalization, and this is just one of a whole lot of instances of\nit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 14:16:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I tend to prefer that modules register their own tranches rather than\n> having a central table someplace, because I like the idea that the\n> things that a particular module knows about are contained within its\n> own source files and not spread all over the code base. I think that\n> we've done rather badly with this in a number of places, lwlock.c\n> among them, and I don't like it much.\n\nWell, we could solve this problem very easily by ripping out everything\nhaving to do with wait-state monitoring ... and personally I'd be a lot\nin favor of that, because I haven't seen anything about either the\ndesign or documentation of the feature that I thought was very well\ndone. 
However, if you'd like to have wait-state monitoring, and you'd\nlike the documentation for it to be more useful than \"go read the code\",\nthen I don't see any way around the conclusion that there are going to\nbe centralized lists of the possible wait states.\n\nThat being the case, refusing to use a centralized list in the\nimplementation seems rather pointless; and having some aspects of the\nimplementation use centralized lists (see the enums in lwlock.h and\nelsewhere) while other aspects don't is just schizophrenic.\n\n> In some sense, the lack of naming consistency here is\n> a manifestation of an underlying chaos in the code: we've created many\n> different ways of waiting for things with many different\n> characteristics and little thought to consistency, and this mechanism\n> has exposed that underlying problem.\n\nYeah, agreed. Nonetheless, now we have a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 May 2020 14:54:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On Thu, May 14, 2020 at 2:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, we could solve this problem very easily by ripping out everything\n> having to do with wait-state monitoring ... and personally I'd be a lot\n> in favor of that, because I haven't seen anything about either the\n> design or documentation of the feature that I thought was very well\n> done.\n\nWell, I'm going to disagree with that, but opinions can vary. If I'd\ntried to create naming consistency here when I created this stuff, I\nwould've had to rename things in existing systems rather than just\nexpose what was already there, and that wasn't the goal of the patch,\nand I don't see a very good reason why it should have been. Providing\ninformation is a separate project from cleaning up naming. 
And, while\nI don't love the fact that people have added new things without trying\nvery hard to be consistent with existing things all that much, I still\ndon't think inconsistent naming rises to the level of a disaster.\n\n> However, if you'd like to have wait-state monitoring, and you'd\n> like the documentation for it to be more useful than \"go read the code\",\n> then I don't see any way around the conclusion that there are going to\n> be centralized lists of the possible wait states.\n>\n> That being the case, refusing to use a centralized list in the\n> implementation seems rather pointless; and having some aspects of the\n> implementation use centralized lists (see the enums in lwlock.h and\n> elsewhere) while other aspects don't is just schizophrenic.\n\nThere's something to that argument, especially it enable us to\nauto-generate the documentation tables.\n\nThat being said, my view of this system is that it's good to document\nthe wait events that we have, but also that there are almost certainly\ngoing to be cases where we can't say a whole lot more than \"go read\nthe code,\" or at least not without an awful lot of work. I think\nthere's a reasonable chance that someone who sees a lot of ClientRead\nor DataFileWrite wait events will have some idea what kind of problem\nis indicated, even without consulting the documentation and even\nmoreso if we have some good documentation which they can consult. But\nI don't know what anybody's going to do if they see a lot of\nOldSerXidLock or AddinShmemInitLock contention. Presumably those are\ncases that the developers thought were unlikely, or they'd have chosen\na different locking regimen. 
If they were wrong, I think it's a good\nthing for users to have a relatively easy way to find that out, but\nI'm not sure what anybody's going to do be able to do about it without\npatching the code, or at least looking at it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 15:37:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That being said, my view of this system is that it's good to document\n> the wait events that we have, but also that there are almost certainly\n> going to be cases where we can't say a whole lot more than \"go read\n> the code,\" or at least not without an awful lot of work.\n\nCan't disagree with that.\n\n> I think\n> there's a reasonable chance that someone who sees a lot of ClientRead\n> or DataFileWrite wait events will have some idea what kind of problem\n> is indicated, even without consulting the documentation and even\n> moreso if we have some good documentation which they can consult. But\n> I don't know what anybody's going to do if they see a lot of\n> OldSerXidLock or AddinShmemInitLock contention.\n\nI submit that at least part of the problem is precisely one of crappy\nnaming. I didn't know what OldSerXidLock did either, until yesterday\nwhen I dug into the code to find out. If it's named something like\n\"SerialSLRULock\", then at least somebody who has heard of SLRUs will\nhave an idea of what is indicated. And we are exposing the notion\nof SLRUs pretty prominently in the monitoring docs as of v13, so that's\nnot an unreasonable presumption.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 May 2020 15:58:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." 
}, { "msg_contents": "On Thu, May 14, 2020 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I submit that at least part of the problem is precisely one of crappy\n> naming. I didn't know what OldSerXidLock did either, until yesterday\n> when I dug into the code to find out. If it's named something like\n> \"SerialSLRULock\", then at least somebody who has heard of SLRUs will\n> have an idea of what is indicated. And we are exposing the notion\n> of SLRUs pretty prominently in the monitoring docs as of v13, so that's\n> not an unreasonable presumption.\n\nTo the extent that exposing some of this information causes us to\nthink more carefully about the naming, I think that's all to the good.\nI don't expect such measures to solve all of our problems in this\narea, but the idea that we can choose names with no consideration to\nwhether anybody can understand what they mean is wrong even when the\naudience is only developers.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 May 2020 16:03:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "I wrote:\n> Digging through the existing callers of SimpleLruInit, we have\n> name control lock subdir\n> \"async\" AsyncCtlLock \"pg_notify\"\n> \"clog\" CLogControlLock \"pg_xact\"\n> \"commit_timestamp\" CommitTsControlLock \"pg_commit_ts\"\n> \"multixact_member\" MultiXactMemberControlLock \"pg_multixact/members\"\n> \"multixact_offset\" MultiXactOffsetControlLock \"pg_multixact/offsets\"\n> \"oldserxid\" OldSerXidLock \"pg_serial\"\n> \"subtrans\" SubtransControlLock \"pg_subtrans\"\n\n> After studying that list for awhile, it seems to me that we could\n> do worse than to name the SLRUs to match their on-disk subdirectories,\n> which are names that are already user-visible. 
So I propose these\n> base names for the SLRUs:\n\n> Notify\n> Xact\n> CommitTs\n> MultiXactMember (or MultiXactMembers)\n> MultiXactOffset (or MultiXactOffsets)\n> Serial\n> Subtrans\n\nAs POC for this, here's a draft patch to rename the \"async\" SLRU and\nassociated locks. If people are good with this then I'll go through\nand do similarly for the other SLRUs.\n\nA case could be made for doing s/async/notify/ more widely in async.c;\nfor instance it's odd that the struct protected by NotifyQueueLock\ndidn't get renamed to NotifyQueueControl. But that seems a bit out\nof scope for the immediate problem, and anyway I'm not sure how far to\ntake it. I don't really want to rename async.c's externally-visible\nfunctions, for instance. For the moment I just renamed symbols used\nin the SimpleLruInit() call.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 14 May 2020 16:27:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "On 2020-May-14, Tom Lane wrote:\n\n> A case could be made for doing s/async/notify/ more widely in async.c;\n> for instance it's odd that the struct protected by NotifyQueueLock\n> didn't get renamed to NotifyQueueControl. But that seems a bit out\n> of scope for the immediate problem, and anyway I'm not sure how far to\n> take it. I don't really want to rename async.c's externally-visible\n> functions, for instance. 
For the moment I just renamed symbols used\n> in the SimpleLruInit() call.\n\nThat approach seems fine -- we'd only rename those things if and when we\nmodified them for other reasons; and the file itself, probably not at all.\nMuch like our renaming of XLOG to WAL, we changed the user-visible term\nall at once, but the code kept the original names until changed.\n\n\nMaybe in N years, when the SCM tooling is much better (so that it\ndoesn't get confused by us having renamed the file in the newer branches\nand back-patching to an older branch), we can rename xlog.c to wal.c and\nasync.c to notify.c. Or maybe not.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 14 May 2020 16:56:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Our naming of wait events is a disaster." }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Wed, May 13, 2020 at 3:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Hash/Batch/Allocating\n>>> Hash/Batch/Electing\n>>> Hash/Batch/Loading\n>>> Hash/GrowBatches/Allocating\n\n>> Perhaps we should also drop the 'ing' from the verbs, to be more like\n>> ...Read etc.\n\n> Yeah, that aspect was bothering me too. 
Comparing these to other\n> wait event names, you could make a case for either \"Allocate\" or\n> \"Allocation\"; but there are no other names with -ing.\n\nAfter contemplating these for a bit, my proposal is to drop the\nslashes and convert \"verbing\" to \"verb\", giving\n\nHashBatchAllocate\nHashBatchElect\nHashBatchLoad\nHashBuildAllocate\nHashBuildElect\nHashBuildHashInner\nHashBuildHashOuter\nHashGrowBatchesAllocate\nHashGrowBatchesDecide\nHashGrowBatchesElect\nHashGrowBatchesFinish\nHashGrowBatchesRepartition\nHashGrowBucketsAllocate\nHashGrowBucketsElect\nHashGrowBucketsReinsert\n\nIn addition to that, I think the ClogGroupUpdate event needs to be renamed\nto XactGroupUpdate, since we changed \"clog\" to \"xact\" in the exposed SLRU\nand LWLock names.\n\n(There are some other names that I wouldn't have picked in a green field,\nbut it's probably not worth the churn to change them.)\n\nAlso, as previously noted, ProcSignalBarrier should be in the IPC event\nclass not IO.\n\nBarring objections, I'll make these things happen before beta1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 May 2020 10:46:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Our naming of wait events is a disaster." } ]
[ { "msg_contents": "I happened to come across this code added by 28cac71bd:\n\nstatic PgStat_MsgSLRU *\nslru_entry(SlruCtl ctl)\n{\n int idx = pgstat_slru_index(ctl->shared->lwlock_tranche_name);\n\n Assert((idx >= 0) && (idx < SLRU_NUM_ELEMENTS));\n\n return &SLRUStats[idx];\n}\n\nwhich is invoked by all the pgstat_count_slru_XXX routines.\nThis seems mightily inefficient --- the count functions are\njust there to increment integer counters, but first they\nhave to do up to half a dozen strcmp's to figure out which\ncounter to increment.\n\nWe could improve this by adding another integer field to\nSlruSharedData (which is already big enough that no one\nwould notice) and recording the result of pgstat_slru_index()\nthere as soon as the lwlock_tranche_name is set. (In fact,\nit looks like we could stop saving the tranche name as such\naltogether, thus buying back way more shmem than the integer\nfield would occupy.)\n\nThis does require the assumption that all backends agree\non the SLRU stats index for a particular SLRU cache. 
But\nAFAICS we're locked into that already, since the backends\nuse those indexes to tell the stats collector which cache\nthey're sending stats for.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 12 May 2020 15:50:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Inefficiency in SLRU stats collection" }, { "msg_contents": "At Tue, 12 May 2020 15:50:35 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I happened to come across this code added by 28cac71bd:\n> \n> static PgStat_MsgSLRU *\n> slru_entry(SlruCtl ctl)\n> {\n> int idx = pgstat_slru_index(ctl->shared->lwlock_tranche_name);\n> \n> Assert((idx >= 0) && (idx < SLRU_NUM_ELEMENTS));\n> \n> return &SLRUStats[idx];\n> }\n> \n> which is invoked by all the pgstat_count_slru_XXX routines.\n> This seems mightily inefficient --- the count functions are\n> just there to increment integer counters, but first they\n> have to do up to half a dozen strcmp's to figure out which\n> counter to increment.\n> \n> We could improve this by adding another integer field to\n> SlruSharedData (which is already big enough that no one\n> would notice) and recording the result of pgstat_slru_index()\n> there as soon as the lwlock_tranche_name is set. (In fact,\n> it looks like we could stop saving the tranche name as such\n> altogether, thus buying back way more shmem than the integer\n> field would occupy.)\n\nI noticed that while trying to move that stuff into shmem-stats patch.\n\nI think we can get rid of SlruCtl->shared->lwlock_tranche_name since\nthe only user is the slru_entry() and no external modules don't look\ninto that depth and there's a substitute way to know the name for\nthem.\n\n> This does require the assumption that all backends agree\n> on the SLRU stats index for a particular SLRU cache. 
But\n> AFAICS we're locked into that already, since the backends\n> use those indexes to tell the stats collector which cache\n> they're sending stats for.\n> \n> Thoughts?\n\nAFAICS it is right and the change suggested looks reasonable to me.\nOne arguable point might be whether it is right that SlruData holds\npgstats internal index from the standpoint of modularity. (It is one\nof the reasons I didn't propose a patch for that..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 13 May 2020 11:26:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inefficiency in SLRU stats collection" }, { "msg_contents": "\n\nOn 2020/05/13 11:26, Kyotaro Horiguchi wrote:\n> At Tue, 12 May 2020 15:50:35 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>> I happened to come across this code added by 28cac71bd:\n>>\n>> static PgStat_MsgSLRU *\n>> slru_entry(SlruCtl ctl)\n>> {\n>> int idx = pgstat_slru_index(ctl->shared->lwlock_tranche_name);\n>>\n>> Assert((idx >= 0) && (idx < SLRU_NUM_ELEMENTS));\n>>\n>> return &SLRUStats[idx];\n>> }\n>>\n>> which is invoked by all the pgstat_count_slru_XXX routines.\n>> This seems mightily inefficient --- the count functions are\n>> just there to increment integer counters, but first they\n>> have to do up to half a dozen strcmp's to figure out which\n>> counter to increment.\n>>\n>> We could improve this by adding another integer field to\n>> SlruSharedData (which is already big enough that no one\n>> would notice) and recording the result of pgstat_slru_index()\n>> there as soon as the lwlock_tranche_name is set. 
(In fact,\n>> it looks like we could stop saving the tranche name as such\n>> altogether, thus buying back way more shmem than the integer\n>> field would occupy.)\n\nSounds good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 13 May 2020 21:21:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Inefficiency in SLRU stats collection" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> AFAICS it is right and the change suggested looks reasonable to me.\n> One arguable point might be whether it is right that SlruData holds\n> pgstats internal index from the standpoint of modularity. (It is one\n> of the reasons I didn't propose a patch for that..)\n\nYeah, this is a fair point. On the other hand, the existing code has\npgstat.c digging into the SLRU control structure, which is as bad or\nworse a modularity violation. Perhaps we could ditch that by having\nslru.c obtain and store the integer index which it then passes to\nthe pgstat.c counter routines, rather than passing a SlruCtl pointer.\n\nI'll have to look at whether 28cac71bd exposed a data structure that\nwas formerly private, but if it did I'd be VERY strongly inclined\nto revert that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 10:42:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Inefficiency in SLRU stats collection" } ]
[ { "msg_contents": "Hi,\n\npgstat_read_statsfiles() sets each stat_reset_timestamp to\nthe current timestamp, at the beginning of the function,\njust in case we fail to load the existing statsfile. This code is\noriginally introduced by commit 4c468b37a2.\n\nBut commit ad1b5c842b changed pgstat_read_statsfiles() so that\nthe stats including stat_reset_timestamp are zeroed in that case,\nso now there seems no need to set each stat_reset_timestamp.\nThought?\n\nAttached is the patch that removes such unnecessary sets of\nstat_reset_timestamp from pgstat_read_statsfiles().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 13 May 2020 21:06:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "pgstat_read_statsfiles() and reset timestamp" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> pgstat_read_statsfiles() sets each stat_reset_timestamp to\n> the current timestamp, at the beginning of the function,\n> just in case we fail to load the existing statsfile. This code is\n> originally introduced by commit 4c468b37a2.\n\n> But commit ad1b5c842b changed pgstat_read_statsfiles() so that\n> the stats including stat_reset_timestamp are zeroed in that case,\n> so now there seems no need to set each stat_reset_timestamp.\n\nHuh? The zeroing happens before those fields are filled.\n\n> Attached is the patch that removes such unnecessary sets of\n> stat_reset_timestamp from pgstat_read_statsfiles().\n\n-1, minus a lot actually. What this will do is that if there's\nno stats file, the reset timestamps will all read as whatever\nour epoch timestamp is (2000-01-01, I think). 
This is not a\ncorner case, either --- it's the expected path at first start.\nWe want current time to be used in that case.\n\nIf there are any code paths in pgstat_read_statsfiles that\nre-zero these structs later, they need to be fixed to restore\nthe reset timestamps to these values, as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 May 2020 18:24:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgstat_read_statsfiles() and reset timestamp" } ]
[ { "msg_contents": "Hi,\n\nWhen track_io_timing is on, I/O timing information is\ndisplayed in pg_stat_database, in the output of EXPLAIN\nwhen the BUFFERS option is used, and in pg_stat_statements\nas documented in [1].\n\nThis is also described in the manual for pg_stat_statements\n[2], however, manuals for pg_stat_database and EXPLAIN\ndoesn't refer to it.\n\nI think it'll be better to add descriptions to both of them\nfor consistency.\n\nThoughts?\n\n\n[1]\nhttps://www.postgresql.org/docs/devel/runtime-config-statistics.html#GUC-TRACK-IO-TIMING\n[2]\nhttps://www.postgresql.org/docs/devel/pgstatstatements.html#PGSTATSTATEMENTS-COLUMNS\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Wed, 13 May 2020 21:54:27 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Add explanations which are influenced by track_io_timing" }, { "msg_contents": "On 2020/05/13 21:54, Atsushi Torikoshi wrote:\n> Hi,\n> \n> When track_io_timing is on, I/O timing information is\n> displayed in pg_stat_database, in the output of EXPLAIN\n> when the BUFFERS option is used, and in pg_stat_statements\n> as documented in [1].\n> \n> This is also described in the manual for pg_stat_statements\n> [2], however, manuals for pg_stat_database and EXPLAIN\n> doesn't refer to it.\n> \n> I think it'll be better to add descriptions to both of them\n> for consistency.\n> \n> Thoughts?\n\n\n+1\n\n+ in milliseconds(if <xref linkend=\"guc-track-io-timing\"/> is enabled,\n+ otherwise zero)\n\nIt's better to add a space character just after \"seconds\".\n\n- written.\n+ written. 
In addition, If <xref linkend=\"guc-track-io-timing\"/> is enabled,\n+ also include I/O Timings.\n\nIsn't it better to just use clearer description like \"the time reading and\nwriting data blocks\" here instead of \"I/O Timing\"?\nWhat about the attached patch based on yours?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 13 May 2020 23:27:30 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add explanations which are influenced by track_io_timing" }, { "msg_contents": "Thanks for reviewing!\n\nOn Wed, May 13, 2020 at 11:27 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> What about the attached patch based on yours?\n\n\nIt looks better.\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Fri, 15 May 2020 09:50:41 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add explanations which are influenced by track_io_timing" }, { "msg_contents": "\n\nOn 2020/05/15 9:50, Atsushi Torikoshi wrote:\n> Thanks for reviewing!\n> \n> On Wed, May 13, 2020 at 11:27 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> \n>     What about the attached patch based on yours?\n> \n> \n> It looks better.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 22 May 2020 23:37:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add explanations which are influenced by track_io_timing" }, { "msg_contents": "On Fri, May 22, 2020 at 11:37 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2020/05/15 9:50, Atsushi Torikoshi wrote:\n> > Thanks for reviewing!\n> >\n> > On Wed, May 13, 2020 at 11:27 PM Fujii Masao <\n> masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> >\n> > What about the attached patch based on yours?\n> >\n> >\n> > It looks better.\n>\n> Pushed. Thanks!\n>\n\nThanks for reviewing and improvements!\n\nRegards,\n\n--\nAtsushi Torikoshi", "msg_date": "Mon, 25 May 2020 10:57:17 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add explanations which are influenced by track_io_timing" } ]
[ { "msg_contents": "I read about the utility – https://github.com/dataegret/pgcompacttable . It\nis more careful about resources, because it works on slightly different\nprinciples. The main point of pgcompacttable is that it moves all live rows\nto the beginning of the table with updates in the table. And then it starts\na vacuum on this table, because we know that we have live rows at the\nbeginning and dead rows at the end. And the vacuum itself cuts off this\ntail, i.e. it does not require much additional disk space. Let the\nauto-vacuum or some other process do this.\n\n-- \nС уважением, Антон Пацев.\nBest regards, Anton Patsev.", "msg_date": "Wed, 13 May 2020 20:28:52 +0600", "msg_from": "=?UTF-8?B?0JDQvdGC0L7QvSDQn9Cw0YbQtdCy?= <patsev.anton@gmail.com>", "msg_from_op": true, "msg_subject": "Ideas about moving live rows to the top of the table" } ]
[ { "msg_contents": "The comment which refers to the OpenSSL PEM password callback type has a small\ntypo, the type is called pem_password_cb and not pem_passwd_cb (which is an\neasy typo to make to make since confusingly enough the functions in OpenSSL are\ncalled SSL_*_passwd_cb). PFA patch to fix this.\n\ncheers ./daniel", "msg_date": "Thu, 14 May 2020 10:07:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Incorrect OpenSSL type reference in code comment" }, { "msg_contents": "On 14/05/2020 11:07, Daniel Gustafsson wrote:\n> The comment which refers to the OpenSSL PEM password callback type has a small\n> typo, the type is called pem_password_cb and not pem_passwd_cb (which is an\n> easy typo to make to make since confusingly enough the functions in OpenSSL are\n> called SSL_*_passwd_cb). PFA patch to fix this.\n\nApplied, thanks!\n\n- Heikki\n\n\n", "msg_date": "Thu, 14 May 2020 13:58:01 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Incorrect OpenSSL type reference in code comment" } ]
[ { "msg_contents": "Reviewing TLS changes for v13 I came across one change which I think might be\nbetter off with a library qualified name. The libpq frontend sslpassword hook\nadded in 4dc63552109f65 is OpenSSL specific, but it has a very generic name:\n\n\tPQsetSSLKeyPassHook(PQsslKeyPassHook_type hook);\n\nThis IMO has potential for confusion if we ever commit another TLS backend,\nsince the above hook wont work for any other library (except maybe OpenSSL\nderivatives like LibreSSL et.al). The backends will always have differently\nnamed hooks, as the signatures will be different, but having one with a generic\nname and another with a library qualified name doesn't seem too friendly to\nanyone implementing with libpq.\n\nAs a point of reference; in the backend we added a TLS init hook in commit\n896fcdb230e72 which also is specific to OpenSSL, but the name is library\nqualified making the purpose and usecase perfectly clear: openssl_tls_init_hook.\n\nSince we haven't shipped this there is still time to rename, which IMO is the\nright way forward. PQsslKeyPassHook_<library>_type would be one option, but\nperhaps there are better alternatives?\n\nThoughts?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 14 May 2020 15:03:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "On Thu, May 14, 2020 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Reviewing TLS changes for v13 I came across one change which I think might\n> be\n> better off with a library qualified name. The libpq frontend sslpassword\n> hook\n> added in 4dc63552109f65 is OpenSSL specific, but it has a very generic\n> name:\n>\n> PQsetSSLKeyPassHook(PQsslKeyPassHook_type hook);\n>\n> This IMO has potential for confusion if we ever commit another TLS backend,\n> since the above hook wont work for any other library (except maybe OpenSSL\n> derivatives like LibreSSL et.al). 
The backends will always have\n> differently\n> named hooks, as the signatures will be different, but having one with a\n> generic\n> name and another with a library qualified name doesn't seem too friendly to\n> anyone implementing with libpq.\n>\n> As a point of reference; in the backend we added a TLS init hook in commit\n> 896fcdb230e72 which also is specific to OpenSSL, but the name is library\n> qualified making the purpose and usecase perfectly clear:\n> openssl_tls_init_hook.\n>\n> Since we haven't shipped this there is still time to rename, which IMO is\n> the\n> right way forward. PQsslKeyPassHook_<library>_type would be one option,\n> but\n> perhaps there are better alternatives?\n>\n> Thoughts?\n>\n>\nISTM this should be renamed yeah -- and it should probably go on the open\nitem lists, and with the schedule for the beta perhaps dealt with rather\nurgently?\n\n//Magnus", "msg_date": "Fri, 15 May 2020 23:46:38 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "On 2020-May-15, Magnus Hagander wrote:\n\n> On Thu, May 14, 2020 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > Since we haven't shipped this there is still time to rename, which\n> > IMO is the right way forward. PQsslKeyPassHook_<library>_type would\n> > be one option, but perhaps there are better alternatives?\n>\n> ISTM this should be renamed yeah -- and it should probably go on the open\n> item lists, and with the schedule for the beta perhaps dealt with rather\n> urgently?\n\nSeems easy enough to do! 
+1 on Daniel's suggested renaming.\n\nCCing Andrew as committer.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 15 May 2020 20:08:22 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Thu, May 14, 2020 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Since we haven't shipped this there is still time to rename, which IMO\n>> is the right way forward. PQsslKeyPassHook_<library>_type would be one\n>> option, but perhaps there are better alternatives?\n\n> ISTM this should be renamed yeah -- and it should probably go on the open\n> item lists, and with the schedule for the beta perhaps dealt with rather\n> urgently?\n\n+1. 
Once beta1 is out the cost to change the name goes up noticeably.\n> Not that we *couldn't* do it later, but it'd be better to have it be\n> right in beta1.\n\n+1 on all of the above.\n\nI noticed this has been added to Open Items; I added a note that the\nplan is to fix before the Beta 1 wrap.\n\nJonathan", "msg_date": "Fri, 15 May 2020 21:21:52 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "On Fri, May 15, 2020 at 09:21:52PM -0400, Jonathan S. Katz wrote:\n> +1 on all of the above.\n> \n> I noticed this has been added to Open Items; I added a note that the\n> plan is to fix before the Beta 1 wrap.\n\n+1. Thanks.\n\nAgreed. PQsslKeyPassHook_<library>_type sounds fine to me as\nconvention. Wouldn't we want to also rename PQsetSSLKeyPassHook and\nPQgetSSLKeyPassHook, appending an \"_OpenSSL\" to both?\n--\nMichael", "msg_date": "Sat, 16 May 2020 10:56:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "> On 16 May 2020, at 03:56, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, May 15, 2020 at 09:21:52PM -0400, Jonathan S. Katz wrote:\n>> +1 on all of the above.\n>> \n>> I noticed this has been added to Open Items; I added a note that the\n>> plan is to fix before the Beta 1 wrap.\n> \n> +1. Thanks.\n> \n> Agreed. PQsslKeyPassHook_<library>_type sounds fine to me as\n> convention. Wouldn't we want to also rename PQsetSSLKeyPassHook and\n> PQgetSSLKeyPassHook, appending an \"_OpenSSL\" to both?\n\nYes, I think we should. 
The attached performs the rename of the hook functions\nand the type, and also fixes an off-by-one-'=' in a header comment which my OCD\ncouldn't unsee.\n\ncheers ./daniel", "msg_date": "Sat, 16 May 2020 09:16:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "\nOn 5/15/20 8:08 PM, Alvaro Herrera wrote:\n> On 2020-May-15, Magnus Hagander wrote:\n>\n>> On Thu, May 14, 2020 at 3:03 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Since we haven't shipped this there is still time to rename, which\n>>> IMO is the right way forward. PQsslKeyPassHook_<library>_type would\n>>> be one option, but perhaps there are better alternatives?\n>> ISTM this should be renamed yeah -- and it should probably go on the open\n>> item lists, and with the schedule for the beta perhaps dealt with rather\n>> urgently?\n> Seems easy enough to do! +1 on Daniel's suggested renaming.\n>\n> CCing Andrew as committer.\n>\n\nI'll attend to this today.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 16 May 2020 09:40:57 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "On 5/16/20 3:16 AM, Daniel Gustafsson wrote:\n>> On 16 May 2020, at 03:56, Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, May 15, 2020 at 09:21:52PM -0400, Jonathan S. Katz wrote:\n>>> +1 on all of the above.\n>>>\n>>> I noticed this has been added to Open Items; I added a note that the\n>>> plan is to fix before the Beta 1 wrap.\n>>\n>> +1. Thanks.\n>>\n>> Agreed. PQsslKeyPassHook_<library>_type sounds fine to me as\n>> convention. 
Wouldn't we want to also rename PQsetSSLKeyPassHook and\n>> PQgetSSLKeyPassHook, appending an \"_OpenSSL\" to both?\n> \n> Yes, I think we should. The attached performs the rename of the hook functions\n> and the type, and also fixes an off-by-one-'=' in a header comment which my OCD\n> couldn't unsee.\n\nReviewed, overall looks good to me. My question is around the name. It\nappears the convention is to do \"openssl\" on hooks[1], with the\nconvention being a single hook I could find. But scanning the codebase,\nit appears we either use \"OPENSSL\" for definers and \"openssl\" in\nfunction names.\n\nSo, my 2¢ is to use all lowercase to stick with convention.\n\nThanks!\n\nJonathan\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/libpq/libpq-be.h;hb=HEAD#l293", "msg_date": "Sat, 16 May 2020 10:33:50 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 16 May 2020, at 03:56, Michael Paquier <michael@paquier.xyz> wrote:\n>> Agreed. PQsslKeyPassHook_<library>_type sounds fine to me as\n>> convention. Wouldn't we want to also rename PQsetSSLKeyPassHook and\n>> PQgetSSLKeyPassHook, appending an \"_OpenSSL\" to both?\n\n> Yes, I think we should. The attached performs the rename of the hook functions\n> and the type, and also fixes an off-by-one-'=' in a header comment which my OCD\n> couldn't unsee.\n\nThe patch as committed missed renaming PQgetSSLKeyPassHook() itself,\nbut did rename its result type, which seemed to me to be clearly\nwrong. 
I took it on myself to fix that up, and also to fix exports.txt\nwhich some of the buildfarm insists be correct ;-)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 May 2020 19:47:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" }, { "msg_contents": "\nOn 5/16/20 7:47 PM, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 16 May 2020, at 03:56, Michael Paquier <michael@paquier.xyz> wrote:\n>>> Agreed. PQsslKeyPassHook_<library>_type sounds fine to me as\n>>> convention. Wouldn't we want to also rename PQsetSSLKeyPassHook and\n>>> PQgetSSLKeyPassHook, appending an \"_OpenSSL\" to both?\n>> Yes, I think we should. The attached performs the rename of the hook functions\n>> and the type, and also fixes an off-by-one-'=' in a header comment which my OCD\n>> couldn't unsee.\n> The patch as committed missed renaming PQgetSSLKeyPassHook() itself,\n> but did rename its result type, which seemed to me to be clearly\n> wrong. I took it on myself to fix that up, and also to fix exports.txt\n> which some of the buildfarm insists be correct ;-)\n>\n> \t\t\t\n\n\n\nargh! thanks!\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 16 May 2020 21:16:34 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Potentially misleading name of libpq pass phrase hook" } ]
[ { "msg_contents": "Hi,\nItemPointerData, on the contrary, from what the name says,\nit is not a pointer to a structure, but a structure in fact.\nWhen assigning the name of the structure variable to a pointer, it may even\nwork,\nbut, it is not the right thing to do and it becomes a nightmare,\nto discover that any other error they have is at cause.\n\nSo:\n1. In some cases, there may be a misunderstanding in the use of\nItemPointerData.\n2. When using the variable name in an assignment, the variable's address is\nused.\n3. While this works for a structure, it shouldn't be the right thing to do.\n4. If we have a local variable, its scope is limited and when it loses its\nscope, memory is certainly garbage.\n5. While this may be working for heapam.c, I believe it is being abused and\nshould be compliant with\n the Postgres API and use the functions that were created for this.\n\nThe patch is primarily intended to correct the use of ItemPointerData.\nBut it is also changing the style, reducing the scope of some variables.\nIf that was not acceptable, reduce the scope and someone has objections,\nI can change the patch, to focus only on the use of ItemPointerData.\nBut as style changes are rare, if possible, it would be good to seize the\nopportunity.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 14 May 2020 14:40:50 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "\n\n> On May 14, 2020, at 10:40 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> Hi,\n> ItemPointerData, on the contrary, from what the name says, \n> it is not a pointer to a structure, but a structure in fact.\n\nThe project frequently uses the pattern\n\n typedef struct FooData {\n ...\n } FooData;\n\n typedef FooData *Foo;\n\nwhere, in this example, \"Foo\" = \"ItemPointer\".\n\nSo the \"Data\" part of \"ItemPointerData\" clues the reader into the fact that ItemPointerData is a 
struct, not a pointer. Granted, the \"Pointer\" part of that name may confuse some readers, but the struct itself does contain what is essentially a 48-bit pointer, so that name is not nuts.\n\n\n> When assigning the name of the structure variable to a pointer, it may even work, \n> but, it is not the right thing to do and it becomes a nightmare, \n> to discover that any other error they have is at cause.\n\nCan you give a code example of the type of assigment you mean?\n\n> So:\n> 1. In some cases, there may be a misunderstanding in the use of ItemPointerData.\n> 2. When using the variable name in an assignment, the variable's address is used.\n> 3. While this works for a structure, it shouldn't be the right thing to do.\n> 4. If we have a local variable, its scope is limited and when it loses its scope, memory is certainly garbage.\n> 5. While this may be working for heapam.c, I believe it is being abused and should be compliant with \n> the Postgres API and use the functions that were created for this.\n> \n> The patch is primarily intended to correct the use of ItemPointerData.\n> But it is also changing the style, reducing the scope of some variables.\n> If that was not acceptable, reduce the scope and someone has objections, \n> I can change the patch, to focus only on the use of ItemPointerData.\n> But as style changes are rare, if possible, it would be good to seize the opportunity.\n\nI would like to see a version of the patch that only addresses your concerns about ItemPointerData, not because other aspects of the patch are unacceptable (I'm not ready to even contemplate that yet), but because I'm not sure what your complaint is about. 
Can you restrict the patch to just address that one issue?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 14 May 2020 11:03:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> The patch is primarily intended to correct the use of ItemPointerData.\n\nWhat do you think is being \"corrected\" here? It looks to me like\njust some random code rearrangements that aren't even clearly\nbug-free, let alone being stylistic improvements.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 14 May 2020 14:07:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "Em qui., 14 de mai. de 2020 às 15:03, Mark Dilger <\nmark.dilger@enterprisedb.com> escreveu:\n\n>\n>\n> > On May 14, 2020, at 10:40 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi,\n> > ItemPointerData, on the contrary, from what the name says,\n> > it is not a pointer to a structure, but a structure in fact.\n>\n> The project frequently uses the pattern\n>\n> typedef struct FooData {\n> ...\n> } FooData;\n>\n> typedef FooData *Foo;\n>\n> where, in this example, \"Foo\" = \"ItemPointer\".\n>\n> So the \"Data\" part of \"ItemPointerData\" clues the reader into the fact\n> that ItemPointerData is a struct, not a pointer. 
Granted, the \"Pointer\"\n> part of that name may confuse some readers, but the struct itself does\n> contain what is essentially a 48-bit pointer, so that name is not nuts.\n>\n>\n> > When assigning the name of the structure variable to a pointer, it may\n> even work,\n> > but, it is not the right thing to do and it becomes a nightmare,\n> > to discover that any other error they have is at cause.\n>\n> Can you give a code example of the type of assigment you mean?\n>\nhtup->t_ctid = target_tid;\nhtup->t_ctid = newtid;\nBoth target_tid and newtid are local variable, whe loss scope, memory is\ngarbage.\n\n\n>\n> > So:\n> > 1. In some cases, there may be a misunderstanding in the use of\n> ItemPointerData.\n> > 2. When using the variable name in an assignment, the variable's address\n> is used.\n> > 3. While this works for a structure, it shouldn't be the right thing to\n> do.\n> > 4. If we have a local variable, its scope is limited and when it loses\n> its scope, memory is certainly garbage.\n> > 5. While this may be working for heapam.c, I believe it is being abused\n> and should be compliant with\n> > the Postgres API and use the functions that were created for this.\n> >\n> > The patch is primarily intended to correct the use of ItemPointerData.\n> > But it is also changing the style, reducing the scope of some variables.\n> > If that was not acceptable, reduce the scope and someone has objections,\n> > I can change the patch, to focus only on the use of ItemPointerData.\n> > But as style changes are rare, if possible, it would be good to seize\n> the opportunity.\n>\n> I would like to see a version of the patch that only addresses your\n> concerns about ItemPointerData, not because other aspects of the patch are\n> unacceptable (I'm not ready to even contemplate that yet), but because I'm\n> not sure what your complaint is about. 
Can you restrict the patch to just\n> address that one issue?\n>\nCertainly.\nIn the same file you can find the appropriate use of the API.\nItemPointerSet(&heapTuple->t_self, blkno, offnum);\n\nregards,\nRanier Vilela", "msg_date": "Thu, 14 May 2020 15:34:23 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "Em qui., 14 de mai. de 2020 às 15:07, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > The patch is primarily intended to correct the use of ItemPointerData.\n>\n> What do you think is being \"corrected\" here? It looks to me like\n> just some random code rearrangements that aren't even clearly\n> bug-free, let alone being stylistic improvements.\n>\nIt is certainly working, but trusting that the memory of a local variable\nwill not change,\nwhen it loses its scope, is a risk that, certainly, can cause bugs,\nelsewhere.\nAnd it is certainly very difficult to discover its primary cause.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 14 May 2020 15:37:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "\n\n> On May 14, 2020, at 11:34 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> htup->t_ctid = target_tid; \n> htup->t_ctid = newtid;\n> Both target_tid and newtid are local variable, whe loss scope, memory is garbage.\n\nOk, thanks for the concrete example of what is bothering you.\n\nIn htup_details, I see that struct HeapTupleHeaderData has a field named t_ctid of type struct ItemPointerData. I also see in heapam that target_tid is of type ItemPointerData. The\n\n\thtup->t_ctid = target_tid\n\ncopies the contents of target_tid. By the time target_tid goes out of scope, the contents are already copied. I would share your concern if t_ctid were of type ItemPointer (aka ItemPointerData *) and the code said\n\n\thtup->t_ctid = &target_tid\n\nbut it doesn't say that, so I don't see the issue. 
I'm still unclear whether you believe this is a bug, or whether you just don't like the naming that is used.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 14 May 2020 15:23:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "\n\n> On May 14, 2020, at 11:34 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> Certainly.\n> In the same file you can find the appropriate use of the API.\n> ItemPointerSet(&heapTuple->t_self, blkno, offnum);\n\nIt took a couple reads through your patch to figure out what you were trying to accomplish, and I think you are uncomfortable with assigning one ItemPointerData variable from another. ItemPointerData is just a struct with three int16 variables. To make a standalone program that has the same structure without depending on any postgres headers, I'm using \"short int\" instead of \"int16\" and structs \"TwoData\" and \"ThreeData\" that are analogous to BlockIdData and OffsetNumber.\n\n#include <stdio.h>\n\ntypedef struct TwoData {\n short int a;\n short int b;\n} TwoData;\n\ntypedef struct ThreeData {\n TwoData left;\n short int right;\n} ThreeData;\n\nint main(int argc, char **argv)\n{\n ThreeData x = { { 5, 10 }, 15 };\n ThreeData y = x;\n x.left.a = 0;\n x.left.b = 1;\n x.right = 2;\n\n printf(\"y = { { %d, %d }, %d }\\n\",\n y.left.a, y.left.b, y.right);\n\n return 0;\n}\n\nIf you compile and run this, you'll notice it outputs:\n\ny = { { 5, 10 }, 15 }\n\nand not the { { 0, 1}, 2 } that you would expect if y were merely pointing at x.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 14 May 2020 15:49:39 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid 
(ItemPointerData)" }, { "msg_contents": "Em qui., 14 de mai. de 2020 às 19:23, Mark Dilger <\nmark.dilger@enterprisedb.com> escreveu:\n\n>\n>\n> > On May 14, 2020, at 11:34 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > htup->t_ctid = target_tid;\n> > htup->t_ctid = newtid;\n> > Both target_tid and newtid are local variable, whe loss scope, memory is\n> garbage.\n>\n> Ok, thanks for the concrete example of what is bothering you.\n>\n> In htup_details, I see that struct HeapTupleHeaderData has a field named\n> t_ctid of type struct ItemPointerData. I also see in heapam that\n> target_tid is of type ItemPointerData. The\n>\n> htup->t_ctid = target_tid\n>\n> copies the contents of target_tid. By the time target_tid goes out of\n> scope, the contents are already copied. I would share your concern if\n> t_ctid were of type ItemPointer (aka ItemPointerData *) and the code said\n>\n> htup->t_ctid = &target_tid\n>\n> but it doesn't say that, so I don't see the issue.\n>\nEven if the patch simplifies and uses the API to make the assignments.\nReally, the central problem does not exist, my fault.\nPerhaps because it has never made such use, structure assignment.\nAnd I failed to do research on the subject before.\nSorry.\n\n\n>\n> Also in heapam, I see that newtid is likewise of type ItemPointerData, so\n> the same logic applies. By the time newtid goes out of scope, its contents\n> have already been copied into t_ctid, so there is no problem.\n>\n> But maybe you know all that and are just complaining that the name\n> \"ItemPointerData\" sounds like a pointer rather than a struct? 
I'm still\n> unclear whether you believe this is a bug, or whether you just don't like\n> the naming that is used.\n>\nMy concerns were about whether attribution really would copy the\nstructure's content and not just its address.\nThe name makes it difficult, but that was not the point.\n\nThe tool warned about uninitialized variable, which I mistakenly deduced\nfor loss of scope.\n\nThank you very much for the clarifications and your time.\nWe never stopped learning, and using structure assignment was a new\nlearning experience.\n\nregards\nRanier Vilela", "msg_date": "Thu, 14 May 2020 19:55:17 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" }, { "msg_contents": "Em qui., 14 de mai. de 2020 às 19:49, Mark Dilger <\nmark.dilger@enterprisedb.com> escreveu:\n\n>\n>\n> > On May 14, 2020, at 11:34 AM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Certainly.\n> > In the same file you can find the appropriate use of the API.\n> > ItemPointerSet(&heapTuple->t_self, blkno, offnum);\n>\n> It took a couple reads through your patch to figure out what you were\n> trying to accomplish, and I think you are uncomfortable with assigning one\n> ItemPointerData variable from another. ItemPointerData is just a struct\n> with three int16 variables. 
To make a standalone program that has the same\n> structure without depending on any postgres headers, I'm using \"short int\"\n> instead of \"int16\" and structs \"TwoData\" and \"ThreeData\" that are analogous\n> to BlockIdData and OffsetNumber.\n>\n> #include <stdio.h>\n>\n> typedef struct TwoData {\n> short int a;\n> short int b;\n> } TwoData;\n>\n> typedef struct ThreeData {\n> TwoData left;\n> short int right;\n> } ThreeData;\n>\n> int main(int argc, char **argv)\n> {\n> ThreeData x = { { 5, 10 }, 15 };\n> ThreeData y = x;\n> x.left.a = 0;\n> x.left.b = 1;\n> x.right = 2;\n>\n> printf(\"y = { { %d, %d }, %d }\\n\",\n> y.left.a, y.left.b, y.right);\n>\n> return 0;\n> }\n>\n> If you compile and run this, you'll notice it outputs:\n>\n> y = { { 5, 10 }, 15 }\n>\n> and not the { { 0, 1}, 2 } that you would expect if y were merely pointing\n> at x.\n>\nThanks for the example.\nBut what I wanted to test was\nstruct1 = struct2;\nBoth being of the same type of structure.\n\nWhat I wrongly deduced was that the address of struct2 was saved and not\nits content.\n\nAgain, thanks for your time and clarification.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 14 May 2020 19:59:23 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix ouside scope t_ctid (ItemPointerData)" } ]
[ { "msg_contents": "Hi,\n\nI've attached the patch for $subject. The old comment seems to be\nborrowed from WalSndShmemInit().\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 12:45:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Fix a typo in slot.c" }, { "msg_contents": "On Fri, May 15, 2020 at 9:16 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> I've attached the patch for $subject. The old comment seems to be\n> borrowed from WalSndShmemInit().\n>\n\n /*\n- * Allocate and initialize walsender-related shared memory.\n+ * Allocate and initialize replication slots' shared memory.\n */\n\nHow about changing it to \"Allocate and initialize shared memory for\nreplication slots\"?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 15 May 2020 09:56:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a typo in slot.c" }, { "msg_contents": "On Fri, 15 May 2020 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 15, 2020 at 9:16 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi,\n> >\n> > I've attached the patch for $subject. 
The old comment seems to be\n> > borrowed from WalSndShmemInit().\n> >\n>\n> /*\n> - * Allocate and initialize walsender-related shared memory.\n> + * Allocate and initialize replication slots' shared memory.\n> */\n>\n> How about changing it to \"Allocate and initialize shared memory for\n> replication slots\"?\n>\n\nAgreed.\n\nAttached the updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 15 May 2020 13:37:41 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix a typo in slot.c" }, { "msg_contents": "On Fri, May 15, 2020 at 10:08 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 15 May 2020 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > /*\n> > - * Allocate and initialize walsender-related shared memory.\n> > + * Allocate and initialize replication slots' shared memory.\n> > */\n> >\n> > How about changing it to \"Allocate and initialize shared memory for\n> > replication slots\"?\n> >\n>\n> Agreed.\n>\n> Attached the updated patch.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 May 2020 10:29:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix a typo in slot.c" }, { "msg_contents": "On Mon, 18 May 2020 at 13:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 15, 2020 at 10:08 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 15 May 2020 at 13:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > /*\n> > > - * Allocate and initialize walsender-related shared memory.\n> > > + * Allocate and initialize replication slots' shared memory.\n> > > */\n> > >\n> > > How about changing it to \"Allocate and initialize shared memory for\n> > > 
replication slots\"?\n> > >\n> >\n> > Agreed.\n> >\n> > Attached the updated patch.\n> >\n>\n> Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 17:44:43 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Fix a typo in slot.c" } ]
[ { "msg_contents": "Hi,\n\nI have just noticed that pg_bsd_indent complains since\n-Wimplicit-fallthrough=3 has been added to the default set of switches\nif available.\n\nSomething like the attached is fine to take care of those warnings,\nbut what's our current patching policy for this tool?\n\nThanks,\n--\nMichael", "msg_date": "Fri, 15 May 2020 15:03:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On Fri, May 15, 2020 at 8:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi,\n>\n> I have just noticed that pg_bsd_indent complains since\n> -Wimplicit-fallthrough=3 has been added to the default set of switches\n> if available.\n\nOh Indeed.\n\n> Something like the attached is fine to take care of those warnings,\n> but what's our current patching policy for this tool?\n\nThe patch looks good to me. It looks like we already have custom\npatches, so +1 to applying it.\n\n\n", "msg_date": "Fri, 15 May 2020 08:28:20 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "> On 15 May 2020, at 08:28, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Fri, May 15, 2020 at 8:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>> Something like the attached is fine to take care of those warnings,\n>> but what's our current patching policy for this tool?\n> \n> The patch looks good to me. 
It looks like we already have custom\n> patches, so +1 to applying it.\n\nShouldn't we try and propose it to upstream first to minimize our diff?\n\ncheers ./daniel\n\n", "msg_date": "Fri, 15 May 2020 09:17:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On Fri, May 15, 2020 at 9:17 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 15 May 2020, at 08:28, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Fri, May 15, 2020 at 8:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> >> Something like the attached is fine to take care of those warnings,\n> >> but what's our current patching policy for this tool?\n> >\n> > The patch looks good to me. It looks like we already have custom\n> > patches, so +1 to applying it.\n>\n> Shouldn't we try and propose it to upstream first to minimize our diff?\n\nGood point, adding Piotr.\n\n\n", "msg_date": "Fri, 15 May 2020 14:15:53 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, May 15, 2020 at 9:17 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 15 May 2020, at 08:28, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>>> The patch looks good to me. 
It looks like we already have custom\n>>> patches, so +1 to applying it.\n\n>> Shouldn't we try and propose it to upstream first to minimize our diff?\n\n> Good point, adding Piotr.\n\nIn the meantime, I went ahead and pushed this to our pg_bsd_indent repo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 May 2020 11:56:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On Sat, May 16, 2020 at 11:56:28AM -0400, Tom Lane wrote:\n> In the meantime, I went ahead and pushed this to our pg_bsd_indent repo.\n\nThanks, Tom.\n--\nMichael", "msg_date": "Sun, 17 May 2020 09:32:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On Sun, May 17, 2020 at 2:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, May 16, 2020 at 11:56:28AM -0400, Tom Lane wrote:\n> > In the meantime, I went ahead and pushed this to our pg_bsd_indent repo.\n>\n> Thanks, Tom.\n\n+1, thanks a lot!\n\n\n", "msg_date": "Mon, 18 May 2020 11:22:51 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On 18/05/2020 11.22, Julien Rouhaud wrote:\n> On Sun, May 17, 2020 at 2:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Sat, May 16, 2020 at 11:56:28AM -0400, Tom Lane wrote:\n>>> In the meantime, I went ahead and pushed this to our pg_bsd_indent repo.\n>>\n>> Thanks, Tom.\n> \n> +1, thanks a lot!\n> \n\nCommitted upstream, thank you.\n\n\n", "msg_date": "Thu, 21 May 2020 19:39:03 +0200", "msg_from": "Piotr Stefaniak <postgres@piotr-stefaniak.me>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" }, { "msg_contents": "On Thu, May 21, 2020 at 7:39 PM Piotr Stefaniak\n<postgres@piotr-stefaniak.me> wrote:\n>\n> On 
18/05/2020 11.22, Julien Rouhaud wrote:\n> > On Sun, May 17, 2020 at 2:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Sat, May 16, 2020 at 11:56:28AM -0400, Tom Lane wrote:\n> >>> In the meantime, I went ahead and pushed this to our pg_bsd_indent repo.\n> >>\n> >> Thanks, Tom.\n> >\n> > +1, thanks a lot!\n> >\n>\n> Committed upstream, thank you.\n\nThanks!\n\n\n", "msg_date": "Fri, 22 May 2020 18:43:38 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_bsd_indent and -Wimplicit-fallthrough" } ]
[ { "msg_contents": "Hi,\n\nAs discussed in the thread that introduced d140f2f3 to rename\nreceivedUpto to flushedUpto and add writtenUpto to the WAL receiver's\nshared memory information, the equivalent columns in\npg_stat_wal_receiver have not been renamed:\nhttps://www.postgresql.org/message-id/CA+hUKGJ06d3h5JeOtAv4h52n0vG1jOPZxqMCn5FySJQUVZA32w@mail.gmail.com\n\nWhen I have implemented this system view, the idea was to keep a\none-one mapping between the SQL interface and the shmem info even if\nwe are not compatible with past versions, hence I think that before\nbeta1 we had better fix that and:\n- rename received_lsn to flushed_lsn.\n- add one column for writtenUpto.\n\nAttached is a patch to do that. This needs also a catalog version\nbump, and I am adding an open item.\nThanks,\n--\nMichael", "msg_date": "Fri, 15 May 2020 18:08:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "On 2020-May-15, Michael Paquier wrote:\n\n> As discussed in the thread that introduced d140f2f3 to rename\n> receivedUpto to flushedUpto and add writtenUpto to the WAL receiver's\n> shared memory information, the equivalent columns in\n> pg_stat_wal_receiver have not been renamed:\n\n> When I have implemented this system view, the idea was to keep a\n> one-one mapping between the SQL interface and the shmem info even if\n> we are not compatible with past versions, hence I think that before\n> beta1 we had better fix that and:\n> - rename received_lsn to flushed_lsn.\n> - add one column for writtenUpto.\n\nWhy do you put the column at the end? 
I would put written_lsn before\nflushed_lsn.\n\nSince this requires a catversion bump, I think it'd be best to do it\nbefore beta1 next week.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 15 May 2020 13:43:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "On Fri, May 15, 2020 at 01:43:11PM -0400, Alvaro Herrera wrote:\n> Why do you put the column at the end? I would put written_lsn before\n> flushed_lsn.\n\nFine by me. I was thinking yesterday about putting the written\nposition after the flushed one, and finished with something that maps\nwith the structure.\n\n> Since this requires a catversion bump, I think it'd be best to do it\n> before beta1 next week.\n\nYes. What do you think of the attached?\n--\nMichael", "msg_date": "Sat, 16 May 2020 08:05:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "On 2020-May-16, Michael Paquier wrote:\n\n> On Fri, May 15, 2020 at 01:43:11PM -0400, Alvaro Herrera wrote:\n> > Why do you put the column at the end? I would put written_lsn before\n> > flushed_lsn.\n> \n> Fine by me. I was thinking yesterday about putting the written\n> position after the flushed one, and finished with something that maps\n> with the structure.\n\nIIRC the only reason to put the written LSN where it is is so that it's\nbelow the mutex, to indicate it's not protected by it. Conceptually,\nthe written LSN is \"before\" the flushed LSN, which is why I propose to\nput it ahead of it.\n\n> > Since this requires a catversion bump, I think it'd be best to do it\n> > before beta1 next week.\n> \n> Yes. 
What do you think of the attached?\n\nYeah, that seems good (I didn't verify the boilerplate in\npg_stat_get_wal_receiver or pg_proc.dat). I propose\n\n+ Last write-ahead log location already received and written to\n+ disk, but not flushed. This should not be used for data\n+ integrity checks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 15 May 2020 19:34:46 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "On Fri, May 15, 2020 at 07:34:46PM -0400, Alvaro Herrera wrote:\n> IIRC the only reason to put the written LSN where it is is so that it's\n> below the mutex, to indicate it's not protected by it. Conceptually,\n> the written LSN is \"before\" the flushed LSN, which is why I propose to\n> put it ahead of it.\n\nSure. My point was mainly that it is easier to compare if we are\nmissing any fields in the view and the order is respected. But it\nmakes also sense to do things your way, so let's do that.\n\n> Yeah, that seems good (I didn't verify the boilerplate in\n> pg_stat_get_wal_receiver or pg_proc.dat). I propose\n> \n> + Last write-ahead log location already received and written to\n> + disk, but not flushed. This should not be used for data\n> + integrity checks.\n\nThanks. 
If there are no objections, I'll revisit that tomorrow and\n> apply it with those changes, just in time for beta1.\n\nOkay, done this part then.\n--\nMichael", "msg_date": "Sun, 17 May 2020 10:08:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "\n\nOn 2020/05/17 10:08, Michael Paquier wrote:\n> On Sat, May 16, 2020 at 10:15:47AM +0900, Michael Paquier wrote:\n>> Thanks. If there are no objections, I'll revisit that tomorrow and\n>> apply it with those changes, just in time for beta1.\n> \n> Okay, done this part then.\n\nI found that \"received_lsn\" is still used in high-availability.sgml.\nWe should apply the following change in high-availability?\n\n- view's <literal>received_lsn</literal> indicates that WAL is being\n+ view's <literal>flushed_lsn</literal> indicates that WAL is being\n\nBTW, we have pg_last_wal_receive_lsn() that returns the same lsn as\npg_stat_wal_receiver.flushed_lsn. Previously both used the term \"receive\"\nin their names, but currently not. IMO it's better to use the same term in\nthose names for the consistency, but it's not good idea to rename\npg_last_wal_receive_lsn() to something like pg_last_wal_receive_lsn().\nI have no better idea for now. 
So I'm ok with the current names.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 19 May 2020 23:38:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "On Tue, May 19, 2020 at 11:38:52PM +0900, Fujii Masao wrote:\n> I found that \"received_lsn\" is still used in high-availability.sgml.\n> We should apply the following change in high-availability?\n> \n> - view's <literal>received_lsn</literal> indicates that WAL is being\n> + view's <literal>flushed_lsn</literal> indicates that WAL is being\n\nOops, thanks. Will fix.\n\n> BTW, we have pg_last_wal_receive_lsn() that returns the same lsn as\n> pg_stat_wal_receiver.flushed_lsn. Previously both used the term \"receive\"\n> in their names, but currently not. IMO it's better to use the same term in\n> those names for the consistency, but it's not good idea to rename\n> pg_last_wal_receive_lsn() to something like pg_last_wal_receive_lsn().\n> I have no better idea for now. So I'm ok with the current names.\n\nI think you mean renaming pg_last_wal_receive_lsn() to something like\npg_last_wal_flushed_lsn(), no? This name may become confusing because\nwe lose the \"receive\" idea in the function, that we have with the\n\"receiver\" part of pg_stat_wal_receiver. Maybe something like that,\nthough that's long:\n- pg_last_wal_receive_flushed_lsn()\n- pg_last_wal_receive_written_lsn() \n\nAnyway, a rename of this function does not strike me as strongly\nnecessary, as that's less tied with the shared memory structure, and\nwe document that pg_last_wal_receive_lsn() tracks the current LSN\nreceived and flushed. 
I am actually wondering if in the future it may\nnot be better to remove this function, but it has no maintenance\ncost either so I would just let it as-is.\n--\nMichael", "msg_date": "Wed, 20 May 2020 08:31:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" }, { "msg_contents": "\n\nOn 2020/05/20 8:31, Michael Paquier wrote:\n> On Tue, May 19, 2020 at 11:38:52PM +0900, Fujii Masao wrote:\n>> I found that \"received_lsn\" is still used in high-availability.sgml.\n>> We should apply the following change in high-availability?\n>>\n>> - view's <literal>received_lsn</literal> indicates that WAL is being\n>> + view's <literal>flushed_lsn</literal> indicates that WAL is being\n> \n> Oops, thanks. Will fix.\n\nThanks for the fix!\n\n> \n>> BTW, we have pg_last_wal_receive_lsn() that returns the same lsn as\n>> pg_stat_wal_receiver.flushed_lsn. Previously both used the term \"receive\"\n>> in their names, but currently not. IMO it's better to use the same term in\n>> those names for the consistency, but it's not good idea to rename\n>> pg_last_wal_receive_lsn() to something like pg_last_wal_receive_lsn().\n>> I have no better idea for now. So I'm ok with the current names.\n> \n> I think you mean renaming pg_last_wal_receive_lsn() to something like\n> pg_last_wal_flushed_lsn(), no?\n\nNo, that's not good idea, as I told upthread.\n\n> This name may become confusing because\n> we lose the \"receive\" idea in the function, that we have with the\n> \"receiver\" part of pg_stat_wal_receiver. Maybe something like that,\n> though that's long:\n> - pg_last_wal_receive_flushed_lsn()\n> - pg_last_wal_receive_written_lsn()\n\nYes, that's long.\n\n> Anyway, a rename of this function does not strike me as strongly\n> necessary, as that's less tied with the shared memory structure, and\n> we document that pg_last_wal_receive_lsn() tracks the current LSN\n> received and flushed. 
I am actually wondering if in the future it may\n> not be better to remove this function, but it has no maintenance\n> cost either so I would just let it as-is.\n\nYeah, agreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 20 May 2020 10:19:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_wal_receiver and flushedUpto/writtenUpto" } ]
[ { "msg_contents": "Hi Postgres Hackers,\n\nI am wondering is there any elegant way for self-spawned background process\n(forked by us) to get notified when the regular client-connected process\nexit from the current database (switch db or even terminate)?\n\nThe background is that we are integrating a thread-model based storage\nengine into Postgres via foreign data wrapper. The engine is not allowed to\nhave multiple processes to access it. So we have to spawn a background\nprocess to access the engine, while the client process can communicate with\nthe spawned process via shared memory. In order to let the engine recognize\nthe data type in Postgres, the spawned process has to access catalog such\nas relcache, and It must connect to the target database\nvia BackgroundWorkerInitializeConnection to get the info. Unfortunately, it\nis not possible to switch databases for background process. So it has to\nget notified when client process switches db or terminate, then we can\ncorrespondingly close the spawned process. Please advise us if there are\nalternative approaches.\n\nBest,\nShichao\n\n", "msg_date": "Fri, 15 May 2020 14:22:49 -0400", "msg_from": "Shichao Jin <jsc0218@gmail.com>", "msg_from_op": true, "msg_subject": "Spawned Background Process Knows the Exit of Client Process?" }, { "msg_contents": "On Fri, May 15, 2020 at 11:53 PM Shichao Jin <jsc0218@gmail.com> wrote:\n>\n> Hi Postgres Hackers,\n>\n> I am wondering is there any elegant way for self-spawned background process (forked by us) to get notified when the regular client-connected process exit from the current database (switch db or even terminate)?\n>\n> The background is that we are integrating a thread-model based storage engine into Postgres via foreign data wrapper.\n\nPostgreSQL now support pluggable storage API. Have you considered\nusing that instead of FDW?\n\n> The engine is not allowed to have multiple processes to access it. So we have to spawn a background process to access the engine, while the client process can communicate with the spawned process via shared memory. In order to let the engine recognize the data type in Postgres, the spawned process has to access catalog such as relcache, and It must connect to the target database via BackgroundWorkerInitializeConnection to get the info. Unfortunately, it is not possible to switch databases for background process. So it has to get notified when client process switches db or terminate, then we can correspondingly close the spawned process. Please advise us if there are alternative approaches.\n\nThere can be multiple backends accessing different database. But from\nyour description it looks like there is only one background process\nthat will access the storage engine and it will be shared by multiple\nbackends which may be connected to different databases. 
If that's\ncorrect, you will need to make that background process independent of\ndatabase and just access storage. That looks less performance though.\nMay be you can elaborate more about your usecase.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 18 May 2020 18:07:15 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Spawned Background Process Knows the Exit of Client Process?" }, { "msg_contents": "Hi Ashutosh,\n\nThank you for your answer.\n\nFor the first point, as you suggested, we will migrate to table AM sooner\nor later.\n\nFor the second point, your description is exactly correct (an independent\nprocess to access the storage engine). We can have multiple threads to\novercome the performance issue.\n\nThe problem comes from the ignorance of data types for storage engine,\nwhere the storage engine has to get the comparator function of PG to\ncompare two keys. Otherwise, the storage engine uses \"memcmp\". In order to\nget the compare func, we have to let the independent process dependent on a\nspecific database to access the catalog (relcache). Unfortunately, the\nprocess cannot become independent anymore once it changed its property by\ncalling BackgroundWorkerInitializeConnection. Then our design evolves to\nspawn multiple processes for accessing different tables created by the\nstorage engine. As a result, we have to release these spawned processes\nonce the backend process switches database or terminate itself. Currently,\nwe can set a timer for inactivity duration, in order to release the\nresource. 
I am wondering is there any elegant way to achieve this goal?\n\nBest,\nShichao\n\nOn Mon, 18 May 2020 at 08:37, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Fri, May 15, 2020 at 11:53 PM Shichao Jin <jsc0218@gmail.com> wrote:\n> >\n> > Hi Postgres Hackers,\n> >\n> > I am wondering is there any elegant way for self-spawned background\n> process (forked by us) to get notified when the regular client-connected\n> process exit from the current database (switch db or even terminate)?\n> >\n> > The background is that we are integrating a thread-model based storage\n> engine into Postgres via foreign data wrapper.\n>\n> PostgreSQL now support pluggable storage API. Have you considered\n> using that instead of FDW?\n>\n> > The engine is not allowed to have multiple processes to access it. So we\n> have to spawn a background process to access the engine, while the client\n> process can communicate with the spawned process via shared memory. In\n> order to let the engine recognize the data type in Postgres, the spawned\n> process has to access catalog such as relcache, and It must connect to the\n> target database via BackgroundWorkerInitializeConnection to get the info.\n> Unfortunately, it is not possible to switch databases for background\n> process. So it has to get notified when client process switches db or\n> terminate, then we can correspondingly close the spawned process. Please\n> advise us if there are alternative approaches.\n>\n> There can be multiple backends accessing different database. But from\n> your description it looks like there is only one background process\n> that will access the storage engine and it will be shared by multiple\n> backends which may be connected to different databases. If that's\n> correct, you will need to make that background process independent of\n> database and just access storage. 
That looks less performance though.\n> May be you can elaborate more about your usecase.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nHi Ashutosh,Thank you for your answer. For the first point, as you suggested, we will migrate to table AM sooner or later.For the second point, your description is exactly correct (an independent process to access the storage engine). We can have multiple threads to overcome the performance issue. The problem comes from the ignorance of data types for storage engine, where the storage engine has to get the comparator function of PG to compare two keys. Otherwise, the storage engine uses \"memcmp\". In order to get the compare func, we have to let the independent process dependent on a specific database to access the catalog (relcache). Unfortunately, the process cannot become independent anymore once it changed its property by calling BackgroundWorkerInitializeConnection. Then our design evolves to spawn multiple processes for accessing different tables created by the storage engine. As a result, we have to release these spawned processes once the backend process switches database or terminate itself. Currently, we can set a timer for inactivity duration, in order to release the resource. I am wondering is there any elegant way to achieve this goal?Best,Shichao On Mon, 18 May 2020 at 08:37, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Fri, May 15, 2020 at 11:53 PM Shichao Jin <jsc0218@gmail.com> wrote:\n>\n> Hi Postgres Hackers,\n>\n> I am wondering is there any elegant way for self-spawned background process (forked by us) to get notified when the regular client-connected process exit from the current database (switch db or even terminate)?\n>\n> The background is that we are integrating a thread-model based storage engine into Postgres via foreign data wrapper.\n\nPostgreSQL now support pluggable storage API. 
Have you considered\nusing that instead of FDW?\n\n> The engine is not allowed to have multiple processes to access it. So we have to spawn a background process to access the engine, while the client process can communicate with the spawned process via shared memory. In order to let the engine recognize the data type in Postgres, the spawned process has to access catalog such as relcache, and It must connect to the target database via BackgroundWorkerInitializeConnection to get the info. Unfortunately, it is not possible to switch databases for background process. So it has to get notified when client process switches db or terminate, then we can correspondingly close the spawned process. Please advise us if there are alternative approaches.\n\nThere can be multiple backends accessing different database. But from\nyour description it looks like there is only one background process\nthat will access the storage engine and it will be shared by multiple\nbackends which may be connected to different databases. If that's\ncorrect, you will need to make that background process independent of\ndatabase and just access storage. That looks less performance though.\nMay be you can elaborate more about your usecase.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 18 May 2020 10:02:00 -0400", "msg_from": "Shichao Jin <jsc0218@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Spawned Background Process Knows the Exit of Client Process?" } ]
[ { "msg_contents": "Hey,\n\nSo I'm new to poking around in the PostgreSQL code, so this is a bit of\na shot in the dark. I'm having some problems with pg_dump, and a\ndatabase with tablespaces. A couple of the tables are not in the default\ntablespace, and I want to ignore this for the dump.\n\nLooking at the pg_dump --help, there seems to be a perfect option for\nthis:\n\n --no-tablespaces do not dump tablespace assignments\n\nThis seems to work fine when using the plain text format, but I'm using\nthe custom format, and that seems to break the effect of\n--no-tablespaces.\n\nLooking at the code, I think I've managed to determine a place where\nthis behaviour can be changed, and so I've attached a draft patch [1].\n\nIs this an actual problem, and if so, am I anywhere near the right place\nin the code in terms of addressing it?\n\nThanks,\n\nChris\n\n1:", "msg_date": "Fri, 15 May 2020 21:30:26 +0100", "msg_from": "Christopher Baines <mail@cbaines.net>", "msg_from_op": true, "msg_subject": "[PATCH] Fix pg_dump --no-tablespaces for the custom format" }, { "msg_contents": "Christopher Baines <mail@cbaines.net> writes:\n> So I'm new to poking around in the PostgreSQL code, so this is a bit of\n> a shot in the dark. I'm having some problems with pg_dump, and a\n> database with tablespaces. A couple of the tables are not in the default\n> tablespace, and I want to ignore this for the dump.\n\nI think you've misunderstood how the pieces fit together. A lot of\nthe detail-filtering switches, including --no-tablespaces, work on\nthe output side of the \"archive\" format. While you can't really tell\nthe difference in pg_dump text mode, the implication for custom-format\noutput is that the info is always there in the archive file, and you\ngive the switch to pg_restore if you don't want to see the info.\nThis is more flexible since you aren't compelled to make the decision\nup-front, and it doesn't really cost anything to include such info in\nthe archive. 
(Obviously, table-filtering switches don't work that\nway, since with those there can be a really large cost in file size\nto include unwanted data.)\n\nSo from my perspective, things are working fine and this patch would\nbreak it.\n\nIf you actually want to suppress this info from getting into the\narchive file, you'd have to give a very compelling reason for\nbreaking this behavior for other people.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 May 2020 20:16:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix pg_dump --no-tablespaces for the custom format" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Christopher Baines <mail@cbaines.net> writes:\n>> So I'm new to poking around in the PostgreSQL code, so this is a bit of\n>> a shot in the dark. I'm having some problems with pg_dump, and a\n>> database with tablespaces. A couple of the tables are not in the default\n>> tablespace, and I want to ignore this for the dump.\n>\n> I think you've misunderstood how the pieces fit together. A lot of\n> the detail-filtering switches, including --no-tablespaces, work on\n> the output side of the \"archive\" format. While you can't really tell\n> the difference in pg_dump text mode, the implication for custom-format\n> output is that the info is always there in the archive file, and you\n> give the switch to pg_restore if you don't want to see the info.\n> This is more flexible since you aren't compelled to make the decision\n> up-front, and it doesn't really cost anything to include such info in\n> the archive. 
(Obviously, table-filtering switches don't work that\n> way, since with those there can be a really large cost in file size\n> to include unwanted data.)\n>\n> So from my perspective, things are working fine and this patch would\n> break it.\n>\n> If you actually want to suppress this info from getting into the\n> archive file, you'd have to give a very compelling reason for\n> breaking this behavior for other people.\n\nThanks for getting back to me Tom :)\n\nI don't really follow how having it do something more along the lines of\nhow it's documented would both be less flexible and break existing uses\nof pg_dump. You're not prevented from including tablespace assignments,\njust don't pass --no-tablespaces, and your now able to not include them\nfor the archive formats, just like the plain format, which in my view\nincreases the flexibility of the tool, since something new is possible.\n\nI realise that for people who are passing --no-tablespaces, without\nrealising it does nothing combined with the archive formats, that\nactually not including tablespace assignments will change the behaviour\nfor them, but as above, I'd see this as a positive change, as it makes\nthe tooling more powerful (and simpler to understand as well).\n\nI see now that while the --help output doesn't capture the nuances:\n\n --no-tablespaces do not dump tablespace assignments\n\nThe documentation does:\n\n --no-tablespaces\n\n Do not output commands to select tablespaces. With this option, all\n objects will be created in whichever tablespace is the default\n during restore.\n\n This option is only meaningful for the plain-text format. 
For the\n archive formats, you can specify the option when you call\n pg_restore.\n\n\nThanks again,\n\nChris", "msg_date": "Sat, 16 May 2020 08:20:29 +0100", "msg_from": "Christopher Baines <mail@cbaines.net>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix pg_dump --no-tablespaces for the custom format" }, { "msg_contents": "On Sat, 16 May 2020 at 04:20, Christopher Baines <mail@cbaines.net> wrote:\n\n>\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> > Christopher Baines <mail@cbaines.net> writes:\n> >> So I'm new to poking around in the PostgreSQL code, so this is a bit of\n> >> a shot in the dark. I'm having some problems with pg_dump, and a\n> >> database with tablespaces. A couple of the tables are not in the default\n> >> tablespace, and I want to ignore this for the dump.\n> >\n> > I think you've misunderstood how the pieces fit together. A lot of\n> > the detail-filtering switches, including --no-tablespaces, work on\n> > the output side of the \"archive\" format. While you can't really tell\n> > the difference in pg_dump text mode, the implication for custom-format\n> > output is that the info is always there in the archive file, and you\n> > give the switch to pg_restore if you don't want to see the info.\n> > This is more flexible since you aren't compelled to make the decision\n> > up-front, and it doesn't really cost anything to include such info in\n> > the archive. (Obviously, table-filtering switches don't work that\n> > way, since with those there can be a really large cost in file size\n> > to include unwanted data.)\n> >\n>\nI've also had to explain a dozen times how the archive format works. 
Archive\nformat is kind of intermediary format because you can produce a plain format\nusing it.\n\n[Testing some pg_dump --no-option ...]\n\nThe following objects are not included if a --no-option is used:\n\n* grant / revoke\n* comment\n* publication\n* subscription\n* security labels\n\nbut some are included even if --no-option is used:\n\n* owner\n* tablespace\n\nI'm wondering why there is such a distinction. We have some options:\n\n(a) leave it as is and document that those 2 options has no effect in\npg_dump\nand possibly add a warning to report if someone uses it with an archive\nformat;\n(b) exclude owner and tablespace from archive (it breaks compatibility but\ndo\nexactly what users expect).\n\nI do not even consider a possibility to include all objects even if a\n--no-option is used because you will have a bunch of complaints / reports.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 16 May 2020 16:26:27 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix pg_dump --no-tablespaces for the custom format" }, { "msg_contents": "Euler Taveira <euler.taveira@2ndquadrant.com> writes:\n> I'm wondering why there is such a distinction. 
We have some options:\n\n> (a) leave it as is and document that those 2 options has no effect in\n> pg_dump\n> and possibly add a warning to report if someone uses it with an archive\n> format;\n> (b) exclude owner and tablespace from archive (it breaks compatibility but\n> do\n> exactly what users expect).\n\nI think throwing a warning saying \"this option does nothing\" might be\na reasonable change.\n\nI think (b) would be a break in the archive format, with unclear\nconsequences, so I'm not in favor of that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 16 May 2020 15:31:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix pg_dump --no-tablespaces for the custom format" }, { "msg_contents": "On Sat, 16 May 2020 at 16:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> I think (b) would be a break in the archive format, with unclear\n> consequences, so I'm not in favor of that.\n>\n> I came to the same conclusion while inspecting the source code.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 16 May 2020 16:44:13 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix pg_dump --no-tablespaces for the custom format" } ]
[ { "msg_contents": "The attached patch implements NSS (Network Security Services) [0] with the\nrequired NSPR runtime [1] as a TLS backend for PostgreSQL.\n\nWhile all sslmodes are implemented and work for the most part, the patch is\n*not* ready yet but I wanted to show progress early so that anyone interested\nin this can help out with testing and maybe even hacking.\n\nWhy NSS? Well. It shares no lineage with OpenSSL, making it not just an\nalternative by fork but a 100% alternative. It's also actively maintained, is\nreadily available on many platforms where PostgreSQL is popular and has a FIPS\nmode which doesn't require an EOL'd library. And finally, I was asked nicely\nwith the promise of a free beverage, an incentive as good as any.\n\n\nDifferences with OpenSSL\n------------------------\nNSS does not use certificates and keys on the filesystem; it instead uses a\ncertificate database in which all certificates, keys and CRLs are loaded. A\nset of tools are provided to work with the database, like: certutil, crlutil,\npk12util etc. We could support plain PEM files as well, and load them into a\ndatabase ourselves, but so far I've opted for just using what is already in the\ndatabase.\n\nThis does mean that new GUCs are needed to identify the database. I've mostly\nrepurposed the existing ones for cert/key/crl, but had to invent a new one for\nthe database. Maybe there should be an entirely new set? This needs to be\ndiscussed with not only NSS in mind but also for additional as-of-yet unknown\nbackends we might get (SChannel comes to mind).\n\nNSS also supports partial chain validation per default (as do many other TLS\nlibraries) where OpenSSL does not. I haven't done anything about that just\nyet, thus there is a failing test as a reminder to address it.\n\nThe documentation of NSS/NSPR is unfortunately quite poor and often times\noutdated or simply nonexistent. 
Cloning the repo and reading the source code\nis the only option for parts of the API.\n\nFeaturewise there might be other things we can make use of in NSS which don't\nexist in OpenSSL, but for now I've tried to keep them aligned.\n\n\nKnown Bugs and Limitations (in this version of the patch)\n---------------------------------------------------------\nThe frontend doesn't attempt to verify whether the specified CRL exists in the\ndatabase or not. This can be done with pretty much the same code as in the\nbackend, except that we don't have the client side certificate loaded so we\neither need to read it back from the database, or parse a list of all CRLs\n(which would save us from having the cert in local memory which generally is a\ngood thing to avoid).\n\npgtls_read is calling PR_Recv, which works fine for communicating with an NSS\nbackend cluster but hangs waiting for IO when communicating with an OpenSSL\nbackend cluster. Using PR_Read reverses the situation. This is probably a\nsimple bug but I haven't had time to track it down yet. The below shifts\nbetween the two for debugging.\n\n- nread = PR_Recv(conn->pr_fd, ptr, len, 0, PR_INTERVAL_NO_WAIT);\n+ nread = PR_Read(conn->pr_fd, ptr, len);\n\nPassphrase handling in the backend is broken; more on that under TODO.\n\nThere are a few failing tests and a few skipped ones for now, but the majority\nof the tests pass.\n\n\nTesting\n-------\nIn order for the TAP framework to be able to handle backends with different\ncharacteristics I've broken up SSLServer.pm into a set of modules:\n\n SSL::Server\n SSL::Backend::NSS\n SSL::Backend::OpenSSL\n\nThe SSL tests import SSL::Server which in turn imports the appropriate backend\nmodule in order to perform backend specific setup tasks. The backend used\nshould be transparent for the TAP code when it comes to switching server certs\netc.\n\nSo far I've used foo|bar in the matching regex to provide alternative output,\nand SKIP blocks for tests that don't apply. 
There might be neater ways to\nachieve this, but I was trying to avoid having separate test files for the\ndifferent backends.\n\nThe certificate databases can be created with a new nssfiles make target in\nsrc/test/ssl, which uses the existing files (and also depends on OpenSSL, which I\ndon't think is a problematic dependency for development environments). To keep\nit simple I've named the certificates in the NSS database after the filenames;\nthis isn't really NSS best-practices but it makes for an easier time reading\nthe tests IMO.\n\nIf this direction is of interest, extracting this into a separate patch for just\nsetting up the modules and implementing OpenSSL without a new backend is\nprobably the next step.\n\n\nTODO\n----\nThis patch is a work in progress, and there is work left to do; below is a dump\nof what is left to fix before this can be considered a full implementation for\nreview. Most of these items have more documentation in the code comments.\n\n* The split between init and open needs to be revisited, especially in frontend\n where we have a bit more freedom. It remains to be seen if we can do better in\n the backend part.\n\n* Documentation; it's currently not even started\n\n* Windows support. I've hacked mostly using Debian and have tested versions of\n the patch on macOS, but not Windows so far.\n\n* Figure out how to handle cipher configuration. Getting a set of ciphers that\n result in a usable socket isn't as easy as with OpenSSL, and policies seem\n much more preferred. 
At the very least this needs to be solidly documented.\n\n* The rules in src/test/ssl/Makefile for generating certificate databases can\n probably be generalized into a smaller set of rules based on wildcards.\n\n* The password callback on the server side won't be invoked at server start due to\n init happening in be_tls_open, so something needs to be figured out there.\n Maybe attempt to open the database with a throw-away context in init just to\n invoke the password callback?\n\n* Identify code duplicated between frontend and backend and try to generalize.\n\n* Make sure we're handling the error codes correctly in the certificate and auth\n callbacks to properly handle self-signed certs etc.\n\n* Tidy up the tests which are partially hardwired for NSS now to make sure\n there are no regressions for OpenSSL.\n\n* All the code using OpenSSL which isn't the libpq communications parts, like\n pgcrypto, strong_random, sha2, SCRAM et al.\n\n* Review language in code comments and run pgindent et al.\n\n* Settle on a minimum required version. I've been using NSS 3.42 and NSPR 4.20\n simply since they were the packages Debian wanted to install for me, but I'm\n quite convinced that we can go a bit lower (featurewise we can go much lower\n but there are bugfixes in recent versions that we might want to include).\n Anything lower than a version supporting TLSv1.3 seems like an obvious no-no.\n\n\nI'd be surprised if this is all, but that's at least a start. There isn't\nreally a playbook on how to add a new TLS backend, but I'm hoping to be able to\nsummarize the required bits and pieces in README.SSL once this is a bit closer\nto completion.\n\nMy plan is to keep hacking at this to have it reviewable for the 14 cycle, so\nif anyone has an interest in NSS, then I would love to hear feedback on how it\nworks (and doesn't work).\n\nThe 0001 patch contains the full NSS support, and 0002 is a fix for the pgstat\nabstraction which IMO leaks backend implementation details. 
This needs to go\non its own thread, but since 0001 fails without it I've included it here for\nsimplicity's sake for now.\n\ncheers ./daniel\n\n[0] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS\n[1] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR", "msg_date": "Fri, 15 May 2020 22:46:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 15 May 2020, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> The 0001 patch contains the full NSS support, and 0002 is a fix for the pgstat\n> abstraction which IMO leaks backend implementation details. This needs to go\n> on its own thread, but since 0001 fails without it I've included it here for\n> simplicity's sake for now.\n\nThe attached 0001 and 0002 are the same patch series as before, but with the\nOpenSSL test module fixed and a rebase on top of the current master.\n\ncheers ./daniel", "msg_date": "Thu, 25 Jun 2020 17:39:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 25 Jun 2020, at 17:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 15 May 2020, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> The 0001 patch contains the full NSS support, and 0002 is a fix for the pgstat\n>> abstraction which IMO leaks backend implementation details. 
This needs to go\n>> on its own thread, but since 0001 fails without it I've included it here for\n>> simplicity's sake for now.\n> \n> The attached 0001 and 0002 are the same patch series as before, but with the\n> OpenSSL test module fixed and a rebase on top of the current master.\n\nAnother rebase to resolve conflicts with the recent fixes in the SSL tests, as\nwell as some minor cleanup.\n\ncheers ./daniel", "msg_date": "Fri, 3 Jul 2020 13:51:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jul 3, 2020 at 11:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 25 Jun 2020, at 17:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> On 15 May 2020, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> The 0001 patch contains the full NSS support, and 0002 is a fix for the pgstat\n> >> abstraction which IMO leaks backend implementation details. This needs to go\n> >> on its own thread, but since 0001 fails without it I've included it here for\n> >> simplicity's sake for now.\n> >\n> > The attached 0001 and 0002 are the same patch series as before, but with the\n> > OpenSSL test module fixed and a rebase on top of the current master.\n>\n> Another rebase to resolve conflicts with the recent fixes in the SSL tests, as\n> well as some minor cleanup.\n\nHi Daniel,\n\nThanks for blazing the trail for other implementations to coexist in\nthe tree. I see that cURL (another project Daniel works on)\nsupports a lot of TLS implementations[1]. I recognise 4 other library\nnames from that table as having appeared on this mailing list as\ncandidates for PostgreSQL support complete with WIP patches, including\nanother one from you (Apple Secure Transport). 
I don't have strong\nviews on how many and which libraries we should support, but I was\ncurious how many packages depend on libssl1.1, libgnutls30 and libnss3\nin the Debian package repos in my sources.list, and I came up with\nOpenSSL = 820, GnuTLS = 342, and NSS = 87.\n\nI guess Solution.pm needs at least USE_NSS => undef for this not to\nbreak the build on Windows.\n\nObviously cfbot is useless for testing this code, since its build\nscript does --with-openssl and you need --with-nss, but it still shows\nus one thing: with your patch, a --with-openssl build is apparently\nbroken:\n\n/001_ssltests.pl .. 1/93 Bailout called. Further testing stopped:\nsystem pg_ctl failed\n\nThere are some weird configure-related hunks in the patch:\n\n+ -runstatedir | --runstatedir | --runstatedi | --runstated \\\n...[more stuff like that]...\n\n-#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n+#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n\nI see the same when I use Debian's autoconf, but not FreeBSD's or\nMacPorts', despite all being version 2.69. That seems to be due to\nnon-upstreamed changes added by the Debian maintainers (I see the\noff_t thing mentioned in /usr/share/doc/autoconf/changelog.Debian.gz).\nI think you need to build a stock autoconf 2.69 or run autoconf on a\nnon-Debian system.\n\nI installed libnss3-dev on my Debian box and then configure had\ntrouble locating and understanding <ssl.h>, until I added\n--with-includes=/usr/include/nss:/usr/include/nspr. 
I suspect this is\nsupposed to be done with pkg-config nss --cflags somewhere in\nconfigure (or alternatively nss-config --cflags, nspr-config --cflags,\nI don't know, but we're using pkg-config for other stuff).\n\nI installed the Debian package libnss3-tools (for certutil) and then,\nin src/test/ssl, I ran make nssfiles (I guess that should be\nautomatic?), and make check, and I got this far:\n\nTest Summary Report\n-------------------\nt/001_ssltests.pl (Wstat: 3584 Tests: 93 Failed: 14)\n Failed tests: 14, 16, 18-20, 24, 27-28, 54-55, 78-80\n 91\n Non-zero exit status: 14\n\nYou mentioned some were failing in this WIP -- are those results you expect?\n\n[1] https://curl.haxx.se/docs/ssl-compared.html\n\n\n", "msg_date": "Fri, 10 Jul 2020 17:10:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 10 Jul 2020, at 07:10, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Fri, Jul 3, 2020 at 11:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 25 Jun 2020, at 17:39, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>>> On 15 May 2020, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>>> The 0001 patch contains the full NSS support, and 0002 is a fix for the pgstat\n>>>> abstraction which IMO leaks backend implementation details. This needs to go\n>>>> on it's own thread, but since 0001 fails without it I've included it here for\n>>>> simplicity sake for now.\n>>> \n>>> The attached 0001 and 0002 are the same patchseries as before, but with the\n>>> OpenSSL test module fixed and a rebase on top of the current master.\n>> \n>> Another rebase to resolve conflicts with the recent fixes in the SSL tests, as\n>> well as some minor cleanup.\n> \n> Hi Daniel,\n> \n> Thanks for blazing the trail for other implementations to coexist in\n> the tree. 
I see that cURL (another project Daniel works on)\n> supports a lot of TLS implementations[1].\n\nThe list on that URL is also just a selection; the total count is 10 (not\ncounting OpenSSL forks) IIRC, after axing support for a few lately. OpenSSL\nclearly has a large mindshare but the gist of it is that there exist quite a\nfew alternatives each with their own strengths.\n\n> I recognise 4 other library\n> names from that table as having appeared on this mailing list as\n> candidates for PostgreSQL support complete with WIP patches, including\n> another one from you (Apple Secure Transport). I don't have strong\n> views on how many and which libraries we should support,\n\nI think it's key to keep in mind *why* it's relevant to provide options in the\nfirst place; after all, as they must be 100% interoperable, one can easily argue\nfor a single one being enough. We need to look at what they offer users on\ntop of just a TLS connection, like: managed certificate storage like for\nexample macOS Keychains, FIPS certification, good platform availability and/or\nOS integration etc. If all a library offers is \"not being OpenSSL\" then it's\nnot clear that we're adding much value by spending the cycles to support it.\n\nMy personal opinion is that we should keep it pretty limited, not least to\nlessen the burden of testing and during feature development. Supporting a new\nlibrary comes with requirements on both the CFBot as well as the buildfarm, not\nto mention on developers who dabble in that area of the code. The goal should\nIMHO be to make it trivial for every postgres installation to use TLS\nregardless of platform and experience level of the person installing it.\n\nThe situation is a bit different for curl where we have as a goal to provide\nenough alternatives such that every platform can have a libcurl/curl more or\nless regardless of what it contains. As a consequence, we have around 80 CI\njobs to test each pull request to provide ample coverage. 
Being a kitchen-sink is really hard work.\n\n> but I was\n> curious how many packages depend on libssl1.1, libgnutls30 and libnss3\n> in the Debian package repos in my sources.list, and I came up with\n> OpenSSL = 820, GnuTLS = 342, and NSS = 87.\n\nI don't see a lot of excitement over GnuTLS lately, but Debian shipping it due\nto (I believe) licensing concerns with OpenSSL does help it along. In my\nexperience, platforms with GnuTLS easily available also have OpenSSL easily\navailable.\n\n> I guess Solution.pm needs at least USE_NSS => undef for this not to\n> break the build on Windows.\n\nThanks, I'll fix (I admittedly haven't tried this at all on Windows yet).\n\n> Obviously cfbot is useless for testing this code, since its build\n> script does --with-openssl and you need --with-nss,\n\nRight, this is a CFBot problem with any patch that requires specific autoconf\nflags to be exercised. I wonder if we can make something when we do CF app\nintegration which can inject flags to a Travis pipeline in a safe manner?\n\n> but it still shows\n> us one thing: with your patch, a --with-openssl build is apparently\n> broken:\n> \n> /001_ssltests.pl .. 1/93 Bailout called. Further testing stopped:\n> system pg_ctl failed\n\nHumm .. I hate to say \"it worked on my machine\" but it did, but my TLS\nenvironment is hardly standard. Sorry for posting breakage; most likely this\nis a bug in the new test module structure that the patch introduces in order to\nsupport multiple backends for src/test/ssl. I'll fix.\n\n> There are some weird configure-related hunks in the patch:\n> \n> + -runstatedir | --runstatedir | --runstatedi | --runstated \\\n> ...[more stuff like that]...\n> \n> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n> \n> I see the same when I use Debian's autoconf, but not FreeBSD's or\n> MacPorts', despite all being version 2.69. 
That seems to be due to\n> non-upstreamed changes added by the Debian maintainers (I see the\n> off_t thing mentioned in /usr/share/doc/autoconf/changelog.Debian.gz).\n> I think you need to build a stock autoconf 2.69 or run autoconf on a\n> non-Debian system.\n\nSigh, yes that's a Debianism that slipped through, again sorry about that.\n\n> I installed libnss3-dev on my Debian box and then configure had\n> trouble locating and understanding <ssl.h>, until I added\n> --with-includes=/usr/include/nss:/usr/include/nspr. I suspect this is\n> supposed to be done with pkg-config nss --cflags somewhere in\n> configure (or alternatively nss-config --cflags, nspr-config --cflags,\n> I don't know, but we're using pkg-config for other stuff).\n\nYeah, that's a good point, I should fix that. Having a metric ton of TLS\nlibraries in various versions around in my environment I've been Stockholm\nSyndromed to --with-includes to the point where I didn't even think to run\nwithout it. It should clearly be as easy to use as OpenSSL wrt autoconf.\n\n> I installed the Debian package libnss3-tools (for certutil) and then,\n> in src/test/ssl, I ran make nssfiles (I guess that should be\n> automatic?)\n\nYes, it needs to run automatically for NSS builds on make check.\n\n> , and make check, and I got this far:\n> \n> Test Summary Report\n> -------------------\n> t/001_ssltests.pl (Wstat: 3584 Tests: 93 Failed: 14)\n> Failed tests: 14, 16, 18-20, 24, 27-28, 54-55, 78-80\n> 91\n> Non-zero exit status: 14\n> \n> You mentioned some were failing in this WIP -- are those results you expect?\n\nI'm not on my dev box at the moment, and I don't remember off the cuff, but\nthat sounds higher than I remember. 
I wonder if I fat-fingered the regexes in\nthe last version?\n\nThanks for taking a look at the patch, I'll fix up the reported issues Monday\nat the latest.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 12 Jul 2020 00:03:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jul 10, 2020 at 5:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> -#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))\n> +#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))\n>\n> I see the same when I use Debian's autoconf, but not FreeBSD's or\n> MacPorts', despite all being version 2.69. That seems to be due to\n> non-upstreamed changes added by the Debian maintainers (I see the\n> off_t thing mentioned in /usr/share/doc/autoconf/changelog.Debian.gz).\n\nBy the way, Dagfinn mentioned that these changes were in fact\nupstreamed, and happened to be beta-released today[1], and are due out\nin ~3 months as 2.70. 
That'll be something for us to coordinate a bit\nfurther down the road.\n\n[1] https://lists.gnu.org/archive/html/autoconf/2020-07/msg00006.html\n\n\n", "msg_date": "Wed, 15 Jul 2020 21:26:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "\nOn 5/15/20 4:46 PM, Daniel Gustafsson wrote:\n>\n> My plan is to keep hacking at this to have it reviewable for the 14 cycle, so\n> if anyone has an interest in NSS, then I would love to hear feedback on how it\n> works (and doesn't work).\n\n\nI'll be happy to help, particularly with Windows support and with some\nof the callback stuff I've had a hand in.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 15 Jul 2020 14:35:40 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 12 Jul 2020, at 00:03, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Thanks for taking a look at the patch, I'll fix up the reported issues Monday\n> at the latest.\n\nA bit of life intervened, but attached is a new version of the patch which\nshould work for OpenSSL builds, and have the other issues addressed as well. I\ntook the opportunity to clean up the NSS tests to be more like the OpenSSL ones\nto lessen the impact on the TAP testcases. On my Debian box, using the\nstandard NSS and NSPR packages, I get 6 failures which are essentially all\naround CRL handling. 
I'm going to circle back and look at what is missing there.\n\nThis version also removes the required patch for statistics reporting as that\nhas been committed in 6a5c750f3f72899f4f982f921d5bf5665f55651e.\n\ncheers ./daniel", "msg_date": "Thu, 16 Jul 2020 00:16:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 15 Jul 2020, at 20:35, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> On 5/15/20 4:46 PM, Daniel Gustafsson wrote:\n>> \n>> My plan is to keep hacking at this to have it reviewable for the 14 cycle, so\n>> if anyone has an interest in NSS, then I would love to hear feedback on how it\n>> works (and doesn't work).\n> \n> I'll be happy to help, particularly with Windows support and with some\n> of the callback stuff I've had a hand in.\n\nThat would be fantastic, thanks! The password callback handling is still a\nTODO so feel free to take a stab at that since you have a lot of context on\nthere.\n\nFor Windows, I've include USE_NSS in Solution.pm as Thomas pointed out in this\nthread, but that was done blind as I've done no testing on Windows yet.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 16 Jul 2020 00:18:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 16 Jul 2020, at 00:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 12 Jul 2020, at 00:03, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> Thanks for taking a look at the patch, I'll fix up the reported issues Monday\n>> at the latest.\n> \n> A bit of life intervened, but attached is a new version of the patch which\n> should work for OpenSSL builds, and have the other issues addressed as well. I\n> took the opportunity to clean up the NSS tests to be more like the OpenSSL ones\n> to lessen the impact on the TAP testcases. 
On my Debian box, using the\n> standard NSS and NSPR packages, I get 6 failures which are essentially all\n> around CRL handling. I'm going to circle back and look at what is missing there.\n\nTaking a look at this, the issue was that I had fat-fingered the Makefile rules\nfor generating the NSS databases. This is admittedly very messy at this point,\npartly due to trying to mimic OpenSSL filepaths/names to minimize the impact\non tests and to keep OpenSSL/NSS tests as \"optically\" equivalent as I could.\n\nWith this, I have one failing test (\"intermediate client certificate is\nprovided by client\") which I've left failing since I believe the case should be\nsupported by NSS. The issue is most likely that I haven't figured out the right\ncertinfo incantation to make it so (Mozilla hasn't strained themselves when\nwriting documentation for this toolchain, or any part of NSS for that matter).\n\nThe failing test when running with OpenSSL also remains; the issue is that the\nvery first test for incorrect key passphrase fails, even though the server is\nbehaving exactly as it should. Something in the test suite hackery breaks for\nthat test but I've been unable to pin down what it is; any help would be\ngreatly appreciated.\n\nThis version adds support for sslinfo on NSS for most of the functions. In the\nprocess I realized that sslinfo never got the memo about SSL support being\nabstracted behind an API, so I went and did that as well. This part of the\npatch should perhaps be broken out into a separate patch/thread in case it's\ndeemed interesting regardless of the eventual conclusion on this patch. Doing\nthis removed a bit of duplication with the backend code, and some error handling\nmoved to be-secure-openssl.c (originally added in d94c36a45ab45). As the\noriginal commit message states, they're mostly code hygiene with belts and\nsuspenders, but if we deemed them valuable enough for a contrib module ISTM\nthey should go into the backend as well. 
Adding a testcase for sslinfo is a\nTODO.\n\nSupport for pg_strong_random, sha2 and pgcrypto has been started, but it's less\ntrivial as NSS/NSPR requires a lot more initialization and state than OpenSSL,\nso it needs a bit more thought.\n\nI've also done a rebase over today's HEAD, a pgindent pass and some cleanup here\nand there.\n\ncheers ./daniel", "msg_date": "Mon, 20 Jul 2020 15:35:51 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 7/15/20 6:18 PM, Daniel Gustafsson wrote:\n>> On 15 Jul 2020, at 20:35, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>>\n>> On 5/15/20 4:46 PM, Daniel Gustafsson wrote:\n>>> My plan is to keep hacking at this to have it reviewable for the 14 cycle, so\n>>> if anyone has an interest in NSS, then I would love to hear feedback on how it\n>>> works (and doesn't work).\n>> I'll be happy to help, particularly with Windows support and with some\n>> of the callback stuff I've had a hand in.\n> That would be fantastic, thanks! The password callback handling is still a\n> TODO so feel free to take a stab at that since you have a lot of context\n> there.\n>\n> For Windows, I've included USE_NSS in Solution.pm as Thomas pointed out in this\n> thread, but that was done blind as I've done no testing on Windows yet.\n>\n\n\nOK, here is an update of your patch that compiles and runs against NSS\nunder Windows (VS2019).\n\n\nIn addition to some work that was missing in src/tools/msvc, I had to\nmake a few adjustments, including:\n\n\n * strtok_r() isn't available on Windows. 
I supplied a\n dummy that's enough to get it building cleanly, but that needs to be\n filled in properly.\n\n\nThere is still plenty of work to go, but this seemed a sufficient\nmilestone to report progress on.\n\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 31 Jul 2020 16:44:46 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 7/31/20 4:44 PM, Andrew Dunstan wrote:\n> On 7/15/20 6:18 PM, Daniel Gustafsson wrote:\n>>> On 15 Jul 2020, at 20:35, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>>>\n>>> On 5/15/20 4:46 PM, Daniel Gustafsson wrote:\n>>>> My plan is to keep hacking at this to have it reviewable for the 14 cycle, so\n>>>> if anyone has an interest in NSS, then I would love to hear feedback on how it\n>>>> works (and doesn't work).\n>>> I'll be happy to help, particularly with Windows support and with some\n>>> of the callback stuff I've had a hand in.\n>> That would be fantastic, thanks! The password callback handling is still a\n>> TODO so feel free to take a stab at that since you have a lot of context on\n>> there.\n>>\n>> For Windows, I've include USE_NSS in Solution.pm as Thomas pointed out in this\n>> thread, but that was done blind as I've done no testing on Windows yet.\n>>\n>\n> OK, here is an update of your patch that compiles and runs against NSS\n> under Windows (VS2019).\n>\n>\n> In addition to some work that was missing in src/tools/msvc, I had to\n> make a few adjustments, including:\n>\n>\n> * strtok_r() isn't available on Windows. 
We don't use it elsewhere in\n> the postgres code, and it seemed unnecessary to have reentrant calls\n> here, so I just replaced it with equivalent strtok() calls.\n> * We were missing an NSS implementation of\n> pgtls_verify_peer_name_matches_certificate_guts(). I supplied a\n> dummy that's enough to get it building cleanly, but that needs to be\n> filled in properly.\n>\n>\n> There is still plenty of work to go, but this seemed a sufficient\n> milestone to report progress on.\n>\n>\n\n\nOK, this version contains pre-generated nss files, and passes a full\nbuildfarm run including the ssl test module, with both openssl and NSS.\nThat should keep the cfbot happy :-)\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 3 Aug 2020 12:46:24 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 8/3/20 12:46 PM, Andrew Dunstan wrote:\n> On 7/31/20 4:44 PM, Andrew Dunstan wrote:\n>> On 7/15/20 6:18 PM, Daniel Gustafsson wrote:\n>>>> On 15 Jul 2020, at 20:35, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>>>>\n>>>> On 5/15/20 4:46 PM, Daniel Gustafsson wrote:\n>>>>> My plan is to keep hacking at this to have it reviewable for the 14 cycle, so\n>>>>> if anyone has an interest in NSS, then I would love to hear feedback on how it\n>>>>> works (and doesn't work).\n>>>> I'll be happy to help, particularly with Windows support and with some\n>>>> of the callback stuff I've had a hand in.\n>>> That would be fantastic, thanks! 
The password callback handling is still a\n>>> TODO so feel free to take a stab at that since you have a lot of context on\n>>> there.\n>>>\n>>> For Windows, I've include USE_NSS in Solution.pm as Thomas pointed out in this\n>>> thread, but that was done blind as I've done no testing on Windows yet.\n>>>\n>> OK, here is an update of your patch that compiles and runs against NSS\n>> under Windows (VS2019).\n>>\n>>\n>> In addition to some work that was missing in src/tools/msvc, I had to\n>> make a few adjustments, including:\n>>\n>>\n>> * strtok_r() isn't available on Windows. We don't use it elsewhere in\n>> the postgres code, and it seemed unnecessary to have reentrant calls\n>> here, so I just replaced it with equivalent strtok() calls.\n>> * We were missing an NSS implementation of\n>> pgtls_verify_peer_name_matches_certificate_guts(). I supplied a\n>> dummy that's enough to get it building cleanly, but that needs to be\n>> filled in properly.\n>>\n>>\n>> There is still plenty of work to go, but this seemed a sufficient\n>> milestone to report progress on.\n>>\n>>\n>\n> OK, this version contains pre-generated nss files, and passes a full\n> buildfarm run including the ssl test module, with both openssl and NSS.\n> That should keep the cfbot happy :-)\n>\n>\n\nrebased on current master.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 3 Aug 2020 15:18:47 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Aug 2020, at 21:18, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> On 8/3/20 12:46 PM, Andrew Dunstan wrote:\n>> On 7/31/20 4:44 PM, Andrew Dunstan wrote:\n\n>>> OK, here is an update of your patch that compiles and runs against NSS\n>>> under Windows (VS2019).\n\nOut of curiosity since I'm not familiar 
with Windows, how hard/easy is it to\ninstall NSS for the purpose of a) hacking on postgres+NSS and b) using postgres\nwith NSS as the backend?\n\n>>> * strtok_r() isn't available on Windows. We don't use it elsewhere in\n>>> the postgres code, and it seemed unnecessary to have reentrant calls\n>>> here, so I just replaced it with equivalent strtok() calls.\n\nFair enough, that makes sense.\n\n>>> * We were missing an NSS implementation of\n>>> pgtls_verify_peer_name_matches_certificate_guts(). I supplied a\n>>> dummy that's enough to get it building cleanly, but that needs to be\n>>> filled in properly.\n\nInteresting, not sure how I could've missed that one. \n\n>> OK, this version contains pre-generated nss files, and passes a full\n>> buildfarm run including the ssl test module, with both openssl and NSS.\n>> That should keep the cfbot happy :-)\n\nExciting, thanks a lot for helping out on this! I've started to look at the\nrequired documentation changes during vacation, will hopefully be able to post\nsomething soon.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 4 Aug 2020 23:42:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "\nOn 8/4/20 5:42 PM, Daniel Gustafsson wrote:\n>> On 3 Aug 2020, at 21:18, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>> On 8/3/20 12:46 PM, Andrew Dunstan wrote:\n>>> On 7/31/20 4:44 PM, Andrew Dunstan wrote:\n>>>> OK, here is an update of your patch that compiles and runs against NSS\n>>>> under Windows (VS2019).\n> Out of curiosity since I'm not familiar with Windows, how hard/easy is it to\n> install NSS for the purpose of a) hacking on postgres+NSS and b) using postgres\n> with NSS as the backend?\n\n\n\n\nI've laid out the process at\nhttps://www.2ndquadrant.com/en/blog/nss-on-windows-for-postgresql-development/\n\n\n>>> OK, this version contains pre-generated nss files, and passes a full\n>>> buildfarm run 
including the ssl test module, with both openssl and NSS.\n>>> That should keep the cfbot happy :-)\n> Exciting, thanks a lot for helping out on this! I've started to look at the\n> required documentation changes during vacation, will hopefully be able to post\n> something soon.\n>\n\n\nGood. Having got the tests running cleanly on Linux, I'm now going back\nto work on that for Windows.\n\n\nAfter that I'll look at the hook/callback stuff.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 5 Aug 2020 16:38:38 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 5 Aug 2020, at 22:38, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n> \n> On 8/4/20 5:42 PM, Daniel Gustafsson wrote:\n>>> On 3 Aug 2020, at 21:18, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>>> On 8/3/20 12:46 PM, Andrew Dunstan wrote:\n>>>> On 7/31/20 4:44 PM, Andrew Dunstan wrote:\n>>>>> OK, here is an update of your patch that compiles and runs against NSS\n>>>>> under Windows (VS2019).\n>> Out of curiosity since I'm not familiar with Windows, how hard/easy is it to\n>> install NSS for the purpose of a) hacking on postgres+NSS and b) using postgres\n>> with NSS as the backend?\n> \n> I've laid out the process at\n> https://www.2ndquadrant.com/en/blog/nss-on-windows-for-postgresql-development/\n\nThat's fantastic, thanks for putting that together.\n\n>>>> OK, this version contains pre-generated nss files, and passes a full\n>>>> buildfarm run including the ssl test module, with both openssl and NSS.\n>>>> That should keep the cfbot happy :-)\n\nTurns out the CFBot doesn't like the binary diffs. They are included in this\nversion too but we should probably drop them again it seems.\n\n>> Exciting, thanks a lot for helping out on this! 
I've started to look at the\n>> required documentation changes during vacation, will hopefully be able to post\n>> something soon.\n> \n> Good. Having got the tests running cleanly on Linux, I'm now going back\n> to work on that for Windows.\n> \n> After that I'll look at the hook/callback stuff.\n\nThe attached v9 contains mostly a first stab at getting some documentation\ngoing, it's far from completed but I'd rather share more frequently to not have\nlocal trees deviate too much in case you've had time to hack as well. I had a\nfew documentation tweaks in the code too, but no real functionality change for\nnow.\n\nThe 0001 patch isn't strictly necessary but it seems reasonable to address the\nvarious ways OpenSSL was spelled out in the docs while at updating the SSL\nportions. It essentially ensures that markup around OpenSSL and SSL is used\nconsistently. I didn't address the linelengths being too long in this patch to\nmake review easier instead.\n\ncheers ./daniel", "msg_date": "Tue, 1 Sep 2020 14:43:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": ">\n> >>>> OK, this version contains pre-generated nss files, and passes a full\n> >>>> buildfarm run including the ssl test module, with both openssl and NSS.\n> >>>> That should keep the cfbot happy :-)\n>\n> Turns out the CFBot doesn't like the binary diffs. They are included in this\n> version too but we should probably drop them again it seems.\n>\n\nI did ask Thomas about this, he was going to try to fix it. In\nprinciple we should want it to accept binary diffs exactly for this\nsort of thing.\n\n\n> The attached v9 contains mostly a first stab at getting some documentation\n> going, it's far from completed but I'd rather share more frequently to not have\n> local trees deviate too much in case you've had time to hack as well. 
I had a\n> few documentation tweaks in the code too, but no real functionality change for\n> now.\n>\n> The 0001 patch isn't strictly necessary but it seems reasonable to address the\n> various ways OpenSSL was spelled out in the docs while at updating the SSL\n> portions. It essentially ensures that markup around OpenSSL and SSL is used\n> consistently. I didn't address the linelengths being too long in this patch to\n> make review easier instead.\n>\n\n\nI'll take a look.\n\ncheers\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 15:26:03 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Sep 03, 2020 at 03:26:03PM -0400, Andrew Dunstan wrote:\n>> The 0001 patch isn't strictly necessary but it seems reasonable to address the\n>> various ways OpenSSL was spelled out in the docs while at updating the SSL\n>> portions. It essentially ensures that markup around OpenSSL and SSL is used\n>> consistently. I didn't address the linelengths being too long in this patch to\n>> make review easier instead.\n> \n> I'll take a look.\n\nAdding a <productname> markup around OpenSSL in the docs makes things\nconsistent. +1.\n--\nMichael", "msg_date": "Fri, 4 Sep 2020 10:23:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Sep 04, 2020 at 10:23:34AM +0900, Michael Paquier wrote:\n> Adding a <productname> markup around OpenSSL in the docs makes things\n> consistent. +1.\n\nI have looked at 0001, and applied it after fixing the line length\n(thanks for not doing it to ease my lookup), and I found one extra\nplace in need of fix. 
Patch 0002 is failing to apply.\n--\nMichael", "msg_date": "Thu, 17 Sep 2020 16:41:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 17 Sep 2020, at 09:41, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Sep 04, 2020 at 10:23:34AM +0900, Michael Paquier wrote:\n>> Adding a <productname> markup around OpenSSL in the docs makes things\n>> consistent. +1.\n> \n> I have looked at 0001, and applied it after fixing the line length\n> (thanks for not doing it to ease my lookup), and I found one extra\n> place in need of fix. \n\nThanks!\n\n> Patch 0002 is failing to apply.\n\nAttached is a v10 rebased to apply on top of HEAD.\n\ncheers ./daniel", "msg_date": "Thu, 17 Sep 2020 11:41:28 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Sep 17, 2020 at 11:41:28AM +0200, Daniel Gustafsson wrote:\n> Attached is a v10 rebased to apply on top of HEAD.\n\nI am afraid that this needs a new rebase. The patch is failing to\napply, per the CF bot. :/\n--\nMichael", "msg_date": "Tue, 29 Sep 2020 14:59:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 29 Sep 2020, at 07:59, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Sep 17, 2020 at 11:41:28AM +0200, Daniel Gustafsson wrote:\n>> Attached is a v10 rebased to apply on top of HEAD.\n> \n> I am afraid that this needs a new rebase. The patch is failing to\n> apply, per the CF bot. 
:/\n\nIt's failing on binary diffs due to the NSS certificate databases being\nincluded to make hacking on the patch easier:\n\n File src/test/ssl/ssl/nss/server.crl: git binary diffs are not supported.\n\nThis is a limitation of the CFBot patch tester, the text portions of the patch\nstill apply with a tiny bit of fuzz.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 29 Sep 2020 09:52:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 29 Sep 2020, at 09:52, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 29 Sep 2020, at 07:59, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Thu, Sep 17, 2020 at 11:41:28AM +0200, Daniel Gustafsson wrote:\n>>> Attached is a v10 rebased to apply on top of HEAD.\n>> \n>> I am afraid that this needs a new rebase. The patch is failing to\n>> apply, per the CF bot. :/\n> \n> It's failing on binary diffs due to the NSS certificate databases being\n> included to make hacking on the patch easier:\n> \n> File src/test/ssl/ssl/nss/server.crl: git binary diffs are not supported.\n> \n> This is a limitation of the CFBot patch tester, the text portions of the patch\n> still apply with a tiny bit of fuzz.\n\nAttached is a new version which doesn't contain the NSS certificate databases\nto keep the CFBot happy.\n\nIt also implements server-side passphrase callbacks as well as re-enables the\ntests for those. The callback works a bit differently from the OpenSSL one as\nit must run in the forked process, so it can't run on server reload. 
There's\nalso no default fallback reading from a TTY like in OpenSSL, so if no callback\nis set the always-failing dummy is set.\n\ncheers ./daniel", "msg_date": "Fri, 2 Oct 2020 22:01:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "The attached v12 adds support for pgcrypto as well as pg_strong_random, which I\nbelieve completes the required subsystems where we have OpenSSL support today.\nI opted for not adding code to handle the internal shaXXX implementations until\nthe dust settles around the proposal to change the API there.\n\nBlowfish is not supported by NSS AFAICT, even though the cipher mechanism is\ndefined, so the internal implementation is used there instead. CAST5 is\nsupported, but segfaults inside NSS on most inputs so support for that is not\nincluded for now.\n\ncheers ./daniel", "msg_date": "Tue, 20 Oct 2020 14:24:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2020-10-20 14:24:24 +0200, Daniel Gustafsson wrote:\n> From 0cb0e6a0ce9adb18bc9d212bd03e4e09fa452972 Mon Sep 17 00:00:00 2001\n> From: Daniel Gustafsson <daniel@yesql.se>\n> Date: Thu, 8 Oct 2020 18:44:28 +0200\n> Subject: [PATCH] Support for NSS as a TLS backend v12\n> ---\n> configure | 223 +++-\n> configure.ac | 39 +-\n> contrib/Makefile | 2 +-\n> contrib/pgcrypto/Makefile | 5 +\n> contrib/pgcrypto/nss.c | 773 +++++++++++\n> contrib/pgcrypto/openssl.c | 2 +-\n> contrib/pgcrypto/px.c | 1 +\n> contrib/pgcrypto/px.h | 1 +\n\nPersonally I'd like to see this patch broken up a bit - it's quite\nlarge. 
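As one hedged way to do the mechanical part of such a split, a unified diff can be carved into per-top-level-directory pieces with plain awk; the file names below are illustrative, not from the thread:

```shell
# Build a tiny sample patch touching two top-level paths, standing in for
# the large combined NSS patch.
cat > big.patch <<'EOF'
diff --git a/configure b/configure
--- a/configure
+++ b/configure
@@ -1 +1 @@
-old
+new
diff --git a/contrib/pgcrypto/nss.c b/contrib/pgcrypto/nss.c
--- /dev/null
+++ b/contrib/pgcrypto/nss.c
@@ -0,0 +1 @@
+/* new file */
EOF

# Route every hunk to split_<first path component>.patch based on the
# "diff --git a/..." header that precedes it.
awk '/^diff --git/ { split($3, p, "/"); out = "split_" p[2] ".patch" }
     out != "" { print > out }' big.patch

ls split_*.patch
```

patchutils' filterdiff offers the same with include/exclude patterns and handles more diff corner cases.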
Several of the changes could easily be committed separately, no?\n\n\n> if test \"$with_openssl\" = yes ; then\n> + if test x\"$with_nss\" = x\"yes\" ; then\n> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n> + fi\n\nBased on a quick look there's no similar error check for the msvc\nbuild. Should there be?\n\n> \n> +if test \"$with_nss\" = yes ; then\n> + if test x\"$with_openssl\" = x\"yes\" ; then\n> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n> + fi\n\nIsn't this a repetition of the earlier check?\n\n\n> + CLEANLDFLAGS=\"$LDFLAGS\"\n> + # TODO: document this set of LDFLAGS\n> + LDFLAGS=\"-lssl3 -lsmime3 -lnss3 -lplds4 -lplc4 -lnspr4 $LDFLAGS\"\n\nShouldn't this use nss-config or such?\n\n\n> +if test \"$with_nss\" = yes ; then\n> + AC_CHECK_HEADER(ssl.h, [], [AC_MSG_ERROR([header file <ssl.h> is required for NSS])])\n> + AC_CHECK_HEADER(nss.h, [], [AC_MSG_ERROR([header file <nss.h> is required for NSS])])\n> +fi\n\nHm. For me, on debian, these headers are not directly in the default\ninclude search path, but would be as nss/ssl.h. I don't see you adding\nnss/ to CFLAGS anywhere? How does this work currently?\n\nI think it'd also be better if we could include these files as nss/ssl.h\netc - ssl.h is a name way too likely to conflict imo.\n\n> +++ b/src/backend/libpq/be-secure-nss.c\n> @@ -0,0 +1,1158 @@\n> +/*\n> + * BITS_PER_BYTE is also defined in the NSPR header files, so we need to undef\n> + * our version to avoid compiler warnings on redefinition.\n> + */\n> +#define pg_BITS_PER_BYTE BITS_PER_BYTE\n> +#undef BITS_PER_BYTE\n\nMost compilers/preprocessors don't warn about redefinitions when they\nwould result in the same value (IIRC we have some cases of redefinitions\nin tree even). 
Does nspr's differ?\n\n\n> +/*\n> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n> + * colliding definitions from ours, causing a much expected compiler error.\n> + * The definitions are however not actually used in NSPR at all, and are only\n> + * intended for what seems to be backwards compatibility for apps written\n> + * against old versions of NSPR. The following comment is in the referenced\n> + * file, and was added in 1998:\n> + *\n> + *\t\tThis section typedefs the old 'native' types to the new PR<type>s.\n> + *\t\tThese definitions are scheduled to be eliminated at the earliest\n> + *\t\tpossible time. The NSPR API is implemented and documented using\n> + *\t\tthe new definitions.\n> + *\n> + * As there is no opt-out from pulling in these typedefs, we define the guard\n> + * for the file to exclude it. This is incredibly ugly, but seems to be about\n> + * the only way around it.\n> + */\n> +#define PROTYPES_H\n> +#include <nspr.h>\n> +#undef PROTYPES_H\n\nYuck :(.\n\n\n> +int\n> +be_tls_init(bool isServerStart)\n> +{\n> +\tSECStatus\tstatus;\n> +\tSSLVersionRange supported_sslver;\n> +\n> +\t/*\n> +\t * Set up the connection cache for multi-processing application behavior.\n\nHm. Do we necessarily want that? Session resumption is not exactly\nunproblematic... Or does this do something else?\n\n\n> +\t * If we are in ServerStart then we initialize the cache. If the server is\n> +\t * already started, we inherit the cache such that it can be used for\n> +\t * connections. Calling SSL_ConfigMPServerSIDCache sets an environment\n> +\t * variable which contains enough information for the forked child to know\n> +\t * how to access it. 
Passing NULL to SSL_InheritMPServerSIDCache will\n> +\t * make the forked child look it up by the default name SSL_INHERITANCE,\n> +\t * if env vars aren't inherited then the contents of the variable can be\n> +\t * passed instead.\n> +\t */\n\nDoes this stuff work on windows / EXEC_BACKEND?\n\n\n> +\t * The below parameters are what the implicit initialization would've done\n> +\t * for us, and should work even for older versions where it might not be\n> +\t * done automatically. The last parameter, maxPTDs, is set to various\n> +\t * values in other codebases, but has been unused since NSPR 2.1 which was\n> +\t * released sometime in 1998.\n> +\t */\n> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0 /* maxPTDs */ );\n\nhttps://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_Init\nsays that currently all parameters are ignored?\n\n\n\n\n> +\t/*\n> +\t * Import the already opened socket as we don't want to use NSPR functions\n> +\t * for opening the network socket due to how the PostgreSQL protocol works\n> +\t * with TLS connections. This function is not part of the NSPR public API,\n> +\t * see the comment at the top of the file for the rationale of still using\n> +\t * it.\n> +\t */\n> +\tpr_fd = PR_ImportTCPSocket(port->sock);\n> +\tif (!pr_fd)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to connect to socket\")));\n\nI don't see the comment you're referring to?\n\n\n> +\t/*\n> +\t * Most of the documentation available, and implementations of, NSS/NSPR\n> +\t * use the PR_NewTCPSocket() function here, which has the drawback that it\n> +\t * can only create IPv4 sockets. Instead use PR_OpenTCPSocket() which\n> +\t * copes with IPv6 as well.\n> +\t */\n> +\tmodel = PR_OpenTCPSocket(port->laddr.addr.ss_family);\n> +\tif (!model)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to open socket\")));\n> +\n> +\t/*\n> +\t * Convert the NSPR socket to an SSL socket. 
Ensuring the success of this\n> +\t * operation is critical as NSS SSL_* functions may return SECSuccess on\n> +\t * the socket even though SSL hasn't been enabled, which introduce a risk\n> +\t * of silent downgrades.\n> +\t */\n> +\tmodel = SSL_ImportFD(NULL, model);\n> +\tif (!model)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to enable TLS on socket\")));\n\nIt's confusing that these functions do not actually reference the socket\nvia some handle :(. What does opening a socket do here?\n\n\n> +\t/*\n> +\t * Configure the allowed cipher. If there are no user preferred suites,\n\n*ciphers?\n\n> +\n> +\tport->pr_fd = SSL_ImportFD(model, pr_fd);\n> +\tif (!port->pr_fd)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to initialize\")));\n> +\n> +\tPR_Close(model);\n\nA comment explaining why we first import a NULL into the model, and then\nrelease the model, and import the real fd would be good.\n\n\n> +ssize_t\n> +be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)\n> +{\n> +\tssize_t\t\tn_read;\n> +\tPRErrorCode err;\n> +\n> +\tn_read = PR_Read(port->pr_fd, ptr, len);\n> +\n> +\tif (n_read < 0)\n> +\t{\n> +\t\terr = PR_GetError();\n\nYay, more thread global state :(.\n\n> +\t\t/* XXX: This logic seems potentially bogus? */\n> +\t\tif (err == PR_WOULD_BLOCK_ERROR)\n> +\t\t\t*waitfor = WL_SOCKET_READABLE;\n> +\t\telse\n> +\t\t\t*waitfor = WL_SOCKET_WRITEABLE;\n\nDon't we need to handle failed connections somewhere here? secure_read()\nwon't know about PR_GetError() etc? How would SSL errors be signalled\nupwards here?\n\nAlso, as you XXX, it's not clear to me that your mapping would always\nresult in waiting for the right event? A tls write could e.g. very well\nrequire receiving data etc?\n\n> +\t/*\n> +\t * At least one byte with password content was returned, and NSS requires\n> +\t * that we return it allocated in NSS controlled memory. 
If we fail to\n> +\t * allocate then abort without passing back NULL and bubble up the error\n> +\t * on the PG side.\n> +\t */\n> +\tpassword = (char *) PR_Malloc(len + 1);\n> +\tif (!password)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> +\t\t\t\t errmsg(\"out of memory\")));\n>\n> +\tstrlcpy(password, buf, sizeof(password));\n> +\texplicit_bzero(buf, sizeof(buf));\n> +\n\nIn case of error you're not bzero'ing out the password!\n\nSeparately, I wonder if we should introduce a function for throwing OOM\nerrors - which then e.g. could print the memory context stats in those\nplaces too...\n\n\n> +static SECStatus\n> +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\n> +{\n> +\tSECStatus\tstatus;\n> +\tPort\t *port = (Port *) arg;\n> +\tCERTCertificate *cert;\n> +\tchar\t *peer_cn;\n> +\tint\t\t\tlen;\n> +\n> +\tstatus = SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\n> +\tif (status == SECSuccess)\n> +\t{\n> +\t\tcert = SSL_PeerCertificate(port->pr_fd);\n> +\t\tlen = strlen(cert->subjectName);\n> +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\n> +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\n> +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\n> +\t\telse\n> +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\n> +\t\tCERT_DestroyCertificate(cert);\n> +\n> +\t\tport->peer_cn = peer_cn;\n> +\t\tport->peer_cert_valid = true;\n\nHm. 
We either should have something similar to\n\n\t\t\t/*\n\t\t\t * Reject embedded NULLs in certificate common name to prevent\n\t\t\t * attacks like CVE-2009-4034.\n\t\t\t */\n\t\t\tif (len != strlen(peer_cn))\n\t\t\t{\n\t\t\t\tereport(COMMERROR,\n\t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n\t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\n\t\t\t\tpfree(peer_cn);\n\t\t\t\treturn -1;\n\t\t\t}\nhere, or a comment explaining why not.\n\nAlso, what's up with the CN= bit? Why is that needed here, but not for\nopenssl?\n\n\n> +static PRFileDesc *\n> +init_iolayer(Port *port, int loglevel)\n> +{\n> +\tconst\t\tPRIOMethods *default_methods;\n> +\tPRFileDesc *layer;\n> +\n> +\t/*\n> +\t * Start by initializing our layer with all the default methods so that we\n> +\t * can selectively override the ones we want while still ensuring that we\n> +\t * have a complete layer specification.\n> +\t */\n> +\tdefault_methods = PR_GetDefaultIOMethods();\n> +\tmemcpy(&pr_iomethods, default_methods, sizeof(PRIOMethods));\n> +\n> +\tpr_iomethods.recv = pg_ssl_read;\n> +\tpr_iomethods.send = pg_ssl_write;\n> +\n> +\t/*\n> +\t * Each IO layer must be identified by a unique name, where uniqueness is\n> +\t * per connection. Each connection in a postgres cluster can generate the\n> +\t * identity from the same string as they will create their IO layers on\n> +\t * different sockets. Only one layer per socket can have the same name.\n> +\t */\n> +\tpr_id = PR_GetUniqueIdentity(\"PostgreSQL\");\n\nSeems like it might not be a bad idea to append Server or something?\n\n\n> +\n> +\t/*\n> +\t * Create the actual IO layer as a stub such that it can be pushed onto\n> +\t * the layer stack. 
The step via a stub is required as we define custom\n> +\t * callbacks.\n> +\t */\n> +\tlayer = PR_CreateIOLayerStub(pr_id, &pr_iomethods);\n> +\tif (!layer)\n> +\t{\n> +\t\tereport(loglevel,\n> +\t\t\t\t(errmsg(\"unable to create NSS I/O layer\")));\n> +\t\treturn NULL;\n> +\t}\n\nWhy is this accepting a variable log level? The only caller passes ERROR?\n\n\n> +/*\n> + * pg_SSLerrmessage\n> + *\t\tCreate and return a human readable error message given\n> + *\t\tthe specified error code\n> + *\n> + * PR_ErrorToName only converts the enum identifier of the error to string,\n> + * but that can be quite useful for debugging (and in case PR_ErrorToString is\n> + * unable to render a message then we at least have something).\n> + */\n> +static char *\n> +pg_SSLerrmessage(PRErrorCode errcode)\n> +{\n> +\tchar\t\terror[128];\n> +\tint\t\t\tret;\n> +\n> +\t/* TODO: this should perhaps use a StringInfo instead.. */\n> +\tret = pg_snprintf(error, sizeof(error), \"%s (%s)\",\n> +\t\t\t\t\t PR_ErrorToString(errcode, PR_LANGUAGE_I_DEFAULT),\n> +\t\t\t\t\t PR_ErrorToName(errcode));\n> +\tif (ret)\n> +\t\treturn pstrdup(error);\n\n> +\treturn pstrdup(_(\"unknown TLS error\"));\n> +}\n\nWhy not use psrintf() here?\n\n\n\n> +++ b/src/include/common/pg_nss.h\n> @@ -0,0 +1,141 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * pg_nss.h\n> + *\t Support for NSS as a TLS backend\n> + *\n> + * These definitions are used by both frontend and backend code.\n> + *\n> + * Copyright (c) 2020, PostgreSQL Global Development Group\n> + *\n> + * IDENTIFICATION\n> + * src/include/common/pg_nss.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#ifndef PG_NSS_H\n> +#define PG_NSS_H\n> +\n> +#ifdef USE_NSS\n> +\n> +#include <sslproto.h>\n> +\n> +PRUint16\tpg_find_cipher(char *name);\n> +\n> +typedef struct\n> +{\n> +\tconst char *name;\n> +\tPRUint16\tnumber;\n> +}\t\t\tNSSCiphers;\n> +\n> +#define 
INVALID_CIPHER\t0xFFFF\n> +\n> +/*\n> + * This list is a partial copy of the ciphers in NSS files lib/ssl/sslproto.h\n> + * in order to provide a human readable version of the ciphers. It would be\n> + * nice to not have to have this, but NSS doesn't provide any API addressing\n> + * the ciphers by name. TODO: do we want more of the ciphers, or perhaps less?\n> + */\n> +static const NSSCiphers NSS_CipherList[] = {\n> +\n> +\t{\"TLS_NULL_WITH_NULL_NULL\", TLS_NULL_WITH_NULL_NULL},\n\nHm. Is this whole business of defining array constants in a header just\ndone to avoid having a .c file that needs to be compiled both in\nfrontend and backend code?\n\n\n> +/*\n> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n> + * colliding definitions from ours, causing a much expected compiler error.\n> + * The definitions are however not actually used in NSPR at all, and are only\n> + * intended for what seems to be backwards compatibility for apps written\n> + * against old versions of NSPR. The following comment is in the referenced\n> + * file, and was added in 1998:\n> + *\n> + *\t\tThis section typedefs the old 'native' types to the new PR<type>s.\n> + *\t\tThese definitions are scheduled to be eliminated at the earliest\n> + *\t\tpossible time. The NSPR API is implemented and documented using\n> + *\t\tthe new definitions.\n> + *\n> + * As there is no opt-out from pulling in these typedefs, we define the guard\n> + * for the file to exclude it. This is incredibly ugly, but seems to be about\n> + * the only way around it.\n> + */\n\nThere's a lot of duplicated comments here. Could we move either of the\nfiles to reference the other for longer ones?\n\n\n\n> +/*\n> + * PR_ImportTCPSocket() is a private API, but very widely used, as it's the\n> + * only way to make NSS use an already set up POSIX file descriptor rather\n> + * than opening one itself. 
To quote the NSS documentation:\n> + *\n> + *\t\t\"In theory, code that uses PR_ImportTCPSocket may break when NSPR's\n> + *\t\timplementation changes. In practice, this is unlikely to happen because\n> + *\t\tNSPR's implementation has been stable for years and because of NSPR's\n> + *\t\tstrong commitment to backward compatibility.\"\n> + *\n> + * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket\n> + *\n> + * The function is declared in <private/pprio.h>, but as it is a header marked\n> + * private we declare it here rather than including it.\n> + */\n> +NSPR_API(PRFileDesc *) PR_ImportTCPSocket(int);\n\nUgh. This is really the way to do this? How do other applications deal\nwith this problem?\n\n\n> +#if defined(WIN32)\n> +static const char *ca_trust_name = \"nssckbi.dll\";\n> +#elif defined(__darwin__)\n> +static const char *ca_trust_name = \"libnssckbi.dylib\";\n> +#else\n> +static const char *ca_trust_name = \"libnssckbi.so\";\n> +#endif\n\nThere's really no pre-existing handling for this in nss???\n\n\n> +\t/*\n> +\t * The original design of NSS was for a single application to use a single\n> +\t * copy of it, initialized with NSS_Initialize() which isn't returning any\n> +\t * handle with which to refer to NSS. NSS initialization and shutdown are\n> +\t * global for the application, so a shutdown in another NSS enabled\n> +\t * library would cause NSS to be stopped for libpq as well. The fix has\n> +\t * been to introduce NSS_InitContext which returns a context handle to\n> +\t * pass to NSS_ShutdownContext. NSS_InitContext was introduced in NSS\n> +\t * 3.12, but the use of it is not very well documented.\n> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=738456\n> +\t *\n> +\t * The InitParameters struct passed can be used to override internal\n> +\t * values in NSS, but the usage is not documented at all. 
When using\n> +\t * NSS_Init initializations, the values are instead set via PK11_Configure\n> +\t * calls so the PK11_Configure documentation can be used to glean some\n> +\t * details on these.\n> +\t *\n> +\t * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/PKCS11/Module_Specs\n\n> +\n> +\tif (!nss_context)\n> +\t{\n> +\t\tchar\t *err = pg_SSLerrmessage(PR_GetError());\n> +\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"unable to %s certificate database: %s\"),\n> +\t\t\t\t\t\t conn->cert_database ? \"open\" : \"create\",\n> +\t\t\t\t\t\t err);\n> +\t\tfree(err);\n> +\t\treturn PGRES_POLLING_FAILED;\n> +\t}\n> +\n> +\t/*\n> +\t * Configure cipher policy.\n> +\t */\n> +\tstatus = NSS_SetDomesticPolicy();\n\nWhy is \"domestic\" the right thing here?\n\n\n> +\n> +\tPK11_SetPasswordFunc(PQssl_passwd_cb);\n\nIs it actually OK to do stuff like this when other users of NSS might be\npresent? That's obviously more likely in the libpq case, compared to the\nbackend case (where it's also possible, of course). What prevents us\nfrom overriding another user's callback?\n\n\n> +ssize_t\n> +pgtls_read(PGconn *conn, void *ptr, size_t len)\n> +{\n> +\tPRInt32\t\tnread;\n> +\tPRErrorCode status;\n> +\tint\t\t\tread_errno = 0;\n> +\n> +\tnread = PR_Recv(conn->pr_fd, ptr, len, 0, PR_INTERVAL_NO_WAIT);\n> +\n> +\t/*\n> +\t * PR_Recv blocks until there is data to read or the timeout expires. 
Zero\n> +\t * is returned for closed connections, while -1 indicates an error within\n> +\t * the ongoing connection.\n> +\t */\n> +\tif (nread == 0)\n> +\t{\n> +\t\tread_errno = ECONNRESET;\n> +\t\treturn -1;\n> +\t}\n\nIt's a bit confusing to talk about blocking when the socket presumably\nis in non-blocking mode, and you're also asking to never wait?\n\n\n> +\tif (nread == -1)\n> +\t{\n> +\t\tstatus = PR_GetError();\n> +\n> +\t\tswitch (status)\n> +\t\t{\n> +\t\t\tcase PR_WOULD_BLOCK_ERROR:\n> +\t\t\t\tread_errno = EINTR;\n> +\t\t\t\tbreak;\n\nUh, isn't this going to cause a busy-loop by the caller? EINTR isn't the\nsame as EAGAIN/EWOULDBLOCK?\n\n\n> +\t\t\tcase PR_IO_TIMEOUT_ERROR:\n> +\t\t\t\tbreak;\n\nWhat does this mean? We'll return with a 0 errno here, right? When is\nthis case reachable?\n\nE.g. the comment in fe-misc.c:\n\t\t\t\t/* pqsecure_read set the error message for us */\nfor this case doesn't seem to be fulfilled by this.\n\n\n> +/*\n> + *\tVerify that the server certificate matches the hostname we connected to.\n> + *\n> + * The certificate's Common Name and Subject Alternative Names are considered.\n> + */\n> +int\n> +pgtls_verify_peer_name_matches_certificate_guts(PGconn *conn,\n> +\t\t\t\t\t\t\t\t\t\t\t\tint *names_examined,\n> +\t\t\t\t\t\t\t\t\t\t\t\tchar **first_name)\n> +{\n> +\treturn 1;\n> +}\n\nUh, huh? 
Certainly doesn't verify anything...\n\n\n> +/* ------------------------------------------------------------ */\n> +/*\t\t\tPostgreSQL specific TLS support functions\t\t\t*/\n> +/* ------------------------------------------------------------ */\n> +\n> +/*\n> + * TODO: this a 99% copy of the same function in the backend, make these share\n> + * a single implementation instead.\n> + */\n> +static char *\n> +pg_SSLerrmessage(PRErrorCode errcode)\n> +{\n> +\tconst char *error;\n> +\n> +\terror = PR_ErrorToName(errcode);\n> +\tif (error)\n> +\t\treturn strdup(error);\n> +\n> +\treturn strdup(\"unknown TLS error\");\n> +}\n\nBtw, why does this need to duplicate strings, instead of returning a\nconst char*?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 20 Oct 2020 12:15:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 20 Oct 2020, at 21:15, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n\nThanks for your review, much appreciated!\n\n> On 2020-10-20 14:24:24 +0200, Daniel Gustafsson wrote:\n>> From 0cb0e6a0ce9adb18bc9d212bd03e4e09fa452972 Mon Sep 17 00:00:00 2001\n>> From: Daniel Gustafsson <daniel@yesql.se>\n>> Date: Thu, 8 Oct 2020 18:44:28 +0200\n>> Subject: [PATCH] Support for NSS as a TLS backend v12\n>> ---\n>> configure | 223 +++-\n>> configure.ac | 39 +-\n>> contrib/Makefile | 2 +-\n>> contrib/pgcrypto/Makefile | 5 +\n>> contrib/pgcrypto/nss.c | 773 +++++++++++\n>> contrib/pgcrypto/openssl.c | 2 +-\n>> contrib/pgcrypto/px.c | 1 +\n>> contrib/pgcrypto/px.h | 1 +\n> \n> Personally I'd like to see this patch broken up a bit - it's quite\n> large. Several of the changes could easily be committed separately, no?\n\nNot sure how much of this makes sense committed separately (unless separately\nmeans in quick succession), but it could certainly be broken up for the sake of\nmaking review easier. 
I will take a stab at that, but in a follow-up email as\nI would like the split to be a version just doing the split and not also\nintroducing/fixing things.\n\n>> if test \"$with_openssl\" = yes ; then\n>> + if test x\"$with_nss\" = x\"yes\" ; then\n>> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n>> + fi\n> \n> Based on a quick look there's no similar error check for the msvc\n> build. Should there be?\n\nThat's a good question. When embarking on this it seemed quite natural to me\nthat it should be, but now I'm not so sure. Maybe there should be a\n--with-openssl-preferred like how we handle readline/libedit or just allow\nmultiple and let the last one win? Do you have any input on what would make\nsense?\n\nThe only thing I think makes no sense is to allow multiple ones at the same\ntime given the current autoconf switches, even if it would just be to pick say\npg_strong_random from one and libpq TLS from another.\n\n>> +if test \"$with_nss\" = yes ; then\n>> + if test x\"$with_openssl\" = x\"yes\" ; then\n>> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n>> + fi\n> \n> Isn't this a repetition of the earlier check?\n\nIt is, and if we want to keep such a check it should be broken out into a\nseparate step performed before all library specific checks IMO.\n\n>> + CLEANLDFLAGS=\"$LDFLAGS\"\n>> + # TODO: document this set of LDFLAGS\n>> + LDFLAGS=\"-lssl3 -lsmime3 -lnss3 -lplds4 -lplc4 -lnspr4 $LDFLAGS\"\n> \n> Shouldn't this use nss-config or such?\n\nIndeed it should, where available. I've added rudimentary support for that\nwithout a fallback as of now.\n\n>> +if test \"$with_nss\" = yes ; then\n>> + AC_CHECK_HEADER(ssl.h, [], [AC_MSG_ERROR([header file <ssl.h> is required for NSS])])\n>> + AC_CHECK_HEADER(nss.h, [], [AC_MSG_ERROR([header file <nss.h> is required for NSS])])\n>> +fi\n> \n> Hm. 
For me, on debian, these headers are not directly in the default\n> include search path, but would be as nss/ssl.h. I don't see you adding\n> nss/ to CFLAGS anywhere? How does this work currently?\n\nI had Stockholm-syndromed myself into passing --with-includes and hadn't really\nrealized. Sometimes the obvious is too obvious in a 4000+ LOC patch.\n\n> I think it'd also be better if we could include these files as nss/ssl.h\n> etc - ssl.h is a name way too likely to conflict imo.\n\nI've changed this to be nss/ssl.h and nspr/nspr.h etc, but the include path\nwill still need the direct path to the headers (from autoconf) since nss.h\nincludes NSPR headers as #include <nspr.h> and so on.\n\n>> +++ b/src/backend/libpq/be-secure-nss.c\n>> @@ -0,0 +1,1158 @@\n>> +/*\n>> + * BITS_PER_BYTE is also defined in the NSPR header files, so we need to undef\n>> + * our version to avoid compiler warnings on redefinition.\n>> + */\n>> +#define pg_BITS_PER_BYTE BITS_PER_BYTE\n>> +#undef BITS_PER_BYTE\n> \n> Most compilers/preprocessors don't warn about redefinitions when they\n> would result in the same value (IIRC we have some cases of redefinitions\n> in tree even). Does nspr's differ?\n\nGCC 8.3 in my Debian installation throws the below warning:\n\n In file included from /usr/include/nspr/prtypes.h:26,\n from /usr/include/nspr/pratom.h:14,\n from /usr/include/nspr/nspr.h:9,\n from be-secure-nss.c:45:\n /usr/include/nspr/prcpucfg.h:1143: warning: \"BITS_PER_BYTE\" redefined\n #define BITS_PER_BYTE PR_BITS_PER_BYTE\n\n In file included from ../../../src/include/c.h:55,\n from ../../../src/include/postgres.h:46,\n from be-secure-nss.c:16:\n ../../../src/include/pg_config_manual.h:115: note: this is the location of the previous definition\n #define BITS_PER_BYTE 8\n\nPR_BITS_PER_BYTE is defined per platform in pr/include/md/_<platform>.cfg and\nis as expected 8. 
I assume it's that indirection which causes the warning?\n\n>> +/*\n>> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n>> + * colliding definitions from ours, causing a much expected compiler error.\n>> + * The definitions are however not actually used in NSPR at all, and are only\n>> + * intended for what seems to be backwards compatibility for apps written\n>> + * against old versions of NSPR. The following comment is in the referenced\n>> + * file, and was added in 1998:\n>> + *\n>> + *\t\tThis section typedefs the old 'native' types to the new PR<type>s.\n>> + *\t\tThese definitions are scheduled to be eliminated at the earliest\n>> + *\t\tpossible time. The NSPR API is implemented and documented using\n>> + *\t\tthe new definitions.\n>> + *\n>> + * As there is no opt-out from pulling in these typedefs, we define the guard\n>> + * for the file to exclude it. This is incredibly ugly, but seems to be about\n>> + * the only way around it.\n>> + */\n>> +#define PROTYPES_H\n>> +#include <nspr.h>\n>> +#undef PROTYPES_H\n> \n> Yuck :(.\n\nThat's not an understatement. Taking another dive into the NSS code I did\nhowever find a proper way to deal with this. Defining NO_NSPR_10_SUPPORT stops\nNSPR from using the files in obsolete/. So fixed, yay!\n\n>> +int\n>> +be_tls_init(bool isServerStart)\n>> +{\n>> +\tSECStatus\tstatus;\n>> +\tSSLVersionRange supported_sslver;\n>> +\n>> +\t/*\n>> +\t * Set up the connection cache for multi-processing application behavior.\n> \n> Hm. Do we necessarily want that? Session resumption is not exactly\n> unproblematic... Or does this do something else?\n\nFrom my reading of the docs, and experience with the code, a server application\nmust set up a connection cache in order to accept connections. Not entirely\nsure, and the docs aren't terribly clear for non SSLv2/v3 environments (it\nseems to only cache for SSLv2/3 and not TLSv+) but it seems like it may have\nother uses internally. 
I will hunt down some more information on the NSS\nmailing list.\n\n>> +\t * If we are in ServerStart then we initialize the cache. If the server is\n>> +\t * already started, we inherit the cache such that it can be used for\n>> +\t * connections. Calling SSL_ConfigMPServerSIDCache sets an environment\n>> +\t * variable which contains enough information for the forked child to know\n>> +\t * how to access it. Passing NULL to SSL_InheritMPServerSIDCache will\n>> +\t * make the forked child look it up by the default name SSL_INHERITANCE,\n>> +\t * if env vars aren't inherited then the contents of the variable can be\n>> +\t * passed instead.\n>> +\t */\n> \n> Does this stuff work on windows\n\nAccording to the documentation it does, and Andrew had this working on Windows\nin an earlier version of the patch. I need to get a proper Windows env for\ntesting/dev up and running as mine has bitrotted to nothingness.\n\n> / EXEC_BACKEND?\n\nThat's a good point, maybe we need to do a SSL_ConfigServerSessionIDCache\nrather than the MP version for EXEC_BACKEND? Not sure.\n\n>> +\t * The below parameters are what the implicit initialization would've done\n>> +\t * for us, and should work even for older versions where it might not be\n>> +\t * done automatically. The last parameter, maxPTDs, is set to various\n>> +\t * values in other codebases, but has been unused since NSPR 2.1 which was\n>> +\t * released sometime in 1998.\n>> +\t */\n>> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0 /* maxPTDs */ );\n> \n> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_Init\n> says that currently all parameters are ignored?\n\nRight, my comment didn't reflect that they're all dead these days, only that\none of them has been unused since RUN DMC topped the charts with \"It's like\nthat\". 
Comment updated.\n\n>> +\t/*\n>> +\t * Import the already opened socket as we don't want to use NSPR functions\n>> +\t * for opening the network socket due to how the PostgreSQL protocol works\n>> +\t * with TLS connections. This function is not part of the NSPR public API,\n>> +\t * see the comment at the top of the file for the rationale of still using\n>> +\t * it.\n>> +\t */\n>> +\tpr_fd = PR_ImportTCPSocket(port->sock);\n>> +\tif (!pr_fd)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to connect to socket\")));\n> \n> I don't see the comment you're referring to?\n\nIt's referring to the comment discussing PR_ImportTCPSocket being a private API\ncall, yet still used by everyone (which is also discussed later in this review).\n\n>> +\t/*\n>> +\t * Most of the documentation available, and implementations of, NSS/NSPR\n>> +\t * use the PR_NewTCPSocket() function here, which has the drawback that it\n>> +\t * can only create IPv4 sockets. Instead use PR_OpenTCPSocket() which\n>> +\t * copes with IPv6 as well.\n>> +\t */\n>> +\tmodel = PR_OpenTCPSocket(port->laddr.addr.ss_family);\n>> +\tif (!model)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to open socket\")));\n>> +\n>> +\t/*\n>> +\t * Convert the NSPR socket to an SSL socket. Ensuring the success of this\n>> +\t * operation is critical as NSS SSL_* functions may return SECSuccess on\n>> +\t * the socket even though SSL hasn't been enabled, which introduce a risk\n>> +\t * of silent downgrades.\n>> +\t */\n>> +\tmodel = SSL_ImportFD(NULL, model);\n>> +\tif (!model)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to enable TLS on socket\")));\n> \n> It's confusing that these functions do not actually reference the socket\n> via some handle :(. What does opening a socket do here?\n\nThis specific call converts the socket from a plain NSPR socket to an SSL/TLS\ncapable socket which NSS will work with. 
This is a required step for\n\"activating\" NSS on the socket.\n\n>> +\t/*\n>> +\t * Configure the allowed cipher. If there are no user preferred suites,\n> \n> *ciphers?\n\nYes, fixed.\n\n>> +\n>> +\tport->pr_fd = SSL_ImportFD(model, pr_fd);\n>> +\tif (!port->pr_fd)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to initialize\")));\n>> +\n>> +\tPR_Close(model);\n> \n> A comment explaining why we first import a NULL into the model, and then\n> release the model, and import the real fd would be good.\n\nI've added a small comment to explain how the model is a configuration template\nfor the actual socket. This part of NSS/NSPR is a bit overcomplicated for how\nwe have connections, it's more geared towards having many open sockets in the\nsame process.\n\n>> +ssize_t\n>> +be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)\n>> +{\n>> +\tssize_t\t\tn_read;\n>> +\tPRErrorCode err;\n>> +\n>> +\tn_read = PR_Read(port->pr_fd, ptr, len);\n>> +\n>> +\tif (n_read < 0)\n>> +\t{\n>> +\t\terr = PR_GetError();\n> \n> Yay, more thread global state :(.\n\nSorry about that.\n\n>> +\t\t/* XXX: This logic seems potentially bogus? */\n>> +\t\tif (err == PR_WOULD_BLOCK_ERROR)\n>> +\t\t\t*waitfor = WL_SOCKET_READABLE;\n>> +\t\telse\n>> +\t\t\t*waitfor = WL_SOCKET_WRITEABLE;\n> \n> Don't we need to handle failed connections somewhere here? secure_read()\n> won't know about PR_GetError() etc? How would SSL errors be signalled\n> upwards here?\n> \n> Also, as you XXX, it's not clear to me that your mapping would always\n> result in waiting for the right event? A tls write could e.g. very well\n> require receiving data etc?\n\nFixed, but there might be more to be done here.\n\n>> +\t/*\n>> +\t * At least one byte with password content was returned, and NSS requires\n>> +\t * that we return it allocated in NSS controlled memory. 
If we fail to\n>> +\t * allocate then abort without passing back NULL and bubble up the error\n>> +\t * on the PG side.\n>> +\t */\n>> +\tpassword = (char *) PR_Malloc(len + 1);\n>> +\tif (!password)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n>> +\t\t\t\t errmsg(\"out of memory\")));\n>> \n>> +\tstrlcpy(password, buf, sizeof(password));\n>> +\texplicit_bzero(buf, sizeof(buf));\n>> +\n> \n> In case of error you're not bzero'ing out the password!\n\nFixed.\n\n> Separately, I wonder if we should introduce a function for throwing OOM\n> errors - which then e.g. could print the memory context stats in those\n> places too...\n\n+1. I'd be happy to review such a patch.\n\n>> +static SECStatus\n>> +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\n>> +{\n>> +\tSECStatus\tstatus;\n>> +\tPort\t *port = (Port *) arg;\n>> +\tCERTCertificate *cert;\n>> +\tchar\t *peer_cn;\n>> +\tint\t\t\tlen;\n>> +\n>> +\tstatus = SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\n>> +\tif (status == SECSuccess)\n>> +\t{\n>> +\t\tcert = SSL_PeerCertificate(port->pr_fd);\n>> +\t\tlen = strlen(cert->subjectName);\n>> +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\n>> +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\n>> +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\n>> +\t\telse\n>> +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\n>> +\t\tCERT_DestroyCertificate(cert);\n>> +\n>> +\t\tport->peer_cn = peer_cn;\n>> +\t\tport->peer_cert_valid = true;\n> \n> Hm. 
We either should have something similar to\n> \n> \t\t\t/*\n> \t\t\t * Reject embedded NULLs in certificate common name to prevent\n> \t\t\t * attacks like CVE-2009-4034.\n> \t\t\t */\n> \t\t\tif (len != strlen(peer_cn))\n> \t\t\t{\n> \t\t\t\tereport(COMMERROR,\n> \t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n> \t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\n> \t\t\t\tpfree(peer_cn);\n> \t\t\t\treturn -1;\n> \t\t\t}\n> here, or a comment explaining why not.\n\nWe should, but it's proving rather difficult as there is no equivalent API call\nto get the string as well as the expected length of it.\n\n> Also, what's up with the CN= bit? Why is that needed here, but not for\n> openssl?\n\nOpenSSL returns only the value portion, whereas NSS returns key=value so we\nneed to skip over the key= part.\n\n>> +static PRFileDesc *\n>> +init_iolayer(Port *port, int loglevel)\n>> +{\n>> +\tconst\t\tPRIOMethods *default_methods;\n>> +\tPRFileDesc *layer;\n>> +\n>> +\t/*\n>> +\t * Start by initializing our layer with all the default methods so that we\n>> +\t * can selectively override the ones we want while still ensuring that we\n>> +\t * have a complete layer specification.\n>> +\t */\n>> +\tdefault_methods = PR_GetDefaultIOMethods();\n>> +\tmemcpy(&pr_iomethods, default_methods, sizeof(PRIOMethods));\n>> +\n>> +\tpr_iomethods.recv = pg_ssl_read;\n>> +\tpr_iomethods.send = pg_ssl_write;\n>> +\n>> +\t/*\n>> +\t * Each IO layer must be identified by a unique name, where uniqueness is\n>> +\t * per connection. Each connection in a postgres cluster can generate the\n>> +\t * identity from the same string as they will create their IO layers on\n>> +\t * different sockets. 
Only one layer per socket can have the same name.\n>> +\t */\n>> +\tpr_id = PR_GetUniqueIdentity(\"PostgreSQL\");\n> \n> Seems like it might not be a bad idea to append Server or something?\n\nFixed.\n\n>> +\t/*\n>> +\t * Create the actual IO layer as a stub such that it can be pushed onto\n>> +\t * the layer stack. The step via a stub is required as we define custom\n>> +\t * callbacks.\n>> +\t */\n>> +\tlayer = PR_CreateIOLayerStub(pr_id, &pr_iomethods);\n>> +\tif (!layer)\n>> +\t{\n>> +\t\tereport(loglevel,\n>> +\t\t\t\t(errmsg(\"unable to create NSS I/O layer\")));\n>> +\t\treturn NULL;\n>> +\t}\n> \n> Why is this accepting a variable log level? The only caller passes ERROR?\n\nGood catch, that's a leftover from a previous version which no longer makes\nsense. loglevel param removed.\n\n>> +/*\n>> + * pg_SSLerrmessage\n>> + *\t\tCreate and return a human readable error message given\n>> + *\t\tthe specified error code\n>> + *\n>> + * PR_ErrorToName only converts the enum identifier of the error to string,\n>> + * but that can be quite useful for debugging (and in case PR_ErrorToString is\n>> + * unable to render a message then we at least have something).\n>> + */\n>> +static char *\n>> +pg_SSLerrmessage(PRErrorCode errcode)\n>> +{\n>> +\tchar\t\terror[128];\n>> +\tint\t\t\tret;\n>> +\n>> +\t/* TODO: this should perhaps use a StringInfo instead.. */\n>> +\tret = pg_snprintf(error, sizeof(error), \"%s (%s)\",\n>> +\t\t\t\t\t PR_ErrorToString(errcode, PR_LANGUAGE_I_DEFAULT),\n>> +\t\t\t\t\t PR_ErrorToName(errcode));\n>> +\tif (ret)\n>> +\t\treturn pstrdup(error);\n> \n>> +\treturn pstrdup(_(\"unknown TLS error\"));\n>> +}\n> \n> Why not use psrintf() here?\n\nThats a good question to which I don't have a good answer. 
Changed to doing\njust that.\n\n>> +++ b/src/include/common/pg_nss.h\n>> @@ -0,0 +1,141 @@\n>> +/*-------------------------------------------------------------------------\n>> + *\n>> + * pg_nss.h\n>> + *\t Support for NSS as a TLS backend\n>> + *\n>> + * These definitions are used by both frontend and backend code.\n>> + *\n>> + * Copyright (c) 2020, PostgreSQL Global Development Group\n>> + *\n>> + * IDENTIFICATION\n>> + * src/include/common/pg_nss.h\n>> + *\n>> + *-------------------------------------------------------------------------\n>> + */\n>> +#ifndef PG_NSS_H\n>> +#define PG_NSS_H\n>> +\n>> +#ifdef USE_NSS\n>> +\n>> +#include <sslproto.h>\n>> +\n>> +PRUint16\tpg_find_cipher(char *name);\n>> +\n>> +typedef struct\n>> +{\n>> +\tconst char *name;\n>> +\tPRUint16\tnumber;\n>> +}\t\t\tNSSCiphers;\n>> +\n>> +#define INVALID_CIPHER\t0xFFFF\n>> +\n>> +/*\n>> + * This list is a partial copy of the ciphers in NSS files lib/ssl/sslproto.h\n>> + * in order to provide a human readable version of the ciphers. It would be\n>> + * nice to not have to have this, but NSS doesn't provide any API addressing\n>> + * the ciphers by name. TODO: do we want more of the ciphers, or perhaps less?\n>> + */\n>> +static const NSSCiphers NSS_CipherList[] = {\n>> +\n>> +\t{\"TLS_NULL_WITH_NULL_NULL\", TLS_NULL_WITH_NULL_NULL},\n> \n> Hm. 
Is this whole business of defining array constants in a header just\n> done to avoid having a .c file that needs to be compiled both in\n> frontend and backend code?\n\nThat was the original motivation, but I guess I should just bite the bullet and\nmake it a .c compiled in both frontend and backend?\n\n>> +/*\n>> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n>> + * colliding definitions from ours, causing a much expected compiler error.\n>> + * The definitions are however not actually used in NSPR at all, and are only\n>> + * intended for what seems to be backwards compatibility for apps written\n>> + * against old versions of NSPR. The following comment is in the referenced\n>> + * file, and was added in 1998:\n>> + *\n>> + *\t\tThis section typedefs the old 'native' types to the new PR<type>s.\n>> + *\t\tThese definitions are scheduled to be eliminated at the earliest\n>> + *\t\tpossible time. The NSPR API is implemented and documented using\n>> + *\t\tthe new definitions.\n>> + *\n>> + * As there is no opt-out from pulling in these typedefs, we define the guard\n>> + * for the file to exclude it. This is incredibly ugly, but seems to be about\n>> + * the only way around it.\n>> + */\n> \n> There's a lot of duplicated comments here. Could we move either of the\n> files to reference the other for longer ones?\n\nI took a stab at this in the attached version. The code is perhaps over-\ncommented in parts but I tried to encode my understanding of NSS into the\ncomments where documentation is lacking, since I assume I'm not the only one\nwho is new to NSS. There might be a need to pare back to keep it focused in\ncase this patch goes further.\n\n>> +/*\n>> + * PR_ImportTCPSocket() is a private API, but very widely used, as it's the\n>> + * only way to make NSS use an already set up POSIX file descriptor rather\n>> + * than opening one itself. 
To quote the NSS documentation:\n>> + *\n>> + *\t\t\"In theory, code that uses PR_ImportTCPSocket may break when NSPR's\n>> + *\t\timplementation changes. In practice, this is unlikely to happen because\n>> + *\t\tNSPR's implementation has been stable for years and because of NSPR's\n>> + *\t\tstrong commitment to backward compatibility.\"\n>> + *\n>> + * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket\n>> + *\n>> + * The function is declared in <private/pprio.h>, but as it is a header marked\n>> + * private we declare it here rather than including it.\n>> + */\n>> +NSPR_API(PRFileDesc *) PR_ImportTCPSocket(int);\n> \n> Ugh. This is really the way to do this? How do other applications deal\n> with this problem?\n\nThey either #include <private/pprio.h> or they do it like this (or vendor NSPR\nwhich makes calling private APIs less problematic). It sure is ugly, but there\nis no alternative to using this function.\n\n>> +#if defined(WIN32)\n>> +static const char *ca_trust_name = \"nssckbi.dll\";\n>> +#elif defined(__darwin__)\n>> +static const char *ca_trust_name = \"libnssckbi.dylib\";\n>> +#else\n>> +static const char *ca_trust_name = \"libnssckbi.so\";\n>> +#endif\n> \n> There's really no pre-existing handling for this in nss???\n\nNSS_Init does have more or less the above logic (see snippet below), but only\nwhen there is a cert database defined.\n\n /*\n * The following code is an attempt to automagically find the external root\n * module.\n * Note: Keep the #if-defined chunks in order. HPUX must select before UNIX.\n */\n\n static const char *dllname =\n #if defined(XP_WIN32) || defined(XP_OS2)\n \"nssckbi.dll\";\n #elif defined(HPUX) && !defined(__ia64) /* HP-UX PA-RISC */\n \"libnssckbi.sl\";\n #elif defined(DARWIN)\n \"libnssckbi.dylib\";\n #elif defined(XP_UNIX) || defined(XP_BEOS)\n \"libnssckbi.so\"; \n #else\n #error \"Uh! Oh! 
I don't know about this platform.\"\n #endif\n\nIn the NSS_INIT_NOCERTDB case there is no such handling of the libname provided\nby NSS so we need to do that ourselves.\n\n>> +\t/*\n>> +\t * The original design of NSS was for a single application to use a single\n>> +\t * copy of it, initialized with NSS_Initialize() which isn't returning any\n>> +\t * handle with which to refer to NSS. NSS initialization and shutdown are\n>> +\t * global for the application, so a shutdown in another NSS enabled\n>> +\t * library would cause NSS to be stopped for libpq as well. The fix has\n>> +\t * been to introduce NSS_InitContext which returns a context handle to\n>> +\t * pass to NSS_ShutdownContext. NSS_InitContext was introduced in NSS\n>> +\t * 3.12, but the use of it is not very well documented.\n>> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=738456\n>> +\t *\n>> +\t * The InitParameters struct passed can be used to override internal\n>> +\t * values in NSS, but the usage is not documented at all. When using\n>> +\t * NSS_Init initializations, the values are instead set via PK11_Configure\n>> +\t * calls so the PK11_Configure documentation can be used to glean some\n>> +\t * details on these.\n>> +\t *\n>> +\t * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/PKCS11/Module_Specs\n> \n>> +\n>> +\tif (!nss_context)\n>> +\t{\n>> +\t\tchar\t *err = pg_SSLerrmessage(PR_GetError());\n>> +\n>> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t libpq_gettext(\"unable to %s certificate database: %s\"),\n>> +\t\t\t\t\t\t conn->cert_database ? \"open\" : \"create\",\n>> +\t\t\t\t\t\t err);\n>> +\t\tfree(err);\n>> +\t\treturn PGRES_POLLING_FAILED;\n>> +\t}\n>> +\n>> +\t/*\n>> +\t * Configure cipher policy.\n>> +\t */\n>> +\tstatus = NSS_SetDomesticPolicy();\n> \n> Why is \"domestic\" the right thing here?\n\nHistorically there are three cipher policies in NSS: Domestic, Export and\nFrance. 
These would enable a set of ciphers based on US export restrictions\n(domestic/export) or French import restrictions. All ciphers would start\ndisabled and then the ciphers belonging to the chosen set would be enabled.\nLong ago, that was however removed and they now all get enabled by calling\neither of these three functions. NSS_SetDomesticPolicy enables all implemented\nciphers, and the other calls just call NSS_SetDomesticPolicy; I guess that API\nwas kept for backwards compatibility. The below bugzilla entry has a bit more\ninformation on this:\n\n https://bugzilla.mozilla.org/show_bug.cgi?id=848384\n\nThat being said, the comment in the code did not reflect that, so I've reworded\nit hoping it will be clearer now.\n\n>> +\n>> +\tPK11_SetPasswordFunc(PQssl_passwd_cb);\n> \n> Is it actually OK to do stuff like this when other users of NSS might be\n> present? That's obviously more likely in the libpq case, compared to the\n> backend case (where it's also possible, of course). What prevents us\n> from overriding another user's callback?\n\nThe password callback pointer is stored in a static variable in NSS (in the\nfile lib/pk11wrap/pk11auth.c).\n\n>> +ssize_t\n>> +pgtls_read(PGconn *conn, void *ptr, size_t len)\n>> +{\n>> +\tPRInt32\t\tnread;\n>> +\tPRErrorCode status;\n>> +\tint\t\t\tread_errno = 0;\n>> +\n>> +\tnread = PR_Recv(conn->pr_fd, ptr, len, 0, PR_INTERVAL_NO_WAIT);\n>> +\n>> +\t/*\n>> +\t * PR_Recv blocks until there is data to read or the timeout expires. Zero\n>> +\t * is returned for closed connections, while -1 indicates an error within\n>> +\t * the ongoing connection.\n>> +\t */\n>> +\tif (nread == 0)\n>> +\t{\n>> +\t\tread_errno = ECONNRESET;\n>> +\t\treturn -1;\n>> +\t}\n> \n> It's a bit confusing to talk about blocking when the socket presumably\n> is in non-blocking mode, and you're also asking to never wait?\n\nFair enough, I can agree that the wording isn't spot on. 
The socket is\nnon-blocking while PR_Recv can block (which is what we ask it not to). I've\nreworded and moved the comment around to hopefully make it clearer.\n\n>> +\tif (nread == -1)\n>> +\t{\n>> +\t\tstatus = PR_GetError();\n>> +\n>> +\t\tswitch (status)\n>> +\t\t{\n>> +\t\t\tcase PR_WOULD_BLOCK_ERROR:\n>> +\t\t\t\tread_errno = EINTR;\n>> +\t\t\t\tbreak;\n> \n> Uh, isn't this going to cause a busy-loop by the caller? EINTR isn't the\n> same as EAGAIN/EWOULDBLOCK?\n\nRight, that's clearly not right.\n\n>> +\t\t\tcase PR_IO_TIMEOUT_ERROR:\n>> +\t\t\t\tbreak;\n> \n> What does this mean? We'll return with a 0 errno here, right? When is\n> this case reachable?\n\nIt should, AFAICT, only be reachable when PR_Recv is used with a timeout which\nwe don't do. It mentioned somewhere that it had happened in no-wait calls due\nto a bug, but I fail to find that reference now. Either way, I've removed it\nto fall into the default error handling which now sets errno correctly as that\nwas a paddle short here.\n\n> E.g. the comment in fe-misc.c:\n> \t\t\t\t/* pqsecure_read set the error message for us */\n> for this case doesn't seem to be fulfilled by this.\n\nFixed, I hope.\n\n>> +/*\n>> + *\tVerify that the server certificate matches the hostname we connected to.\n>> + *\n>> + * The certificate's Common Name and Subject Alternative Names are considered.\n>> + */\n>> +int\n>> +pgtls_verify_peer_name_matches_certificate_guts(PGconn *conn,\n>> +\t\t\t\t\t\t\t\t\t\t\t\tint *names_examined,\n>> +\t\t\t\t\t\t\t\t\t\t\t\tchar **first_name)\n>> +{\n>> +\treturn 1;\n>> +}\n> \n> Uh, huh? Certainly doesn't verify anything...\n\nDoh, the verification was done as part of the cert validation callback and I\nhad missed moving it to the stub. 
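On the errno handling a bit further up, here is a self-contained sketch of the mapping a non-blocking read wants to end up with. The FAKE_PR_* constants are invented stand-ins for this example, not NSPR's real PRErrorCode values; only the shape of the mapping is the point.

```c
/*
 * Sketch of the non-blocking read result mapping discussed above.  The
 * FAKE_PR_* constants are made up for the illustration, not NSPR's.
 */
#include <errno.h>

enum fake_pr_error
{
	FAKE_PR_WOULD_BLOCK_ERROR,
	FAKE_PR_CONNECT_RESET_ERROR,
	FAKE_PR_OTHER_ERROR
};

/*
 * nread is the raw result of the non-blocking read, err the TLS library
 * error code when nread is negative.  Returns what a pgtls_read()-style
 * function would return and always leaves *read_errno well defined.
 */
static int
map_tls_read_result(int nread, enum fake_pr_error err, int *read_errno)
{
	if (nread > 0)
	{
		*read_errno = 0;
		return nread;			/* plain successful read */
	}
	if (nread == 0)
	{
		/* orderly close from the peer: report as a reset connection */
		*read_errno = ECONNRESET;
		return -1;
	}

	switch (err)
	{
		case FAKE_PR_WOULD_BLOCK_ERROR:
			/* must be EWOULDBLOCK, not EINTR, or the caller busy-loops */
			*read_errno = EWOULDBLOCK;
			break;
		case FAKE_PR_CONNECT_RESET_ERROR:
			*read_errno = ECONNRESET;
			break;
		default:
			/* never leave errno unset on the failure path */
			*read_errno = EIO;
			break;
	}
	return -1;
}
```

The key property is that every failure path sets errno, and a would-block condition never surfaces as EINTR.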
Fixed and also expanded to closer match how\nit's done in the OpenSSL implementation.\n\n>> +/* ------------------------------------------------------------ */\n>> +/*\t\t\tPostgreSQL specific TLS support functions\t\t\t*/\n>> +/* ------------------------------------------------------------ */\n>> +\n>> +/*\n>> + * TODO: this a 99% copy of the same function in the backend, make these share\n>> + * a single implementation instead.\n>> + */\n>> +static char *\n>> +pg_SSLerrmessage(PRErrorCode errcode)\n>> +{\n>> +\tconst char *error;\n>> +\n>> +\terror = PR_ErrorToName(errcode);\n>> +\tif (error)\n>> +\t\treturn strdup(error);\n>> +\n>> +\treturn strdup(\"unknown TLS error\");\n>> +}\n> \n> Btw, why does this need to duplicate strings, instead of returning a\n> const char*?\n\nNo, it doesn't, and no longer does.\n\nThe attached includes fixes for the above mentioned issues (and a few small\nother ones I stumbled across), hopefully without introducing too many new. As\nmentioned, I'll perform the split into multiple patches in a separate version\nwhich only performs a split to make it easier to diff the individual patchfile\nversions.\n\ncheers ./daniel", "msg_date": "Tue, 27 Oct 2020 21:07:01 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 27/10/2020 22:07, Daniel Gustafsson wrote:\n> /*\n> * Track whether the NSS database has a password set or not. There is no API\n> * function for retrieving password status, so we simply flip this to true in\n> * case NSS invoked the password callback - as that will only happen in case\n> * there is a password. The reason for tracking this is that there are calls\n> * which require a password parameter, but doesn't use the callbacks provided,\n> * so we must call the callback on behalf of these.\n> */\n> static bool has_password = false;\n\nThis is set in PQssl_passwd_cb function, but never reset. That seems \nwrong. 
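A toy pure-C model of that flag lifecycle, with names invented for the illustration rather than taken from the patch:

```c
/*
 * Toy model of the hazard: a process-global flag set from the password
 * callback is never cleared, so its value leaks from a password-protected
 * database into a later, password-less connection.  Per-connection state
 * does not have that problem.  Names here are invented for the example.
 */
#include <stdbool.h>

static bool global_has_password = false;	/* process-global, as described */

struct fake_conn
{
	bool		has_password;	/* per-connection alternative */
};

/*
 * Simulate setting up one connection; cb_fires says whether the password
 * callback would run for this connection's certificate database.
 */
static void
open_fake_conn(struct fake_conn *conn, bool cb_fires)
{
	conn->has_password = false;
	if (cb_fires)
	{
		global_has_password = true; /* what the callback-side tracking does */
		conn->has_password = true;
	}
}
```

After one protected and one unprotected connection, the global flag is stale for the second one while the per-connection flag stays correct.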
The NSS database used in one connection might have a password, \nwhile another one might not. Or have I completely misunderstood this?\n\n- Heikki\n\n\n", "msg_date": "Tue, 27 Oct 2020 22:18:29 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2020-10-27 21:07:01 +0100, Daniel Gustafsson wrote:\n> > On 2020-10-20 14:24:24 +0200, Daniel Gustafsson wrote:\n> >> From 0cb0e6a0ce9adb18bc9d212bd03e4e09fa452972 Mon Sep 17 00:00:00 2001\n> >> From: Daniel Gustafsson <daniel@yesql.se>\n> >> Date: Thu, 8 Oct 2020 18:44:28 +0200\n> >> Subject: [PATCH] Support for NSS as a TLS backend v12\n> >> ---\n> >> configure | 223 +++-\n> >> configure.ac | 39 +-\n> >> contrib/Makefile | 2 +-\n> >> contrib/pgcrypto/Makefile | 5 +\n> >> contrib/pgcrypto/nss.c | 773 +++++++++++\n> >> contrib/pgcrypto/openssl.c | 2 +-\n> >> contrib/pgcrypto/px.c | 1 +\n> >> contrib/pgcrypto/px.h | 1 +\n> > \n> > Personally I'd like to see this patch broken up a bit - it's quite\n> > large. Several of the changes could easily be committed separately, no?\n> \n> Not sure how much of this makes sense committed separately (unless separately\n> means in quick succession), but it could certainly be broken up for the sake of\n> making review easier.\n\nCommitting e.g. the pgcrypto pieces separately from the backend code\nseems unproblematic. But yes, I would expect them to go in close to each\nother. I'm mainly concerned with smaller review-able units.\n\nHave you done testing to ensure that NSS PG cooperates correctly with\nopenssl PG? Is there a way we can make that easier to do? E.g. 
allowing\nto build frontend with NSS and backend with openssl and vice versa?\n\n\n> >> if test \"$with_openssl\" = yes ; then\n> >> + if test x\"$with_nss\" = x\"yes\" ; then\n> >> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n> >> + fi\n> > \n> > Based on a quick look there's no similar error check for the msvc\n> > build. Should there be?\n> \n> Thats a good question. When embarking on this is seemed quite natural to me\n> that it should be, but now I'm not so sure. Maybe there should be a\n> --with-openssl-preferred like how we handle readline/libedit or just allow\n> multiple and let the last one win? Do you have any input on what would make\n> sense?\n>\n> The only thing I think makes no sense is to allow multiple ones at the same\n> time given the current autoconf switches, even if it would just be to pick say\n> pg_strong_random from one and libpq TLS from another.\n\nMaybe we should just have --with-ssl={openssl,nss}? That'd avoid needing\nto check for errors.\n\nEven better, of course, would be to allow switching of the SSL backend\nbased on config options (PGC_POSTMASTER GUC for backend, connection\nstring for frontend). Mainly because that would make testing of\ninteroperability so much easier. Obviously still a few places like\npgcrypto, randomness, etc, where only a compile time decision seems to\nmake sense.\n\n\n> >> + CLEANLDFLAGS=\"$LDFLAGS\"\n> >> + # TODO: document this set of LDFLAGS\n> >> + LDFLAGS=\"-lssl3 -lsmime3 -lnss3 -lplds4 -lplc4 -lnspr4 $LDFLAGS\"\n> > \n> > Shouldn't this use nss-config or such?\n> \n> Indeed it should, where available. 
I've added rudimentary support for that\n> without a fallback as of now.\n\nWhen would we need a fallback?\n\n\n> > I think it'd also be better if we could include these files as nss/ssl.h\n> > etc - ssl.h is a name way too likely to conflict imo.\n> \n> I've changed this to be nss/ssl.h and nspr/nspr.h etc, but the include path\n> will still need the direct path to the headers (from autoconf) since nss.h\n> includes NSPR headers as #include <nspr.h> and so on.\n\nHm. Then it's probably not worth going there...\n\n\n> >> +static SECStatus\n> >> +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\n> >> +{\n> >> +\tSECStatus\tstatus;\n> >> +\tPort\t *port = (Port *) arg;\n> >> +\tCERTCertificate *cert;\n> >> +\tchar\t *peer_cn;\n> >> +\tint\t\t\tlen;\n> >> +\n> >> +\tstatus = SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\n> >> +\tif (status == SECSuccess)\n> >> +\t{\n> >> +\t\tcert = SSL_PeerCertificate(port->pr_fd);\n> >> +\t\tlen = strlen(cert->subjectName);\n> >> +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\n> >> +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\n> >> +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\n> >> +\t\telse\n> >> +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\n> >> +\t\tCERT_DestroyCertificate(cert);\n> >> +\n> >> +\t\tport->peer_cn = peer_cn;\n> >> +\t\tport->peer_cert_valid = true;\n> > \n> > Hm. 
We either should have something similar to\n> > \n> > \t\t\t/*\n> > \t\t\t * Reject embedded NULLs in certificate common name to prevent\n> > \t\t\t * attacks like CVE-2009-4034.\n> > \t\t\t */\n> > \t\t\tif (len != strlen(peer_cn))\n> > \t\t\t{\n> > \t\t\t\tereport(COMMERROR,\n> > \t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > \t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\n> > \t\t\t\tpfree(peer_cn);\n> > \t\t\t\treturn -1;\n> > \t\t\t}\n> > here, or a comment explaining why not.\n> \n> We should, but it's proving rather difficult as there is no equivalent API call\n> to get the string as well as the expected length of it.\n\nHm. Should at least have a test to ensure that's not a problem then. I\nhope/assume NSS rejects this somewhere internally...\n\n\n> > Also, what's up with the CN= bit? Why is that needed here, but not for\n> > openssl?\n> \n> OpenSSL returns only the value portion, whereas NSS returns key=value so we\n> need to skip over the key= part.\n\nWhy is it a conditional path though?\n\n\n\n\n> >> +/*\n> >> + * PR_ImportTCPSocket() is a private API, but very widely used, as it's the\n> >> + * only way to make NSS use an already set up POSIX file descriptor rather\n> >> + * than opening one itself. To quote the NSS documentation:\n> >> + *\n> >> + *\t\t\"In theory, code that uses PR_ImportTCPSocket may break when NSPR's\n> >> + *\t\timplementation changes. In practice, this is unlikely to happen because\n> >> + *\t\tNSPR's implementation has been stable for years and because of NSPR's\n> >> + *\t\tstrong commitment to backward compatibility.\"\n> >> + *\n> >> + * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket\n> >> + *\n> >> + * The function is declared in <private/pprio.h>, but as it is a header marked\n> >> + * private we declare it here rather than including it.\n> >> + */\n> >> +NSPR_API(PRFileDesc *) PR_ImportTCPSocket(int);\n> > \n> > Ugh. 
This is really the way to do this? How do other applications deal\n> > with this problem?\n> \n> They either #include <private/pprio.h> or they do it like this (or vendor NSPR\n> which makes calling private APIs less problematic). It sure is ugly, but there\n> is no alternative to using this function.\n\nHm - in debian unstable's NSS this function appears to be in nss/ssl.h,\nnot pprio.h:\n\n/*\n** Imports fd into SSL, returning a new socket. Copies SSL configuration\n** from model.\n*/\nSSL_IMPORT PRFileDesc *SSL_ImportFD(PRFileDesc *model, PRFileDesc *fd);\n\nand ssl.h starts with:\n/*\n * This file contains prototypes for the public SSL functions.\n\n\n> >> +\n> >> +\tPK11_SetPasswordFunc(PQssl_passwd_cb);\n> > \n> > Is it actually OK to do stuff like this when other users of NSS might be\n> > present? That's obviously more likely in the libpq case, compared to the\n> > backend case (where it's also possible, of course). What prevents us\n> > from overriding another user's callback?\n> \n> The password callback pointer is stored in a static variable in NSS (in the\n> file lib/pk11wrap/pk11auth.c).\n\nBut, uh, how is that not a problem? What happens if a backend imports\nlibpq? What if plpython imports curl which then also uses nss?\n\n\n> +\t/*\n> +\t * Finally we must configure the socket for being a server by setting the\n> +\t * certificate and key.\n> +\t */\n> +\tstatus = SSL_ConfigSecureServer(model, server_cert, private_key, kt_rsa);\n> +\tif (status != SECSuccess)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to configure secure server: %s\",\n> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n> +\tstatus = SSL_ConfigServerCert(model, server_cert, private_key, NULL, 0);\n> +\tif (status != SECSuccess)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errmsg(\"unable to configure server for TLS server connections: %s\",\n> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n\nWhy do both of these need to get called? 
The NSS docs say:\n\n/*\n** Deprecated variant of SSL_ConfigServerCert.\n**\n...\nSSL_IMPORT SECStatus SSL_ConfigSecureServer(\n PRFileDesc *fd, CERTCertificate *cert,\n SECKEYPrivateKey *key, SSLKEAType kea);\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 27 Oct 2020 23:39:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": ">>> Personally I'd like to see this patch broken up a bit - it's quite\n>>> large. Several of the changes could easily be committed separately, no?\n>> \n>> Not sure how much of this makes sense committed separately (unless separately\n>> means in quick succession), but it could certainly be broken up for the sake of\n>> making review easier.\n> \n> Committing e.g. the pgcrypto pieces separately from the backend code\n> seems unproblematic. But yes, I would expect them to go in close to each\n> other. I'm mainly concerned with smaller review-able units.\n\nAttached is a v14 where the logical units are separated into individual\ncommits. I hope this split makes it easier to read.\n\nThe 0006 commit were things not really related to NSS at all that can be\nsubmitted to -hackers independently of this work, but they're still there since\nthis version wasn't supposed to change anything.\n\nMost of the changes to sslinfo in 0005 are really only needed in case OpenSSL\nisn't the only TLS library, but I would argue that they should be considered\nregardless. There we are still accessing the ->ssl member directly and passing\nit to OpenSSL rather than using the be_tls_* API that we have. 
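As a rough sketch of the kind of seam that buys us (hypothetical types, not PostgreSQL's actual definitions):

```c
/*
 * Hypothetical sketch: routing all sslinfo lookups through a small
 * accessor interface keeps the extension ignorant of whether an OpenSSL
 * SSL* or an NSS PRFileDesc* sits behind the connection state.
 */
struct tls_backend
{
	const char *(*get_version) (void *conn_state);
	const char *(*get_cipher) (void *conn_state);
};

/* toy OpenSSL-flavoured backend returning canned values */
static const char *
openssl_version(void *conn_state)
{
	(void) conn_state;
	return "TLSv1.3";
}

static const char *
openssl_cipher(void *conn_state)
{
	(void) conn_state;
	return "TLS_AES_256_GCM_SHA384";
}

static const struct tls_backend openssl_backend = {openssl_version, openssl_cipher};

/* an NSS backend would slot in behind the same interface */

/* sslinfo-style consumers never dereference the raw handle themselves */
static const char *
ssl_version(const struct tls_backend *be, void *conn_state)
{
	return be->get_version(conn_state);
}

static const char *
ssl_cipher(const struct tls_backend *be, void *conn_state)
{
	return be->get_cipher(conn_state);
}
```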
I can extract\nthat portion as a separate patch submission unless there are objections.\n\ncheers ./daniel", "msg_date": "Wed, 28 Oct 2020 11:56:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 28 Oct 2020, at 07:39, Andres Freund <andres@anarazel.de> wrote:\n\n> Have you done testing to ensure that NSS PG cooperates correctly with\n> openssl PG? Is there a way we can make that easier to do? E.g. allowing\n> to build frontend with NSS and backend with openssl and vice versa?\n\nWhen I wrote the Secure Transport patch I had a patch against PostgresNode\nwhich allowed for overriding the server binaries like so:\n\n SSLTEST_SERVER_BIN=/path/bin/ make -C src/test/ssl/ check\n\nI've used that coupled with manual testing so far to make sure that an openssl\nclient can talk to an NSS backend and so on. Before any other backend is added\nwe clearly need *a* way of doing this, one which no doubt will need to be\nimproved upon to suit more workflows.\n\nThis is sort of the same situation as pg_upgrade, where two trees is needed to\nreally test it.\n\nI can clean that patch up and post as a starting point for discussions.\n\n>>>> if test \"$with_openssl\" = yes ; then\n>>>> + if test x\"$with_nss\" = x\"yes\" ; then\n>>>> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n>>>> + fi\n>>> \n>>> Based on a quick look there's no similar error check for the msvc\n>>> build. Should there be?\n>> \n>> Thats a good question. When embarking on this is seemed quite natural to me\n>> that it should be, but now I'm not so sure. Maybe there should be a\n>> --with-openssl-preferred like how we handle readline/libedit or just allow\n>> multiple and let the last one win? 
Do you have any input on what would make\n>> sense?\n>> \n>> The only thing I think makes no sense is to allow multiple ones at the same\n>> time given the current autoconf switches, even if it would just be to pick say\n>> pg_strong_random from one and libpq TLS from another.\n> \n> Maybe we should just have --with-ssl={openssl,nss}? That'd avoid needing\n> to check for errors.\n\nThats another option, with --with-openssl being an alias for --with-ssl=openssl.\n\nAfter another round of thinking I like this even better as it makes the build\ninfra cleaner, so the attached patch has this implemented.\n\n> Even better, of course, would be to allow switching of the SSL backend\n> based on config options (PGC_POSTMASTER GUC for backend, connection\n> string for frontend). Mainly because that would make testing of\n> interoperability so much easier. Obviously still a few places like\n> pgcrypto, randomness, etc, where only a compile time decision seems to\n> make sense.\n\nIt would make testing easier, but the expense seems potentially rather high.\nHow would a GUC switch be allowed to operate, would we have mixed backends or\nwould be require all openssl connectins to be dropped before serving nss ones?\n\n>>>> + CLEANLDFLAGS=\"$LDFLAGS\"\n>>>> + # TODO: document this set of LDFLAGS\n>>>> + LDFLAGS=\"-lssl3 -lsmime3 -lnss3 -lplds4 -lplc4 -lnspr4 $LDFLAGS\"\n>>> \n>>> Shouldn't this use nss-config or such?\n>> \n>> Indeed it should, where available. I've added rudimentary support for that\n>> without a fallback as of now.\n> \n> When would we need a fallback?\n\nOne one of my boxes I have NSS/NSPR installed via homebrew and they don't ship\nan nss-config AFAICT. 
I wouldn't be surprised if there are other cases.\n\n>>> I think it'd also be better if we could include these files as nss/ssl.h\n>>> etc - ssl.h is a name way too likely to conflict imo.\n>> \n>> I've changed this to be nss/ssl.h and nspr/nspr.h etc, but the include path\n>> will still need the direct path to the headers (from autoconf) since nss.h\n>> includes NSPR headers as #include <nspr.h> and so on.\n> \n> Hm. Then it's probably not worth going there...\n\nIt does however make visual parsing of the source files easer since it's clear\nwhich ssl.h is being referred to. I'm in favor of keeping it.\n\n>>>> +static SECStatus\n>>>> +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\n>>>> +{\n>>>> +\tSECStatus\tstatus;\n>>>> +\tPort\t *port = (Port *) arg;\n>>>> +\tCERTCertificate *cert;\n>>>> +\tchar\t *peer_cn;\n>>>> +\tint\t\t\tlen;\n>>>> +\n>>>> +\tstatus = SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\n>>>> +\tif (status == SECSuccess)\n>>>> +\t{\n>>>> +\t\tcert = SSL_PeerCertificate(port->pr_fd);\n>>>> +\t\tlen = strlen(cert->subjectName);\n>>>> +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\n>>>> +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\n>>>> +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\n>>>> +\t\telse\n>>>> +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\n>>>> +\t\tCERT_DestroyCertificate(cert);\n>>>> +\n>>>> +\t\tport->peer_cn = peer_cn;\n>>>> +\t\tport->peer_cert_valid = true;\n>>> \n>>> Hm. 
We either should have something similar to\n>>> \n>>> \t\t\t/*\n>>> \t\t\t * Reject embedded NULLs in certificate common name to prevent\n>>> \t\t\t * attacks like CVE-2009-4034.\n>>> \t\t\t */\n>>> \t\t\tif (len != strlen(peer_cn))\n>>> \t\t\t{\n>>> \t\t\t\tereport(COMMERROR,\n>>> \t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n>>> \t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\n>>> \t\t\t\tpfree(peer_cn);\n>>> \t\t\t\treturn -1;\n>>> \t\t\t}\n>>> here, or a comment explaining why not.\n>> \n>> We should, but it's proving rather difficult as there is no equivalent API call\n>> to get the string as well as the expected length of it.\n> \n> Hm. Should at least have a test to ensure that's not a problem then. I\n> hope/assume NSS rejects this somewhere internally...\n\nAgreed, I'll try to hack up a testcase.\n\n>>> Also, what's up with the CN= bit? Why is that needed here, but not for\n>>> openssl?\n>> \n>> OpenSSL returns only the value portion, whereas NSS returns key=value so we\n>> need to skip over the key= part.\n> \n> Why is it a conditional path though?\n\nIt was mostly just a belts-and-suspenders thing, I don't have any hard evidence\nthat it's been a thing in any modern NSS version so it can be removed.\n\n>>>> +/*\n>>>> + * PR_ImportTCPSocket() is a private API, but very widely used, as it's the\n>>>> + * only way to make NSS use an already set up POSIX file descriptor rather\n>>>> + * than opening one itself. To quote the NSS documentation:\n>>>> + *\n>>>> + *\t\t\"In theory, code that uses PR_ImportTCPSocket may break when NSPR's\n>>>> + *\t\timplementation changes. 
In practice, this is unlikely to happen because\n>>>> + *\t\tNSPR's implementation has been stable for years and because of NSPR's\n>>>> + *\t\tstrong commitment to backward compatibility.\"\n>>>> + *\n>>>> + * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket\n>>>> + *\n>>>> + * The function is declared in <private/pprio.h>, but as it is a header marked\n>>>> + * private we declare it here rather than including it.\n>>>> + */\n>>>> +NSPR_API(PRFileDesc *) PR_ImportTCPSocket(int);\n>>> \n>>> Ugh. This is really the way to do this? How do other applications deal\n>>> with this problem?\n>> \n>> They either #include <private/pprio.h> or they do it like this (or vendor NSPR\n>> which makes calling private APIs less problematic). It sure is ugly, but there\n>> is no alternative to using this function.\n> \n> Hm - in debian unstable's NSS this function appears to be in nss/ssl.h,\n> not pprio.h:\n> \n> /*\n> ** Imports fd into SSL, returning a new socket. Copies SSL configuration\n> ** from model.\n> */\n> SSL_IMPORT PRFileDesc *SSL_ImportFD(PRFileDesc *model, PRFileDesc *fd);\n> \n> and ssl.h starts with:\n> /*\n> * This file contains prototypes for the public SSL functions.\n\nRight, but that's Import*FD*, not Import*TCPSocket*. We use ImportFD as well\nsince it's the API for importing an NSPR socket into NSS and enabling SSL/TLS\non it. Thats been a public API for a long time. ImportTCPSocket is used to\nimport an already opened socket into NSPR, else NSPR must open the socket\nitself. That part has been kept private for reasons unknown, as it's\nincredibly useful.\n\n>>>> +\tPK11_SetPasswordFunc(PQssl_passwd_cb);\n>>> \n>>> Is it actually OK to do stuff like this when other users of NSS might be\n>>> present? That's obviously more likely in the libpq case, compared to the\n>>> backend case (where it's also possible, of course). 
What prevents us\n>>> from overriding another user's callback?\n>> \n>> The password callback pointer is stored in a static variable in NSS (in the\n>> file lib/pk11wrap/pk11auth.c).\n> \n> But, uh, how is that not a problem? What happens if a backend imports\n> libpq? What if plpython imports curl which then also uses nss?\n\nSorry, that sentence wasn't really finished. What I meant to write was that I\ndon't really have good answers here. The available implementation is via the\nstatic var, and there are no alternative APIs. I've tried googling for\ninsights but haven't come across any.\n\nThe only datapoint I have is that I can't recall there ever being a complaint\nagainst libcurl doing this exact thing. That of course doesn't mean it cannot\nhappen or cause problems.\n\n>> +\t/*\n>> +\t * Finally we must configure the socket for being a server by setting the\n>> +\t * certificate and key.\n>> +\t */\n>> +\tstatus = SSL_ConfigSecureServer(model, server_cert, private_key, kt_rsa);\n>> +\tif (status != SECSuccess)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to configure secure server: %s\",\n>> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n>> +\tstatus = SSL_ConfigServerCert(model, server_cert, private_key, NULL, 0);\n>> +\tif (status != SECSuccess)\n>> +\t\tereport(ERROR,\n>> +\t\t\t\t(errmsg(\"unable to configure server for TLS server connections: %s\",\n>> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n> \n> Why do both of these need to get called? 
The NSS docs say:\n> \n> /*\n> ** Deprecated variant of SSL_ConfigServerCert.\n> **\n> ...\n> SSL_IMPORT SECStatus SSL_ConfigSecureServer(\n> PRFileDesc *fd, CERTCertificate *cert,\n> SECKEYPrivateKey *key, SSLKEAType kea);\n\nThey don't, I had missed the deprecation warning as it's not mentioned at all\nin the online documentation:\n\nhttps://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/SSL_functions/sslfnc.html\n\n(SSL_ConfigServerCert isn't at all mentioned there, which dates it to before\nthis went in, obsoleting SSL_ConfigSecureServer.)\n\nFixed by removing the superfluous call.\n\nThanks again for reviewing!\n\ncheers ./daniel", "msg_date": "Thu, 29 Oct 2020 16:20:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "\nOn 10/29/20 11:20 AM, Daniel Gustafsson wrote:\n>> On 28 Oct 2020, at 07:39, Andres Freund <andres@anarazel.de> wrote:\n>> Have you done testing to ensure that NSS PG cooperates correctly with\n>> openssl PG? Is there a way we can make that easier to do? E.g. allowing\n>> to build frontend with NSS and backend with openssl and vice versa?\n> When I wrote the Secure Transport patch I had a patch against PostgresNode\n> which allowed for overriding the server binaries like so:\n>\n> SSLTEST_SERVER_BIN=/path/bin/ make -C src/test/ssl/ check\n>\n> I've used that coupled with manual testing so far to make sure that an openssl\n> client can talk to an NSS backend and so on. 
Before any other backend is added\n> we clearly need *a* way of doing this, one which no doubt will need to be\n> improved upon to suit more workflows.\n>\n> This is sort of the same situation as pg_upgrade, where two trees is needed to\n> really test it.\n>\n> I can clean that patch up and post as a starting point for discussions.\n>\n>>>>> if test \"$with_openssl\" = yes ; then\n>>>>> + if test x\"$with_nss\" = x\"yes\" ; then\n>>>>> + AC_MSG_ERROR([multiple SSL backends cannot be enabled simultaneously\"])\n>>>>> + fi\n>>>> Based on a quick look there's no similar error check for the msvc\n>>>> build. Should there be?\n>>> Thats a good question. When embarking on this is seemed quite natural to me\n>>> that it should be, but now I'm not so sure. Maybe there should be a\n>>> --with-openssl-preferred like how we handle readline/libedit or just allow\n>>> multiple and let the last one win? Do you have any input on what would make\n>>> sense?\n>>>\n>>> The only thing I think makes no sense is to allow multiple ones at the same\n>>> time given the current autoconf switches, even if it would just be to pick say\n>>> pg_strong_random from one and libpq TLS from another.\n>> Maybe we should just have --with-ssl={openssl,nss}? That'd avoid needing\n>> to check for errors.\n> Thats another option, with --with-openssl being an alias for --with-ssl=openssl.\n>\n> After another round of thinking I like this even better as it makes the build\n> infra cleaner, so the attached patch has this implemented.\n>\n>> Even better, of course, would be to allow switching of the SSL backend\n>> based on config options (PGC_POSTMASTER GUC for backend, connection\n>> string for frontend). Mainly because that would make testing of\n>> interoperability so much easier. 
Obviously still a few places like\n>> pgcrypto, randomness, etc, where only a compile time decision seems to\n>> make sense.\n> It would make testing easier, but the expense seems potentially rather high.\n> How would a GUC switch be allowed to operate, would we have mixed backends or\n> would be require all openssl connectins to be dropped before serving nss ones?\n>\n>>>>> + CLEANLDFLAGS=\"$LDFLAGS\"\n>>>>> + # TODO: document this set of LDFLAGS\n>>>>> + LDFLAGS=\"-lssl3 -lsmime3 -lnss3 -lplds4 -lplc4 -lnspr4 $LDFLAGS\"\n>>>> Shouldn't this use nss-config or such?\n>>> Indeed it should, where available. I've added rudimentary support for that\n>>> without a fallback as of now.\n>> When would we need a fallback?\n> One one of my boxes I have NSS/NSPR installed via homebrew and they don't ship\n> an nss-config AFAICT. I wouldn't be surprised if there are other cases.\n>\n>>>> I think it'd also be better if we could include these files as nss/ssl.h\n>>>> etc - ssl.h is a name way too likely to conflict imo.\n>>> I've changed this to be nss/ssl.h and nspr/nspr.h etc, but the include path\n>>> will still need the direct path to the headers (from autoconf) since nss.h\n>>> includes NSPR headers as #include <nspr.h> and so on.\n>> Hm. Then it's probably not worth going there...\n> It does however make visual parsing of the source files easer since it's clear\n> which ssl.h is being referred to. 
I'm in favor of keeping it.\n>\n>>>>> +static SECStatus\n>>>>> +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\n>>>>> +{\n>>>>> +\tSECStatus\tstatus;\n>>>>> +\tPort\t *port = (Port *) arg;\n>>>>> +\tCERTCertificate *cert;\n>>>>> +\tchar\t *peer_cn;\n>>>>> +\tint\t\t\tlen;\n>>>>> +\n>>>>> +\tstatus = SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\n>>>>> +\tif (status == SECSuccess)\n>>>>> +\t{\n>>>>> +\t\tcert = SSL_PeerCertificate(port->pr_fd);\n>>>>> +\t\tlen = strlen(cert->subjectName);\n>>>>> +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\n>>>>> +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\n>>>>> +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\n>>>>> +\t\telse\n>>>>> +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\n>>>>> +\t\tCERT_DestroyCertificate(cert);\n>>>>> +\n>>>>> +\t\tport->peer_cn = peer_cn;\n>>>>> +\t\tport->peer_cert_valid = true;\n>>>> Hm. We either should have something similar to\n>>>>\n>>>> \t\t\t/*\n>>>> \t\t\t * Reject embedded NULLs in certificate common name to prevent\n>>>> \t\t\t * attacks like CVE-2009-4034.\n>>>> \t\t\t */\n>>>> \t\t\tif (len != strlen(peer_cn))\n>>>> \t\t\t{\n>>>> \t\t\t\tereport(COMMERROR,\n>>>> \t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\n>>>> \t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\n>>>> \t\t\t\tpfree(peer_cn);\n>>>> \t\t\t\treturn -1;\n>>>> \t\t\t}\n>>>> here, or a comment explaining why not.\n>>> We should, but it's proving rather difficult as there is no equivalent API call\n>>> to get the string as well as the expected length of it.\n>> Hm. Should at least have a test to ensure that's not a problem then. I\n>> hope/assume NSS rejects this somewhere internally...\n> Agreed, I'll try to hack up a testcase.\n>\n>>>> Also, what's up with the CN= bit? 
Why is that needed here, but not for\n>>>> openssl?\n>>> OpenSSL returns only the value portion, whereas NSS returns key=value so we\n>>> need to skip over the key= part.\n>> Why is it a conditional path though?\n> It was mostly just a belts-and-suspenders thing, I don't have any hard evidence\n> that it's been a thing in any modern NSS version so it can be removed.\n>\n>>>>> +/*\n>>>>> + * PR_ImportTCPSocket() is a private API, but very widely used, as it's the\n>>>>> + * only way to make NSS use an already set up POSIX file descriptor rather\n>>>>> + * than opening one itself. To quote the NSS documentation:\n>>>>> + *\n>>>>> + *\t\t\"In theory, code that uses PR_ImportTCPSocket may break when NSPR's\n>>>>> + *\t\timplementation changes. In practice, this is unlikely to happen because\n>>>>> + *\t\tNSPR's implementation has been stable for years and because of NSPR's\n>>>>> + *\t\tstrong commitment to backward compatibility.\"\n>>>>> + *\n>>>>> + * https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_ImportTCPSocket\n>>>>> + *\n>>>>> + * The function is declared in <private/pprio.h>, but as it is a header marked\n>>>>> + * private we declare it here rather than including it.\n>>>>> + */\n>>>>> +NSPR_API(PRFileDesc *) PR_ImportTCPSocket(int);\n>>>> Ugh. This is really the way to do this? How do other applications deal\n>>>> with this problem?\n>>> They either #include <private/pprio.h> or they do it like this (or vendor NSPR\n>>> which makes calling private APIs less problematic). It sure is ugly, but there\n>>> is no alternative to using this function.\n>> Hm - in debian unstable's NSS this function appears to be in nss/ssl.h,\n>> not pprio.h:\n>>\n>> /*\n>> ** Imports fd into SSL, returning a new socket. 
Copies SSL configuration\n>> ** from model.\n>> */\n>> SSL_IMPORT PRFileDesc *SSL_ImportFD(PRFileDesc *model, PRFileDesc *fd);\n>>\n>> and ssl.h starts with:\n>> /*\n>> * This file contains prototypes for the public SSL functions.\n> Right, but that's Import*FD*, not Import*TCPSocket*. We use ImportFD as well\n> since it's the API for importing an NSPR socket into NSS and enabling SSL/TLS\n> on it. Thats been a public API for a long time. ImportTCPSocket is used to\n> import an already opened socket into NSPR, else NSPR must open the socket\n> itself. That part has been kept private for reasons unknown, as it's\n> incredibly useful.\n>\n>>>>> +\tPK11_SetPasswordFunc(PQssl_passwd_cb);\n>>>> Is it actually OK to do stuff like this when other users of NSS might be\n>>>> present? That's obviously more likely in the libpq case, compared to the\n>>>> backend case (where it's also possible, of course). What prevents us\n>>>> from overriding another user's callback?\n>>> The password callback pointer is stored in a static variable in NSS (in the\n>>> file lib/pk11wrap/pk11auth.c).\n>> But, uh, how is that not a problem? What happens if a backend imports\n>> libpq? What if plpython imports curl which then also uses nss?\n> Sorry, that sentence wasn't really finished. What I meant to write was that I\n> don't really have good answers here. The available implementation is via the\n> static var, and there are no alternative APIs. I've tried googling for\n> insights but haven't come across any.\n>\n> The only datapoint I have is that I can't recall there ever being a complaint\n> against libcurl doing this exact thing. 
That of course doesn't mean it cannot\n> happen or cause problems.\n>\n>>> +\t/*\n>>> +\t * Finally we must configure the socket for being a server by setting the\n>>> +\t * certificate and key.\n>>> +\t */\n>>> +\tstatus = SSL_ConfigSecureServer(model, server_cert, private_key, kt_rsa);\n>>> +\tif (status != SECSuccess)\n>>> +\t\tereport(ERROR,\n>>> +\t\t\t\t(errmsg(\"unable to configure secure server: %s\",\n>>> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n>>> +\tstatus = SSL_ConfigServerCert(model, server_cert, private_key, NULL, 0);\n>>> +\tif (status != SECSuccess)\n>>> +\t\tereport(ERROR,\n>>> +\t\t\t\t(errmsg(\"unable to configure server for TLS server connections: %s\",\n>>> +\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n>> Why do both of these need to get called? The NSS docs say:\n>>\n>> /*\n>> ** Deprecated variant of SSL_ConfigServerCert.\n>> **\n>> ...\n>> SSL_IMPORT SECStatus SSL_ConfigSecureServer(\n>> PRFileDesc *fd, CERTCertificate *cert,\n>> SECKEYPrivateKey *key, SSLKEAType kea);\n> They don't, I had missed the deprecation warning as it's not mentioned at all\n> in the online documentation:\n>\n> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/SSL_functions/sslfnc.html\n>\n> (SSL_ConfigServerCert isn't at all mentioned there which dates it to before\n> this went in, obsoleting SSL_ConfigSecureServer.)\n>\n> Fixed by removing the superfluous call.\n>\n\n\n\nI've been looking through the new patch set, in particular the testing\nsetup.\n\nThe way it seems to proceed is to use the existing openssl generated\ncertificates and import them into NSS certificate databases. That seems\nfine to bootstrap testing, but it seems to me it would be more sound not\nto rely on openssl at all. I'd rather see the Makefile containing\ncommands to create these from scratch, which mirror the openssl\nvariants. 
IOW you should be able to build and test this from scratch,\nincluding certificate generation, without having openssl installed at all.\n\nI also notice that the invocations to pk12util don't contain the \"sql:\"\nprefix to the -d option, even though the database was created with that\nprefix a few lines above. That seems like a mistake from my reading of\nthe pk12util man page.\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n", "msg_date": "Sun, 1 Nov 2020 08:13:38 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 1 Nov 2020, at 14:13, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> I've been looking through the new patch set, in particular the testing\n> setup.\n\nThanks!\n\n> The way it seems to proceed is to use the existing openssl generated\n> certificates and imports them into NSS certificate databases. That seems\n> fine to bootstrap testing,\n\nThat's pretty much why I opted for using the existing certs: to bootstrap the\npatch and ensure OpenSSL-backend compatibility.\n\n> but it seems to me it would be more sound not\n> to rely on openssl at all. I'd rather see the Makefile containing\n> commands to create these from scratch, which mirror the openssl\n> variants. IOW you should be able to build and test this from scratch,\n> including certificate generation, without having openssl installed at all.\n\nI don't disagree with this, but I do also believe there is value in testing all\nTLS backends with exactly the same certificates to act as a baseline. The\nnssfiles target should definitely be able to generate from scratch, but maybe a\ncombination is the best option?\n\nBeing well versed in the buildfarm code, do you have an off-the-cuff idea on\nhow to do cross library testing such that OpenSSL/NSS compatibility can be\nensured? 
Andres was floating the idea of making a single sourcetree be able to\nhave both for testing but more discussion is needed to settle on a way forward.\n\n> I also notice that the invocations to pk12util don't contain the \"sql:\"\n> prefix to the -d option, even though the database was created with that\n> prefix a few lines above. That seems like a mistake from my reading of\n> the pk12util man page.\n\nFixed in the attached v16, which also drops the parts of the patchset which\nhave been submitted separately to -hackers (the sslinfo patch hunks are still\nthere are they are required).\n\ncheers ./daniel", "msg_date": "Sun, 1 Nov 2020 23:04:18 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "\nOn 11/1/20 5:04 PM, Daniel Gustafsson wrote:\n>> On 1 Nov 2020, at 14:13, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I've been looking through the new patch set, in particular the testing\n>> setup.\n> Thanks!\n>\n>> The way it seems to proceed is to use the existing openssl generated\n>> certificates and imports them into NSS certificate databases. That seems\n>> fine to bootstrap testing,\n> That's pretty much why I opted for using the existing certs: to bootstrap the\n> patch and ensure OpenSSL-backend compatibility.\n>\n>> but it seems to me it would be more sound not\n>> to rely on openssl at all. I'd rather see the Makefile containing\n>> commands to create these from scratch, which mirror the openssl\n>> variants. IOW you should be able to build and test this from scratch,\n>> including certificate generation, without having openssl installed at all.\n> I don't disagree with this, but I do also believe there is value in testing all\n> TLS backends with exactly the same certificates to act as a baseline. The\n> nssfiles target should definitely be able to generate from scratch, but maybe a\n> combination is the best option?\n\n\nYeah. 
I certainly think we need something that shows how we would\ngenerate them from scratch using nss. That said, the importation code is\nalso useful.\n\n\n\n>\n> Being well versed in the buildfarm code, do you have an off-the-cuff idea on\n> how to do cross library testing such that OpenSSL/NSS compatibility can be\n> ensured? Andres was floating the idea of making a single sourcetree be able to\n> have both for testing but more discussion is needed to settle on a way forward.\n\n\nWell, I'd probably try to leverage the knowledge we have in doing\ncross-version upgrade testing. It works like this: After the\ninstall-check-C stage each branch saves its binaries and data files in a\nspecial location, adjusting things like library locations to match. Then\nto test that version it uses that against all the older versions\nsimilarly saved.\n\n\nWe could generalize that saving mechanism and do it if any module\nrequired it. But instead of testing against a different branch, we'd\ntest against a different animal. So we'd have two animals, one building\nwith openssl and one with nss, and they would test against each other\n(i.e. one as the client and one as the server, and vice versa).\n\n\nThis would involve a deal of work on my part, but it's very doable, I\nbelieve.\n\n\nWe'd need a way to run tests where we could specify the client and\nserver binary locations.\n\n\nAnyway, those are my thoughts. Comments welcome.\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Mon, 2 Nov 2020 09:17:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 27 Oct 2020, at 21:18, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> On 27/10/2020 22:07, Daniel Gustafsson wrote:\n>> /*\n>> * Track whether the NSS database has a password set or not. 
There is no API\n>> * function for retrieving password status, so we simply flip this to true in\n>> * case NSS invoked the password callback - as that will only happen in case\n>> * there is a password. The reason for tracking this is that there are calls\n>> * which require a password parameter, but doesn't use the callbacks provided,\n>> * so we must call the callback on behalf of these.\n>> */\n>> static bool has_password = false;\n> \n> This is set in PQssl_passwd_cb function, but never reset. That seems wrong. The NSS database used in one connection might have a password, while another one might not. Or have I completely misunderstood this?\n\n(sorry for slow response). You are absolutely right, the has_password flag\nmust be tracked per connection in PGconn. The attached v17 implements this as\nwell as a frontend bugfix which caused dropped connections and some smaller fixups\nto make strings more translatable.\n\nI've also included a WIP version of SCRAM channel binding in the attached\npatch, it's currently failing to connect but someone here might spot the bug\nbefore I do so I figured it's better to include it.\n\nThe 0005 patch is now, thanks to the sslinfo patch going in on master, only\ncontaining NSS specific code. \n\ncheers ./daniel", "msg_date": "Wed, 4 Nov 2020 14:09:52 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 2 Nov 2020, at 15:17, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> We could generalize that saving mechanism and do it if any module\n> required it. But instead of testing against a different branch, we'd\n> test against a different animal. So we'd have two animals, one building\n> with openssl and one with nss, and they would test against each other\n> (i.e. one as the client and one as the server, and vice versa).\n\nThat seems like a very good plan. 
It would also allow us to test a backend\ncompiled with OpenSSL 1.0.2 against a frontend with OpenSSL 1.1.1 which might\ncome in handy when OpenSSL 3.0.0 lands.\n\n> This would involve a deal of work on my part, but it's very doable, I\n> believe.\n\nI have no experience with the buildfarm code, but I'm happy to help if theres\nanything I can do.\n\ncheers ./daniel\n\n\n", "msg_date": "Wed, 4 Nov 2020 14:14:12 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 4, 2020, at 5:09 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> (sorry for slow response). You are absolutely right, the has_password flag\n> must be tracked per connection in PGconn. The attached v17 implements this as\n> well a frontend bugfix which caused dropped connections and some smaller fixups\n> to make strings more translateable.\n\nSome initial notes from building and testing on macOS Mojave. I'm working with\nboth a brew-packaged NSS/NSPR (which includes basic nss-/nspr-config) and a\nhand-built NSS/NSPR (which does not).\n\n1. In configure.ac:\n\n> + LDFLAGS=\"$LDFLAGS $NSS_LIBS $NSPR_LIBS\"\n> + CFLAGS=\"$CFLAGS $NSS_CFLAGS $NSPR_CFLAGS\"\n> +\n> + AC_CHECK_LIB(nss3, SSL_VersionRangeSet, [], [AC_MSG_ERROR([library 'nss3' is required for NSS])])\n\nLooks like SSL_VersionRangeSet is part of libssl3, not libnss3. So this fails\nwith the hand-built stack, where there is no nss-config to populate LDFLAGS. I\nchanged the function to NSS_InitContext and that seems to work nicely.\n\n2. Among the things to eventually think about when it comes to configuring, it\nlooks like some platforms [1] install the headers under <nspr4/...> and\n<nss3/...> instead of <nspr/...> and <nss/...>. It's unfortunate that the NSS\nmaintainers never chose an official installation layout.\n\n3. 
I need two more `#define NO_NSPR_10_SUPPORT` guards added in both\n\n src/include/common/pg_nss.h\n src/port/pg_strong_random.c\n\nbefore the tree will compile for me. Both of those files include NSS headers.\n\n4. be_tls_init() refuses to run correctly for me; I end up getting an NSPR\nassertion that looks like\n\n sslMutex_Init not implemented for multi-process applications !\n\nWith assertions disabled, this ends up showing a somewhat unhelpful\n\n FATAL: unable to set up TLS connection cache: security library failure. (SEC_ERROR_LIBRARY_FAILURE)\n\nIt looks like cross-process locking isn't actually enabled on macOS, which is a\nlong-standing bug in NSPR [2, 3]. So calls to SSL_ConfigMPServerSIDCache()\nerror out.\n\n--Jacob\n\n[1] https://github.com/erthink/ReOpenLDAP/issues/112\n[2] https://bugzilla.mozilla.org/show_bug.cgi?id=538680\n[3] https://bugzilla.mozilla.org/show_bug.cgi?id=1192500\n\n\n\n", "msg_date": "Fri, 6 Nov 2020 20:37:48 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 6 Nov 2020, at 21:37, Jacob Champion <pchampion@vmware.com> wrote:\n\n> Some initial notes from building and testing on macOS Mojave. I'm working with\n> both a brew-packaged NSS/NSPR (which includes basic nss-/nspr-config) and a\n> hand-built NSS/NSPR (which does not).\n\nThanks for looking!\n\n> 1. In configure.ac:\n> \n>> + LDFLAGS=\"$LDFLAGS $NSS_LIBS $NSPR_LIBS\"\n>> + CFLAGS=\"$CFLAGS $NSS_CFLAGS $NSPR_CFLAGS\"\n>> +\n>> + AC_CHECK_LIB(nss3, SSL_VersionRangeSet, [], [AC_MSG_ERROR([library 'nss3' is required for NSS])])\n> \n> Looks like SSL_VersionRangeSet is part of libssl3, not libnss3. So this fails\n> with the hand-built stack, where there is no nss-config to populate LDFLAGS. I\n> changed the function to NSS_InitContext and that seems to work nicely.\n\nAh yes, fixed.\n\n> 2. 
Among the things to eventually think about when it comes to configuring, it\n> looks like some platforms [1] install the headers under <nspr4/...> and\n> <nss3/...> instead of <nspr/...> and <nss/...>. It's unfortunate that the NSS\n> maintainers never chose an official installation layout.\n\nYeah, maybe we need to start with the most common path and have fallbacks in\ncase not found?\n\n> 3. I need two more `#define NO_NSPR_10_SUPPORT` guards added in both\n> \n> src/include/common/pg_nss.h\n> src/port/pg_strong_random.c\n> \n> before the tree will compile for me. Both of those files include NSS headers.\n\nOdd that I was able to compile on Linux, but I've added these.\n\n> 4. be_tls_init() refuses to run correctly for me; I end up getting an NSPR\n> assertion that looks like\n> \n> sslMutex_Init not implemented for multi-process applications !\n> \n> With assertions disabled, this ends up showing a somewhat unhelpful\n> \n> FATAL: unable to set up TLS connection cache: security library failure. (SEC_ERROR_LIBRARY_FAILURE)\n> \n> It looks like cross-process locking isn't actually enabled on macOS, which is a\n> long-standing bug in NSPR [2, 3]. So calls to SSL_ConfigMPServerSIDCache()\n> error out.\n\nThats unfortunate since the session cache is required for a server application\nbacked by NSS. The attached switches to SSL_ConfigServerSessionIDCacheWithOpt\nwith which one can explicitly make the cache non-shared, which in turn backs\nthe mutexes with NSPR locks rather than the missing sem_init. Can you test\nthis version and see if that makes it work?\n\nThis version also contains a channel binding bug that Heikki pointed out off-\nlist (sadly not The bug) and a few very minor cleanups as well as a rebase to\nhandle the new pg_strong_random_init. 
Actually performing the context init\nthere is yet a TODO, but I wanted a version out that at all compiled.\n\ncheers ./daniel", "msg_date": "Sat, 7 Nov 2020 00:11:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 6, 2020, at 3:11 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> The attached switches to SSL_ConfigServerSessionIDCacheWithOpt\n> with which one can explicitly make the cache non-shared, which in turn backs\n> the mutexes with NSPR locks rather than the missing sem_init. Can you test\n> this version and see if that makes it work?\n\nYep, I get much farther through the tests with that patch. I'm currently\ndiving into another assertion failure during socket disconnection:\n\n Assertion failure: fd->secret == NULL, at prlayer.c:45\n\ncURL has some ominously vague references to this [1], though I'm not\nsure that we should work around it in the same way without knowing what\nthe cause is...\n\n--Jacob\n\n[1] https://github.com/curl/curl/blob/4d2f800/lib/vtls/nss.c#L1266\n\n\n\n", "msg_date": "Tue, 10 Nov 2020 20:11:19 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 10 Nov 2020, at 21:11, Jacob Champion <pchampion@vmware.com> wrote:\n> On Nov 6, 2020, at 3:11 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> The attached switches to SSL_ConfigServerSessionIDCacheWithOpt\n>> with which one can explicitly make the cache non-shared, which in turn backs\n>> the mutexes with NSPR locks rather than the missing sem_init. 
Can you test\n>> this version and see if that makes it work?\n> \n> Yep, I get much farther through the tests with that patch.\n\nGreat, thanks for confirming.\n\n> I'm currently\n> diving into another assertion failure during socket disconnection:\n> \n> Assertion failure: fd->secret == NULL, at prlayer.c:45\n> \n> cURL has some ominously vague references to this [1], though I'm not\n> sure that we should work around it in the same way without knowing what\n> the cause is...\n\nDigging through the archives from when this landed in curl, the assertion\nfailure was never fully identified back then but happened spuriously. Which\nversion of NSPR is this happening with?\n\ncheers ./daniel\n\n", "msg_date": "Tue, 10 Nov 2020 23:28:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 10, 2020, at 2:28 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> Digging through the archives from when this landed in curl, the assertion\n> failure was never fully identified back then but happened spuriously. Which\n> version of NSPR is this happening with?\n\nThis is NSPR 4.29, with debugging enabled. The fd that causes the\nassertion is the custom layer that's added during be_tls_open_server(),\nwhich connects a Port as the layer secret. It looks like NSPR is trying\nto help surface potential memory leaks by asserting if the secret is\nnon-NULL at the time the stack is being closed.\n\nIn this case, it doesn't matter since the Port lifetime is managed\nelsewhere, but it looks easy enough to add a custom close in the way\nthat cURL and the NSPR test programs [1] do. Sample patch attached,\nwhich gets me to the end of the tests without any assertions. 
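In outline, the fix looks like this -- modeled here in plain C with stand-in types of my own, since the real code wires the close callback through a copy of the layer's PRIOMethods table:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for NSPR's PRFileDesc; the real layer stores a Port in secret. */
typedef struct MockFileDesc MockFileDesc;
typedef int (*close_fn) (MockFileDesc *fd);

struct MockFileDesc
{
	void	   *secret;			/* layer-private data */
	close_fn	default_close;	/* the close method we delegate to */
};

/* Mirrors the NSPR debug assertion at prlayer.c:45. */
static int
mock_default_close(MockFileDesc *fd)
{
	assert(fd->secret == NULL);
	return 0;
}

/*
 * Custom close for our layer: the secret's lifetime (the Port) is managed
 * elsewhere, so just detach it before delegating, which keeps the
 * assertion above from firing.
 */
static int
mock_layer_close(MockFileDesc *fd)
{
	fd->secret = NULL;
	return fd->default_close(fd);
}
```

With that in place, tearing down the layered stack no longer trips the debug assertion.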
(Two\nfailures left on my machine.)\n\n--Jacob\n\n[1] https://hg.mozilla.org/projects/nspr/file/bf6620c143/pr/tests/nblayer.c#l354", "msg_date": "Wed, 11 Nov 2020 18:17:03 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 11, 2020, at 10:17 AM, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> (Two failures left on my machine.)\n\nFalse alarm -- the stderr debugging I'd added in to track down the\nassertion tripped up the \"no stderr\" tests. Zero failing tests now.\n\n--Jacob\n\n\n", "msg_date": "Wed, 11 Nov 2020 18:57:02 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 11, 2020, at 10:57 AM, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> False alarm -- the stderr debugging I'd added in to track down the\n> assertion tripped up the \"no stderr\" tests. Zero failing tests now.\n\nI took a look at the OpenSSL interop problems you mentioned upthread. I\ndon't see a hang like you did, but I do see a PR_IO_TIMEOUT_ERROR during\nconnection.\n\nI think pgtls_read() needs to treat PR_IO_TIMEOUT_ERROR as if no bytes\nwere read, in order to satisfy its API. There was some discussion on\nthis upthread:\n\nOn Oct 27, 2020, at 1:07 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> On 20 Oct 2020, at 21:15, Andres Freund <andres@anarazel.de> wrote:\n>> \n>>> +\t\t\tcase PR_IO_TIMEOUT_ERROR:\n>>> +\t\t\t\tbreak;\n>> \n>> What does this mean? We'll return with a 0 errno here, right? When is\n>> this case reachable?\n> \n> It should, AFAICT, only be reachable when PR_Recv is used with a timeout which\n> we don't do. It mentioned somewhere that it had happened in no-wait calls due\n> to a bug, but I fail to find that reference now. 
Either way, I've removed it\n> to fall into the default error handling which now sets errno correctly as that\n> was a paddle short here.\n\nPR_IO_TIMEOUT_ERROR is definitely returned in no-wait calls on my\nmachine. It doesn't look like the PR_Recv() API has a choice -- if\nthere's no data, it can't return a positive integer, and returning zero\nmeans that the socket has been disconnected. So -1 with a timeout error\nis the only option.\n\nI'm not completely sure why this is exposed so easily with an OpenSSL\nserver -- I'm guessing the implementation slices up its packets\ndifferently on the wire, causing a read event before NSS is able to\ndecrypt a full record -- but it's worth noting that this case also shows\nup during NSS-to-NSS psql connections, when handling notifications at\nthe end of every query. PQconsumeInput() reports a hard failure with the\ncurrent implementation, but its return value is ignored by\nPrintNotifications(). Otherwise this probably would have showed up\nearlier.\n\n(What's the best way to test this case? Are there lower-level tests for\nthe protocol/network layer somewhere that I'm missing?) \n\nWhile patching this case, I also noticed that pgtls_read() doesn't call\nSOCK_ERRNO_SET() for the disconnection case. That is also in the\nattached patch.\n\n--Jacob", "msg_date": "Thu, 12 Nov 2020 22:12:42 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 12 Nov 2020, at 23:12, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Nov 11, 2020, at 10:57 AM, Jacob Champion <pchampion@vmware.com> wrote:\n>> \n>> False alarm -- the stderr debugging I'd added in to track down the\n>> assertion tripped up the \"no stderr\" tests. 
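To spell out the contract I'd expect pgtls_read() to honor, here's a plain-C model (MOCK_PR_IO_TIMEOUT_ERROR is a stand-in constant of mine; the real code reads via PR_Recv() and fetches the error with PR_GetError()):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for NSPR's PR_IO_TIMEOUT_ERROR; the real value comes from NSPR. */
#define MOCK_PR_IO_TIMEOUT_ERROR (-5990)

/*
 * Map a PR_Recv-style result (count plus error code) onto the pgtls_read
 * contract.  A timeout on a no-wait read just means nothing was
 * decryptable yet, so report "zero bytes read" rather than a hard error;
 * a zero count from the library means the peer disconnected, and that
 * path must set errno (the missing SOCK_ERRNO_SET) before returning -1.
 */
static int
mock_map_read_result(int nread, int pr_error, int *errno_out)
{
	*errno_out = 0;

	if (nread > 0)
		return nread;			/* plain data */

	if (nread == 0)
	{
		*errno_out = ECONNRESET;	/* peer disconnect */
		return -1;
	}

	if (pr_error == MOCK_PR_IO_TIMEOUT_ERROR)
		return 0;				/* no data available yet, not a failure */

	*errno_out = ECONNRESET;	/* simplified catch-all failure path */
	return -1;
}
```

The exact errno choices here are illustrative; the point is only that the timeout maps to "no data yet" instead of a hard failure.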
Zero failing tests now.\n> \n> I took a look at the OpenSSL interop problems you mentioned upthread.\n\nGreat, thanks!\n\n> I don't see a hang like you did, but I do see a PR_IO_TIMEOUT_ERROR during\n> connection.\n> \n> I think pgtls_read() needs to treat PR_IO_TIMEOUT_ERROR as if no bytes\n> were read, in order to satisfy its API. There was some discussion on\n> this upthread:\n> \n> On Oct 27, 2020, at 1:07 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>> On 20 Oct 2020, at 21:15, Andres Freund <andres@anarazel.de> wrote:\n>>> \n>>>> +\t\t\tcase PR_IO_TIMEOUT_ERROR:\n>>>> +\t\t\t\tbreak;\n>>> \n>>> What does this mean? We'll return with a 0 errno here, right? When is\n>>> this case reachable?\n>> \n>> It should, AFAICT, only be reachable when PR_Recv is used with a timeout which\n>> we don't do. It mentioned somewhere that it had happened in no-wait calls due\n>> to a bug, but I fail to find that reference now. Either way, I've removed it\n>> to fall into the default error handling which now sets errno correctly as that\n>> was a paddle short here.\n> \n> PR_IO_TIMEOUT_ERROR is definitely returned in no-wait calls on my\n> machine. It doesn't look like the PR_Recv() API has a choice -- if\n> there's no data, it can't return a positive integer, and returning zero\n> means that the socket has been disconnected. So -1 with a timeout error\n> is the only option.\n\nRight, that makes sense.\n\n> I'm not completely sure why this is exposed so easily with an OpenSSL\n> server -- I'm guessing the implementation slices up its packets\n> differently on the wire, causing a read event before NSS is able to\n> decrypt a full record -- but it's worth noting that this case also shows\n> up during NSS-to-NSS psql connections, when handling notifications at\n> the end of every query. PQconsumeInput() reports a hard failure with the\n> current implementation, but its return value is ignored by\n> PrintNotifications(). 
Otherwise this probably would have showed up\n> earlier.\n\nShould there perhaps be an Assert there to catch those?\n\n> (What's the best way to test this case? Are there lower-level tests for\n> the protocol/network layer somewhere that I'm missing?)\n\nNot AFAIK. Having been knee-deep now, do you have any ideas on how to\nimplement?\n\n> While patching this case, I also noticed that pgtls_read() doesn't call\n> SOCK_ERRNO_SET() for the disconnection case. That is also in the\n> attached patch.\n\nAh yes, nice catch.\n\nI've incorporated this patch as well as the previous patch for the assertion\nfailure on private callback data into the attached v19 patchset. I also did a\nspellcheck and pgindent run on it for ease of review.\n\ncheers ./daniel", "msg_date": "Fri, 13 Nov 2020 13:14:58 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 13, 2020, at 4:14 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 12 Nov 2020, at 23:12, Jacob Champion <pchampion@vmware.com> wrote:\n>> \n>> I'm not completely sure why this is exposed so easily with an OpenSSL\n>> server -- I'm guessing the implementation slices up its packets\n>> differently on the wire, causing a read event before NSS is able to\n>> decrypt a full record -- but it's worth noting that this case also shows\n>> up during NSS-to-NSS psql connections, when handling notifications at\n>> the end of every query. PQconsumeInput() reports a hard failure with the\n>> current implementation, but its return value is ignored by\n>> PrintNotifications(). Otherwise this probably would have showed up\n>> earlier.\n> \n> Should there perhaps be an Assert there to catch those?\n\nHm. From the perspective of helping developers out, perhaps, but from\nthe standpoint of \"don't crash when an endpoint outside our control does\nsomething strange\", I think that's a harder sell. 
Should the error be\nbubbled all the way up instead? Or perhaps, if psql isn't supposed to\ntreat notification errors as \"hard\" failures, it should at least warn\nthe user that something is fishy?\n\n>> (What's the best way to test this case? Are there lower-level tests for\n>> the protocol/network layer somewhere that I'm missing?)\n> \n> Not AFAIK. Having been knee-deep now, do you have any ideas on how to\n> implement?\n\nI think that testing these sorts of important edge cases needs a\nfriendly DSL -- something that doesn't want to make devs tear their hair\nout while building tests. I've been playing a little bit with Scapy [1]\nto understand more of the libpq v3 protocol; I'll see if that can be\nadapted for pieces of the TLS handshake in a way that's easy to\nmaintain. If it can be, maybe that'd be a good starting example.\n\n> I've incorporated this patch as well as the previous patch for the assertion\n> failure on private callback data into the attached v19 patchset. I also did a\n> spellcheck and pgindent run on it for ease of review.\n\nCommit 6be725e70 got rid of some psql error messaging that the tests\nwere keying off of, so there are a few new failures after a rebase onto\nlatest master.\n\nI've attached a patch that gets the SCRAM tests a little further\n(certificate hashing was caught in an infinite loop). 
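The repaired lookup amounts to a bounded scan that fails cleanly when nothing matches; sketched in plain C with a stand-in table in place of NSS's OID machinery (all names here are mine):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the signature-OID-to-digest association NSS hands us. */
struct mock_digest_map
{
	const char *oid_name;
	const char *digest;
};

static const struct mock_digest_map mock_map[] = {
	{"PKCS #1 SHA-256 With RSA Encryption", "SHA-256"},
	{"PKCS #1 SHA-384 With RSA Encryption", "SHA-384"},
};

/*
 * Return the digest name for a signature algorithm, or NULL when nothing
 * matches.  The loop variable always advances, so an unknown OID makes the
 * scan terminate and lets the caller report an error instead of looping
 * forever.
 */
static const char *
mock_find_digest(const char *oid_name)
{
	for (size_t i = 0; i < sizeof(mock_map) / sizeof(mock_map[0]); i++)
	{
		if (strcmp(mock_map[i].oid_name, oid_name) == 0)
			return mock_map[i].digest;
	}
	return NULL;
}
```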
I also added error\nchecks to those loops, along the lines of the existing OpenSSL\nimplementation: if a suitable digest can't be found, the user will see\nan error like\n\n psql: error: could not find digest for OID 'PKCS #1 SHA-256 With RSA Encryption'\n\nIt's a little verbose but I don't think this case should come up in\nnormal practice.\n\n--Jacob\n\n[1] https://scapy.net/", "msg_date": "Mon, 16 Nov 2020 20:00:47 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 16 Nov 2020, at 21:00, Jacob Champion <pchampion@vmware.com> wrote:\n> On Nov 13, 2020, at 4:14 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> I've incorporated this patch as well as the previous patch for the assertion\n>> failure on private callback data into the attached v19 patchset. I also did a\n>> spellcheck and pgindent run on it for ease of review.\n> \n> Commit 6be725e70 got rid of some psql error messaging that the tests\n> were keying off of, so there are a few new failures after a rebase onto\n> latest master.\n> \n> I've attached a patch that gets the SCRAM tests a little further\n> (certificate hashing was caught in an infinite loop). I also added error\n> checks to those loops, along the lines of the existing OpenSSL\n> implementation: if a suitable digest can't be found, the user will see\n> an error like\n> \n> psql: error: could not find digest for OID 'PKCS #1 SHA-256 With RSA Encryption'\n> \n> It's a little verbose but I don't think this case should come up in\n> normal practice.\n\nNice, thanks for the fix! I've incorporated your patch into the attached v20\nwhich also fixes client side error reporting to be more readable. 
The SCRAM\ntests are now also hooked up, albeit with SKIP blocks for NSS, so they can\nstart getting fixed.\n\ncheers ./daniel", "msg_date": "Tue, 17 Nov 2020 16:00:53 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Nov 17, 2020, at 7:00 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> Nice, thanks for the fix! I've incorporated your patch into the attached v20\n> which also fixes client side error reporting to be more readable.\n\nI was testing handshake failure modes and noticed that some FATAL\nmessages are being sent through to the client in cleartext. The OpenSSL\nimplementation doesn't do this, because it logs handshake problems at\nCOMMERROR level. Should we switch all those ereport() calls in the NSS\nbe_tls_open_server() to COMMERROR as well (and return explicitly), to\navoid this? Or was there a reason for logging at FATAL/ERROR level?\n\nRelated note, at the end of be_tls_open_server():\n\n> ...\n> port->ssl_in_use = true;\n> return 0;\n> \n> error:\n> return 1;\n> }\n\nThis needs to return -1 in the error case; the only caller of\nsecure_open_server() does a direct `result == -1` comparison rather than\nchecking `result != 0`.\n\n--Jacob\n\n", "msg_date": "Fri, 4 Dec 2020 00:57:26 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2020-10-27 at 21:07 +0100, Daniel Gustafsson wrote:\r\n> > On 20 Oct 2020, at 21:15, Andres Freund <andres@anarazel.de> wrote:\r\n> > \r\n> > > +static SECStatus\r\n> > > +pg_cert_auth_handler(void *arg, PRFileDesc * fd, PRBool checksig, PRBool isServer)\r\n> > > +{\r\n> > > +\tSECStatus\tstatus;\r\n> > > +\tPort\t *port = (Port *) arg;\r\n> > > +\tCERTCertificate *cert;\r\n> > > +\tchar\t *peer_cn;\r\n> > > +\tint\t\t\tlen;\r\n> > > +\r\n> > > +\tstatus = 
SSL_AuthCertificate(CERT_GetDefaultCertDB(), port->pr_fd, checksig, PR_TRUE);\r\n> > > +\tif (status == SECSuccess)\r\n> > > +\t{\r\n> > > +\t\tcert = SSL_PeerCertificate(port->pr_fd);\r\n> > > +\t\tlen = strlen(cert->subjectName);\r\n> > > +\t\tpeer_cn = MemoryContextAllocZero(TopMemoryContext, len + 1);\r\n> > > +\t\tif (strncmp(cert->subjectName, \"CN=\", 3) == 0)\r\n> > > +\t\t\tstrlcpy(peer_cn, cert->subjectName + strlen(\"CN=\"), len + 1);\r\n> > > +\t\telse\r\n> > > +\t\t\tstrlcpy(peer_cn, cert->subjectName, len + 1);\r\n> > > +\t\tCERT_DestroyCertificate(cert);\r\n> > > +\r\n> > > +\t\tport->peer_cn = peer_cn;\r\n> > > +\t\tport->peer_cert_valid = true;\r\n> > \r\n> > Hm. We either should have something similar to\r\n> > \r\n> > \t\t\t/*\r\n> > \t\t\t * Reject embedded NULLs in certificate common name to prevent\r\n> > \t\t\t * attacks like CVE-2009-4034.\r\n> > \t\t\t */\r\n> > \t\t\tif (len != strlen(peer_cn))\r\n> > \t\t\t{\r\n> > \t\t\t\tereport(COMMERROR,\r\n> > \t\t\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> > \t\t\t\t\t\t errmsg(\"SSL certificate's common name contains embedded null\")));\r\n> > \t\t\t\tpfree(peer_cn);\r\n> > \t\t\t\treturn -1;\r\n> > \t\t\t}\r\n> > here, or a comment explaining why not.\r\n> \r\n> We should, but it's proving rather difficult as there is no equivalent API call\r\n> to get the string as well as the expected length of it.\r\n\r\nI'm going to try to tackle this part next. It looks like NSS uses RFC\r\n4514 (or something like it) backslash-quoting, which this code either\r\nneeds to undo or bypass before performing a comparison.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 13 Jan 2021 17:07:53 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 17, 2020 at 04:00:53PM +0100, Daniel Gustafsson wrote:\n> Nice, thanks for the fix! 
I've incorporated your patch into the attached v20\n> which also fixes client side error reporting to be more readable. The SCRAM\n> tests are now also hooked up, albeit with SKIP blocks for NSS, so they can\n> start getting fixed.\n\nOn top of the set of TODO items mentioned in the logs of the patches,\nthis patch set needs a rebase because it does not apply. In order to\nmove on with this set, I would suggest to extract some parts of the\npatch set independently of the others and have two buildfarm members\nfor the MSVC and non-MSVC cases to stress the parts that can be\ncommitted. Just seeing the size, we could move on with:\n- The ./configure set, with the change to introduce --with-ssl=openssl. \n- 0004 for strong randoms.\n- Support for cryptohashes.\n\n+/*\n+ * BITS_PER_BYTE is also defined in the NSPR header files, so we need to undef\n+ * our version to avoid compiler warnings on redefinition.\n+ */\n+#define pg_BITS_PER_BYTE BITS_PER_BYTE\n+#undef BITS_PER_BYTE\nThis could be done separately.\n\nsrc/sgml/libpq.sgml needs to document PQdefaultSSLKeyPassHook_nss, no?\n--\nMichael", "msg_date": "Mon, 18 Jan 2021 16:08:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 18 Jan 2021, at 08:08, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Nov 17, 2020 at 04:00:53PM +0100, Daniel Gustafsson wrote:\n>> Nice, thanks for the fix! I've incorporated your patch into the attached v20\n>> which also fixes client side error reporting to be more readable. 
The SCRAM\n>> tests are now also hooked up, albeit with SKIP blocks for NSS, so they can\n>> start getting fixed.\n> \n> On top of the set of TODO items mentioned in the logs of the patches,\n> this patch set needs a rebase because it does not apply.\n\nFixed in the attached, which also addresses the points raised earlier by Jacob\nas well as adds certificates created entirely by NSS tooling as well as initial\ncryptohash support. There is something iffy with these certs (the test fails\non mismatching ciphers and/or signature algorithms) that I haven't been able to\npin down, but to get more eyes on this I'm posting the patch with the test\nenabled. The NSS toolchain requires interactive input which makes the Makefile\na bit hacky, ideas on cleaning that up are appreciated.\n\n> In order to\n> move on with this set, I would suggest to extract some parts of the\n> patch set independently of the others and have two buildfarm members\n> for the MSVC and non-MSVC cases to stress the parts that can be\n> committed. Just seeing the size, we could move on with:\n> - The ./configure set, with the change to introduce --with-ssl=openssl. 
\n> - 0004 for strong randoms.\n> - Support for cryptohashes.\n\nI will leave it to others to decide the feasibility of this, I'm happy to slice\nand dice the commits into smaller bits to for example separate out the\n--with-ssl autoconf change into a non NSS dependent commit, if that's wanted.\n\n> +/*\n> + * BITS_PER_BYTE is also defined in the NSPR header files, so we need to undef\n> + * our version to avoid compiler warnings on redefinition.\n> + */\n> +#define pg_BITS_PER_BYTE BITS_PER_BYTE\n> +#undef BITS_PER_BYTE\n> This could be done separately.\n\nBased on an offlist discussion I believe this was a misunderstanding, but if I\ninstead misunderstood that feel free to correct me with how you think this\nshould be done.\n\n> src/sgml/libpq.sgml needs to document PQdefaultSSLKeyPassHook_nss, no?\n\nGood point, fixed.\n\ncheers ./daniel", "msg_date": "Tue, 19 Jan 2021 21:21:41 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Dec 2020, at 01:57, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Nov 17, 2020, at 7:00 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>> Nice, thanks for the fix! I've incorporated your patch into the attached v20\n>> which also fixes client side error reporting to be more readable.\n> \n> I was testing handshake failure modes and noticed that some FATAL\n> messages are being sent through to the client in cleartext. The OpenSSL\n> implementation doesn't do this, because it logs handshake problems at\n> COMMERROR level. Should we switch all those ereport() calls in the NSS\n> be_tls_open_server() to COMMERROR as well (and return explicitly), to\n> avoid this? 
Or was there a reason for logging at FATAL/ERROR level?\n\nThe ERROR logging made early development easier but then stuck around, I've\nchanged them to COMMERROR returning an error instead in the v21 patch just\nsent to the list.\n\n> Related note, at the end of be_tls_open_server():\n> \n>> ...\n>> port->ssl_in_use = true;\n>> return 0;\n>> \n>> error:\n>> return 1;\n>> }\n> \n> This needs to return -1 in the error case; the only caller of\n> secure_open_server() does a direct `result == -1` comparison rather than\n> checking `result != 0`.\n\nFixed.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 19 Jan 2021 21:23:50 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2021-01-19 at 21:21 +0100, Daniel Gustafsson wrote:\r\n> There is something iffy with these certs (the test fails\r\n> on mismatching ciphers and/or signature algorithms) that I haven't been able to\r\n> pin down, but to get more eyes on this I'm posting the patch with the test\r\n> enabled.\r\n\r\nRemoving `--keyUsage keyEncipherment` from the native_server-* CSR\r\ngeneration seems to let the tests pass for me, but I'm wary of just\r\npushing that as a solution because I don't understand why that would\r\nhave anything to do with the failure mode\r\n(SSL_ERROR_NO_SUPPORTED_SIGNATURE_ALGORITHM).\r\n\r\n> The NSS toolchain requires interactive input which makes the Makefile\r\n> a bit hacky, ideas on cleaning that up are appreciated.\r\n\r\nHm. 
I got nothing, short of a feature request to NSS...\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 20 Jan 2021 00:40:07 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 20 Jan 2021, at 01:40, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Tue, 2021-01-19 at 21:21 +0100, Daniel Gustafsson wrote:\n>> There is something iffy with these certs (the test fails\n>> on mismatching ciphers and/or signature algorithms) that I haven't been able to\n>> pin down, but to get more eyes on this I'm posting the patch with the test\n>> enabled.\n> \n> Removing `--keyUsage keyEncipherment` from the native_server-* CSR\n> generation seems to let the tests pass for me, but I'm wary of just\n> pushing that as a solution because I don't understand why that would\n> have anything to do with the failure mode\n> (SSL_ERROR_NO_SUPPORTED_SIGNATURE_ALGORITHM).\n\nAha, that was a good clue, I had overlooked the required extensions in the CSR.\nRe-reading RFC 5280 it seems we need keyEncipherment, dataEncipherment and\ndigitalSignature to create a valid SSL Server certificate. Adding those indeed\nmakes the test pass. Skimming the certutil code *I think* removing it as you\ndid caused a set of defaults to kick in that made it work based on the parameter\n\"--nsCertType sslServer\", but it's not entirely easy to make out. 
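The extension requirement being discussed maps to specific RFC 5280 KeyUsage bits. As a small illustration of the check implied above (the bit assignments come from RFC 5280, where DER bit 0 is the most significant bit of the first octet; the helper itself is a sketch, not the test suite's code):

```python
# RFC 5280 KeyUsage bits, first octet, MSB-first per DER BIT STRING rules.
DIGITAL_SIGNATURE = 0x80   # bit 0
KEY_ENCIPHERMENT = 0x20    # bit 2
DATA_ENCIPHERMENT = 0x10   # bit 3

def usable_as_tls_server(key_usage_octet):
    """Check that all the usages mentioned above are asserted in the
    certificate's first KeyUsage octet (illustrative only)."""
    required = DIGITAL_SIGNATURE | KEY_ENCIPHERMENT | DATA_ENCIPHERMENT
    return (key_usage_octet & required) == required
```

A CSR generated with only `--keyUsage keyEncipherment` would fail this combined check, which matches the symptom discussed here.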
Either way,\nrelying on defaults in a test suite seems less than good, so I've extended the\nMakefile to be explicit about the extensions.\n\nThe attached v22 rebase incorporates the fixup to the test Makefile, with no\nfurther changes on top of that.\n\ncheers ./daniel", "msg_date": "Wed, 20 Jan 2021 12:58:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-01-20 at 12:58 +0100, Daniel Gustafsson wrote:\r\n> Aha, that was a good clue, I had overlooked the required extensions in the CSR.\r\n> Re-reading RFC 5280 it seems we need keyEncipherment, dataEncipherment and\r\n> digitalSignature to create a valid SSL Server certificate. Adding those indeed\r\n> makes the test pass. Skimming the certutil code *I think* removing it as you\r\n> did caused a set of defaults to kick in that made it work based on the parameter\r\n> \"--nsCertType sslServer\", but it's not entirely easy to make out.\r\n\r\nLovely. I didn't expect *removing* an extension to effectively *add*\r\nmore, but I'm glad it works now.\r\n\r\n==\r\n\r\nTo continue the Subject Common Name discussion [1] from a different\r\npart of the thread:\r\n\r\nAttached is a v23 version of the patchset that peels the raw Common\r\nName out from a client cert's Subject. 
This allows the following cases\r\nthat the OpenSSL implementation currently handles:\r\n\r\n- subjects that don't begin with a CN\r\n- subjects with quotable characters\r\n- subjects that have no CN at all\r\nEmbedded NULLs are now handled in a similar manner to the OpenSSL side,\r\nthough because this failure happens during the certificate\r\nauthentication callback, it results in a TLS alert rather than simply\r\nclosing the connection.\r\n\r\nFor easier review of just the parts I've changed, I've also attached a\r\nsince-v22.diff, which is part of the 0001 patch.\r\n\r\n--Jacob\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/7d6a23a7e30540b486abc823f7ced7a93e1da1e8.camel%40vmware.com", "msg_date": "Wed, 20 Jan 2021 17:07:08 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Jan 19, 2021 at 09:21:41PM +0100, Daniel Gustafsson wrote:\n>> In order to\n>> move on with this set, I would suggest to extract some parts of the\n>> patch set independently of the others and have two buildfarm members\n>> for the MSVC and non-MSVC cases to stress the parts that can be\n>> committed. Just seeing the size, we could move on with:\n>> - The ./configure set, with the change to introduce --with-ssl=openssl. \n>> - 0004 for strong randoms.\n>> - Support for cryptohashes.\n> \n> I will leave it to others to decide the feasibility of this, I'm happy to slice\n> and dice the commits into smaller bits to for example separate out the\n> --with-ssl autoconf change into a non NSS dependent commit, if that's wanted.\n\nIMO it makes sense to extract the independent pieces and build on top\nof them. The bulk of the changes is likely going to have a bunch of\ncomments if reviewed deeply, so I think that we had better remove from\nthe stack the small-ish problems to ease the next moves. 
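The embedded-NULL handling mentioned above follows the same idea as the OpenSSL path quoted earlier in the thread (CVE-2009-4034): the certificate declares a byte length for the Common Name, and a NUL byte before that length means a C-string comparison would silently truncate the name. A minimal sketch of the invariant, not the patch itself:

```python
def cn_is_safe(cn_bytes):
    """Reject a Common Name whose certificate-declared bytes contain an
    embedded NUL, since C-string handling would stop at the first NUL
    and compare only the benign-looking prefix."""
    return b"\x00" not in cn_bytes

# The C version expresses the same check as a length comparison:
# strlen() stops at the first NUL, so len != strlen(peer_cn) reveals
# an embedded terminator.
```

For example, `b"trusted.example\x00evil.example"` must be rejected even though its prefix looks like a trusted host name.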
The\n./configure part and replacement of with_openssl by with_ssl is mixed\nin 0001 and 0002, which is actually confusing. And, FWIW, I would be\nfine with applying a patch that introduces a --with-ssl with a\ncompatibility kept for --with-openssl. This is what 0001 is doing,\nactually, similarly to the past switches for --with-uuid.\n\nA point that has been mentioned offline by you, but not mentioned on\nthis list. The structure of the modules in src/test/ssl/ could be\nrefactored to help with an easier integration of more SSL libraries.\nThis makes sense taken independently.\n\n> Based on an offlist discussion I believe this was a misunderstanding, but if I\n> instead misunderstood that feel free to correct me with how you think this\n> should be done.\n\nThe point would be to rename BITS_PER_BYTE to PG_BITS_PER_BYTE in the\ncode and avoid conflicts. I am not completely sure if others would\nagree here, but this would remove quite some ifdef/undef stuff from\nthe code dedicated to NSS.\n\n> > src/sgml/libpq.sgml needs to document PQdefaultSSLKeyPassHook_nss, no?\n> \n> Good point, fixed.\n\nPlease note that patch 0001 is failing to apply after the recent\ncommit b663a41. There are conflicts in postgres_fdw.out.\n\nAlso, what's the minimum version of NSS that would be supported? It\nwould be good to define an acceptable older version, to keep that\ndocumented and to track that perhaps with some configure checks (?),\nsimilarly to what is done for OpenSSL.\n\nPatch 0006 has three trailing whitespaces (git diff --check\ncomplains). Running the regression tests of pgcrypto, I think that\nthe SHA2 implementation is not completely right. Some SHA2 encoding\nreports results from already-freed data. 
I have spotted a second\nissue within scram_HMAC_init(), where pg_cryptohash_create() remains\nstuck inside NSS_InitContext(), freezing the regression tests where\npassword hashed for SCRAM are created.\n\n+ ResourceOwnerEnlargeCryptoHash(CurrentResourceOwner);\n+ ctx = MemoryContextAlloc(TopMemoryContext, sizeof(pg_cryptohash_ctx));\n+#else\n+ ctx = pg_malloc(sizeof(pg_cryptohash_ctx));\n+#endif\ncryptohash_nss.c cannot use pg_malloc() for frontend allocations. On\nOOM, your patch would call exit() directly, even within libpq. But\nshared library callers need to know about the OOM failure.\n\n+ explicit_bzero(ctx, sizeof(pg_cryptohash_ctx));\n+ pfree(ctx);\nFor similar reasons, pfree should not be used for the frontend code in\ncryptohash_nss.c. The fallback should be just a malloc/free set.\n\n+ status = PK11_DigestBegin(ctx->pk11_context);\n+\n+ if (status != SECSuccess)\n+ return 1;\n+ return 0;\nThis needs to return -1 on failure, not 1.\n\nI really need to study more the choide of the options chosen for\nNSS_InitContext()... But based on the docs I can read on the matter I\nthink that saving nsscontext in pg_cryptohash_ctx is right for each\ncryptohash built.\n\nsrc/tools/msvc/ is missing an update for cryptohash_nss.c.\n--\nMichael", "msg_date": "Thu, 21 Jan 2021 14:21:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2020-07-20 at 15:35 +0200, Daniel Gustafsson wrote:\r\n> With this, I have one failing test (\"intermediate client certificate is\r\n> provided by client\") which I've left failing since I believe the case should be\r\n> supported by NSS. 
The issue is most likely that I havent figured out the right\r\n> certinfo incantation to make it so (Mozilla hasn't strained themselves when\r\n> writing documentation for this toolchain, or any part of NSS for that matter).\r\n\r\nI think we're missing a counterpart to this piece of the OpenSSL\r\nimplementation, in be_tls_init():\r\n\r\n if (ssl_ca_file[0])\r\n {\r\n ...\r\n SSL_CTX_set_client_CA_list(context, root_cert_list);\r\n }\r\n\r\nI think the NSS equivalent to SSL_CTX_set_client_CA_list() is probably\r\nSSL_SetTrustAnchors() (which isn't called out in the online NSS docs,\r\nas far as I can see).\r\n\r\nWhat I'm less sure of is how we want the NSS counterpart to ssl_ca_file\r\nto behave. The OpenSSL implementation allows a list of CA names to be\r\nsent. Should the NSS side take a list of CA cert nicknames? a list of\r\nSubjects? something else?\r\n\r\nmod_nss for httpd had a proposed feature [1] to do this that\r\nunfortunately withered on the vine, and Google returns ~500 results for\r\n\"SSL_SetTrustAnchors\", so I'm unaware of any prior art in the wild...\r\n\r\n--Jacob\r\n\r\n[1] https://bugzilla.redhat.com/show_bug.cgi?id=719401\r\n", "msg_date": "Thu, 21 Jan 2021 20:16:50 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-01-21 at 14:21 +0900, Michael Paquier wrote:\r\n> Also, what's the minimum version of NSS that would be supported? 
It\r\n> would be good to define an acceptable older version, to keep that\r\n> documented and to track that perhaps with some configure checks (?),\r\n> similarly to what is done for OpenSSL.\r\n\r\nSome version landmarks:\r\n\r\n- 3.21 adds support for extended master secret, which according to [1]\r\nis required for SCRAM channel binding to actually be secure.\r\n- 3.26 is Debian Stretch.\r\n- 3.28 is Ubuntu 16.04, and RHEL6 (I think).\r\n- 3.35 is Ubuntu 18.04.\r\n- 3.36 is RHEL7 (I think).\r\n- 3.39 gets us final TLS 1.3 support.\r\n- 3.42 is Debian Buster.\r\n- 3.49 is Ubuntu 20.04.\r\n\r\n(I'm having trouble finding online package information for RHEL variants, so I've pulled those versions from online support docs. If someone notices that those are wrong please speak up.)\r\nSo 3.39 would guarantee TLS1.3 but exclude a decent chunk of still-\r\nsupported Debian-alikes. Anything less than 3.21 seems actively unsafe\r\nunless we disable SCRAM with those versions.\r\n\r\nAny other important landmarks (whether feature- or distro-related) we\r\nneed to consider?\r\n\r\n--Jacob\r\n\r\n[1] https://tools.ietf.org/html/rfc7677#section-4\r\n", "msg_date": "Fri, 22 Jan 2021 01:14:27 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Jan 20, 2021 at 05:07:08PM +0000, Jacob Champion wrote:\n> Lovely. I didn't expect *removing* an extension to effectively *add*\n> more, but I'm glad it works now.\n\nMy apologies for chiming in. I was looking at your patch set here,\nand while reviewing the strong random and cryptohash parts I have\nfound a couple of mistakes in the ./configure part. 
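The version-landmark reasoning above reduces to a tuple comparison against whatever floor is chosen; treating 3.21 (extended master secret, needed for safe SCRAM channel binding) as the minimum would look roughly like this. Purely illustrative, not a proposed configure check:

```python
def parse_nss_version(text):
    """Turn a dotted version string like \"3.21\" into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def meets_minimum(version, minimum="3.21"):
    """True if an NSS version string is at least the chosen floor
    (3.21 here, per the channel-binding discussion above)."""
    return parse_nss_version(version) >= parse_nss_version(minimum)
```

Tuple comparison avoids the classic string-comparison trap where "3.9" would sort after "3.21".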
I think that the\nswitch from --with-openssl to --with-ssl={openssl} could just be done\nindependently as a building piece of the rest, then the first portion\nbased on NSS could just add the minimum set in configure.ac.\n\nPlease note that the patch set has been using autoconf from Debian, or\nsomething forked from upstream. There were also missing updates in\nseveral parts of the code base, and a lack of docs for the new\nswitch. I have spent time checking that with --with-openssl to make\nsure that the obsolete grammar is still compatible, --with-ssl=openssl\nand also without it.\n\nThoughts?\n--\nMichael", "msg_date": "Wed, 27 Jan 2021 16:39:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-01-27 at 16:39 +0900, Michael Paquier wrote:\r\n> My apologies for chiming in. I was looking at your patch set here,\r\n> and while reviewing the strong random and cryptohash parts I have\r\n> found a couple of mistakes in the ./configure part. I think that the\r\n> switch from --with-openssl to --with-ssl={openssl} could just be done\r\n> independently as a building piece of the rest, then the first portion\r\n> based on NSS could just add the minimum set in configure.ac.\r\n> \r\n> Please note that the patch set has been using autoconf from Debian, or\r\n> something forked from upstream. There were also missing updates in\r\n> several parts of the code base, and a lack of docs for the new\r\n> switch. 
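As a rough illustration of the backward-compatible mapping under discussion, a configure.ac fragment could keep --with-openssl working as an alias for --with-ssl=openssl. This is a hand-written sketch using plain autoconf macros, not the hunk from the actual patch (which may use PostgreSQL's own argument helpers):

```
AC_ARG_WITH([ssl],
  [AS_HELP_STRING([--with-ssl=LIB], [use LIB for SSL/TLS support (openssl)])],
  [], [with_ssl=no])

# Legacy switch: --with-openssl is kept as an alias for --with-ssl=openssl
AC_ARG_WITH([openssl],
  [AS_HELP_STRING([--with-openssl], [obsolete spelling of --with-ssl=openssl])],
  [with_ssl=openssl], [])

case $with_ssl in
  no|openssl) ;;
  *) AC_MSG_ERROR([--with-ssl must be set to "openssl"]) ;;
esac
```

Adding NSS support later would then only extend the case arm and the help string, which is why splitting this switch out first simplifies the rest of the set.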
I have spent time checking that with --with-openssl to make\r\n> sure that the obsolete grammar is still compatible, --with-ssl=openssl\r\n> and also without it.\r\n> \r\n> Thoughts?\r\n\r\nSeems good to me on Ubuntu; builds with both flavors.\r\n\r\nFrom peering at the Windows side:\r\n\r\n> --- a/src/tools/msvc/config_default.pl\r\n> +++ b/src/tools/msvc/config_default.pl\r\n> @@ -16,7 +16,7 @@ our $config = {\r\n> \ttcl => undef, # --with-tcl=<path>\r\n> \tperl => undef, # --with-perl=<path>\r\n> \tpython => undef, # --with-python=<path>\r\n> -\topenssl => undef, # --with-openssl=<path>\r\n> +\topenssl => undef, # --with-ssl=openssl with <path>\r\n> \tuuid => undef, # --with-uuid=<path>\r\n> \txml => undef, # --with-libxml=<path>\r\n> \txslt => undef, # --with-libxslt=<path>\r\n\r\nSo to check understanding: the `openssl` config variable is still alive\r\nfor MSVC builds; it just turns that into `--with-ssl=openssl` in the\r\nfake CONFIGURE_ARGS?\r\n\r\n<bikeshed color=\"lightblue\">\r\n\r\nSince SSL is an obsolete term, and the choice of OpenSSL vs NSS vs\r\n[nothing] affects server operation (such as cryptohash) regardless of\r\nwhether or not connection-level TLS is actually used, what would you\r\nall think about naming this option --with-crypto? I.e.\r\n\r\n --with-crypto=openssl\r\n --with-crypto=nss\r\n\r\n</bikeshed>\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 27 Jan 2021 18:47:17 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Jan 27, 2021 at 06:47:17PM +0000, Jacob Champion wrote:\n> So to check understanding: the `openssl` config variable is still alive\n> for MSVC builds; it just turns that into `--with-ssl=openssl` in the\n> fake CONFIGURE_ARGS?\n\nYeah, I think that keeping both variables separated in the MSVC\nscripts is the most straight-forward option, as this passes down a\npath. 
Once there is a value for nss, we'd need to properly issue an\nerror if both OpenSSL and NSS are specified.\n\n> Since SSL is an obsolete term, and the choice of OpenSSL vs NSS vs\n> [nothing] affects server operation (such as cryptohash) regardless of\n> whether or not connection-level TLS is actually used, what would you\n> all think about naming this option --with-crypto? I.e.\n> \n> --with-crypto=openssl\n> --with-crypto=nss\n\nLooking around, curl has multiple switches for each lib with one named\n--with-ssl for OpenSSL, but it needs to be able to use multiple\nlibraries at run time. I can spot that libssh2 uses what you are\nproposing. It seems to me that --with-ssl is a bit more popular but\nnot by that much: wget, wayland, some apache stuff (it uses a path as\noption value). Anyway, what you are suggesting sounds like a good in\nthe context of Postgres. Daniel?\n--\nMichael", "msg_date": "Thu, 28 Jan 2021 15:06:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 28 Jan 2021, at 07:06, Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 27, 2021 at 06:47:17PM +0000, Jacob Champion wrote:\n\n>> Since SSL is an obsolete term, and the choice of OpenSSL vs NSS vs\n>> [nothing] affects server operation (such as cryptohash) regardless of\n>> whether or not connection-level TLS is actually used, what would you\n>> all think about naming this option --with-crypto? I.e.\n>> \n>> --with-crypto=openssl\n>> --with-crypto=nss\n> \n> Looking around, curl has multiple switches for each lib with one named\n> --with-ssl for OpenSSL, but it needs to be able to use multiple\n> libraries at run time. \n\nTo be fair, if we started over in curl I would push back on --with-ssl meaning\nOpenSSL but that ship has long since sailed.\n\n> I can spot that libssh2 uses what you are\n> proposing. 
It seems to me that --with-ssl is a bit more popular but\n> not by that much: wget, wayland, some apache stuff (it uses a path as\n> option value). Anyway, what you are suggesting sounds like a good in\n> the context of Postgres. Daniel?\n\nSSL is admittedly an obsolete technical term, but it's one that enough people\nhave decided is interchangeable with TLS that it's not a hill worth dying on\nIMHO. Since postgres won't allow for using libnss or OpenSSL for cryptohash\n*without* compiling SSL/TLS support (used or not), I think --with-ssl=LIB is\nmore descriptive and less confusing.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 29 Jan 2021 00:20:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-01-21 at 20:16 +0000, Jacob Champion wrote:\r\n> I think we're missing a counterpart to this piece of the OpenSSL\r\n> implementation, in be_tls_init():\r\n\r\nNever mind. Using SSL_SetTrustAnchor is something we could potentially\r\ndo if we wanted to further limit the CAs that are actually sent to the\r\nclient, but it shouldn't be necessary to get the tests to pass.\r\n\r\nI now think that it's just a matter of making sure that the \"server-cn-\r\nonly\" DB has the root_ca.crt included, so that it can correctly\r\nvalidate the client certificate. Incidentally I think this should also\r\nfix the remaining failing SCRAM test. 
I'll try to get a patch out\r\ntomorrow, if adding the root CA doesn't invalidate some other test\r\nlogic.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 29 Jan 2021 01:06:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jan 29, 2021 at 12:20:21AM +0100, Daniel Gustafsson wrote:\n> SSL is admittedly an obsolete technical term, but it's one that enough people\n> have decided is interchangeable with TLS that it's not a hill worth dying on\n> IMHO. Since postgres won't allow for using libnss or OpenSSL for cryptohash\n> *without* compiling SSL/TLS support (used or not), I think --with-ssl=LIB is\n> more descriptive and less confusing.\n\nOkay, let's use --with-ssl then for the new switch name. The previous\npatch is backward-compatible, and will simplify the rest of the set,\nso let's move on with it. Once this is done, my guess is that it\nwould be cleaner to have a new patch that includes only the\n./configure and MSVC changes, and then the rest: test refactoring,\ncryptohash, strong random and lastly TLS (we may want to cut this a\nbit more though and perhaps have some restrictions depending on the\nscope of options a first patch set could support).\n\nI'll wait a bit first to see if there are any objections to this\nchange.\n--\nMichael", "msg_date": "Fri, 29 Jan 2021 15:01:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 21 Jan 2021, at 06:21, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Jan 19, 2021 at 09:21:41PM +0100, Daniel Gustafsson wrote:\n>>> In order to\n>>> move on with this set, I would suggest to extract some parts of the\n>>> patch set independently of the others and have two buildfarm members\n>>> for the MSVC and non-MSVC cases to stress the parts that can be\n>>> committed. 
Just seeing the size, we could move on with:\n>>> - The ./configure set, with the change to introduce --with-ssl=openssl. \n>>> - 0004 for strong randoms.\n>>> - Support for cryptohashes.\n>> \n>> I will leave it to others to decide the feasibility of this, I'm happy to slice\n>> and dice the commits into smaller bits to for example separate out the\n>> --with-ssl autoconf change into a non NSS dependent commit, if that's wanted.\n> \n> IMO it makes sense to extract the independent pieces and build on top\n> of them. The bulk of the changes is likely going to have a bunch of\n> comments if reviewed deeply, so I think that we had better remove from\n> the stack the small-ish problems to ease the next moves. The\n> ./configure part and replacement of with_openssl by with_ssl is mixed\n> in 0001 and 0002, which is actually confusing. And, FWIW, I would be\n> fine with applying a patch that introduces a --with-ssl with a\n> compatibility kept for --with-openssl. This is what 0001 is doing,\n> actually, similarly to the past switches for --with-uuid.\n\nThis has been discussed elsewhere in the thread, so let's continue that there.\nThe attached v23 does however split off --with-ssl for OpenSSL in 0001, adding\nthe nss option in 0002.\n\n> A point that has been mentioned offline by you, but not mentioned on\n> this list. The structure of the modules in src/test/ssl/ could be\n> refactored to help with an easier integration of more SSL libraries.\n> This makes sense taken independently.\n\nThis has been submitted in F513E66A-E693-4802-9F8A-A74C1D0E3D10@yesql.se.\n\n>> Based on an offlist discussion I believe this was a misunderstanding, but if I\n>> instead misunderstood that feel free to correct me with how you think this\n>> should be done.\n> \n> The point would be to rename BITS_PER_BYTE to PG_BITS_PER_BYTE in the\n> code and avoid conflicts. 
I am not completely sure if others would\n> agree here, but this would remove quite some ifdef/undef stuff from\n> the code dedicated to NSS.\n\nAha, now I see what you mean, sorry for the confusion. That can certainly be\ndone (and done so outside of this patchset), but it admittedly feels a bit\nintrusive. If there is consensus that we should namespace our version like\nthis I'll go ahead and do that.\n\n>>> src/sgml/libpq.sgml needs to document PQdefaultSSLKeyPassHook_nss, no?\n>> \n>> Good point, fixed.\n> \n> Please note that patch 0001 is failing to apply after the recent\n> commit b663a41. There are conflicts in postgres_fdw.out.\n\nFixed.\n\n> Patch 0006 has three trailing whitespaces (git diff --check complains).\n\nFixed.\n\n> Running the regression tests of pgcrypto, I think that\n> the SHA2 implementation is not completely right. Some SHA2 encoding\n> reports results from already-freed data. \n\nI've been unable to reproduce, can you shed some light on this?\n\n> I have spotted a second\n> issue within scram_HMAC_init(), where pg_cryptohash_create() remains\n> stuck inside NSS_InitContext(), freezing the regression tests where\n> password hashed for SCRAM are created.\n\nI think the freezing you saw comes from opening and closing NSS contexts per\ncryptohash op (some patience on my part runs the test Ok in ~30s which is\nclearly not in the wheelhouse of acceptable), more on that below.\n\n> + ResourceOwnerEnlargeCryptoHash(CurrentResourceOwner);\n> + ctx = MemoryContextAlloc(TopMemoryContext, sizeof(pg_cryptohash_ctx));\n> +#else\n> + ctx = pg_malloc(sizeof(pg_cryptohash_ctx));\n> +#endif\n> cryptohash_nss.c cannot use pg_malloc() for frontend allocations. On\n> OOM, your patch would call exit() directly, even within libpq. 
But\n> shared library callers need to know about the OOM failure.\n\nOf course, fixed.\n\n> + status = PK11_DigestBegin(ctx->pk11_context);\n> +\n> + if (status != SECSuccess)\n> + return 1;\n> + return 0;\n> This needs to return -1 on failure, not 1.\n\nDoh, fixed.\n\n> I really need to study more the choide of the options chosen for\n> NSS_InitContext()... But based on the docs I can read on the matter I\n> think that saving nsscontext in pg_cryptohash_ctx is right for each\n> cryptohash built.\n\nIt's a safe but slow option, NSS wasn't really made for running a single crypto\noperation. Since we are opening a context which isn't backed by an NSS\ndatabase we could have a static context, which indeed speeds up processing a\nlot. The problem with that is that there is no good callsite for closing the\ncontext as the backend is closing down. Since you are kneedeep in the\ncryptohash code, do you have any thoughts on this? I've included 0008 which\nimplements this, with a commented out dummy stub for cleaning up.\n\nMaking nss_context static in cryptohash_nss.c is\nappealing but there is no good option for closing it there. Any thoughts on\nhow to handle global contexts like this?\n\n> src/tools/msvc/ is missing an update for cryptohash_nss.c.\n\nFixed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 29 Jan 2021 13:57:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 29 Jan 2021, at 07:01, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jan 29, 2021 at 12:20:21AM +0100, Daniel Gustafsson wrote:\n>> SSL is admittedly an obsolete technical term, but it's one that enough people\n>> have decided is interchangeable with TLS that it's not a hill worth dying on\n>> IMHO. 
Since postgres won't allow for using libnss or OpenSSL for cryptohash\n>> *without* compiling SSL/TLS support (used or not), I think --with-ssl=LIB is\n>> more descriptive and less confusing.\n> \n> Okay, let's use --with-ssl then for the new switch name. The previous\n> patch is backward-compatible, and will simplify the rest of the set,\n> so let's move on with it. Once this is done, my guess is that it\n> would be cleaner to have a new patch that includes only the\n> ./configure and MSVC changes, and then the rest: test refactoring,\n> cryptohash, strong random and lastly TLS (we may want to cut this a\n> bit more though and perhaps have some restrictions depending on the\n> scope of options a first patch set could support).\n> \n> I'll wait a bit first to see if there are any objections to this\n> change.\n\nI'm still not convinced that adding --with-ssl=openssl is worth it before the\nrest of NSS goes in (and more importantly, *if* it goes in).\n\nOn the one hand, we already have pluggable (for some value of) support for\nadding TLS libraries, and adding --with-ssl is one more piece of that puzzle.\nWe could of course have endless --with-X options instead but as you say,\n--with-uuid has set the tone here (and I believe that's good). On the other\nhand, if we never add any other library than OpenSSL then it's just complexity\nwithout benefit.\n\nAs mentioned elsewhere in the thread, the current v23 patchset has the\n--with-ssl change as a separate commit to at least make it visual what it looks\nlike. 
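To make the build-time selection concrete, the invocations under discussion look like this (a sketch only: the nss value assumes the rest of this patchset and is not an option in any released tree):

```shell
# Legacy switch, still accepted for backwards compatibility:
./configure --with-openssl

# Equivalent new spelling after the --with-ssl commit:
./configure --with-ssl=openssl

# Proposed value once the NSS backend lands (assumes this patchset):
./configure --with-ssl=nss
```

Leaving both switches out still builds without SSL support, as before.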
The documentation changes are in the main NSS patch though, since\ndocumenting --with-ssl when there is only one possible value didn't seem to be\nhelpful to users who are fully expected to use --with-openssl still.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 29 Jan 2021 14:13:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-01-29 at 13:57 +0100, Daniel Gustafsson wrote:\r\n> > On 21 Jan 2021, at 06:21, Michael Paquier <michael@paquier.xyz> wrote:\r\n> > I really need to study more the choice of the options chosen for\r\n> > NSS_InitContext()... But based on the docs I can read on the matter I\r\n> > think that saving nsscontext in pg_cryptohash_ctx is right for each\r\n> > cryptohash built.\r\n> \r\n> It's a safe but slow option, NSS wasn't really made for running a single crypto\r\n> operation.  Since we are opening a context which isn't backed by an NSS\r\n> database we could have a static context, which indeed speeds up processing a\r\n> lot.  The problem with that is that there is no good callsite for closing the\r\n> context as the backend is closing down.  Since you are knee-deep in the\r\n> cryptohash code, do you have any thoughts on this?  I've included 0008 which\r\n> implements this, with a commented-out dummy stub for cleaning up.\r\n> \r\n> Making nss_context static in cryptohash_nss.c is\r\n> appealing but there is no good option for closing it there.  Any thoughts on\r\n> how to handle global contexts like this?\r\n\r\nI'm completely new to this code, so take my thoughts with a grain of\r\nsalt...\r\n\r\nI think the bad news is that the static approach will need support for\r\nENABLE_THREAD_SAFETY. (It looks like the NSS implementation of\r\npgtls_close() needs some thread support too?)\r\n\r\nThe good(?) 
news is that I don't understand why OpenSSL's\r\nimplementation of cryptohash doesn't _also_ need the thread-safety\r\ncode. (Shouldn't we need to call CRYPTO_set_locking_callback() et al\r\nbefore using any of its cryptohash implementation?) So maybe we can\r\nimplement the same global setup/teardown API for OpenSSL too and not\r\nhave to one-off it for NSS...\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 29 Jan 2021 18:46:17 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jan 29, 2021 at 02:13:30PM +0100, Daniel Gustafsson wrote:\n> I'm still not convinced that adding --with-ssl=openssl is worth it before the\n> rest of NSS goes in (and more importantly, *if* it goes in).\n>\n> On the one hand, we already have pluggable (for some value of) support for\n> adding TLS libraries, and adding --with-ssl is one more piece of that puzzle.\n> We could of course have endless --with-X options instead but as you say,\n> --with-uuid has set the tone here (and I believe that's good). On the other\n> hand, if we never add any other library than OpenSSL then it's just complexity\n> without benefit.\n\nIMO, one could say the same thing for any piece of refactoring we have\ndone in the past to make the TLS/crypto code more modular. There is\ndemand for being able to choose among multiple SSL libs at build time,\nand we are still in a phase where we evaluate the options at hand.\nThis refactoring is just careful progress, and this is one step in\nthis direction. The piece about refactoring the SSL tests is\nsimilar.\n\n> As mentioned elsewhere in the thread, the current v23 patchset has the\n> --with-ssl change as a separate commit to at least make it visual what it looks\n> like. 
The documentation changes are in the main NSS patch though since\n> documenting --with-ssl when there is only one possible value didn't seem to be\n> helpful to users whom are fully expected to use --with-openssl still.\n\nThe documentation changes should be part of the patch introducing the\nswitch IMO: a description of the new switch, as well as a paragraph\nabout the old value being deprecated. That's done this way for UUID.\n--\nMichael", "msg_date": "Sat, 30 Jan 2021 10:27:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jan 29, 2021 at 01:57:02PM +0100, Daniel Gustafsson wrote:\n> This has been discussed elsewhere in the thread, so let's continue that there.\n> The attached v23 does however split off --with-ssl for OpenSSL in 0001, adding\n> the nss option in 0002.\n\nWhile going through 0001, I have found a couple of things.\n\n-CF_SRCS = $(if $(subst no,,$(with_openssl)), $(OSSL_SRCS), $(INT_SRCS))\n-CF_TESTS = $(if $(subst no,,$(with_openssl)), $(OSSL_TESTS), $(INT_TESTS))\n+CF_SRCS = $(if $(subst openssl,,$(with_ssl)), $(OSSL_SRCS), $(INT_SRCS))\n+CF_TESTS = $(if $(subst openssl,,$(with_ssl)), $(OSSL_TESTS), $(INT_TESTS))\nIt seems to me that this part is the opposite, aka here the OpenSSL\nfiles and tests (OSSL*) would be used if with_ssl is not openssl.\n\n-ifeq ($(with_openssl),yes)\n+ifneq ($(with_ssl),no)\n+OBJS += \\\n+ fe-secure-common.o\n+endif\nThis split is better, good idea.\n\nThe two SSL tests still included a reference to with_openssl after\n0001:\nsrc/test/ssl/t/001_ssltests.pl:if ($ENV{with_openssl} eq 'yes')\nsrc/test/ssl/t/002_scram.pl:if ($ENV{with_openssl} ne 'yes')\n\nI have refreshed the docs on top to be consistent with the new\nconfiguration, and applied it after more checks. 
I'll try to look in\nmore detail at the failures with cryptohashes I found upthread.\n--\nMichael", "msg_date": "Mon, 1 Feb 2021 22:25:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 20 Jan 2021, at 18:07, Jacob Champion <pchampion@vmware.com> wrote:\n\n> To continue the Subject Common Name discussion [1] from a different\n> part of the thread:\n> \n> Attached is a v23 version of the patchset that peels the raw Common\n> Name out from a client cert's Subject. This allows the following cases\n> that the OpenSSL implementation currently handles:\n> \n> - subjects that don't begin with a CN\n> - subjects with quotable characters\n> - subjects that have no CN at all\n\nNice, thanks for fixing this!\n\n> Embedded NULLs are now handled in a similar manner to the OpenSSL side,\n> though because this failure happens during the certificate\n> authentication callback, it results in a TLS alert rather than simply\n> closing the connection.\n\nBut returning SECFailure from the cert callback forces NSS to terminate the\nconnection immediately, doesn't it?\n\n> For easier review of just the parts I've changed, I've also attached a\n> since-v22.diff, which is part of the 0001 patch.\n\nI confused my dev trees and failed to include this in the v23 that I sent out\n(which should've been v24), sorry about that.  Attached is a v24 which is\nrebased on top of today's --with-ssl commit, and now includes your changes.\n\nAdditionally I've added a shutdown callback such that we close the connection\nimmediately if NSS is shutting down from underneath us. 
I can't imagine a\nscenario in which that's benign, so let's take whatever precautions we can.\n\nI've also changed the NSS initialization in the cryptohash code to more closely match\nwhat the NSS documentation recommends for similar scenarios, but more on that\ndownthread where that's discussed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 1 Feb 2021 21:49:20 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 29 Jan 2021, at 19:46, Jacob Champion <pchampion@vmware.com> wrote:\n\n> I think the bad news is that the static approach will need support for\n> ENABLE_THREAD_SAFETY.\n\nI did some more reading today and noticed that the NSS documentation (and their\nsample code for doing crypto without TLS connections) says to use NSS_NoDB_Init\nto perform a read-only init which doesn't require a matching close call.  Now,\nthe docs aren't terribly clear and also seem to have gone offline from MDN,\nand skimming the code isn't entirely self-explanatory, so I may well have\nmissed something.  The v24 patchset posted changes to this and at least passes\ntests with decent performance so it seems worth investigating.\n\n> (It looks like the NSS implementation of pgtls_close() needs some thread\n> support too?)\n\n\nStoring the context in conn would probably be better?\n\n> The good(?) news is that I don't understand why OpenSSL's\n> implementation of cryptohash doesn't _also_ need the thread-safety\n> code. (Shouldn't we need to call CRYPTO_set_locking_callback() et al\n> before using any of its cryptohash implementation?) 
So maybe we can\n> implement the same global setup/teardown API for OpenSSL too and not\n> have to one-off it for NSS...\n\nNo idea here, wouldn't that impact pgcrypto as well in that case?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 1 Feb 2021 21:49:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 1 Feb 2021, at 14:25, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I have refreshed the docs on top to be consistent with the new\n> configuration, and applied it after more checks.\n\nThanks, I was just about to send a rebased version earlier today with the doc\nchanges in the 0001 patch when this email landed in my inbox =)  The v24 posted\nupthread is now rebased on top of this.\n\n> I'll try to look in more detail at the failures with cryptohashes I found\n> upthread.\n\nGreat, thanks.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 1 Feb 2021 21:51:49 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-02-01 at 21:49 +0100, Daniel Gustafsson wrote:\r\n> > On 29 Jan 2021, at 19:46, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > I think the bad news is that the static approach will need support for\r\n> > ENABLE_THREAD_SAFETY.\r\n> \r\n> I did some more reading today and noticed that the NSS documentation (and their\r\n> sample code for doing crypto without TLS connections) says to use NSS_NoDB_Init\r\n> to perform a read-only init which doesn't require a matching close call.  Now,\r\n> the docs aren't terribly clear and also seem to have gone offline from MDN,\r\n> and skimming the code isn't entirely self-explanatory, so I may well have\r\n> missed something. 
The v24 patchset posted changes to this and at least passes\r\n> tests with decent performance so it seems worth investigating.\r\n\r\nNice! Not having to close helps quite a bit.\r\n\r\n(Looks like thread safety for NSS_Init was added in 3.13, so we have an\r\nabsolute version floor.)\r\n\r\n> > (It looks like the NSS implementation of pgtls_close() needs some thread\r\n> > support too?)\r\n> \r\n> Storing the context in conn would probably be better?\r\n\r\nAgreed.\r\n\r\n> > The good(?) news is that I don't understand why OpenSSL's\r\n> > implementation of cryptohash doesn't _also_ need the thread-safety\r\n> > code. (Shouldn't we need to call CRYPTO_set_locking_callback() et al\r\n> > before using any of its cryptohash implementation?) So maybe we can\r\n> > implement the same global setup/teardown API for OpenSSL too and not\r\n> > have to one-off it for NSS...\r\n> \r\n> No idea here, wouldn't that impact pgcrypto as well in that case?\r\n\r\nIf pgcrypto is backend-only then I don't think it should need\r\nmultithreading protection; is that right?\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 2 Feb 2021 00:42:23 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-02-01 at 21:49 +0100, Daniel Gustafsson wrote:\r\n> > Embedded NULLs are now handled in a similar manner to the OpenSSL side,\r\n> > though because this failure happens during the certificate\r\n> > authentication callback, it results in a TLS alert rather than simply\r\n> > closing the connection.\r\n> \r\n> But returning SECFailure from the cert callback forces NSS to terminate the\r\n> connection immediately, doesn't it?\r\n\r\nIIRC NSS will send the alert first, whereas our OpenSSL implementation\r\nwill complete the handshake and then drop the connection. 
I'll rebuild\r\nwith the latest and confirm.\r\n\r\n> > For easier review of just the parts I've changed, I've also attached a\r\n> > since-v22.diff, which is part of the 0001 patch.\r\n> \r\n> I confused my dev trees and failed to include this in the v23 that I sent out\r\n> (which should've been v24), sorry about that.  Attached is a v24 which is\r\n> rebased on top of today's --with-ssl commit, and now includes your changes.\r\n\r\nNo problem. Thanks!\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 2 Feb 2021 00:55:57 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Feb 02, 2021 at 12:42:23AM +0000, Jacob Champion wrote:\n> (Looks like thread safety for NSS_Init was added in 3.13, so we have an\n> absolute version floor.)\n\nIf that's the case, I would recommend adding at least something in the\nsection called install-requirements in the docs.\n\n> If pgcrypto is backend-only then I don't think it should need\n> multithreading protection; is that right?\n\nNo need for it in the backend, unless there are plans to switch from\nprocesses to threads there :p\n\nlibpq, ecpg and anything using them have to care about that. 
Worth\nnoting that OpenSSL also has some special handling in libpq with\nCRYPTO_get_id_callback() and that it tracks the number of opened\nconnections.\n--\nMichael", "msg_date": "Tue, 2 Feb 2021 10:06:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2021-02-02 at 00:55 +0000, Jacob Champion wrote:\r\n> On Mon, 2021-02-01 at 21:49 +0100, Daniel Gustafsson wrote:\r\n> > > Embedded NULLs are now handled in a similar manner to the OpenSSL side,\r\n> > > though because this failure happens during the certificate\r\n> > > authentication callback, it results in a TLS alert rather than simply\r\n> > > closing the connection.\r\n> > \r\n> > But returning SECFailure from the cert callback force NSS to terminate the\r\n> > connection immediately doesn't it?\r\n> \r\n> IIRC NSS will send the alert first, whereas our OpenSSL implementation\r\n> will complete the handshake and then drop the connection. I'll rebuild\r\n> with the latest and confirm.\r\n\r\nI wasn't able to reproduce the behavior I thought I saw before. In any\r\ncase I think the current NSS implementation for embedded NULLs will\r\nwork correctly.\r\n\r\n> > Attached is a v24 which is\r\n> > rebased on top of todays --with-ssl commit, and now includes your changes.\r\n\r\nI have a v25 attached which fixes and re-enables the skipped/todo'd\r\nclient certificate and SCRAM tests. (Changes between v24 and v25 are in\r\nsince-v24.diff.) The server-cn-only database didn't have the root CA\r\ninstalled to be able to verify client certificates, so I've added it.\r\n\r\nNote that this changes the error message printed during the invalid-\r\nroot tests, because NSS is now sending the root of the chain. 
So the\r\nserver's issuer is considered untrusted rather than unrecognized.\r\n\r\n--Jacob", "msg_date": "Tue, 2 Feb 2021 20:33:35 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Feb 02, 2021 at 08:33:35PM +0000, Jacob Champion wrote:\n> Note that this changes the error message printed during the invalid-\n> root tests, because NSS is now sending the root of the chain. So the\n> server's issuer is considered untrusted rather than unrecognized.\n\nI think that it is not a good idea to attach the since-v*.diff patches\ninto the threads. This causes the CF bot to fail in applying those\npatches.\n\nCould it be possible to split 0001 into two parts at least with one\npatch that includes the basic changes for the build and ./configure,\nand a second with the FE/BE changes?\n--\nMichael", "msg_date": "Thu, 4 Feb 2021 16:30:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-02-04 at 16:30 +0900, Michael Paquier wrote:\r\n> On Tue, Feb 02, 2021 at 08:33:35PM +0000, Jacob Champion wrote:\r\n> > Note that this changes the error message printed during the invalid-\r\n> > root tests, because NSS is now sending the root of the chain. So the\r\n> > server's issuer is considered untrusted rather than unrecognized.\r\n> \r\n> I think that it is not a good idea to attach the since-v*.diff patches\r\n> into the threads. This causes the CF bot to fail in applying those\r\n> patches.\r\n\r\nAh, sorry about that. 
Is there an extension I can use (or lack thereof)\r\nthat the CF bot will ignore, or does it scan the attachment contents?\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 4 Feb 2021 18:35:28 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Feb 04, 2021 at 06:35:28PM +0000, Jacob Champion wrote:\n> Ah, sorry about that. Is there an extension I can use (or lack thereof)\n> that the CF bot will ignore, or does it scan the attachment contents?\n\nThe thing is smart, but there are ways to bypass it. Here is the\ncode:\nhttps://github.com/macdice/cfbot/\n\nAnd here are the patterns looked at:\ncfbot_commitfest_rpc.py: groups = re.search('<a\nhref=\"(/message-id/attachment/[^\"]*\\\\.(diff|diff\\\\.gz|patch|patch\\\\.gz|tar\\\\.gz|tgz|tar\\\\.bz2))\">',\nline)\n--\nMichael", "msg_date": "Fri, 5 Feb 2021 15:35:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Feb 2021, at 08:30, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Could it be possible to split 0001 into two parts at least with one\n> patch that includes the basic changes for the build and ./configure,\n> and a second with the FE/BE changes?\n\nAttached is a new patchset where I've tried to split the patches even further\nto try and separate out changes for easier review. 
While not a perfect split\nI'm sure, and clearly only for review purposes, I do hope it helps a little.\nThere is one hunk in 0002 which moves some OpenSSL specific code from\nunderneath USE_SSL, but thats about the only non-NSS change left in this\npatchset AFAICS.\n\nAdditionally, this version moves the code in thee shared header to a proper .c\nfile shared between frontend and backend as well as performs some general\ncleanup around that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 9 Feb 2021 00:08:37 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Feb 2021, at 19:35, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Thu, 2021-02-04 at 16:30 +0900, Michael Paquier wrote:\n>> On Tue, Feb 02, 2021 at 08:33:35PM +0000, Jacob Champion wrote:\n>>> Note that this changes the error message printed during the invalid-\n>>> root tests, because NSS is now sending the root of the chain. So the\n>>> server's issuer is considered untrusted rather than unrecognized.\n>> \n>> I think that it is not a good idea to attach the since-v*.diff patches\n>> into the threads. This causes the CF bot to fail in applying those\n>> patches.\n> \n> Ah, sorry about that. 
Is there an extension I can use (or lack thereof)\n> that the CF bot will ignore, or does it scan the attachment contents?\n\nNaming the file .patch.txt should work, and it serves the double purpose of\nmaking it extra clear that this is not a patch intended to be applied but one\nintended to be read for informational purposes.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 9 Feb 2021 00:11:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Feb 09, 2021 at 12:08:37AM +0100, Daniel Gustafsson wrote:\n> Attached is a new patchset where I've tried to split the patches even further\n> to try and separate out changes for easier review. While not a perfect split\n> I'm sure, and clearly only for review purposes, I do hope it helps a little.\n> There is one hunk in 0002 which moves some OpenSSL specific code from\n> underneath USE_SSL, but thats about the only non-NSS change left in this\n> patchset AFAICS.\n\nI would have imagined 0010 to be either a 0001 or a 0002 :)\n\n }\n+#endif /* USE_SSL */\n+\n+#ifndef USE_OPENSSL\n\nPQsslKeyPassHook_OpenSSL_type\nPQgetSSLKeyPassHook_OpenSSL(void)\nIndeed. Let's fix that on HEAD, as an independent thing.\n\n errmsg(\"hostssl record cannot match because SSL is not supported by this build\"),\n- errhint(\"Compile with --with-ssl=openssl to use SSL connections.\"),\n+ errhint(\"Compile with --with-ssl to use SSL connections.\"),\nActually, we could change that directly on HEAD as you suggest. This\ncode area is surrounded with USE_SSL so there is no need to mention\nopenssl at all.\n\n-/* Support for overriding sslpassword handling with a callback. */\n+/* Support for overriding sslpassword handling with a callback */\nMakes sense.\n\n /*\n * USE_SSL code should be compiled only when compiling with an SSL\n- * implementation. 
(Currently, only OpenSSL is supported, but we might add\n- * more implementations in the future.)\n+ * implementation.\n */\nFine by me as well, meaning that 0002 could just be committed as-is.\nI am also looking at 0003 a bit.\n--\nMichael", "msg_date": "Tue, 9 Feb 2021 15:47:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 9 Feb 2021, at 07:47, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Feb 09, 2021 at 12:08:37AM +0100, Daniel Gustafsson wrote:\n>> Attached is a new patchset where I've tried to split the patches even further\n>> to try and separate out changes for easier review. While not a perfect split\n>> I'm sure, and clearly only for review purposes, I do hope it helps a little.\n>> There is one hunk in 0002 which moves some OpenSSL specific code from\n>> underneath USE_SSL, but thats about the only non-NSS change left in this\n>> patchset AFAICS.\n> \n> I would have imagined 0010 to be either a 0001 or a 0002 :)\n\nWell, 0010 is a 2 in binary =) Jokes aside, I just didn't want to have a patch\nreferencing files added by later patches in the series.\n\n> errmsg(\"hostssl record cannot match because SSL is not supported by this build\"),\n> - errhint(\"Compile with --with-ssl=openssl to use SSL connections.\"),\n> + errhint(\"Compile with --with-ssl to use SSL connections.\"),\n> Actually, we could change that directly on HEAD as you suggest. This\n> code area is surrounded with USE_SSL so there is no need to mention\n> openssl at all.\n\nWe could, the only reason it says =openssl today is that it's the only possible\nvalue but thats an implementation detail. 
Changing it now before it's shipped\nanywhere means the translation will be stable even if another library is\nsupported.\n\n> 0002 could just be committed as-is.\n\nIt can be, it's not the most pressing patch scope reduction but everything\nhelps of course.\n\n> I am also looking at 0003 a bit.\n\nThanks. That patch is slightly more interesting in terms of reducing scope\nhere, but I also think it makes the test code a bit easier to digest when\ncertificate management is abstracted into the API rather than the job of the\ntestfile to perform.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n", "msg_date": "Tue, 9 Feb 2021 10:30:52 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Feb 09, 2021 at 10:30:52AM +0100, Daniel Gustafsson wrote:\n> It can be, it's not the most pressing patch scope reduction but everything\n> helps of course.\n\nOkay. I have spent some time on this one and finished it.\n\n> Thanks. That patch is slightly more interesting in terms of reducing scope\n> here, but I also think it makes the test code a bit easier to digest when\n> certificate management is abstracted into the API rather than the job of the\n> testfile to perform.\n\nThat's my impression. Still, I am wondering if there could be a\ndifferent approach. I need to think more about that first..\n--\nMichael", "msg_date": "Wed, 10 Feb 2021 16:23:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 10 Feb 2021, at 08:23, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Feb 09, 2021 at 10:30:52AM +0100, Daniel Gustafsson wrote:\n>> It can be, it's not the most pressing patch scope reduction but everything\n>> helps of course.\n> \n> Okay. 
I have spent some time on this one and finished it.\n\nThanks, I'll post a rebased version on top of this soon.\n\n>> Thanks. That patch is slightly more interesting in terms of reducing scope\n>> here, but I also think it makes the test code a bit easier to digest when\n>> certificate management is abstracted into the API rather than the job of the\n>> testfile to perform.\n> \n> That's my impression. Still, I am wondering if there could be a\n> different approach. I need to think more about that first..\n\nAnother option could be to roll SSL config into PostgresNode and expose SSL\nconnections to every subsystem tested with TAP. Something like:\n\n\t$node = get_new_node(..);\n\t$node->setup_ssl(..);\n\t$node->set_certificate(..);\n\nThat is a fair bit more work though, but perhaps we could then easier find\n(and/or prevent) bugs like the one fixed in a45bc8a4f6495072bc48ad40a5aa03.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 10 Feb 2021 13:17:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2020-07-20 at 15:35 +0200, Daniel Gustafsson wrote:\r\n> This version adds support for sslinfo on NSS for most the functions.\r\n\r\nI've poked around to see what can be done about the\r\nunimplemented ssl_client_dn_field/ssl_issuer_field functions. There's a\r\nnasty soup of specs to wade around in, and it's not really clear to me\r\nwhich ones take precedence since they're mostly centered on LDAP.\r\n\r\nMy take on it is that OpenSSL has done its own thing here, with almost-\r\nbased-on-a-spec-but-not-quite semantics. NSS has no equivalents to many\r\nof the field names that OpenSSL supports (e.g. \"commonName\"). Likewise,\r\nOpenSSL doesn't support case-insensitivity (e.g. \"cn\" in addition to\r\n\"CN\") as many of the relevant RFCs require. 
They do both support\r\ndotted-decimal representations, so we could theoretically get feature\r\nparity there without a huge amount of work.\r\n\r\nFor the few attributes that NSS has a public API for retrieving:\r\n- common name\r\n- country\r\n- locality\r\n- state\r\n- organization\r\n- domain component\r\n- org. unit\r\n- DN qualifier\r\n- uid\r\n- email address(es?)\r\nwe could hardcode the list of OpenSSL-compatible names, and just\r\ntranslate manually in sslinfo. Then leave the rest up to dotted-decimal \r\nOIDs.\r\n\r\nWould that be desirable, or do we want this interface to be something\r\nmore generally compatible with (some as-of-yet unspecified) spec?\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 17 Feb 2021 01:02:15 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 17 Feb 2021, at 02:02, Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Mon, 2020-07-20 at 15:35 +0200, Daniel Gustafsson wrote:\n>> This version adds support for sslinfo on NSS for most the functions.\n> \n> I've poked around to see what can be done about the\n> unimplemented ssl_client_dn_field/ssl_issuer_field functions. There's a\n> nasty soup of specs to wade around in, and it's not really clear to me\n> which ones take precedence since they're mostly centered on LDAP.\n\nThanks for digging!\n\n> we could hardcode the list of OpenSSL-compatible names, and just\n> translate manually in sslinfo. Then leave the rest up to dotted-decimal \n> OIDs.\n> \n> Would that be desirable, or do we want this interface to be something\n> more generally compatible with (some as-of-yet unspecified) spec?\n\nRegardless of approach taken I think this sounds like something that should be\ntackled in a follow-up patch if the NSS patch is merged - and probably only as\na follow-up to a patch that adds test coverage to sslinfo. 
From the sounds of\nthings me may not be able to guarantee stability across OpenSSL versions as it\nis right now?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 17 Feb 2021 22:19:35 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 10 Feb 2021, at 13:17, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 10 Feb 2021, at 08:23, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Tue, Feb 09, 2021 at 10:30:52AM +0100, Daniel Gustafsson wrote:\n>>> It can be, it's not the most pressing patch scope reduction but everything\n>>> helps of course.\n>> \n>> Okay. I have spent some time on this one and finished it.\n> \n> Thanks, I'll post a rebased version on top of this soon.\n\nAttached is a rebase on top of this and the recent cryptohash changes to pass\nin buffer lengths to the _final function. On top of that, I fixed up and\nexpanded the documentation, improved SCRAM handling (by using NSS digest\noperations which are better suited) and reworded and expanded comments. 
This\npatch version is, I think, feature complete with the OpenSSL implementation.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 17 Feb 2021 22:35:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-02-17 at 22:19 +0100, Daniel Gustafsson wrote:\r\n> > On 17 Feb 2021, at 02:02, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > Would that be desirable, or do we want this interface to be something\r\n> > more generally compatible with (some as-of-yet unspecified) spec?\r\n> \r\n> Regardless of approach taken I think this sounds like something that should be\r\n> tackled in a follow-up patch if the NSS patch is merged - and probably only as\r\n> a follow-up to a patch that adds test coverage to sslinfo.\r\n\r\nSounds good, and +1 to adding coverage at the same time.\r\n\r\n> From the sounds of\r\n> things me may not be able to guarantee stability across OpenSSL versions as it\r\n> is right now?\r\n\r\nYeah. I was going to write that OpenSSL would be unlikely to change\r\nthese once they're added for the first time, but after checking GitHub\r\nit looks like they have done so recently [1], as part of a patch\r\nrelease no less.\r\n\r\n--Jacob\r\n\r\n[1] https://github.com/openssl/openssl/pull/10029\r\n", "msg_date": "Wed, 17 Feb 2021 22:02:38 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-02-17 at 22:35 +0100, Daniel Gustafsson wrote:\r\n> Attached is a rebase on top of this and the recent cryptohash changes to pass\r\n> in buffer lengths to the _final function. On top of that, I fixed up and\r\n> expanded the documentation, improved SCRAM handling (by using NSS digest\r\n> operations which are better suited) and reworded and expanded comments. 
This\r\n> patch version is, I think, feature complete with the OpenSSL implementation.\r\n\r\nfe-secure-nss.c is no longer compiling as of this patchset; looks\r\nlike pgtls_open_client() has a truncated statement.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 18 Feb 2021 20:33:18 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 18 Feb 2021, at 21:33, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Wed, 2021-02-17 at 22:35 +0100, Daniel Gustafsson wrote:\n>> Attached is a rebase on top of this and the recent cryptohash changes to pass\n>> in buffer lengths to the _final function. On top of that, I fixed up and\n>> expanded the documentation, improved SCRAM handling (by using NSS digest\n>> operations which are better suited) and reworded and expanded comments. This\n>> patch version is, I think, feature complete with the OpenSSL implementation.\n> \n> fe-secure-nss.c is no longer compiling as of this patchset; looks\n> like pgtls_open_client() has a truncated statement.\n\nOuch, I had a local mismerge that snuck in as I moved the branch around for\nsubmission here. The attached fixes that as well as implements the sslcrldir\nsupport that was committed recently. The crldir parameter isn't applicable to\nNSS per se since all CRL's are loaded into the NSS database, but it does need\nto be supported for the tests.\n\nThe crldir commit also made similar changes to the test harness as I had done\nto support the NSS database, which made these incompatible. 
To fix that I've\nimplemented named parameters in switch_server_cert to make it less magic with\nmultiple optional parameters.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 22 Feb 2021 14:31:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-02-22 at 14:31 +0100, Daniel Gustafsson wrote:\r\n> The attached fixes that as well as implements the sslcrldir\r\n> support that was committed recently. The crldir parameter isn't applicable to\r\n> NSS per se since all CRL's are loaded into the NSS database, but it does need\r\n> to be supported for the tests.\r\n> \r\n> The crldir commit also made similar changes to the test harness as I had done\r\n> to support the NSS database, which made these incompatible. To fix that I've\r\n> implemented named parameters in switch_server_cert to make it less magic with\r\n> multiple optional parameters.\r\n\r\nThe named parameters are a big improvement!\r\n\r\nCouple things I've noticed with this patch, back on the OpenSSL side.\r\nIn SSL::Backend::OpenSSL's set_server_conf() implementation:\r\n\r\n> + my $sslconf =\r\n> + \"ssl_ca_file='$params->{cafile}.crt'\\n\"\r\n> + . \"ssl_cert_file='$params->{certfile}.crt'\\n\"\r\n> + . \"ssl_key_file='$params->{keyfile}.key'\\n\"\r\n> + . 
\"ssl_crl_file='$params->{crlfile}'\\n\";\r\n> + $sslconf .= \"ssl_crl_dir='$params->{crldir}'\\n\" if defined $params->{crldir};\r\n> }\r\n\r\nthis is missing a `return $sslconf` at the end.\r\n\r\nIn 001_ssltests.pl:\r\n\r\n> -set_server_cert($node, 'server-cn-only', 'root+client_ca',\r\n> - 'server-password', 'echo wrongpassword');\r\n> -command_fails(\r\n> - [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\r\n> - 'restart fails with password-protected key file with wrong password');\r\n> -$node->_update_pid(0);\r\n> +# Since the passphrase callbacks operate at different stages in OpenSSL and\r\n> +# NSS we have two separate blocks for them\r\n> +SKIP:\r\n> +{\r\n> + skip \"Certificate passphrases aren't checked on server restart in NSS\", 2\r\n> + if ($nss);\r\n> +\r\n> + switch_server_cert($node,\r\n> + certfile => 'server-cn-only',\r\n> + cafile => 'root+client_ca',\r\n> + keyfile => 'server-password',\r\n> + nssdatabase => 'server-cn-only.crt__server-password.key.db',\r\n> + passphrase_cmd => 'echo wrongpassword');\r\n> +\r\n> + command_fails(\r\n> + [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\r\n> + 'restart fails with password-protected key file with wrong password');\r\n> + $node->_update_pid(0);\r\n\r\nThe removal of set_server_cert() in favor of switch_server_cert()\r\nbreaks these tests in OpenSSL, because the restart that\r\nswitch_server_cert performs will fail as designed. (The new comment\r\nabove switch_server_cert() suggests maybe you had a switch in mind to\r\nskip the restart?)\r\n\r\nNSS is not affected because we expect the restart to succeed:\r\n\r\n> + command_ok(\r\n> + [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\r\n> + 'restart fails with password-protected key file with wrong password');\r\n\r\nbut I'd argue that that NSS test and the one after it should probably\r\nbe removed. We already know restart succeeded; otherwise\r\nswitch_server_cert() would have failed. 
(The test descriptions also\r\nhave the old \"restart fails\" verbiage.)\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Feb 2021 00:11:55 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 24 Feb 2021, at 01:11, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2021-02-22 at 14:31 +0100, Daniel Gustafsson wrote:\n>> The attached fixes that as well as implements the sslcrldir\n>> support that was committed recently. The crldir parameter isn't applicable to\n>> NSS per se since all CRL's are loaded into the NSS database, but it does need\n>> to be supported for the tests.\n>> \n>> The crldir commit also made similar changes to the test harness as I had done\n>> to support the NSS database, which made these incompatible. To fix that I've\n>> implemented named parameters in switch_server_cert to make it less magic with\n>> multiple optional parameters.\n> \n> The named parameters are a big improvement!\n> \n> Couple things I've noticed with this patch, back on the OpenSSL side.\n> In SSL::Backend::OpenSSL's set_server_conf() implementation:\n> \n>> + my $sslconf =\n>> + \"ssl_ca_file='$params->{cafile}.crt'\\n\"\n>> + . \"ssl_cert_file='$params->{certfile}.crt'\\n\"\n>> + . \"ssl_key_file='$params->{keyfile}.key'\\n\"\n>> + . \"ssl_crl_file='$params->{crlfile}'\\n\";\n>> + $sslconf .= \"ssl_crl_dir='$params->{crldir}'\\n\" if defined $params->{crldir};\n>> }\n> \n> this is missing a `return $sslconf` at the end.\n\nYeah, I was clearly undercaffeinated and forgot to re-run the tests on OpenSSL\nafter some hackery. 
Fixed.\n\n> In 001_ssltests.pl:\n> \n>> -set_server_cert($node, 'server-cn-only', 'root+client_ca',\n>> - 'server-password', 'echo wrongpassword');\n>> -command_fails(\n>> - [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n>> - 'restart fails with password-protected key file with wrong password');\n>> -$node->_update_pid(0);\n>> +# Since the passphrase callbacks operate at different stages in OpenSSL and\n>> +# NSS we have two separate blocks for them\n>> +SKIP:\n>> +{\n>> + skip \"Certificate passphrases aren't checked on server restart in NSS\", 2\n>> + if ($nss);\n>> +\n>> + switch_server_cert($node,\n>> + certfile => 'server-cn-only',\n>> + cafile => 'root+client_ca',\n>> + keyfile => 'server-password',\n>> + nssdatabase => 'server-cn-only.crt__server-password.key.db',\n>> + passphrase_cmd => 'echo wrongpassword');\n>> +\n>> + command_fails(\n>> + [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n>> + 'restart fails with password-protected key file with wrong password');\n>> + $node->_update_pid(0);\n> \n> The removal of set_server_cert() in favor of switch_server_cert()\n> breaks these tests in OpenSSL, because the restart that\n> switch_server_cert performs will fail as designed. (The new comment\n> above switch_server_cert() suggests maybe you had a switch in mind to\n> skip the restart?)\n\nI initially had a restart => 'yes' parameter which turned out too repetitive since\nnearly all calls want a restart. When I switched it to an opt-out I forgot to\nupdate the tests. Fixed.\n\n> NSS is not affected because we expect the restart to succeed:\n> \n>> + command_ok(\n>> + [ 'pg_ctl', '-D', $node->data_dir, '-l', $node->logfile, 'restart' ],\n>> + 'restart fails with password-protected key file with wrong password');\n> \n> but I'd argue that that NSS test and the one after it should probably\n> be removed. We already know restart succeeded; otherwise\n> switch_server_cert() would have failed. 
(The test descriptions also\n> have the old \"restart fails\" verbiage.)\n\nAgreed, removed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 24 Feb 2021 13:23:32 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Attached is a rebase which attempts to fix the cfbot Appveyor failure, there\nwere missing HAVE_ defines for MSVC.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 3 Mar 2021 09:52:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> Attached is a rebase which attempts to fix the cfbot Appveyor failure, there\n> were missing HAVE_ defines for MSVC.\n\n> Subject: [PATCH v30 1/9] nss: Support libnss as TLS library in libpq\n> \n> This commit contains the frontend and backend portion of TLS support\n> in libpq to allow encrypted connections. The implementation is done\n\nmaybe add 'using NSS' to that first sentence. ;)\n\n> +++ b/src/backend/libpq/auth.c\n> @@ -2849,7 +2849,14 @@ CheckCertAuth(Port *port)\n> {\n> \tint\t\t\tstatus_check_usermap = STATUS_ERROR;\n> \n> +#if defined(USE_OPENSSL)\n> \tAssert(port->ssl);\n> +#elif defined(USE_NSS)\n> +\t/* TODO: should we rename pr_fd to ssl, to keep consistency? */\n> +\tAssert(port->pr_fd);\n> +#else\n> +\tAssert(false);\n> +#endif\n\nHaving thought about this TODO item for a bit, I tend to think it's\nbetter to keep them distinct. They aren't the same and it might not be\nclear what's going on if one was to somehow mix them (at least if pr_fd\ncontinues to sometimes be a void*, but I wonder why that's being\ndone..? 
more on that later..).\n\n> +++ b/src/backend/libpq/be-secure-nss.c\n[...]\n> +/* default init hook can be overridden by a shared library */\n> +static void default_nss_tls_init(bool isServerStart);\n> +nss_tls_init_hook_type nss_tls_init_hook = default_nss_tls_init;\n\n> +static PRDescIdentity pr_id;\n> +\n> +static PRIOMethods pr_iomethods;\n\nHappy to be told I'm missing something, but the above two variables seem\nto only be used in init_iolayer.. is there a reason they're declared\nhere instead of just being declared in that function?\n\n> +\t/*\n> +\t * Set the fallback versions for the TLS protocol version range to a\n> +\t * combination of our minimal requirement and the library maximum. Error\n> +\t * messages should be kept identical to those in be-secure-openssl.c to\n> +\t * make translations easier.\n> +\t */\n\nShould we pull these error messages out into another header so that\nthey're in one place to make sure they're kept consistent, if we really\nwant to put the effort in to keep them the same..? I'm not 100% sure\nthat it's actually necessary to do so, but defining these in one place\nwould help maintain this if we want to. Also alright with just keeping\nthe comment, not that big of a deal.\n\n> +int\n> +be_tls_open_server(Port *port)\n> +{\n> +\tSECStatus\tstatus;\n> +\tPRFileDesc *model;\n> +\tPRFileDesc *pr_fd;\n\npr_fd here is materially different from port->pr_fd, no? As in, one is\nthe NSS raw TCP fd while the other is the SSL fd, right? Maybe we\nshould use two different variable names to try and make sure they don't\nget confused? Might even set this to NULL after we are done with it\ntoo.. Then again, I see later on that when we do the dance with the\n'model' PRFileDesc that we just use the same variable- maybe we should\ndo that? 
That is, just get rid of this 'pr_fd' and use port->pr_fd\nalways?\n\n> +\t/*\n> +\t * The NSPR documentation states that runtime initialization via PR_Init\n> +\t * is no longer required, as the first caller into NSPR will perform the\n> +\t * initialization implicitly. The documentation doesn't however clarify\n> +\t * from which version this is holds true, so let's perform the potentially\n> +\t * superfluous initialization anyways to avoid crashing on older versions\n> +\t * of NSPR, as there is no difference in overhead. The NSS documentation\n> +\t * still states that PR_Init must be called in some way (implicitly or\n> +\t * explicitly).\n> +\t *\n> +\t * The below parameters are what the implicit initialization would've done\n> +\t * for us, and should work even for older versions where it might not be\n> +\t * done automatically. The last parameter, maxPTDs, is set to various\n> +\t * values in other codebases, but has been unused since NSPR 2.1 which was\n> +\t * released sometime in 1998. In current versions of NSPR all parameters\n> +\t * are ignored.\n> +\t */\n> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0 /* maxPTDs */ );\n> +\n> +\t/*\n> +\t * The certificate path (configdir) must contain a valid NSS database. If\n> +\t * the certificate path isn't a valid directory, NSS will fall back on the\n> +\t * system certificate database. If the certificate path is a directory but\n> +\t * is empty then the initialization will fail. On the client side this can\n> +\t * be allowed for any sslmode but the verify-xxx ones.\n> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=728562 For the server side\n> +\t * we won't allow this to fail however, as we require the certificate and\n> +\t * key to exist.\n> +\t *\n> +\t * The original design of NSS was for a single application to use a single\n> +\t * copy of it, initialized with NSS_Initialize() which isn't returning any\n> +\t * handle with which to refer to NSS. 
NSS initialization and shutdown are\n> +\t * global for the application, so a shutdown in another NSS enabled\n> +\t * library would cause NSS to be stopped for libpq as well. The fix has\n> +\t * been to introduce NSS_InitContext which returns a context handle to\n> +\t * pass to NSS_ShutdownContext. NSS_InitContext was introduced in NSS\n> +\t * 3.12, but the use of it is not very well documented.\n> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=738456\n\nThe above seems to indicate that we will be requiring at least 3.12,\nright? Yet above we have code to work with NSPR versions before 2.1?\nMaybe we should put a stake in the ground that says \"we only support\nback to version X of NSS\", test with that and a few more recent versions\nand the most recent, and then rip out anything that's needed for\nversions which are older than that? I have a pretty hard time imagining\nthat someone is going to want to build PG v14 w/ NSS 2.0 ...\n\n> +\t{\n> +\t\tchar\t *ciphers,\n> +\t\t\t\t *c;\n> +\n> +\t\tchar\t *sep = \":;, \";\n> +\t\tPRUint16\tciphercode;\n> +\t\tconst\t\tPRUint16 *nss_ciphers;\n> +\n> +\t\t/*\n> +\t\t * If the user has specified a set of preferred cipher suites we start\n> +\t\t * by turning off all the existing suites to avoid the risk of down-\n> +\t\t * grades to a weaker cipher than expected.\n> +\t\t */\n> +\t\tnss_ciphers = SSL_GetImplementedCiphers();\n> +\t\tfor (int i = 0; i < SSL_GetNumImplementedCiphers(); i++)\n> +\t\t\tSSL_CipherPrefSet(model, nss_ciphers[i], PR_FALSE);\n> +\n> +\t\tciphers = pstrdup(SSLCipherSuites);\n> +\n> +\t\tfor (c = strtok(ciphers, sep); c; c = strtok(NULL, sep))\n> +\t\t{\n> +\t\t\tif (!pg_find_cipher(c, &ciphercode))\n> +\t\t\t{\n> +\t\t\t\tstatus = SSL_CipherPrefSet(model, ciphercode, PR_TRUE);\n> +\t\t\t\tif (status != SECSuccess)\n> +\t\t\t\t{\n> +\t\t\t\t\tereport(COMMERROR,\n> +\t\t\t\t\t\t\t(errmsg(\"invalid cipher-suite specified: %s\", c)));\n> +\t\t\t\t\treturn -1;\n> +\t\t\t\t}\n> +\t\t\t}\n> 
+\t\t}\n\nMaybe I'm a bit confused, but doesn't pg_find_cipher return *true* when\na cipher is found, and therefore the '!' above is saying \"if we don't\nfind a matching cipher, then run the code to set the cipher ...\". Also-\nwe don't seem to complain at all about a cipher being specified that we\ndon't find? Guess I would think that we might want to throw a WARNING\nin such a case, but I could possibly be convinced otherwise. Kind of\nwonder just what happens with the current code, I'm guessing ciphercode\nis zero and therefore doesn't complain but also doesn't do what we want.\nI wonder if there's a way to test this?\n\nI do think we should probably throw an error if we end up with *no*\nciphers being set, which doesn't seem to be happening here..?\n\n> +\t/*\n> +\t * Set up the custom IO layer.\n> +\t */\n\nMight be good to mention that the IO Layer is what sets up the\nread/write callbacks to be used.\n\n> +\tport->pr_fd = SSL_ImportFD(model, pr_fd);\n> +\tif (!port->pr_fd)\n> +\t{\n> +\t\tereport(COMMERROR,\n> +\t\t\t\t(errmsg(\"unable to initialize\")));\n> +\t\treturn -1;\n> +\t}\n\nMaybe a comment and a better error message for this?\n\n> +\tPR_Close(model);\n\nThis might deserve one also, the whole 'model' construct is a bit\ndifferent. 
:)\n\n> +\tport->ssl_in_use = true;\n> +\n> +\t/* Register out shutdown callback */\n\n*our\n\n> +int\n> +be_tls_get_cipher_bits(Port *port)\n> +{\n> +\tSECStatus\tstatus;\n> +\tSSLChannelInfo channel;\n> +\tSSLCipherSuiteInfo suite;\n> +\n> +\tstatus = SSL_GetChannelInfo(port->pr_fd, &channel, sizeof(channel));\n> +\tif (status != SECSuccess)\n> +\t\tgoto error;\n> +\n> +\tstatus = SSL_GetCipherSuiteInfo(channel.cipherSuite, &suite, sizeof(suite));\n> +\tif (status != SECSuccess)\n> +\t\tgoto error;\n> +\n> +\treturn suite.effectiveKeyBits;\n> +\n> +error:\n> +\tereport(WARNING,\n> +\t\t\t(errmsg(\"unable to extract TLS session information: %s\",\n> +\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n> +\treturn 0;\n> +}\n\nIt doesn't have to be much, but I, at least, do prefer to see\nfunction-header comments. :) Not that the OpenSSL code has them\nconsistently, so obviously not that big of a deal. Goes for a number of\nthe functions being added.\n\n> +\t\t\t/* Found a CN, ecode and copy it into a newly allocated buffer */\n\n*decode\n\n> +static PRInt32\n> +pg_ssl_read(PRFileDesc *fd, void *buf, PRInt32 amount, PRIntn flags,\n> +\t\t\tPRIntervalTime timeout)\n> +{\n> +\tPRRecvFN\tread_fn;\n> +\tPRInt32\t\tn_read;\n> +\n> +\tread_fn = fd->lower->methods->recv;\n> +\tn_read = read_fn(fd->lower, buf, amount, flags, timeout);\n> +\n> +\treturn n_read;\n> +}\n> +\n> +static PRInt32\n> +pg_ssl_write(PRFileDesc *fd, const void *buf, PRInt32 amount, PRIntn flags,\n> +\t\t\t PRIntervalTime timeout)\n> +{\n> +\tPRSendFN\tsend_fn;\n> +\tPRInt32\t\tn_write;\n> +\n> +\tsend_fn = fd->lower->methods->send;\n> +\tn_write = send_fn(fd->lower, buf, amount, flags, timeout);\n> +\n> +\treturn n_write;\n> +}\n> +\n> +static PRStatus\n> +pg_ssl_close(PRFileDesc *fd)\n> +{\n> +\t/*\n> +\t * Disconnect our private Port from the fd before closing out the stack.\n> +\t * (Debug builds of NSPR will assert if we do not.)\n> +\t */\n> +\tfd->secret = NULL;\n> +\treturn 
PR_GetDefaultIOMethods()->close(fd);\n> +}\n\nRegarding these, I find myself wondering how they're different from the\ndefaults..? I mean, the above just directly called\nPR_GetDefaultIOMethods() to then call it's close() function- are the\nfd->lower_methods->recv/send not the default methods? I don't quite get\nwhat the point is from having our own callbacks here if they just do\nexactly what the defaults would do (or are there actually no defined\ndefaults and you have to provide these..?).\n\n> +/*\n> + * ssl_protocol_version_to_nss\n> + *\t\t\tTranslate PostgreSQL TLS version to NSS version\n> + *\n> + * Returns zero in case the requested TLS version is undefined (PG_ANY) and\n> + * should be set by the caller, or -1 on failure.\n> + */\n> +static uint16\n> +ssl_protocol_version_to_nss(int v, const char *guc_name)\n\nguc_name isn't actually used in this function..? Is there some reason\nto keep it or is it leftover?\n\nAlso, I get that they do similar jobs and that one is in the frontend\nand the other is in the backend, but I'm not a fan of having two\n'ssl_protocol_version_to_nss()'s functions that take different argument\ntypes but have exact same name and do functionally different things..\n\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -4377,6 +4381,18 @@ static struct config_string ConfigureNamesString[] =\n> \t\tcheck_canonical_path, assign_pgstat_temp_directory, NULL\n> \t},\n> \n> +#ifdef USE_NSS\n> +\t{\n> +\t\t{\"ssl_database\", PGC_SIGHUP, CONN_AUTH_SSL,\n> +\t\t\tgettext_noop(\"Location of the NSS certificate database.\"),\n> +\t\t\tNULL\n> +\t\t},\n> +\t\t&ssl_database,\n> +\t\t\"\",\n> +\t\tNULL, NULL, NULL\n> +\t},\n> +#endif\n\nWe don't #ifdef out the various GUCs even if SSL isn't compiled in, so\nit doesn't seem quite right to be doing so here? 
Generally speaking,\nGUCs that we expect people to use (rather than debugging ones and such)\nare typically always built, even if we don't build support for that\ncapability, so we can throw a better error message than just some ugly\nsyntax or parsing error if we come across one being set in a non-enabled\nbuild.\n\n> +++ b/src/common/cipher_nss.c\n> @@ -0,0 +1,192 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * cipher_nss.c\n> + *\t NSS functionality shared between frontend and backend for working\n> + *\t with ciphers\n> + *\n> + * This should only bse used if code is compiled with NSS support.\n\n*be\n\n> +++ b/src/include/libpq/libpq-be.h\n> @@ -200,6 +200,10 @@ typedef struct Port\n> \tSSL\t\t *ssl;\n> \tX509\t *peer;\n> #endif\n> +\n> +#ifdef USE_NSS\n> +\tvoid\t *pr_fd;\n> +#endif\n> } Port;\n\nGiven this is under a #ifdef USE_NSS, does it need to be / should it\nreally be a void*?\n\n> +++ b/src/interfaces/libpq/fe-connect.c\n> @@ -359,6 +359,10 @@ static const internalPQconninfoOption PQconninfoOptions[] = {\n> \t\t\"Target-Session-Attrs\", \"\", 15, /* sizeof(\"prefer-standby\") = 15 */\n> \toffsetof(struct pg_conn, target_session_attrs)},\n> \n> +\t{\"cert_database\", NULL, NULL, NULL,\n> +\t\t\"CertificateDatabase\", \"\", 64,\n> +\toffsetof(struct pg_conn, cert_database)},\n\nI mean, maybe nitpicking here, but all the other SSL stuff is\n'sslsomething' and the backend version of this is 'ssl_database', so\nwouldn't it be more consistent to have this be 'ssldatabase'?\n\n> +++ b/src/interfaces/libpq/fe-secure-nss.c\n> + * This logic exist in NSS as well, but it's only available for when there is\n\n*exists\n\n> +\t/*\n> +\t * The NSPR documentation states that runtime initialization via PR_Init\n> +\t * is no longer required, as the first caller into NSPR will perform the\n> +\t * initialization implicitly. 
See be-secure-nss.c for further discussion\n> +\t * on PR_Init.\n> +\t */\n> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);\n\nSee same comment I made above- and also there's a comment earlier in\nthis file that we don't need to PR_Init() even ...\n\n> +\t{\n> +\t\tconn->nss_context = NSS_InitContext(\"\", \"\", \"\", \"\", &params,\n> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_READONLY | NSS_INIT_NOCERTDB |\n> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_NOMODDB | NSS_INIT_FORCEOPEN |\n> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_NOROOTINIT | NSS_INIT_PK11RELOAD);\n> +\t\tif (!conn->nss_context)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"unable to create certificate database: %s\"),\n> +\t\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n> +\t\t\treturn PGRES_POLLING_FAILED;\n> +\t\t}\n> +\t}\n\nThat error message seems a bit ... off? Surely we aren't trying to\nactually create a certificate database here?\n\n> +\t/*\n> +\t * Configure cipher policy.\n> +\t */\n> +\tstatus = NSS_SetDomesticPolicy();\n> +\tif (status != SECSuccess)\n> +\t{\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"unable to configure cipher policy: %s\"),\n> +\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n> +\n> +\t\treturn PGRES_POLLING_FAILED;\n> +\t}\n\nProbably good to pull over at least some parts of the comments made in\nthe backend code about SetDomesticPolicy() actually enabling everything\n(just like all the policies apparently do)...\n\n> +\t/*\n> +\t * If we don't have a certificate database, the system trust store is the\n> +\t * fallback we can use. 
If we fail to initialize that as well, we can\n> +\t * still attempt a connection as long as the sslmode isn't verify*.\n> +\t */\n> +\tif (!conn->cert_database && conn->sslmode[0] == 'v')\n> +\t{\n> +\t\tstatus = pg_load_nss_module(&ca_trust, ca_trust_name, \"\\\"Root Certificates\\\"\");\n> +\t\tif (status != SECSuccess)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"WARNING: unable to load NSS trust module \\\"%s\\\" : %s\"),\n> +\t\t\t\t\t\t\t ca_trust_name,\n> +\t\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n> +\n> +\t\t\treturn PGRES_POLLING_FAILED;\n> +\t\t}\n> +\t}\n\nMaybe have something a bit more here about \"maybe you should specify a\ncert_database\" or such?\n\n> +\tif (conn->ssl_max_protocol_version && strlen(conn->ssl_max_protocol_version) > 0)\n> +\t{\n> +\t\tint\t\t\tssl_max_ver = ssl_protocol_version_to_nss(conn->ssl_max_protocol_version);\n> +\n> +\t\tif (ssl_max_ver == -1)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"invalid value \\\"%s\\\" for maximum version of SSL protocol\\n\"),\n> +\t\t\t\t\t\t\t conn->ssl_max_protocol_version);\n> +\t\t\treturn -1;\n> +\t\t}\n> +\n> +\t\tdesired_range.max = ssl_max_ver;\n> +\t}\n\nIn the backend code, we have an additional check to make sure they\ndidn't set the min version higher than the max.. should we have that\nhere too? Either way, seems like we should be consistent.\n\n> +\t * The model can now we closed as we've applied the settings of the model\n\n*be\n\n> +\t * onto the real socket. From hereon we should only use conn->pr_fd.\n\n*here on\n\nSimilar comments to the backend code- should we just always use\nconn->pr_fd? Or should we rename pr_fd to something else?\n\n> +\t/*\n> +\t * Specify which hostname we are expecting to talk to. 
This is required,\n> +\t * albeit mostly applies to when opening a connection to a traditional\n> +\t * http server it seems.\n> +\t */\n> +\tSSL_SetURL(conn->pr_fd, (conn->connhost[conn->whichhost]).host);\n\nWe should probably also set SNI, if available (NSS 3.12.6 it seems?),\nsince it looks like that's going to be added to the OpenSSL code.\n\n> +\tdo\n> +\t{\n> +\t\tstatus = SSL_ForceHandshake(conn->pr_fd);\n> +\t}\n> +\twhile (status != SECSuccess && PR_GetError() == PR_WOULD_BLOCK_ERROR);\n\nWe don't seem to have this loop in the backend code.. Is there some\nreason that we don't? Is it possible that we need to have a loop here\ntoo? I recall in the GSS encryption code there were definitely things\nduring setup that had to be looped back over on both sides to make sure\neverything was finished ...\n\n> +\tif (conn->sslmode[0] == 'v')\n> +\t\treturn SECFailure;\n\nSeems a bit grotty to do this (though I see that the OpenSSL code does\ntoo ... at least there we have a comment though, maybe add one here?).\nI would have thought we'd actually do strcmp()'s like above.\n\n> +\t/*\n> +\t * Return the underlying PRFileDesc which can be used to access\n> +\t * information on the connection details. There is no SSL context per se.\n> +\t */\n> +\tif (strcmp(struct_name, \"NSS\") == 0)\n> +\t\treturn conn->pr_fd;\n> +\treturn NULL;\n> +}\n\nIs there never a reason someone might want the pointer returned by\nNSS_InitContext? I don't know that there is but it might be something\nto consider (we could even possibly have our own structure returned by\nthis function which includes both, maybe..?). 
Not sure if there's a\nsensible use-case for that or not, just wanted to bring it up as it's\nsomething I asked myself while reading through this patch.\n\n> +\tif (strcmp(attribute_name, \"protocol\") == 0)\n> +\t{\n> +\t\tswitch (channel.protocolVersion)\n> +\t\t{\n> +#ifdef SSL_LIBRARY_VERSION_TLS_1_3\n> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_3:\n> +\t\t\t\treturn \"TLSv1.3\";\n> +#endif\n> +#ifdef SSL_LIBRARY_VERSION_TLS_1_2\n> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_2:\n> +\t\t\t\treturn \"TLSv1.2\";\n> +#endif\n> +#ifdef SSL_LIBRARY_VERSION_TLS_1_1\n> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_1:\n> +\t\t\t\treturn \"TLSv1.1\";\n> +#endif\n> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_0:\n> +\t\t\t\treturn \"TLSv1.0\";\n> +\t\t\tdefault:\n> +\t\t\t\treturn \"unknown\";\n> +\t\t}\n> +\t}\n\nNot sure that it really matters, but this seems like it might be useful\nto have as its own function... Maybe even a data structure that both\nfunctions use just in opposite directions. Really minor tho. :)\n\n> diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c\n> index c601071838..7f10da3010 100644\n> --- a/src/interfaces/libpq/fe-secure.c\n> +++ b/src/interfaces/libpq/fe-secure.c\n> @@ -448,6 +448,27 @@ PQdefaultSSLKeyPassHook_OpenSSL(char *buf, int size, PGconn *conn)\n> }\n> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n> \n> +#ifndef USE_NSS\n> +\n> +PQsslKeyPassHook_nss_type\n> +PQgetSSLKeyPassHook_nss(void)\n> +{\n> +\treturn NULL;\n> +}\n> +\n> +void\n> +PQsetSSLKeyPassHook_nss(PQsslKeyPassHook_nss_type hook)\n> +{\n> +\treturn;\n> +}\n> +\n> +char *\n> +PQdefaultSSLKeyPassHook_nss(PK11SlotInfo * slot, PRBool retry, void *arg)\n> +{\n> +\treturn NULL;\n> +}\n> +#endif\t\t\t\t\t\t\t/* USE_NSS */\n\nIsn't this '!USE_NSS'?\n\n> diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h\n> index 0c9e95f1a7..f15af39222 100644\n> --- a/src/interfaces/libpq/libpq-int.h\n> +++ b/src/interfaces/libpq/libpq-int.h\n> @@ -383,6 +383,7 @@ struct 
pg_conn\n> \tchar\t *sslrootcert;\t/* root certificate filename */\n> \tchar\t *sslcrl;\t\t\t/* certificate revocation list filename */\n> \tchar\t *sslcrldir;\t\t/* certificate revocation list directory name */\n> +\tchar\t *cert_database;\t/* NSS certificate/key database */\n> \tchar\t *requirepeer;\t/* required peer credentials for local sockets */\n> \tchar\t *gssencmode;\t\t/* GSS mode (require,prefer,disable) */\n> \tchar\t *krbsrvname;\t\t/* Kerberos service name */\n> @@ -507,6 +508,28 @@ struct pg_conn\n> \t\t\t\t\t\t\t\t * OpenSSL version changes */\n> #endif\n> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n> +\n> +/*\n> + * The NSS/NSPR specific types aren't used to avoid pulling in the required\n> + * headers here, as they are causing conflicts with PG definitions.\n> + */\n\nI'm a bit confused- what are the conflicts being caused here..?\nCertainly under USE_OPENSSL we use the actual OpenSSL types..\n\n> Subject: [PATCH v30 2/9] Refactor SSL testharness for multiple library\n> \n> The SSL testharness was fully tied to OpenSSL in the way the server was\n> set up and reconfigured. 
This refactors the SSLServer module into a SSL\n> library agnostic SSL/Server module which in turn use SSL/Backend/<lib>\n> modules for the implementation details.\n> \n> No changes are done to the actual tests, this only change how setup and\n> teardown is performed.\n\nPresumably this could be committed ahead of the main NSS support?\n\n> Subject: [PATCH v30 4/9] nss: pg_strong_random support\n> +++ b/src/port/pg_strong_random.c\n> +bool\n> +pg_strong_random(void *buf, size_t len)\n> +{\n> +\tNSSInitParameters params;\n> +\tNSSInitContext *nss_context;\n> +\tSECStatus\tstatus;\n> +\n> +\tmemset(&params, 0, sizeof(params));\n> +\tparams.length = sizeof(params);\n> +\tnss_context = NSS_InitContext(\"\", \"\", \"\", \"\", &params,\n> +\t\t\t\t\t\t\t\t NSS_INIT_READONLY | NSS_INIT_NOCERTDB |\n> +\t\t\t\t\t\t\t\t NSS_INIT_NOMODDB | NSS_INIT_FORCEOPEN |\n> +\t\t\t\t\t\t\t\t NSS_INIT_NOROOTINIT | NSS_INIT_PK11RELOAD);\n> +\n> +\tif (!nss_context)\n> +\t\treturn false;\n> +\n> +\tstatus = PK11_GenerateRandom(buf, len);\n> +\tNSS_ShutdownContext(nss_context);\n> +\n> +\tif (status == SECSuccess)\n> +\t\treturn true;\n> +\n> +\treturn false;\n> +}\n> +\n> +#else\t\t\t\t\t\t\t/* not USE_OPENSSL, USE_NSS or WIN32 */\n\nI don't know that it's an issue, but do we actually need to init the NSS\ncontext and shut it down every time..?\n\n> /*\n> * Without OpenSSL or Win32 support, just read /dev/urandom ourselves.\n\n*or NSS\n\n> Subject: [PATCH v30 5/9] nss: Documentation\n> +++ b/doc/src/sgml/acronyms.sgml\n> @@ -684,6 +717,16 @@\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><acronym>TLS</acronym></term>\n> + <listitem>\n> + <para>\n> + <ulink url=\"https://en.wikipedia.org/wiki/Transport_Layer_Security\">\n> + Transport Layer Security</ulink>\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nWe don't have this already..? 
Surely we should..\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index 967de73596..1608e9a7c7 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -1272,6 +1272,23 @@ include_dir 'conf.d'\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"guc-ssl-database\" xreflabel=\"ssl_database\">\n> + <term><varname>ssl_database</varname> (<type>string</type>)\n> + <indexterm>\n> + <primary><varname>ssl_database</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies the name of the file containing the server certificates and\n> + keys when using <productname>NSS</productname> for <acronym>SSL</acronym>\n> + connections. This parameter can only be set in the\n> + <filename>postgresql.conf</filename> file or on the server command\n> + line.\n\n*SSL/TLS maybe?\n\n> @@ -1288,7 +1305,9 @@ include_dir 'conf.d'\n> connections using TLS version 1.2 and lower are affected. There is\n> currently no setting that controls the cipher choices used by TLS\n> version 1.3 connections. The default value is\n> - <literal>HIGH:MEDIUM:+3DES:!aNULL</literal>. The default is usually a\n> + <literal>HIGH:MEDIUM:+3DES:!aNULL</literal> for servers which have\n> + been built with <productname>OpenSSL</productname> as the\n> + <acronym>SSL</acronym> library. The default is usually a\n> reasonable choice unless you have specific security requirements.\n> </para>\n\nShouldn't we say something here wrt NSS?\n\n> @@ -1490,8 +1509,11 @@ include_dir 'conf.d'\n> <para>\n> Sets an external command to be invoked when a passphrase for\n> decrypting an SSL file such as a private key needs to be obtained. By\n> - default, this parameter is empty, which means the built-in prompting\n> - mechanism is used.\n> + default, this parameter is empty. When the server is using\n> + <productname>OpenSSL</productname>, this means the built-in prompting\n> + mechanism is used. 
When using <productname>NSS</productname>, there is\n> + no default prompting so a blank callback will be used returning an\n> + empty password.\n> </para>\n\nMaybe we should point out here that this requires the database to not\nrequire a password..? So if they have one, they need to set this, or\nmaybe we should provide a default one..\n\n> +++ b/doc/src/sgml/libpq.sgml\n> +<synopsis>\n> +PQsslKeyPassHook_nss_type PQgetSSLKeyPassHook_nss(void);\n> +</synopsis>\n> + </para>\n> +\n> + <para>\n> + <function>PQgetSSLKeyPassHook_nss</function> has no effect unless the\n> + server was compiled with <productname>nss</productname> support.\n> + </para>\n\nWe should try to be consistent- above should be NSS, not nss.\n\n> + <listitem>\n> + <para>\n> + <productname>NSS</productname>: specifying the parameter is required\n> + in case any password protected items are referenced in the\n> + <productname>NSS</productname> database, or if the database itself\n> + is password protected. If multiple different objects are password\n> + protected, the same password is used for all.\n> + </para>\n> + </listitem>\n> + </itemizedlist>\n\nIs this a statement about NSS databases (which I don't think it is) or\nabout the fact that we'll just use the password provided for all\nattempts to decrypt something we need in the database? Assuming the\nlatter, seems like we could reword this to be a bit more clear.\n\nMaybe: \n\nAll attempts to decrypt objects which are password protected in the\ndatabase will use this password.\n\n?\n\n> @@ -2620,9 +2791,14 @@ void *PQsslStruct(const PGconn *conn, const char *struct_name);\n> + For <productname>NSS</productname>, there is one struct available under\n> + the name \"NSS\", and it returns a pointer to the\n> + <productname>NSS</productname> <literal>PRFileDesc</literal>.\n\n... 
SSL PRFileDesc associated with the connection, no?\n\n> +++ b/doc/src/sgml/runtime.sgml\n> @@ -2552,6 +2583,89 @@ openssl x509 -req -in server.csr -text -days 365 \\\n> </para>\n> </sect2>\n> \n> + <sect2 id=\"nss-certificate-database\">\n> + <title>NSS Certificate Databases</title>\n> +\n> + <para>\n> + When using <productname>NSS</productname>, all certificates and keys must\n> + be loaded into an <productname>NSS</productname> certificate database.\n> + </para>\n> +\n> + <para>\n> + To create a new <productname>NSS</productname> certificate database and\n> + load the certificates created in <xref linkend=\"ssl-certificate-creation\" />,\n> + use the following <productname>NSS</productname> commands:\n> +<programlisting>\n> +certutil -d \"sql:server.db\" -N --empty-password\n> +certutil -d \"sql:server.db\" -A -n server.crt -i server.crt -t \"CT,C,C\"\n> +certutil -d \"sql:server.db\" -A -n root.crt -i root.crt -t \"CT,C,C\"\n> +</programlisting>\n> + This will give the certificate the filename as the nickname identifier in\n> + the database which is created as <filename>server.db</filename>.\n> + </para>\n> + <para>\n> + Then load the server key, which require converting it to\n\n*requires\n\n> Subject: [PATCH v30 6/9] nss: Support NSS in pgcrypto\n> +++ b/doc/src/sgml/pgcrypto.sgml\n> <row>\n> <entry>Blowfish</entry>\n> <entry>yes</entry>\n> <entry>yes</entry>\n> + <entry>yes</entry>\n> </row>\n\nMaybe this should mention that it's with the built-in implementation as\nblowfish isn't available from NSS?\n\n> <row>\n> <entry>DES/3DES/CAST5</entry>\n> <entry>no</entry>\n> <entry>yes</entry>\n> + <entry>yes</entry>\n> + </row>\n\nSurely CAST5 from the above should be removed, since it's given its own\nentry now?\n\n> @@ -1241,7 +1260,8 @@ gen_random_uuid() returns uuid\n> <orderedlist>\n> <listitem>\n> <para>\n> - Any digest algorithm <productname>OpenSSL</productname> supports\n> + Any digest algorithm <productname>OpenSSL</productname> and\n> + 
<productname>NSS</productname> supports\n> is automatically picked up.\n\n*or? Maybe something more specific though- \"Any digest algorithm\nincluded with the library that PostgreSQL is compiled with is\nautomatically picked up.\" ?\n\n> Subject: [PATCH v30 7/9] nss: Support NSS in sslinfo\n> \n> Since sslinfo to a large extent use the be_tls_* API this mostly\n\n*uses\n\n> Subject: [PATCH v30 8/9] nss: Support NSS in cryptohash\n> +++ b/src/common/cryptohash_nss.c\n> +\t/*\n> +\t * Initialize our own NSS context without a database backing it.\n> +\t */\n> +\tmemset(&params, 0, sizeof(params));\n> +\tparams.length = sizeof(params);\n> +\tstatus = NSS_NoDB_Init(\".\");\n\nWe take some pains to use NSS_InitContext elsewhere.. Are we sure that\nwe should be using NSS_NoDB_Init here..?\n\nJust a, well, not so quick read-through. Generally it's looking pretty\ngood to me. Will see about playing with it this week.\n\nThanks!\n\nStephen", "msg_date": "Sun, 21 Mar 2021 19:49:51 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 22 Mar 2021, at 00:49, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n\nThanks for the review! Below is a partial response, I haven't had time to\naddress all your review comments yet but I wanted to submit a rebased patchset\ndirectly since the current version doesn't work after recent changes in the\ntree. I will address the remaining comments tomorrow or the day after.\n\nThis rebase also includes a fix for pgtls_init which was sent offlist by Jacob.\nThe changes in pgtls_init can potentially be used to initialize the crypto\ncontext for NSS to clean up this patch, Jacob is currently looking at that.\n\n>> Subject: [PATCH v30 1/9] nss: Support libnss as TLS library in libpq\n>> \n>> This commit contains the frontend and backend portion of TLS support\n>> in libpq to allow encrypted connections. 
The implementation is done\n> \n> maybe add 'using NSS' to that first sentence. ;)\n\nFixed.\n\n>> +++ b/src/backend/libpq/auth.c\n>> @@ -2849,7 +2849,14 @@ CheckCertAuth(Port *port)\n>> {\n>> \tint\t\t\tstatus_check_usermap = STATUS_ERROR;\n>> \n>> +#if defined(USE_OPENSSL)\n>> \tAssert(port->ssl);\n>> +#elif defined(USE_NSS)\n>> +\t/* TODO: should we rename pr_fd to ssl, to keep consistency? */\n>> +\tAssert(port->pr_fd);\n>> +#else\n>> +\tAssert(false);\n>> +#endif\n> \n> Having thought about this TODO item for a bit, I tend to think it's\n> better to keep them distinct.\n\nI agree, which is why the TODO comment was there in the first place. I've\nremoved the comment now.\n\n> They aren't the same and it might not be\n> clear what's going on if one was to somehow mix them (at least if pr_fd\n> continues to sometimes be a void*, but I wonder why that's being\n> done..? more on that later..).\n\nTo paraphrase from later in this email, there are collisions between NSPR and\nPostgres on things like BITS_PER_BYTE, and there were also collisions on basic\ntypes until I learned about NO_NSPR_10_SUPPORT. By moving the juggling of this\ninto common/nss.h we can use proper types without introducing that pollution\neverywhere. I will address these places.\n\n>> +++ b/src/backend/libpq/be-secure-nss.c\n> [...]\n>> +/* default init hook can be overridden by a shared library */\n>> +static void default_nss_tls_init(bool isServerStart);\n>> +nss_tls_init_hook_type nss_tls_init_hook = default_nss_tls_init;\n> \n>> +static PRDescIdentity pr_id;\n>> +\n>> +static PRIOMethods pr_iomethods;\n> \n> Happy to be told I'm missing something, but the above two variables seem\n> to only be used in init_iolayer.. 
is there a reason they're declared\n> here instead of just being declared in that function?\n\nThey must be there since NSPR doesn't copy these but reference them.\n\n>> +\t/*\n>> +\t * Set the fallback versions for the TLS protocol version range to a\n>> +\t * combination of our minimal requirement and the library maximum. Error\n>> +\t * messages should be kept identical to those in be-secure-openssl.c to\n>> +\t * make translations easier.\n>> +\t */\n> \n> Should we pull these error messages out into another header so that\n> they're in one place to make sure they're kept consistent, if we really\n> want to put the effort in to keep them the same..? I'm not 100% sure\n> that it's actually necessary to do so, but defining these in one place\n> would help maintain this if we want to. Also alright with just keeping\n> the comment, not that big of a deal.\n\nIt might make sense to pull them into common/nss.h, but seeing the error\nmessage right there when reading the code does IMO make it clearer so it's a\ndouble-edged sword. Not sure what is the best option, but I'm not married to\nthe current solution so if there is consensus to pull them out somewhere I'm\nhappy to do so.\n\n>> +int\n>> +be_tls_open_server(Port *port)\n>> +{\n>> +\tSECStatus\tstatus;\n>> +\tPRFileDesc *model;\n>> +\tPRFileDesc *pr_fd;\n> \n> pr_fd here is materially different from port->pr_fd, no? As in, one is\n> the NSS raw TCP fd while the other is the SSL fd, right? Maybe we\n> should use two different variable names to try and make sure they don't\n> get confused? Might even set this to NULL after we are done with it\n> too.. Then again, I see later on that when we do the dance with the\n> 'model' PRFileDesc that we just use the same variable- maybe we should\n> do that? That is, just get rid of this 'pr_fd' and use port->pr_fd\n> always?\n\nHmm, I think you're right. 
I will try that for the next patchset version.\n\n>> +\t/*\n>> +\t * The NSPR documentation states that runtime initialization via PR_Init\n>> +\t * is no longer required, as the first caller into NSPR will perform the\n>> +\t * initialization implicitly. The documentation doesn't however clarify\n>> +\t * from which version this is holds true, so let's perform the potentially\n>> +\t * superfluous initialization anyways to avoid crashing on older versions\n>> +\t * of NSPR, as there is no difference in overhead. The NSS documentation\n>> +\t * still states that PR_Init must be called in some way (implicitly or\n>> +\t * explicitly).\n>> +\t *\n>> +\t * The below parameters are what the implicit initialization would've done\n>> +\t * for us, and should work even for older versions where it might not be\n>> +\t * done automatically. The last parameter, maxPTDs, is set to various\n>> +\t * values in other codebases, but has been unused since NSPR 2.1 which was\n>> +\t * released sometime in 1998. In current versions of NSPR all parameters\n>> +\t * are ignored.\n>> +\t */\n>> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0 /* maxPTDs */ );\n>> +\n>> +\t/*\n>> +\t * The certificate path (configdir) must contain a valid NSS database. If\n>> +\t * the certificate path isn't a valid directory, NSS will fall back on the\n>> +\t * system certificate database. If the certificate path is a directory but\n>> +\t * is empty then the initialization will fail. On the client side this can\n>> +\t * be allowed for any sslmode but the verify-xxx ones.\n>> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=728562 For the server side\n>> +\t * we won't allow this to fail however, as we require the certificate and\n>> +\t * key to exist.\n>> +\t *\n>> +\t * The original design of NSS was for a single application to use a single\n>> +\t * copy of it, initialized with NSS_Initialize() which isn't returning any\n>> +\t * handle with which to refer to NSS. 
NSS initialization and shutdown are\n>> +\t * global for the application, so a shutdown in another NSS enabled\n>> +\t * library would cause NSS to be stopped for libpq as well. The fix has\n>> +\t * been to introduce NSS_InitContext which returns a context handle to\n>> +\t * pass to NSS_ShutdownContext. NSS_InitContext was introduced in NSS\n>> +\t * 3.12, but the use of it is not very well documented.\n>> +\t * https://bugzilla.redhat.com/show_bug.cgi?id=738456\n> \n> The above seems to indicate that we will be requiring at least 3.12,\n> right? Yet above we have code to work with NSPR versions before 2.1?\n\nWell, not really. The comment tries to explain the rationale for the\nparameters passed. Clearly the comment could be improved to make that point\nclearer.\n\n> Maybe we should put a stake in the ground that says \"we only support\n> back to version X of NSS\", test with that and a few more recent versions\n> and the most recent, and then rip out anything that's needed for\n> versions which are older than that? \n\nYes, right now there is very little in the patch which caters for old versions,\nthe PR_Init call might be one of the few offenders. 
There has been discussion\nupthread about settling for a required version, combining the insights learned\nthere with a survey of which versions are commonly packaged.\n\nOnce we settle on a version we can confirm if PR_Init is/isn't needed and\nremove all traces of it if not.\n\n> I have a pretty hard time imagining that someone is going to want to build PG\n> v14 w/ NSS 2.0 ...\n\n\nLet alone compiling 2.0 at all on a recent system..\n\n>> +\t{\n>> +\t\tchar\t *ciphers,\n>> +\t\t\t\t *c;\n>> +\n>> +\t\tchar\t *sep = \":;, \";\n>> +\t\tPRUint16\tciphercode;\n>> +\t\tconst\t\tPRUint16 *nss_ciphers;\n>> +\n>> +\t\t/*\n>> +\t\t * If the user has specified a set of preferred cipher suites we start\n>> +\t\t * by turning off all the existing suites to avoid the risk of down-\n>> +\t\t * grades to a weaker cipher than expected.\n>> +\t\t */\n>> +\t\tnss_ciphers = SSL_GetImplementedCiphers();\n>> +\t\tfor (int i = 0; i < SSL_GetNumImplementedCiphers(); i++)\n>> +\t\t\tSSL_CipherPrefSet(model, nss_ciphers[i], PR_FALSE);\n>> +\n>> +\t\tciphers = pstrdup(SSLCipherSuites);\n>> +\n>> +\t\tfor (c = strtok(ciphers, sep); c; c = strtok(NULL, sep))\n>> +\t\t{\n>> +\t\t\tif (!pg_find_cipher(c, &ciphercode))\n>> +\t\t\t{\n>> +\t\t\t\tstatus = SSL_CipherPrefSet(model, ciphercode, PR_TRUE);\n>> +\t\t\t\tif (status != SECSuccess)\n>> +\t\t\t\t{\n>> +\t\t\t\t\tereport(COMMERROR,\n>> +\t\t\t\t\t\t\t(errmsg(\"invalid cipher-suite specified: %s\", c)));\n>> +\t\t\t\t\treturn -1;\n>> +\t\t\t\t}\n>> +\t\t\t}\n>> +\t\t}\n> \n> Maybe I'm a bit confused, but doesn't pg_find_cipher return *true* when\n> a cipher is found, and therefore the '!' above is saying \"if we don't\n> find a matching cipher, then run the code to set the cipher ...\".\n\nHmm, yes, that's broken. Fixed.\n\n> Also- we don't seem to complain at all about a cipher being specified that we\n> don't find? 
Guess I would think that we might want to throw a WARNING in such\n> a case, but I could possibly be convinced otherwise.\n\n\nNo, I think you're right, we should throw WARNING there or possibly even a\nhigher elevel. Should that be a COMMERROR even?\n\n> Kind of wonder just what happens with the current code, I'm guessing ciphercode\n> is zero and therefore doesn't complain but also doesn't do what we want. I\n> wonder if there's a way to test this?\n\n\nWe could extend the test suite to set ciphers in postgresql.conf, I'll give it\na go.\n\n> I do think we should probably throw an error if we end up with *no*\n> ciphers being set, which doesn't seem to be happening here..?\n\nYeah, that should be a COMMERROR. Fixed.\n\n>> +\t/*\n>> +\t * Set up the custom IO layer.\n>> +\t */\n> \n> Might be good to mention that the IO Layer is what sets up the\n> read/write callbacks to be used.\n\nGood point, will do in the next version of the patchset.\n\n>> +\tport->pr_fd = SSL_ImportFD(model, pr_fd);\n>> +\tif (!port->pr_fd)\n>> +\t{\n>> +\t\tereport(COMMERROR,\n>> +\t\t\t\t(errmsg(\"unable to initialize\")));\n>> +\t\treturn -1;\n>> +\t}\n> \n> Maybe a comment and a better error message for this?\n\nWill do.\n\n> \n>> +\tPR_Close(model);\n> \n> This might deserve one also, the whole 'model' construct is a bit\n> different. :)\n\nAgreed. 
will do.\n\n>> +\tport->ssl_in_use = true;\n>> +\n>> +\t/* Register out shutdown callback */\n> \n> *our\n\nFixed.\n\n>> +int\n>> +be_tls_get_cipher_bits(Port *port)\n>> +{\n>> +\tSECStatus\tstatus;\n>> +\tSSLChannelInfo channel;\n>> +\tSSLCipherSuiteInfo suite;\n>> +\n>> +\tstatus = SSL_GetChannelInfo(port->pr_fd, &channel, sizeof(channel));\n>> +\tif (status != SECSuccess)\n>> +\t\tgoto error;\n>> +\n>> +\tstatus = SSL_GetCipherSuiteInfo(channel.cipherSuite, &suite, sizeof(suite));\n>> +\tif (status != SECSuccess)\n>> +\t\tgoto error;\n>> +\n>> +\treturn suite.effectiveKeyBits;\n>> +\n>> +error:\n>> +\tereport(WARNING,\n>> +\t\t\t(errmsg(\"unable to extract TLS session information: %s\",\n>> +\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n>> +\treturn 0;\n>> +}\n> \n> It doesn't have to be much, but I, at least, do prefer to see\n> function-header comments. :) Not that the OpenSSL code has them\n> consistently, so obviously not that big of a deal. Goes for a number of\n> the functions being added.\n\nNo disagreement from me, I've added comments on a few more functions and will\ncontinue to go over the patchset to add them everywhere. 
Some of these\ncomments are pretty uninteresting and could do with some wordsmithing.\n\n>> +\t\t\t/* Found a CN, ecode and copy it into a newly allocated buffer */\n> \n> *decode\n\nFixed.\n\n>> +static PRInt32\n>> +pg_ssl_read(PRFileDesc *fd, void *buf, PRInt32 amount, PRIntn flags,\n>> +\t\t\tPRIntervalTime timeout)\n>> +{\n>> +\tPRRecvFN\tread_fn;\n>> +\tPRInt32\t\tn_read;\n>> +\n>> +\tread_fn = fd->lower->methods->recv;\n>> +\tn_read = read_fn(fd->lower, buf, amount, flags, timeout);\n>> +\n>> +\treturn n_read;\n>> +}\n>> +\n>> +static PRInt32\n>> +pg_ssl_write(PRFileDesc *fd, const void *buf, PRInt32 amount, PRIntn flags,\n>> +\t\t\t PRIntervalTime timeout)\n>> +{\n>> +\tPRSendFN\tsend_fn;\n>> +\tPRInt32\t\tn_write;\n>> +\n>> +\tsend_fn = fd->lower->methods->send;\n>> +\tn_write = send_fn(fd->lower, buf, amount, flags, timeout);\n>> +\n>> +\treturn n_write;\n>> +}\n>> +\n>> +static PRStatus\n>> +pg_ssl_close(PRFileDesc *fd)\n>> +{\n>> +\t/*\n>> +\t * Disconnect our private Port from the fd before closing out the stack.\n>> +\t * (Debug builds of NSPR will assert if we do not.)\n>> +\t */\n>> +\tfd->secret = NULL;\n>> +\treturn PR_GetDefaultIOMethods()->close(fd);\n>> +}\n> \n> Regarding these, I find myself wondering how they're different from the\n> defaults..? I mean, the above just directly called\n> PR_GetDefaultIOMethods() to then call it's close() function- are the\n> fd->lower_methods->recv/send not the default methods? 
I don't quite get\n> what the point is from having our own callbacks here if they just do\n> exactly what the defaults would do (or are there actually no defined\n> defaults and you have to provide these..?).\n\nIt's really just to cope with debug builds of NSPR which assert that fd->secret\nis null before closing.\n\n>> +/*\n>> + * ssl_protocol_version_to_nss\n>> + *\t\t\tTranslate PostgreSQL TLS version to NSS version\n>> + *\n>> + * Returns zero in case the requested TLS version is undefined (PG_ANY) and\n>> + * should be set by the caller, or -1 on failure.\n>> + */\n>> +static uint16\n>> +ssl_protocol_version_to_nss(int v, const char *guc_name)\n> \n> guc_name isn't actually used in this function..? Is there some reason\n> to keep it or is it leftover?\n\nIt's a leftover from when the function was doing error reporting, fixed.\n\n> Also, I get that they do similar jobs and that one is in the frontend\n> and the other is in the backend, but I'm not a fan of having two\n> 'ssl_protocol_version_to_nss()'s functions that take different argument\n> types but have exact same name and do functionally different things..\n\nGood point, I'll change that.\n\n>> +++ b/src/backend/utils/misc/guc.c\n>> @@ -4377,6 +4381,18 @@ static struct config_string ConfigureNamesString[] =\n>> \t\tcheck_canonical_path, assign_pgstat_temp_directory, NULL\n>> \t},\n>> \n>> +#ifdef USE_NSS\n>> +\t{\n>> +\t\t{\"ssl_database\", PGC_SIGHUP, CONN_AUTH_SSL,\n>> +\t\t\tgettext_noop(\"Location of the NSS certificate database.\"),\n>> +\t\t\tNULL\n>> +\t\t},\n>> +\t\t&ssl_database,\n>> +\t\t\"\",\n>> +\t\tNULL, NULL, NULL\n>> +\t},\n>> +#endif\n> \n> We don't #ifdef out the various GUCs even if SSL isn't compiled in, so\n> it doesn't seem quite right to be doing so here? 
Generally speaking,\n> GUCs that we expect people to use (rather than debugging ones and such)\n> are typically always built, even if we don't build support for that\n> capability, so we can throw a better error message than just some ugly\n> syntax or parsing error if we come across one being set in a non-enabled\n> build.\n\nOf course, fixed.\n\n>> +++ b/src/common/cipher_nss.c\n>> @@ -0,0 +1,192 @@\n>> +/*-------------------------------------------------------------------------\n>> + *\n>> + * cipher_nss.c\n>> + *\t NSS functionality shared between frontend and backend for working\n>> + *\t with ciphers\n>> + *\n>> + * This should only bse used if code is compiled with NSS support.\n> \n> *be\n\nFixed.\n\n>> +++ b/src/include/libpq/libpq-be.h\n>> @@ -200,6 +200,10 @@ typedef struct Port\n>> \tSSL\t\t *ssl;\n>> \tX509\t *peer;\n>> #endif\n>> +\n>> +#ifdef USE_NSS\n>> +\tvoid\t *pr_fd;\n>> +#endif\n>> } Port;\n> \n> Given this is under a #ifdef USE_NSS, does it need to be / should it\n> really be a void*?\n\nIt's to avoid the same BITS_PER_BYTE collision discussed elsewhere in this\nemail.\n\n>> +++ b/src/interfaces/libpq/fe-connect.c\n>> @@ -359,6 +359,10 @@ static const internalPQconninfoOption PQconninfoOptions[] = {\n>> \t\t\"Target-Session-Attrs\", \"\", 15, /* sizeof(\"prefer-standby\") = 15 */\n>> \toffsetof(struct pg_conn, target_session_attrs)},\n>> \n>> +\t{\"cert_database\", NULL, NULL, NULL,\n>> +\t\t\"CertificateDatabase\", \"\", 64,\n>> +\toffsetof(struct pg_conn, cert_database)},\n> \n> I mean, maybe nitpicking here, but all the other SSL stuff is\n> 'sslsomething' and the backend version of this is 'ssl_database', so\n> wouldn't it be more consistent to have this be 'ssldatabase'?\n\nThat's a good point, I was clearly Stockholm syndromed since I hadn't reflected\non that but it's clearly wrong. 
Will fix.\n\n>> +++ b/src/interfaces/libpq/fe-secure-nss.c\n>> + * This logic exist in NSS as well, but it's only available for when there is\n> \n> *exists\n\nFixed.\n\n>> +\t/*\n>> +\t * The NSPR documentation states that runtime initialization via PR_Init\n>> +\t * is no longer required, as the first caller into NSPR will perform the\n>> +\t * initialization implicitly. See be-secure-nss.c for further discussion\n>> +\t * on PR_Init.\n>> +\t */\n>> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);\n> \n> See same comment I made above- and also there's a comment earlier in\n> this file that we don't need to PR_Init() even ...\n\nRight, once we can confirm that the minimum required versions are past the\nPR_Init dependency then we should remove all of these calls. If we can't\nremove the calls, the comments should be updated to reflect why they are there.\n\n>> +\t{\n>> +\t\tconn->nss_context = NSS_InitContext(\"\", \"\", \"\", \"\", &params,\n>> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_READONLY | NSS_INIT_NOCERTDB |\n>> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_NOMODDB | NSS_INIT_FORCEOPEN |\n>> +\t\t\t\t\t\t\t\t\t\t\tNSS_INIT_NOROOTINIT | NSS_INIT_PK11RELOAD);\n>> +\t\tif (!conn->nss_context)\n>> +\t\t{\n>> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t\t libpq_gettext(\"unable to create certificate database: %s\"),\n>> +\t\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n>> +\t\t\treturn PGRES_POLLING_FAILED;\n>> +\t\t}\n>> +\t}\n> \n> That error message seems a bit ... off? Surely we aren't trying to\n> actually create a certificate database here?\n\nNot really, no; it does set up a transient database structure for the duration\nof the connection AFAIK but that's clearly not the level of detail we should be\ngiving users. 
I've reworded to indicate that NSS init failed, and ideally the\npg_SSLerrmessage call will provide appropriate detail.\n\n>> +\t/*\n>> +\t * Configure cipher policy.\n>> +\t */\n>> +\tstatus = NSS_SetDomesticPolicy();\n>> +\tif (status != SECSuccess)\n>> +\t{\n>> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t libpq_gettext(\"unable to configure cipher policy: %s\"),\n>> +\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n>> +\n>> +\t\treturn PGRES_POLLING_FAILED;\n>> +\t}\n> \n> Probably good to pull over at least some parts of the comments made in\n> the backend code about SetDomesticPolicy() actually enabling everything\n> (just like all the policies apparently do)...\n\nGood point, will do.\n\n>> +\t/*\n>> +\t * If we don't have a certificate database, the system trust store is the\n>> +\t * fallback we can use. If we fail to initialize that as well, we can\n>> +\t * still attempt a connection as long as the sslmode isn't verify*.\n>> +\t */\n>> +\tif (!conn->cert_database && conn->sslmode[0] == 'v')\n>> +\t{\n>> +\t\tstatus = pg_load_nss_module(&ca_trust, ca_trust_name, \"\\\"Root Certificates\\\"\");\n>> +\t\tif (status != SECSuccess)\n>> +\t\t{\n>> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t\t libpq_gettext(\"WARNING: unable to load NSS trust module \\\"%s\\\" : %s\"),\n>> +\t\t\t\t\t\t\t ca_trust_name,\n>> +\t\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n>> +\n>> +\t\t\treturn PGRES_POLLING_FAILED;\n>> +\t\t}\n>> +\t}\n> \n> Maybe have something a bit more here about \"maybe you should specifify a\n> cert_database\" or such?\n\nGood point, will expand with more detail.\n\n>> +\tif (conn->ssl_max_protocol_version && strlen(conn->ssl_max_protocol_version) > 0)\n>> +\t{\n>> +\t\tint\t\t\tssl_max_ver = ssl_protocol_version_to_nss(conn->ssl_max_protocol_version);\n>> +\n>> +\t\tif (ssl_max_ver == -1)\n>> +\t\t{\n>> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t\t libpq_gettext(\"invalid value \\\"%s\\\" for 
maximum version of SSL protocol\\n\"),\n>> +\t\t\t\t\t\t\t conn->ssl_max_protocol_version);\n>> +\t\t\treturn -1;\n>> +\t\t}\n>> +\n>> +\t\tdesired_range.max = ssl_max_ver;\n>> +\t}\n> \n> In the backend code, we have an additional check to make sure they\n> didn't set the min version higher than the max.. should we have that\n> here too? Either way, seems like we should be consistent.\n\nWe already test that in src/interfaces/libpq/fe-connect.c.\n\n>> +\t * The model can now we closed as we've applied the settings of the model\n> \n> *be\n\nFixed.\n\n>> +\t * onto the real socket. From hereon we should only use conn->pr_fd.\n> \n> *here on\n\nFixed.\n\n> Similar comments to the backend code- should we just always use\n> conn->pr_fd? Or should we rename pr_fd to something else?\n\nRenaming is probably not a bad idea, will fix.\n\n>> +\t/*\n>> +\t * Specify which hostname we are expecting to talk to. This is required,\n>> +\t * albeit mostly applies to when opening a connection to a traditional\n>> +\t * http server it seems.\n>> +\t */\n>> +\tSSL_SetURL(conn->pr_fd, (conn->connhost[conn->whichhost]).host);\n> \n> We should probably also set SNI, if available (NSS 3.12.6 it seems?),\n> since it looks like that's going to be added to the OpenSSL code.\n\nGood point, will do.\n\n>> +\tdo\n>> +\t{\n>> +\t\tstatus = SSL_ForceHandshake(conn->pr_fd);\n>> +\t}\n>> +\twhile (status != SECSuccess && PR_GetError() == PR_WOULD_BLOCK_ERROR);\n> \n> We don't seem to have this loop in the backend code.. Is there some\n> reason that we don't? Is it possible that we need to have a loop here\n> too? I recall in the GSS encryption code there were definitely things\n> during setup that had to be looped back over on both sides to make sure\n> everything was finished ...\n\nOff the cuff I can't remember, will look into it.\n\n>> +\tif (conn->sslmode[0] == 'v')\n>> +\t\treturn SECFailure;\n> \n> Seems a bit grotty to do this (though I see that the OpenSSL code does\n> too ... 
at least there we have a comment though, maybe add one here?).\n> I would have thought we'd actually do strcmp()'s like above.\n\nThat's admittedly copied from the OpenSSL code, and I agree that it's a bit too\nclever. Replaced with plain strcmp's to improve readability in both places it\noccurred.\n\n>> +\t/*\n>> +\t * Return the underlying PRFileDesc which can be used to access\n>> +\t * information on the connection details. There is no SSL context per se.\n>> +\t */\n>> +\tif (strcmp(struct_name, \"NSS\") == 0)\n>> +\t\treturn conn->pr_fd;\n>> +\treturn NULL;\n>> +}\n> \n> Is there never a reason someone might want the pointer returned by\n> NSS_InitContext? I don't know that there is but it might be something\n> to consider (we could even possibly have our own structure returned by\n> this function which includes both, maybe..?). Not sure if there's a\n> sensible use-case for that or not just wanted to bring it up as it's\n> something I asked myself while reading through this patch.\n\nNot sure I understand what you're asking for here, did you mean \"is there ever\na reason\"?\n\n>> +\tif (strcmp(attribute_name, \"protocol\") == 0)\n>> +\t{\n>> +\t\tswitch (channel.protocolVersion)\n>> +\t\t{\n>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_3\n>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_3:\n>> +\t\t\t\treturn \"TLSv1.3\";\n>> +#endif\n>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_2\n>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_2:\n>> +\t\t\t\treturn \"TLSv1.2\";\n>> +#endif\n>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_1\n>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_1:\n>> +\t\t\t\treturn \"TLSv1.1\";\n>> +#endif\n>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_0:\n>> +\t\t\t\treturn \"TLSv1.0\";\n>> +\t\t\tdefault:\n>> +\t\t\t\treturn \"unknown\";\n>> +\t\t}\n>> +\t}\n> \n> Not sure that it really matters, but this seems like it might be useful\n> to have as its own function... Maybe even a data structure that both\n> functions use just in oppostie directions. Really minor tho. 
:)\n\nI suppose that wouldn't be a bad thing, will fix.\n\t\n>> diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c\n>> index c601071838..7f10da3010 100644\n>> --- a/src/interfaces/libpq/fe-secure.c\n>> +++ b/src/interfaces/libpq/fe-secure.c\n>> @@ -448,6 +448,27 @@ PQdefaultSSLKeyPassHook_OpenSSL(char *buf, int size, PGconn *conn)\n>> }\n>> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n>> \n>> +#ifndef USE_NSS\n>> +\n>> +PQsslKeyPassHook_nss_type\n>> +PQgetSSLKeyPassHook_nss(void)\n>> +{\n>> +\treturn NULL;\n>> +}\n>> +\n>> +void\n>> +PQsetSSLKeyPassHook_nss(PQsslKeyPassHook_nss_type hook)\n>> +{\n>> +\treturn;\n>> +}\n>> +\n>> +char *\n>> +PQdefaultSSLKeyPassHook_nss(PK11SlotInfo * slot, PRBool retry, void *arg)\n>> +{\n>> +\treturn NULL;\n>> +}\n>> +#endif\t\t\t\t\t\t\t/* USE_NSS */\n> \n> Isn't this '!USE_NSS'?\n\nTechnically it is, but using just /* USE_NSS */ is consistent with the rest of\nblocks in the file.\n\n>> diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h\n>> index 0c9e95f1a7..f15af39222 100644\n>> --- a/src/interfaces/libpq/libpq-int.h\n>> +++ b/src/interfaces/libpq/libpq-int.h\n>> @@ -383,6 +383,7 @@ struct pg_conn\n>> \tchar\t *sslrootcert;\t/* root certificate filename */\n>> \tchar\t *sslcrl;\t\t\t/* certificate revocation list filename */\n>> \tchar\t *sslcrldir;\t\t/* certificate revocation list directory name */\n>> +\tchar\t *cert_database;\t/* NSS certificate/key database */\n>> \tchar\t *requirepeer;\t/* required peer credentials for local sockets */\n>> \tchar\t *gssencmode;\t\t/* GSS mode (require,prefer,disable) */\n>> \tchar\t *krbsrvname;\t\t/* Kerberos service name */\n>> @@ -507,6 +508,28 @@ struct pg_conn\n>> \t\t\t\t\t\t\t\t * OpenSSL version changes */\n>> #endif\n>> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n>> +\n>> +/*\n>> + * The NSS/NSPR specific types aren't used to avoid pulling in the required\n>> + * headers here, as they are causing conflicts with PG definitions.\n>> + 
*/\n> \n> I'm a bit confused- what are the conflicts being caused here..?\n> Certainly under USE_OPENSSL we use the actual OpenSSL types..\n\nIt's referring to collisions with for example BITS_PER_BYTE which is defined\nboth by postgres and nspr. Since writing this I've introduced src/common/nss.h\nto handle it in a single place, so we can indeed use the proper types without\npolluting the file. Fixed.\n\n>> Subject: [PATCH v30 2/9] Refactor SSL testharness for multiple library\n>> \n>> The SSL testharness was fully tied to OpenSSL in the way the server was\n>> set up and reconfigured. This refactors the SSLServer module into a SSL\n>> library agnostic SSL/Server module which in turn use SSL/Backend/<lib>\n>> modules for the implementation details.\n>> \n>> No changes are done to the actual tests, this only change how setup and\n>> teardown is performed.\n> \n> Presumably this could be committed ahead of the main NSS support?\n\nCorrect, I think this has merits even if NSS support is ultimately rejected.\n\n>> Subject: [PATCH v30 4/9] nss: pg_strong_random support\n>> +++ b/src/port/pg_strong_random.c\n>> +bool\n>> +pg_strong_random(void *buf, size_t len)\n>> +{\n>> +\tNSSInitParameters params;\n>> +\tNSSInitContext *nss_context;\n>> +\tSECStatus\tstatus;\n>> +\n>> +\tmemset(&params, 0, sizeof(params));\n>> +\tparams.length = sizeof(params);\n>> +\tnss_context = NSS_InitContext(\"\", \"\", \"\", \"\", &params,\n>> +\t\t\t\t\t\t\t\t NSS_INIT_READONLY | NSS_INIT_NOCERTDB |\n>> +\t\t\t\t\t\t\t\t NSS_INIT_NOMODDB | NSS_INIT_FORCEOPEN |\n>> +\t\t\t\t\t\t\t\t NSS_INIT_NOROOTINIT | NSS_INIT_PK11RELOAD);\n>> +\n>> +\tif (!nss_context)\n>> +\t\treturn false;\n>> +\n>> +\tstatus = PK11_GenerateRandom(buf, len);\n>> +\tNSS_ShutdownContext(nss_context);\n>> +\n>> +\tif (status == SECSuccess)\n>> +\t\treturn true;\n>> +\n>> +\treturn false;\n>> +}\n>> +\n>> +#else\t\t\t\t\t\t\t/* not USE_OPENSSL, USE_NSS or WIN32 */\n> \n> I don't know that it's an issue, but do we actually 
need to init the NSS\n> context and shut it down every time..?\n\nWe need to have a context, and we should be able to set it like how the WIN32\ncode sets hProvider. I don't remember if there was a reason against that, will\nrevisit.\n\n>> /*\n>> * Without OpenSSL or Win32 support, just read /dev/urandom ourselves.\n> \n> *or NSS\n\nFixed.\n\n>> Subject: [PATCH v30 5/9] nss: Documentation\n>> +++ b/doc/src/sgml/acronyms.sgml\n>> @@ -684,6 +717,16 @@\n>> </listitem>\n>> </varlistentry>\n>> \n>> + <varlistentry>\n>> + <term><acronym>TLS</acronym></term>\n>> + <listitem>\n>> + <para>\n>> + <ulink url=\"https://en.wikipedia.org/wiki/Transport_Layer_Security\">\n>> + Transport Layer Security</ulink>\n>> + </para>\n>> + </listitem>\n>> + </varlistentry>\n> \n> We don't have this already..? Surely we should..\n\nWe really should, especially since we've had <acronym>TLS</acronym> in\nconfig.sgml since 2014 (c6763156589). That's another small piece that could be\ncommitted on it's own to cut down the size of this patchset (even if only by a\ntiny amount).\n\n>> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n>> index 967de73596..1608e9a7c7 100644\n>> --- a/doc/src/sgml/config.sgml\n>> +++ b/doc/src/sgml/config.sgml\n>> @@ -1272,6 +1272,23 @@ include_dir 'conf.d'\n>> </listitem>\n>> </varlistentry>\n>> \n>> + <varlistentry id=\"guc-ssl-database\" xreflabel=\"ssl_database\">\n>> + <term><varname>ssl_database</varname> (<type>string</type>)\n>> + <indexterm>\n>> + <primary><varname>ssl_database</varname> configuration parameter</primary>\n>> + </indexterm>\n>> + </term>\n>> + <listitem>\n>> + <para>\n>> + Specifies the name of the file containing the server certificates and\n>> + keys when using <productname>NSS</productname> for <acronym>SSL</acronym>\n>> + connections. 
This parameter can only be set in the\n>> + <filename>postgresql.conf</filename> file or on the server command\n>> + line.\n> \n> *SSL/TLS maybe?\n\nFixed.\n\n>> @@ -1288,7 +1305,9 @@ include_dir 'conf.d'\n>> connections using TLS version 1.2 and lower are affected. There is\n>> currently no setting that controls the cipher choices used by TLS\n>> version 1.3 connections. The default value is\n>> - <literal>HIGH:MEDIUM:+3DES:!aNULL</literal>. The default is usually a\n>> + <literal>HIGH:MEDIUM:+3DES:!aNULL</literal> for servers which have\n>> + been built with <productname>OpenSSL</productname> as the\n>> + <acronym>SSL</acronym> library. The default is usually a\n>> reasonable choice unless you have specific security requirements.\n>> </para>\n> \n> Shouldn't we say something here wrt NSS?\n\nWe should, but I'm not entirely what just yet. Need to revisit that.\n\n>> @@ -1490,8 +1509,11 @@ include_dir 'conf.d'\n>> <para>\n>> Sets an external command to be invoked when a passphrase for\n>> decrypting an SSL file such as a private key needs to be obtained. By\n>> - default, this parameter is empty, which means the built-in prompting\n>> - mechanism is used.\n>> + default, this parameter is empty. When the server is using\n>> + <productname>OpenSSL</productname>, this means the built-in prompting\n>> + mechanism is used. When using <productname>NSS</productname>, there is\n>> + no default prompting so a blank callback will be used returning an\n>> + empty password.\n>> </para>\n> \n> Maybe we should point out here that this requires the database to not\n> require a password..? So if they have one, they need to set this, or\n> maybe we should provide a default one..\n\nI've added a sentence on not using a password for the cert database. 
I'm not\nsure if providing a default one is a good idea but it's no less insecure than\nhaving no password really..\n\n>> +++ b/doc/src/sgml/libpq.sgml\n>> +<synopsis>\n>> +PQsslKeyPassHook_nss_type PQgetSSLKeyPassHook_nss(void);\n>> +</synopsis>\n>> + </para>\n>> +\n>> + <para>\n>> + <function>PQgetSSLKeyPassHook_nss</function> has no effect unless the\n>> + server was compiled with <productname>nss</productname> support.\n>> + </para>\n> \n> We should try to be consistent- above should be NSS, not nss.\n\nFixed.\n\n>> + <listitem>\n>> + <para>\n>> + <productname>NSS</productname>: specifying the parameter is required\n>> + in case any password protected items are referenced in the\n>> + <productname>NSS</productname> database, or if the database itself\n>> + is password protected. If multiple different objects are password\n>> + protected, the same password is used for all.\n>> + </para>\n>> + </listitem>\n>> + </itemizedlist>\n> \n> Is this a statement about NSS databases (which I don't think it is) or\n> about the fact that we'll just use the password provided for all\n> attempts to decrypt something we need in the database?\n\nCorrect.\n\n> Assuming the\n> latter, seems like we could reword this to be a bit more clear.\n> \n> Maybe: \n> \n> All attempts to decrypt objects which are password protected in the\n> database will use this password.\n\nAgreed, fixed.\n\n>> @@ -2620,9 +2791,14 @@ void *PQsslStruct(const PGconn *conn, const char *struct_name);\n>> + For <productname>NSS</productname>, there is one struct available under\n>> + the name \"NSS\", and it returns a pointer to the\n>> + <productname>NSS</productname> <literal>PRFileDesc</literal>.\n> \n> ... SSL PRFileDesc associated with the connection, no?\n\nI was trying to be specific that it's an NSS-defined structure and not a\nPostgreSQL one which is returned. 
Fixed.\n\n>> +++ b/doc/src/sgml/runtime.sgml\n>> @@ -2552,6 +2583,89 @@ openssl x509 -req -in server.csr -text -days 365 \\\n>> </para>\n>> </sect2>\n>> \n>> + <sect2 id=\"nss-certificate-database\">\n>> + <title>NSS Certificate Databases</title>\n>> +\n>> + <para>\n>> + When using <productname>NSS</productname>, all certificates and keys must\n>> + be loaded into an <productname>NSS</productname> certificate database.\n>> + </para>\n>> +\n>> + <para>\n>> + To create a new <productname>NSS</productname> certificate database and\n>> + load the certificates created in <xref linkend=\"ssl-certificate-creation\" />,\n>> + use the following <productname>NSS</productname> commands:\n>> +<programlisting>\n>> +certutil -d \"sql:server.db\" -N --empty-password\n>> +certutil -d \"sql:server.db\" -A -n server.crt -i server.crt -t \"CT,C,C\"\n>> +certutil -d \"sql:server.db\" -A -n root.crt -i root.crt -t \"CT,C,C\"\n>> +</programlisting>\n>> + This will give the certificate the filename as the nickname identifier in\n>> + the database which is created as <filename>server.db</filename>.\n>> + </para>\n>> + <para>\n>> + Then load the server key, which require converting it to\n> \n> *requires\n\nFixed.\n\n>> Subject: [PATCH v30 6/9] nss: Support NSS in pgcrypto\n>> +++ b/doc/src/sgml/pgcrypto.sgml\n>> <row>\n>> <entry>Blowfish</entry>\n>> <entry>yes</entry>\n>> <entry>yes</entry>\n>> + <entry>yes</entry>\n>> </row>\n> \n> Maybe this should mention that it's with the built-in implementation as\n> blowfish isn't available from NSS?\n\nFixed by adding a Note item.\n\n>> <row>\n>> <entry>DES/3DES/CAST5</entry>\n>> <entry>no</entry>\n>> <entry>yes</entry>\n>> + <entry>yes</entry>\n>> + </row>\n> \n> Surely CAST5 from the above should be removed, since it's given its own\n> entry now?\n\nIndeed, fixed.\n\n>> @@ -1241,7 +1260,8 @@ gen_random_uuid() returns uuid\n>> <orderedlist>\n>> <listitem>\n>> <para>\n>> - Any digest algorithm <productname>OpenSSL</productname> supports\n>> + Any 
digest algorithm <productname>OpenSSL</productname> and\n>> + <productname>NSS</productname> supports\n>> is automatically picked up.\n> \n> *or? Maybe something more specific though- \"Any digest algorithm\n> included with the library that PostgreSQL is compiled with is\n> automatically picked up.\" ?\n\nGood point, thats better. Fixed.\n\n>> Subject: [PATCH v30 7/9] nss: Support NSS in sslinfo\n>> \n>> Since sslinfo to a large extent use the be_tls_* API this mostly\n> \n> *uses\n\nFixed.\n\n>> Subject: [PATCH v30 8/9] nss: Support NSS in cryptohash\n>> +++ b/src/common/cryptohash_nss.c\n>> +\t/*\n>> +\t * Initialize our own NSS context without a database backing it.\n>> +\t */\n>> +\tmemset(&params, 0, sizeof(params));\n>> +\tparams.length = sizeof(params);\n>> +\tstatus = NSS_NoDB_Init(\".\");\n> \n> We take some pains to use NSS_InitContext elsewhere.. Are we sure that\n> we should be using NSS_NoDB_Init here..?\n\nNo, we should probably be using NSS_InitContext. Will fix.\n\n> Just a, well, not so quick read-through. Generally it's looking pretty\n> good to me. Will see about playing with it this week.\n\nThanks again for reviewing, another version which addresses the remaining\nissues will be posted soon but I wanted to get this out to give further reviews\nsomething that properly works.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 23 Mar 2021 00:38:50 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 22 Mar 2021, at 00:49, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Thanks for the review! Below is a partial response, I haven't had time to\n> address all your review comments yet but I wanted to submit a rebased patchset\n> directly since the current version doesn't work after recent changes in the\n> tree. 
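[Editorial note: as a concrete illustration of how the pieces quoted in the message above would fit together, a server built against NSS might be configured along these lines. This is a hypothetical sketch based on the ssl_database parameter and the passphrase handling described in the quoted documentation patches; parameter names and defaults may still change as the patchset evolves.]

```
# postgresql.conf fragment (hypothetical; assumes a build configured with NSS)
ssl = on

# NSS certificate/key database holding the server certificate and key,
# per the ssl_database parameter added in the documentation patch
ssl_database = 'server.db'

# Only needed if the NSS database is password protected; per the quoted
# docs there is no built-in prompting with NSS, so an empty password is
# used when this is left unset
ssl_passphrase_command = 'cat /etc/postgresql/nssdb.pass'
```

Per the quoted config.sgml changes, ssl_database can only be set in postgresql.conf or on the server command line, like the other ssl_* settings.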
I will address the remaining comments tomorrow or the day after.\n\nGreat, thanks!\n\n> This rebase also includes a fix for pgtls_init which was sent offlist by Jacob.\n> The changes in pgtls_init can potentially be used to initialize the crypto\n> context for NSS to clean up this patch, Jacob is currently looking at that.\n\nAh, cool, sounds good.\n\n> > They aren't the same and it might not be\n> > clear what's going on if one was to somehow mix them (at least if pr_fd\n> > continues to sometimes be a void*, but I wonder why that's being\n> > done..? more on that later..).\n> \n> To paraphrase from a later in this email, there are collisions between nspr and\n> postgres on things like BITS_PER_BYTE, and there were also collisions on basic\n> types until I learned about NO_NSPR_10_SUPPORT. By moving the juggling of this\n> into common/nss.h we can use proper types without introducing that pollution\n> everywhere. I will address these places.\n\nAh, ok, and great, that sounds good.\n\n> >> +++ b/src/backend/libpq/be-secure-nss.c\n> > [...]\n> >> +/* default init hook can be overridden by a shared library */\n> >> +static void default_nss_tls_init(bool isServerStart);\n> >> +nss_tls_init_hook_type nss_tls_init_hook = default_nss_tls_init;\n> > \n> >> +static PRDescIdentity pr_id;\n> >> +\n> >> +static PRIOMethods pr_iomethods;\n> > \n> > Happy to be told I'm missing something, but the above two variables seem\n> > to only be used in init_iolayer.. is there a reason they're declared\n> > here instead of just being declared in that function?\n> \n> They must be there since NSPR doesn't copy these but reference them.\n\nAh, ok, interesting.\n\n> >> +\t/*\n> >> +\t * Set the fallback versions for the TLS protocol version range to a\n> >> +\t * combination of our minimal requirement and the library maximum. 
Error\n> >> +\t * messages should be kept identical to those in be-secure-openssl.c to\n> >> +\t * make translations easier.\n> >> +\t */\n> > \n> > Should we pull these error messages out into another header so that\n> > they're in one place to make sure they're kept consistent, if we really\n> > want to put the effort in to keep them the same..? I'm not 100% sure\n> > that it's actually necessary to do so, but defining these in one place\n> > would help maintain this if we want to. Also alright with just keeping\n> > the comment, not that big of a deal.\n> \n> It might make sense to pull them into common/nss.h, but seeing the error\n> message right there when reading the code does IMO make it clearer so it's a\n> doubleedged sword. Not sure what is the best option, but I'm not married to\n> the current solution so if there is consensus to pull them out somewhere I'm\n> happy to do so.\n\nMy thought was to put them into some common/ssl.h or something along\nthose lines but I don't see it as a big deal either way really. You\nmake a good point that having the error message there when reading the\ncode is nice.\n\n> > Maybe we should put a stake in the ground that says \"we only support\n> > back to version X of NSS\", test with that and a few more recent versions\n> > and the most recent, and then rip out anything that's needed for\n> > versions which are older than that? \n> \n> Yes, right now there is very little in the patch which caters for old versions,\n> the PR_Init call might be one of the few offenders. There has been discussion\n> upthread about settling for a required version, combining the insights learned\n> there with a survey of which versions are commonly available packaged.\n> \n> Once we settle on a version we can confirm if PR_Init is/isn't needed and\n> remove all traces of it if not.\n\nI don't really see this as all that hard to do- I'd suggest we look at\nwhat systems someone might reasonably deploy v14 on. 
To that end, I'd\nsay \"only systems which are presently supported\", so: RHEL7+, Debian 9+,\nUbuntu 16.04+. Looking at those, I see:\n\nUbuntu 16.04: 3.28.4\nRHEL6: v3.28.4\nDebian: 3.26.2\n\n> > I have a pretty hard time imagining that someone is going to want to build PG\n> > v14 w/ NSS 2.0 ...\n> \n> Let alone compiling 2.0 at all on a recent system..\n\nIndeed, and given the above, it seems entirely reasonable to make the\nrequirement be NSS v3+, no? I wouldn't be against making that even\ntighter if we thought it made sense to do so.\n\n> > Also- we don't seem to complain at all about a cipher being specified that we\n> > don't find? Guess I would think that we might want to throw a WARNING in such\n> > a case, but I could possibly be convinced otherwise.\n> \n> No, I think you're right, we should throw WARNING there or possibly even a\n> higher elevel. Should that be a COMMERROR even?\n\nI suppose the thought I was having was that we might want to allow some\nstring that covered all the OpenSSL and NSS ciphers that someone feels\ncomfortable with and we'd just ignore the ones that don't make sense for\nthe particular library we're currently built with. Making it a\nCOMMERROR seems like overkill and I'm not entirely sure we actually want\nany warning since we might then be constantly bleating about it.\n\n> > Kind of wonder just what happens with the current code, I'm guessing ciphercode\n> > is zero and therefore doesn't complain but also doesn't do what we want. I\n> > wonder if there's a way to test this?\n> \n> We could extend the test suite to set ciphers in postgresql.conf, I'll give it\n> a go.\n\nThat'd be great, thanks!\n\n> > I do think we should probably throw an error if we end up with *no*\n> > ciphers being set, which doesn't seem to be happening here..?\n> \n> Yeah, that should be a COMMERROR. 
Fixed.\n\nI do think it makes sense to throw a COMMERROR here since the connection\nis going to end up failing anyway.\n\n> >> +pg_ssl_read(PRFileDesc *fd, void *buf, PRInt32 amount, PRIntn flags,\n> >> +\t\t\tPRIntervalTime timeout)\n> >> +{\n> >> +\tPRRecvFN\tread_fn;\n> >> +\tPRInt32\t\tn_read;\n> >> +\n> >> +\tread_fn = fd->lower->methods->recv;\n> >> +\tn_read = read_fn(fd->lower, buf, amount, flags, timeout);\n> >> +\n> >> +\treturn n_read;\n> >> +}\n> >> +\n> >> +static PRInt32\n> >> +pg_ssl_write(PRFileDesc *fd, const void *buf, PRInt32 amount, PRIntn flags,\n> >> +\t\t\t PRIntervalTime timeout)\n> >> +{\n> >> +\tPRSendFN\tsend_fn;\n> >> +\tPRInt32\t\tn_write;\n> >> +\n> >> +\tsend_fn = fd->lower->methods->send;\n> >> +\tn_write = send_fn(fd->lower, buf, amount, flags, timeout);\n> >> +\n> >> +\treturn n_write;\n> >> +}\n> >> +\n> >> +static PRStatus\n> >> +pg_ssl_close(PRFileDesc *fd)\n> >> +{\n> >> +\t/*\n> >> +\t * Disconnect our private Port from the fd before closing out the stack.\n> >> +\t * (Debug builds of NSPR will assert if we do not.)\n> >> +\t */\n> >> +\tfd->secret = NULL;\n> >> +\treturn PR_GetDefaultIOMethods()->close(fd);\n> >> +}\n> > \n> > Regarding these, I find myself wondering how they're different from the\n> > defaults..? I mean, the above just directly called\n> > PR_GetDefaultIOMethods() to then call it's close() function- are the\n> > fd->lower_methods->recv/send not the default methods? I don't quite get\n> > what the point is from having our own callbacks here if they just do\n> > exactly what the defaults would do (or are there actually no defined\n> > defaults and you have to provide these..?).\n> \n> It's really just to cope with debug builds of NSPR which assert that fd->secret\n> is null before closing.\n\nAnd we have to override the recv/send functions for this too..? 
Sorry,\nmy comment wasn't just about the close() method but about the others\ntoo.\n\n> >> +\t/*\n> >> +\t * Return the underlying PRFileDesc which can be used to access\n> >> +\t * information on the connection details. There is no SSL context per se.\n> >> +\t */\n> >> +\tif (strcmp(struct_name, \"NSS\") == 0)\n> >> +\t\treturn conn->pr_fd;\n> >> +\treturn NULL;\n> >> +}\n> > \n> > Is there never a reason someone might want the pointer returned by\n> > NSS_InitContext? I don't know that there is but it might be something\n> > to consider (we could even possibly have our own structure returned by\n> > this function which includes both, maybe..?). Not sure if there's a\n> > sensible use-case for that or not just wanted to bring it up as it's\n> > something I asked myself while reading through this patch.\n> \n> Not sure I understand what you're asking for here, did you mean \"is there ever\n> a reason\"?\n\nEh, poor wording on my part. You're right, the question, reworded\nagain, was \"Would someone want to get the context returned by\nNSS_InitContext?\". If we think there's a reason that someone might want\nthat context then perhaps we should allow getting it, in addition to the\npr_fd. 
If there's really no reason to ever want the context from\nNSS_InitContext then what you have here where we're returning pr_fd is\nprobably fine.\n\n> >> diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c\n> >> index c601071838..7f10da3010 100644\n> >> --- a/src/interfaces/libpq/fe-secure.c\n> >> +++ b/src/interfaces/libpq/fe-secure.c\n> >> @@ -448,6 +448,27 @@ PQdefaultSSLKeyPassHook_OpenSSL(char *buf, int size, PGconn *conn)\n> >> }\n> >> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n> >> \n> >> +#ifndef USE_NSS\n> >> +\n> >> +PQsslKeyPassHook_nss_type\n> >> +PQgetSSLKeyPassHook_nss(void)\n> >> +{\n> >> +\treturn NULL;\n> >> +}\n> >> +\n> >> +void\n> >> +PQsetSSLKeyPassHook_nss(PQsslKeyPassHook_nss_type hook)\n> >> +{\n> >> +\treturn;\n> >> +}\n> >> +\n> >> +char *\n> >> +PQdefaultSSLKeyPassHook_nss(PK11SlotInfo * slot, PRBool retry, void *arg)\n> >> +{\n> >> +\treturn NULL;\n> >> +}\n> >> +#endif\t\t\t\t\t\t\t/* USE_NSS */\n> > \n> > Isn't this '!USE_NSS'?\n> \n> Technically it is, but using just /* USE_NSS */ is consistent with the rest of\n> blocks in the file.\n\nHrmpf. I guess it seems a bit confusing to me to have to go find the\nopening #ifndef to realize that it's actally !USE_NSS.. In other words,\nI would think we'd actually want to fix all of these, heh. I only\nactually see one case on a quick grep where it's wrong for USE_OPENSSL\nand so that doesn't seem like it's really a precedent and is more of a\nbug. We certainly say 'not OPENSSL' in one place today too and also\nhave a number of places where we have: #endif ... /* ! 
WHATEVER */.\n\n> >> diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h\n> >> index 0c9e95f1a7..f15af39222 100644\n> >> --- a/src/interfaces/libpq/libpq-int.h\n> >> +++ b/src/interfaces/libpq/libpq-int.h\n> >> @@ -383,6 +383,7 @@ struct pg_conn\n> >> \tchar\t *sslrootcert;\t/* root certificate filename */\n> >> \tchar\t *sslcrl;\t\t\t/* certificate revocation list filename */\n> >> \tchar\t *sslcrldir;\t\t/* certificate revocation list directory name */\n> >> +\tchar\t *cert_database;\t/* NSS certificate/key database */\n> >> \tchar\t *requirepeer;\t/* required peer credentials for local sockets */\n> >> \tchar\t *gssencmode;\t\t/* GSS mode (require,prefer,disable) */\n> >> \tchar\t *krbsrvname;\t\t/* Kerberos service name */\n> >> @@ -507,6 +508,28 @@ struct pg_conn\n> >> \t\t\t\t\t\t\t\t * OpenSSL version changes */\n> >> #endif\n> >> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n> >> +\n> >> +/*\n> >> + * The NSS/NSPR specific types aren't used to avoid pulling in the required\n> >> + * headers here, as they are causing conflicts with PG definitions.\n> >> + */\n> > \n> > I'm a bit confused- what are the conflicts being caused here..?\n> > Certainly under USE_OPENSSL we use the actual OpenSSL types..\n> \n> It's referring to collisions with for example BITS_PER_BYTE which is defined\n> both by postgres and nspr. Since writing this I've introduced src/common/nss.h\n> to handle it in a single place, so we can indeed use the proper types without\n> polluting the file. Fixed.\n\nGreat, thanks!\n\n> >> Subject: [PATCH v30 2/9] Refactor SSL testharness for multiple library\n> >> \n> >> The SSL testharness was fully tied to OpenSSL in the way the server was\n> >> set up and reconfigured. 
This refactors the SSLServer module into a SSL\n> >> library agnostic SSL/Server module which in turn use SSL/Backend/<lib>\n> >> modules for the implementation details.\n> >> \n> >> No changes are done to the actual tests, this only change how setup and\n> >> teardown is performed.\n> > \n> > Presumably this could be committed ahead of the main NSS support?\n> \n> Correct, I think this has merits even if NSS support is ultimately rejected.\n\nOk- could you break it out on to its own thread and I'll see about\ncommitting it soonish, to get it out of the way?\n\n> >> Subject: [PATCH v30 5/9] nss: Documentation\n> >> +++ b/doc/src/sgml/acronyms.sgml\n> >> @@ -684,6 +717,16 @@\n> >> </listitem>\n> >> </varlistentry>\n> >> \n> >> + <varlistentry>\n> >> + <term><acronym>TLS</acronym></term>\n> >> + <listitem>\n> >> + <para>\n> >> + <ulink url=\"https://en.wikipedia.org/wiki/Transport_Layer_Security\">\n> >> + Transport Layer Security</ulink>\n> >> + </para>\n> >> + </listitem>\n> >> + </varlistentry>\n> > \n> > We don't have this already..? Surely we should..\n> \n> We really should, especially since we've had <acronym>TLS</acronym> in\n> config.sgml since 2014 (c6763156589). That's another small piece that could be\n> committed on it's own to cut down the size of this patchset (even if only by a\n> tiny amount).\n\nDitto on this. :)\n\n> >> @@ -1288,7 +1305,9 @@ include_dir 'conf.d'\n> >> connections using TLS version 1.2 and lower are affected. There is\n> >> currently no setting that controls the cipher choices used by TLS\n> >> version 1.3 connections. The default value is\n> >> - <literal>HIGH:MEDIUM:+3DES:!aNULL</literal>. The default is usually a\n> >> + <literal>HIGH:MEDIUM:+3DES:!aNULL</literal> for servers which have\n> >> + been built with <productname>OpenSSL</productname> as the\n> >> + <acronym>SSL</acronym> library. 
The default is usually a\n> >> reasonable choice unless you have specific security requirements.\n> >> </para>\n> > \n> > Shouldn't we say something here wrt NSS?\n> \n> We should, but I'm not entirely what just yet. Need to revisit that.\n\nNot sure if we really want to do this but at least with ssllabs.com,\npostgresql.org gets an 'A' rating with this set:\n\nECDHE-ECDSA-CHACHA20-POLY1305\nECDHE-RSA-CHACHA20-POLY1305\nECDHE-ECDSA-AES128-GCM-SHA256\nECDHE-RSA-AES128-GCM-SHA256\nECDHE-ECDSA-AES256-GCM-SHA384\nECDHE-RSA-AES256-GCM-SHA384\nDHE-RSA-AES128-GCM-SHA256\nDHE-RSA-AES256-GCM-SHA384\nECDHE-ECDSA-AES128-SHA256\nECDHE-RSA-AES128-SHA256\nECDHE-ECDSA-AES128-SHA\nECDHE-RSA-AES256-SHA384\nECDHE-RSA-AES128-SHA\nECDHE-ECDSA-AES256-SHA384\nECDHE-ECDSA-AES256-SHA\nECDHE-RSA-AES256-SHA\nDHE-RSA-AES128-SHA256\nDHE-RSA-AES128-SHA\nDHE-RSA-AES256-SHA256\nDHE-RSA-AES256-SHA\nECDHE-ECDSA-DES-CBC3-SHA\nECDHE-RSA-DES-CBC3-SHA\nEDH-RSA-DES-CBC3-SHA\nAES128-GCM-SHA256\nAES256-GCM-SHA384\nAES128-SHA256\nAES256-SHA256\nAES128-SHA\nAES256-SHA\nDES-CBC3-SHA\n!DSS\n\nWhich also seems kinda close to what the default when built with OpenSSL\nends up being? Thought the ssllabs report does list which ones it\nthinks are weak and so we might consider excluding those by default too:\n\nhttps://www.ssllabs.com/ssltest/analyze.html?d=postgresql.org&s=2a02%3a16a8%3adc51%3a0%3a0%3a0%3a0%3a50\n\n> >> @@ -1490,8 +1509,11 @@ include_dir 'conf.d'\n> >> <para>\n> >> Sets an external command to be invoked when a passphrase for\n> >> decrypting an SSL file such as a private key needs to be obtained. By\n> >> - default, this parameter is empty, which means the built-in prompting\n> >> - mechanism is used.\n> >> + default, this parameter is empty. When the server is using\n> >> + <productname>OpenSSL</productname>, this means the built-in prompting\n> >> + mechanism is used. 
When using <productname>NSS</productname>, there is\n> >> + no default prompting so a blank callback will be used returning an\n> >> + empty password.\n> >> </para>\n> > \n> > Maybe we should point out here that this requires the database to not\n> > require a password..? So if they have one, they need to set this, or\n> > maybe we should provide a default one..\n> \n> I've added a sentence on not using a password for the cert database. I'm not\n> sure if providing a default one is a good idea but it's no less insecure than\n> having no password really..\n\nI was meaning a default callback to prompt, not sure if that was clear.\n\n> > Just a, well, not so quick read-through. Generally it's looking pretty\n> > good to me. Will see about playing with it this week.\n> \n> Thanks again for reviewing, another version which addresses the remaining\n> issues will be posted soon but I wanted to get this out to give further reviews\n> something that properly works.\n\nFantastic, thanks again!\n\nStephen", "msg_date": "Tue, 23 Mar 2021 15:04:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2021-03-23 at 00:38 +0100, Daniel Gustafsson wrote:\r\n> This rebase also includes a fix for pgtls_init which was sent offlist by Jacob.\r\n> The changes in pgtls_init can potentially be used to initialize the crypto\r\n> context for NSS to clean up this patch, Jacob is currently looking at that.\r\n\r\nI'm having a hell of a time trying to get the context stuff working.\r\nFindings so far (I have patches in progress for many of these, but it's\r\nall blowing up because of the last problem):\r\n\r\nNSS_INIT_NOROOTINIT is hardcoded for NSS_InitContext(), so we probably\r\ndon't need to pass it explicitly. 
NSS_INIT_PK11RELOAD is apparently\r\nmeant to hack around libraries that do their own PKCS loading; do we\r\nneed it?\r\n\r\nNSS_ShutdownContext() can (and does) fail if we've leaked handles to\r\nobjects, so we need to check its return value. Once this happens,\r\nfuture NSS_InitContext() calls behave poorly. Currently we leak the\r\npr_fd as well as a handful of server_cert handles.\r\n\r\nNSS_NoDB_Init() is going to pin NSS in memory. For the backend this is\r\nprobably okay, but for libpq clients that's probably not what we want.\r\n\r\nThe first database loaded by NSS_InitContext() becomes the \"default\"\r\ndatabase. This is what I'm currently hung up on. I can't figure out how\r\nto get NSS to use the database that was loaded for the current\r\nconnection, so in my local patches for the issues above, client\r\ncertificates fail to load. I can work around it temporarily for the\r\ntests, but this will be a problem if any libpq clients load up multiple\r\nindependent databases for use with separate connections. Anyone know if\r\nthis is a supported use case for NSS?\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Mar 2021 00:05:35 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Mar 24, 2021 at 12:05:35AM +0000, Jacob Champion wrote:\n> The first database loaded by NSS_InitContext() becomes the \"default\"\n> database. This is what I'm currently hung up on. I can't figure out how\n> to get NSS to use the database that was loaded for the current\n> connection, so in my local patches for the issues above, client\n> certificates fail to load. I can work around it temporarily for the\n> tests, but this will be a problem if any libpq clients load up multiple\n> independent databases for use with separate connections. Anyone know if\n> this is a supported use case for NSS?\n\nAre you referring to the case of threading here? 
This should be a\nsupported case, as threads created by an application through libpq\ncould perfectly use completely different connection strings.\n--\nMichael", "msg_date": "Wed, 24 Mar 2021 09:28:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Mar 23, 2021 at 12:38:50AM +0100, Daniel Gustafsson wrote:\n> Thanks again for reviewing, another version which addresses the remaining\n> issues will be posted soon but I wanted to get this out to give further reviews\n> something that properly works.\n\nI have been looking at the infrastructure of the tests, patches 0002\n(some refactoring) and 0003 (more refactoring with tests for NSS), and\nI am a bit confused by its state.\n\nFirst, I think that the split is not completely clear. For example,\npatch 0003 has changes for OpenSSL.pm and Server.pm, but wouldn't it\nbe better to have all the refactoring infrastructure only in 0002,\nwith 0003 introducing only the NSS pieces for its internal data and\nNSS.pm?\n\n+ keyfile => 'server-password',\n+ nssdatabase => 'server-cn-only.crt__server-password.key.db',\n+ passphrase_cmd => 'echo secret1',\n001_ssltests.pl and 002_scram.pl have NSS-related parameters, which\ndoes not look like a clean separation to me as there are OpenSSL tests\nthat use some NSS parts, and the main scripts should remain neutral in\nterms setting contents, including only variables and callbacks that\nshould be filled specifically for each SSL implementation, no? 
Aren't\nwe missing a second piece here with a set of callbacks for the\nper-library test paths then?\n\n+ if (defined($openssl))\n+ {\n+ copy_files(\"ssl/server-*.crt\", $pgdata);\n+ copy_files(\"ssl/server-*.key\", $pgdata);\n+ chmod(0600, glob \"$pgdata/server-*.key\") or die $!;\n+ copy_files(\"ssl/root+client_ca.crt\", $pgdata);\n+ copy_files(\"ssl/root_ca.crt\", $pgdata);\n+ copy_files(\"ssl/root+client.crl\", $pgdata);\n+ mkdir(\"$pgdata/root+client-crldir\");\n+ copy_files(\"ssl/root+client-crldir/*\",\n\"$pgdata/root+client-crldir/\");\n+ }\n+ elsif (defined($nss))\n+ {\n+ RecursiveCopy::copypath(\"ssl/nss\", $pgdata . \"/nss\") if -e\n\"ssl/nss\";\n+ }\nThis had better be in its own callback, for example.\n--\nMichael", "msg_date": "Wed, 24 Mar 2021 12:54:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-03-24 at 09:28 +0900, Michael Paquier wrote:\r\n> On Wed, Mar 24, 2021 at 12:05:35AM +0000, Jacob Champion wrote:\r\n> > I can work around it temporarily for the\r\n> > tests, but this will be a problem if any libpq clients load up multiple\r\n> > independent databases for use with separate connections. Anyone know if\r\n> > this is a supported use case for NSS?\r\n> \r\n> Are you referring to the case of threading here? This should be a\r\n> supported case, as threads created by an application through libpq\r\n> could perfectly use completely different connection strings.\r\nRight, but to clarify -- I was asking if *NSS* supports loading and\r\nusing separate certificate databases as part of its API. 
It seems like\r\nthe internals make it possible, but I don't see the public interfaces\r\nto actually use those internals.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Mar 2021 15:54:52 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings Jacob,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Wed, 2021-03-24 at 09:28 +0900, Michael Paquier wrote:\n> > On Wed, Mar 24, 2021 at 12:05:35AM +0000, Jacob Champion wrote:\n> > > I can work around it temporarily for the\n> > > tests, but this will be a problem if any libpq clients load up multiple\n> > > independent databases for use with separate connections. Anyone know if\n> > > this is a supported use case for NSS?\n> > \n> > Are you referring to the case of threading here? This should be a\n> > supported case, as threads created by an application through libpq\n> > could perfectly use completely different connection strings.\n> Right, but to clarify -- I was asking if *NSS* supports loading and\n> using separate certificate databases as part of its API. It seems like\n> the internals make it possible, but I don't see the public interfaces\n> to actually use those internals.\n\nYes, this is done using SECMOD_OpenUserDB, see:\n\nhttps://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/PKCS11_Functions#SECMOD_OpenUserDB\n\nalso there's info here:\n\nhttps://groups.google.com/g/mozilla.dev.tech.crypto/c/Xz6Emfcue0E\n\nWe should document that, as mentioned in the link above, the NSS find\nfunctions will find certs in all the opened databases. 
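For illustration, the per-database lookup described above could be sketched roughly as follows. This is an untested, pseudocode-level sketch only: the call signatures follow the NSS docs linked above, while conn->ssl_database and conn->sslcert are assumed names from the patchset, and the moduleSpec format and error handling would need verifying against NSS itself.

```c
/*
 * Pseudocode sketch only -- not compiled.  Scan just the certs in the
 * database opened for this connection, instead of relying on the global
 * NSS search across every opened database.
 */
PK11SlotInfo *slot;
CERTCertList *candidates;
CERTCertListNode *node;
char		spec[MAXPGPATH + 32];

snprintf(spec, sizeof(spec), "configDir='%s'", conn->ssl_database);
slot = SECMOD_OpenUserDB(spec);
if (slot == NULL)
	return -1;

candidates = PK11_ListCertsInSlot(slot);
for (node = CERT_LIST_HEAD(candidates); !CERT_LIST_END(node, candidates);
	 node = CERT_LIST_NEXT(node))
{
	if (strcmp(node->cert->nickname, conn->sslcert) == 0)
		break;					/* use node->cert for the handshake */
}
CERT_DestroyCertList(candidates);
```

Whether the slot can later be closed with SECMOD_CloseUserDB depends on the internal-slot deduplication behavior, so a close would need to be guarded accordingly.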
As this would\nall be under one application which is linked against libpq and passing\nin different values for ssl_database for different connections, this\ndoesn't seem like it's really that much of an issue.\n\nThanks!\n\nStephen", "msg_date": "Wed, 24 Mar 2021 13:00:36 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-03-24 at 13:00 -0400, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > Right, but to clarify -- I was asking if *NSS* supports loading and\r\n> > using separate certificate databases as part of its API. It seems like\r\n> > the internals make it possible, but I don't see the public interfaces\r\n> > to actually use those internals.\r\n> \r\n> Yes, this is done using SECMOD_OpenUserDB, see:\r\n> \r\n> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/PKCS11_Functions#SECMOD_OpenUserDB\r\n\r\nAh, I had assumed that the DB-specific InitContext was using this\r\nbehind the scenes; apparently not. I will give that a try, thanks!\r\n\r\n> also there's info here:\r\n> \r\n> https://groups.google.com/g/mozilla.dev.tech.crypto/c/Xz6Emfcue0E\r\n> \r\n> We should document that, as mentioned in the link above, the NSS find\r\n> functions will find certs in all the opened databases. 
As this would\r\n> all be under one application which is linked against libpq and passing\r\n> in different values for ssl_database for different connections, this\r\n> doesn't seem like it's really that much of an issue.\r\n\r\nI could see this being a problem if two client certificate nicknames\r\ncollide across multiple in-use databases, maybe?\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Mar 2021 17:41:01 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Wed, 2021-03-24 at 13:00 -0400, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > > Right, but to clarify -- I was asking if *NSS* supports loading and\n> > > using separate certificate databases as part of its API. It seems like\n> > > the internals make it possible, but I don't see the public interfaces\n> > > to actually use those internals.\n> > \n> > Yes, this is done using SECMOD_OpenUserDB, see:\n> > \n> > https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/PKCS11_Functions#SECMOD_OpenUserDB\n> \n> Ah, I had assumed that the DB-specific InitContext was using this\n> behind the scenes; apparently not. I will give that a try, thanks!\n> \n> > also there's info here:\n> > \n> > https://groups.google.com/g/mozilla.dev.tech.crypto/c/Xz6Emfcue0E\n> > \n> > We should document that, as mentioned in the link above, the NSS find\n> > functions will find certs in all the opened databases. 
As this would\n> > all be under one application which is linked against libpq and passing\n> > in different values for ssl_database for different connections, this\n> > doesn't seem like it's really that much of an issue.\n> \n> I could see this being a problem if two client certificate nicknames\n> collide across multiple in-use databases, maybe?\n\nRight, in such a case either cert might get returned and it's possible\nthat the \"wrong\" one is returned and therefore the connection would end\nup failing, assuming that they aren't actually the same and just happen\nto be in both.\n\nSeems like we could use SECMOD_OpenUserDB() and then pass the result\nfrom that into PK11_ListCertsInSlot() and scan through the certs in just\nthe specified database to find the one we're looking for if we really\nfeel compelled to try and address this risk. I've reached out to the\nNSS folks to see if they have any thoughts about the best way to address\nthis.\n\nThanks,\n\nStephen", "msg_date": "Wed, 24 Mar 2021 14:10:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 24 Mar 2021, at 04:54, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 23, 2021 at 12:38:50AM +0100, Daniel Gustafsson wrote:\n>> Thanks again for reviewing, another version which addresses the remaining\n>> issues will be posted soon but I wanted to get this out to give further reviews\n>> something that properly works.\n> \n> I have been looking at the infrastructure of the tests, patches 0002\n> (some refactoring) and 0003 (more refactoring with tests for NSS), and\n> I am a bit confused by its state.\n> \n> First, I think that the split is not completely clear. 
For example,\n> patch 0003 has changes for OpenSSL.pm and Server.pm, but wouldn't it\n> be better to have all the refactoring infrastructure only in 0002,\n> with 0003 introducing only the NSS pieces for its internal data and\n> NSS.pm?\n\nYes. Juggling a patchset of this size is error-prone. This is why I opened the\nseparate thread for this where the patch can be held apart cleaner, so let's\ntake this discussion over there. I will post an updated patch there shortly.\n\n> + keyfile => 'server-password',\n> + nssdatabase => 'server-cn-only.crt__server-password.key.db',\n> + passphrase_cmd => 'echo secret1',\n> 001_ssltests.pl and 002_scram.pl have NSS-related parameters, which\n> does not look like a clean separation to me as there are OpenSSL tests\n> that use some NSS parts, and the main scripts should remain neutral in\n> terms setting contents, including only variables and callbacks that\n> should be filled specifically for each SSL implementation, no? Aren't\n> we missing a second piece here with a set of callbacks for the\n> per-library test paths then?\n\nWell, then again, keyfile is an OpenSSL specific parameter, it just happens to\nbe named quite neutrally. I'm not sure how to best express the certificate and\nkey requirements of a test since the testcase is the source of truth in terms\nof what it requires. If we introduce a standard set of cert/keys which all\nbackends are required to supply, we could refer to those. Tests that need\nsomething more specific can then go into 00X_<library>.pl. There is a balance\nto strike though, there is a single backend now with at most one on the horizon\nwhich is yet to be decided upon, making it too generic may end up making test\nwriting overcomplicated. 
Do you have any concrete ideas?\n\n> + if (defined($openssl))\n> + {\n> + copy_files(\"ssl/server-*.crt\", $pgdata);\n> + copy_files(\"ssl/server-*.key\", $pgdata);\n> + chmod(0600, glob \"$pgdata/server-*.key\") or die $!;\n> + copy_files(\"ssl/root+client_ca.crt\", $pgdata);\n> + copy_files(\"ssl/root_ca.crt\", $pgdata);\n> + copy_files(\"ssl/root+client.crl\", $pgdata);\n> + mkdir(\"$pgdata/root+client-crldir\");\n> + copy_files(\"ssl/root+client-crldir/*\",\n> \"$pgdata/root+client-crldir/\");\n> + }\n> + elsif (defined($nss))\n> + {\n> + RecursiveCopy::copypath(\"ssl/nss\", $pgdata . \"/nss\") if -e\n> \"ssl/nss\";\n> + }\n> This had better be in its own callback, for example.\n\nYes, this one is a clearer case, fixed in the v2 patch which will be posted on\nthe separate thread.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 25 Mar 2021 00:00:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-03-24 at 14:10 -0400, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > I could see this being a problem if two client certificate nicknames\r\n> > collide across multiple in-use databases, maybe?\r\n> \r\n> Right, in such a case either cert might get returned and it's possible\r\n> that the \"wrong\" one is returned and therefore the connection would end\r\n> up failing, assuming that they aren't actually the same and just happen\r\n> to be in both.\r\n> \r\n> Seems like we could use SECMOD_OpenUserDB() and then pass the result\r\n> from that into PK11_ListCertsInSlot() and scan through the certs in just\r\n> the specified database to find the one we're looking for if we really\r\n> feel compelled to try and address this risk. 
I've reached out to the\r\n> NSS folks to see if they have any thoughts about the best way to address\r\n> this.\r\n\r\nSome additional findings (NSS 3.63), please correct me if I've made any mistakes:\r\n\r\nThe very first NSSInitContext created is special. If it contains a database, that database will be considered part of the \"internal\" slot and its certificates can be referenced directly by nickname. If it doesn't have a database, the internal slot has no certificates, and it will continue to have zero certificates until NSS is completely shut down and reinitialized with a new \"first\" context.\r\n\r\nDatabases that are opened *after* the first one are given their own separate slots. Any certificates that are part of those databases seemingly can't be referenced directly by nickname. They have to be prefixed by their token name -- a name which you don't have if you used NSS_InitContext() to create the database. You have to use SECMOD_OpenUserDB() instead. This explains some strange failures I was seeing in local testing, where the order of InitContext determined whether our client certificate selection succeeded or failed.\r\n\r\nIf you SECMOD_OpenUserDB() a database that is identical to the first (internal) database, NSS deduplicates for you and just returns the internal slot. Which seems like it's helpful, except you're not allowed to close that database, and you have to know not to close it by checking to see whether that slot is the \"internal key slot\". It appears to remain open until NSS is shut down entirely.\r\nBut if you open a database that is *not* the magic internal database,\r\nand then open a duplicate of that one, NSS creates yet another new slot\r\nfor the duplicate. So SECMOD_OpenUserDB() may or may not be a resource\r\nhog, depending on the global state of the process at the time libpq\r\nopens its first connection. 
We won't be able to control what the parent\r\napplication will do before loading us up.\r\n\r\nIt also doesn't look like any of the SECMOD_* machinery that we're\r\nlooking at is thread-safe, but I'd really like to be wrong...\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 24 Mar 2021 23:56:24 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 23 Mar 2021, at 20:04, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> * Daniel Gustafsson (daniel@yesql.se) wrote:\n>>> On 22 Mar 2021, at 00:49, Stephen Frost <sfrost@snowman.net> wrote:\n>> \n>> Thanks for the review! Below is a partial response, I haven't had time to\n>> address all your review comments yet but I wanted to submit a rebased patchset\n>> directly since the current version doesn't work after recent changes in the\n>> tree. I will address the remaining comments tomorrow or the day after.\n> \n> Great, thanks!\n> \n>> This rebase also includes a fix for pgtls_init which was sent offlist by Jacob.\n>> The changes in pgtls_init can potentially be used to initialize the crypto\n>> context for NSS to clean up this patch, Jacob is currently looking at that.\n> \n> Ah, cool, sounds good.\n> \n>>> They aren't the same and it might not be\n>>> clear what's going on if one was to somehow mix them (at least if pr_fd\n>>> continues to sometimes be a void*, but I wonder why that's being\n>>> done..? more on that later..).\n>> \n>> To paraphrase from a later in this email, there are collisions between nspr and\n>> postgres on things like BITS_PER_BYTE, and there were also collisions on basic\n>> types until I learned about NO_NSPR_10_SUPPORT. By moving the juggling of this\n>> into common/nss.h we can use proper types without introducing that pollution\n>> everywhere. 
I will address these places.\n> \n> Ah, ok, and great, that sounds good.\n\n>>>> +\t/*\n>>>> +\t * Set the fallback versions for the TLS protocol version range to a\n>>>> +\t * combination of our minimal requirement and the library maximum. Error\n>>>> +\t * messages should be kept identical to those in be-secure-openssl.c to\n>>>> +\t * make translations easier.\n>>>> +\t */\n>>> \n>>> Should we pull these error messages out into another header so that\n>>> they're in one place to make sure they're kept consistent, if we really\n>>> want to put the effort in to keep them the same..? I'm not 100% sure\n>>> that it's actually necessary to do so, but defining these in one place\n>>> would help maintain this if we want to. Also alright with just keeping\n>>> the comment, not that big of a deal.\n>> \n>> It might make sense to pull them into common/nss.h, but seeing the error\n>> message right there when reading the code does IMO make it clearer so it's a\n>> doubleedged sword. Not sure what is the best option, but I'm not married to\n>> the current solution so if there is consensus to pull them out somewhere I'm\n>> happy to do so.\n> \n> My thought was to put them into some common/ssl.h or something along\n> those lines but I don't see it as a big deal either way really. You\n> make a good point that having the error message there when reading the\n> code is nice.\n\nThinking more on this, I think my vote will be to keep them duplicated in the\ncode for readability. Unless there are strong feelings against I think we at\nleast should start there.\n\n>>> Maybe we should put a stake in the ground that says \"we only support\n>>> back to version X of NSS\", test with that and a few more recent versions\n>>> and the most recent, and then rip out anything that's needed for\n>>> versions which are older than that? \n>> \n>> Yes, right now there is very little in the patch which caters for old versions,\n>> the PR_Init call might be one of the few offenders. 
There has been discussion\n>> upthread about settling for a required version, combining the insights learned\n>> there with a survey of which versions are commonly available packaged.\n>> \n>> Once we settle on a version we can confirm if PR_Init is/isn't needed and\n>> remove all traces of it if not.\n> \n> I don't really see this as all that hard to do- I'd suggest we look at\n> what systems someone might reasonably deploy v14 on. To that end, I'd\n> say \"only systems which are presently supported\", so: RHEL7+, Debian 9+,\n> Ubuntu 16.04+.\n\nSounds reasonable.\n\n> Looking at those, I see:\n> \n> Ubuntu 16.04: 3.28.4\n> RHEL6: v3.28.4\n> Debian: 3.26.2\n\nI assume these have matching NSPR versions placing the Debian 9 NSPR package as\nthe lowest required version for that? \n\n>>> I have a pretty hard time imagining that someone is going to want to build PG\n>>> v14 w/ NSS 2.0 ...\n>> \n>> Let alone compiling 2.0 at all on a recent system..\n> \n> Indeed, and given the above, it seems entirely reasonable to make the\n> requirement be NSS v3+, no? I wouldn't be against making that even\n> tighter if we thought it made sense to do so.\n\nI think anything but doing that would be incredibly unreasonable.\n\n>>> Also- we don't seem to complain at all about a cipher being specified that we\n>>> don't find? Guess I would think that we might want to throw a WARNING in such\n>>> a case, but I could possibly be convinced otherwise.\n>> \n>> No, I think you're right, we should throw WARNING there or possibly even a\n>> higher elevel. Should that be a COMMERROR even?\n> \n> I suppose the thought I was having was that we might want to allow some\n> string that covered all the OpenSSL and NSS ciphers that someone feels\n> comfortable with and we'd just ignore the ones that don't make sense for\n> the particular library we're currently built with. 
Making it a\n> COMMERROR seems like overkill and I'm not entirely sure we actually want\n> any warning since we might then be constantly bleating about it.\n\nRight, with a string like that we'd induce WARNING fatigue quickly. Catching\nthe case of *no* ciphers enabled with a COMMERROR is going some way towards\nbeing helpful to the user in debugging the failed connection here.\n\n>>>> +pg_ssl_read(PRFileDesc *fd, void *buf, PRInt32 amount, PRIntn flags,\n>>>> +\t\t\tPRIntervalTime timeout)\n>>>> +{\n>>>> +\tPRRecvFN\tread_fn;\n>>>> +\tPRInt32\t\tn_read;\n>>>> +\n>>>> +\tread_fn = fd->lower->methods->recv;\n>>>> +\tn_read = read_fn(fd->lower, buf, amount, flags, timeout);\n>>>> +\n>>>> +\treturn n_read;\n>>>> +}\n>>>> +\n>>>> +static PRInt32\n>>>> +pg_ssl_write(PRFileDesc *fd, const void *buf, PRInt32 amount, PRIntn flags,\n>>>> +\t\t\t PRIntervalTime timeout)\n>>>> +{\n>>>> +\tPRSendFN\tsend_fn;\n>>>> +\tPRInt32\t\tn_write;\n>>>> +\n>>>> +\tsend_fn = fd->lower->methods->send;\n>>>> +\tn_write = send_fn(fd->lower, buf, amount, flags, timeout);\n>>>> +\n>>>> +\treturn n_write;\n>>>> +}\n>>>> +\n>>>> +static PRStatus\n>>>> +pg_ssl_close(PRFileDesc *fd)\n>>>> +{\n>>>> +\t/*\n>>>> +\t * Disconnect our private Port from the fd before closing out the stack.\n>>>> +\t * (Debug builds of NSPR will assert if we do not.)\n>>>> +\t */\n>>>> +\tfd->secret = NULL;\n>>>> +\treturn PR_GetDefaultIOMethods()->close(fd);\n>>>> +}\n>>> \n>>> Regarding these, I find myself wondering how they're different from the\n>>> defaults..? I mean, the above just directly called\n>>> PR_GetDefaultIOMethods() to then call it's close() function- are the\n>>> fd->lower_methods->recv/send not the default methods? 
I don't quite get\n>>> what the point is from having our own callbacks here if they just do\n>>> exactly what the defaults would do (or are there actually no defined\n>>> defaults and you have to provide these..?).\n>> \n>> It's really just to cope with debug builds of NSPR which assert that fd->secret\n>> is null before closing.\n> \n> And we have to override the recv/send functions for this too..? Sorry,\n> my comment wasn't just about the close() method but about the others\n> too.\n\nAh, no we can ditch the .send and .recv functions and stick with the default\nbuilt-ins, I just confirmed this and removed them. I think they are leftovers\nfrom when I injected debug code there during development, they were as you say\ncopies of the default.\n\n>>>> +\t/*\n>>>> +\t * Return the underlying PRFileDesc which can be used to access\n>>>> +\t * information on the connection details. There is no SSL context per se.\n>>>> +\t */\n>>>> +\tif (strcmp(struct_name, \"NSS\") == 0)\n>>>> +\t\treturn conn->pr_fd;\n>>>> +\treturn NULL;\n>>>> +}\n>>> \n>>> Is there never a reason someone might want the pointer returned by\n>>> NSS_InitContext? I don't know that there is but it might be something\n>>> to consider (we could even possibly have our own structure returned by\n>>> this function which includes both, maybe..?). Not sure if there's a\n>>> sensible use-case for that or not just wanted to bring it up as it's\n>>> something I asked myself while reading through this patch.\n>> \n>> Not sure I understand what you're asking for here, did you mean \"is there ever\n>> a reason\"?\n> \n> Eh, poor wording on my part. You're right, the question, reworded\n> again, was \"Would someone want to get the context returned by\n> NSS_InitContext?\". If we think there's a reason that someone might want\n> that context then perhaps we should allow getting it, in addition to the\n> pr_fd. 
If there's really no reason to ever want the context from\n> NSS_InitContext then what you have here where we're returning pr_fd is\n> probably fine.\n\nI can't think of any reason, maybe Jacob who has been knee-deep in NSS contexts\nhave insights which tell a different story?\n\n>>>> diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c\n>>>> index c601071838..7f10da3010 100644\n>>>> --- a/src/interfaces/libpq/fe-secure.c\n>>>> +++ b/src/interfaces/libpq/fe-secure.c\n>>>> @@ -448,6 +448,27 @@ PQdefaultSSLKeyPassHook_OpenSSL(char *buf, int size, PGconn *conn)\n>>>> }\n>>>> #endif\t\t\t\t\t\t\t/* USE_OPENSSL */\n>>>> \n>>>> +#ifndef USE_NSS\n>>>> +\n>>>> +PQsslKeyPassHook_nss_type\n>>>> +PQgetSSLKeyPassHook_nss(void)\n>>>> +{\n>>>> +\treturn NULL;\n>>>> +}\n>>>> +\n>>>> +void\n>>>> +PQsetSSLKeyPassHook_nss(PQsslKeyPassHook_nss_type hook)\n>>>> +{\n>>>> +\treturn;\n>>>> +}\n>>>> +\n>>>> +char *\n>>>> +PQdefaultSSLKeyPassHook_nss(PK11SlotInfo * slot, PRBool retry, void *arg)\n>>>> +{\n>>>> +\treturn NULL;\n>>>> +}\n>>>> +#endif\t\t\t\t\t\t\t/* USE_NSS */\n>>> \n>>> Isn't this '!USE_NSS'?\n>> \n>> Technically it is, but using just /* USE_NSS */ is consistent with the rest of\n>> blocks in the file.\n> \n> Hrmpf. I guess it seems a bit confusing to me to have to go find the\n> opening #ifndef to realize that it's actally !USE_NSS.. In other words,\n> I would think we'd actually want to fix all of these, heh. I only\n> actually see one case on a quick grep where it's wrong for USE_OPENSSL\n> and so that doesn't seem like it's really a precedent and is more of a\n> bug. We certainly say 'not OPENSSL' in one place today too and also\n> have a number of places where we have: #endif ... /* ! WHATEVER */.\n\nNo disagreement from me. 
To cut down the size of this patchset however I\npropose that we tackle this separately and leave this as is in this thread\nsince it's in line with the rest of the file (for now).\n\n>>>> Subject: [PATCH v30 2/9] Refactor SSL testharness for multiple library\n>>>> \n>>>> The SSL testharness was fully tied to OpenSSL in the way the server was\n>>>> set up and reconfigured. This refactors the SSLServer module into a SSL\n>>>> library agnostic SSL/Server module which in turn use SSL/Backend/<lib>\n>>>> modules for the implementation details.\n>>>> \n>>>> No changes are done to the actual tests, this only change how setup and\n>>>> teardown is performed.\n>>> \n>>> Presumably this could be committed ahead of the main NSS support?\n>> \n>> Correct, I think this has merits even if NSS support is ultimately rejected.\n> \n> Ok- could you break it out on to its own thread and I'll see about\n> committing it soonish, to get it out of the way?\n\nIt was already on its own thread, as we discussed offlist. I have since\nrebased and expanded that patch over in that thread which has gotten review\nthat needs to be addressed. As such, I will not update that patch in the\nseries in this thread but keep the changes on that thread, and then pull them\nback into here when ready.\n\n>>>> Subject: [PATCH v30 5/9] nss: Documentation\n>>>> +++ b/doc/src/sgml/acronyms.sgml\n>>>> @@ -684,6 +717,16 @@\n>>>> </listitem>\n>>>> </varlistentry>\n>>>> \n>>>> + <varlistentry>\n>>>> + <term><acronym>TLS</acronym></term>\n>>>> + <listitem>\n>>>> + <para>\n>>>> + <ulink url=\"https://en.wikipedia.org/wiki/Transport_Layer_Security\">\n>>>> + Transport Layer Security</ulink>\n>>>> + </para>\n>>>> + </listitem>\n>>>> + </varlistentry>\n>>> \n>>> We don't have this already..? Surely we should..\n>> \n>> We really should, especially since we've had <acronym>TLS</acronym> in\n>> config.sgml since 2014 (c6763156589). 
That's another small piece that could be\n>> committed on it's own to cut down the size of this patchset (even if only by a\n>> tiny amount).\n> \n> Ditto on this. :)\n\nDone in https://postgr.es/m/27109504-82DB-41A8-8E63-C0498314F5B0@yesql.se\n\n>>>> @@ -1288,7 +1305,9 @@ include_dir 'conf.d'\n>>>> connections using TLS version 1.2 and lower are affected. There is\n>>>> currently no setting that controls the cipher choices used by TLS\n>>>> version 1.3 connections. The default value is\n>>>> - <literal>HIGH:MEDIUM:+3DES:!aNULL</literal>. The default is usually a\n>>>> + <literal>HIGH:MEDIUM:+3DES:!aNULL</literal> for servers which have\n>>>> + been built with <productname>OpenSSL</productname> as the\n>>>> + <acronym>SSL</acronym> library. The default is usually a\n>>>> reasonable choice unless you have specific security requirements.\n>>>> </para>\n>>> \n>>> Shouldn't we say something here wrt NSS?\n>> \n>> We should, but I'm not entirely what just yet. Need to revisit that.\n> \n> Not sure if we really want to do this but at least with ssllabs.com,\n> postgresql.org gets an 'A' rating with this set:\n> \n> ECDHE-ECDSA-CHACHA20-POLY1305\n> ECDHE-RSA-CHACHA20-POLY1305\n> ECDHE-ECDSA-AES128-GCM-SHA256\n> ECDHE-RSA-AES128-GCM-SHA256\n> ECDHE-ECDSA-AES256-GCM-SHA384\n> ECDHE-RSA-AES256-GCM-SHA384\n> DHE-RSA-AES128-GCM-SHA256\n> DHE-RSA-AES256-GCM-SHA384\n> ECDHE-ECDSA-AES128-SHA256\n> ECDHE-RSA-AES128-SHA256\n> ECDHE-ECDSA-AES128-SHA\n> ECDHE-RSA-AES256-SHA384\n> ECDHE-RSA-AES128-SHA\n> ECDHE-ECDSA-AES256-SHA384\n> ECDHE-ECDSA-AES256-SHA\n> ECDHE-RSA-AES256-SHA\n> DHE-RSA-AES128-SHA256\n> DHE-RSA-AES128-SHA\n> DHE-RSA-AES256-SHA256\n> DHE-RSA-AES256-SHA\n> ECDHE-ECDSA-DES-CBC3-SHA\n> ECDHE-RSA-DES-CBC3-SHA\n> EDH-RSA-DES-CBC3-SHA\n> AES128-GCM-SHA256\n> AES256-GCM-SHA384\n> AES128-SHA256\n> AES256-SHA256\n> AES128-SHA\n> AES256-SHA\n> DES-CBC3-SHA\n> !DSS\n> \n> Which also seems kinda close to what the default when built with OpenSSL\n> ends up being? 
Thought the ssllabs report does list which ones it\n> thinks are weak and so we might consider excluding those by default too:\n> \n> https://www.ssllabs.com/ssltest/analyze.html?d=postgresql.org&s=2a02%3a16a8%3adc51%3a0%3a0%3a0%3a0%3a50\n\nAgreed, maintaining parity (or thereabouts) with OpenSSL defaults taking\nindustry best practices into account is probably what we should aim for.\n\n>>>> @@ -1490,8 +1509,11 @@ include_dir 'conf.d'\n>>>> <para>\n>>>> Sets an external command to be invoked when a passphrase for\n>>>> decrypting an SSL file such as a private key needs to be obtained. By\n>>>> - default, this parameter is empty, which means the built-in prompting\n>>>> - mechanism is used.\n>>>> + default, this parameter is empty. When the server is using\n>>>> + <productname>OpenSSL</productname>, this means the built-in prompting\n>>>> + mechanism is used. When using <productname>NSS</productname>, there is\n>>>> + no default prompting so a blank callback will be used returning an\n>>>> + empty password.\n>>>> </para>\n>>> \n>>> Maybe we should point out here that this requires the database to not\n>>> require a password..? So if they have one, they need to set this, or\n>>> maybe we should provide a default one..\n>> \n>> I've added a sentence on not using a password for the cert database. I'm not\n>> sure if providing a default one is a good idea but it's no less insecure than\n>> having no password really..\n> \n> I was meaning a default callback to prompt, not sure if that was clear.\n\nAh, no that's not what I thought you meant. Do you have any thoughts on what\nthat callback would look like? 
Take a password on a TTY input?\n\nBelow are a few fixes addressed from the original review email:\n\n>>> +\t/*\n>>> +\t * Set up the custom IO layer.\n>>> +\t */\n>> \n>> Might be good to mention that the IO Layer is what sets up the\n>> read/write callbacks to be used.\n> \n> Good point, will do in the next version of the patchset.\n\nFixed.\n\n>>> +\tport->pr_fd = SSL_ImportFD(model, pr_fd);\n>>> +\tif (!port->pr_fd)\n>>> +\t{\n>>> +\t\tereport(COMMERROR,\n>>> +\t\t\t\t(errmsg(\"unable to initialize\")));\n>>> +\t\treturn -1;\n>>> +\t}\n>> \n>> Maybe a comment and a better error message for this?\n> \n> Will do.\n\nFixed.\n\n>>> \n>>> +\tPR_Close(model);\n>> \n>> This might deserve one also, the whole 'model' construct is a bit\n>> different. :)\n> \n> Agreed. will do.\n\n\nFixed.\n\n>> Also, I get that they do similar jobs and that one is in the frontend\n>> and the other is in the backend, but I'm not a fan of having two\n>> 'ssl_protocol_version_to_nss()'s functions that take different argument\n>> types but have exact same name and do functionally different things..\n> \n> Good point, I'll change that.\n\nFixed.\n\n>>> +\t/*\n>>> +\t * Configure cipher policy.\n>>> +\t */\n>>> +\tstatus = NSS_SetDomesticPolicy();\n>>> +\tif (status != SECSuccess)\n>>> +\t{\n>>> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n>>> +\t\t\t\t\t\t libpq_gettext(\"unable to configure cipher policy: %s\"),\n>>> +\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n>>> +\n>>> +\t\treturn PGRES_POLLING_FAILED;\n>>> +\t}\n>> \n>> Probably good to pull over at least some parts of the comments made in\n>> the backend code about SetDomesticPolicy() actually enabling everything\n>> (just like all the policies apparently do)...\n> \n> Good point, will do.\n\nFixed.\n\n>>> +int\n>>> +be_tls_open_server(Port *port)\n>>> +{\n>>> +\tSECStatus\tstatus;\n>>> +\tPRFileDesc *model;\n>>> +\tPRFileDesc *pr_fd;\n>> \n>> pr_fd here is materially different from port->pr_fd, no? 
As in, one is\n>> the NSS raw TCP fd while the other is the SSL fd, right? Maybe we\n>> should use two different variable names to try and make sure they don't\n>> get confused? Might even set this to NULL after we are done with it\n>> too.. Then again, I see later on that when we do the dance with the\n>> 'model' PRFileDesc that we just use the same variable- maybe we should\n>> do that? That is, just get rid of this 'pr_fd' and use port->pr_fd\n>> always?\n> \n> Hmm, I think you're right. I will try that for the next patchset version.\n\n\n>> Similar comments to the backend code- should we just always use\n>> conn->pr_fd? Or should we rename pr_fd to something else?\n> \n> Renaming is probably not a bad idea, will fix.\n\n\t\nBoth fixed.\n\nAdditionally, a few other off-list reported issues are also fixed in this\nversion (such as fixing the silly markup doc error and testplan off-by-one etc).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 26 Mar 2021 00:22:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-03-26 at 00:22 +0100, Daniel Gustafsson wrote:\r\n> > On 23 Mar 2021, at 20:04, Stephen Frost <sfrost@snowman.net> wrote:\r\n> > \r\n> > Eh, poor wording on my part. You're right, the question, reworded\r\n> > again, was \"Would someone want to get the context returned by\r\n> > NSS_InitContext?\". If we think there's a reason that someone might want\r\n> > that context then perhaps we should allow getting it, in addition to the\r\n> > pr_fd. 
If there's really no reason to ever want the context from\r\n> > NSS_InitContext then what you have here where we're returning pr_fd is\r\n> > probably fine.\r\n> \r\n> I can't think of any reason, maybe Jacob who has been knee-deep in NSS contexts\r\n> have insights which tell a different story?\r\n\r\nThe only thing you can do with a context pointer is shut it down, and I\r\ndon't think that's something that should be exposed.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 25 Mar 2021 23:59:16 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Wed, 2021-03-24 at 14:10 -0400, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > > I could see this being a problem if two client certificate nicknames\n> > > collide across multiple in-use databases, maybe?\n> > \n> > Right, in such a case either cert might get returned and it's possible\n> > that the \"wrong\" one is returned and therefore the connection would end\n> > up failing, assuming that they aren't actually the same and just happen\n> > to be in both.\n> > \n> > Seems like we could use SECMOD_OpenUserDB() and then pass the result\n> > from that into PK11_ListCertsInSlot() and scan through the certs in just\n> > the specified database to find the one we're looking for if we really\n> > feel compelled to try and address this risk. I've reached out to the\n> > NSS folks to see if they have any thoughts about the best way to address\n> > this.\n> \n> Some additional findings (NSS 3.63), please correct me if I've made any mistakes:\n> \n> The very first NSSInitContext created is special. If it contains a database, that database will be considered part of the \"internal\" slot and its certificates can be referenced directly by nickname. 
If it doesn't have a database, the internal slot has no certificates, and it will continue to have zero certificates until NSS is completely shut down and reinitialized with a new \"first\" context.\n> \n> Databases that are opened *after* the first one are given their own separate slots. Any certificates that are part of those databases seemingly can't be referenced directly by nickname. They have to be prefixed by their token name -- a name which you don't have if you used NSS_InitContext() to create the database. You have to use SECMOD_OpenUserDB() instead. This explains some strange failures I was seeing in local testing, where the order of InitContext determined whether our client certificate selection succeeded or failed.\n\nThis is more-or-less what we would want though, right..? If a user asks\nfor a connection with ssl_database=blah and sslcert=whatever, we'd want\nto open database 'blah' and search (just) that database for cert\n'whatever'. We could possibly offer other options in the future but\ncertainly this would work and be the most straight-forward and expected\nbehavior.\n\n> If you SECMOD_OpenUserDB() a database that is identical to the first (internal) database, NSS deduplicates for you and just returns the internal slot. Which seems like it's helpful, except you're not allowed to close that database, and you have to know not to close it by checking to see whether that slot is the \"internal key slot\". It appears to remain open until NSS is shut down entirely.\n\nSeems like we shouldn't do that and should just use SECMOD_OpenUserDB()\nfor opening databases.\n\n> But if you open a database that is *not* the magic internal database,\n> and then open a duplicate of that one, NSS creates yet another new slot\n> for the duplicate. So SECMOD_OpenUserDB() may or may not be a resource\n> hog, depending on the global state of the process at the time libpq\n> opens its first connection. 
We won't be able to control what the parent\n> application will do before loading us up.\n\nI would think we'd want to avoid re-opening the same database multiple\ntimes, to avoid the duplicate slots and such. If the application code\ndoes it themselves, well, there's not much we can do about that, but we\ncould at least avoid doing so in *our* code. I wouldn't expect us to be\nopening hundreds of databases either and so keeping a simple list around\nof what we've opened and scanning it seems like it'd be workable. Of\ncourse, this could likely be improved in the future but I would think\nthat'd be good for an initial implementation.\n\nWe could also just generally caution users in our documentation against\nusing multiple databases. The NSS folks discourage doing so and it\ndoesn't strike me as being a terribly useful thing to do anyway, at\nleast from within one invocation of an application. Still, if we could\nmake it work reasonably well, then I'd say we should go ahead and do so.\n\n> It also doesn't look like any of the SECMOD_* machinery that we're\n> looking at is thread-safe, but I'd really like to be wrong...\n\nThat's unfortuante but solvable by using our own locks, similar\nto what's done in fe-secure-openssl.c.\n\nThanks!\n\nStephen", "msg_date": "Fri, 26 Mar 2021 15:33:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-03-26 at 15:33 -0400, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > Databases that are opened *after* the first one are given their own\r\n> > separate slots. [...]\r\n> \r\n> This is more-or-less what we would want though, right..? If a user asks\r\n> for a connection with ssl_database=blah and sslcert=whatever, we'd want\r\n> to open database 'blah' and search (just) that database for cert\r\n> 'whatever'. 
We could possibly offer other options in the future but\r\n> certainly this would work and be the most straight-forward and expected\r\n> behavior.\r\n\r\nYes, but see below.\r\n\r\n> > If you SECMOD_OpenUserDB() a database that is identical to the first\r\n> > (internal) database, NSS deduplicates for you and just returns the\r\n> > internal slot. Which seems like it's helpful, except you're not\r\n> > allowed to close that database, and you have to know not to close it\r\n> > by checking to see whether that slot is the \"internal key slot\". It\r\n> > appears to remain open until NSS is shut down entirely.\r\n> \r\n> Seems like we shouldn't do that and should just use SECMOD_OpenUserDB()\r\n> for opening databases.\r\n\r\nWe don't have control over whether or not this happens. If the\r\napplication embedding libpq has already loaded the database into the\r\ninternal slot via its own NSS initialization, then when we call\r\nSECMOD_OpenUserDB() for that same database, the internal slot will be\r\nreturned and we have to handle it accordingly.\r\n\r\nIt's not a huge amount of work, but it is magic knowledge that has to\r\nbe maintained, especially in the absence of specialized clientside\r\ntests.\r\n\r\n> > But if you open a database that is *not* the magic internal database,\r\n> > and then open a duplicate of that one, NSS creates yet another new slot\r\n> > for the duplicate. So SECMOD_OpenUserDB() may or may not be a resource\r\n> > hog, depending on the global state of the process at the time libpq\r\n> > opens its first connection. We won't be able to control what the parent\r\n> > application will do before loading us up.\r\n> \r\n> I would think we'd want to avoid re-opening the same database multiple\r\n> times, to avoid the duplicate slots and such. If the application code\r\n> does it themselves, well, there's not much we can do about that, but we\r\n> could at least avoid doing so in *our* code. 
I wouldn't expect us to be\r\n> opening hundreds of databases either and so keeping a simple list around\r\n> of what we've opened and scanning it seems like it'd be workable. Of\r\n> course, this could likely be improved in the future but I would think\r\n> that'd be good for an initial implementation.\r\n> \r\n> [...]\r\n> \r\n> > It also doesn't look like any of the SECMOD_* machinery that we're\r\n> > looking at is thread-safe, but I'd really like to be wrong...\r\n> \r\n> That's unfortuante but solvable by using our own locks, similar\r\n> to what's done in fe-secure-openssl.c.\r\n\r\nYeah. I was hoping to avoid implementing our own locks and refcounts,\r\nbut it seems like it's going to be required.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 26 Mar 2021 22:03:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Jacob Champion (pchampion@vmware.com) wrote:\n> On Fri, 2021-03-26 at 15:33 -0400, Stephen Frost wrote:\n> > * Jacob Champion (pchampion@vmware.com) wrote:\n> > > Databases that are opened *after* the first one are given their own\n> > > separate slots. [...]\n> > \n> > This is more-or-less what we would want though, right..? If a user asks\n> > for a connection with ssl_database=blah and sslcert=whatever, we'd want\n> > to open database 'blah' and search (just) that database for cert\n> > 'whatever'. We could possibly offer other options in the future but\n> > certainly this would work and be the most straight-forward and expected\n> > behavior.\n> \n> Yes, but see below.\n> \n> > > If you SECMOD_OpenUserDB() a database that is identical to the first\n> > > (internal) database, NSS deduplicates for you and just returns the\n> > > internal slot. 
Which seems like it's helpful, except you're not\n> > > allowed to close that database, and you have to know not to close it\n> > > by checking to see whether that slot is the \"internal key slot\". It\n> > > appears to remain open until NSS is shut down entirely.\n> > \n> > Seems like we shouldn't do that and should just use SECMOD_OpenUserDB()\n> > for opening databases.\n> \n> We don't have control over whether or not this happens. If the\n> application embedding libpq has already loaded the database into the\n> internal slot via its own NSS initialization, then when we call\n> SECMOD_OpenUserDB() for that same database, the internal slot will be\n> returned and we have to handle it accordingly.\n> \n> It's not a huge amount of work, but it is magic knowledge that has to\n> be maintained, especially in the absence of specialized clientside\n> tests.\n\nAh.. yeah, fair enough. We could document that we discourage\napplications from doing so, but I agree that we'll need to deal with it\nsince it could happen.\n\n> > > But if you open a database that is *not* the magic internal database,\n> > > and then open a duplicate of that one, NSS creates yet another new slot\n> > > for the duplicate. So SECMOD_OpenUserDB() may or may not be a resource\n> > > hog, depending on the global state of the process at the time libpq\n> > > opens its first connection. We won't be able to control what the parent\n> > > application will do before loading us up.\n> > \n> > I would think we'd want to avoid re-opening the same database multiple\n> > times, to avoid the duplicate slots and such. If the application code\n> > does it themselves, well, there's not much we can do about that, but we\n> > could at least avoid doing so in *our* code. I wouldn't expect us to be\n> > opening hundreds of databases either and so keeping a simple list around\n> > of what we've opened and scanning it seems like it'd be workable. 
Of\n> > course, this could likely be improved in the future but I would think\n> > that'd be good for an initial implementation.\n> > \n> > [...]\n> > \n> > > It also doesn't look like any of the SECMOD_* machinery that we're\n> > > looking at is thread-safe, but I'd really like to be wrong...\n> > \n> > That's unfortuante but solvable by using our own locks, similar\n> > to what's done in fe-secure-openssl.c.\n> \n> Yeah. I was hoping to avoid implementing our own locks and refcounts,\n> but it seems like it's going to be required.\n\nYeah, afraid so.\n\nThanks!\n\nStephen", "msg_date": "Fri, 26 Mar 2021 18:05:40 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-03-26 at 18:05 -0400, Stephen Frost wrote:\r\n> * Jacob Champion (pchampion@vmware.com) wrote:\r\n> > Yeah. I was hoping to avoid implementing our own locks and refcounts,\r\n> > but it seems like it's going to be required.\r\n> \r\n> Yeah, afraid so.\r\n\r\nI think it gets worse, after having debugged some confusing crashes.\r\nThere's already been a discussion on PR_Init upthread a bit:\r\n\r\n> Once we settle on a version we can confirm if PR_Init is/isn't needed and\r\n> remove all traces of it if not.\r\n\r\nWhat the NSPR documentation omits is that implicit initialization is\r\nnot threadsafe. So NSS_InitContext() is technically \"threadsafe\"\r\nbecause it's built on PR_CallOnce(), but if you haven't called\r\nPR_Init() yet, multiple simultaneous PR_CallOnce() calls can crash into\r\neach other.\r\n\r\nSo, fine. We just add our own locks around NSS_InitContext() (or around\r\na single call to PR_Init()). Well, the first thread to win and\r\nsuccessfully initialize NSPR gets marked as the \"primordial\" thread\r\nusing thread-local state. And it gets a pthread destructor that does...\r\nsomething. 
So lazy initialization seems a bit dangerous regardless of\r\nwhether or not we add locks, but I can't really prove whether it's\r\ndangerous or not in practice.\r\n\r\nI do know that only the primordial thread is allowed to call\r\nPR_Cleanup(), and of course we wouldn't be able to control which thread\r\ndoes what for libpq clients. I don't know what other assumptions are\r\nmade about the primordial thread, or if there are any platform-specific \r\nbehaviors with older versions of NSPR that we'd need to worry about. It\r\nused to be that the primordial thread was not allowed to exit before\r\nany other threads, but that restriction was lifted at some point [1].\r\n\r\nI think we're going to need some analogue to PQinitOpenSSL() to help\r\nclient applications cut through the mess, but I'm not sure what it\r\nshould look like, or how we would maintain any sort of API\r\ncompatibility between the two flavors. And does libpq already have some\r\nnotion of a \"main thread\" that I'm missing?\r\n\r\n--Jacob\r\n\r\n[1] https://bugzilla.mozilla.org/show_bug.cgi?id=294955\r\n", "msg_date": "Wed, 31 Mar 2021 22:15:15 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Mar 31, 2021 at 10:15:15PM +0000, Jacob Champion wrote:\n> I think we're going to need some analogue to PQinitOpenSSL() to help\n> client applications cut through the mess, but I'm not sure what it\n> should look like, or how we would maintain any sort of API\n> compatibility between the two flavors. And does libpq already have some\n> notion of a \"main thread\" that I'm missing?\n\nNope as far as I recall. 
With OpenSSL, the initialization of the SSL\nmutex lock and the crypto callback initialization is done by the first\nthread in.\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 13:10:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Wed, Mar 31, 2021 at 10:15:15PM +0000, Jacob Champion wrote:\n> > I think we're going to need some analogue to PQinitOpenSSL() to help\n> > client applications cut through the mess, but I'm not sure what it\n> > should look like, or how we would maintain any sort of API\n> > compatibility between the two flavors. And does libpq already have some\n> > notion of a \"main thread\" that I'm missing?\n> \n> Nope as far as I recall. With OpenSSL, the initialization of the SSL\n> mutex lock and the crypto callback initialization is done by the first\n> thread in.\n\nYeah, we haven't got any such concept in libpq. 
I do think that some of\nthis can simply be documented as \"if you do this, then you need to make\nsure to do this\".\n\nThanks,\n\nStephen", "msg_date": "Thu, 1 Apr 2021 10:15:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 23 Mar 2021, at 00:38, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 22 Mar 2021, at 00:49, Stephen Frost <sfrost@snowman.net> wrote:\n\nAttached is a rebase on top of the recent SSL related commits with a few more\nfixes from previous reviews.\n\n>>> +++ b/src/interfaces/libpq/fe-connect.c\n>>> @@ -359,6 +359,10 @@ static const internalPQconninfoOption PQconninfoOptions[] = {\n>>> \t\t\"Target-Session-Attrs\", \"\", 15, /* sizeof(\"prefer-standby\") = 15 */\n>>> \toffsetof(struct pg_conn, target_session_attrs)},\n>>> \n>>> +\t{\"cert_database\", NULL, NULL, NULL,\n>>> +\t\t\"CertificateDatabase\", \"\", 64,\n>>> +\toffsetof(struct pg_conn, cert_database)},\n>> \n>> I mean, maybe nitpicking here, but all the other SSL stuff is\n>> 'sslsomething' and the backend version of this is 'ssl_database', so\n>> wouldn't it be more consistent to have this be 'ssldatabase'?\n> \n> Thats a good point, I was clearly Stockholm syndromed since I hadn't reflected\n> on that but it's clearly wrong. Will fix.\n\nFixed\n\n>>> +\t/*\n>>> +\t * If we don't have a certificate database, the system trust store is the\n>>> +\t * fallback we can use. 
If we fail to initialize that as well, we can\n>>> +\t * still attempt a connection as long as the sslmode isn't verify*.\n>>> +\t */\n>>> +\tif (!conn->cert_database && conn->sslmode[0] == 'v')\n>>> +\t{\n>>> +\t\tstatus = pg_load_nss_module(&ca_trust, ca_trust_name, \"\\\"Root Certificates\\\"\");\n>>> +\t\tif (status != SECSuccess)\n>>> +\t\t{\n>>> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n>>> +\t\t\t\t\t\t\t libpq_gettext(\"WARNING: unable to load NSS trust module \\\"%s\\\" : %s\"),\n>>> +\t\t\t\t\t\t\t ca_trust_name,\n>>> +\t\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n>>> +\n>>> +\t\t\treturn PGRES_POLLING_FAILED;\n>>> +\t\t}\n>>> +\t}\n>> \n>> Maybe have something a bit more here about \"maybe you should specifify a\n>> cert_database\" or such?\n> \n> Good point, will expand with more detail.\n\nFixed.\n\n>>> +\t/*\n>>> +\t * Specify which hostname we are expecting to talk to. This is required,\n>>> +\t * albeit mostly applies to when opening a connection to a traditional\n>>> +\t * http server it seems.\n>>> +\t */\n>>> +\tSSL_SetURL(conn->pr_fd, (conn->connhost[conn->whichhost]).host);\n>> \n>> We should probably also set SNI, if available (NSS 3.12.6 it seems?),\n>> since it looks like that's going to be added to the OpenSSL code.\n> \n> Good point, will do.\n\nActually, it turns out that NSS 3.12.6 introduced the serverside SNI handling\nby providing callbacks to respond to hostname verification. There was no\nmention of clientside SNI in the NSS documentation that I could find, reading\nthe code however SSL_SetURL does actually set the SNI extension in the\nClientHello. So, clientsidee SNI (which is what is proposed for the OpenSSL\nbackend) is already in.\n\n>>> +\tdo\n>>> +\t{\n>>> +\t\tstatus = SSL_ForceHandshake(conn->pr_fd);\n>>> +\t}\n>>> +\twhile (status != SECSuccess && PR_GetError() == PR_WOULD_BLOCK_ERROR);\n>> \n>> We don't seem to have this loop in the backend code.. Is there some\n>> reason that we don't? 
Is it possible that we need to have a loop here\n>> too? I recall in the GSS encryption code there were definitely things\n>> during setup that had to be looped back over on both sides to make sure\n>> everything was finished ...\n> \n> Off the cuff I can't remember, will look into it.\n\nThinking more about this, I don't think we should have the loop at all in the\nfrontend either. The reason it was added was to cover cases where we're\nconfused about blocking but I can't actually see the case I was worried about\nin the code so I think it's useless. Removed.\n\n>>> +\tif (strcmp(attribute_name, \"protocol\") == 0)\n>>> +\t{\n>>> +\t\tswitch (channel.protocolVersion)\n>>> +\t\t{\n>>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_3\n>>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_3:\n>>> +\t\t\t\treturn \"TLSv1.3\";\n>>> +#endif\n>>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_2\n>>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_2:\n>>> +\t\t\t\treturn \"TLSv1.2\";\n>>> +#endif\n>>> +#ifdef SSL_LIBRARY_VERSION_TLS_1_1\n>>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_1:\n>>> +\t\t\t\treturn \"TLSv1.1\";\n>>> +#endif\n>>> +\t\t\tcase SSL_LIBRARY_VERSION_TLS_1_0:\n>>> +\t\t\t\treturn \"TLSv1.0\";\n>>> +\t\t\tdefault:\n>>> +\t\t\t\treturn \"unknown\";\n>>> +\t\t}\n>>> +\t}\n>> \n>> Not sure that it really matters, but this seems like it might be useful\n>> to have as its own function... Maybe even a data structure that both\n>> functions use just in oppostie directions. Really minor tho. 
:)\n> \n> I suppose that wouldn't be a bad thing, will fix.\n\nMoved this into a shared function as it's used by both frontend and backend.\nIt's moved mostly verbatim as it seemed simple enough to not warrant much\ncomplication.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 2 Apr 2021 01:17:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Another rebase to cope with recent changes (hmac, ssl tests etc) that\nconflicted and broke this patchset.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 5 Apr 2021 00:13:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, Apr 05, 2021 at 12:13:43AM +0200, Daniel Gustafsson wrote:\n> Another rebase to cope with recent changes (hmac, ssl tests etc) that\n> conflicted and broke this patchset.\n\nPlease find an updated set, v35, attached, and my apologies for\nbreaking again your patch set. While testing this patch set and\nadjusting the SSL tests with HEAD, I have noticed what looks like a\nbug with the DN mapping that NSS does not run. The connection strings\nare the same in v35 and in v34, with dbname only changing in-between.\n\nJust to be sure, because I could have done something wrong with the\nrebase of v35, I have done the same test with v34 applied on top of\ndfc843d and things are failing. So it seems to me that there is an\nissue with the DN mapping part.\n--\nMichael", "msg_date": "Mon, 5 Apr 2021 11:12:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, Apr 05, 2021 at 11:12:22AM +0900, Michael Paquier wrote:\n> Please find an updated set, v35, attached, and my apologies for\n> breaking again your patch set. 
While testing this patch set and\n> adjusting the SSL tests with HEAD, I have noticed what looks like a\n> bug with the DN mapping that NSS does not run. The connection strings\n> are the same in v35 and in v34, with dbname only changing in-between.\n> \n> Just to be sure, because I could have done something wrong with the\n> rebase of v35, I have done the same test with v34 applied on top of\n> dfc843d and things are failing. So it seems to me that there is an\n> issue with the DN mapping part.\n\nFor now, I have marked this patch set as returned with feedback as it\nis still premature for integration, and there are still bugs in it.\nFWIW, I think that there is a future for providing an alternative to\nOpenSSL, so, even if it could not make it for this release, I'd like\nto push forward with this area more seriously as of 15. The recent\nlibcrypto-related refactorings were one step in this direction, as\nwell.\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 17:10:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 25 Mar 2021, at 00:56, Jacob Champion <pchampion@vmware.com> wrote:\n\n> Databases that are opened *after* the first one are given their own separate slots. Any certificates that are part of those databases seemingly can't be referenced directly by nickname. They have to be prefixed by their token name -- a name which you don't have if you used NSS_InitContext() to create the database. You have to use SECMOD_OpenUserDB() instead. 
This explains some strange failures I was seeing in local testing, where the order of InitContext determined whether our client certificate selection succeeded or failed.\n\nSorry for the latency is responding, but I'm now back from parental leave.\n\nAFAICT the tokenname for the database can be set with the dbTokenDescription\nmember in the NSSInitParameters struct passed to NSS_InitContext() (documented\nin nss.h). Using this we can avoid the messier SECMOD machinery and use the\ntoken in the auth callback to refer to the database we loaded. I hacked this\nup in my local tree (rebased patchset coming soon) and it seems to work as\nintended.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 27 May 2021 22:31:07 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Attached is a rebase to keep bitrot at bay. On top rebasing and smaller fixes\nin comments etc, this version fixes/adds a number things:\n\n* Performs DN resolution to support the DN mapping\n* Locks the SECMOD parts and PR_Init call in the frontend as per Jacobs\n findings upthread\n* Properly set the tokenname of the database to avoid ambigious lookups in case\n multiple databases are loaded (a better name to ensure uniqueness is a TODO)\n* Adds a test for certificate lookup without sslcert set\n\n\n\n\n\n\n\n\n\n\n\n\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 28 May 2021 11:04:12 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2020-10-27 at 23:39 -0700, Andres Freund wrote:\n> Maybe we should just have --with-ssl={openssl,nss}? 
That'd avoid\n> needing\n> to check for errors.\n\n[ apologies for the late reply ]\n\nWould it be more proper to call it --with-tls={openssl,nss} ?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 03 Jun 2021 10:37:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Jun 2021, at 19:37, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Tue, 2020-10-27 at 23:39 -0700, Andres Freund wrote:\n>> Maybe we should just have --with-ssl={openssl,nss}? That'd avoid\n>> needing\n>> to check for errors.\n> \n> [ apologies for the late reply ]\n> \n> Would it be more proper to call it --with-tls={openssl,nss} ?\n\nWell, we use SSL for everything else (GUCs, connection params and env vars etc)\nso I think --with-ssl is sensible.\n\nHowever, SSL and TLS are used quite interchangeably these days so I think it\nmakes sense to provide --with-tls as an alias.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 19:47:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "\nOn 6/3/21 1:47 PM, Daniel Gustafsson wrote:\n>> On 3 Jun 2021, at 19:37, Jeff Davis <pgsql@j-davis.com> wrote:\n>>\n>> On Tue, 2020-10-27 at 23:39 -0700, Andres Freund wrote:\n>>> Maybe we should just have --with-ssl={openssl,nss}? 
That'd avoid\n>>> needing\n>>> to check for errors.\n>> [ apologies for the late reply ]\n>>\n>> Would it be more proper to call it --with-tls={openssl,nss} ?\n> Well, we use SSL for everything else (GUCs, connection params and env vars etc)\n> so I think --with-ssl is sensible.\n>\n> However, SSL and TLS are used quite interchangeably these days so I think it\n> makes sense to provide --with-tls as an alias.\n>\n\nYeah, but it's annoying to have to start every talk I give touching this\nsubject with the slide that says \"When we say SSL we really mean TLS\".\nMaybe release 15 would be a good time to rename user-visible option\nnames etc, with support for legacy names.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 15:53:59 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-06-03 at 15:53 -0400, Andrew Dunstan wrote:\n> Yeah, but it's annoying to have to start every talk I give touching\n> this\n> subject with the slide that says \"When we say SSL we really means\n> TLS\".\n> Maybe release 15 would be a good time to rename user-visible option\n> names etc, with support for legacy names.\n\nSounds good to me, though I haven't looked into how big of a diff that\nwill be.\n\nAlso, do we have precedent for GUC aliases? 
That might be a little\nweird.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 03 Jun 2021 13:14:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Jun 2021, at 22:14, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> On Thu, 2021-06-03 at 15:53 -0400, Andrew Dunstan wrote:\n>> Yeah, but it's annoying to have to start every talk I give touching\n>> this\n>> subject with the slide that says \"When we say SSL we really means\n>> TLS\".\n>> Maybe release 15 would be a good time to rename user-visible option\n>> names etc, with support for legacy names.\n\nPerhaps. Having spent some time in this space, SSL has IMHO become the de\nfacto term for an encrypted connection at the socket layer, with TLS being the\ncurrent protocol suite (additionally, often referred to as SSL/TLS). Offering\ntls* counterparts to our ssl GUCs etc will offer a level of correctness but I\ndoubt we'll ever get rid of ssl* so we might not help too many users by the\nadded complexity.\n\nIt might also put us in a hard spot if the next TLS spec ends up being called\nsomething other than TLS? It's clearly happened before =)\n\n> Sounds good to me, though I haven't looked into how big of a diff that\n> will be.\n> \n> Also, do we have precedent for GUC aliases? 
That might be a little\n> weird.\n\nI don't think we do currently, but I have a feeling the topic has surfaced here\nbefore.\n\nIf we end up settling on this being something we want I can volunteer to do the\nlegwork, but it seems a discussion best had before a patch is drafted.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:49:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> It might also put us a hard spot if the next TLS spec ends up being called\n> something other than TLS? It's clearly happened before =)\n\nGood point. I'm inclined to just stick with the SSL terminology.\n\n>> Also, do we have precedent for GUC aliases? That might be a little\n>> weird.\n\n> I don't think we do currently, but I have a feeling the topic has surfaced here\n> before.\n\nWe do, look for \"sort_mem\" in guc.c. So it's not like it'd be\ninconvenient to implement. But I think user confusion and the\npotential for the new terminology to fail to be any more\nfuture-proof are good reasons to just leave the names alone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Jun 2021 16:55:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Jun 2021, at 22:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>>> Also, do we have precedent for GUC aliases? 
That might be a little\n>>> weird.\n> \n>> I don't think we do currently, but I have a feeling the topic has surfaced here\n>> before.\n> \n> We do, look for \"sort_mem\" in guc.c.\n\nI knew it seemed familiar but I failed to find it, thanks for the pointer.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 22:58:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Jun 3, 2021 at 04:55:45PM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > It might also put us a hard spot if the next TLS spec ends up being called\n> > something other than TLS? It's clearly happened before =)\n> \n> Good point. I'm inclined to just stick with the SSL terminology.\n\nI wonder if we should use SSL/TLS in more places in our documentation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 17:06:42 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I wonder if we should use SSL/TLS in more places in our documentation.\n\nNo objection to doing that in the docs; I'm just questioning\nswitching the code-visible names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Jun 2021 17:11:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Jun 2021, at 23:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Bruce Momjian <bruce@momjian.us> writes:\n>> I wonder if we should use SSL/TLS in more places in our documentation.\n> \n> No objection to doing that in the docs; I'm just questioning\n> switching the 
code-visible names.\n\nAs long as it's still searchable by \"SSL\", \"TLS\" and \"SSL/TLS\" and not just the\nlatter.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 3 Jun 2021 23:13:50 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Jun 3, 2021 at 11:14 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 3 Jun 2021, at 23:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bruce Momjian <bruce@momjian.us> writes:\n> >> I wonder if we should use SSL/TLS in more places in our documentation.\n> >\n> > No objection to doing that in the docs; I'm just questioning\n> > switching the code-visible names.\n\n+1.\n\nI also don't think it's worth changing the actual names, I think\nthat'll cause more problems than it solves. But we can, and probably\nshould, change the messaging around it, particularly the docs (but\nprobably also comments in the config file).\n\n\n> As long as it's still searchable by \"SSL\", \"TLS\" and \"SSL/TLS\" and not just the\n> latter.\n\nAgreed, making it searchable and easily cross-linkable.. And maybe\nboth terms should be in the glossary.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 4 Jun 2021 19:44:12 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-05-28 at 11:04 +0200, Daniel Gustafsson wrote:\r\n> Attached is a rebase to keep bitrot at bay.\r\n\r\nI get a failure during one of the CRL directory tests due to a missing\r\ndatabase -- it looks like the Makefile is missing an entry. 
(I'm\r\ndusting off my build after a few months away, so I don't know if this\r\nlatest rebase introduced it or not.)\r\n\r\nAttached is a quick patch; does it work on your machine?\r\n\r\n--Jacob", "msg_date": "Mon, 14 Jun 2021 22:15:47 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 15 Jun 2021, at 00:15, Jacob Champion <pchampion@vmware.com> wrote:\n\n> Attached is a quick patch; does it work on your machine?\n\nIt does, thanks! I've included it in the attached v37 along with a few tiny\nnon-functional improvements in comment spelling etc.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 16 Jun 2021 00:08:58 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-06-16 at 00:08 +0200, Daniel Gustafsson wrote:\r\n> > On 15 Jun 2021, at 00:15, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > Attached is a quick patch; does it work on your machine?\r\n> \r\n> It does, thanks! I've included it in the attached v37 along with a few tiny\r\n> non-functional improvements in comment spelling etc.\r\n\r\nGreat, thanks!\r\n\r\nI've been tracking down reference leaks in the client. These open\r\nreferences prevent NSS from shutting down cleanly, which then makes it\r\nimpossible to open a new context in the future. This probably affects\r\nother libpq clients more than it affects psql.\r\n\r\nThe first step to fixing that is not ignoring failures during NSS\r\nshutdown, so I've tried a patch to pgtls_close() that pushes any\r\nfailures through the pqInternalNotice(). That seems to be working well.\r\nThe tests were still mostly green, so I taught connect_ok() to fail if\r\nany stderr showed up, and that exposed quite a few failures.\r\n\r\nI am currently stuck on one last failing test. 
This leak seems to only\r\nshow up when using TLSv1.2 or below. There doesn't seem to be a\r\nsubstantial difference in libpq code coverage between 1.2 and 1.3, so\r\nI'm worried that either 1) there's some API we use that \"requires\"\r\ncleanup, but only on 1.2 and below, or 2) there's some bug in my\r\nversion of NSS.\r\n\r\nAttached are a few work-in-progress patches. I think the reference\r\ncleanups themselves are probably solid, but the rest of it could use\r\nsome feedback. Are there better ways to test for this? and can anyone\r\nreproduce the TLSv1.2 leak?\r\n\r\n--Jacob", "msg_date": "Tue, 15 Jun 2021 23:50:14 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 16 Jun 2021, at 01:50, Jacob Champion <pchampion@vmware.com> wrote:\n\n> I've been tracking down reference leaks in the client. These open\n> references prevent NSS from shutting down cleanly, which then makes it\n> impossible to open a new context in the future. This probably affects\n> other libpq clients more than it affects psql.\n\nAh, nice catch, that's indeed a bug in the frontend implementation. The\nproblem is that the NSS trustdomain cache *must* be empty before shutting down\nthe context, else this very issue happens. 
Note this in be_tls_destroy():\n\n /*\n * It reads a bit odd to clear a session cache when we are destroying the\n * context altogether, but if the session cache isn't cleared before\n * shutting down the context it will fail with SEC_ERROR_BUSY.\n */\n SSL_ClearSessionCache();\n\nCalling SSL_ClearSessionCache() in pgtls_close() fixes the error.\n\nThere is another resource leak left (visible in one test after the above is\nadded), the SECMOD module needs to be unloaded in case it's been loaded.\nImplementing that with SECMOD_UnloadUserModule trips a segfault in NSS which I\nhave yet to figure out (when acquiring a lock with NSSRWLock_LockRead).\n\n> The first step to fixing that is not ignoring failures during NSS\n> shutdown, so I've tried a patch to pgtls_close() that pushes any\n> failures through the pqInternalNotice(). That seems to be working well.\n\nI'm keeping these in during hacking, with a comment that they need to be\nrevisited during review since they are mainly useful for debugging.\n\n> The tests were still mostly green, so I taught connect_ok() to fail if\n> any stderr showed up, and that exposed quite a few failures.\n\n\nWith your patches I'm seeing a couple of these:\n\n SSL error: The one-time function was previously called and failed. Its error code is no longer available\n\nThis is an error from NSPR, but it's not clear to me which PR_CallOnce call\nit's coming from. It seems to be hitting in the SAN and CRL tests, so it\nsmells of some form of caching implemented with NSPR APIs to me but that's a\nmere hunch.\n\n> I am currently stuck on one last failing test. 
This leak seems to only\n> show up when using TLSv1.2 or below.\n\nAFAICT the session cache is avoided for TLSv1.3 due to 1.3 not supporting\nrenegotiation.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 16 Jun 2021 15:31:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-06-16 at 15:31 +0200, Daniel Gustafsson wrote:\r\n> > On 16 Jun 2021, at 01:50, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > I've been tracking down reference leaks in the client. These open\r\n> > references prevent NSS from shutting down cleanly, which then makes it\r\n> > impossible to open a new context in the future. This probably affects\r\n> > other libpq clients more than it affects psql.\r\n> \r\n> Ah, nice catch, that's indeed a bug in the frontend implementation. The\r\n> problem is that the NSS trustdomain cache *must* be empty before shutting down\r\n> the context, else this very issue happens. Note this in be_tls_destroy():\r\n> \r\n> /*\r\n> * It reads a bit odd to clear a session cache when we are destroying the\r\n> * context altogether, but if the session cache isn't cleared before\r\n> * shutting down the context it will fail with SEC_ERROR_BUSY.\r\n> */\r\n> SSL_ClearSessionCache();\r\n> \r\n> Calling SSL_ClearSessionCache() in pgtls_close() fixes the error.\r\n\r\nThat's unfortunate. The session cache is global, right? 
So I'm guessing\r\nwe'll need to refcount and lock that call, to avoid cleaning up out\r\nfrom under a thread that's actively using the cache?\r\n\r\n> There is another resource leak left (visible in one test after the above is\r\n> added), the SECMOD module needs to be unloaded in case it's been loaded.\r\n> Implementing that with SECMOD_UnloadUserModule trips a segfault in NSS which I\r\n> have yet to figure out (when acquiring a lock with NSSRWLock_LockRead).\r\n> \r\n> [...]\r\n> \r\n> With your patches I'm seeing a couple of these:\r\n> \r\n> SSL error: The one-time function was previously called and failed. Its error code is no longer available\r\n\r\nHmm. Adding SSL_ClearSessionCache() (without thread-safety at the\r\nmoment) fixes all of the SSL tests for me, and I don't see either the\r\nSECMOD leak or the \"one-time function\" error that you've mentioned.\r\nWhat version of NSS are you running? I'm on 3.63.\r\n\r\nI've attached my current patchset (based on v37) for comparison.\r\n\r\n> > I am currently stuck on one last failing test. This leak seems to only\r\n> > show up when using TLSv1.2 or below.\r\n> \r\n> AFAICT the session cache is avoided for TLSv1.3 due to 1.3 not supporting\r\n> renegotiation.\r\n\r\nNice, at least that mystery is solved. :D\r\n\r\nThanks,\r\n--Jacob", "msg_date": "Wed, 16 Jun 2021 16:15:56 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 16 Jun 2021, at 18:15, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Wed, 2021-06-16 at 15:31 +0200, Daniel Gustafsson wrote:\n>>> On 16 Jun 2021, at 01:50, Jacob Champion <pchampion@vmware.com> wrote:\n>>> I've been tracking down reference leaks in the client. These open\n>>> references prevent NSS from shutting down cleanly, which then makes it\n>>> impossible to open a new context in the future. 
This probably affects\n>>> other libpq clients more than it affects psql.\n>> \n>> Ah, nice catch, that's indeed a bug in the frontend implementation. The\n>> problem is that the NSS trustdomain cache *must* be empty before shutting down\n>> the context, else this very issue happens. Note this in be_tls_destroy():\n>> \n>> /*\n>> * It reads a bit odd to clear a session cache when we are destroying the\n>> * context altogether, but if the session cache isn't cleared before\n>> * shutting down the context it will fail with SEC_ERROR_BUSY.\n>> */\n>> SSL_ClearSessionCache();\n>> \n>> Calling SSL_ClearSessionCache() in pgtls_close() fixes the error.\n> \n> That's unfortunate. The session cache is global, right? So I'm guessing\n> we'll need to refcount and lock that call, to avoid cleaning up out\n> from under a thread that's actively using the the cache?\n\nI'm not sure, the documentation doesn't give any answers and implementations of\nlibnss tend to just clear the cache without consideration. In libcurl we do\njust that, and haven't had any complaints - which doesn't mean it's correct but\nit's a datapoint.\n\n>> There is another resource leak left (visible in one test after the above is\n>> added), the SECMOD module needs to be unloaded in case it's been loaded.\n>> Implementing that with SECMOD_UnloadUserModule trips a segfault in NSS which I\n>> have yet to figure out (when acquiring a lock with NSSRWLock_LockRead).\n>> \n>> [...]\n>> \n>> With your patches I'm seeing a couple of these:\n>> \n>> SSL error: The one-time function was previously called and failed. Its error code is no longer available\n> \n> Hmm. Adding SSL_ClearSessionCache() (without thread-safety at the\n> moment) fixes all of the SSL tests for me, and I don't see either the\n> SECMOD leak or the \"one-time function\" error that you've mentioned.\n\nReading the code I don't think a loaded user module is considered a resource\nthat must've been released prior to closing the context. 
I will dig for what\nshowed up in my tests, but I don't think it was caused by this.\n\n> What version of NSS are you running? I'm on 3.63.\n\nRight now I'm using what Debian 10 is packaging which is 3.42. Admittedly not\nhot off the press but I've been trying to develop off a packaged version which\nwe might see users wanting to deploy against should this get shipped.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 16 Jun 2021 21:52:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Attached is a rebased version which incorporates your recent patchset for\nresource handling, as well as the connect_ok test patch.\n\nI've implemented tracking the close_notify alert that you mentioned offlist,\nbut it turns out that the alert callbacks in NSS are of limited use, so\nclose_notify is currently the only checked description. The enum which labels\nthe descriptions in the SSLAlert struct is private, so it's just sending over\nan anonymous number apart from close_notify which is zero.\n\nA few other fixups are included as well, like adapting the pending data read\nfunction in the frontend to how the OpenSSL implementation does it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 23 Jun 2021 15:48:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-06-23 at 15:48 +0200, Daniel Gustafsson wrote:\r\n> Attached is a rebased version which incorporates your recent patchset for\r\n> resource handling, as well as the connect_ok test patch.\r\n\r\nWith v38 I do see the \"one-time function was previously called and\r\nfailed\" message you mentioned before, as well as some PR_Assert()\r\ncrashes. 
Looks like it's just due to\r\nthe placement of\r\nSSL_ClearSessionCache(); gating it behind the conn->nss_context check\r\nensures that we don't call it if no NSS context actually exists. Patch\r\nattached (0001).\r\n\r\n--\r\n\r\nContinuing my jog around the patch... client connections will crash if\r\nhostaddr is provided rather than host, because SSL_SetURL can't handle\r\na NULL argument. I'm running with 0002 to fix it for the moment, but\r\nI'm not sure yet if it does the right thing for IP addresses, which the\r\nOpenSSL side has a special case for.\r\n\r\nEarly EOFs coming from the server don't currently have their own error\r\nmessage, which leads to a confusingly empty\r\n\r\n connection to server at \"127.0.0.1\", port 47447 failed: \r\n\r\n0003 adds one, to roughly match the corresponding OpenSSL message.\r\n\r\nWhile I was fixing that I noticed that I was getting an \"unable to\r\nverify certificate\" error message for the early EOF case, even with\r\nsslmode=require. That error message is being printed to\r\nconn->errorMessage during pg_cert_auth_handler(), even if we're not\r\nverifying certificates, and then that message is included in later\r\nunrelated failures. 0004 patches that.\r\n\r\n--Jacob", "msg_date": "Mon, 19 Jul 2021 19:33:23 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 19 Jul 2021, at 21:33, Jacob Champion <pchampion@vmware.com> wrote:\n\n> ..client connections will crash if\n> hostaddr is provided rather than host, because SSL_SetURL can't handle\n> a NULL argument. 
I'm running with 0002 to fix it for the moment, but\n> I'm not sure yet if it does the right thing for IP addresses, which the\n> OpenSSL side has a special case for.\n\nAFAICT the idea is to handle it in the cert auth callback, so I've added some\nPoC code to check for sslsni there and updated the TODO comment to reflect\nthat.\n\nI've applied your patches in the attached rebase which passes all tests for me.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 26 Jul 2021 15:26:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Another rebase to work around the recent changes in the ssl Makefile.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 10 Aug 2021 19:22:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2021-08-10 at 19:22 +0200, Daniel Gustafsson wrote:\r\n> Another rebase to work around the recent changes in the ssl Makefile.\r\n\r\nI have a local test suite that I've been writing against libpq. With\r\nthe new ssldatabase connection option, one tricky aspect is figuring\r\nout whether it's supported or not. It doesn't look like there's any way\r\nto tell, from a client application, whether NSS or OpenSSL (or neither)\r\nis in use.\r\n\r\nYou'd mentioned that perhaps we should support a call like\r\n\r\n PQsslAttribute(NULL, \"library\"); /* returns \"NSS\", \"OpenSSL\", or NULL */\r\n\r\nso that you don't have to have an actual connection first in order to\r\nfigure out what connection options you need to supply. 
Clients that\r\nsupport multiple libpq versions would need to know whether that call is\r\nreliable (older versions of libpq will always return NULL, whether SSL\r\nis compiled in or not), so maybe we could add a feature macro at the\r\nsame time?\r\n\r\nWe could also add a new API (say, PQsslLibrary()) but I don't know if\r\nthat gives us anything in practice. Thoughts?\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 18 Aug 2021 00:06:59 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Aug 18, 2021 at 12:06:59AM +0000, Jacob Champion wrote:\n> I have a local test suite that I've been writing against libpq. With\n> the new ssldatabase connection option, one tricky aspect is figuring\n> out whether it's supported or not. It doesn't look like there's any way\n> to tell, from a client application, whether NSS or OpenSSL (or neither)\n> is in use.\n\nThat's about guessing which library libpq is compiled with, so yes\nthat's a problem.\n\n> so that you don't have to have an actual connection first in order to\n> figure out what connection options you need to supply. Clients that\n> support multiple libpq versions would need to know whether that call is\n> reliable (older versions of libpq will always return NULL, whether SSL\n> is compiled in or not), so maybe we could add a feature macro at the\n> same time?\n\nStill, the problem is wider than that, no? One cannot know either if\na version of libpq is able to work with GSSAPI until they attempt a\nconnection with gssencmode. It seems to me that we should work on the\nlarger picture here.\n\n> We could also add a new API (say, PQsslLibrary()) but I don't know if\n> that gives us anything in practice. 
Thoughts?\n\nKnowing that the GSSAPI stuff is part of fe-secure.c, we may want\ninstead a call that returns a list of supported secure libraries.\n--\nMichael", "msg_date": "Wed, 18 Aug 2021 09:32:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 18 Aug 2021, at 02:32, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Aug 18, 2021 at 12:06:59AM +0000, Jacob Champion wrote:\n>> I have a local test suite that I've been writing against libpq. With\n>> the new ssldatabase connection option, one tricky aspect is figuring\n>> out whether it's supported or not. It doesn't look like there's any way\n>> to tell, from a client application, whether NSS or OpenSSL (or neither)\n>> is in use.\n> \n> That's about guessing which library libpq is compiled with, so yes\n> that's a problem.\n> \n>> so that you don't have to have an actual connection first in order to\n>> figure out what connection options you need to supply. Clients that\n>> support multiple libpq versions would need to know whether that call is\n>> reliable (older versions of libpq will always return NULL, whether SSL\n>> is compiled in or not), so maybe we could add a feature macro at the\n>> same time?\n> \n> Still, the problem is wider than that, no? One cannot know either if\n> a version of libpq is able to work with GSSAPI until they attempt a\n> connection with gssencmode. It seems to me that we should work on the\n> larger picture here.\n\nI think we should do both. PQsslAttribute() already exists, and being able to\nget the library attribute for NULL conn object when there are multiple\nlibraries makes a lot of sense to me. That doesn’t exclude working on a better\nway for apps to interrogate the libpq they have at hand for which capabilities\nit has. 
Personally I’m not sure what that API could look like, but we should\ndiscuss that in a separate thread I guess.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 18 Aug 2021 11:48:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Attached is a rebased v41 to keep the patch from bitrot.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 8 Sep 2021 20:49:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Rebased on top of HEAD with off-list comment fixes by Kevin Burke.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 20 Sep 2021 11:38:16 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-07-26 at 15:26 +0200, Daniel Gustafsson wrote:\r\n> > On 19 Jul 2021, at 21:33, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > ..client connections will crash if\r\n> > hostaddr is provided rather than host, because SSL_SetURL can't handle\r\n> > a NULL argument. 
I'm running with 0002 to fix it for the moment, but\r\n> > I'm not sure yet if it does the right thing for IP addresses, which the\r\n> > OpenSSL side has a special case for.\r\n> \r\n> AFAICT the idea is to handle it in the cert auth callback, so I've added some\r\n> PoC code to check for sslsni there and updated the TODO comment to reflect\r\n> that.\r\n\r\nI dug a bit deeper into the SNI stuff:\r\n\r\n> +\tserver_hostname = SSL_RevealURL(conn->pr_fd);\r\n> +\tif (!server_hostname || server_hostname[0] == '\\0')\r\n> +\t{\r\n> +\t\t/* If SNI is enabled we must have a hostname set */\r\n> +\t\tif (conn->sslsni && conn->sslsni[0])\r\n> +\t\t\tstatus = SECFailure;\r\n\r\nconn->sslsni can be explicitly set to \"0\" to disable it, so this should\r\nprobably be changed to a check for \"1\", but I'm not sure that would be\r\ncorrect either. If the user has the default sslsni=\"1\" and supplies an\r\nIP address for the host parameter, I don't think we should fail the\r\nconnection.\r\n\r\n> +\tif (host && host[0] &&\r\n> +\t\t!(strspn(host, \"0123456789.\") == strlen(host) ||\r\n> +\t\t strchr(host, ':')))\r\n> +\t\tSSL_SetURL(conn->pr_fd, host);\r\n\r\nIt looks like NSS may already have some code that prevents SNI from\r\nbeing sent for IP addresses, so that part of the guard might not be\r\nnecessary. (And potentially counterproductive, because it looks like\r\nNSS can perform verification against the certificate's SANs if you pass\r\nan IP address to SSL_SetURL().)\r\n\r\nSpeaking of IP addresses in SANs, it doesn't look like our OpenSSL\r\nbackend can handle those. 
That's a separate conversation, but I might\r\ntake a look at a patch for next commitfest.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 21 Sep 2021 00:06:15 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 21 Sep 2021, at 02:06, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2021-07-26 at 15:26 +0200, Daniel Gustafsson wrote:\n>>> On 19 Jul 2021, at 21:33, Jacob Champion <pchampion@vmware.com> wrote:\n>>> ..client connections will crash if\n>>> hostaddr is provided rather than host, because SSL_SetURL can't handle\n>>> a NULL argument. I'm running with 0002 to fix it for the moment, but\n>>> I'm not sure yet if it does the right thing for IP addresses, which the\n>>> OpenSSL side has a special case for.\n>> \n>> AFAICT the idea is to handle it in the cert auth callback, so I've added some\n>> PoC code to check for sslsni there and updated the TODO comment to reflect\n>> that.\n> \n> I dug a bit deeper into the SNI stuff:\n> \n>> +\tserver_hostname = SSL_RevealURL(conn->pr_fd);\n>> +\tif (!server_hostname || server_hostname[0] == '\\0')\n>> +\t{\n>> +\t\t/* If SNI is enabled we must have a hostname set */\n>> +\t\tif (conn->sslsni && conn->sslsni[0])\n>> +\t\t\tstatus = SECFailure;\n> \n> conn->sslsni can be explicitly set to \"0\" to disable it, so this should\n> probably be changed to a check for \"1\",\n\nAgreed.\n\n> but I'm not sure that would be\n> correct either. If the user has the default sslsni=\"1\" and supplies an\n> IP address for the host parameter, I don't think we should fail the\n> connection.\n\nMaybe not, but doing so is at least in line with how the OpenSSL support will\nhandle the same config AFAICT. 
Or am I missing something?\n\n>> +\tif (host && host[0] &&\n>> +\t\t!(strspn(host, \"0123456789.\") == strlen(host) ||\n>> +\t\t strchr(host, ':')))\n>> +\t\tSSL_SetURL(conn->pr_fd, host);\n> \n> It looks like NSS may already have some code that prevents SNI from\n> being sent for IP addresses, so that part of the guard might not be\n> necessary. (And potentially counterproductive, because it looks like\n> NSS can perform verification against the certificate's SANs if you pass\n> an IP address to SSL_SetURL().)\n\nSkimming the NSS code I wasn't able find the countermeasures, can you provide a\nreference to where I should look?\n\nFeel free to post a new version of the NSS patch with these changes if you want.\n\n> Speaking of IP addresses in SANs, it doesn't look like our OpenSSL\n> backend can handle those. That's a separate conversation, but I might\n> take a look at a patch for next commitfest.\n\nPlease do.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 27 Sep 2021 15:44:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-09-27 at 15:44 +0200, Daniel Gustafsson wrote:\r\n> > On 21 Sep 2021, at 02:06, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > but I'm not sure that would be\r\n> > correct either. If the user has the default sslsni=\"1\" and supplies an\r\n> > IP address for the host parameter, I don't think we should fail the\r\n> > connection.\r\n> \r\n> Maybe not, but doing so is at least in line with how the OpenSSL support will\r\n> handle the same config AFAICT. Or am I missing something?\r\n\r\nWith OpenSSL, I don't see a connection failure when using sslsni=1 with\r\nIP addresses. 
(verify-full can't work, but that's a separate problem.)\r\n\r\n> > > +\tif (host && host[0] &&\r\n> > > +\t\t!(strspn(host, \"0123456789.\") == strlen(host) ||\r\n> > > +\t\t strchr(host, ':')))\r\n> > > +\t\tSSL_SetURL(conn->pr_fd, host);\r\n> > \r\n> > It looks like NSS may already have some code that prevents SNI from\r\n> > being sent for IP addresses, so that part of the guard might not be\r\n> > necessary. (And potentially counterproductive, because it looks like\r\n> > NSS can perform verification against the certificate's SANs if you pass\r\n> > an IP address to SSL_SetURL().)\r\n> \r\n> Skimming the NSS code I wasn't able find the countermeasures, can you provide a\r\n> reference to where I should look?\r\n\r\nI see the check in ssl_ShouldSendSNIExtension(), in ssl3exthandle.c.\r\n\r\n> Feel free to post a new version of the NSS patch with these changes if you want.\r\n\r\nWill do!\r\n\r\nThanks,\r\n--Jacob\r\n", "msg_date": "Mon, 27 Sep 2021 16:29:43 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-09-27 at 16:29 +0000, Jacob Champion wrote:\r\n> On Mon, 2021-09-27 at 15:44 +0200, Daniel Gustafsson wrote:\r\n> > \r\n> > Feel free to post a new version of the NSS patch with these changes if you want.\r\n> \r\n> Will do!\r\n\r\nSomething like the attached, v43, I think. 
(since-v42.diff.txt has the\r\nchanges only.)\r\n\r\nThis fixes the interaction of IP addresses and SNI for me, and honors\r\nsslsni=0.\r\n\r\n--Jacob", "msg_date": "Mon, 27 Sep 2021 19:40:28 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, Sep 20, 2021 at 2:38 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> Rebased on top of HEAD with off-list comment fixes by Kevin Burke.\n>\n\nHello Daniel,\n\nI've been playing with your patch on Mac (OS 11.6 Big Sur) and have\nrun into a couple of issues so far.\n\n1. I get 7 warnings while running make (truncated):\ncryptohash_nss.c:101:21: warning: implicit conversion from enumeration\ntype 'SECOidTag' to different enumeration type 'HASH_HashType'\n[-Wenum-conversion]\n ctx->hash_type = SEC_OID_SHA1;\n ~ ^~~~~~~~~~~~\n...\ncryptohash_nss.c:134:34: warning: implicit conversion from enumeration\ntype 'HASH_HashType' to different enumeration type 'SECOidTag'\n[-Wenum-conversion]\n hash = SECOID_FindOIDByTag(ctx->hash_type);\n ~~~~~~~~~~~~~~~~~~~ ~~~~~^~~~~~~~~\n7 warnings generated.\n\n2. libpq-refs-stamp fails -- it appears an exit is being injected into\nlibpq on Mac\n\nNotes about my environment:\nI've installed nss via homebrew (at version 3.70) and linked it.\n\nCheers,\nRachel\n\n\n", "msg_date": "Mon, 27 Sep 2021 16:07:19 -0700", "msg_from": "Rachel Heaton <rachelmheaton@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 28 Sep 2021, at 01:07, Rachel Heaton <rachelmheaton@gmail.com> wrote:\n\n> 1. I get 7 warnings while running make (truncated):\n> cryptohash_nss.c:101:21: warning: implicit conversion from enumeration\n> type 'SECOidTag' to different enumeration type 'HASH_HashType'\n\nNice catch, fixed in the attached.\n\n> 2. 
libpq-refs-stamp fails -- it appears an exit is being injected into\n> libpq on Mac\n\nI spent some time investigating this, and there are two cases of _exit() and\none atexit() which are coming from the threading code in libnspr (which is the\nruntime lib required by libnss).\n\nOn macOS the threading code registers an atexit handler [0] in order to work\naround issues with __attribute__((destructor)) [1]. The pthreads code also\ndefines PR_ProcessExit [2] which does what it says on the tin, calls exit and\nnot much more [3]. Both of these uses are only compiled when building with\npthreads, which can be disabled in autoconf but that seems broken in recent\nversion of NSPR. I'm fairly sure I've built NSPR with the user pthreads in the\npast, but if packagers build it like this then we need to conform to that. The\nPR_CreateProcess() [4] call further calls _exit() [5] in a number of error\npaths on failing syscalls.\n\nThe libpq libnss implementation doesn't call either of these, and neither does\nlibnss.\n\nI'm not entirely sure what to do here, it clearly requires an exception in the\nMakefile check of sorts if we deem we can live with this.\n\n@Jacob: how did you configure your copy of NSPR?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://hg.mozilla.org/projects/nspr/file/tip/pr/src/pthreads/ptthread.c#l1034\n[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1399746#c99\n[2] https://www-archive.mozilla.org/projects/nspr/reference/html/prinit.html#15859\n[3] https://hg.mozilla.org/projects/nspr/file/tip/pr/src/pthreads/ptthread.c#l1181\n[4] https://www-archive.mozilla.org/projects/nspr/reference/html/prprocess.html#24535\n[5] https://hg.mozilla.org/projects/nspr/file/tip/pr/src/md/unix/uxproces.c#l268", "msg_date": "Thu, 30 Sep 2021 14:17:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-09-30 at 14:17 +0200, Daniel 
Gustafsson wrote:\r\n> The libpq libnss implementation doesn't call either of these, and neither does\r\n> libnss.\r\n\r\nI thought the refs check only searched for direct symbol dependencies;\r\nis that piece of NSPR being statically included somehow?\r\n\r\n> I'm not entirely sure what to do here, it clearly requires an exception in the\r\n> Makefile check of sorts if we deem we can live with this.\r\n> \r\n> @Jacob: how did you configure your copy of NSPR?\r\n\r\nI use the Ubuntu 20.04 builtin (NSPR 4.25.0), but it looks like the\r\nreason I haven't been seeing this is because I've always used --enable-\r\ncoverage. If I take that out, I see the same exit check failure.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 30 Sep 2021 16:04:39 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, 2021-09-30 at 16:04 +0000, Jacob Champion wrote:\r\n> On Thu, 2021-09-30 at 14:17 +0200, Daniel Gustafsson wrote:\r\n> > The libpq libnss implementation doesn't call either of these, and neither does\r\n> > libnss.\r\n> \r\n> I thought the refs check only searched for direct symbol dependencies;\r\n> is that piece of NSPR being statically included somehow?\r\n\r\nOn my machine, at least, exit() is coming in due to a few calls to\r\npsprintf(), pstrdup(), and pg_malloc() in the new NSS code.\r\n(Disassembly via `objdump -S libpq.so` helped me track those down.) 
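The hunt described here can be scripted; a rough sketch (the function name is made up, and it assumes the conventional "U symbol" form that GNU nm uses for undefined references):

```shell
# Reduce `nm -D` output to undefined references to the exit family --
# the calls the libpq-refs-stamp check is effectively guarding against.
filter_exit_refs() {
    awk '$1 == "U" && $2 ~ /^(exit|_exit|atexit)$/ { print $2 }'
}

# Intended usage: nm -D libpq.so | filter_exit_refs
```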
I'm\r\nworking on a patch.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 1 Oct 2021 00:02:06 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 1 Oct 2021, at 02:02, Jacob Champion <pchampion@vmware.com> wrote:\n\n> On my machine, at least, exit() is coming in due to a few calls to\n> psprintf(), pstrdup(), and pg_malloc() in the new NSS code.\n> (Disassembly via `objdump -S libpq.so` helped me track those down.) I'm\n> working on a patch.\n\nAh, that makes perfect sense. I was too focused on hunting in what new was\nlinked against that I overlooked the obvious. Thanks for finding these.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 1 Oct 2021 08:55:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, 2021-10-01 at 08:55 +0200, Daniel Gustafsson wrote:\r\n> Ah, that makes perfect sense. I was too focused on hunting in what new was\r\n> linked against that I overlooked the obvious. Thanks for finding these.\r\n\r\nNo problem at all :) The exit() check is useful but still a little\r\nopaque, I think, especially since (from my newbie perspective) there's\r\nso much of the pgcommon staticlib that is forbidden for use in libpq.\r\n\r\nFixed in v44, attached; changes in since-v43.diff.txt.\r\n\r\n--Jacob", "msg_date": "Mon, 4 Oct 2021 16:14:37 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Oct 2021, at 18:14, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Fri, 2021-10-01 at 08:55 +0200, Daniel Gustafsson wrote:\n>> Ah, that makes perfect sense. I was too focused on hunting in what new was\n>> linked against that I overlooked the obvious. 
Thanks for finding these.\n> \n> No problem at all :) The exit() check is useful but still a little\n> opaque, I think, especially since (from my newbie perspective) there's\n> so much of the pgcommon staticlib that is forbidden for use in libpq.\n\nThanks! These changes looks good. Since you accidentally based this on v43\nand not the v44 I posted with the cryptohash fix in, the attached is a v45 with\nboth your v44 and the previous one, all rebased over HEAD.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 5 Oct 2021 15:08:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2021-10-05 at 15:08 +0200, Daniel Gustafsson wrote:\r\n> Thanks! These changes looks good. Since you accidentally based this on v43\r\n> and not the v44 I posted with the cryptohash fix in, the attached is a v45 with\r\n> both your v44 and the previous one, all rebased over HEAD.\r\n\r\nThanks, and sorry about that.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 5 Oct 2021 15:05:31 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi all, apologies but I'm having trouble applying the latest patch (v45) to\nthe latest commit on master (6b0f6f79eef2168ce38a8ee99c3ed76e3df5d7ad)\n\nI downloaded all of the patches to my local filesystem, and then ran:\n\nfor patch in\n../../kevinburke/rustls-postgres/patchsets/2021-10-05-gustafsson-mailing-list/*.patch;\ndo git am $patch; done;\n\nI get the following error on the second patch file:\n\nApplying: Refactor SSL testharness for multiple library\nerror: patch failed: src/test/ssl/t/001_ssltests.pl:7\nerror: src/test/ssl/t/001_ssltests.pl: patch does not apply\nerror: patch failed: src/test/ssl/t/SSLServer.pm:26\nerror: src/test/ssl/t/SSLServer.pm: patch does not apply\nPatch failed at 0001 Refactor SSL 
testharness for multiple library\nhint: Use 'git am --show-current-patch=diff' to see the failed patch\n\nI believe that these patches need to integrate the refactoring in\ncommit b3b4d8e68ae83f432f43f035c7eb481ef93e1583 - git is searching for the\nwrong text in the existing file, but I'm not sure how to submit a patch\nagainst a patch.\n\nThanks,\nKevin\n\nOn Tue, Oct 5, 2021 at 8:05 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n> On Tue, 2021-10-05 at 15:08 +0200, Daniel Gustafsson wrote:\n> > Thanks!  These changes looks good.  Since you accidentally based this on\n> v43\n> > and not the v44 I posted with the cryptohash fix in, the attached is a\n> v45 with\n> > both your v44 and the previous one, all rebased over HEAD.\n>\n> Thanks, and sorry about that.\n>\n> --Jacob\n>\n", "msg_date": "Thu, 28 Oct 2021 21:31:47 -0700", "msg_from": "Kevin Burke <kevin@burke.dev>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "For anyone else trying to test out this branch I'm able to get the patches\nto apply cleanly if I check out e.g. commit\n92e6a98c3636948e7ece9a3260f9d89dd60da278.\n\nKevin\n\n--\nKevin Burke\nphone: 925-271-7005 | kevin.burke.dev\n\n\nOn Thu, Oct 28, 2021 at 9:31 PM Kevin Burke <kevin@burke.dev> wrote:\n\n> Hi all, apologies but I'm having trouble applying the latest patch (v45)\n> to the latest commit on master (6b0f6f79eef2168ce38a8ee99c3ed76e3df5d7ad)\n>\n> I downloaded all of the patches to my local filesystem, and then ran:\n>\n> for patch in\n> ../../kevinburke/rustls-postgres/patchsets/2021-10-05-gustafsson-mailing-list/*.patch;\n> do git am $patch; done;\n>\n> I get the following error on the second patch file:\n>\n> Applying: Refactor SSL testharness for multiple library\n> error: patch failed: src/test/ssl/t/001_ssltests.pl:7\n> error: src/test/ssl/t/001_ssltests.pl: patch does not apply\n> error: patch failed: src/test/ssl/t/SSLServer.pm:26\n> error: src/test/ssl/t/SSLServer.pm: patch does not apply\n> Patch failed at 0001 Refactor SSL testharness for multiple library\n> hint: Use 'git am --show-current-patch=diff' to see the failed patch\n>\n> I believe that these patches need to integrate the refactoring in\n> commit b3b4d8e68ae83f432f43f035c7eb481ef93e1583 - git is searching for the\n> wrong text in the existing file, but I'm not sure how to submit a patch\n> against a patch.\n>\n> Thanks,\n> Kevin\n>\n> On Tue, Oct 5, 2021 at 8:05 AM Jacob Champion <pchampion@vmware.com>\n> wrote:\n>\n>> On Tue, 
Since you accidentally based this on v43\n> and not the v44 I posted with the cryptohash fix in, the attached is a v45 with\n> both your v44 and the previous one, all rebased over HEAD.\n\nThanks, and sorry about that.\n\n--Jacob", "msg_date": "Thu, 28 Oct 2021 21:36:43 -0700", "msg_from": "Kevin Burke <kevin@burke.dev>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 29 Oct 2021, at 06:31, Kevin Burke <kevin@burke.dev> wrote:\n\nThanks for testing the patch!\n\n> I believe that these patches need to integrate the refactoring in commit\n> b3b4d8e68ae83f432f43f035c7eb481ef93e1583 - git is searching for the wrong text\n> in the existing file\n\n\nCorrect, b3b4d8e68 as well as b4c4a00ea both created conflicts with this\npatchset. Attached is an updated patchset fixing both of those as well as\nadding version checks for NSS and NSPR to autoconf (with fallbacks for\nnon-{nss|nspr}-config systems). The versions picked are semi-arbitrary and\ndefinitely up for discussion. 
I chose them mainly as they were the oldest\ncommonly available packages I found, and they satisfy the requirements we have.\n\n> I'm not sure how to submit a patch against a patch.\n\nIf you've done the work of fixing the conflicts in a rebase, the best option is\nIMO to supply a whole new version of the patchset since that will make the CF\npatch tester be able to build and test the version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 29 Oct 2021 13:54:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Attached is a rebase fixing a tiny bug in the documentation which prevented it\nfrom being able to compile.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 5 Nov 2021 11:01:18 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Nov 5, 2021 at 6:01 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> Attached is a rebase fixing a tiny bug in the documentation which prevented it\n> from being able to compile.\n>\n\nHello, I'm looking to help out with reviews for this CF and I'm\ncurrently looking at this patchset.\n\ncurrently I'm stuck trying to configure:\n\nchecking for nss-config... /usr/bin/nss-config\nchecking for nspr-config... /usr/bin/nspr-config\n...\nchecking nss/ssl.h usability... no\nchecking nss/ssl.h presence... no\nchecking for nss/ssl.h... no\nconfigure: error: header file <nss/ssl.h> is required for NSS\n\nThis is on fedora 33 and nss-devel is installed, nss-config is\navailable (and configure finds it) but the directory is different from\nUbuntu:\n(base) [vagrant@fedora ~]$ nss-config --includedir\n/usr/include/nss3\n(base) [vagrant@fedora ~]$ ls -al /usr/include/nss3/ssl.h\n-rw-r--r--. 
1 root root 70450 Sep 30 05:41 /usr/include/nss3/ssl.h\n\nSo if nss-config --includedir is used then #include <ssl.h> should be\nused, or if not then #include <nss3/ssl.h> but on this system #include\n<nss/ssl.h> is not going to work.\n\nThanks\n\n\n", "msg_date": "Tue, 9 Nov 2021 13:59:51 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 9, 2021 at 1:59 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Fri, Nov 5, 2021 at 6:01 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > Attached is a rebase fixing a tiny bug in the documentation which prevented it\n> > from being able to compile.\n> >\n>\n> Hello, I'm looking to help out with reviews for this CF and I'm\n> currently looking at this patchset.\n>\n> currently I'm stuck trying to configure:\n>\n> checking for nss-config... /usr/bin/nss-config\n> checking for nspr-config... /usr/bin/nspr-config\n> ...\n> checking nss/ssl.h usability... no\n> checking nss/ssl.h presence... no\n> checking for nss/ssl.h... no\n> configure: error: header file <nss/ssl.h> is required for NSS\n>\n> This is on fedora 33 and nss-devel is installed, nss-config is\n> available (and configure finds it) but the directory is different from\n> Ubuntu:\n> (base) [vagrant@fedora ~]$ nss-config --includedir\n> /usr/include/nss3\n> (base) [vagrant@fedora ~]$ ls -al /usr/include/nss3/ssl.h\n> -rw-r--r--. 
1 root root 70450 Sep 30 05:41 /usr/include/nss3/ssl.h\n>\n> So if nss-config --includedir is used then #include <ssl.h> should be\n> used, or if not then #include <nss3/ssl.h> but on this system #include\n> <nss/ssl.h> is not going to work.\n\nFYI, if I make a symlink to get past this, configure completes but\ncompilation fails because nspr/nspr.h cannot be found (I'm not sure\nwhy configure doesn't discover this)\n../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n#include <nspr/nspr.h>In file included from protocol_nss.c:24:\n../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n#include <nspr/nspr.h>\n ^~~~~~~~~~~~~\n\nIt's a similar issue:\n$ nspr-config --includedir\n/usr/include/nspr4\n\n\n", "msg_date": "Tue, 9 Nov 2021 14:02:13 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 9, 2021 at 2:02 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 1:59 PM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> >\n> > On Fri, Nov 5, 2021 at 6:01 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > Attached is a rebase fixing a tiny bug in the documentation which prevented it\n> > > from being able to compile.\n> > >\n> >\n> > Hello, I'm looking to help out with reviews for this CF and I'm\n> > currently looking at this patchset.\n> >\n> > currently I'm stuck trying to configure:\n> >\n> > checking for nss-config... /usr/bin/nss-config\n> > checking for nspr-config... /usr/bin/nspr-config\n> > ...\n> > checking nss/ssl.h usability... no\n> > checking nss/ssl.h presence... no\n> > checking for nss/ssl.h... 
no\n> > configure: error: header file <nss/ssl.h> is required for NSS\n> >\n> > This is on fedora 33 and nss-devel is installed, nss-config is\n> > available (and configure finds it) but the directory is different from\n> > Ubuntu:\n> > (base) [vagrant@fedora ~]$ nss-config --includedir\n> > /usr/include/nss3\n> > (base) [vagrant@fedora ~]$ ls -al /usr/include/nss3/ssl.h\n> > -rw-r--r--. 1 root root 70450 Sep 30 05:41 /usr/include/nss3/ssl.h\n> >\n> > So if nss-config --includedir is used then #include <ssl.h> should be\n> > used, or if not then #include <nss3/ssl.h> but on this system #include\n> > <nss/ssl.h> is not going to work.\n>\n> FYI, if I make a symlink to get past this, configure completes but\n> compilation fails because nspr/nspr.h cannot be found (I'm not sure\n> why configure doesn't discover this)\n> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n> #include <nspr/nspr.h>In file included from protocol_nss.c:24:\n> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n> #include <nspr/nspr.h>\n> ^~~~~~~~~~~~~\n>\n> It's a similar issue:\n> $ nspr-config --includedir\n> /usr/include/nspr4\n\nIf these get resolved the next issue is llvm bitcode doesn't compile\nbecause the nss includedir is missing from CPPFLAGS:\n\n/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv\n-O2 -I../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2\n-I/usr/include -flto=thin -emit-llvm -c -o be-secure-nss.bc\nbe-secure-nss.c\nIn file included from be-secure-nss.c:20:\nIn file included from ../../../src/include/common/nss.h:38:\nIn file included from /usr/include/nss/nss.h:34:\n/usr/include/nss/seccomon.h:17:10: fatal error: 'prtypes.h' file not found\n#include \"prtypes.h\"\n ^~~~~~~~~~~\n1 error generated.\n\n\n", "msg_date": "Tue, 9 Nov 2021 16:22:59 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS 
backend" }, { "msg_contents": "> On 9 Nov 2021, at 22:22, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> On Tue, Nov 9, 2021 at 2:02 PM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n>> \n>> On Tue, Nov 9, 2021 at 1:59 PM Joshua Brindle\n>> <joshua.brindle@crunchydata.com> wrote:\n\n>>> Hello, I'm looking to help out with reviews for this CF and I'm\n>>> currently looking at this patchset.\n\nThanks, much appreciated!\n\n>>> currently I'm stuck trying to configure:\n>>> \n>>> checking for nss-config... /usr/bin/nss-config\n>>> checking for nspr-config... /usr/bin/nspr-config\n>>> ...\n>>> checking nss/ssl.h usability... no\n>>> checking nss/ssl.h presence... no\n>>> checking for nss/ssl.h... no\n>>> configure: error: header file <nss/ssl.h> is required for NSS\n>>> \n>>> This is on fedora 33 and nss-devel is installed, nss-config is\n>>> available (and configure finds it) but the directory is different from\n>>> Ubuntu:\n>>> (base) [vagrant@fedora ~]$ nss-config --includedir\n>>> /usr/include/nss3\n>>> (base) [vagrant@fedora ~]$ ls -al /usr/include/nss3/ssl.h\n>>> -rw-r--r--. 1 root root 70450 Sep 30 05:41 /usr/include/nss3/ssl.h\n>>> \n>>> So if nss-config --includedir is used then #include <ssl.h> should be\n>>> used, or if not then #include <nss3/ssl.h> but on this system #include\n>>> <nss/ssl.h> is not going to work.\n\nInteresting rename, I doubt any version but NSS 3 and NSPR 4 is alive anywhere\nand an incremented major version seems highly unlikely. Going back to plain\n#include <ssl.h> and have the includeflags sort out the correct directories\nseems like the best option then. 
Fixed in the attached.\n\n>> FYI, if I make a symlink to get past this, configure completes but\n>> compilation fails because nspr/nspr.h cannot be found (I'm not sure\n>> why configure doesn't discover this)\n>> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n>> #include <nspr/nspr.h>In file included from protocol_nss.c:24:\n>> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n>> #include <nspr/nspr.h>\n>> ^~~~~~~~~~~~~\n>> \n>> It's a similar issue:\n>> $ nspr-config --includedir\n>> /usr/include/nspr4\n\nFixed.\n\n> If these get resolved the next issue is llvm bitcode doesn't compile\n> because the nss includedir is missing from CPPFLAGS:\n> \n> /usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv\n> -O2 -I../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2\n> -I/usr/include -flto=thin -emit-llvm -c -o be-secure-nss.bc\n> be-secure-nss.c\n> In file included from be-secure-nss.c:20:\n> In file included from ../../../src/include/common/nss.h:38:\n> In file included from /usr/include/nss/nss.h:34:\n> /usr/include/nss/seccomon.h:17:10: fatal error: 'prtypes.h' file not found\n> #include \"prtypes.h\"\n> ^~~~~~~~~~~\n> 1 error generated.\n\nFixed.\n\nThe attached also resolves the conflicts in pgcrypto following db7d1a7b05. PGP\nelgamel and RSA pubkey functions aren't supported for now as there is no bignum\nfunctions similar to the BN_* in OpenSSL. 
I will look into more how hard it\nwould be to support, for now this gets us ahead.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 10 Nov 2021 14:49:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Nov 10, 2021 at 8:49 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 9 Nov 2021, at 22:22, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > On Tue, Nov 9, 2021 at 2:02 PM Joshua Brindle\n> > <joshua.brindle@crunchydata.com> wrote:\n> >>\n> >> On Tue, Nov 9, 2021 at 1:59 PM Joshua Brindle\n> >> <joshua.brindle@crunchydata.com> wrote:\n>\n> >>> Hello, I'm looking to help out with reviews for this CF and I'm\n> >>> currently looking at this patchset.\n>\n> Thanks, much appreciated!\n>\n> >>> currently I'm stuck trying to configure:\n> >>>\n> >>> checking for nss-config... /usr/bin/nss-config\n> >>> checking for nspr-config... /usr/bin/nspr-config\n> >>> ...\n> >>> checking nss/ssl.h usability... no\n> >>> checking nss/ssl.h presence... no\n> >>> checking for nss/ssl.h... no\n> >>> configure: error: header file <nss/ssl.h> is required for NSS\n> >>>\n> >>> This is on fedora 33 and nss-devel is installed, nss-config is\n> >>> available (and configure finds it) but the directory is different from\n> >>> Ubuntu:\n> >>> (base) [vagrant@fedora ~]$ nss-config --includedir\n> >>> /usr/include/nss3\n> >>> (base) [vagrant@fedora ~]$ ls -al /usr/include/nss3/ssl.h\n> >>> -rw-r--r--. 1 root root 70450 Sep 30 05:41 /usr/include/nss3/ssl.h\n> >>>\n> >>> So if nss-config --includedir is used then #include <ssl.h> should be\n> >>> used, or if not then #include <nss3/ssl.h> but on this system #include\n> >>> <nss/ssl.h> is not going to work.\n>\n> Interesting rename, I doubt any version but NSS 3 and NSPR 4 is alive anywhere\n> and an incremented major version seems highly unlikely. 
Going back to plain\n> #include <ssl.h> and have the includeflags sort out the correct directories\n> seems like the best option then. Fixed in the attached.\n>\n> >> FYI, if I make a symlink to get past this, configure completes but\n> >> compilation fails because nspr/nspr.h cannot be found (I'm not sure\n> >> why configure doesn't discover this)\n> >> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n> >> #include <nspr/nspr.h>In file included from protocol_nss.c:24:\n> >> ../../src/include/common/nss.h:31:10: fatal error: 'nspr/nspr.h' file not found\n> >> #include <nspr/nspr.h>\n> >> ^~~~~~~~~~~~~\n> >>\n> >> It's a similar issue:\n> >> $ nspr-config --includedir\n> >> /usr/include/nspr4\n>\n> Fixed.\n>\n> > If these get resolved the next issue is llvm bitcode doesn't compile\n> > because the nss includedir is missing from CPPFLAGS:\n> >\n> > /usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv\n> > -O2 -I../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2\n> > -I/usr/include -flto=thin -emit-llvm -c -o be-secure-nss.bc\n> > be-secure-nss.c\n> > In file included from be-secure-nss.c:20:\n> > In file included from ../../../src/include/common/nss.h:38:\n> > In file included from /usr/include/nss/nss.h:34:\n> > /usr/include/nss/seccomon.h:17:10: fatal error: 'prtypes.h' file not found\n> > #include \"prtypes.h\"\n> > ^~~~~~~~~~~\n> > 1 error generated.\n>\n> Fixed.\n\nApologies for the delay, this didn't go to my inbox and I missed it on list.\n\nThe bitcode generation is still broken, this time for nspr.h:\n\n/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv\n-O2 -I../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2\n-I/usr/include -flto=thin -emit-llvm -c -o be-secure-nss.bc\nbe-secure-nss.c\nIn file included from be-secure-nss.c:20:\n../../../src/include/common/nss.h:31:10: fatal error: 'nspr.h' file not found\n#include <nspr.h>\n ^~~~~~~~\n1 error generated.\n\nFWIW I attached the 
Dockerfile I've been using to test this, primarily\nto ensure that there were no openssl devel files lurking around during\ncompilation.\n\nIt expects a ./postgres directory with whatever patches already applied to it.\n\n>\n> The attached also resolves the conflicts in pgcrypto following db7d1a7b05. PGP\n> elgamel and RSA pubkey functions aren't supported for now as there is no bignum\n> functions similar to the BN_* in OpenSSL. I will look into more how hard it\n> would be to support, for now this gets us ahead.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>", "msg_date": "Mon, 15 Nov 2021 14:51:33 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 15 Nov 2021, at 20:51, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n\n> Apologies for the delay, this didn't go to my inbox and I missed it on list.\n> \n> The bitcode generation is still broken, this time for nspr.h:\n\nInteresting, I am unable to replicate that in my tree but I'll investigate\nfurther tomorrow using your Dockerfile. For the sake of testing, does\ncompilation pass for you in the same place without using --with-llvm?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 15 Nov 2021 22:44:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, Nov 15, 2021 at 4:44 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 15 Nov 2021, at 20:51, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n>\n> > Apologies for the delay, this didn't go to my inbox and I missed it on list.\n> >\n> > The bitcode generation is still broken, this time for nspr.h:\n>\n> Interesting, I am unable to replicate that in my tree but I'll investigate\n> further tomorrow using your Dockerfile. 
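The bitcode failures above share one cause: PostgreSQL's %.bc rule preprocesses with CPPFLAGS rather than CFLAGS, so header directories added only to CFLAGS never reach the clang -emit-llvm invocation (visible in the failing command line, which carries only the CPPFLAGS contents). A rough sketch of the configure-side fix — the include paths shown are the Fedora ones reported earlier in the thread and may differ elsewhere:

```sh
# Sketch only: Makefile.global's bitcode rule compiles roughly as
#   $(CLANG) ... $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c
# so the NSS/NSPR header directories must land in CPPFLAGS as well.
NSS_INCLUDE=$(nss-config --includedir)     # /usr/include/nss3 on Fedora
NSPR_INCLUDE=$(nspr-config --includedir)   # /usr/include/nspr4 on Fedora
CFLAGS="$CFLAGS -I$NSS_INCLUDE -I$NSPR_INCLUDE"
CPPFLAGS="$CPPFLAGS -I$NSS_INCLUDE -I$NSPR_INCLUDE"
```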
For the sake of testing, does\n> compilation pass for you in the same place without using --with-llvm?\n>\n\nYes, it builds and check-world passes. I'll continue testing with this\nbuild. Thank you.\n\n\n", "msg_date": "Mon, 15 Nov 2021 17:37:40 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, Nov 15, 2021 at 5:37 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 4:44 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 15 Nov 2021, at 20:51, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> >\n> > > Apologies for the delay, this didn't go to my inbox and I missed it on list.\n> > >\n> > > The bitcode generation is still broken, this time for nspr.h:\n> >\n> > Interesting, I am unable to replicate that in my tree but I'll investigate\n> > further tomorrow using your Dockerfile. For the sake of testing, does\n> > compilation pass for you in the same place without using --with-llvm?\n> >\n>\n> Yes, it builds and check-world passes. I'll continue testing with this\n> build. 
Thank you.\n\nThe previous Dockerfile had some issues due to a hasty port from RHEL\nto Fedora, attached is one that works with your patchset, llvm\ncurrently disabled, and the llvm deps removed.\n\nThe service file is also attached since it's referenced in the\nDockerfile and you'd have had to reproduce it.\n\nAfter building, run with:\ndocker run --name pg-test -p 5432:5432 --cap-add=SYS_ADMIN -v\n/sys/fs/cgroup:/sys/fs/cgroup:ro -d <final docker hash>", "msg_date": "Tue, 16 Nov 2021 09:45:50 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 16, 2021 at 9:45 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 5:37 PM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 4:44 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 15 Nov 2021, at 20:51, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > >\n> > > > Apologies for the delay, this didn't go to my inbox and I missed it on list.\n> > > >\n> > > > The bitcode generation is still broken, this time for nspr.h:\n> > >\n> > > Interesting, I am unable to replicate that in my tree but I'll investigate\n> > > further tomorrow using your Dockerfile. For the sake of testing, does\n> > > compilation pass for you in the same place without using --with-llvm?\n> > >\n> >\n> > Yes, it builds and check-world passes. I'll continue testing with this\n> > build. 
Thank you.\n>\n> The previous Dockerfile had some issues due to a hasty port from RHEL\n> to Fedora, attached is one that works with your patchset, llvm\n> currently disabled, and the llvm deps removed.\n>\n> The service file is also attached since it's referenced in the\n> Dockerfile and you'd have had to reproduce it.\n>\n> After building, run with:\n> docker run --name pg-test -p 5432:5432 --cap-add=SYS_ADMIN -v\n> /sys/fs/cgroup:/sys/fs/cgroup:ro -d <final docker hash>\n\nI think there it a typo in the docs here that prevents them from\nbuilding (this diff seems to fix it):\n\ndiff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml\nindex 56b73e033c..844aa31e86 100644\n--- a/doc/src/sgml/pgcrypto.sgml\n+++ b/doc/src/sgml/pgcrypto.sgml\n@@ -767,7 +767,7 @@ pgp_sym_encrypt(data, psw, 'compress-algo=1,\ncipher-algo=aes256')\n <para>\n Which cipher algorithm to use. <literal>cast5</literal> is only available\n if <productname>PostgreSQL</productname> was built with\n- <productname>OpenSSL</productame>.\n+ <productname>OpenSSL</productname>.\n </para>\n <literallayout>\n Values: bf, aes128, aes192, aes256, 3des, cast5\n\n\n", "msg_date": "Tue, 16 Nov 2021 13:26:37 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 16, 2021 at 1:26 PM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 9:45 AM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 5:37 PM Joshua Brindle\n> > <joshua.brindle@crunchydata.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 4:44 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > >\n> > > > > On 15 Nov 2021, at 20:51, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > > >\n> > > > > Apologies for the delay, this didn't go to my inbox and I missed it on list.\n> > > > >\n> > > > > The bitcode generation is still 
broken, this time for nspr.h:\n> > > >\n> > > > Interesting, I am unable to replicate that in my tree but I'll investigate\n> > > > further tomorrow using your Dockerfile. For the sake of testing, does\n> > > > compilation pass for you in the same place without using --with-llvm?\n> > > >\n> > >\n> > > Yes, it builds and check-world passes. I'll continue testing with this\n> > > build. Thank you.\n> >\n> > The previous Dockerfile had some issues due to a hasty port from RHEL\n> > to Fedora, attached is one that works with your patchset, llvm\n> > currently disabled, and the llvm deps removed.\n> >\n> > The service file is also attached since it's referenced in the\n> > Dockerfile and you'd have had to reproduce it.\n> >\n> > After building, run with:\n> > docker run --name pg-test -p 5432:5432 --cap-add=SYS_ADMIN -v\n> > /sys/fs/cgroup:/sys/fs/cgroup:ro -d <final docker hash>\n>\n> I think there it a typo in the docs here that prevents them from\n> building (this diff seems to fix it):\n>\n> diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml\n> index 56b73e033c..844aa31e86 100644\n> --- a/doc/src/sgml/pgcrypto.sgml\n> +++ b/doc/src/sgml/pgcrypto.sgml\n> @@ -767,7 +767,7 @@ pgp_sym_encrypt(data, psw, 'compress-algo=1,\n> cipher-algo=aes256')\n> <para>\n> Which cipher algorithm to use. 
<literal>cast5</literal> is only available\n> if <productname>PostgreSQL</productname> was built with\n> - <productname>OpenSSL</productame>.\n> + <productname>OpenSSL</productname>.\n> </para>\n> <literallayout>\n> Values: bf, aes128, aes192, aes256, 3des, cast5\n\nAfter a bit more testing, the server is up and running with an nss\ndatabase but before configuring the client database I tried connecting\nand got a segfault:\n\n#0 PR_Write (fd=0x0, buf=0x141ba60, amount=84) at\nio/../../.././nspr/pr/src/io/priometh.c:114\n#1 0x00007ff33dfdc62f in pgtls_write (conn=0x13cecb0, ptr=0x141ba60,\nlen=84) at fe-secure-nss.c:583\n#2 0x00007ff33dfd6e18 in pqsecure_write (conn=0x13cecb0,\nptr=0x141ba60, len=84) at fe-secure.c:295\n#3 0x00007ff33dfd04dc in pqSendSome (conn=0x13cecb0, len=84) at fe-misc.c:834\n#4 0x00007ff33dfd06c8 in pqFlush (conn=0x13cecb0) at fe-misc.c:972\n#5 0x00007ff33dfc257c in pqPacketSend (conn=0x13cecb0, pack_type=0\n'\\000', buf=0x1414c60, buf_len=80) at fe-connect.c:4619\n#6 0x00007ff33dfbfadd in PQconnectPoll (conn=0x13cecb0) at fe-connect.c:2986\n#7 0x00007ff33dfbe55c in connectDBComplete (conn=0x13cecb0) at\nfe-connect.c:2218\n#8 0x00007ff33dfbbaef in PQconnectdbParams (keywords=0x1427d10,\nvalues=0x1427e60, expand_dbname=1) at fe-connect.c:668\n#9 0x000000000043ebc7 in main (argc=2, argv=0x7ffdccd0e2f8) at startup.c:273\n\nIt looks like the ssl connection falls through to attempt a non-ssl\nconnection but at some point conn->ssl_in_use gets set to true,\ndespite pr_fd and nss_context being null.\n\nThis patch fixes the segfault but I suspect is not the correct fix,\ndue to the error when connecting saying \"Success\":\n\n--- a/src/interfaces/libpq/fe-secure-nss.c\n+++ b/src/interfaces/libpq/fe-secure-nss.c\n@@ -498,6 +498,11 @@ pgtls_read(PGconn *conn, void *ptr, size_t len)\n * for closed connections, while -1 indicates an error within\nthe ongoing\n * connection.\n */\n+ if (!conn->pr_fd) {\n+ SOCK_ERRNO_SET(read_errno);\n+ return -1;\n+ }\n+\n 
nread = PR_Recv(conn->pr_fd, ptr, len, 0, PR_INTERVAL_NO_WAIT);\n\n if (nread == 0)\n@@ -580,6 +585,11 @@ pgtls_write(PGconn *conn, const void *ptr, size_t len)\n PRErrorCode status;\n int write_errno = 0;\n\n+ if (!conn->pr_fd) {\n+ SOCK_ERRNO_SET(write_errno);\n+ return -1;\n+ }\n+\n n = PR_Write(conn->pr_fd, ptr, len);\n\n if (n < 0)\n\n\n", "msg_date": "Wed, 17 Nov 2021 13:42:23 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 17 Nov 2021, at 19:42, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> On Tue, Nov 16, 2021 at 1:26 PM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n\n>> I think there it a typo in the docs here that prevents them from\n>> building (this diff seems to fix it):\n\nAh yes, thanks, I had noticed that one but forgot to send out a new version to\nmake the CFBot green.\n\n> After a bit more testing, the server is up and running with an nss\n> database but before configuring the client database I tried connecting\n> and got a segfault:\n\nInteresting. I'm unable to reproduce this crash, can you show the sequence of\ncommands which led to this?\n\n> It looks like the ssl connection falls through to attempt a non-ssl\n> connection but at some point conn->ssl_in_use gets set to true,\n> despite pr_fd and nss_context being null.\n\npgtls_close missed setting ssl_in_use to false, fixed in the attached. 
I've\nalso added some assertions to the connection setup for debugging this.\n\n> This patch fixes the segfault but I suspect is not the correct fix,\n> due to the error when connecting saying \"Success\":\n\nRight, without an SSL enabled FD we should never get here.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 23 Nov 2021 15:12:45 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Nov 23, 2021 at 9:12 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 Nov 2021, at 19:42, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > On Tue, Nov 16, 2021 at 1:26 PM Joshua Brindle\n> > <joshua.brindle@crunchydata.com> wrote:\n>\n> >> I think there it a typo in the docs here that prevents them from\n> >> building (this diff seems to fix it):\n>\n> Ah yes, thanks, I had noticed that one but forgot to send out a new version to\n> make the CFBot green.\n>\n> > After a bit more testing, the server is up and running with an nss\n> > database but before configuring the client database I tried connecting\n> > and got a segfault:\n>\n> Interesting. I'm unable to reproduce this crash, can you show the sequence of\n> commands which led to this?\n\nIt no longer happens with v49, since it was a null deref of the pr_fd\nwhich no longer happens.\n\nI'll continue testing now, so far it's looking better.\n\nDid the build issue with --with-llvm get fixed in this update also? I\nhaven't tried building with it yet.\n\n> > It looks like the ssl connection falls through to attempt a non-ssl\n> > connection but at some point conn->ssl_in_use gets set to true,\n> > despite pr_fd and nss_context being null.\n>\n> pgtls_close missed setting ssl_in_use to false, fixed in the attached. 
I've\n> also added some assertions to the connection setup for debugging this.\n>\n> > This patch fixes the segfault but I suspect is not the correct fix,\n> > due to the error when connecting saying \"Success\":\n>\n> Right, without an SSL enabled FD we should never get here.\n>\n\nThank you.\n\n\n", "msg_date": "Tue, 23 Nov 2021 17:39:12 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 23 Nov 2021, at 23:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n\n> It no longer happens with v49, since it was a null deref of the pr_fd\n> which no longer happens.\n> \n> I'll continue testing now, so far it's looking better.\n\nGreat, thanks for confirming. I'm still keen on knowing how you triggered the\nsegfault so I can ensure there are no further bugs around there.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 24 Nov 2021 12:59:56 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Nov 24, 2021 at 6:59 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 23 Nov 2021, at 23:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n>\n> > It no longer happens with v49, since it was a null deref of the pr_fd\n> > which no longer happens.\n> >\n> > I'll continue testing now, so far it's looking better.\n>\n> Great, thanks for confirming. 
I'm still keen on knowing how you triggered the\n> segfault so I can ensure there are no further bugs around there.\n>\n\nIt happened when I ran psql with hostssl on the server but before I'd\ninitialized my client certificate store.\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:46:35 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Nov 24, 2021 at 8:46 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 6:59 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 23 Nov 2021, at 23:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> >\n> > > It no longer happens with v49, since it was a null deref of the pr_fd\n> > > which no longer happens.\n> > >\n> > > I'll continue testing now, so far it's looking better.\n> >\n> > Great, thanks for confirming. I'm still keen on knowing how you triggered the\n> > segfault so I can ensure there are no further bugs around there.\n> >\n>\n> It happened when I ran psql with hostssl on the server but before I'd\n> initialized my client certificate store.\n\nI don't know enough about NSS to know if this is problematic or not\nbut if I try verify-full without having the root CA in the certificate\nstore I get:\n\n$ /usr/pgsql-15/bin/psql \"host=localhost sslmode=verify-full user=postgres\"\npsql: error: SSL error: Issuer certificate is invalid.\nunable to shut down NSS context: NSS could not shutdown. 
Objects are\nstill in use.\n\n\n", "msg_date": "Wed, 24 Nov 2021 08:49:10 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Nov 24, 2021 at 8:49 AM Joshua Brindle\n<joshua.brindle@crunchydata.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 8:46 AM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n> >\n> > On Wed, Nov 24, 2021 at 6:59 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 23 Nov 2021, at 23:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > >\n> > > > It no longer happens with v49, since it was a null deref of the pr_fd\n> > > > which no longer happens.\n> > > >\n> > > > I'll continue testing now, so far it's looking better.\n> > >\n> > > Great, thanks for confirming. I'm still keen on knowing how you triggered the\n> > > segfault so I can ensure there are no further bugs around there.\n> > >\n> >\n> > It happened when I ran psql with hostssl on the server but before I'd\n> > initialized my client certificate store.\n>\n> I don't know enough about NSS to know if this is problematic or not\n> but if I try verify-full without having the root CA in the certificate\n> store I get:\n>\n> $ /usr/pgsql-15/bin/psql \"host=localhost sslmode=verify-full user=postgres\"\n> psql: error: SSL error: Issuer certificate is invalid.\n> unable to shut down NSS context: NSS could not shutdown. Objects are\n> still in use.\n\nSomething is strange with ssl downgrading and a bad ssldatabase\n[postgres@11cdfa30f763 ~]$ /usr/pgsql-15/bin/psql \"ssldatabase=oops\nsslcert=client_cert host=localhost\"\nPassword for user postgres:\n\n<freezes here>\n\nOn the server side:\n2021-11-25 01:52:01.984 UTC [269] LOG: unable to handshake:\nEncountered end of file (PR_END_OF_FILE_ERROR)\n\nOther than that and I still haven't tested --with-llvm I've gotten\neverything working, including with an openssl client. 
Attached is a\ndockerfile that gets to the point where a client can connect with\nclientcert=verify-full. I've removed some of the old cruft and\ndebugging from the previous versions.\n\nThank you.", "msg_date": "Thu, 25 Nov 2021 08:39:29 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Mon, 2021-09-27 at 15:44 +0200, Daniel Gustafsson wrote:\r\n> > Speaking of IP addresses in SANs, it doesn't look like our OpenSSL\r\n> > backend can handle those. That's a separate conversation, but I might\r\n> > take a look at a patch for next commitfest.\r\n> \r\n> Please do.\r\n\r\nDidn't get around to it for November, but I'm putting the finishing\r\ntouches on that now.\r\n\r\nWhile I was looking at the new SAN code (in fe-secure-nss.c,\r\npgtls_verify_peer_name_matches_certificate_guts()), I noticed that code\r\ncoverage never seemed to touch a good chunk of it:\r\n\r\n> + for (cn = san_list; cn != san_list; cn = CERT_GetNextGeneralName(cn))\r\n> + {\r\n> + char *alt_name;\r\n> + int rv;\r\n> + char tmp[512];\r\n\r\nThat loop can never execute. But I wonder if all of that extra SAN code\r\nshould be removed anyway? 
There's this comment above it:\r\n\r\n> +\t/*\r\n> +\t * CERT_VerifyCertName will internally perform RFC 2818 SubjectAltName\r\n> +\t * verification.\r\n> +\t */\r\n\r\nand it seems like SAN verification is working in my testing, despite\r\nthe dead loop.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 30 Nov 2021 19:03:54 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 25 Nov 2021, at 14:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> On Wed, Nov 24, 2021 at 8:49 AM Joshua Brindle\n> <joshua.brindle@crunchydata.com> wrote:\n>> \n>> On Wed, Nov 24, 2021 at 8:46 AM Joshua Brindle\n>> <joshua.brindle@crunchydata.com> wrote:\n\n>> I don't know enough about NSS to know if this is problematic or not\n>> but if I try verify-full without having the root CA in the certificate\n>> store I get:\n>> \n>> $ /usr/pgsql-15/bin/psql \"host=localhost sslmode=verify-full user=postgres\"\n>> psql: error: SSL error: Issuer certificate is invalid.\n>> unable to shut down NSS context: NSS could not shutdown. Objects are\n>> still in use.\n\nFixed.\n\n> Something is strange with ssl downgrading and a bad ssldatabase\n> [postgres@11cdfa30f763 ~]$ /usr/pgsql-15/bin/psql \"ssldatabase=oops\n> sslcert=client_cert host=localhost\"\n> Password for user postgres:\n> \n> <freezes here>\n\nAlso fixed.\n\n> On the server side:\n> 2021-11-25 01:52:01.984 UTC [269] LOG: unable to handshake:\n> Encountered end of file (PR_END_OF_FILE_ERROR)\n\nThis is normal and expected, but to make it easier on users I've changed this\nerror message to be aligned with the OpenSSL implementation.\n\n> Other than that and I still haven't tested --with-llvm I've gotten\n> everything working, including with an openssl client. Attached is a\n> dockerfile that gets to the point where a client can connect with\n> clientcert=verify-full. 
I've removed some of the old cruft and\n> debugging from the previous versions.\n\nVery cool, thanks! I've been unable to reproduce any issues with llvm but I'll\nkeep poking at that. A new version will be posted shortly with the above and a\nfew more fixes.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 15 Dec 2021 23:05:43 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 30 Nov 2021, at 20:03, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2021-09-27 at 15:44 +0200, Daniel Gustafsson wrote:\n>>> Speaking of IP addresses in SANs, it doesn't look like our OpenSSL\n>>> backend can handle those. That's a separate conversation, but I might\n>>> take a look at a patch for next commitfest.\n>> \n>> Please do.\n> \n> Didn't get around to it for November, but I'm putting the finishing\n> touches on that now.\n\nCool, thanks!\n\n> While I was looking at the new SAN code (in fe-secure-nss.c,\n> pgtls_verify_peer_name_matches_certificate_guts()), I noticed that code\n> coverage never seemed to touch a good chunk of it:\n> \n>> + for (cn = san_list; cn != san_list; cn = CERT_GetNextGeneralName(cn))\n>> + {\n>> + char *alt_name;\n>> + int rv;\n>> + char tmp[512];\n> \n> That loop can never execute. But I wonder if all of that extra SAN code\n> should be removed anyway? There's this comment above it:\n> \n>> +\t/*\n>> +\t * CERT_VerifyCertName will internally perform RFC 2818 SubjectAltName\n>> +\t * verification.\n>> +\t */\n> \n> and it seems like SAN verification is working in my testing, despite\n> the dead loop.\n\nYeah, that's clearly bogus. I followed the bouncing ball reading NSS code and\nfrom what I can tell the comment is correct. 
I removed the dead code, only\nrealizing after the fact that I might cause conflict with your tree doing so,\nin that case sorry.\n\nI've attached a v50 which fixes the issues found by Joshua upthread, as well as\nrebases on top of all the recent SSL and pgcrypto changes.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 15 Dec 2021 23:10:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-12-15 at 23:10 +0100, Daniel Gustafsson wrote:\r\n> > On 30 Nov 2021, at 20:03, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > \r\n> > On Mon, 2021-09-27 at 15:44 +0200, Daniel Gustafsson wrote:\r\n> > > > Speaking of IP addresses in SANs, it doesn't look like our OpenSSL\r\n> > > > backend can handle those. That's a separate conversation, but I might\r\n> > > > take a look at a patch for next commitfest.\r\n> > > \r\n> > > Please do.\r\n> > \r\n> > Didn't get around to it for November, but I'm putting the finishing\r\n> > touches on that now.\r\n> \r\n> Cool, thanks!\r\n\r\nDone and registered in Commitfest.\r\n\r\n> Yeah, that's clearly bogus. I followed the bouncing ball reading NSS code and\r\n> from what I can tell the comment is correct. 
I removed the dead code, only\r\n> realizing after the fact that I might cause conflict with your tree doing so,\r\n> in that case sorry.\r\n\r\nNo worries, there weren't any issues with the rebase.\r\n\r\n> I've attached a v50 which fixes the issues found by Joshua upthread, as well as\r\n> rebases on top of all the recent SSL and pgcrypto changes.\r\n\r\nThanks!\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 16 Dec 2021 19:56:25 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, Dec 15, 2021 at 5:05 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 25 Nov 2021, at 14:39, Joshua Brindle <joshua.brindle@crunchydata.com> wrote:\n> > On Wed, Nov 24, 2021 at 8:49 AM Joshua Brindle\n> > <joshua.brindle@crunchydata.com> wrote:\n> >>\n> >> On Wed, Nov 24, 2021 at 8:46 AM Joshua Brindle\n> >> <joshua.brindle@crunchydata.com> wrote:\n>\n> >> I don't know enough about NSS to know if this is problematic or not\n> >> but if I try verify-full without having the root CA in the certificate\n> >> store I get:\n> >>\n> >> $ /usr/pgsql-15/bin/psql \"host=localhost sslmode=verify-full user=postgres\"\n> >> psql: error: SSL error: Issuer certificate is invalid.\n> >> unable to shut down NSS context: NSS could not shutdown. 
Objects are\n> >> still in use.\n>\n> Fixed.\n>\n> > Something is strange with ssl downgrading and a bad ssldatabase\n> > [postgres@11cdfa30f763 ~]$ /usr/pgsql-15/bin/psql \"ssldatabase=oops\n> > sslcert=client_cert host=localhost\"\n> > Password for user postgres:\n> >\n> > <freezes here>\n>\n> Also fixed.\n>\n> > On the server side:\n> > 2021-11-25 01:52:01.984 UTC [269] LOG: unable to handshake:\n> > Encountered end of file (PR_END_OF_FILE_ERROR)\n>\n> This is normal and expected, but to make it easier on users I've changed this\n> error message to be aligned with the OpenSSL implementation.\n>\n> > Other than that and I still haven't tested --with-llvm I've gotten\n> > everything working, including with an openssl client. Attached is a\n> > dockerfile that gets to the point where a client can connect with\n> > clientcert=verify-full. I've removed some of the old cruft and\n> > debugging from the previous versions.\n>\n> Very cool, thanks! I've been unable to reproduce any issues with llvm but I'll\n> keep poking at that. 
A new version will be posted shortly with the above and a\n> few more fixes.\n\nFor v50 this change was required for an llvm build to succeed on my\nFedora system:\n\ndiff --git a/configure b/configure\nindex 25388a75a2..62d554806a 100755\n--- a/configure\n+++ b/configure\n@@ -13276,6 +13276,7 @@ fi\n\n LDFLAGS=\"$LDFLAGS $NSS_LIBS $NSPR_LIBS\"\n CFLAGS=\"$CFLAGS $NSS_CFLAGS $NSPR_CFLAGS\"\n+ CPPFLAGS=\"$CPPFLAGS $NSS_CFLAGS $NSPR_CFLAGS\"\n\n\n $as_echo \"#define USE_NSS 1\" >>confdefs.h\n\nI'm not certain why configure didn't already have that, configure.ac\nappears to, but nonetheless it builds, all tests succeed, and a quick\ntire kicking looks good.\n\nThank you.\n\n\n", "msg_date": "Mon, 20 Dec 2021 12:27:11 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn Wed, Dec 15, 2021 at 11:10:14PM +0100, Daniel Gustafsson wrote:\n> \n> I've attached a v50 which fixes the issues found by Joshua upthread, as well as\n> rebases on top of all the recent SSL and pgcrypto changes.\n\nThe cfbot reports that the patchset doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3138.log\n=== Applying patches on top of PostgreSQL commit ID 74527c3e022d3ace648340b79a6ddec3419f6732 ===\n[...]\n=== applying patch ./v50-0010-nss-Build-infrastructure.patch\npatching file configure\npatching file configure.ac\nHunk #3 succeeded at 1566 (offset 1 line).\nHunk #4 succeeded at 2366 (offset 1 line).\nHunk #5 succeeded at 2379 (offset 1 line).\npatching file src/backend/libpq/Makefile\npatching file src/common/Makefile\npatching file src/include/pg_config.h.in\nHunk #3 succeeded at 926 (offset 3 lines).\npatching file src/interfaces/libpq/Makefile\npatching file src/tools/msvc/Install.pm\nHunk #1 FAILED at 440.\n1 out of 1 hunk FAILED -- saving rejects to file src/tools/msvc/Install.pm.rej\n\nCould you send a rebased version, possibly with an updated 
configure as\nreported by Joshua? In the meantime I will switch the entry to Waitinng on\nAuthor.\n\n\n", "msg_date": "Sat, 15 Jan 2022 12:42:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 15 Jan 2022, at 05:42, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Wed, Dec 15, 2021 at 11:10:14PM +0100, Daniel Gustafsson wrote:\n>> \n>> I've attached a v50 which fixes the issues found by Joshua upthread, as well as\n>> rebases on top of all the recent SSL and pgcrypto changes.\n> \n> The cfbot reports that the patchset doesn't apply anymore:\n\nFixed, as well as rebased and fixed up on top of the recent cryptohash error\nreporting functionality to support that on par with the OpenSSL backend.\n\n> ..possibly with an updated configure as reported by Joshua?\n\nI must've fat-fingered the \"git add -p\" for v50 as the fix was in configure.ac\nbut not configure. Fixed now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 17 Jan 2022 15:09:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 17, 2022 at 03:09:11PM +0100, Daniel Gustafsson wrote:\n> \n> I must've fat-fingered the \"git add -p\" for v50 as the fix was in configure.ac\n> but not configure. Fixed now.\n\nThanks! 
Apparently this version now fails on all OS, e.g.:\n\nhttps://cirrus-ci.com/task/4643868095283200\n[22:17:39.965] # Failed test 'certificate authorization succeeds with correct client cert in PEM format'\n[22:17:39.965] # at t/001_ssltests.pl line 456.\n[22:17:39.965] # got: '2'\n[22:17:39.965] # expected: '0'\n[22:17:39.965]\n[22:17:39.965] # Failed test 'certificate authorization succeeds with correct client cert in PEM format: no stderr'\n[22:17:39.965] # at t/001_ssltests.pl line 456.\n[22:17:39.965] # got: 'psql: error: connection to server at \"127.0.0.1\", port 50023 failed: certificate present, but not private key file \"/home/postgres/.postgresql/postgresql.key\"'\n[22:17:39.965] # expected: ''\n[22:17:39.965]\n[22:17:39.965] # Failed test 'certificate authorization succeeds with correct client cert in DER format'\n[22:17:39.965] # at t/001_ssltests.pl line 475.\n[22:17:39.965] # got: '2'\n[22:17:39.965] # expected: '0'\n[...]\n\n\n", "msg_date": "Tue, 18 Jan 2022 14:36:32 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 18 Jan 2022, at 07:36, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Mon, Jan 17, 2022 at 03:09:11PM +0100, Daniel Gustafsson wrote:\n>> \n>> I must've fat-fingered the \"git add -p\" for v50 as the fix was in configure.ac\n>> but not configure. Fixed now.\n> \n> Thanks! 
Apparently this version now fails on all OS, e.g.:\n\nFixed, I had made a mistake in the OpenSSL.pm testcode and failed to catch it\nin testing.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 18 Jan 2022 13:42:54 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Jan 18, 2022 at 7:43 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 18 Jan 2022, at 07:36, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > On Mon, Jan 17, 2022 at 03:09:11PM +0100, Daniel Gustafsson wrote:\n> >>\n> >> I must've fat-fingered the \"git add -p\" for v50 as the fix was in configure.ac\n> >> but not configure. Fixed now.\n> >\n> > Thanks! Apparently this version now fails on all OS, e.g.:\n>\n> Fixed, I had made a mistake in the OpenSSL.pm testcode and failed to catch it\n> in testing.\n\nLGTM +1\n\n\n", "msg_date": "Tue, 18 Jan 2022 11:24:49 -0500", "msg_from": "Joshua Brindle <joshua.brindle@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2021-12-15 at 23:10 +0100, Daniel Gustafsson wrote:\r\n> I've attached a v50 which fixes the issues found by Joshua upthread, as well as\r\n> rebases on top of all the recent SSL and pgcrypto changes.\r\n\r\nI'm currently tracking down a slot leak. When opening and closing large\r\nnumbers of NSS databases, at some point we appear to run out of slots\r\nand then NSS starts misbehaving, even though we've closed all of our\r\ncontext handles.\r\n\r\nI don't have anything more helpful to share yet, but I wanted to make a\r\nnote of it here in case anyone else had seen it or has ideas on what\r\nmay be causing it. 
My next move will be to update the version of NSS\r\nI'm running.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 18 Jan 2022 16:37:51 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 18 Jan 2022, at 17:37, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Wed, 2021-12-15 at 23:10 +0100, Daniel Gustafsson wrote:\n>> I've attached a v50 which fixes the issues found by Joshua upthread, as well as\n>> rebases on top of all the recent SSL and pgcrypto changes.\n> \n> I'm currently tracking down a slot leak. When opening and closing large\n> numbers of NSS databases, at some point we appear to run out of slots\n> and then NSS starts misbehaving, even though we've closed all of our\n> context handles.\n\nInteresting, are you able to share a reproducer for this so I can assist in\ndebugging it?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 19 Jan 2022 10:01:44 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2022-01-18 13:42:54 +0100, Daniel Gustafsson wrote:\n> Fixed, I had made a mistake in the OpenSSL.pm testcode and failed to catch it\n> in testing.\n\n\n> +task:\n> + name: Linux - Debian Bullseye (nss)\n> [ copy of a bunch of code ]\n\nI also needed similar-but-not-quite-equivalent tasks for the meson patch as\nwell. I just moved to splitting the tasks into a template and a use\nof it. 
It's probably not quite right as I did there, but it might be worth\nlooking into:\n\nhttps://github.com/anarazel/postgres/blob/meson/.cirrus.yml#L181\n\nBut maybe this case actually has a better solution, see two paragraphs down:\n\n\n> + install_script: |\n> + DEBIAN_FRONTEND=noninteractive apt-get --yes install libnss3 libnss3-dev libnss3-tools libnspr4 libnspr4-dev\n\nThis needs an apt-get update beforehand to succeed. That's what caused the last few runs\nto fail, see e.g.\nhttps://cirrus-ci.com/task/6293612580306944\n\n\nJust duplicating the task doesn't really scale once in tree. What about\nreconfiguring (note: add --enable-depend) the linux tasks to build against\nnss, and then run the relevant subset of tests with it? Most tests don't use\ntcp / SSL anyway, so rerunning a small subset of tests should be feasible?\n\n\n> From 297ee9ab31aa579e002edc335cce83dae19711b1 Mon Sep 17 00:00:00 2001\n> From: Daniel Gustafsson <daniel@yesql.se>\n> Date: Mon, 8 Feb 2021 23:52:22 +0100\n> Subject: [PATCH v52 01/11] nss: Support libnss as TLS library in libpq\n\n> 16 files changed, 3192 insertions(+), 7 deletions(-)\n\nPhew. This is a huge patch.\n\nDamn, I only opened this thread to report the CI failure. But now I ended up\ndoing a small review...\n\n\n\n> +#include \"common/nss.h\"\n> +\n> +/*\n> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n> + * colliding definitions from ours, causing a much expected compiler error.\n> + * Remove backwards compatibility with ancient NSPR versions to avoid this.\n> + */\n> +#define NO_NSPR_10_SUPPORT\n> +#include <nspr.h>\n> +#include <prerror.h>\n> +#include <prio.h>\n> +#include <prmem.h>\n> +#include <prtypes.h>\n\nDuplicated with nss.h. 
Which brings me to:\n\n\n> +#include <nss.h>\n\nIs it a great idea to have common/nss.h when there's a library header nss.h?\nPerhaps we should have a pg_ssl_{nss,openssl}.h or such?\n\n\n> +/* ------------------------------------------------------------ */\n> +/*\t\t\t\t\t\t Public interface\t\t\t\t\t\t*/\n> +/* ------------------------------------------------------------ */\n\nNitpicks:\nI don't think we typically do multiple /* */ comments in a row for this type\nof thing. I also don't particularly like centering things like this, tends to\nget inconsistent across comments.\n\n\n> +/*\n> + * be_tls_open_server\n> + *\n> + * Since NSPR initialization must happen after forking, most of the actual\n> + * setup of NSPR/NSS is done here rather than in be_tls_init.\n\nThe \"Since ... must happen after forking\" sounds like it's referencing a\npreviously remarked upon fact. But I don't see anything but a copy of this\ncomment.\n\nDoes this make some things notably more expensive? Presumably it does remove a\nbunch of COW opportunities, but likely that's not a huge factor compared to\nasymmetric crypto negotiation...\n\nMaybe some of this commentary should migrate to the file header or such?\n\n\n> This introduce\n> + * differences with the OpenSSL support where some errors are only reported\n> + * at runtime with NSS where they are reported at startup with OpenSSL.\n\nFound this sentence hard to parse somehow.\n\nIt seems pretty unfriendly to only have minimal error checking at postmaster\nstartup time. 
Seems at least the presence and usability of keys should be done\n*also* at that time?\n\n\n> +\t/*\n> +\t * If no ciphers are specified, enable them all.\n> +\t */\n> +\tif (!SSLCipherSuites || strlen(SSLCipherSuites) == 0)\n> +\t{\n> +\t\tstatus = NSS_SetDomesticPolicy();\n> +\t\tif (status != SECSuccess)\n> +\t\t{\n> +\t\t\tereport(COMMERROR,\n> +\t\t\t\t\t(errmsg(\"unable to set cipher policy: %s\",\n> +\t\t\t\t\t\t\tpg_SSLerrmessage(PR_GetError()))));\n> +\t\t\treturn -1;\n> +\t\t}\n> +\t}\n> +\telse\n> +\t{\n> +\t\tchar\t *ciphers,\n> +\t\t\t\t *c;\n> +\n> +\t\tchar\t *sep = \":;, \";\n> +\t\tPRUint16\tciphercode;\n> +\t\tconst\t\tPRUint16 *nss_ciphers;\n> +\t\tbool\t\tfound = false;\n> +\n> +\t\t/*\n> +\t\t * If the user has specified a set of preferred cipher suites we start\n> +\t\t * by turning off all the existing suites to avoid the risk of down-\n> +\t\t * grades to a weaker cipher than expected.\n> +\t\t */\n> +\t\tnss_ciphers = SSL_GetImplementedCiphers();\n> +\t\tfor (int i = 0; i < SSL_GetNumImplementedCiphers(); i++)\n> +\t\t\tSSL_CipherPrefSet(model, nss_ciphers[i], PR_FALSE);\n> +\n> +\t\tciphers = pstrdup(SSLCipherSuites);\n> +\n> +\t\tfor (c = strtok(ciphers, sep); c; c = strtok(NULL, sep))\n> +\t\t{\n> +\t\t\tif (pg_find_cipher(c, &ciphercode))\n> +\t\t\t{\n> +\t\t\t\tstatus = SSL_CipherPrefSet(model, ciphercode, PR_TRUE);\n> +\t\t\t\tfound = true;\n> +\t\t\t\tif (status != SECSuccess)\n> +\t\t\t\t{\n> +\t\t\t\t\tereport(COMMERROR,\n> +\t\t\t\t\t\t\t(errmsg(\"invalid cipher-suite specified: %s\", c)));\n> +\t\t\t\t\treturn -1;\n\nIt likely doesn't matter much because the backend will exit, but because\nCOMMERROR doesn't throw, it seems like this will leak \"ciphers\"?\n\n\n> +\t\t\t\t}\n> +\t\t\t}\n> +\t\t}\n> +\n> +\t\tpfree(ciphers);\n> +\n> +\t\tif (!found)\n> +\t\t{\n> +\t\t\tereport(COMMERROR,\n> +\t\t\t\t\t(errmsg(\"no cipher-suites found\")));\n> +\t\t\treturn -1;\n> +\t\t}\n> +\t}\n\nSeems like this could reasonably be done in a separate 
function?\n\n\n\n> +\tserver_cert = PK11_FindCertFromNickname(ssl_cert_file, (void *) port);\n> +\tif (!server_cert)\n> +\t{\n> +\t\tif (dummy_ssl_passwd_cb_called)\n> +\t\t{\n> +\t\t\tereport(COMMERROR,\n> +\t\t\t\t\t(errmsg(\"unable to load certificate for \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\tssl_cert_file, pg_SSLerrmessage(PR_GetError())),\n> +\t\t\t\t\t errhint(\"The certificate requires a password.\")));\n> +\t\t\treturn -1;\n> +\t\t}\n\nI assume PR_GetError() is some thread-local construct, given it's also used in\nlibpq? Why, oh why, do people copy the abysmal \"global errno\" approach\neverywhere.\n\n\n> +ssize_t\n> +be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)\n> +{\n\nI'm not a fan of duplicating the symbol names between be-secure-openssl.c and\nthis. For one it's annoying for source code navigation. It also seems that at\nsome point we might want to be able to link against both at the same time?\nMaybe we should name them unambiguously and then use some indirection in a\nheader somewhere?\n\n\n> +\tssize_t\t\tn_read;\n> +\tPRErrorCode err;\n> +\n> +\tn_read = PR_Read(port->pr_fd, ptr, len);\n> +\n> +\tif (n_read < 0)\n> +\t{\n> +\t\terr = PR_GetError();\n> +\n> +\t\tif (err == PR_WOULD_BLOCK_ERROR)\n> +\t\t{\n> +\t\t\t*waitfor = WL_SOCKET_READABLE;\n> +\t\t\terrno = EWOULDBLOCK;\n> +\t\t}\n> +\t\telse\n> +\t\t\terrno = ECONNRESET;\n> +\t}\n> +\n> +\treturn n_read;\n> +}\n> +\n> +ssize_t\n> +be_tls_write(Port *port, void *ptr, size_t len, int *waitfor)\n> +{\n> +\tssize_t\t\tn_write;\n> +\tPRErrorCode err;\n> +\tPRIntn\t\tflags = 0;\n> +\n> +\t/*\n> +\t * The flags parameter to PR_Send is no longer used and is, according to\n> +\t * the documentation, required to be zero.\n> +\t */\n> +\tn_write = PR_Send(port->pr_fd, ptr, len, flags, PR_INTERVAL_NO_WAIT);\n> +\n> +\tif (n_write < 0)\n> +\t{\n> +\t\terr = PR_GetError();\n> +\n> +\t\tif (err == PR_WOULD_BLOCK_ERROR)\n> +\t\t{\n> +\t\t\t*waitfor = WL_SOCKET_WRITEABLE;\n> +\t\t\terrno = 
EWOULDBLOCK;\n> +\t\t}\n> +\t\telse\n> +\t\t\terrno = ECONNRESET;\n> +\t}\n> +\n> +\treturn n_write;\n> +}\n> +\n> +/*\n> + * be_tls_close\n> + *\n> + * Callback for closing down the current connection, if any.\n> + */\n> +void\n> +be_tls_close(Port *port)\n> +{\n> +\tif (!port)\n> +\t\treturn;\n> +\t/*\n> +\t * Immediately signal to the rest of the backend that this connnection is\n> +\t * no longer to be considered to be using TLS encryption.\n> +\t */\n> +\tport->ssl_in_use = false;\n> +\n> +\tif (port->peer_cn)\n> +\t{\n> +\t\tSSL_InvalidateSession(port->pr_fd);\n> +\t\tpfree(port->peer_cn);\n> +\t\tport->peer_cn = NULL;\n> +\t}\n> +\n> +\tPR_Close(port->pr_fd);\n> +\tport->pr_fd = NULL;\n\nWhat if we failed before initializing pr_fd?\n\n\n> +\t/*\n> +\t * Since there is no password callback in NSS when the server starts up,\n> +\t * it makes little sense to create an interactive callback. Thus, if this\n> +\t * is a retry attempt then give up immediately.\n> +\t */\n> +\tif (retry)\n> +\t\treturn NULL;\n\nThat's really not great. Can't we do something like initialize NSS in\npostmaster, load the key into memory, including prompting, and then shut nss\ndown again?\n\n\n\n> +/*\n> + * raw_subject_common_name\n> + *\n> + * Returns the Subject Common Name for the given certificate as a raw char\n> + * buffer (that is, without any form of escaping for unprintable characters or\n> + * embedded nulls), with the length of the buffer returned in the len param.\n> + * The buffer is allocated in the TopMemoryContext and is given a NULL\n> + * terminator so that callers are safe to call strlen() on it.\n> + *\n> + * This is used instead of CERT_GetCommonName(), which always performs quoting\n> + * and/or escaping. 
NSS doesn't appear to give us a way to easily unescape the\n> + * result, and we need to store the raw CN into port->peer_cn for compatibility\n> + * with the OpenSSL implementation.\n> + */\n\nDo we have a testcase for embedded NULLs in common names?\n\n\n> +static char *\n> +raw_subject_common_name(CERTCertificate *cert, unsigned int *len)\n> +{\n> +\tCERTName\tsubject = cert->subject;\n> +\tCERTRDN\t **rdn;\n> +\n> +\tfor (rdn = subject.rdns; *rdn; rdn++)\n> +\t{\n> +\t\tCERTAVA\t **ava;\n> +\n> +\t\tfor (ava = (*rdn)->avas; *ava; ava++)\n> +\t\t{\n> +\t\t\tSECItem\t *buf;\n> +\t\t\tchar\t *cn;\n> +\n> +\t\t\tif (CERT_GetAVATag(*ava) != SEC_OID_AVA_COMMON_NAME)\n> +\t\t\t\tcontinue;\n> +\n> +\t\t\t/* Found a CN, decode and copy it into a newly allocated buffer */\n> +\t\t\tbuf = CERT_DecodeAVAValue(&(*ava)->value);\n> +\t\t\tif (!buf)\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * This failure case is difficult to test. (Since this code\n> +\t\t\t\t * runs after certificate authentication has otherwise\n> +\t\t\t\t * succeeded, you'd need to convince a CA implementation to\n> +\t\t\t\t * sign a corrupted certificate in order to get here.)\n\nWhy is that hard with a toy CA locally? 
Might not be worth the effort, but if\nthe comment explicitly talks about it being hard...\n\n\n\n> +\t\t\t\t * Follow the behavior of CERT_GetCommonName() in this case and\n> +\t\t\t\t * simply return NULL, as if a Common Name had not been found.\n> +\t\t\t\t */\n> +\t\t\t\tgoto fail;\n> +\t\t\t}\n> +\n> +\t\t\tcn = MemoryContextAlloc(TopMemoryContext, buf->len + 1);\n> +\t\t\tmemcpy(cn, buf->data, buf->len);\n> +\t\t\tcn[buf->len] = '\\0';\n> +\n> +\t\t\t*len = buf->len;\n> +\n> +\t\t\tSECITEM_FreeItem(buf, PR_TRUE);\n> +\t\t\treturn cn;\n> +\t\t}\n> +\t}\n> +\n> +fail:\n> +\t/* Not found */\n> +\t*len = 0;\n> +\treturn NULL;\n> +}\n>\n\n> +/*\n> + * pg_SSLShutdownFunc\n> + *\t\tCallback for NSS shutdown\n> + *\n> + * If NSS is terminated from the outside when the connection is still in use\n\nWhat does \"NSS is terminated from the outside when the connection\" really\nmean? Does this mean the client initiating something?\n\n\n> + * we must treat this as potentially hostile and immediately close to avoid\n> + * leaking the connection in any way. Once this is called, NSS will shutdown\n> + * regardless so we may as well clean up the best we can. Returning SECFailure\n> + * will cause the NSS shutdown to return with an error, but it will shutdown\n> + * nevertheless. 
nss_data is reserved for future use and is always NULL.\n> + */\n> +static SECStatus\n> +pg_SSLShutdownFunc(void *private_data, void *nss_data)\n> +{\n> +\tPort *port = (Port *) private_data;\n> +\n> +\tif (!port || !port->ssl_in_use)\n> +\t\treturn SECSuccess;\n\nHow can that happen?\n\n\n> +\t/*\n> +\t * There is a connection still open, close it and signal to whatever that\n> +\t * called the shutdown that it was erroneous.\n> +\t */\n> +\tbe_tls_close(port);\n> +\tbe_tls_destroy();\n\nAnd this doesn't have anything dangerous around those functions getting called\nagain later?\n\n\n\n> +void\n> +pgtls_close(PGconn *conn)\n> +{\n> +\tconn->ssl_in_use = false;\n> +\tconn->has_password = false;\n> +\n> +\t/*\n> +\t * If the system trust module has been loaded we must try to unload it\n> +\t * before closing the context, since it will otherwise fail reporting a\n> +\t * SEC_ERROR_BUSY error.\n> +\t */\n> +\tif (ca_trust != NULL)\n> +\t{\n> +\t\tif (SECMOD_UnloadUserModule(ca_trust) != SECSuccess)\n> +\t\t{\n> +\t\t\tpqInternalNotice(&conn->noticeHooks,\n> +\t\t\t\t\t\t\t \"unable to unload trust module\");\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tSECMOD_DestroyModule(ca_trust);\n> +\t\t\tca_trust = NULL;\n> +\t\t}\n> +\t}\n\nMight just misunderstand: How can it be ok to destroy ca_trust here? What if\nthere's other connections using it? The same thread might be using multiple\nconnections, and multiple threads might be using connections. 
Seems very much\nnot thread safe.\n\n\n> +PostgresPollingStatusType\n> +pgtls_open_client(PGconn *conn)\n> +{\n> +\tSECStatus\tstatus;\n> +\tPRFileDesc *model;\n> +\tNSSInitParameters params;\n> +\tSSLVersionRange desired_range;\n> +\n> +#ifdef ENABLE_THREAD_SAFETY\n> +#ifdef WIN32\n> +\t/* This locking is modelled after fe-secure-openssl.c */\n> +\tif (ssl_config_mutex == NULL)\n> +\t{\n> +\t\twhile (InterlockedExchange(&win32_ssl_create_mutex, 1) == 1)\n> +\t\t\t/* loop while another thread owns the lock */ ;\n> +\t\tif (ssl_config_mutex == NULL)\n> +\t\t{\n> +\t\t\tif (pthread_mutex_init(&ssl_config_mutex, NULL))\n> +\t\t\t{\n> +\t\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t\t libpq_gettext(\"unable to lock thread\"));\n> +\t\t\t\treturn PGRES_POLLING_FAILED;\n> +\t\t\t}\n> +\t\t}\n> +\t\tInterlockedExchange(&win32_ssl_create_mutex, 0);\n> +\t}\n> +#endif\n> +\tif (pthread_mutex_lock(&ssl_config_mutex))\n> +\t{\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"unable to lock thread\"));\n> +\t\treturn PGRES_POLLING_FAILED;\n> +\t}\n> +#endif\t\t\t\t\t\t\t/* ENABLE_THREAD_SAFETY */\n\nI'd very much like to avoid duplicating this code. Can we put it somewhere\ncombined instead?\n\n\n> +\t/*\n> +\t * The NSPR documentation states that runtime initialization via PR_Init\n> +\t * is no longer required, as the first caller into NSPR will perform the\n> +\t * initialization implicitly. 
See be-secure-nss.c for further discussion\n> +\t * on PR_Init.\n> +\t */\n> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);\n\nWhy does this, and several subsequent bits, have to happen under a lock?\n\n\n> +\tif (conn->ssl_max_protocol_version && strlen(conn->ssl_max_protocol_version) > 0)\n> +\t{\n> +\t\tint\t\t\tssl_max_ver = ssl_protocol_param_to_nss(conn->ssl_max_protocol_version);\n> +\n> +\t\tif (ssl_max_ver == -1)\n> +\t\t{\n> +\t\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t\t libpq_gettext(\"invalid value \\\"%s\\\" for maximum version of SSL protocol\\n\"),\n> +\t\t\t\t\t\t\t conn->ssl_max_protocol_version);\n> +\t\t\treturn -1;\n> +\t\t}\n> +\n> +\t\tdesired_range.max = ssl_max_ver;\n> +\t}\n> +\n> +\tif (SSL_VersionRangeSet(model, &desired_range) != SECSuccess)\n> +\t{\n> +\t\tprintfPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"unable to set allowed SSL protocol version range: %s\"),\n> +\t\t\t\t\t\t pg_SSLerrmessage(PR_GetError()));\n> +\t\treturn PGRES_POLLING_FAILED;\n> +\t}\n\nWhy are some parts returning -1 and some PGRES_POLLING_FAILED? -1 certainly\nisn't a member of PostgresPollingStatusType.\n\n\n> +\t\t\t\t/*\n> +\t\t\t\t * The error cases for PR_Recv are not documented, but can be\n> +\t\t\t\t * reverse engineered from _MD_unix_map_default_error() in the\n> +\t\t\t\t * NSPR code, defined in pr/src/md/unix/unix_errors.c.\n> +\t\t\t\t */\n\nCan we propose a patch to document them? 
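Until such documentation exists, one defensive option is to centralize the guessed mapping in a single helper, so there is exactly one place to adjust if NSPR's behaviour turns out to differ; roughly (an illustrative sketch of the shape, not NSPR's actual table):

```c
#include <assert.h>
#include <errno.h>

enum pg_io_error
{
    PGIO_WOULD_BLOCK,
    PGIO_INTR,
    PGIO_CONN_RESET,
    PGIO_OTHER
};

/*
 * Classify an OS errno from a socket call into the few cases the caller
 * cares about.  Purely illustrative; not taken from NSPR.
 */
enum pg_io_error
classify_io_errno(int e)
{
    switch (e)
    {
        case EAGAIN:
#if defined(EWOULDBLOCK) && EWOULDBLOCK != EAGAIN
        case EWOULDBLOCK:
#endif
            return PGIO_WOULD_BLOCK;
        case EINTR:
            return PGIO_INTR;
        case ECONNRESET:
        case EPIPE:
            return PGIO_CONN_RESET;
        default:
            return PGIO_OTHER;
    }
}
```

Keeping the mapping in one clearly labelled spot at least makes a future NSPR behaviour change a one-line fix instead of a scavenger hunt.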
Don't want to get bitten by this\nsuddenly changing...\n\n\n\n\n> From a12769bd793a8e073125c3b3a176b355335646bc Mon Sep 17 00:00:00 2001\n> From: Daniel Gustafsson <daniel@yesql.se>\n> Date: Mon, 8 Feb 2021 23:52:45 +0100\n> Subject: [PATCH v52 07/11] nss: Support NSS in pgcrypto\n>\n> This extends pgcrypto to be able to use libnss as a cryptographic\n> backend for pgcrypto much like how OpenSSL is a supported backend.\n> Blowfish is not a supported cipher in NSS, so the implementation\n> falls back on the built-in BF code to be compatible in terms of\n> cipher support.\n\nI wish we didn't have pgcrypto in its current form.\n\n\n\n> From 5079ce8a677074b93ef1f118d535c6dee4ce64f9 Mon Sep 17 00:00:00 2001\n> From: Daniel Gustafsson <daniel@yesql.se>\n> Date: Mon, 8 Feb 2021 23:52:55 +0100\n> Subject: [PATCH v52 10/11] nss: Build infrastructure\n> \n> Finally this adds the infrastructure to build a postgres installation\n> with libnss support.\n\nI would suggest trying to come up with a way to reorder / split the series so\nthat smaller pieces are committable. The way you have this right now leaves\nyou with applying all of it at once as the only realistic way. And this\npatchset is too large for that.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 23 Jan 2022 13:20:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2022-01-19 at 10:01 +0100, Daniel Gustafsson wrote:\r\n> > On 18 Jan 2022, at 17:37, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > \r\n> > On Wed, 2021-12-15 at 23:10 +0100, Daniel Gustafsson wrote:\r\n> > > I've attached a v50 which fixes the issues found by Joshua upthread, as well as\r\n> > > rebases on top of all the recent SSL and pgcrypto changes.\r\n> > \r\n> > I'm currently tracking down a slot leak. 
When opening and closing large\r\n> > numbers of NSS databases, at some point we appear to run out of slots\r\n> > and then NSS starts misbehaving, even though we've closed all of our\r\n> > context handles.\r\n> \r\n> Interesting, are you able to share a reproducer for this so I can assist in\r\n> debugging it?\r\n\r\n(This was in my spam folder, sorry for the delay...) Let me see if I\r\ncan minimize my current reproduction case and get it ported out of\r\nPython.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 25 Jan 2022 22:26:31 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, 2022-01-25 at 22:26 +0000, Jacob Champion wrote:\r\n> On Wed, 2022-01-19 at 10:01 +0100, Daniel Gustafsson wrote:\r\n> > > On 18 Jan 2022, at 17:37, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > > \r\n> > > On Wed, 2021-12-15 at 23:10 +0100, Daniel Gustafsson wrote:\r\n> > > > I've attached a v50 which fixes the issues found by Joshua upthread, as well as\r\n> > > > rebases on top of all the recent SSL and pgcrypto changes.\r\n> > > \r\n> > > I'm currently tracking down a slot leak. When opening and closing large\r\n> > > numbers of NSS databases, at some point we appear to run out of slots\r\n> > > and then NSS starts misbehaving, even though we've closed all of our\r\n> > > context handles.\r\n> > \r\n> > Interesting, are you able to share a reproducer for this so I can assist in\r\n> > debugging it?\r\n> \r\n> (This was in my spam folder, sorry for the delay...) Let me see if I\r\n> can minimize my current reproduction case and get it ported out of\r\n> Python.\r\n\r\nHere's my attempt at a Bash port. 
It has races but reliably reproduces\r\non my machine after 98 connections (there's a hardcoded slot limit of\r\n100, so that makes sense when factoring in the internal NSS slots).\r\n\r\n--Jacob", "msg_date": "Wed, 26 Jan 2022 18:25:37 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 23 Jan 2022, at 22:20, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-18 13:42:54 +0100, Daniel Gustafsson wrote:\n\nThanks heaps for the review, much appreciated!\n\n>> + install_script: |\n>> + DEBIAN_FRONTEND=noninteractive apt-get --yes install libnss3 libnss3-dev libnss3-tools libnspr4 libnspr4-dev\n> \n> This needs an apt-get update beforehand to succeed. That's what caused the last few runs\n> to fail, see e.g.\n> https://cirrus-ci.com/task/6293612580306944\n\nAh, good point. Adding that made it indeed work.\n\n> Just duplicating the task doesn't really scale once in tree.\n\nTotally agree. This was mostly a hack to see if I could make the CFBot build a\ntailored build, then life threw school closures etc at me and I sort of forgot\nabout removing it again.\n\n> What about\n> reconfiguring (note: add --enable-depend) the linux tasks to build against\n> nss, and then run the relevant subset of tests with it? Most tests don't use\n> tcp / SSL anyway, so rerunning a small subset of tests should be feasible?\n\nThat's an interesting idea, I think that could work and be reasonably readable\nat the same time (and won't require in-depth knowledge of Cirrus). As it's the\nsame task it does spend more time towards the max runtime per task, but that's\nnot a problem for now. 
It's worth keeping in mind though if we deem this to be\na way forward with testing multiple settings.\n\n>> From 297ee9ab31aa579e002edc335cce83dae19711b1 Mon Sep 17 00:00:00 2001\n>> From: Daniel Gustafsson <daniel@yesql.se>\n>> Date: Mon, 8 Feb 2021 23:52:22 +0100\n>> Subject: [PATCH v52 01/11] nss: Support libnss as TLS library in libpq\n> \n>> 16 files changed, 3192 insertions(+), 7 deletions(-)\n> \n> Phew. This is a huge patch.\n\nYeah =/ .. without going beyond and inventing new things on top of what is needed\nto replace OpenSSL, a lot of code (and tests) has to be written. If nothing\nelse, this work at least highlights just how much we've come to use OpenSSL.\n\n> Damn, I only opened this thread to report the CI failure. But now I ended up\n> doing a small review...\n\nThanks! Next time we meet, I owe you a beverage of choice.\n\n>> +#include \"common/nss.h\"\n>> +\n>> +/*\n>> + * The nspr/obsolete/protypes.h NSPR header typedefs uint64 and int64 with\n>> + * colliding definitions from ours, causing a much expected compiler error.\n>> + * Remove backwards compatibility with ancient NSPR versions to avoid this.\n>> + */\n>> +#define NO_NSPR_10_SUPPORT\n>> +#include <nspr.h>\n>> +#include <prerror.h>\n>> +#include <prio.h>\n>> +#include <prmem.h>\n>> +#include <prtypes.h>\n> \n> Duplicated with nss.h. Which brings me to:\n\nFixed, there and elsewhere.\n\n>> +#include <nss.h>\n> \n> Is it a great idea to have common/nss.h when there's a library header nss.h?\n> Perhaps we should have a pg_ssl_{nss,openssl}.h or such?\n\nThat's a good point, I modelled it after common/openssl.h but I agree it's\nbetter to differentiate the filenames. 
I've renamed it to common/pg_nss.h and\nwe should IMO rename common/openssl.h regardless of what happens to this patch.\n\n>> +/* ------------------------------------------------------------ */\n>> +/*\t\t\t\t\t\t Public interface\t\t\t\t\t\t*/\n>> +/* ------------------------------------------------------------ */\n> \n> Nitpicks:\n> I don't think we typically do multiple /* */ comments in a row for this type\n> of thing. I also don't particularly like centering things like this, tends to\n> get inconsistent across comments.\n\nThis is just a copy/paste from be-secure-openssl.c, but I'm far from married to\nit so happy to remove. Fixed.\n\n>> +/*\n>> + * be_tls_open_server\n>> + *\n>> + * Since NSPR initialization must happen after forking, most of the actual\n>> + * setup of NSPR/NSS is done here rather than in be_tls_init.\n> \n> The \"Since ... must happen after forking\" sounds like it's referencing a\n> previously remarked upon fact. But I don't see anything but a copy of this\n> comment.\n\nNSS contexts aren't fork safe; IIRC it's around its use of file descriptors.\nFairly old NSS documentation and mailing list posts cite hardware tokens (which\nwas a very strong focus in the earlier days of NSS) not being safe to use across\nforks and thus none of NSS was ever intended to be initialized until after the\nfork. I've reworded this comment a bit to make that clearer.\n\n> Does this make some things notably more expensive? Presumably it does remove a\n> bunch of COW opportunities, but likely that's not a huge factor compared to\n> asymmetric crypto negotiation...\n\nRight, in the context of setting up crypto across a network connection it's highly\nlikely to drown out the costs.\n\n> Maybe some of this commentary should migrate to the file header or such?\n\nMaybe, or perhaps README.ssl? 
Not sure where it would be most reasonable to\nkeep it such that it's also kept up to date.\n\n>> This introduce\n>> + * differences with the OpenSSL support where some errors are only reported\n>> + * at runtime with NSS where they are reported at startup with OpenSSL.\n> \n> Found this sentence hard to parse somehow.\n> \n> It seems pretty unfriendly to only have minimal error checking at postmaster\n> startup time. Seems at least the presence and usability of keys should be done\n> *also* at that time?\n\nI'll look at adding some setup, and subsequent teardown, of NSS at startup\nduring which we could do checking to be more on par with how the OpenSSL\nbackend will report errors.\n\n>> +\t/*\n>> +\t * If no ciphers are specified, enable them all.\n>> +\t */\n>> +\tif (!SSLCipherSuites || strlen(SSLCipherSuites) == 0)\n>> +\t{\n>> +\t\t...\n>> +\t\t\t\tif (status != SECSuccess)\n>> +\t\t\t\t{\n>> +\t\t\t\t\tereport(COMMERROR,\n>> +\t\t\t\t\t\t\t(errmsg(\"invalid cipher-suite specified: %s\", c)));\n>> +\t\t\t\t\treturn -1;\n> \n> It likely doesn't matter much because the backend will exit, but because\n> COMERROR doesn't throw, it seems like this will leak \"ciphers\"?\n\nAgreed, it won't matter much in practice but we should clearly pfree it, fixed.\n\n>> +\t\tpfree(ciphers);\n>> +\n>> +\t\tif (!found)\n>> +\t\t{\n>> +\t\t\tereport(COMMERROR,\n>> +\t\t\t\t\t(errmsg(\"no cipher-suites found\")));\n>> +\t\t\treturn -1;\n>> +\t\t}\n>> +\t}\n> \n> Seems like this could reasonably done in a separate function?\n\nAgreed, trimming the length of an already very long function is a good idea.\nFixed.\n\n> I assume PR_GetError() is some thread-local construct, given it's also used in\n> libpq?\n\nCorrect.\n\n> Why, oh why, do people copy the abysmal \"global errno\" approach everywhere.\n\nEven better, NSPR has two of them: PR_GetError and PR_GetOSError (the latter\nisn't used in this implementation, but it could potentially be added to error\npaths on NSS_InitContext and 
other calls that read off the filesystem).\n\n>> +ssize_t\n>> +be_tls_read(Port *port, void *ptr, size_t len, int *waitfor)\n>> +{\n> \n> I'm not a fan of duplicating the symbol names between be-secure-openssl.c and\n> this. For one it's annoying for source code naviation. It also seems that at\n> some point we might want to be able to link against both at the same time?\n> Maybe we should name them unambiguously and then use some indirection in a\n> header somewhere?\n\nWe could do that, and that's something that we could do independently of this\npatch to keep the scope down. Doing it in master now with just the OpenSSL\nimplementation as a consumer would be a logical next step in the TLS library\nabstraction we've done.\n\n>> +\tPR_Close(port->pr_fd);\n>> +\tport->pr_fd = NULL;\n> \n> What if we failed before initializing pr_fd?\n\nFixed.\n\n>> +\t/*\n>> +\t * Since there is no password callback in NSS when the server starts up,\n>> +\t * it makes little sense to create an interactive callback. Thus, if this\n>> +\t * is a retry attempt then give up immediately.\n>> +\t */\n>> +\tif (retry)\n>> +\t\treturn NULL;\n> \n> That's really not great. Can't we do something like initialize NSS in\n> postmaster, load the key into memory, including prompting, and then shut nss\n> down again?\n\nI can look at doing something along those lines. 
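One possible shape for that — sketched here with hypothetical stand-in functions, not the actual NSS or libpq API — is to obtain the passphrase once at postmaster startup, cache it, and have the callback replay the cached copy so nothing interactive ever happens after forking:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cache, filled once at startup where prompting is still ok. */
static char *cached_passphrase = NULL;

void
remember_passphrase(const char *entered)
{
    free(cached_passphrase);
    cached_passphrase = malloc(strlen(entered) + 1);
    if (cached_passphrase)
        strcpy(cached_passphrase, entered);
}

/*
 * Password-callback shape: hand back a fresh copy of the cached secret.
 * On retry the cached secret was evidently wrong, so give up instead of
 * looping on a prompt that no longer exists.
 */
char *
replay_passphrase(int retry)
{
    char       *copy;

    if (retry || cached_passphrase == NULL)
        return NULL;

    copy = malloc(strlen(cached_passphrase) + 1);
    if (copy)
        strcpy(copy, cached_passphrase);
    return copy;
}
```

A production version would of course want to keep the cached secret out of swappable/inheritable memory, which is part of the infrastructure cost mentioned here.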
It does require setting up a\nfair bit of infrastructure but if the code is refactored to allow reuse it can\nprobably be done fairly readably.\n\n>> +/*\n>> + * raw_subject_common_name\n>> + *\n>> + * Returns the Subject Common Name for the given certificate as a raw char\n>> + * buffer (that is, without any form of escaping for unprintable characters or\n>> + * embedded nulls), with the length of the buffer returned in the len param.\n>> + * The buffer is allocated in the TopMemoryContext and is given a NULL\n>> + * terminator so that callers are safe to call strlen() on it.\n>> + *\n>> + * This is used instead of CERT_GetCommonName(), which always performs quoting\n>> + * and/or escaping. NSS doesn't appear to give us a way to easily unescape the\n>> + * result, and we need to store the raw CN into port->peer_cn for compatibility\n>> + * with the OpenSSL implementation.\n>> + */\n> \n> Do we have a testcase for embedded NULLs in common names?\n\nWe don't, neither for OpenSSL nor NSS. AFAICR Jacob spent days trying to get a\ncertificate generation to include an embedded NULL byte but in the end gave up.\nWe would have to write our own tools for generating certificates to add that\n(which may or may not be a bad idea, but it hasn't been done).\n\n>> +\t\t\t/* Found a CN, decode and copy it into a newly allocated buffer */\n>> +\t\t\tbuf = CERT_DecodeAVAValue(&(*ava)->value);\n>> +\t\t\tif (!buf)\n>> +\t\t\t{\n>> +\t\t\t\t/*\n>> +\t\t\t\t * This failure case is difficult to test. (Since this code\n>> +\t\t\t\t * runs after certificate authentication has otherwise\n>> +\t\t\t\t * succeeded, you'd need to convince a CA implementation to\n>> +\t\t\t\t * sign a corrupted certificate in order to get here.)\n> \n> Why is that hard with a toy CA locally? Might not be worth the effort, but if\n> the comment explicitly talks about it being hard...\n\nThe gist of this comment is that it's hard to do with a stock local CA. 
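Short of building that tooling, the contract at stake here — a length-tracked buffer rather than a plain NUL-terminated string — can at least be pinned down without a certificate in sight. A hypothetical comparison helper shows why strlen() alone would be unsafe for a CN with an embedded NUL:

```c
#include <assert.h>
#include <string.h>

/*
 * Compare a length-tracked peer CN against an expected host name,
 * rejecting names containing embedded NUL bytes.  Illustrative only,
 * not the actual libpq/backend verification code.
 */
int
cn_matches(const char *cn, size_t cn_len, const char *expected)
{
    /* An embedded NUL makes strlen() see a silently truncated name. */
    if (strlen(cn) != cn_len)
        return 0;
    return strcmp(cn, expected) == 0;
}
```

With raw_subject_common_name() handing back both the buffer and its length, a check of this shape is enough to refuse a name whose strlen() and decoded length disagree.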
I've\nadded a small blurb to clarify that a custom implementation would be required.\n\n>> +/*\n>> + * pg_SSLShutdownFunc\n>> + *\t\tCallback for NSS shutdown\n>> + *\n>> + * If NSS is terminated from the outside when the connection is still in use\n> \n> What does \"NSS is terminated from the outside when the connection\" really\n> mean? Does this mean the client initiating something?\n\nIf an extension, or other server-loaded code, interfered with NSS and managed\nto close contexts in order to interfere with connections, this would ensure we\nclose it down cleanly.\n\nThat being said, I was now unable to get my old testcase working so I've for\nnow removed this callback from the patch until I can work out if we can make\nproper use of it. AFAICS other mature NSS implementations aren't using it\n(OpenLDAP did in the past but has since removed it; will look at how/why).\n\n>> +\t\telse\n>> +\t\t{\n>> +\t\t\tSECMOD_DestroyModule(ca_trust);\n>> +\t\t\tca_trust = NULL;\n>> +\t\t}\n>> +\t}\n> \n> Might just misunderstand: How can it be ok to destroy ca_trust here? What if\n> there's other connections using it? The same thread might be using multiple\n> connections, and multiple threads might be using connections. Seems very much\n> not thread safe.\n\nRight, that's a leftover from early hacking that I had missed. Fixed.\n\n>> +\t/* This locking is modelled after fe-secure-openssl.c */\n>> +\tif (ssl_config_mutex == NULL)\n>> +\t{\n>> +\t...\n> \n> I'd very much like to avoid duplicating this code. Can we put it somewhere\n> combined instead?\n\nI can look at splitting it out to fe-secure-common.c. A first step here to\nkeep the goalposts from moving in this patch would be to look at combining lock\ninit in fe-secure-openssl.c:pgtls_init() and fe-connect.c:default_threadlock,\nand then just apply the same recipe here once landed. 
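As a strawman for that combined recipe, the one-time setup could be funneled through pthread_once() rather than hand-rolled double-checked locking; a minimal sketch with hypothetical names (not the actual libpq symbols):

```c
#include <assert.h>
#include <pthread.h>

static pthread_once_t tls_init_once = PTHREAD_ONCE_INIT;
static pthread_mutex_t ssl_config_mutex;
static int tls_init_calls = 0;

/* Runs exactly once, however many threads race to get here. */
static void
pg_tls_init_internal(void)
{
    pthread_mutex_init(&ssl_config_mutex, NULL);
    tls_init_calls++;
}

/* Every connection attempt calls this instead of open-coding the locking. */
int
pg_tls_ensure_init(void)
{
    return pthread_once(&tls_init_once, pg_tls_init_internal) == 0;
}

int
pg_tls_init_call_count(void)
{
    return tls_init_calls;
}
```

pthread_once() provides the once-only guarantee portably, which is the same guarantee the InterlockedExchange/mutex dance in the patch is after.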
This could be done\nindependent of this patch.\n\n>> +\t/*\n>> +\t * The NSPR documentation states that runtime initialization via PR_Init\n>> +\t * is no longer required, as the first caller into NSPR will perform the\n>> +\t * initialization implicitly. See be-secure-nss.c for further discussion\n>> +\t * on PR_Init.\n>> +\t */\n>> +\tPR_Init(PR_USER_THREAD, PR_PRIORITY_NORMAL, 0);\n> \n> Why does this, and several subsequent bits, have to happen under a lock?\n\nNSS initialization isn't thread-safe, there is more discussion upthread in and\naround this email:\n\nhttps://postgr.es/m/c8d4bc0dfd266799ab4213f1673a813786ac0c70.camel@vmware.com\n\n> Why are some parts returning -1 and some PGRES_POLLING_FAILED? -1 certainly\n> isn't a member of PostgresPollingStatusType.\n\nThat was a thinko, fixed.\n\n>> +\t\t\t\t/*\n>> +\t\t\t\t * The error cases for PR_Recv are not documented, but can be\n>> +\t\t\t\t * reverse engineered from _MD_unix_map_default_error() in the\n>> +\t\t\t\t * NSPR code, defined in pr/src/md/unix/unix_errors.c.\n>> +\t\t\t\t */\n> \n> Can we propose a patch to document them? 
Don't want to get bitten by this\n> suddenly changing...\n\nI can certainly propose something on their mailinglist, but I unfortunately\nwouldn't get my hopes up too high as NSS and documentation aren't exactly best\nfriends (the in-tree docs doesn't cover the API and Mozilla recently removed\nmost of the online docs in their neverending developer site reorg).\n\n>> From a12769bd793a8e073125c3b3a176b355335646bc Mon Sep 17 00:00:00 2001\n>> From: Daniel Gustafsson <daniel@yesql.se>\n>> Date: Mon, 8 Feb 2021 23:52:45 +0100\n>> Subject: [PATCH v52 07/11] nss: Support NSS in pgcrypto\n>> \n>> This extends pgcrypto to be able to use libnss as a cryptographic\n>> backend for pgcrypto much like how OpenSSL is a supported backend.\n>> Blowfish is not a supported cipher in NSS, so the implementation\n>> falls back on the built-in BF code to be compatible in terms of\n>> cipher support.\n> \n> I wish we didn't have pgcrypto in its current form.\n\nYes. Very much yes. I don't think doing anything about that in the context of\nthis patch is wise, but a discussion on where to take pgcrypto in the future\nwould probably be a good idea.\n\n>> From 5079ce8a677074b93ef1f118d535c6dee4ce64f9 Mon Sep 17 00:00:00 2001\n>> From: Daniel Gustafsson <daniel@yesql.se>\n>> Date: Mon, 8 Feb 2021 23:52:55 +0100\n>> Subject: [PATCH v52 10/11] nss: Build infrastructure\n>> \n>> Finally this adds the infrastructure to build a postgres installation\n>> with libnss support.\n> \n> I would suggest trying to come up with a way to reorder / split the series so\n> that smaller pieces are committable. The way you have this right now leaves\n> you with applying all of it at once as the only realistic way. 
And this\n> patchset is too large for that.\n\nI completely agree, the hard part is identifying smaller sets which also make\nsense and which don't leave the tree in a bad state should anyone check out\nthat specific point in time.\n\nThe two commits in the patchset that are \"easy\" to consider for pushing\nindependently in this regard are IMO:\n\n * 0002 Test refactoring to support multiple TLS libraries.\n * 0004 Check for empty stderr during connect_ok\n\nThe refactoring in 0002 is hopefully not too controversial, but it clearly\nneeds eyes from someone more familiar with modern and idiomatic Perl. 0004\ncould IMO be pushed regardless of the fate of this patchset (after being\nfloated in its own thread on -hackers).\n\nIn order to find a good split I think we need to figure what to optimize for;\ndo we optimize for ease of reverting should that be needed, or along\nfunctionality borders, or something else? I don't have good ideas here, but a\nsingle 7596 insertions(+), 421 deletions(-) commit is clearly not a good idea.\n\nStephen had an idea off-list that we could look at splitting this across the\nserver/client boundary, which I think is the only idea I've so far which has\nlegs. (The first to go in would come with the common code of course.)\n\nDo you have any thoughts after reading through the patch?\n\nThe attached v53 incorporates the fixes discussed above, and builds green for\nboth OpenSSL and NSS in Cirrus on my Github repo (thanks again for your work on\nthose files) so it will be interesting to see the CFBot running them. 
Next\nwould be to figure out how to make the MSVC build it, basing an attempt on\nAndrew's blogpost.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 26 Jan 2022 21:39:16 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2022-01-26 21:39:16 +0100, Daniel Gustafsson wrote:\n> > What about\n> > reconfiguring (note: add --enable-depend) the linux tasks to build against\n> > nss, and then run the relevant subset of tests with it? Most tests don't use\n> > tcp / SSL anyway, so rerunning a small subset of tests should be feasible?\n> \n> That's an interesting idea, I think that could work and be reasonably readable\n> at the same time (and won't require in-depth knowledge of Cirrus). As it's the\n> same task it does spend more time towards the max runtime per task, but that's\n> not a problem for now. It's worth keeping in mind though if we deem this to be\n> a way forward with testing multiple settings.\n\nI think it's a way for a limited number of settings, that each only require a\nlimited amount of tests... Rerunning all tests etc is a different story.\n\n\n\n> > Is it a great idea to have common/nss.h when there's a library header nss.h?\n> > Perhaps we should have a pg_ssl_{nss,openssl}.h or such?\n> \n> That's a good point, I modelled it after common/openssl.h but I agree it's\n> better to differentiate the filenames. I've renamed it to common/pg_nss.h and\n> we should IMO rename common/openssl.h regardless of what happens to this patch.\n\n+1\n\n\n> > Does this make some things notably more expensive? 
Presumably it does remove a\n> > bunch of COW opportunities, but likely that's not a huge factor compared to\n> > asymmetric crypto negotiation...\n> \n> Right, the context of setting up crypto across a network connection it's highly\n> likely to drown out the costs.\n\nIf you start to need to run a helper to decrypt an encrypted private key, and\ndo all the initialization, I'm not so sure that holds true anymore... Have\nyou done any connection speed tests? pgbench -C is helpful for that.\n\n\n> > Maybe some of this commentary should migrate to the file header or such?\n> \n> Maybe, or perhaps README.ssl? Not sure where it would be most reasonable to\n> keep it such that it's also kept up to date.\n\nEither would work for me.\n\n\n> >> This introduce\n> >> + * differences with the OpenSSL support where some errors are only reported\n> >> + * at runtime with NSS where they are reported at startup with OpenSSL.\n> > \n> > Found this sentence hard to parse somehow.\n> > \n> > It seems pretty unfriendly to only have minimal error checking at postmaster\n> > startup time. Seems at least the presence and usability of keys should be done\n> > *also* at that time?\n> \n> I'll look at adding some setup, and subsequent teardown, of NSS at startup\n> during which we could do checking to be more on par with how the OpenSSL\n> backend will report errors.\n\nCool.\n\n\n> >> +/*\n> >> + * raw_subject_common_name\n> >> + *\n> >> + * Returns the Subject Common Name for the given certificate as a raw char\n> >> + * buffer (that is, without any form of escaping for unprintable characters or\n> >> + * embedded nulls), with the length of the buffer returned in the len param.\n> >> + * The buffer is allocated in the TopMemoryContext and is given a NULL\n> >> + * terminator so that callers are safe to call strlen() on it.\n> >> + *\n> >> + * This is used instead of CERT_GetCommonName(), which always performs quoting\n> >> + * and/or escaping. 
NSS doesn't appear to give us a way to easily unescape the\n> >> + * result, and we need to store the raw CN into port->peer_cn for compatibility\n> >> + * with the OpenSSL implementation.\n> >> + */\n> > \n> > Do we have a testcase for embedded NULLs in common names?\n> \n> We don't, neither for OpenSSL or NSS. AFAICR Jacob spent days trying to get a\n> certificate generation to include an embedded NULL byte but in the end gave up.\n> We would have to write our own tools for generating certificates to add that\n> (which may or may not be a bad idea, but it hasn't been done).\n\nHah, that's interesting.\n\n\n> >> +/*\n> >> + * pg_SSLShutdownFunc\n> >> + *\t\tCallback for NSS shutdown\n> >> + *\n> >> + * If NSS is terminated from the outside when the connection is still in use\n> > \n> > What does \"NSS is terminated from the outside when the connection\" really\n> > mean? Does this mean the client initiating something?\n> \n> If an extension, or other server-loaded code, interfered with NSS and managed\n> to close contexts in order to interfere with connections this would ensure us\n> closing it down cleanly.\n> \n> That being said, I was now unable to get my old testcase working so I've for\n> now removed this callback from the patch until I can work out if we can make\n> proper use of it. AFAICS other mature NSS implementations aren't using it\n> (OpenLDAP did in the past but have since removed it, will look at how/why).\n\nI think that'd be elog(FATAL) time if we want to do anything (after changing\nstate so that no data is sent to client).\n\n\n> >> +\t\t\t\t/*\n> >> +\t\t\t\t * The error cases for PR_Recv are not documented, but can be\n> >> +\t\t\t\t * reverse engineered from _MD_unix_map_default_error() in the\n> >> +\t\t\t\t * NSPR code, defined in pr/src/md/unix/unix_errors.c.\n> >> +\t\t\t\t */\n> > \n> > Can we propose a patch to document them? 
Don't want to get bitten by this\n> > suddenly changing...\n> \n> I can certainly propose something on their mailinglist, but I unfortunately\n> wouldn't get my hopes up too high as NSS and documentation aren't exactly best\n> friends (the in-tree docs doesn't cover the API and Mozilla recently removed\n> most of the online docs in their neverending developer site reorg).\n\nKinda makes me question the wisdom of starting to depend on NSS. When openssl\ndocs are vastly outshining a library's, that library really should start to\nask itself some hard questions.\n\n\n\n> In order to find a good split I think we need to figure what to optimize for;\n> do we optimize for ease of reverting should that be needed, or along\n> functionality borders, or something else? I don't have good ideas here, but a\n> single 7596 insertions(+), 421 deletions(-) commit is clearly not a good idea.\n\nI think the goal should be the ability to incrementally commit.\n\n\n> Stephen had an idea off-list that we could look at splitting this across the\n> server/client boundary, which I think is the only idea I've so far which has\n> legs. (The first to go in would come with the common code of course.)\n\nYea, that's the most obvious one. 
I suspect client-side has a lower\ncomplexity, because it doesn't need to replace quite as many things?\n\n\n> The attached v53 incorporates the fixes discussed above, and builds green for\n> both OpenSSL and NSS in Cirrus on my Github repo (thanks again for your work on\n> those files) so it will be interesting to see the CFBot running them.\n\nLooks like that worked...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 26 Jan 2022 15:59:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Wed, 2022-01-26 at 15:59 -0800, Andres Freund wrote:\r\n> > > Do we have a testcase for embedded NULLs in common names?\r\n> > \r\n> > We don't, neither for OpenSSL or NSS. AFAICR Jacob spent days trying to get a\r\n> > certificate generation to include an embedded NULL byte but in the end gave up.\r\n> > We would have to write our own tools for generating certificates to add that\r\n> > (which may or may not be a bad idea, but it hasn't been done).\r\n> \r\n> Hah, that's interesting.\r\n\r\nYeah, OpenSSL just refused to do it, with any method I could find at\r\nleast. My personal test suite is using pyca/cryptography and psycopg2\r\nto cover that case.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 27 Jan 2022 00:51:59 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": ">>> Can we propose a patch to document them? 
Don't want to get bitten by this\n>>> suddenly changing...\n>> \n>> I can certainly propose something on their mailinglist, but I unfortunately\n>> wouldn't get my hopes up too high as NSS and documentation aren't exactly best\n>> friends (the in-tree docs doesn't cover the API and Mozilla recently removed\n>> most of the online docs in their neverending developer site reorg).\n> \n> Kinda makes me question the wisdom of starting to depend on NSS. When openssl\n> docs are vastly outshining a library's, that library really should start to\n> ask itself some hard questions.\n\nSadly, there is that. While this is not a new problem, Mozilla has been making\nsome very weird decisions around NSS governance as of late. Another data point\nis the below thread from libcurl:\n\n https://curl.se/mail/lib-2022-01/0120.html\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 15:08:09 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Jan 28, 2022 at 9:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > Kinda makes me question the wisdom of starting to depend on NSS. When openssl\n> > docs are vastly outshining a library's, that library really should start to\n> > ask itself some hard questions.\n\nYeah, OpenSSL is very poor, so being worse is not good.\n\n> Sadly, there is that. While this is not a new problem, Mozilla has been making\n> some very weird decisions around NSS governance as of late. Another data point\n> is the below thread from libcurl:\n>\n> https://curl.se/mail/lib-2022-01/0120.html\n\nI would really, really like to have an alternative to OpenSSL for PG.\nI don't know if this is the right thing, though. If other people are\ndropping support for it, that's a pretty bad sign IMHO. 
Later in the\nthread it says OpenLDAP have dropped support for it already as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 09:30:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 28 Jan 2022, at 15:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Jan 28, 2022 at 9:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Kinda makes me question the wisdom of starting to depend on NSS. When openssl\n>>> docs are vastly outshining a library's, that library really should start to\n>>> ask itself some hard questions.\n> \n> Yeah, OpenSSL is very poor, so being worse is not good.\n\nSome background on this for anyone interested: Mozilla removed the\ndocumentation from the MDN website and the attempt at resurrecting it in the\ntree (where it should've been all along </rant>) isn't making much progress.\nSome more can be found in this post on the NSS mailinglist:\n\nhttps://groups.google.com/a/mozilla.org/g/dev-tech-crypto/c/p0MO7030K4A/m/Mx5St_2sAwAJ\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 16:10:28 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 28 Jan 2022, at 15:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Jan 28, 2022 at 9:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Kinda makes me question the wisdom of starting to depend on NSS. When openssl\n>>> docs are vastly outshining a library's, that library really should start to\n>>> ask itself some hard questions.\n> \n> Yeah, OpenSSL is very poor, so being worse is not good.\n> \n>> Sadly, there is that. While this is not a new problem, Mozilla has been making\n>> some very weird decisions around NSS governance as of late. 
Another data point\n>> is the below thread from libcurl:\n>> \n>> https://curl.se/mail/lib-2022-01/0120.html\n> \n> I would really, really like to have an alternative to OpenSSL for PG.\n> I don't know if this is the right thing, though. If other people are\n> dropping support for it, that's a pretty bad sign IMHO. Later in the\n> thread it says OpenLDAP have dropped support for it already as well.\n\nI'm counting this and Andres' comment as a -1 on the patchset, and given where\nwe are in the cycle I'm mark it rejected in the CF app shortly unless anyone\nobjects.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 14:24:03 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 28 Jan 2022, at 15:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Fri, Jan 28, 2022 at 9:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>> Kinda makes me question the wisdom of starting to depend on NSS. When openssl\n> >>> docs are vastly outshining a library's, that library really should start to\n> >>> ask itself some hard questions.\n> > \n> > Yeah, OpenSSL is very poor, so being worse is not good.\n> > \n> >> Sadly, there is that. While this is not a new problem, Mozilla has been making\n> >> some very weird decisions around NSS governance as of late. Another data point\n> >> is the below thread from libcurl:\n> >> \n> >> https://curl.se/mail/lib-2022-01/0120.html\n> > \n> > I would really, really like to have an alternative to OpenSSL for PG.\n> > I don't know if this is the right thing, though. If other people are\n> > dropping support for it, that's a pretty bad sign IMHO. 
Later in the\n> > thread it says OpenLDAP have dropped support for it already as well.\n> \n> I'm counting this and Andres' comment as a -1 on the patchset, and given where\n> we are in the cycle I'm mark it rejected in the CF app shortly unless anyone\n> objects.\n\nI agree that it's concerning to hear that OpenLDAP dropped support for\nNSS... though I don't seem to be able to find any information as to why\nthey decided to do so. NSS is clearly still supported and maintained\nand they do seem to understand that they need to work on the\ndocumentation situation and to get that fixed (the current issue seems\nto be around NSS vs. NSPR and the migration off of MDN to the in-tree\ndocumentation as Daniel mentioned, if I followed the discussion\ncorrectly in the bug that was filed by the curl folks and was then\nactively responded to by the NSS/NSPR folks), which seems to be the main\nissue that's being raised about it by the curl folks and here.\n\nI'm also very much a fan of having an alternative to OpenSSL and the\nNSS/NSPR license fits well for us, unlike the alternatives to OpenSSL\nused by other projects, such as GnuTLS (which is the alternative to\nOpenSSL that OpenLDAP now has) or other libraries like wolfSSL.\n\nBeyond the documentation issue, which I agree is a concern but also\nseems to be actively realized as an issue by the NSS/NSPR folks, is\nthere some other reason that the curl folks are thinking of dropping\nsupport for it? 
Or does anyone have insight into why OpenLDAP decided\nto remove support?\n\nThanks,\n\nStephen", "msg_date": "Mon, 31 Jan 2022 11:24:54 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2022-01-31 14:24:03 +0100, Daniel Gustafsson wrote:\n> > On 28 Jan 2022, at 15:30, Robert Haas <robertmhaas@gmail.com> wrote:\n> > I would really, really like to have an alternative to OpenSSL for PG.\n> > I don't know if this is the right thing, though. If other people are\n> > dropping support for it, that's a pretty bad sign IMHO. Later in the\n> > thread it says OpenLDAP have dropped support for it already as well.\n> \n> I'm counting this and Andres' comment as a -1 on the patchset, and given where\n> we are in the cycle I'm mark it rejected in the CF app shortly unless anyone\n> objects.\n\nI'd make mine more a -0.2 or so. I'm concerned about the lack of non-code\ndocumentation and the state of code documentation. I'd like an openssl\nalternative, although not as much as a few years ago - it seems that the state\nof openssl has improved compared to most of the other implementations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 31 Jan 2022 13:32:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 31 Jan 2022, at 17:24, Stephen Frost <sfrost@snowman.net> wrote:\n> * Daniel Gustafsson (daniel@yesql.se) wrote:\n\n>> I'm counting this and Andres' comment as a -1 on the patchset, and given where\n>> we are in the cycle I'm mark it rejected in the CF app shortly unless anyone\n>> objects.\n> \n> I agree that it's concerning to hear that OpenLDAP dropped support for\n> NSS... though I don't seem to be able to find any information as to why\n> they decided to do so.\n\nI was also unable to do that. 
There is no information that I could see in\neither the commit message, Bugzilla entry (#9207) or on the mailinglist.\nSearching the web didn't yield anything either. I've reached out to hopefully\nget a bit more information.\n\n> I'm also very much a fan of having an alternative to OpenSSL and the\n> NSS/NSPR license fits well for us, unlike the alternatives to OpenSSL\n> used by other projects, such as GnuTLS (which is the alternative to\n> OpenSSL that OpenLDAP now has) or other libraries like wolfSSL.\n\nShort of platform specific (proprietary) libraries like Schannel and Secure\nTransport, the alternatives are indeed slim.\n\n> Beyond the documentation issue, which I agree is a concern but also\n> seems to be actively realized as an issue by the NSS/NSPR folks,\n\nIt is, but it has also been an issue for years to be honest, getting the docs\nup to scratch will require a very large effort.\n\n> is there some other reason that the curl folks are thinking of dropping support\n> for it?\n\nIt's also not really used anymore in conjunction with curl, with Red Hat no\nlonger shipping builds against it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 22:48:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 31 Jan 2022, at 22:32, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2022-01-31 14:24:03 +0100, Daniel Gustafsson wrote:\n>>> On 28 Jan 2022, at 15:30, Robert Haas <robertmhaas@gmail.com> wrote:\n>>> I would really, really like to have an alternative to OpenSSL for PG.\n>>> I don't know if this is the right thing, though. If other people are\n>>> dropping support for it, that's a pretty bad sign IMHO. 
Later in the\n>>> thread it says OpenLDAP have dropped support for it already as well.\n>> \n>> I'm counting this and Andres' comment as a -1 on the patchset, and given where\n>> we are in the cycle I'm mark it rejected in the CF app shortly unless anyone\n>> objects.\n> \n> I'd make mine more a -0.2 or so. I'm concerned about the lack of non-code\n> documentation and the state of code documentation. I'd like an openssl\n> alternative, although not as much as a few years ago - it seems that the state\n> of openssl has improved compared to most of the other implementations.\n\nIMHO I think OpenSSL has improved over OpenSSL of the past - which is great to\nsee - but they have also diverged themselves into writing a full QUIC\nimplementation which *I personally think* is a distraction they don't need.\n\nThat being said, there aren't too many other options.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 31 Jan 2022 22:51:19 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 31 Jan 2022, at 22:48, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 31 Jan 2022, at 17:24, Stephen Frost <sfrost@snowman.net> wrote:\n\n>> I agree that it's concerning to hear that OpenLDAP dropped support for\n>> NSS... though I don't seem to be able to find any information as to why\n>> they decided to do so.\n> \n> I was also unable to do that. There is no information that I could see in\n> either the commit message, Bugzilla entry (#9207) or on the mailinglist.\n> Searching the web didn't yield anything either. 
I've reached out to hopefully\n> get a bit more information.\n\nSupport issues and Red Hat dropping OpenLDAP was cited [0] as the main drivers\nfor dropping NSS.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://curl.se/mail/lib-2022-02/0000.html\n\n", "msg_date": "Tue, 1 Feb 2022 11:09:08 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 31 Jan 2022, at 22:48, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> On 31 Jan 2022, at 17:24, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> >> I agree that it's concerning to hear that OpenLDAP dropped support for\n> >> NSS... though I don't seem to be able to find any information as to why\n> >> they decided to do so.\n> > \n> > I was also unable to do that. There is no information that I could see in\n> > either the commit message, Bugzilla entry (#9207) or on the mailinglist.\n> > Searching the web didn't yield anything either. I've reached out to hopefully\n> > get a bit more information.\n> \n> Support issues and Red Hat dropping OpenLDAP was cited [0] as the main drivers\n> for dropping NSS.\n\nThat's both very vague and oddly specific, I have to say. Also, not\nreally sure that it's a good reason for other projects to move away, or\nfor the large amount of work put into this effort to be thrown out when\nit seems to be quite close to finally being done and giving us an\nalternative, supported and maintained, TLS/SSL library.\n\nThe concern about the documentation not being easily available is\ncertainly something to consider. I remember in prior reviews not having\nthat much difficulty looking up documentation for functions, and in\ndoing some quick looking around there's certainly some (most?) 
of the\nNSS documentation still up, the issue is that the NSPR documentation was\ntaken off of the MDN website and that's referenced from the NSS pages\nand is obviously something that folks working with NSS need to be able\nto find the documentation for too.\n\nAll that said, while have documentation on the web is nice and all, it\nseems to still be in the source, at least when I grabbed NSPR locally\nwith apt-get source and looked at PR_Recv, I found:\n\n/*\n *************************************************************************\n * FUNCTION: PR_Recv\n * DESCRIPTION:\n * Receive a specified number of bytes from a connected socket.\n * The operation will block until some positive number of bytes are\n * transferred, a time out has occurred, or there is an error.\n * No more than 'amount' bytes will be transferred.\n * INPUTS:\n * PRFileDesc *fd\n * points to a PRFileDesc object representing a socket.\n * void *buf\n * pointer to a buffer to hold the data received.\n * PRInt32 amount\n * the size of 'buf' (in bytes)\n * PRIntn flags\n * must be zero or PR_MSG_PEEK.\n * PRIntervalTime timeout\n * Time limit for completion of the receive operation.\n * OUTPUTS:\n * None\n * RETURN: PRInt32\n * a positive number indicates the number of bytes actually received.\n * 0 means the network connection is closed.\n * -1 indicates a failure. The reason for the failure is obtained\n * by calling PR_GetError().\n **************************************************************************\n */\n\nSo, it's not the case that the documentation is completely gone and\nutterly unavailable to those who are interested in it, it's just in the\nsource rather than being on a nicely formatted webpage. 
One can find it\non the web too, naturally:\n\nhttps://github.com/thespooler/nspr/blob/29ba433ebceda269d2b0885176b7f8cd4c5c2c52/pr/include/prio.h#L1424\n\n(no idea what version that is, just found a random github repo with it,\nbut wouldn't be hard to import the latest version).\n\nConsidering how much we point people to our source when they're writing\nextensions and such, this doesn't strike me as quite the dire situation\nthat it first appeared to be based on the initial comments. There is\ndocumentation, it's not actually that hard to find if you're working\nwith the library, and the maintainers have stated their intention to\nwork on improving the web-based documentation.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Feb 2022 15:12:28 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Hi,\n\nOn 2022-02-01 15:12:28 -0500, Stephen Frost wrote:\n> The concern about the documentation not being easily available is\n> certainly something to consider. I remember in prior reviews not having\n> that much difficulty looking up documentation for functions\n\nI've definitely several times in the course of this thread asked for\ndocumentation about specific bits and there was none. And not just recently.\n\n\n> All that said, while have documentation on the web is nice and all, it\n> seems to still be in the source, at least when I grabbed NSPR locally\n> with apt-get source and looked at PR_Recv, I found:\n\nWhat I'm most concerned about is less the way individual functions work, and\nmore a bit higher level things. Like e.g. about not being allowed to\nfork. Which has significant design implications given postgres' process\nmodel...\n\n\nI think some documentation has been re-uploaded in the last few days. 
I recall\nthe content around https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS\nbeing gone too, last time I checked.\n\n\n> So, it's not the case that the documentation is completely gone and\n> utterly unavailable to those who are interested in it, it's just in the\n> source rather than being on a nicely formatted webpage. One can find it\n> on the web too, naturally:\n\n> https://github.com/thespooler/nspr/blob/29ba433ebceda269d2b0885176b7f8cd4c5c2c52/pr/include/prio.h#L1424\n\n> (no idea what version that is, just found a random github repo with it,\n> but wouldn't be hard to import the latest version).\n\nIt's last been updated 2015...\n\nThere's https://hg.mozilla.org/projects/nspr/file/tip/pr/src - which is I\nthink the upstream source.\n\nA project without even a bare-minimal README at the root does have a \"internal\nonly\" feel to it...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 1 Feb 2022 13:52:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Tue, Feb 1, 2022 at 01:52:09PM -0800, Andres Freund wrote:\n> There's https://hg.mozilla.org/projects/nspr/file/tip/pr/src - which is I\n> think the upstream source.\n> \n> A project without even a bare-minimal README at the root does have a \"internal\n> only\" feel to it...\n\nI agree --- it is a library --- if they don't feel the need to publish\nthe API, it seems to mean they want to maintain the ability to change it\nat any time, and therefore it is inappropriate for other software to\nrely on that API.\n\nThis is not the same as Postgres extensions needing to read the Postgres\nsource code --- they are an important but edge use case and we never saw\nthe need to standardize or publish the internal functions that must be\nstudied and adjusted possibly for major releases.\n\nThis kind of feels like the Chrome JavaScript code that used to be able\nto be build separately 
for PL/v8, but has gotten much harder to do in\nthe past few years.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 16:08:04 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 28.01.22 15:30, Robert Haas wrote:\n> I would really, really like to have an alternative to OpenSSL for PG.\n\nWhat are the reasons people want that? With OpenSSL 3, the main reasons \n-- license and FIPS support -- have gone away.\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 15:07:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 3 Feb 2022, at 15:07, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 28.01.22 15:30, Robert Haas wrote:\n>> I would really, really like to have an alternative to OpenSSL for PG.\n> \n> What are the reasons people want that? With OpenSSL 3, the main reasons -- license and FIPS support -- have gone away.\n\nAt least it will go away when OpenSSL 3 is FIPS certified, which is yet to\nhappen (submitted, not processed).\n\nI see quite a few valid reasons to want an alternative, a few off the top of my\nhead include:\n\n- Using trust stores like Keychain on macOS with Secure Transport. There is\nAFAIK something similar on Windows and NSS has it's certificate databases.\nEspecially on client side libpq it would be quite nice to integrate with where\ncertificates already are rather than rely on files on disks.\n\n- Not having to install OpenSSL, Schannel and Secure Transport would make life\neasier for packagers.\n\n- Simply having an alternative. 
The OpenSSL projects recent venture into\nwriting transport protocols have made a lot of people worried over their\nbandwidth for fixing and supporting core features.\n\nJust my $0.02, everyones mileage varies on these.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 15:53:49 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Tue, Feb 1, 2022 at 01:52:09PM -0800, Andres Freund wrote:\n> > There's https://hg.mozilla.org/projects/nspr/file/tip/pr/src - which is I\n> > think the upstream source.\n> > \n> > A project without even a bare-minimal README at the root does have a \"internal\n> > only\" feel to it...\n> \n> I agree --- it is a library --- if they don't feel the need to publish\n> the API, it seems to mean they want to maintain the ability to change it\n> at any time, and therefore it is inappropriate for other software to\n> rely on that API.\n\nThis is really not a reasonable representation of how this library has\nbeen maintained historically nor is there any reason to think that their\npolicy regarding the API has changed recently. They do have a\ndocumented API and that hasn't changed- it's just that it's not easily\navailable in web-page form any longer and that's due to something\nindependent of the library maintenance. They've also done a good job\nwith maintaining the API as one would expect from a library and so this\nreally isn't a reason to avoid using it. 
If there's actual specific\nexamples of the API not being well maintained and causing issues then\nplease point to them and we can discuss if that is a reason to consider\nnot depending on NSS/NSPR.\n\n> This is not the same as Postgres extensions needing to read the Postgres\n> source code --- they are an important but edge use case and we never saw\n> the need to standardize or publish the internal functions that must be\n> studied and adjusted possibly for major releases.\n\nI agree that extensions and public libraries aren't entirely the same\nbut I don't think it's all that unreasonable for developers that are\nusing a library to look at the source code for that library when\ndeveloping against it, that's certainly something I've done for a\nnumber of different libraries.\n\n> This kind of feels like the Chrome JavaScript code that used to be able\n> to be build separately for PL/v8, but has gotten much harder to do in\n> the past few years.\n\nThis isn't at all like that case, where the maintainers made a very\nclear and intentional choice to make it quite difficult for packagers to\npull v8 out to package it. Nothing like that has happened with NSS and\nthere isn't any reason to think that it will based on what the\nmaintainers have said and what they've done across the many years that\nNSS has been around.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Feb 2022 13:42:53 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> > On 3 Feb 2022, at 15:07, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > \n> > On 28.01.22 15:30, Robert Haas wrote:\n> >> I would really, really like to have an alternative to OpenSSL for PG.\n> > \n> > What are the reasons people want that? 
With OpenSSL 3, the main reasons -- license and FIPS support -- have gone away.\n> \n> At least it will go away when OpenSSL 3 is FIPS certified, which is yet to\n> happen (submitted, not processed).\n> \n> I see quite a few valid reasons to want an alternative, a few off the top of my\n> head include:\n> \n> - Using trust stores like Keychain on macOS with Secure Transport. There is\n> AFAIK something similar on Windows and NSS has it's certificate databases.\n> Especially on client side libpq it would be quite nice to integrate with where\n> certificates already are rather than rely on files on disks.\n> \n> - Not having to install OpenSSL, Schannel and Secure Transport would make life\n> easier for packagers.\n> \n> - Simply having an alternative. The OpenSSL projects recent venture into\n> writing transport protocols have made a lot of people worried over their\n> bandwidth for fixing and supporting core features.\n> \n> Just my $0.02, everyones mileage varies on these.\n\nYeah, agreed on all of these.\n\nThanks,\n\nStephen", "msg_date": "Thu, 3 Feb 2022 13:43:28 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On 03.02.22 15:53, Daniel Gustafsson wrote:\n> I see quite a few valid reasons to want an alternative, a few off the top of my\n> head include:\n> \n> - Using trust stores like Keychain on macOS with Secure Transport. There is\n> AFAIK something similar on Windows and NSS has it's certificate databases.\n> Especially on client side libpq it would be quite nice to integrate with where\n> certificates already are rather than rely on files on disks.\n> \n> - Not having to install OpenSSL, Schannel and Secure Transport would make life\n> easier for packagers.\n\nThose are good reasons for Schannel and Secure Transport, less so for NSS.\n\n> - Simply having an alternative. 
The OpenSSL projects recent venture into\n> writing transport protocols have made a lot of people worried over their\n> bandwidth for fixing and supporting core features.\n\nIf we want simply an alternative, we had a GnuTLS variant almost done a \nfew years ago, but in the end people didn't want it enough. It seems to \nbe similar now.\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 20:16:00 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Feb 3, 2022 at 2:16 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> If we want simply an alternative, we had a GnuTLS variant almost done a\n> few years ago, but in the end people didn't want it enough. It seems to\n> be similar now.\n\nYeah. I think it's pretty clear that the only real downside of\ncommitting support for GnuTLS or NSS or anything else is that we then\nneed to maintain that support (or eventually remove it). I don't\nreally see a problem if Daniel wants to commit this, set up a few\nbuildfarm animals, and fix stuff when it breaks. If he does that, I\ndon't see that we're losing anything. But, if he commits it in the\nhope that other people are going to step up to do the maintenance\nwork, maybe that's not going to happen, or at least not without\ngrumbling. I'm not objecting to this being committed in the sense that\nI don't ever want to see it in the tree, but I'm also not volunteering\nto maintain it.\n\nAs a philosophical matter, I don't think it's great for us - or the\nInternet in general - to be too dependent on OpenSSL. Software\nmonocultures are not great, and OpenSSL has near-constant security\nupdates and mediocre documentation. Now, maybe anything else we\nsupport will end up having similar issues, or worse. 
But if we and\nother projects are never willing to support anything but OpenSSL, then\nthere will never be viable alternatives to OpenSSL, because a library\nthat isn't actually used by the software you care about is of no use.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 14:33:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Feb 3, 2022 at 01:42:53PM -0500, Stephen Frost wrote:\n> Greetings,\n> \n> * Bruce Momjian (bruce@momjian.us) wrote:\n> > On Tue, Feb 1, 2022 at 01:52:09PM -0800, Andres Freund wrote:\n> > > There's https://hg.mozilla.org/projects/nspr/file/tip/pr/src - which is I\n> > > think the upstream source.\n> > > \n> > > A project without even a bare-minimal README at the root does have a \"internal\n> > > only\" feel to it...\n> > \n> > I agree --- it is a library --- if they don't feel the need to publish\n> > the API, it seems to mean they want to maintain the ability to change it\n> > at any time, and therefore it is inappropriate for other software to\n> > rely on that API.\n> \n> This is really not a reasonable representation of how this library has\n> been maintained historically nor is there any reason to think that their\n> policy regarding the API has changed recently. They do have a\n> documented API and that hasn't changed- it's just that it's not easily\n> available in web-page form any longer and that's due to something\n> independent of the library maintenance. They've also done a good job\n\nSo they have always been bad at providing an API, not just now, or that\ntheir web content disappeared and they haven't fixed it, for how long? \nI guess that is better than the v8 case, but not much. Is posting web\ncontent really that hard for them?\n\n> with maintaining the API as one would expect from a library and so this\n> really isn't a reason to avoid using it. 
If there's actual specific\n> examples of the API not being well maintained and causing issues then\n> please point to them and we can discuss if that is a reason to consider\n> not depending on NSS/NSPR.\n\nI have no specifics.\n\n> > This is not the same as Postgres extensions needing to read the Postgres\n> > source code --- they are an important but edge use case and we never saw\n> > the need to standardize or publish the internal functions that must be\n> > studied and adjusted possibly for major releases.\n> \n> I agree that extensions and public libraries aren't entirely the same\n> but I don't think it's all that unreasonable for developers that are\n> using a library to look at the source code for that library when\n> developing against it, that's certainly something I've done for a\n> number of different libraries.\n\nWow, you have a much higher tolerance than I do. How do you even know\nwhich functions are the public API if you have to look at the source\ncode?\n\n> > This kind of feels like the Chrome JavaScript code that used to be able\n> > to be build separately for PL/v8, but has gotten much harder to do in\n> > the past few years.\n> \n> This isn't at all like that case, where the maintainers made a very\n> clear and intentional choice to make it quite difficult for packagers to\n> pull v8 out to package it. 
Nothing like that has happened with NSS and\n> there isn't any reason to think that it will based on what the\n> maintainers have said and what they've done across the many years that\n> NSS has been around.\n\nAs far as I know, the v8 developers didn't say anything, they just\nstarted moving things around to make it easier for them and harder for\npackagers --- and they didn't care.\n\nI frankly think we need some public statement from the NSS developers\nbefore moving forward --- there are just too many red flags here, and\nonce we support it, it will be hard to remove support for it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:04:23 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Thu, Feb 3, 2022 at 02:33:37PM -0500, Robert Haas wrote:\n> As a philosophical matter, I don't think it's great for us - or the\n> Internet in general - to be too dependent on OpenSSL. Software\n> monocultures are not great, and OpenSSL has near-constant security\n> updates and mediocre documentation. Now, maybe anything else we\n\nI don't think it is fair to be criticizing OpenSSL for its mediocre\ndocumentation when the alternative being considered, NSS, has no public\ndocumentation. 
Can the source-code-defined NSS documentation be\nconsidered better than the mediocre OpenSSL public documentation?\n\nFor the record, I do like the idea of adding NSS, but I am concerned\nabout its long-term maintenance, we you explained.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:22:00 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Feb 4, 2022 at 1:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, Feb 3, 2022 at 02:33:37PM -0500, Robert Haas wrote:\n> > As a philosophical matter, I don't think it's great for us - or the\n> > Internet in general - to be too dependent on OpenSSL. Software\n> > monocultures are not great, and OpenSSL has near-constant security\n> > updates and mediocre documentation. Now, maybe anything else we\n>\n> I don't think it is fair to be criticizing OpenSSL for its mediocre\n> documentation when the alternative being considered, NSS, has no public\n> documentation. Can the source-code-defined NSS documentation be\n> considered better than the mediocre OpenSSL public documentation?\n\nI mean, I think it's fair to say that my experiences with trying to\nuse the OpenSSL documentation have been poor. Admittedly it's been a\nfew years now so maybe it's gotten better, but my experience was what\nit was. In one case, the function I needed wasn't documented at all,\nand I had to read the C code, which was weirdly-formatted and had no\ncomments. 
That wasn't fun, and knowing that NSS could be an even worse\nexperience doesn't retroactively turn that into a good one.\n\n> For the record, I do like the idea of adding NSS, but I am concerned\n> about its long-term maintenance, as you explained.\n\nIt sounds like we come down in about the same place here, in the end.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:33:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "On Fri, Feb 4, 2022 at 01:33:00PM -0500, Robert Haas wrote:\n> > I don't think it is fair to be criticizing OpenSSL for its mediocre\n> > documentation when the alternative being considered, NSS, has no public\n> > documentation. Can the source-code-defined NSS documentation be\n> > considered better than the mediocre OpenSSL public documentation?\n> \n> I mean, I think it's fair to say that my experiences with trying to\n> use the OpenSSL documentation have been poor. Admittedly it's been a\n> few years now so maybe it's gotten better, but my experience was what\n> it was. In one case, the function I needed wasn't documented at all,\n> and I had to read the C code, which was weirdly-formatted and had no\n> comments. 
That wasn't fun, and knowing that NSS could be an even worse\n> experience doesn't retroactively turn that into a good one.\n\nOh, yeah, the OpenSSL documentation is verifiably mediocre.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:39:35 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Feb 2022, at 19:22, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Feb 3, 2022 at 02:33:37PM -0500, Robert Haas wrote:\n>> As a philosophical matter, I don't think it's great for us - or the\n>> Internet in general - to be too dependent on OpenSSL. Software\n>> monocultures are not great, and OpenSSL has near-constant security\n>> updates and mediocre documentation. Now, maybe anything else we\n> \n> I don't think it is fair to be criticizing OpenSSL for its mediocre\n> documentation when the alternative being considered, NSS, has no public\n> documentation. Can the source-code-defined NSS documentation..\n\nNot that it will shift the needle either way, but to give credit where credit\nis due:\n\nBoth NSS and NSPR are documented, and have been since they were published by\nNetscape in 1998. The documentation does lack things, and some parts are quite\nout of date. That's true and undisputed even by the projects themselves who\nstate this: \"It currently is very deprecated and likely incorrect or broken in\nmany places\".\n\nThe recent issue was that Mozilla decided to remove all 3rd party projects (why\nthey consider their own code 3rd party is a mystery to me) from their MDN site,\nand so NSS and NSPR were deleted with no replacement. This was said to be\nworked on but didn't happen and no docs were imported into the tree. 
When\nDaniel from curl (the other one, not I) complained, this caused enough momentum\nto get this work going and it's now been \"done\".\n\n NSS: https://firefox-source-docs.mozilla.org/security/nss/\n NSPR: https://firefox-source-docs.mozilla.org/nspr/\n\nI am writing done above in quotes, since the documentation also needs to be\nupdated, completed, rewritten, organized etc etc. The above is an import of\nwhat was found, and is in a fairly poor state. Unfortunately, it's still not\nin the tree where I personally believe documentation stands the best chance of\nbeing kept up to date. The NSPR documentation is probably the best of the two,\nbut it's also much less of a moving target.\n\nIt is true that the documentation is poor and currently in bad shape with lots\nof broken links and heavily disorganized etc. It's also true that I managed to\nimplement full libpq support without any crystal ball or help from the NSS\nfolks. The latter doesn't mean we can brush documentation concerns aside, but\nlet's be fair in our criticism.\n\n> ..be considered better than the mediocre OpenSSL public documentation?\n\nOpenSSL has gotten a lot better in recent years, it's still not great or where\nI would like it to be, but a lot better.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 20:48:52 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Feb 3, 2022 at 01:42:53PM -0500, Stephen Frost wrote:\n> > * Bruce Momjian (bruce@momjian.us) wrote:\n> > > On Tue, Feb 1, 2022 at 01:52:09PM -0800, Andres Freund wrote:\n> > > > There's https://hg.mozilla.org/projects/nspr/file/tip/pr/src - which is I\n> > > > think the upstream source.\n> > > > \n> > > > A project without even a bare-minimal README at the root does have a \"internal\n> > > > only\" feel to it...\n> > > \n> > 
> I agree --- it is a library --- if they don't feel the need to publish\n> > > the API, it seems to mean they want to maintain the ability to change it\n> > > at any time, and therefore it is inappropriate for other software to\n> > > rely on that API.\n> > \n> > This is really not a reasonable representation of how this library has\n> > been maintained historically nor is there any reason to think that their\n> > policy regarding the API has changed recently. They do have a\n> > documented API and that hasn't changed- it's just that it's not easily\n> > available in web-page form any longer and that's due to something\n> > independent of the library maintenance. They've also done a good job\n> \n> So they have always been bad at providing an API, not just now, or that\n> their web content disappeared and they haven't fixed it, for how long? \n> I guess that is better than the v8 case, but not much. Is posting web\n> content really that hard for them?\n\nTo be clear, *part* of the web-based documentation disappeared and\nhasn't been replaced yet. The NSS-specific pieces are actually still\navailable, it's the NSPR (which is a lower level library used by NSS)\npart that was removed from MDN and hasn't been brought back yet, but\nwhich does still exist as comments in the source of the library.\n\n> > with maintaining the API as one would expect from a library and so this\n> > really isn't a reason to avoid using it. 
If there's actual specific\n> > examples of the API not being well maintained and causing issues then\n> > please point to them and we can discuss if that is a reason to consider\n> > not depending on NSS/NSPR.\n> \n> I have no specifics.\n\nThen I don't understand where the claim you made that \"it seems to mean\nthey want to maintain the ability to change it at any time\" has any\nmerit.\n\n> > > This is not the same as Postgres extensions needing to read the Postgres\n> > > source code --- they are an important but edge use case and we never saw\n> > > the need to standardize or publish the internal functions that must be\n> > > studied and adjusted possibly for major releases.\n> > \n> > I agree that extensions and public libraries aren't entirely the same\n> > but I don't think it's all that unreasonable for developers that are\n> > using a library to look at the source code for that library when\n> > developing against it, that's certainly something I've done for a\n> > number of different libraries.\n> \n> Wow, you have a much higher tolerance than I do. How do you even know\n> which functions are the public API if you have to look at the source\n> code?\n\nBecause... it's documented? They have public (and private) .h files in\nthe source tree and the function declarations have large comment blocks\nabove them which provide a documented API. 
I'm not talking about having\nto decipher from the actual C code what's going on but just reading the\nfunction header comment that provides the documentation of the API for\neach of the functions, and there's larger blocks of comments at the top\nof those .h files which provide more insight into how the functions in\nthat particular part of the system work and interact with each other.\nMaybe those things would be better as separate README files like what we\ndo, but maybe not, and I don't see it as a huge failing that they chose\nto use a big comment block at the top of their .h files to explain\nthings rather than separate README files.\n\nReading comments in code that I'm calling out to, even if it's in\nanother library (or another part of PG where the README isn't helping me\nenough, or due to there not being a README for that particular thing)\nalmost seems typical, to me anyway. Perhaps the exception being when\nthere are good man pages.\n\n> I frankly think we need some public statement from the NSS developers\n> before moving forward --- there are just too many red flags here, and\n> once we support it, it will be hard to remove support for it.\n\nThey have made public statements regarding this and it's been linked to\nalready in this thread:\n\nhttps://github.com/mdn/content/issues/12471\n\nwhere they explicitly state that the project is alive and maintained,\nfurther, it now also links to this:\n\nhttps://bugzilla.mozilla.org/show_bug.cgi?id=1753127\n\nWhich certainly seems to have had a fair bit of action taken on it.\n\nIndeed, it looks like they've got a lot of the docs up and online now,\nincluding the documentation for the function that started much of this:\n\nhttps://firefox-source-docs.mozilla.org/nspr/reference/pr_recv.html#pr-recv\n\nLooks like they're still working out some of the kinks between the NSS\npages and having links from them over to the NSPR pages, but a whole lot\nof progress sure looks like it's been made in pretty short order here.\n\nDefinitely isn't looking unmaintained to me.\n\nThanks,\n\nStephen", "msg_date": "Fri, 4 Feb 2022 14:57:51 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Bruce Momjian (bruce@momjian.us) wrote:\n> On Thu, Feb 3, 2022 at 02:33:37PM -0500, Robert Haas wrote:\n> > As a philosophical matter, I don't think it's great for us - or the\n> > Internet in general - to be too dependent on OpenSSL. Software\n> > monocultures are not great, and OpenSSL has near-constant security\n> > updates and mediocre documentation. Now, maybe anything else we\n> \n> I don't think it is fair to be criticizing OpenSSL for its mediocre\n> documentation when the alternative being considered, NSS, has no public\n> documentation. Can the source-code-defined NSS documentation be\n> considered better than the mediocre OpenSSL public documentation?\n\nThis simply isn't the case and wasn't even the case at the start of this\nthread. The NSPR documentation was only available through the header\nfiles due to it being taken down from MDN. The NSS documentation was\nactually still there. Looks like they've now (mostly) fixed the lack of\nNSPR documentation, as noted in the recent email that I sent.\n\n> For the record, I do like the idea of adding NSS, but I am concerned\n> about its long-term maintenance, as you explained.\n\nThey've come out and explicitly said that the project is active and\nmaintained, and they've been doing regular releases. 
I don't think\nthere's really any reason to think that it's not being maintained at\nthis point.\n\nThanks,\n\nStephen", "msg_date": "Fri, 4 Feb 2022 14:59:35 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "Greetings,\n\n* Daniel Gustafsson (daniel@yesql.se) wrote:\n> I am writing done above in quotes, since the documentation also needs to be\n> updated, completed, rewritten, organized etc etc. The above is an import of\n> what was found, and is in a fairly poor state. Unfortunately, it's still not\n> in the tree where I personally believe documentation stands the best chance of\n> being kept up to date. The NSPR documentation is probably the best of the two,\n> but it's also much less of a moving target.\n\nI wonder about the 'not in tree' bit since it is in the header files,\ncertainly for NSPR which I've been poking at due to this discussion. I\nhad hoped that they were generating the documentation on the webpage\nfrom what's in the header files, is that not the case then? Which is\nmore accurate? If it's a simple matter of spending time going through\nwhat's in the tree and making sure what's online matches that, I suspect\nwe could find some folks with time to work on helping them there.\n\nIf the in-tree stuff isn't accurate then that's a bigger problem, of\ncourse.\n\n> It is true that the documentation is poor and currently in bad shape with lots\n> of broken links and heavily disorganized etc. It's also true that I managed to\n> implement full libpq support without any crystal ball or help from the NSS\n> folks. 
The latter doesn't mean we can brush documentation concerns aside, but\n> let's be fair in our criticism.\n\nAgreed.\n\nThanks,\n\nStephen", "msg_date": "Fri, 4 Feb 2022 15:03:50 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Support for NSS as a libpq TLS backend" }, { "msg_contents": "> On 4 Feb 2022, at 21:03, Stephen Frost <sfrost@snowman.net> wrote:\n\n> I wonder about the 'not in tree' bit since it is in the header files,\n> certainly for NSPR which I've been poking at due to this discussion.\n\nWhat I meant was that the documentation on the website isn't published from\ndocumentation source code (in whichever format) residing in the tree.\n\nThat being said, I take that back since I just now in a git pull found that\nthey had done just that 6 days ago. It's just as messy and incomplete as what\nis currently on the web, important API's like NSS_InitContext are still not\neven mentioned more than in a release note, but I think it stands a better\nchance of success than before.\n\n> I had hoped that they were generating the documentation on the webpage from\n> what's in the header files, is that not the case then?\n\n\nNot from what I can tell no.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 21:18:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Support for NSS as a libpq TLS backend" } ]
[ { "msg_contents": "\nAs I was resuscitating my buildfarm animal lousyjack, which runs tests\nunder valgrind, I neglected to turn off force_parallel=regress in the\nconfiguration settings. The results were fairly disastrous. The runs\ntook about 4 times as long, and some steps failed. I'm not sure if this\nis a known result, so I thought I'd memorialize it in case someone else\nruns into the same issue. When I disabled this setting the animal turned\nto being happy once more.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 16 May 2020 09:58:23 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "valgrind vs force_parallel_mode=regress" } ]
[ { "msg_contents": "If pg_dump can't seek on its output stream when writing a dump in the\ncustom archive format (possibly because you piped its stdout to a file)\nit can't update that file with data offsets. These files will often\nbreak parallel restoration. Warn when the user is doing pg_restore on\nsuch a file to give them a hint as to why their restore is about to\nfail.\n\nThe documentation for pg_restore -j is also updated to suggest that you\ndump custom archive formats with the -f option.\n---\n doc/src/sgml/ref/pg_restore.sgml | 9 +++++++++\n src/bin/pg_dump/pg_backup_custom.c | 8 ++++++++\n 2 files changed, 17 insertions(+)", "msg_date": "Sat, 16 May 2020 16:57:46 -0400", "msg_from": "David Gilman <davidgilman1@gmail.com>", "msg_from_op": true, "msg_subject": "Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "On Sat, May 16, 2020 at 04:57:46PM -0400, David Gilman wrote:\n> If pg_dump can't seek on its output stream when writing a dump in the\n> custom archive format (possibly because you piped its stdout to a file)\n> it can't update that file with data offsets. These files will often\n> break parallel restoration. Warn when the user is doing pg_restore on\n> such a file to give them a hint as to why their restore is about to\n> fail.\n\nYou didn't say so, but I gather this is related to this other thread (which\nseems to represent two separate issues).\nhttps://www.postgresql.org/message-id/flat/1582010626326-0.post%40n3.nabble.com#0891d77011cdb6ca3ad8ab7904a2ed63\n\n> Tom, if you or anyone else with PostgreSQL would appreciate the\n> pg_dump file I can send it to you out of band, it's only a few\n> megabytes. I have pg_restore with debug symbols too if you want me to\n> try anything.\n\nWould you send to me or post a link to a filesharing site and I'll try to\nreproduce it ? 
So far no luck.\n\nYou should include here your diagnosis from that thread, or add it to a commit\nmessage, and mention the suspect commit (548e50976). Eventually add patch for\nthe next commitfest. https://commitfest.postgresql.org/\n\nI guess you're also involved in this conversation:\nhttps://dba.stackexchange.com/questions/257398/pg-restore-with-jobs-flag-results-in-pg-restore-error-a-worker-process-di\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 19 May 2020 08:07:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I started fooling with this at home while our ISP is broke (pardon my brevity).\n\nMaybe you also saw commit b779ea8a9a2dc3a089b3ac152b1ec4568bfeb26f\n\"Fix pg_restore so parallel restore doesn't fail when the input file\ndoesn't contain data offsets (which it won't, if pg_dump thought its\noutput wasn't seekable)...\"\n\n...which I guess should actually say \"doesn't NECESSARILY fail\", since\nit also adds this comment:\n\"This could fail if we are asked to restore items out-of-order.\"\n\nSo this is a known issue and not a regression. I think the PG11\ncommit you mentioned (548e5097) happens to make some databases fail in\nparallel restore that previously worked (I didn't check). Possibly\nalso some databases (or some pre-existing dumps) which used to fail\nmight possibly now succeed.\n\nYour patch adds a warning if unseekable output might fail during\nparallel restore. I'm not opposed to that, but can we just make\npg_restore work in that case? If the input is unseekable, then we can\nnever do a parallel restore at all. If it *is* seekable, could we\nmake _PrintTocData rewind if it gets to EOF using ftello(SEEK_SET, 0)\nand re-scan again from the beginning? 
Would you want to try that ?\n\n\n", "msg_date": "Tue, 19 May 2020 22:26:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "Your understanding of the issue is mostly correct:\n\n> I think the PG11\n> commit you mentioned (548e5097) happens to make some databases fail in\n> parallel restore that previously worked (I didn't check).\n\nCorrect, if you do the bisect around that yourself you'll see\npg_restore start failing with the expected \"possibly due to\nout-of-order restore request\" on offset-less dumps. It is a known\nissue but it's only documented in code comments, not anywhere user\nfacing, which is sending people to StackOverflow.\n\n> If the input is unseekable, then we can\n> never do a parallel restore at all.\n\nI don't know if this is strictly true. Imagine the case of a database\ndump of a single large table with a few indexes, so simple enough that\neverything in the file is going to be in restore order. It might seem\nsilly to parallel restore a single table but remember that pg_restore\nalso creates indexes in parallel and on a typical development\nworkstation with a few CPU cores and an SSD it'll be a substantial\nimprovement. There are probably some other corner cases where you can\nget lucky with the offset-less dump and it'll work. That's why my gut\ninstinct was to warn instead of fail.\n\n> If it *is* seekable, could we\n> make _PrintTocData rewind if it gets to EOF using ftello(SEEK_SET, 0)\n> and re-scan again from the beginning? Would you want to try that ?\n\nI will try this and report back. 
I will also see if I can get an strace.\n\n-- \nDavid Gilman\n:DG<\n\n\n", "msg_date": "Wed, 20 May 2020 08:55:23 -0400", "msg_from": "David Gilman <davidgilman1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "David Gilman <davidgilman1@gmail.com> writes:\n>> I think the PG11\n>> commit you mentioned (548e5097) happens to make some databases fail in\n>> parallel restore that previously worked (I didn't check).\n\n> Correct, if you do the bisect around that yourself you'll see\n> pg_restore start failing with the expected \"possibly due to\n> out-of-order restore request\" on offset-less dumps.\n\nYeah. Now, the whole point of that patch was to decouple the restore\norder from the dump order ... but with an offset-less dump file, we\ncan't do that, or at least the restore order is greatly constrained.\nI wonder if it'd be sensible for pg_restore to use a different parallel\nscheduling algorithm if it notices that the input lacks offsets.\n(There could still be some benefit from parallelism, just not as much.)\nNo idea if this is going to be worth the trouble, but it probably\nis worth looking into.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 10:48:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I did some more digging. To keep everyone on the same page there are\nfour different ways to order TOCs:\n\n1. topological order,\n2. dataLength order, size of the table, is always zero when pg_dump can't seek,\n3. dumpId order, which should be thought as random but roughly\ncorrelates to topological order to make things fun,\n4. 
file order, the order that tables are physically stored in the\ncustom dump file.\n\nWithout being able to seek backwards a parallel restore of the custom\ndump archive format has to be ordered by #1 and #4. The reference\ncounting that reduce_dependencies does inside of the parallel restore\nlogic upholds ordering #1. Unfortunately, 548e50976ce changed\nTocEntrySizeCompare (which is used to break ties within #1) to order\nby #2, then by #3. This most often breaks on dumps written by pg_dump\nwithout seeks (everything has a dataLength of zero) as it then falls\nback to #3 ordering every time. But, because nothing in pg_restore\ndoes any ordering by #4 you could potentially run into this with any\ncustom dump so I think it's a regression.\n\nFor some troubleshooting I changed ready_list_sort to never call\nqsort. This fixes the problem by never ordering by #3, leaving things\nin #4 order, but breaks the new algorithm introduced in 548e50976ce.\n\nI did what Justin suggested earlier and it works great. Parallel\nrestore requires seekable input (enforced elsewhere) so everyone's\nparallel restores should work again.\n\nOn Wed, May 20, 2020 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Gilman <davidgilman1@gmail.com> writes:\n> >> I think the PG11\n> >> commit you mentioned (548e5097) happens to make some databases fail in\n> >> parallel restore that previously worked (I didn't check).\n>\n> > Correct, if you do the bisect around that yourself you'll see\n> > pg_restore start failing with the expected \"possibly due to\n> > out-of-order restore request\" on offset-less dumps.\n>\n> Yeah. Now, the whole point of that patch was to decouple the restore\n> order from the dump order ... 
but with an offset-less dump file, we\n> can't do that, or at least the restore order is greatly constrained.\n> I wonder if it'd be sensible for pg_restore to use a different parallel\n> scheduling algorithm if it notices that the input lacks offsets.\n> (There could still be some benefit from parallelism, just not as much.)\n> No idea if this is going to be worth the trouble, but it probably\n> is worth looking into.\n>\n> regards, tom lane\n\n\n\n-- \nDavid Gilman\n:DG<", "msg_date": "Wed, 20 May 2020 23:05:01 -0400", "msg_from": "David Gilman <davidgilman1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I've rounded this patch out with a test and I've set up the commitfest\nwebsite for this thread. The latest patches are attached and I think\nthey are ready for review.\n\nOn Wed, May 20, 2020 at 11:05 PM David Gilman <davidgilman1@gmail.com> wrote:\n>\n> I did some more digging. To keep everyone on the same page there are\n> four different ways to order TOCs:\n>\n> 1. topological order,\n> 2. dataLength order, size of the table, is always zero when pg_dump can't seek,\n> 3. dumpId order, which should be thought as random but roughly\n> correlates to topological order to make things fun,\n> 4. file order, the order that tables are physically stored in the\n> custom dump file.\n>\n> Without being able to seek backwards a parallel restore of the custom\n> dump archive format has to be ordered by #1 and #4. The reference\n> counting that reduce_dependencies does inside of the parallel restore\n> logic upholds ordering #1. Unfortunately, 548e50976ce changed\n> TocEntrySizeCompare (which is used to break ties within #1) to order\n> by #2, then by #3. This most often breaks on dumps written by pg_dump\n> without seeks (everything has a dataLength of zero) as it then falls\n> back to #3 ordering every time. 
But, because nothing in pg_restore\n> does any ordering by #4 you could potentially run into this with any\n> custom dump so I think it's a regression.\n>\n> For some troubleshooting I changed ready_list_sort to never call\n> qsort. This fixes the problem by never ordering by #3, leaving things\n> in #4 order, but breaks the new algorithm introduced in 548e50976ce.\n>\n> I did what Justin suggested earlier and it works great. Parallel\n> restore requires seekable input (enforced elsewhere) so everyone's\n> parallel restores should work again.\n>\n> On Wed, May 20, 2020 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Gilman <davidgilman1@gmail.com> writes:\n> > >> I think the PG11\n> > >> commit you mentioned (548e5097) happens to make some databases fail in\n> > >> parallel restore that previously worked (I didn't check).\n> >\n> > > Correct, if you do the bisect around that yourself you'll see\n> > > pg_restore start failing with the expected \"possibly due to\n> > > out-of-order restore request\" on offset-less dumps.\n> >\n> > Yeah. Now, the whole point of that patch was to decouple the restore\n> > order from the dump order ... 
but with an offset-less dump file, we\n> > can't do that, or at least the restore order is greatly constrained.\n> > I wonder if it'd be sensible for pg_restore to use a different parallel\n> > scheduling algorithm if it notices that the input lacks offsets.\n> > (There could still be some benefit from parallelism, just not as much.)\n> > No idea if this is going to be worth the trouble, but it probably\n> > is worth looking into.\n> >\n> > regards, tom lane\n>\n>\n>\n> --\n> David Gilman\n> :DG<\n\n\n\n-- \nDavid Gilman\n:DG<", "msg_date": "Sat, 23 May 2020 15:54:30 -0400", "msg_from": "David Gilman <davidgilman1@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "On Sat, May 23, 2020 at 03:54:30PM -0400, David Gilman wrote:\n> I've rounded this patch out with a test and I've set up the commitfest\n> website for this thread. The latest patches are attached and I think\n> they are ready for review.\n\nThanks. https://commitfest.postgresql.org/28/2568/\nI'm not sure this will be considered a bugfix, since the behavior is known.\nMaybe eligible for backpatch though (?)\n\nYour patch was encoded, so this is failing:\nhttp://cfbot.cputube.org/david-gilman.html\n\nIdeally CFBOT would deal with that (maybe by using git-am - adding Thomas), but\nI think you sent using gmail web interface, which also reordered the patches.\n(CFBOT *does* sort them, but it's a known annoyance).\n\n> dump file was written with data offsets pg_restore can seek directly to\n\noffsets COMMA\n\n> pg_restore would only find the TOC if it happened to be immediately\n\n\"immediately\" is wrong, no ? 
I thought the problem was if we seeked to D and\nthen looked for C, we wouldn't attempt to go backwards.\n\n> read request only when restoring a custom dump file without data offsets.\n\nremove \"only\"\n\n> of a bunch of extra tiny reads when pg_restore starts up.\n\nI would have thought to mention the seeks() ; but it's true that the read()s now\ngrow quadratically. I did run a test, but I don't know how many objects would\nbe unreasonable or how many it'd take to show a problem.\n\nMaybe we should avoid fseeko(0, SEEK_SET) unless we need to wrap around after\nEOF - I'm not sure.\n\nMaybe the cleanest way would be to pre-populate a structure with all the TOC\ndata and loop around that instead of seeking around the file ? Can we use the\nsame structure as pg_dump ?\n\nOtherwise, that makes me think of commit 42f70cd9c. Make it's not a good\nparallel or example for this case, though.\n\n+ The custom archive format may not work with the <option>-j</option>\n+ option if the archive was originally created by writing the archive\n+ to an unseekable output file. 
For the best concurrent restoration\n\nCan I suggest something like: pg_restore with parallel jobs may fail if the\narchive dump was written to an unseekable output stream, like stdout.\n\n+\t\t\t * If the input file can't be seeked we're at the mercy of the\n\nseeked COMMA\n\n>Subject: [PATCH 3/3] Add integration test for out-of-order TOC requests in pg_restore\n\nWell done - thanks for that.\n\n>Also add undocumented --disable-seeking argument to pg_dump to emulate\n>writing to an unseekable output file.\n\nRemove \"also\".\n\nIs it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?\n\nMaybe that would involve changing the test process to use the shell (system() vs\nexecve()), or maybe you could write:\n\n/* sh handles output redirection and arg splitting */\n'sh', '-c', 'pg_dump -Fc -Z6 --no-sync --disable-seeking postgres > $tempdir/defaults_custom_format_no_seek_parallel_restore.dump',\n\nBut I think that would need to then separately handle WIN32, so maybe it's not\nworth it.\n\n\n", "msg_date": "Sat, 23 May 2020 17:47:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "Updated patches are attached, I ditched the gmail web interface so\nhopefully this works.\n\nNot mentioned in Justin's feedback: I dropped the extra sort in the test\nas it's no longer necessary. I also added a parallel dump -> parallel\nrestore -> dump test run for the directory format to get some free test\ncoverage.\n\nOn Sat, May 23, 2020 at 05:47:51PM -0500, Justin Pryzby wrote:\n> I'm not sure this will be considered a bugfix, since the behavior is known.\n> Maybe eligible for backpatch though (?)\n\nI'm not familiar with how your release management works, but I'm\npersonally fine with whatever version you can get it into. I urge you to\ntry landing this as soon as possible. 
The minimum reproducible example\nin the test case is very minimal and I imagine all real world databases\nare going to trigger this.\n\n> I would have thought to mention the seeks() ; but it's true that the read()s now\n> grow quadratically. I did run a test, but I don't know how many objects would\n> be unreasonable or how many it'd take to show a problem.\n\nAnd I misunderstood how bad it was. I thought it was reading little\nheader structs off the disk but it's actually reading the entire table\n(see _skipData). So you're quadratically rereading entire tables and\nthrashing your cache. Oops.\n\n> Maybe we should avoid fseeko(0, SEEK_SET) unless we need to wrap around after\n> EOF - I'm not sure.\n\nThe seek location is already the location of the end of the last good\nobject so just adding wraparound gives the good algorithmic performance\nfrom the technique in commit 42f70cd9c. I’ve gone ahead and implemented\nthis.\n\n> Is it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?\n\nThe underlying IPC::Run code seems to support piping in a cross-platform\nway. I am not a Perl master though and after spending an evening trying\nto get it to work I went with this approach. If you can put me in touch\nwith anyone to help me out here I'd appreciate it.\n\n-- \nDavid Gilman :DG<\nhttps://gilslotd.com", "msg_date": "Mon, 25 May 2020 13:54:29 -0500", "msg_from": "David Gilman <dgilman@gilslotd.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "The earlier patches weren't applying because I had \"git config\ndiff.noprefix true\" set globally and that was messing up the git\nformat-patch output.\n\nOn Mon, May 25, 2020 at 01:54:29PM -0500, David Gilman wrote:\n> And I misunderstood how bad it was. I thought it was reading little\n> header structs off the disk but it's actually reading the entire table\n> (see _skipData). 
So you're quadratically rereading entire tables and\n> thrashing your cache. Oops.\n\nI changed _skipData to fseeko() instead of fread() when possible to cut\ndown on this thrashing further.\n\n-- \nDavid Gilman :DG<\nhttps://gilslotd.com", "msg_date": "Mon, 25 May 2020 16:55:26 -0500", "msg_from": "David Gilman <dgilman@gilslotd.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I've attached the latest patches after further review from Justin Pryzby.\n\n-- \nDavid Gilman :DG<\nhttps://gilslotd.com", "msg_date": "Wed, 27 May 2020 19:33:47 -0500", "msg_from": "David Gilman <dgilman@gilslotd.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "On Mon, May 25, 2020 at 01:54:29PM -0500, David Gilman wrote:\n> > Is it possible to dump to stdout (or pipe to cat or dd) to avoid a new option ?\n> \n> The underlying IPC::Run code seems to support piping in a cross-platform\n> way. I am not a Perl master though and after spending an evening trying\n> to get it to work I went with this approach. If you can put me in touch\n> with anyone to help me out here I'd appreciate it.\n\nI think you can do what's needed like so:\n\n--- a/src/bin/pg_dump/t/002_pg_dump.pl\n+++ b/src/bin/pg_dump/t/002_pg_dump.pl\n@@ -152,10 +152,13 @@ my %pgdump_runs = (\n },\n defaults_custom_format_no_seek_parallel_restore => {\n test_key => 'defaults',\n- dump_cmd => [\n- 'pg_dump', '-Fc', '-Z6', '--no-sync', '--disable-seeking',\n+ dump_cmd => (\n+ [\n+ 'pg_dump', '-Fc', '-Z6', '--no-sync',\n \"--file=$tempdir/defaults_custom_format_no_seek_parallel_restore.dump\", 'postgres',\n- ],\n+ ],\n+ \"|\", [ \"cat\" ], # disable seeking\n+ ),\n\nAlso, these are failing intermittently:\n\nt/002_pg_dump.pl .............. 1649/6758\n# Failed test 'defaults_custom_format_no_seek_parallel_restore: should dump GRANT SELECT (proname ...) 
ON TABLE pg_proc TO public'\n# at t/002_pg_dump.pl line 3635.\n# Review defaults_custom_format_no_seek_parallel_restore results in /var/lib/pgsql/postgresql.src/src/bin/pg_dump/tmp_check/tmp_test_NqRC\n\nt/002_pg_dump.pl .............. 2060/6758 \n# Failed test 'defaults_dir_format_parallel: should dump GRANT SELECT (proname ...) ON TABLE pg_proc TO public'\n# at t/002_pg_dump.pl line 3635.\n# Review defaults_dir_format_parallel results in /var/lib/pgsql/postgresql.src/src/bin/pg_dump/tmp_check/tmp_test_NqRC\n\nIf you can address those, I think this will be \"ready for committer\".\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 13 Jun 2020 17:51:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "Adding Jim, since he ask about helping with perl.\n\nYou can read the history of the patch here:\n\nhttps://commitfest.postgresql.org/28/2568/\nhttps://www.postgresql.org/message-id/flat/CALBH9DDuJ+scZc4MEvw5uO-=vRyR2=QF9+Yh=3hPEnKHWfS81A@mail.gmail.com\n\nSome context:\n\nDavid is adding a test case for a bugfix he made.\nWe want to test parallel pg_restore of a pg_dump created with a nonseekable FD.\nI suggested to make the patch smaller by creating a nonseekable FD using a pipe.\n\nThe postgres test code is using IPC::Run, but currently passing a single\nargument. I think we want to change that to pass *multiple* arguments, so we\ncan use '>' and/or '|'. I have a patch which partially works, and another patch\nwhich I didn't try very hard to make work, since the first part already took me\na long time..\n\nYou'll want to start with a git checkout and do:\ntime ./configure --enable-tap-tests ...\n\nAnd apply David's patches from the above thread. The goal is to make a test\ncase that fails without his patch and passes with it. 
And maybe apply my\npatches if they're useful.\n\nI've been running the pg_dump checks like this:\ntime make -C src/bin/pg_dump check\n\nOn Sun, Jun 21, 2020 at 02:42:25PM -0500, Justin Pryzby wrote:\n> On Sun, Jun 21, 2020 at 03:18:58PM -0400, David Gilman wrote:\n> > Thank you for taking a stab at the perl thing. I took the question to\n> > StackOverflow, I haven't yet looped back to try their suggestion but I\n> > think there is hope by messing with the Perl references.\n> > https://stackoverflow.com/questions/62086173/using-the-right-perl-array-references-with-ipcrun\n> \n> I finally got this to work using IPC:Run's '>' redirection operator, but it\n> seems like that opens a file which *is* seekable, so doesn't work for this\n> purpose. Since \"cat\" isn't portable (duh), the next best thing seems to be to\n> pipe to perl -pe '' >ouput. Otherwise maybe your way of adding an\n> --disable-seeking option is best.\n> \n> See if you can do anything with the attached.\n\n[fixed patch I sent to David offlist]\n\n[and another patch which doesn't work yet]\n\n-- \nJustin\n\nPS. The patches are named *.txt so that the patch tester doesn't try to test\nthem, as they're known to be incomplete.", "msg_date": "Mon, 22 Jun 2020 13:40:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "David Gilman <dgilman@gilslotd.com> writes:\n> I've attached the latest patches after further review from Justin Pryzby.\n\nI guess I'm completely confused about the purpose of these patches.\nFar from coping with the situation of an unseekable file, they appear\nto change pg_restore so that it fails altogether if it can't seek\nits input file. 
Why would we want to do this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Jul 2020 17:25:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "On Thu, Jul 02, 2020 at 05:25:21PM -0400, Tom Lane wrote:\n> I guess I'm completely confused about the purpose of these patches.\n> Far from coping with the situation of an unseekable file, they appear\n> to change pg_restore so that it fails altogether if it can't seek\n> its input file. Why would we want to do this?\n\nI'm not sure where the \"fails altogether if it can't seek\" is. The\n\"Skip tables in pg_restore\" patch retains the old fread() logic. The\n--disable-seeking stuff was just to support tests, and thanks to\nhelp from Justin Pryzby the tests no longer require it. I've attached\nthe updated patch set.\n\nNote that this still shouldn't be merged because of Justin's bug report\nin 20200706050129.GW4107@telsasoft.com which is unrelated to this change\nbut will leave you with flaky CI until it's fixed.\n\n-- \nDavid Gilman :DG<\nhttps://gilslotd.com", "msg_date": "Tue, 7 Jul 2020 22:19:35 -0500", "msg_from": "David Gilman <dgilman@gilslotd.com>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "David Gilman <dgilman@gilslotd.com> writes:\n> On Thu, Jul 02, 2020 at 05:25:21PM -0400, Tom Lane wrote:\n>> I guess I'm completely confused about the purpose of these patches.\n>> Far from coping with the situation of an unseekable file, they appear\n>> to change pg_restore so that it fails altogether if it can't seek\n>> its input file. Why would we want to do this?\n\n> I'm not sure where the \"fails altogether if it can't seek\" is.\n\nI misread the patch, is where :-(\n\nAs penance, I spent some time studying this patchset, and have a few\ncomments:\n\n1. 
The proposed doc change in 0001 seems out-of-date; isn't it adding a\nwarning about exactly the deficiency that the rest of the patch is\neliminating? Note that the preceding para already says that the input\nhas to be seekable, so that's covered. Maybe there is reason for\ndocumenting that parallel restore will be slower if the archive was\nwritten in a non-seekable way ... but that's not what this says.\n\n2. It struck me that the patch is still pretty inefficient, in that\nanytime it has to back up in an offset-less archive, it blindly rewinds\nto dataStart and rescans everything. In the worst case that'd still be\nO(N^2) work, and it's really not necessary, because once we've seen a\ngiven data block we know where it is. We just have to remember that,\nwhich seems easy enough. (Well, on Windows it's a bit trickier because\nthe state in question is shared across threads; but that's good, it might\nsave some work.)\n\n3. Extending on #2, we actually don't need the rewind and retry logic\nat all. If we are looking for a block we haven't already seen, and we\nget to the end of the archive, it ain't there. (This is a bit less\nobvious in the Windows case than otherwise, but I think it's still true,\ngiven that the start state is either \"all offsets known\" or \"no offsets\nknown\". A particular thread might skip over some blocks on the strength\nof an offset established by another thread, but the blocks ahead of that\nspot must now all have known offsets.)\n\n4. Patch 0002 seems mighty expensive for the amount of code coverage\nit's adding. On my machine it seems to raise the overall runtime of\npg_dump's \"make installcheck\" by about 10%, and the only new coverage\nis of the few lines added here. I wonder if we couldn't cover that\nmore cheaply by testing what happens when we use a \"-L\" option with\nan intentionally mis-sorted restore list.\n\n5. I'm inclined to reject 0003. 
It's not saving anything very meaningful,\nand we'd just have to put the code back whenever somebody gets around\nto making pg_backup_tar.c capable of out-of-order restores like\npg_backup_custom.c is now able to do.\n\nThe attached 0001 rewrites your 0001 as per the above ideas (dropping\nthe proposed doc change for now), and includes your 0004 for simplicity.\nI'm including your 0002 verbatim just so the cfbot will be able to do a\nmeaningful test on 0001; but as stated, I don't really want to commit it.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 12 Jul 2020 15:57:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I wrote:\n> The attached 0001 rewrites your 0001 as per the above ideas (dropping\n> the proposed doc change for now), and includes your 0004 for simplicity.\n> I'm including your 0002 verbatim just so the cfbot will be able to do a\n> meaningful test on 0001; but as stated, I don't really want to commit it.\n\nI spent some more time testing this, by trying to dump and restore the\ncore regression database. I immediately noticed that I sometimes got\n\"ftell mismatch with expected position -- ftell used\" warnings, though\nit was a bit variable depending on the -j level. The reason was fairly\napparent on looking at the code: we had various fseeko() calls in code\npaths that did not bother to correct ctx->filePos afterwards. 
In fact,\n*none* of the four existing fseeko calls in pg_backup_custom.c did so.\nIt's fairly surprising that that hadn't caused a problem up to now.\n\nI started to add adjustments of ctx->filePos after all the fseeko calls,\nbut then began to wonder why we don't just rip the variable out entirely.\nThe only places where we need it are to set dataPos for data blocks,\nbut that's an entirely pointless activity if we don't have seek\ncapability, because we're not going to be able to rewrite the TOC\nto emit the updated values.\n\nHence, the 0000 patch attached rips out ctx->filePos, and then\n0001 is the currently-discussed patch rebased on that. I also added\nan additional refinement, which is to track the furthest point we've\nscanned to while looking for data blocks in an offset-less file.\nIf we have seek capability, then when we need to resume looking for\ndata blocks we can search forward from that spot rather than wherever\nwe happened to have stopped at. This fixes an additional source\nof potentially-O(N^2) behavior if we have to restore blocks in a\nvery out-of-order fashion. I'm not sure that it makes much difference\nin common cases, but with this we can say positively that we don't\nscan the same block more than once per worker process.\n\nI'm still unhappy about the proposed test case (0002), but now\nI have a more concrete reason for that: it didn't catch this bug,\nso the coverage is still pretty miserable.\n\nDump-and-restore-the-regression-database used to be a pretty common\nmanual test for pg_dump, but we never got around to automating it,\npossibly because we figured that the pg_upgrade test script covers\nthat ground. 
It's becoming gruesomely clear that pg_upgrade is a\ndistinct operating mode that doesn't necessarily have the same bugs.\nSo I'm inclined to feel that what we ought to do is automate a test\nof that sort; but first we'll have to fix the existing bugs described\nat [1][2].\n\nGiven the current state of affairs, I'm inclined to commit the\nattached with no new test coverage, and then come back and look\nat better testing after the other bugs are dealt with.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3169466.1594841366%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/3170626.1594842723%40sss.pgh.pa.us", "msg_date": "Wed, 15 Jul 2020 16:40:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" }, { "msg_contents": "I wrote:\n> Given the current state of affairs, I'm inclined to commit the\n> attached with no new test coverage, and then come back and look\n> at better testing after the other bugs are dealt with.\n\nPushed back to v12.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Jul 2020 13:05:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Warn when parallel restoring a custom dump without data offsets" } ]
[ { "msg_contents": "I'm the creator of the PostgreSQL driver pgx (https://github.com/jackc/pgx)\nfor the Go language. I have found significant performance advantages to\nusing the extended protocol and binary format values -- in particular for\ntypes such as timestamptz.\n\nHowever, I was recently very surprised to find that it is significantly\nslower to select a text type value in the binary format. For an example\ncase of selecting 1,000 rows each with 5 text columns of 16 bytes each the\napplication time from sending the query to having received the entire\nresponse is approximately 16% slower. Here is a link to the test benchmark:\nhttps://github.com/jackc/pg_text_binary_bench\n\nGiven that the text and binary formats for the text type are identical I\nwould not have expected any performance differences.\n\n My C is rusty and my knowledge of the PG server internals is minimal but\nthe performance difference appears to be that function textsend creates an\nextra copy where textout simply returns a pointer to the existing data.\nThis seems to be superfluous.\n\nI can work around this by specifying the format per result column instead\nof specifying binary for all but this performance bug / anomaly seemed\nworth reporting.\n\nJack", "msg_date": "Sat, 16 May 2020 20:12:27 -0500", "msg_from": "Jack Christensen <jack@jncsoftware.com>", "msg_from_op": true, "msg_subject": "Performance penalty when requesting text values in binary format" }, { "msg_contents": "On Sat, 2020-05-16 at 20:12 -0500, Jack Christensen wrote:\n> I'm the creator of the PostgreSQL driver pgx (https://github.com/jackc/pgx) for the Go language.\n> I have found significant performance advantages to using the extended protocol and binary format\n> values -- in particular for types such as timestamptz.\n> \n> However, I was recently very surprised to find that it is significantly slower to select a text\n> type value in the binary format. For an example case of selecting 1,000 rows each with 5 text\n> columns of 16 bytes each the application time from sending the query to having received the\n> entire response is approximately 16% slower. Here is a link to the test benchmark:\n> https://github.com/jackc/pg_text_binary_bench\n> \n> Given that the text and binary formats for the text type are identical I would not have\n> expected any performance differences.\n> \n> My C is rusty and my knowledge of the PG server internals is minimal but the performance\n> difference appears to be that function textsend creates an extra copy where textout\n> simply returns a pointer to the existing data. 
This seems to be superfluous.\n> \n> I can work around this by specifying the format per result column instead of specifying\n> binary for all but this performance bug / anomaly seemed worth reporting.\n\nDid you profile your benchmark?\nIt would be interesting to know where the time is spent.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 18 May 2020 14:07:52 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Performance penalty when requesting text values in binary format" }, { "msg_contents": "On Mon, May 18, 2020 at 7:07 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> Did you profile your benchmark?\n> It would be interesting to know where the time is spent.\n>\n\nUnfortunately, I have not. Fortunately, it appears that Tom Lane recognized\nthis as a part of another issue and has prepared a patch.\n\nhttps://www.postgresql.org/message-id/6648.1589819885%40sss.pgh.pa.us\n\nThanks,\nJack", "msg_date": "Tue, 19 May 2020 09:40:19 -0500", "msg_from": "Jack Christensen <jack@jncsoftware.com>", "msg_from_op": true, "msg_subject": "Re: Performance penalty when requesting text values in binary format" } ]
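The wire-level point in the thread above — that a text value's text-format and binary-format encodings are byte-identical, so requesting binary buys nothing for text columns — can be sketched as follows. This is an illustrative Python sketch of the protocol's length-prefixed DataRow field framing, not pgx or server code; the helper name `encode_field` is invented for the example.

```python
import struct

def encode_field(value: bytes) -> bytes:
    """Frame one DataRow field: a signed 32-bit big-endian length,
    followed by the raw payload bytes (PostgreSQL wire protocol v3)."""
    return struct.pack("!i", len(value)) + value

payload = "16-byte-text-val".encode("utf-8")  # exactly 16 bytes
# For type text, textout (text format) and textsend (binary format)
# produce the same byte string, so the framed field is identical
# whichever result format the client requests.
text_format_field = encode_field(payload)
binary_format_field = encode_field(payload)
assert text_format_field == binary_format_field
```

Running it shows both framings are the same 20 bytes (a 4-byte length header plus the 16-byte payload); the 16% difference measured in the benchmark therefore has to come from server-side work such as the extra copy in textsend, not from the bytes on the wire.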
[ { "msg_contents": "Hi hackers,\n\nAttached is a small patch for adding \"edit-and-execute-command\" readline \nsupport to psql. Bash has this concept and I miss it when using psql. It \nallows you to amend the current line in an editor by pressing \"v\" (when \nin vi mode) or \"C-x C-e\" (when in emacs mode). Those are the default \nbindings from bash although of course they can be amended in inputrc.\n\nMost of the patch is actually shifting \"do_edit\" from \"command.c\" to \n\"common.c\". There is a small amendment to that function to allow vi to \nlaunch at the correct column offset.\n\nI noticed that there is some logic in configure for detecting certain \nreadline functions. I assume this is for compatibility sake with \nlibedit/editline? Rather than testing for each rl_* function I hid the \nfunctionality behind HAVE_READLINE_READLINE_H .. don't know if this is \nacceptable?\n\n-Joe", "msg_date": "Mon, 18 May 2020 00:29:56 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add support to psql for edit-and-execute-command" }, { "msg_contents": "On Mon, May 18, 2020 at 1:30 AM Joe Wildish <joe@lateraljoin.com> wrote:\n\n>\n> Attached is a small patch for adding \"edit-and-execute-command\" readline\n> support to psql. Bash has this concept and I miss it when using psql. It\n> allows you to amend the current line in an editor by pressing \"v\" (when\n> in vi mode) or \"C-x C-e\" (when in emacs mode). Those are the default\n> bindings from bash although of course they can be amended in inputrc.\n>\n\nThe only difference from \\e is that you don't need to jump to the end of\ninput first, I guess?\n\n--\nAlex", "msg_date": "Mon, 18 May 2020 08:08:46 +0200", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support to psql for edit-and-execute-command" }, { "msg_contents": "On 18 May 2020, at 7:08, Oleksandr Shulgin wrote:\n\n> The only difference from \\e is that you don't need to jump to the end \n> of\n> input first, I guess?\n\nAIUI, \\e will edit the last thing in history or a specific line number \nfrom history, whereas the patch will allow the current line to be \nedited. That is 99% of the time what I want.\n\nMy work flow is typically \"Run some queries\" => \"Go back to some recent \nquery by searching history, often not the most recent\" => \"Edit query\". \nTo do the edit in an editor (without the patch), I've been deliberately \nnobbling the query once found from history. This allows it to execute \n(and fail) but places it as the most recent thing in history. Then I hit \n\"\\e\".\n\n-Joe", "msg_date": "Mon, 18 May 2020 11:04:56 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add support to psql for edit-and-execute-command" }, { "msg_contents": "po 18. 5. 2020 v 12:05 odesílatel Joe Wildish <joe@lateraljoin.com> napsal:\n\n> On 18 May 2020, at 7:08, Oleksandr Shulgin wrote:\n>\n> The only difference from \\e is that you don't need to jump to the end of\n> input first, I guess?\n>\n> AIUI, \\e will edit the last thing in history or a specific line number\n> from history, whereas the patch will allow the current line to be edited.\n> That is 99% of the time what I want.\n>\n> My work flow is typically \"Run some queries\" => \"Go back to some recent\n> query by searching history, often not the most recent\" => \"Edit query\". To\n> do the edit in an editor (without the patch), I've been deliberately\n> nobbling the query once found from history. This allows it to execute (and\n> fail) but places it as the most recent thing in history. Then I hit \"\\e\".\n>\n\n\\e is working with not empty line too.You can check\n\nselect 1\\e\n\nYour patch just save skip on end line and \\e\n\nPersonally I think so it is good idea\n\nPavel\n\n\n-Joe\n>", "msg_date": "Mon, 18 May 2020 12:09:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support to psql for edit-and-execute-command" }, { "msg_contents": "On 18 May 2020, at 11:09, Pavel Stehule wrote:\n\n>\n> \\e is working with not empty line too.You can check\n>\n> select 1\\e\n>\n> Your patch just save skip on end line and \\e\n>\n> Personally I think so it is good idea\n\nThanks. I did not realise that \\e at the end of a line would edit that \nline. (although you do need to remove the terminator I notice).\n\n-Joe", "msg_date": "Mon, 18 May 2020 11:16:19 +0100", "msg_from": "\"Joe Wildish\" <joe@lateraljoin.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add support to psql for edit-and-execute-command" }, { "msg_contents": "po 18. 5. 2020 v 12:16 odesílatel Joe Wildish <joe@lateraljoin.com> napsal:\n\n> On 18 May 2020, at 11:09, Pavel Stehule wrote:\n>\n> \\e is working with not empty line too.You can check\n>\n> select 1\\e\n>\n> Your patch just save skip on end line and \\e\n>\n> Personally I think so it is good idea\n>\n> Thanks. I did not realise that \\e at the end of a line would edit that\n> line. (although you do need to remove the terminator I notice).\n>\n\nit is different method with little bit different usage - both method can be\nsupported\n\n\n-Joe\n>", "msg_date": "Mon, 18 May 2020 12:25:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add support to psql for edit-and-execute-command" } ]
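The \e-style workflow the thread compares against — dump the query buffer to a temporary file, run the user's editor on it, read the result back — can be sketched outside psql. This is a hypothetical Python sketch, not psql's C implementation; `edit_in_editor` and the stand-in editor command are invented for the example.

```python
import os
import subprocess
import sys
import tempfile

def edit_in_editor(query: str, editor_argv) -> str:
    """Write the query buffer to a temp file, run an editor on it, and
    return the edited contents -- the same shape as psql's \\e command."""
    fd, path = tempfile.mkstemp(suffix=".sql")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(query)
        subprocess.run(editor_argv + [path], check=True)
        with open(path) as f:
            return f.read()
    finally:
        os.unlink(path)

# Stand-in "editor" that appends to the file, so the sketch is runnable
# without an interactive vi/emacs session.
fake_editor = [sys.executable, "-c",
               "import sys; p = sys.argv[1]; s = open(p).read(); open(p, 'w').write(s + ' LIMIT 10')"]
edited = edit_in_editor("SELECT * FROM t", fake_editor)
print(edited)
```

With a real editor you would pass e.g. `["vi"]` and interact with it; the non-interactive stand-in only exists to keep the sketch testable. The readline binding in the patch ("v" / "C-x C-e") just triggers this same round trip on the in-progress line instead of requiring \e at the end of the input.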
[ { "msg_contents": "Hello:\n\nBefore I want to pay attention to some optimizer features, I want to\nestimate how much benefits it can create for customers, at least for our\ncurrent\nrunning customer. So I want to have some basic idea what kind of the query\nis\nrunning now in respect of optimizer.\n\n\nMy basic is we can track it with the below struct(every backend has one\nglobal\nvariable to record it).\n\n+typedef struct\n+{\n+ int subplan_count;\n+ int subquery_count;\n+ int join_count;\n+ bool hasagg;\n+ bool hasgroup;\n+} QueryCharacters;\n\nit will be reset at the beginning of standard_planner, and the values are\nincreased at make_subplan, set_subquery_pathlist, make_one_rel,\ncreate_grouping_paths. later it can be tracked and viewed in\npg_stat_statements.\n\n\nWhat do you think about the requirement and the method I am thinking? Any\nkind of feedback is welcome.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 18 May 2020 16:29:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Find query characters in respect of optimizer for develop purpose" }, { "msg_contents": "On Mon, May 18, 2020 at 1:30 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hello:\n>\n> Before I want to pay attention to some optimizer features, I want to\n> estimate how much benefits it can create for customers, at least for our\n> current\n> running customer. So I want to have some basic idea what kind of the query\n> is\n> running now in respect of optimizer.\n>\n>\nYou are imagining this to be collected during planning on a live\ncustomer system as a form of telemetry?\nI was inspired to search the hackers mailing list archive for the word\n\"telemetry\" and didn't get many hits, which surprised me.\n\n\n>\n> My basic is we can track it with the below struct(every backend has one\n> global\n> variable to record it).\n>\n> +typedef struct\n> +{\n> + int subplan_count;\n> + int subquery_count;\n> + int join_count;\n> + bool hasagg;\n> + bool hasgroup;\n> +} QueryCharacters;\n>\n> it will be reset at the beginning of standard_planner, and the values are\n> increased at make_subplan, set_subquery_pathlist, make_one_rel,\n> create_grouping_paths. later it can be tracked and viewed in\n> pg_stat_statements.\n>\n>\nI think the natural reaction to this idea is: isn't there a 3rd party\ntool that does this? Or can't you use one of the hooks and write an\nextension, to, for example, examine the parse,query,and plan trees?\n\nHowever, it does seem like keeping track of this information would be\nmuch easier during planning since planner will be examining the query\ntree and making the plan anyway.\n\nOn the other hand, I think that depends a lot on what specific\ninformation you want to collect. 
Out of the fields you listed, it is\nunclear what some of them would mean.\nDoes join_count count the number of explicit joins in the original query\nor does it count the number of joins in the final plan? Does\nsubquery_count count all sub-selects in the original query or does it\nonly count subqueries that become SubqueryScans or SubPlans? What about\nsubqueries that become InitPlans?\n\nOne concern I have is that it seems like this struct would have to be\nupdated throughout planning and that it would be easy to break it with\nthe addition of new code. Couldn't every new optimization added to\nplanner potentially affect the accuracy of the information in the\nstruct?\n\n-- \nMelanie Plageman", "msg_date": "Thu, 11 Jun 2020 14:29:15 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Find query characters in respect of optimizer for develop purpose" } ]
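To make the proposed counters concrete, here is a hypothetical, text-level sketch in Python of what a QueryCharacters summary could report for one query. The field names mirror the struct proposed above, but the regex heuristics are only illustrative — the thread's actual proposal increments these exactly inside the planner, and `sketch_characters` is an invented name.

```python
import re
from dataclasses import dataclass

@dataclass
class QueryCharacters:
    subplan_count: int = 0   # only knowable inside the planner (make_subplan)
    subquery_count: int = 0
    join_count: int = 0
    hasagg: bool = False
    hasgroup: bool = False

def sketch_characters(sql: str) -> QueryCharacters:
    """Crude text-level approximation of the proposed counters. The real
    proposal updates them in make_subplan, set_subquery_pathlist,
    make_one_rel and create_grouping_paths, where the counts are exact."""
    s = sql.lower()
    return QueryCharacters(
        subquery_count=max(s.count("select") - 1, 0),
        join_count=len(re.findall(r"\bjoin\b", s)),
        hasagg=bool(re.search(r"\b(count|sum|avg|min|max)\s*\(", s)),
        hasgroup="group by" in s,
    )

qc = sketch_characters("SELECT a, count(*) FROM t JOIN u ON t.i = u.i GROUP BY a")
print(qc)
```

A real implementation would sit behind a planner hook or directly in the planner rather than re-parsing SQL text, as Melanie's reply suggests — which also sidesteps the ambiguities she raises about whether the counts describe the original query or the final plan.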
[ { "msg_contents": "Attached diff fixes two small typos in the optimizer README.\n\ncheers ./daniel", "msg_date": "Mon, 18 May 2020 11:31:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Optimizer docs typos" }, { "msg_contents": "On Mon, May 18, 2020 at 11:31 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Attached diff fixes two small typos in the optimizer README.\n>\n\nPushed, thanks.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Mon, 18 May 2020 11:55:59 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Mon, May 18, 2020 at 6:56 PM Magnus Hagander <magnus@hagander.net> wrote:\n> On Mon, May 18, 2020 at 11:31 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Attached diff fixes two small typos in the optimizer README.\n\n> Pushed, thanks.\n\nThank you!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 18 May 2020 19:00:30 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "In this same README doc, another suspicious typo to me, which happens in\nsection \"Optimizer Functions\", is in the prefix to query_planner(),\nwe should have three dashes, rather than two, since query_planner() is\ncalled within grouping_planner().\n\ndiff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\nindex 7dcab9a..bace081 100644\n--- a/src/backend/optimizer/README\n+++ b/src/backend/optimizer/README\n@@ -315,7 +315,7 @@ set up for recursive handling 
of subqueries\n preprocess target list for non-SELECT queries\n handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,\n ORDER BY, DISTINCT, LIMIT\n---query_planner()\n+---query_planner()\n make list of base relations used in query\n split up the qual into restrictions (a=1) and joins (b=c)\n find qual clauses that enable merge and hash joins\n\nThanks\nRichard\n\nOn Mon, May 18, 2020 at 6:00 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Mon, May 18, 2020 at 6:56 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> > On Mon, May 18, 2020 at 11:31 AM Daniel Gustafsson <daniel@yesql.se>\n> wrote:\n> >> Attached diff fixes two small typos in the optimizer README.\n>\n> > Pushed, thanks.\n>\n> Thank you!\n>\n> Best regards,\n> Etsuro Fujita\n>\n>\n>\n", "msg_date": "Mon, 18 
May 2020 18:45:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Mon, May 18, 2020 at 7:45 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> In this same README doc, another suspicious typo to me, which happens in\n> section \"Optimizer Functions\", is in the prefix to query_planner(),\n> we should have three dashes, rather than two, since query_planner() is\n> called within grouping_planner().\n>\n> diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\n> index 7dcab9a..bace081 100644\n> --- a/src/backend/optimizer/README\n> +++ b/src/backend/optimizer/README\n> @@ -315,7 +315,7 @@ set up for recursive handling of subqueries\n> preprocess target list for non-SELECT queries\n> handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,\n> ORDER BY, DISTINCT, LIMIT\n> ---query_planner()\n> +---query_planner()\n> make list of base relations used in query\n> split up the qual into restrictions (a=1) and joins (b=c)\n> find qual clauses that enable merge and hash joins\n\nYeah, you are right. 
Another one would be in the prefix to\nstandard_join_search(); I think it might be better to have six dashes,\nrather than five, because standard_join_search() is called within\nmake_rel_from_joinlist().\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 19 May 2020 19:35:56 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Tue, May 19, 2020 at 7:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Mon, May 18, 2020 at 7:45 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> > In this same README doc, another suspicious typo to me, which happens in\n> > section \"Optimizer Functions\", is in the prefix to query_planner(),\n> > we should have three dashes, rather than two, since query_planner() is\n> > called within grouping_planner().\n> >\n> > diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\n> > index 7dcab9a..bace081 100644\n> > --- a/src/backend/optimizer/README\n> > +++ b/src/backend/optimizer/README\n> > @@ -315,7 +315,7 @@ set up for recursive handling of subqueries\n> > preprocess target list for non-SELECT queries\n> > handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,\n> > ORDER BY, DISTINCT, LIMIT\n> > ---query_planner()\n> > +---query_planner()\n> > make list of base relations used in query\n> > split up the qual into restrictions (a=1) and joins (b=c)\n> > find qual clauses that enable merge and hash joins\n>\n> Yeah, you are right. Another one would be in the prefix to\n> standard_join_search(); I think it might be better to have six dashes,\n> rather than five, because standard_join_search() is called within\n> make_rel_from_joinlist().\n\nHere is a patch including the change I proposed. 
(Yet another thing I\nnoticed is the indent spaces for join_search_one_level(): that\nfunction is called within standard_join_search(), so it would be\nbetter to have one extra space, for consistency with others (eg,\nset_base_rel_pathlists() called from make_one_rel()), but that would\nbe too nitpicking.) This is more like an improvement, so I'll apply\nthe patch to HEAD only, if no objestions.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 20 May 2020 19:17:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "At Wed, 20 May 2020 19:17:48 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in \n> On Tue, May 19, 2020 at 7:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Mon, May 18, 2020 at 7:45 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> > > ---query_planner()\n> > > +---query_planner()\n> > > make list of base relations used in query\n> > > split up the qual into restrictions (a=1) and joins (b=c)\n> > > find qual clauses that enable merge and hash joins\n> >\n> > Yeah, you are right. Another one would be in the prefix to\n> > standard_join_search(); I think it might be better to have six dashes,\n> > rather than five, because standard_join_search() is called within\n> > make_rel_from_joinlist().\n> \n> Here is a patch including the change I proposed. (Yet another thing I\n> noticed is the indent spaces for join_search_one_level(): that\n> function is called within standard_join_search(), so it would be\n> better to have one extra space, for consistency with others (eg,\n> set_base_rel_pathlists() called from make_one_rel()), but that would\n> be too nitpicking.) This is more like an improvement, so I'll apply\n> the patch to HEAD only, if no objestions.\n\nThe original proposal for query_planner looks fine.\n\nThe description for make_rel_from_joinlist() and that for\nstandard_join_search() are at the same indentation depth. 
And it is\nalso strange that seemingly there is no line for level-5\nindentation. If we make standard_join_search() a 6th-hyphened level\nitem, indentation of the surrounding descriptions needs a fix.\n\n----make_one_rel()\n set_base_rel_pathlists()\n find seqscan and all index paths for each base relation\n find selectivity of columns used in joins\n make_rel_from_joinlist()\n hand off join subproblems to a plugin, GEQO, or standard_join_search()\n------standard_join_search()\n call join_search_one_level() for each level of join tree needed\n join_search_one_level():\n For each joinrel of the prior level, do make_rels_by_clause_joins()\n if it has join clauses, or make_rels_by_clauseless_joins() if not.\n\nLooking the description for make_rel_from_joinlist(), it seems to me\nthat the author is thinking that make_rel_from_joinlist() is mere the\ndistributor for join subproblems and isn't worth an indent level.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 21 May 2020 11:25:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Thu, May 21, 2020 at 11:25 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 20 May 2020 19:17:48 +0900, Etsuro Fujita <etsuro.fujita@gmail.com> wrote in\n> > Here is a patch including the change I proposed. (Yet another thing I\n> > noticed is the indent spaces for join_search_one_level(): that\n> > function is called within standard_join_search(), so it would be\n> > better to have one extra space, for consistency with others (eg,\n> > set_base_rel_pathlists() called from make_one_rel()), but that would\n> > be too nitpicking.) This is more like an improvement, so I'll apply\n> > the patch to HEAD only, if no objestions.\n\n> The description for make_rel_from_joinlist() and that for\n> standard_join_search() are at the same indentation depth. 
And it is\n> also strange that seemingly there is no line for level-5\n> indentation. If we make standard_join_search() a 6th-hyphened level\n> item, indentation of the surrounding descriptions needs a fix.\n>\n> ----make_one_rel()\n> set_base_rel_pathlists()\n> find seqscan and all index paths for each base relation\n> find selectivity of columns used in joins\n> make_rel_from_joinlist()\n> hand off join subproblems to a plugin, GEQO, or standard_join_search()\n> ------standard_join_search()\n> call join_search_one_level() for each level of join tree needed\n> join_search_one_level():\n> For each joinrel of the prior level, do make_rels_by_clause_joins()\n> if it has join clauses, or make_rels_by_clauseless_joins() if not.\n\nI don't think it's odd that we won't have 5-dashes indentation,\nbecause I think we have 5-spaces indentation for\nset_base_rel_pathlists() and make_rel_from_joinlist(). (I think the\ndash indentation of optimizer functions such as standard_join_search()\njust means that the dash-indented functions are more important\ncompared to other space-indented functions IMO.) 
My point is that we\nshould adjust the dash or space indentation so that a deeper level of\nindentation indicates that the outer optimizer function calls the\ninner optimizer function.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 21 May 2020 15:36:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Wed, May 20, 2020 at 7:17 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 19, 2020 at 7:35 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Mon, May 18, 2020 at 7:45 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> > > In this same README doc, another suspicious typo to me, which happens in\n> > > section \"Optimizer Functions\", is in the prefix to query_planner(),\n> > > we should have three dashes, rather than two, since query_planner() is\n> > > called within grouping_planner().\n> > >\n> > > diff --git a/src/backend/optimizer/README b/src/backend/optimizer/README\n> > > index 7dcab9a..bace081 100644\n> > > --- a/src/backend/optimizer/README\n> > > +++ b/src/backend/optimizer/README\n> > > @@ -315,7 +315,7 @@ set up for recursive handling of subqueries\n> > > preprocess target list for non-SELECT queries\n> > > handle UNION/INTERSECT/EXCEPT, GROUP BY, HAVING, aggregates,\n> > > ORDER BY, DISTINCT, LIMIT\n> > > ---query_planner()\n> > > +---query_planner()\n> > > make list of base relations used in query\n> > > split up the qual into restrictions (a=1) and joins (b=c)\n> > > find qual clauses that enable merge and hash joins\n> >\n> > Yeah, you are right. Another one would be in the prefix to\n> > standard_join_search(); I think it might be better to have six dashes,\n> > rather than five, because standard_join_search() is called within\n> > make_rel_from_joinlist().\n>\n> Here is a patch including the change I proposed. 
(Yet another thing I\n> noticed is the indent spaces for join_search_one_level(): that\n> function is called within standard_join_search(), so it would be\n> better to have one extra space, for consistency with others (eg,\n> set_base_rel_pathlists() called from make_one_rel()), but that would\n> be too nitpicking.) This is more like an improvement, so I'll apply\n> the patch to HEAD only, if no objections.\n\nDone.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 22 May 2020 15:53:21 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" }, { "msg_contents": "On Fri, May 22, 2020 at 2:53 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n>\n> Done.\n>\n\nThanks!\n\nThanks\nRichard", "msg_date": "Fri, 22 May 2020 16:29:42 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Optimizer docs typos" } ]
[ { "msg_contents": "The syntax for FETCH FIRST allows the <fetch first quantity> to be\nabsent (implying 1).\n\nWe implement this correctly for ONLY, but WITH TIES didn't get the memo.\n\nPatch attached.\n-- \nVik Fearing", "msg_date": "Mon, 18 May 2020 16:41:23 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Missing grammar production for WITH TIES" }, { "msg_contents": "On 2020-May-18, Vik Fearing wrote:\n\n> The syntax for FETCH FIRST allows the <fetch first quantity> to be\n> absent (implying 1).\n> \n> We implement this correctly for ONLY, but WITH TIES didn't get the memo.\n\nOops, yes. I added a test. Will get this pushed immediately after I\nsee beta1 produced.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 18 May 2020 13:03:23 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On 5/18/20 7:03 PM, Alvaro Herrera wrote:\n> On 2020-May-18, Vik Fearing wrote:\n> \n>> The syntax for FETCH FIRST allows the <fetch first quantity> to be\n>> absent (implying 1).\n>>\n>> We implement this correctly for ONLY, but WITH TIES didn't get the memo.\n> \n> Oops, yes. I added a test. Will get this pushed immediately after I\n> see beta1 produced.\n\nThanks!\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 18 May 2020 21:27:26 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On 2020-May-18, Alvaro Herrera wrote:\n\n> On 2020-May-18, Vik Fearing wrote:\n> \n> > The syntax for FETCH FIRST allows the <fetch first quantity> to be\n> > absent (implying 1).\n> > \n> > We implement this correctly for ONLY, but WITH TIES didn't get the memo.\n> \n> Oops, yes. I added a test. 
Will get this pushed immediately after I\n> see beta1 produced.\n\nDone. Thanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 19:30:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On Mon, May 18, 2020 at 07:30:32PM -0400, Alvaro Herrera wrote:\n> Done. Thanks!\n\nThis has been committed just after beta1 has been stamped. So it\nmeans that it won't be included in it, right?\n--\nMichael", "msg_date": "Tue, 19 May 2020 11:36:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On 5/19/20 4:36 AM, Michael Paquier wrote:\n> On Mon, May 18, 2020 at 07:30:32PM -0400, Alvaro Herrera wrote:\n>> Done. Thanks!\n> \n> This has been committed just after beta1 has been stamped. So it\n> means that it won't be included in it, right?\n\nCorrect.\n\nI don't know why there was a delay, but it also doesn't bother me.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 19 May 2020 05:32:50 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On 2020-May-19, Vik Fearing wrote:\n\n> On 5/19/20 4:36 AM, Michael Paquier wrote:\n>\n> > This has been committed just after beta1 has been stamped. So it\n> > means that it won't be included in it, right?\n> \n> Correct.\n\nRight.\n\n> I don't know why there was a delay, but it also doesn't bother me.\n\nI didn't want to risk breaking the buildfarm at the last minute. 
It'll\nbe there in beta2.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 23:42:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 18, 2020 at 07:30:32PM -0400, Alvaro Herrera wrote:\n>> Done. Thanks!\n\n> This has been committed just after beta1 has been stamped. So it\n> means that it won't be included in it, right?\n\nRight.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 00:41:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On Tue, May 19, 2020 at 12:41:39AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> This has been committed just after beta1 has been stamped. So it\n>> means that it won't be included in it, right?\n> \n> Right.\n\nStill, wouldn't it be better to wait until the version is tagged? My\nunderstanding is that we had better not commit anything on a branch\nplanned for release between the moment the version is stamped and the\nmoment the tag is pushed so as we have a couple of days to address any\ncomplaints from -packagers. Here, we are in a state where we have\nbetween the stamp time and tag time an extra commit not related to a\npackaging issue. So, if it happens that we have an issue from\n-packagers to address, then we would have to include c301c2e in the\nbeta1. 
Looking at the patch committed, that's not much of an issue,\nbut I think that we had better avoid that.\n--\nMichael", "msg_date": "Tue, 19 May 2020 15:20:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, May 19, 2020 at 12:41:39AM -0400, Tom Lane wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>> This has been committed just after beta1 has been stamped. So it\n>>> means that it won't be included in it, right?\n\n>> Right.\n\n> Still, wouldn't it be better to wait until the version is tagged?\n\nYeah, that would have been better per project protocol: if a tarball\nre-wrap becomes necessary then it would be messy not to include this\nchange along with fixing whatever urgent bug there might be.\n\nHowever, I thought the case for delaying this fix till post-wrap was kind\nof thin anyway, so if that does happen I won't be too fussed about it.\nOtherwise I would've said something earlier on this thread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 09:19:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" }, { "msg_contents": "On 2020-May-19, Tom Lane wrote:\n\n> Yeah, that would have been better per project protocol: if a tarball\n> re-wrap becomes necessary then it would be messy not to include this\n> change along with fixing whatever urgent bug there might be.\n> \n> However, I thought the case for delaying this fix till post-wrap was kind\n> of thin anyway, so if that does happen I won't be too fussed about it.\n> Otherwise I would've said something earlier on this thread.\n\nIn the end, it's a judgement call. In this case, my assessment was that\nthe risk was small enough that I could push after I saw the tarballs\nannounced. 
In other cases I've judged differently and waited for\nlonger. If the fix had been even simpler, I would have pushed it right\naway, but my confidence with grammar changes is not as high as I would\nlike.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 11:39:08 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing grammar production for WITH TIES" } ]
[ { "msg_contents": "There have been occasional discussions about deprecating or phasing out \npostfix operators, to make various things easier in the parser.\n\nThe first step would in any case be to provide alternatives for the \nexisting postfix operators. There is currently one, namely the numeric \nfactorial operator \"!\". A sensible alternative for that would be \nproviding a function factorial(numeric) -- and that already exists but \nis not documented. (Note that the operator is mapped to proname \n\"numeric_fac\". The function \"factorial\" maps to the same prosrc but is \notherwise independent of the operator.)\n\nSo I suggest that we add that function to the documentation.\n\n(Some adjacent cleanup work might also be in order. The test cases for \nfactorial are currently in int4.sql, but all the factorial functionality \nwas moved to numeric a long time ago.)\n\nWhat are the thoughts about then marking the postfix operator deprecated \nand eventually removing it?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 16:42:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "factorial function/phase out postfix operators?" }, { "msg_contents": "On 5/18/20 4:42 PM, Peter Eisentraut wrote:\n> There have been occasional discussions about deprecating or phasing out\n> postfix operators, to make various things easier in the parser.\n> \n> The first step would in any case be to provide alternatives for the\n> existing postfix operators.  There is currently one, namely the numeric\n> factorial operator \"!\".  A sensible alternative for that would be\n> providing a function factorial(numeric) -- and that already exists but\n> is not documented.  (Note that the operator is mapped to proname\n> \"numeric_fac\".  
The function \"factorial\" maps to the same prosrc but is\n> otherwise independent of the operator.)\n> \n> So I suggest that we add that function to the documentation.\n\nI think this should be done regardless.\n\n> (Some adjacent cleanup work might also be in order. The test cases for\n> factorial are currently in int4.sql, but all the factorial functionality\n> was moved to numeric a long time ago.)\n> \n> What are the thoughts about then marking the postfix operator deprecated\n> and eventually removing it?\n\nI am greatly in favor of removing postfix operators as soon as possible.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 18 May 2020 17:02:34 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Mon, May 18, 2020 at 05:02:34PM +0200, Vik Fearing wrote:\n> On 5/18/20 4:42 PM, Peter Eisentraut wrote:\n> > There have been occasional discussions about deprecating or phasing out\n> > postfix operators, to make various things easier in the parser.\n> > \n> > The first step would in any case be to provide alternatives for the\n> > existing postfix operators. There is currently one, namely the numeric\n> > factorial operator \"!\". A sensible alternative for that would be\n> > providing a function factorial(numeric) -- and that already exists but\n> > is not documented. (Note that the operator is mapped to proname\n> > \"numeric_fac\". The function \"factorial\" maps to the same prosrc but is\n> > 
and eventually removing it?\n> \n> I am greatly in favor of removing postfix operators as soon as possible.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 18 May 2020 11:39:34 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> What are the thoughts about then marking the postfix operator deprecated \n> and eventually removing it?\n\nIf we do this it'd require a plan. We'd have to also warn about the\nfeature deprecation in (at least) the CREATE OPERATOR man page, and\nwe'd have to decide how many release cycles the deprecation notices\nneed to stand for.\n\nIf that's the intention, though, it'd be good to get those deprecation\nnotices published in v13 not v14.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 May 2020 22:03:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Mon, May 18, 2020 at 10:03:13PM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > What are the thoughts about then marking the postfix operator deprecated \n> > and eventually removing it?\n> \n> If we do this it'd require a plan. 
We'd have to also warn about the\n> feature deprecation in (at least) the CREATE OPERATOR man page, and\n> we'd have to decide how many release cycles the deprecation notices\n> need to stand for.\n> \n> If that's the intention, though, it'd be good to get those deprecation\n> notices published in v13 not v14.\n\n+1 for deprecating in v13.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 19 May 2020 04:11:37 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On 5/19/20 4:03 AM, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> What are the thoughts about then marking the postfix operator deprecated \n>> and eventually removing it?\n> \n> If we do this it'd require a plan. We'd have to also warn about the\n> feature deprecation in (at least) the CREATE OPERATOR man page, and\n> we'd have to decide how many release cycles the deprecation notices\n> need to stand for.\n\nI have never come across any custom postfix operators in the wild, and\nI've never even seen ! used in practice.\n\nSo I would suggest a very short deprecation period. Deprecate now in\n13, let 14 go by, and rip it all out for 15. That should give us enough\ntime to extend the deprecation period if we need to, or go back on it\nentirely (like I seem to remember we did with VACUUM FREEZE).\n\n> If that's the intention, though, it'd be good to get those deprecation\n> notices published in v13 not v14.\n\n+1\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 19 May 2020 04:30:23 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Mon, May 18, 2020 at 10:42 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> What are the thoughts about then marking the postfix operator deprecated\n> and eventually removing it?\n\nI wrote a little bit about this last year:\n\nhttp://postgr.es/m/CA+TgmoarLfSQcLCh7jx0737SZ28qwbuy+rUWT6rSHAO=B-6xdw@mail.gmail.com\n\nI think it's generally a good idea, though perhaps we should consider\ncontinuing to allow '!' as a postfix operator and just removing\nsupport for any other. That would probably allow us to have a very\nshort deprecation period, since real-world use of user-defined postfix\noperators seems to be nil -- and it would also make this into a change\nthat only affects the lexer and parser, which might make it simpler.\n\nI won't lose a lot of sleep if we decide to rip out '!' as well, but I\ndon't think that continuing to support it would cost us much.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 08:27:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 2:27 PM Robert Haas <robertmhaas@gmail.com>\nwrote:\n\n> On Mon, May 18, 2020 at 10:42 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > What are the thoughts about then marking the postfix operator deprecated\n> > and eventually removing it?\n>\n> I wrote a little bit about this last year:\n>\n>\n> http://postgr.es/m/CA+TgmoarLfSQcLCh7jx0737SZ28qwbuy+rUWT6rSHAO=B-6xdw@mail.gmail.com\n>\n> I think it's generally a good idea, though perhaps we should consider\n> continuing to allow '!' as a postfix operator and just removing\n> support for any other. That would probably allow us to have a very\n> short deprecation period, since real-world use of user-defined postfix\n> operators seems to be nil -- and it would also make this into a change\n> that only affects the lexer and parser, which might make it simpler.\n>\n> I won't lose a lot of sleep if we decide to rip out '!' as well, but I\n> don't think that continuing to support it would cost us much.\n>\n\nThis is a little bit of an obscure feature. It can be removed relatively quickly.\nMaybe a warning if somebody uses it would be good (for Postgres 13).\n\nPavel\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>", "msg_date": "Tue, 19 May 2020 14:33:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> \n> I won't lose a lot of sleep if we decide to rip out '!' as well, but I\n> don't think that continuing to support it would cost us much.\n> \n+1 for keeping ! and nuking the rest, if possible.\n\nRegards,\nKen\n\n\n", "msg_date": "Tue, 19 May 2020 08:28:34 -0500", "msg_from": "Kenneth Marshall <ktm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it's generally a good idea, though perhaps we should consider\n> continuing to allow '!' as a postfix operator and just removing\n> support for any other.\n\nUh ... what exactly would be the point of that? The real reason to do\nthis at all is not that we have it in for '!', but that we want to\ndrop the possibility of postfix operators from the grammar altogether,\nwhich will remove a boatload of ambiguity.\n\n> I won't lose a lot of sleep if we decide to rip out '!' as well, but I\n> don't think that continuing to support it would cost us much.\n\nAFAICS, it would cost us the entire point of this change.\n\nIn my non-caffeinated state, I don't recall exactly which things are\nblocked by the existence of postfix ops; but I think for instance it might\nbecome possible to remove the restriction of requiring AS before column\naliases that happen to be unreserved keywords.\n\nIf we lobotomize CREATE OPERATOR but don't remove built-in postfix\nops, then none of those improvements will be available. 
That seems\nlike the worst possible choice.\n\nI would also argue that having a feature that is available to\nbuilt-in operators but not user-defined ones is pretty antithetical\nto Postgres philosophy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 09:51:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Uh ... what exactly would be the point of that? The real reason to do\n> this at all is not that we have it in for '!', but that we want to\n> drop the possibility of postfix operators from the grammar altogether,\n> which will remove a boatload of ambiguity.\n\nThe ambiguity doesn't come from the mere existence of postfix\noperators. It comes from the fact that, when we lex the input, we\ncan't tell whether a particular operator that we happen to encounter\nis prefix, infix, or postfix. So hard-coding, for example, a rule that\n'!' is always a postfix operator and anything else is never a postfix\noperator is sufficient to solve the key problems. Then \"SELECT a ! b\"\ncan only be a postfix operator application followed by a column\nlabeling, and \"SELECT a + b\" can only be the application of an infix\noperator.\n\nThe parser ambiguities could also be removed if the source of the\ninformation were a GUC or a catalog lookup; there are good reasons\nnot to go that way, but my point is that the problem is not that\npostfix operators are per se evil, but that the information we need is\nnot available at the right phase of the process. We can only make use\nof the information in pg_operator after we start assigning type\ninformation, which has to happen after we parse, but to avoid the\nambiguity here, we need the information before we parse - i.e. 
at the\nlexing stage.\n\n> In my non-caffeinated state, I don't recall exactly which things are\n> blocked by the existence of postfix ops; but I think for instance it might\n> become possible to remove the restriction of requiring AS before column\n> aliases that happen to be unreserved keywords.\n\nRight - which would be a huge win.\n\n> I would also argue that having a feature that is available to\n> built-in operators but not user-defined ones is pretty antithetical\n> to Postgres philosophy.\n\nThat I think is the policy question before us. I believe that any rule\nthat tells us which operators are postfix and which are not at the\nlexing stage is good enough. I think here you are arguing for the\nempty set, which will work, but I believe any other fixed set also\nworks, such as { '!' }. I don't think we're going to break a ton of\nuser code no matter which one we pick, but I do think that it's\npossible to pick either one and still achieve our goals here, so\nthat's the issue that I wanted to raise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 10:22:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On 5/19/20 4:22 PM, Robert Haas wrote:\n> On Tue, May 19, 2020 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Uh ... what exactly would be the point of that? The real reason to do\n>> this at all is not that we have it in for '!', but that we want to\n>> drop the possibility of postfix operators from the grammar altogether,\n>> which will remove a boatload of ambiguity.\n> \n> The ambiguity doesn't come from the mere existence of postfix\n> operators. It comes from the fact that, when we lex the input, we\n> can't tell whether a particular operator that we happen to encounter\n> is prefix, infix, or postfix. 
So hard-coding, for example, a rule that\n> '!' is always a postfix operator and anything else is never a postfix\n> operator is sufficient to solve the key problems. Then \"SELECT a ! b\"\n> can only be a postfix operator application followed by a column\n> labeling, a \"SELECT a + b\" can only be the application of an infix\n> operator.\n\nSo if I make a complex UDT where a NOT operator makes a lot of sense[*],\nwhy wouldn't I be allowed to make a prefix operator ! for it? All for\nwhat? That one person in the corner over there who doesn't want to\nrewrite their query to use factorial() instead?\n\nI'm -1 on keeping ! around as a hard-coded postfix operator.\n\n\n[*] I don't have a concrete example in mind, just this abstract one.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 19 May 2020 16:36:13 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 10:36 AM Vik Fearing <vik@postgresfriends.org> wrote:\n> So if I make a complex UDT where a NOT operator makes a lot of sense[*],\n> why wouldn't I be allowed to make a prefix operator ! for it? All for\n> what? That one person in the corner over there who doesn't want to\n> rewrite their query to use factorial() instead?\n>\n> I'm -1 on keeping ! around as a hard-coded postfix operator.\n\nFair enough. I think you may be in the majority on that one, too. I\njust wanted to raise the issue, and we'll see if anyone else agrees.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 10:39:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The ambiguity doesn't come from the mere existence of postfix\n> operators. 
It comes from the fact that, when we lex the input, we\n> can't tell whether a particular operator that we happen to encounter\n> is prefix, infix, or postfix. So hard-coding, for example, a rule that\n> '!' is always a postfix operator and anything else is never a postfix\n> operator is sufficient to solve the key problems.\n\nIf we were willing to say that '!' could *only* be a postfix operator,\nthen maybe the ambiguity would go away. Or maybe it wouldn't; if\nyou're seriously proposing this, I think it'd be incumbent on you\nto demonstrate that we could still simplify the grammar to the same\nextent. But that will incur its own set of compatibility problems,\nbecause there's no reason to assume that nobody has made prefix or\ninfix '!' operators.\n\nIn any case, it's hard to decide that that's a less klugy solution\nthan getting rid of postfix ops altogether. There's a reason why\nfew programming languages have those.\n\nIn general, I put this on about the same level as when we decided\nto remove ';' and ':' as operators (cf 259489bab, 766fb7f70).\nSomebody thought it was cute that it was possible to have that,\nwhich maybe it was, but it wasn't really sane in the big picture.\nAnd as I recall, the amount of pushback we got was nil.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 10:48:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Greetings,\n\n* Vik Fearing (vik@postgresfriends.org) wrote:\n> On 5/19/20 4:03 AM, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >> What are the thoughts about then marking the postfix operator deprecated \n> >> and eventually removing it?\n> > \n> > If we do this it'd require a plan. 
We'd have to also warn about the\n> > feature deprecation in (at least) the CREATE OPERATOR man page, and\n> > we'd have to decide how many release cycles the deprecation notices\n> > need to stand for.\n> \n> I have never come across any custom postfix operators in the wild, and\n> I've never even seen ! used in practice.\n> \n> So I would suggest a very short deprecation period. Deprecate now in\n> 13, let 14 go by, and rip it all out for 15. That should give us enough\n> time to extend the deprecation period if we need to, or go back on it\n> entirely (like I seem to remember we did with VACUUM FREEZE).\n> \n> > If that's the intention, though, it'd be good to get those deprecation\n> > notices published in v13 not v14.\n> \n> +1\n\nI agree with putting notices into v13 saying they're deprecated, but\nthen actually removing them in v14. For that matter, I'd vote that we\ngenerally accept a system whereby when we commit something that removes\na feature in the next major version, we put out some kind of notice that\nit's been deprecated and won't be in v14. We don't want to run the risk\nof saying XYZ has been deprecated and then it staying around for a few\nyears, nor trying to say \"it'll be removed in v14\" before we actually\nknow that it's been committed for v14.\n\nIn other words, wait to deprecate until the commit has happened for v14\n(and maybe wait a couple days in case someone wasn't watching and argues\nto revert, but not longer than any normal commit), and then go back and\nmark it as \"deprecated and removed in v14\" for all back-branches. 
Users\nwill continue to have 5 years (by upgrading to v13, or whatever the last\nrelease was before their favorite feature was removed, if they really\nneed to) to update their systems to deal with the change.\n\nWe do not do ourselves nor our users a real service by carrying forward\ndeprecated code/interfaces/views/etc, across major versions; instead\nthey tend to live on in infamy, with some users actually updating and\nsome not, ever, and then complaining when we suggest actually removing\nit (we have lots of good examples of that too) and then we have to have\nthe debate again about removing it and, in some cases, we end up\nun-deprecating it, which is confusing for users and a bit ridiculous.\n\nLet's not do that.\n\nThanks,\n\nStephen", "msg_date": "Tue, 19 May 2020 10:54:49 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> I'm -1 on keeping ! around as a hard-coded postfix operator.\n\nBefore we go much further on this, we should have some proof\nthat there's actually material benefit to be gained. I spent some\ntime just now trying to relax the AS restriction by ripping out\npostfix ops, and the results were not too promising. Indeed the\npostfix-ops problem goes away, but then you find out that SQL's\nrandom syntax choices for type names become the stumbling block.\nAn example here is that given\n\n\tSELECT 'foo'::character varying\n\nit's not clear if \"varying\" is supposed to be part of the type name or a\ncolumn label. 
It looks to me like we'd have to increase the reserved-ness\nof VARYING, PRECISION, and about half a dozen currently-unreserved\nkeywords involved in INTERVAL syntax, including such popular column names\nas \"month\", \"day\", and \"year\".\n\nPlus I got conflicts on WITHIN, GROUP, and FILTER from ordered-set\naggregate syntax; those are currently unreserved keywords, but they\ncan't be allowed as AS-less column labels.\n\nWe could possibly minimize the damage by inventing another keyword\nclassification besides the four we have now. Or maybe we should\nthink harder about using more lookahead between the lexer and grammar.\nBut this is going to be a lot more ticklish than I would've hoped,\nand possibly not cost-free, so we might well end up never pulling\nthe trigger on such a change.\n\nSo right at the moment I'm agreeing with Stephen's nearby opinion:\nlet's not deprecate these until we've got a patch that gets some\nconcrete benefit from removing them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 11:32:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 11:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Before we go much further on this, we should have some proof\n> that there's actually material benefit to be gained. I spent some\n> time just now trying to relax the AS restriction by ripping out\n> postfix ops, and the results were not too promising. Indeed the\n> postfix-ops problem goes away, but then you find out that SQL's\n> random syntax choices for type names become the stumbling block.\n> An example here is that given\n>\n> SELECT 'foo'::character varying\n>\n> it's not clear if \"varying\" is supposed to be part of the type name or a\n> column label. 
It looks to me like we'd have to increase the reserved-ness\n> of VARYING, PRECISION, and about half a dozen currently-unreserved\n> keywords involved in INTERVAL syntax, including such popular column names\n> as \"month\", \"day\", and \"year\".\n>\n> Plus I got conflicts on WITHIN, GROUP, and FILTER from ordered-set\n> aggregate syntax; those are currently unreserved keywords, but they\n> can't be allowed as AS-less column labels.\n\nI came to similar conclusions a couple of years ago:\n\nhttps://www.postgresql.org/message-id/CA+TgmoYzPvT7uiHjWgKtyTivHHLNCp0yLavCoipE-LyG3w2wOQ@mail.gmail.com\n\nWhat I proposed at the time was creating a new category of keywords.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 14:03:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 19, 2020 at 11:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Before we go much further on this, we should have some proof\n>> that there's actually material benefit to be gained. I spent some\n>> time just now trying to relax the AS restriction by ripping out\n>> postfix ops, and the results were not too promising.\n\n> I came to similar conclusions a couple of years ago:\n> https://www.postgresql.org/message-id/CA+TgmoYzPvT7uiHjWgKtyTivHHLNCp0yLavCoipE-LyG3w2wOQ@mail.gmail.com\n\nAh, right.\n\n> What I proposed at the time was creating a new category of keywords.\n\nMight work. My main concern would be if we have to forbid those keywords\nas column names --- for words like \"year\", in particular, that'd be a\ndisaster. If the net effect is only that they can't be AS-less col labels,\nit won't break any cases that worked before.\n\nOur existing four-way keyword classification is not something that was\nhanded down on stone tablets. 
I wonder whether postfix-ectomy changes\nthe situation enough that a complete rethinking would be helpful.\n\nI also continue to think that more lookahead and token-merging would\nbe interesting to pursue. It'd hardly surprise anybody if the\ntoken pair \"character varying\" were always treated as a type name,\nfor instance.\n\nAnyway, the bottom-line conclusion remains the same: let's make sure\nwe know what we'd do after getting rid of postfix ops, before we do\nthat.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 14:30:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 2:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Might work. My main concern would be if we have to forbid those keywords\n> as column names --- for words like \"year\", in particular, that'd be a\n> disaster. If the net effect is only that they can't be AS-less col labels,\n> it won't break any cases that worked before.\n\nISTM that all we have to do to avoid that is switch from a four-way\nclassification to a five-way classification: just split\nunreserved_keyword into totally_unreserved_keyword and\nvery_slightly_reserved_keyword.\n\n> Our existing four-way keyword classification is not something that was\n> handed down on stone tablets. I wonder whether postfix-ectomy changes\n> the situation enough that a complete rethinking would be helpful.\n\nI don't see that they do, but I might be missing something. I think\nthere's an excellent argument for adding one new category, but it's\nnot clear to me why it should reshape the landscape any more than\nthat.\n\n> I also continue to think that more lookahead and token-merging would\n> be interesting to pursue. It'd hardly surprise anybody if the\n> token pair \"character varying\" were always treated as a type name,\n> for instance.\n\nI think that line of attack will not buy very much. 
The ability to\navoid unexpected consequences is entirely contingent on the\nunlikeliness of the keywords appearing adjacent to each other in some\nother context, and the only argument for that here is that neither of\nthose words is a terribly likely column name. I think that when you\ntry to solve interesting problems with this, though, you very quickly\nrun into problems where that's not the case, and you'll need a\ntechnique that has some knowledge of the parser state to actually do\nsomething that works well. I read a paper some years ago that proposed\na solution to this problem: if the parser generator sees a\nshift/reduce conflict, it checks whether the conflict can be resolved\nby looking ahead one or more additional tokens. If so, it can build a\nlittle DFA that gets run when you enter that state, with edges labeled\nwith lookahead tokens, and it runs that DFA whenever you reach the\nproblematic state. Since, hopefully, such states are relatively rarely\nencountered, the overhead is low, yet it still gives you a way out of\nconflicts in many practical cases. Unfortunately, the chances of bison\nimplementing such a thing do not seem very good.\n\n> Anyway, the bottom-line conclusion remains the same: let's make sure\n> we know what we'd do after getting rid of postfix ops, before we do\n> that.\n\nWell, I don't think we really need to get too conservative here. I've\nstudied this issue enough over the years to be pretty darn sure that\nthis is a necessary prerequisite to doing something about the \"AS\nunreserved_keyword\" issue, and that it is by far the most significant\nissue in doing something about that problem. 
Sure, there are other\nissues, but I think they are basically matters of politics or policy.\nFor example, if some key people DID think that the four-way keyword\nclassification was handed down on stone tablets, that could be quite a\nproblem, but if we're willing to take the view that solving the \"AS\nunreserved_keyword\" problem is pretty important and we need to find a\nway to get it done, then I think we can do that. It seems to me that\nthe first thing that we need to do here is get a deprecation notice\nout, so that people know that we're planning to break this. I think we\nshould go ahead and make that happen now, or at least pretty soon.\n\nI'm still interested in hearing what people think about hard-coding !\nas a postfix operator vs. removing postfix operators altogether. I\nthink Vik and Tom are against keeping just !, Kenneth Marshall is for\nit, and I'm not sure I understand Pavel's position. I'm about +0.3 for\nkeeping just ! myself. Maybe we'll get some other votes. If you're\nwilling to be persuaded that keeping only ! is a sensible thing to\nconsider, I could probably draft a very rough patch showing that it\nwould still be sufficient to get us out from under the \"AS\nunreserved_keyword\" problem, but if you and/or enough other people\nhate that direction with a fiery passion, I won't bother. I'm pretty\nsure it's technically possible, but the issue is more about what\npeople actually want.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 May 2020 15:31:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 19, 2020 at 2:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anyway, the bottom-line conclusion remains the same: let's make sure\n>> we know what we'd do after getting rid of postfix ops, before we do\n>> that.\n\n> Well, I don't think we really need to get too conservative here.\n> ... It seems to me that\n> the first thing that we need to do here is get a deprecation notice\n> out, so that people know that we're planning to break this.\n\nNo, I disagree with that, because from what I've seen so far it's\nnot really clear to me that we have a full solution to the AS\nproblem excepting only postfix ops. I don't want to deprecate\npostfix ops before it's completely clear that we can get something \nout of it. Otherwise, we'll either end up un-deprecating them,\nwhich makes us look silly, or removing a feature for zero benefit.\n\nStephen's nearby proposal to deprecate only after a patch has been\ncommitted doesn't seem all that unreasonable, if you're only intending\nto allow one cycle's worth of notice. In particular, I could easily\nsee us committing a fix sometime this summer and then sticking\ndeprecation notices into the back branches before v13 goes gold.\nBut let's have the complete fix in hand first.\n\n> I'm still interested in hearing what people think about hard-coding !\n> as a postfix operator vs. removing postfix operators altogether. I\n> think Vik and Tom are against keeping just !, Kenneth Marshall are for\n> it, and I'm not sure I understand Pavel's position.\n\nYes, I'm VERY strongly against keeping just !. 
I think it'd be a\nridiculous, and probably very messy, backwards-compatibility hack; and the\nfact that it will break valid use-cases that we don't need to break seems\nto me to well outweigh the possibility that someone would rather not\nchange their queries to use factorial() or !!.\n\nHowever, we do have to have a benefit to show those people whose\nqueries we break. Hence my insistence on having a working AS fix\n(or some other benefit) before not after.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 15:50:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "I wrote:\n> However, we do have to have a benefit to show those people whose\n> queries we break. Hence my insistence on having a working AS fix\n> (or some other benefit) before not after.\n\nI experimented with this a bit more, and came up with the attached.\nIt's not a working patch, just a set of grammar changes that Bison\nis happy with. (Getting to a working patch would require fixing the\nvarious build infrastructure that knows about the keyword classification,\nwhich seems straightforward but tedious.)\n\nAs Robert theorized, it works to move a fairly-small number of unreserved\nkeywords into a new slightly-reserved category. However, as the patch\nstands, only the remaining fully-unreserved keywords can be used as bare\ncolumn labels. I'd hoped to be able to also use col_name keywords in that\nway (which'd make the set of legal bare column labels mostly the same as\nColId). The col_name keywords that cause problems are, it appears,\nonly PRECISION, CHARACTER, and CHAR_P. So in principle we could move\nthose three into yet another keyword category and then let the remaining\ncol_name keywords be included in BareColLabel. 
I kind of think that\nthat's more complication than it's worth, though.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 May 2020 19:47:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 7:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As Robert theorized, it works to move a fairly-small number of unreserved\n> keywords into a new slightly-reserved category.\n\nIt wasn't entirely a theoretical argument, since I'm pretty sure I did\nspend some time experimenting with gram.y back in the day, but\npossibly not to the extent that you've done here. And I seem not to\nhave saved my work, either...\n\n> However, as the patch\n> stands, only the remaining fully-unreserved keywords can be used as bare\n> column labels. I'd hoped to be able to also use col_name keywords in that\n> way (which'd make the set of legal bare column labels mostly the same as\n> ColId). The col_name keywords that cause problems are, it appears,\n> only PRECISION, CHARACTER, and CHAR_P. So in principle we could move\n> those three into yet another keyword category and then let the remaining\n> col_name keywords be included in BareColLabel. I kind of think that\n> that's more complication than it's worth, though.\n\nI think it's a judgement call. If all we do is what you have in the\npatch, we can make 288 keywords that currently aren't usable as column\nlabels without AS, plus future unreserved keywords that get similar\ntreatment. If we also split the column-name keywords, then we can buy\nourselves another 48 keywords that can be used as column labels\nwithout AS. 
Presumably everybody is going to agree that allowing more\nkeywords to be used this way is better than fewer, but also that\nhaving fewer keyword classifications is better than having more, and\nthose goals are in tension in this case.\n\nI believe that most, possibly all, of the examples of this problem\nthat I have seen involve unreserved keywords, but that might just be\nbecause there are a lot more unreserved keywords than there are\nkeywords of any other sort. Things like TIME, POSITION, and VALUES\ndon't seem like particularly unlikely choices for a column label. I\nmean, someone who knows SQL well and is a good programmer might not\nchoose these things, either because they're kind of generic, or\nbecause they're known to have special meaning in SQL. However, SQL is\nused by many people who don't know it well and aren't good\nprogrammers, and people coming from other database systems generally\ndon't have to worry much about their choice of column labels and then\nget sad when their migration fails. So I'd be somewhat inclined to see\nhow far we can reasonably push this, but I'm also entirely willing to\naccept that 85% of a loaf is better than none.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 May 2020 12:32:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 19, 2020 at 7:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, as the patch\n>> stands, only the remaining fully-unreserved keywords can be used as bare\n>> column labels. I'd hoped to be able to also use col_name keywords in that\n>> way (which'd make the set of legal bare column labels mostly the same as\n>> ColId). The col_name keywords that cause problems are, it appears,\n>> only PRECISION, CHARACTER, and CHAR_P. 
So in principle we could move\n>> those three into yet another keyword category and then let the remaining\n>> col_name keywords be included in BareColLabel. I kind of think that\n>> that's more complication than it's worth, though.\n\n> I think it's a judgement call. If all we do is what you have in the\n> patch, we can make 288 keywords that currently aren't usable as column\n> labels without AS, plus future unreserved keywords that get similar\n> treatment. If we also split the column-name keywords, then we can buy\n> ourselves another 48 keywords that can be used as column labels\n> without AS. Presumably everybody is going to agree that allowing more\n> keywords to be used this way is better than fewer, but also that\n> having fewer keyword classifications is better than having more, and\n> those goals are in tension in this case.\n\nRight; I'd done the same arithmetic. Since we currently have a total\nof 450 keywords of all flavors, that means we can make either 64%\nof them or 74.6% of them be safe to use as bare column labels. While\nthat's surely better than today, it doesn't seem like it's going to\nmake for any sort of sea change in the extent of the problem. So I was\nfeeling a bit discouraged by these results.\n\nI too failed to save the results of some experimentation, but I'd\nalso poked at the type_func_name_keyword category, and it has a similar\nsituation where only about three keywords cause problems if included\nin BareColLabel. So we could possibly get another twenty-ish keywords\ninto that set with yet a third new keyword category. But (a) we'd still\nonly be at 79% coverage and (b) this is *really* making things messy\nkeyword-category-wise. 
I feel like we'd be better advised to somehow\ntreat can-be-bare-col-label as an independent classification.\n\n(I did not look at whether any of the fully-reserved keywords could\nbe made safe to use, but it seems likely that at least some of them\ncould be, if we accept even more classification mess.)\n\nBottom line is that we can reduce the scope of the col-label problem\nthis way, but we can't make it go away entirely. Is a partial solution\nto that worth a full drop of postfix operators? Possibly, but I'm not\nsure. I still feel like it'd be worth investigating some other solution\ntechnology, ie lookahead, though I concede your point that that has\npitfalls too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 14:24:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On May 20, 2020, at 11:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Bottom line is that we can reduce the scope of the col-label problem\n> this way, but we can't make it go away entirely. Is a partial solution\n> to that worth a full drop of postfix operators? Possibly, but I'm not\n> sure. I still feel like it'd be worth investigating some other solution\n> technology, ie lookahead, though I concede your point that that has\n> pitfalls too.\n\nI should think a lot of the problem stems from allowing the same characters to be used in postfix operators as in other operators. The ! character is already not allowed as a column alias:\n\n+SELECT 1 AS ! ORDER BY !;\n+ERROR: syntax error at or near \"!\"\n+LINE 1: SELECT 1 AS ! ORDER BY !;\n+ ^\n\nBut you can use it as a prefix or infix operator, which creates the confusion about whether\n\n\tSELECT 5 ! x\n\nMeans \"x\" as an alias or as the right argument to the ! infix operator. But if we made a clean distinction between the characters that are allowed in postfix operators vs. 
those allowed for infix operators, then we'd get to have postfix operators without the ambiguity, right?\n\nWhen thinking about postfix operators, the subscript and superscript character ranges come to my mind, such as\n\n\tSELECT Σ₂(x² + y³ + z⁴);\n\nThese also come to mind as prefix operators, but I don't recall seeing them as infix operators, so maybe it would be ok to disallow that? As for the ! infix operator, it doesn't exist by default:\n\n+SELECT x ! y from (select 5 AS x, 3 AS y) AS ss;\n+ERROR: operator does not exist: integer ! integer\n+LINE 1: SELECT x ! y from (select 5 AS x, 3 AS y) AS ss;\n+ ^\n+HINT: No operator matches the given name and argument types. You might need to add explicit type casts.\n\nSo if we put that in the set of characters disallowed for infix operators, we would only be breaking custom infix operators named that, which seems like less breakage to me than removing postfix operators of all kinds.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 20 May 2020 13:18:01 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> ... But if we made a clean distinction between the characters that are allowed in postfix operators vs. those allowed for infix operators, then we'd get to have postfix operators without the ambiguity, right?\n\nI continue to see little point in half-baked compatibility measures\nlike that. You'd be much more likely to break working setups (that\nmight not even involve any postfix operators) than to accomplish\nanything useful. 
In particular, if Joe DBA out there has a postfix\noperator, and it's not named according to whatever rule you chose,\nthen you haven't done anything to fix his compatibility problem.\n\n> When thinking about postfix operators, the subscript and superscript character ranges come to my mind, such as\n> \tSELECT Σ₂(x² + y³ + z⁴);\n\nWe already have a convention about non-ASCII characters, and it is that\nthey are identifier characters not operator characters. Changing that\nwould break yet a different set of applications. (That is to say,\nthe above SELECT already has a well-defined lexical interpretation.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 16:29:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On 2020-May-20, Tom Lane wrote:\n\n> I too failed to save the results of some experimentation, but I'd\n> also poked at the type_func_name_keyword category, and it has a similar\n> situation where only about three keywords cause problems if included\n> in BareColLabel. So we could possibly get another twenty-ish keywords\n> into that set with yet a third new keyword category. But (a) we'd still\n> only be at 79% coverage and (b) this is *really* making things messy\n> keyword-category-wise. I feel like we'd be better advised to somehow\n> treat can-be-bare-col-label as an independent classification.\n> \n> (I did not look at whether any of the fully-reserved keywords could\n> be made safe to use, but it seems likely that at least some of them\n> could be, if we accept even more classification mess.)\n\nWould it make sense (and possible) to have a keyword category that is\nnot disjoint wrt. the others? 
Maybe that ends up being easier than\na solution that ends up with six or seven categories.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 20 May 2020 17:54:18 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-May-20, Tom Lane wrote:\n>> I feel like we'd be better advised to somehow\n>> treat can-be-bare-col-label as an independent classification.\n\n> Would it make sense (and possible) to have a keyword category that is\n> not disjoint wrt. the others? Maybe that ends up being easier than\n> a solution that ends up with six or seven categories.\n\nYeah, that's the same thing I was vaguely imagining -- an independent\nflag on each keyword as to whether it can be used as a bare column\nalias.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 18:21:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Wed, May 20, 2020 at 2:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right; I'd done the same arithmetic. Since we currently have a total\n> of 450 keywords of all flavors, that means we can make either 64%\n> of them or 74.6% of them be safe to use as bare column labels. While\n> that's surely better than today, it doesn't seem like it's going to\n> make for any sort of sea change in the extent of the problem. So I was\n> feeling a bit discouraged by these results.\n\nI don't think you should feel discouraged by these results. They\nassume that people are just as likely to have a problem with a\nreserved keyword as an unreserved keyword, and I don't think that's\nactually true. 
The 25.4% of keywords that aren't handled this way\ninclude, to take a particularly egregious example, \"AS\" itself. And I\ndon't think many people are going to be sad if \"select 1 as;\" fails to\ntreat \"as\" as a column label.\n\nAlso, even if we only made 74.6% of these safe to use as bare column\nlabels, or even 64%, I think that's actually pretty significant. If I\ncould reduce my mortgage payment by 64%, I would be pretty happy. For\nmany people, that would be a sufficiently large economic impact that\nit actually would be a sea change in terms of their quality of life. I\ndon't see a reason to suppose that's not also true here.[1]\n\nI do like the idea of considering \"can be a bare column label\" as an\nindependent dimension from the existing keyword classification.\nPresumably we would then have, in addition to the four existing\nkeyword productions, but then also a separate\nbare_column_label_keyword: production that would include many of the\nsame keywords. One nice thing about that approach is that we would\nthen have a clear list of exactly which keywords can't be given that\ntreatment, and if somebody wanted to go investigate possible\nimprovements for any of those, they could do so. I think we'd want a\ncross-check: check_keywords.pl should contain the list of keywords\nthat are expected to be excluded from this new production, so that any\ntime someone adds a new keyword, they've either got to add it to the\nnew production or add it to the exception list.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n[1] On the other hand, if I had 64% fewer ants in my picnic basket, I\nwould probably still be unhappy with the number of ants in my picnic\nbasket, so it all depends on context and perspective.\n\n\n", "msg_date": "Thu, 21 May 2020 09:36:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On 2020-05-20 01:47, Tom Lane wrote:\n> I wrote:\n>> However, we do have to have a benefit to show those people whose\n>> queries we break. Hence my insistence on having a working AS fix\n>> (or some other benefit) before not after.\n> I experimented with this a bit more, and came up with the attached.\n> It's not a working patch, just a set of grammar changes that Bison\n> is happy with. (Getting to a working patch would require fixing the\n> various build infrastructure that knows about the keyword classification,\n> which seems straightforward but tedious.)\n\nWhat I was hoping to get out of this was to resolve some of the weird \nprecedence hacks that were blamed on postfix operators. But building on \nyour patch, the best I could achieve was\n\n-%nonassoc IDENT GENERATED NULL_P PARTITION RANGE ROWS GROUPS PRECEDING \nFOLLOWING CUBE ROLLUP\n+%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE \nROLLUP\n\nwhich is a pretty poor yield.\n\nMaybe this isn't worth it after all.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 25 May 2020 22:50:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> What I was hoping to get out of this was to resolve some of the weird \n> precedence hacks that were blamed on postfix operators.\n\nYeah, I was thinking about that too, but hadn't gotten to it.\n\n> But building on your patch, the best I could achieve was\n\n> -%nonassoc IDENT GENERATED NULL_P PARTITION RANGE ROWS GROUPS PRECEDING \n> FOLLOWING CUBE ROLLUP\n> +%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE \n> ROLLUP\n\n> which is a pretty poor yield.\n\nI'd hoped for better as well. 
Still, it's possible this would save us\nfrom greater pain in the future, seeing that the SQL committee seems\nresolutely uninterested in whether the syntax they invent is parsable.\n\n(Also, there are other factors here: I think at least some of those\nprecedence hacks are there to avoid fully reserving the associated\nkeywords.)\n\n> Maybe this isn't worth it after all.\n\nIt'd be nice to have a better yield from removing a user-visible\nfeature. Perhaps there would be no complaints about removing\npostfix ops, but if there are I want to be able to point to some\nsubstantial benefit that users get from it. (Which is why I focused\non the optional-AS business to start with ... users don't care\nabout how many precedence hacks we need.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 18:18:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On May 19, 2020, at 4:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> However, we do have to have a benefit to show those people whose\n>> queries we break. Hence my insistence on having a working AS fix\n>> (or some other benefit) before not after.\n> \n> I experimented with this a bit more, and came up with the attached.\n> It's not a working patch, just a set of grammar changes that Bison\n> is happy with. (Getting to a working patch would require fixing the\n> various build infrastructure that knows about the keyword classification,\n> which seems straightforward but tedious.)\n\nI built a patch on top of yours that does much of that tedious work.\n\n> As Robert theorized, it works to move a fairly-small number of unreserved\n> keywords into a new slightly-reserved category. However, as the patch\n> stands, only the remaining fully-unreserved keywords can be used as bare\n> column labels. 
I'd hoped to be able to also use col_name keywords in that\n> way (which'd make the set of legal bare column labels mostly the same as\n> ColId). The col_name keywords that cause problems are, it appears,\n> only PRECISION, CHARACTER, and CHAR_P. So in principle we could move\n> those three into yet another keyword category and then let the remaining\n> col_name keywords be included in BareColLabel. I kind of think that\n> that's more complication than it's worth, though.\n\nBy my count, 288 more keywords can be used as column aliases without the AS keyword after the patch. That exactly matches what Robert said upthread.\n\nTom and Álvaro discussed upthread:\n\n> Would it make sense (and possible) to have a keyword category that is\n> not disjoint wrt. the others? Maybe that ends up being easier than\n> a solution that ends up with six or seven categories.\n\nI didn't see much point in that. The way Tom had it in his patch was easy to work with. Maybe I'm missing something?\n\nThe patch, attached, still needs documentation updates and an update to pg_upgrade. Users upgrading to v14 may have custom postfix operators. Should pg_upgrade leave them untouched? They wouldn't be reachable through the grammar any longer. Should pg_upgrade delete them? I'm generally not in favor of deleting user data as part of an upgrade, and rows in the catalog tables corresponding to custom postfix operators are sort of user data, if you squint and look at them just right. Thoughts?\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 30 Jun 2020 14:47:24 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "> On Jun 30, 2020, at 2:47 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On May 19, 2020, at 4:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> I wrote:\n>>> However, we do have to have a benefit to show those people whose\n>>> queries we break. Hence my insistence on having a working AS fix\n>>> (or some other benefit) before not after.\n>> \n>> I experimented with this a bit more, and came up with the attached.\n>> It's not a working patch, just a set of grammar changes that Bison\n>> is happy with. (Getting to a working patch would require fixing the\n>> various build infrastructure that knows about the keyword classification,\n>> which seems straightforward but tedious.)\n> \n> I built a patch on top of yours that does much of that tedious work.\n> \n>> As Robert theorized, it works to move a fairly-small number of unreserved\n>> keywords into a new slightly-reserved category. However, as the patch\n>> stands, only the remaining fully-unreserved keywords can be used as bare\n>> column labels. I'd hoped to be able to also use col_name keywords in that\n>> way (which'd make the set of legal bare column labels mostly the same as\n>> ColId). The col_name keywords that cause problems are, it appears,\n>> only PRECISION, CHARACTER, and CHAR_P. So in principle we could move\n>> those three into yet another keyword category and then let the remaining\n>> col_name keywords be included in BareColLabel. I kind of think that\n>> that's more complication than it's worth, though.\n> \n> By my count, 288 more keywords can be used as column aliases without the AS keyword after the patch. That exactly matches what Robert said upthread.\n> \n> Tom and Álvaro discussed upthread:\n> \n>> Would it make sense (and possible) to have a keyword category that is\n>> not disjoint wrt. the others? 
Maybe that ends up being easier than\n>> a solution that ends up with six or seven categories.\n\nVersion 2, attached, follows this design, increasing the number of keywords that can be used as column aliases without the AS keyword up to 411, with only 39 keywords still requiring an explicit preceding AS.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 10 Jul 2020 10:13:55 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Sat, Jul 11, 2020 at 1:14 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > Tom and Álvaro discussed upthread:\n> >\n> >> Would it make sense (and possible) to have a keyword category that is\n> >> not disjoint wrt. the others? Maybe that ends up being easier than\n> >> a solution that ends up with six or seven categories.\n>\n> Version 2, attached, follows this design, increasing the number of keywords that can be used as column aliases without the AS keyword up to 411, with only 39 keywords still requiring an explicit preceding AS.\n\nHi Mark,\n\nThis isn't a full review, but I have a few questions/comments:\n\nBy making col-label-ness an orthogonal attribute, do we still need the\ncategory of non_label_keyword? It seems not.\n\npg_get_keywords() should probably have a column to display ability to\nact as a bare col label. Perhaps a boolean? If so, what do you think\nof using true/false for the new field in kwlist.h as well?\n\nIn the bikeshedding department, it seems \"implicit\" was chosen because\nit was distinct from \"bare\". I think \"bare\" as a descriptor should be\nkept throughout for readability's sake. Maybe BareColLabel could be\n\"IDENT or bare_label_keyword\" for example. 
Same for the $status var.\n\nLikewise, it seems the actual removal of postfix operators should be a\nseparate patch.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 18 Jul 2020 16:00:12 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On Jul 18, 2020, at 1:00 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Sat, Jul 11, 2020 at 1:14 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>>> Tom and Álvaro discussed upthread:\n>>> \n>>>> Would it make sense (and possible) to have a keyword category that is\n>>>> not disjoint wrt. the others? Maybe that ends up being easier than\n>>>> a solution that ends up with six or seven categories.\n>> \n>> Version 2, attached, follows this design, increasing the number of keywords that can be used as column aliases without the AS keyword up to 411, with only 39 keywords still requiring an explicit preceding AS.\n> \n> Hi Mark,\n> \n> This isn't a full review, but I have a few questions/comments:\n\nThanks for looking!\n\n> By making col-label-ness an orthogonal attribute, do we still need the\n> category of non_label_keyword? It seems not.\n\nYou are right. The non_label_keyword category has been removed from v3.\n\n> pg_get_keywords() should probably have a column to display ability to\n> act as a bare col label. Perhaps a boolean? If so, what do you think\n> of using true/false for the new field in kwlist.h as well?\n\nI have broken this into its own patch. I like using a BARE_LABEL / EXPLICIT_LABEL in kwlist.h because it is self-documenting. I don't care about the *exact* strings that we choose for that, but using TRUE/FALSE in kwlist.h makes it harder for a person adding a new keyword to know what to place there. 
If they guess \"FALSE\", and also don't know about adding the new keyword to the bare_label_keyword rule in gram.y, then those two mistakes will agree with each other and the person adding the keyword won't likely know they did it wrong. It is simple enough for gen_keywordlist.pl to convert between what we use in kwlist.h and a boolean value for kwlist_d.h, so I did it that way.\n\n> In the bikeshedding department, it seems \"implicit\" was chosen because\n> it was distinct from \"bare\". I think \"bare\" as a descriptor should be\n> kept throughout for readability's sake. Maybe BareColLabel could be\n> \"IDENT or bare_label_keyword\" for example. Same for the $status var.\n\nThe category \"bare_label_keyword\" is used in v3. As for the $status var, I don't want to name that $bare, as I didn't go with your idea about using a boolean. $status = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" makes sense to me, more than $bare = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" does. \"status\" is still a bit vague, so more bikeshedding is welcome.\n\n> Likewise, it seems the actual removal of postfix operators should be a\n> separate patch.\n\nI broke out the removal of postfix operators into its own patch in the v3 series.\n\nThis patch does not attempt to remove pre-existing postfix operators from existing databases, so users upgrading to the new major version who have custom postfix operators will find that pg_upgrade chokes trying to recreate the postfix operator. That's not great, but perhaps there is nothing automated that we could do for them that would be any better.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Jul 2020 17:46:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
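The pg_upgrade problem described here — custom postfix operators that can no longer be recreated — can be detected ahead of time with a catalog query along the lines of the one John proposes below (a sketch; 16384 is the usual value of FirstNormalObjectId, the first OID assigned to user-defined objects):

```sql
-- A postfix operator has no right operand (oprright = 0), and
-- user-defined objects have OIDs at or above FirstNormalObjectId
-- (16384 in stock builds):
SELECT oid, oprname, oprleft::regtype AS left_arg
FROM pg_operator
WHERE oprright = 0
  AND oid >= 16384;
```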
}, { "msg_contents": "On Wed, Jul 22, 2020 at 8:47 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Jul 18, 2020, at 1:00 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> >\n> > pg_get_keywords() should probably have a column to display ability to\n> > act as a bare col label. Perhaps a boolean? If so, what do you think\n> > of using true/false for the new field in kwlist.h as well?\n\nHi Mark, sorry for the delay.\n\n> I have broken this into its own patch. I like using a BARE_LABEL / EXPLICIT_LABEL in kwlist.h because it is self-documenting. I don't care about the *exact* strings that we choose for that, but using TRUE/FALSE in kwlist.h makes it harder for a person adding a new keyword to know what to place there. If they guess \"FALSE\", and also don't know about adding the new keyword to the bare_label_keyword rule in gram.y, then those two mistakes will agree with each other and the person adding the keyword won't likely know they did it wrong. It is simple enough for gen_keywordlist.pl to convert between what we use in kwlist.h and a boolean value for kwlist_d.h, so I did it that way.\n\nSounds fine to me.\n\n> > In the bikeshedding department, it seems \"implicit\" was chosen because\n> > it was distinct from \"bare\". I think \"bare\" as a descriptor should be\n> > kept throughout for readability's sake. Maybe BareColLabel could be\n> > \"IDENT or bare_label_keyword\" for example. Same for the $status var.\n>\n> The category \"bare_label_keyword\" is used in v3. As for the $status var, I don't want to name that $bare, as I didn't go with your idea about using a boolean. $status = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" makes sense to me, more than $bare = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" does. 
\"status\" is still a bit vague, so more bikeshedding is welcome.\n\nYeah, it's very generic, but it's hard to find a short word for\n\"can-be-used-as-a-bare-column-label-ness\".\n\n> This patch does not attempt to remove pre-existing postfix operators from existing databases, so users upgrading to the new major version who have custom postfix operators will find that pg_upgrade chokes trying to recreate the postfix operator. That's not great, but perhaps there is nothing automated that we could do for them that would be any better.\n\nI'm thinking it would be good to have something like\n\nselect oid from pg_operator where oprright = 0 and oid >= FirstNormalObjectId;\n\nin the pre-upgrade check.\n\nOther comments:\n\n0001:\n\n+ errhint(\"postfix operator support has been discontinued\")));\n\nThis language seems more appropriate for release notes -- I would word\nthe hint in the present, as in \"postfix operators are not supported\".\nDitto the words \"discontinuation\", \"has been removed\", and \"no longer\nworks\" elsewhere in the patch.\n\n+SELECT -5!;\n+SELECT -0!;\n+SELECT 0!;\n+SELECT 100!;\n\nI think one negative and one non-negative case is enough to confirm\nthe syntax error.\n\n- gram.y still contains \"POSTFIXOP\" and \"postfix-operator\".\n\n- parse_expr.c looks like it has some now-unreachable code.\n\n\n0002:\n\n+ * All keywords can be used explicitly as a column label in expressions\n+ * like 'SELECT 1234 AS keyword', but only some keywords can be used\n+ * implicitly as column labels in expressions like 'SELECT 1234 keyword'.\n+ * Those that can be used implicitly should be listed here.\n\nIn my mind, \"AS\" is the thing that's implied when not present, so we\nshould reword this to use the \"bare\" designation when talking about\nthe labels. I think there are contexts elsewhere where the implicit\ncolumn label is \"col1, col2, col3...\". 
I can't remember offhand where\nthat is though.\n\n- * kwlist.h's table from one source of truth.)\n+ * kwlist.h's table from a common master list.)\n\nOff topic.\n\n\n0003:\n\nFirst off, I get a crash when trying\n\nselect * from pg_get_keywords();\n\nand haven't investigated further. I don't think the returned types\nmatch, though.\n\nContinuing on, I think 2 and 3 can be squashed together. If anything,\nit should make revisiting cosmetic decisions easier.\n\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"bare\",\n+ BOOLOID, -1, 0);\n\nPerhaps something a bit meatier for the user-visible field name. I\ndon't have a great suggestion.\n\n- proname => 'pg_get_keywords', procost => '10', prorows => '400',\n+ proname => 'pg_get_keywords', procost => '10', prorows => '450',\n\nOff topic for this patch. Not sure it matters much, either.\n\n\"EXPLICIT_LABEL\" -- continuing my line of thought above, all labels\nare explicit, that's why they're called labels. Brainstorm:\n\nEXPLICIT_AS_LABEL\nEXPLICIT_AS\nNON_BARE_LABEL\n*shrug*\n\n+ # parser/kwlist.h lists each keyword as either bare or\n+ # explicit, but ecpg neither needs nor has any such\n\nPL/pgSQL also uses this script, so maybe just phrase it to exclude the\ncore keyword list.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 29 Jul 2020 20:03:35 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, May 19, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > What are the thoughts about then marking the postfix operator deprecated\n> > and eventually removing it?\n>\n> If we do this it'd require a plan. 
We'd have to also warn about the\n> feature deprecation in (at least) the CREATE OPERATOR man page, and\n> we'd have to decide how many release cycles the deprecation notices\n> need to stand for.\n>\n> If that's the intention, though, it'd be good to get those deprecation\n> notices published in v13 not v14.\n\nI imagine the release candidates are not too far away by now, and if\nwe are confident enough in the direction the patches in this thread\nare going, we should probably consider a deprecation notice soon.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 24 Aug 2020 10:28:17 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On Aug 24, 2020, at 12:28 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Tue, May 19, 2020 at 5:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> What are the thoughts about then marking the postfix operator deprecated\n>>> and eventually removing it?\n>> \n>> If we do this it'd require a plan. We'd have to also warn about the\n>> feature deprecation in (at least) the CREATE OPERATOR man page, and\n>> we'd have to decide how many release cycles the deprecation notices\n>> need to stand for.\n>> \n>> If that's the intention, though, it'd be good to get those deprecation\n>> notices published in v13 not v14.\n> \n> I imagine the release candidates are not too far away by now, and if\n> we are confident enough in the direction the patches in this thread\n> are going, we should probably consider a deprecation notice soon.\n\nIf so, we might want to also update the deprecation warning for the prefix !! operator in pg_operator.dat:\n\n{ oid => '389', descr => 'deprecated, use ! 
instead',\n oprname => '!!', oprkind => 'l', oprleft => '0', oprright => 'int8',\n oprresult => 'numeric', oprcode => 'numeric_fac' },\n\nThat will be the only remaining factorial operator if we remove postfix operators.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 24 Aug 2020 08:04:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On Jul 29, 2020, at 5:03 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Wed, Jul 22, 2020 at 8:47 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \n>> \n>>> On Jul 18, 2020, at 1:00 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>>> \n>>> pg_get_keywords() should probably have a column to display ability to\n>>> act as a bare col label. Perhaps a boolean? If so, what do you think\n>>> of using true/false for the new field in kwlist.h as well?\n> \n> Hi Mark, sorry for the delay.\n\nLikewise, John. Thanks for the review! I am attaching version 4 of the patch to address your comments.\n\n>> I have broken this into its own patch. I like using a BARE_LABEL / EXPLICIT_LABEL in kwlist.h because it is self-documenting. I don't care about the *exact* strings that we choose for that, but using TRUE/FALSE in kwlist.h makes it harder for a person adding a new keyword to know what to place there. If they guess \"FALSE\", and also don't know about adding the new keyword to the bare_label_keyword rule in gram.y, then those two mistakes will agree with each other and the person adding the keyword won't likely know they did it wrong. 
It is simple enough for gen_keywordlist.pl to convert between what we use in kwlist.h and a boolean value for kwlist_d.h, so I did it that way.\n> \n> Sounds fine to me.\n> \n>>> In the bikeshedding department, it seems \"implicit\" was chosen because\n>>> it was distinct from \"bare\". I think \"bare\" as a descriptor should be\n>>> kept throughout for readability's sake. Maybe BareColLabel could be\n>>> \"IDENT or bare_label_keyword\" for example. Same for the $status var.\n>> \n>> The category \"bare_label_keyword\" is used in v3. As for the $status var, I don't want to name that $bare, as I didn't go with your idea about using a boolean. $status = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" makes sense to me, more than $bare = \"BARE_LABEL\" vs \"EXPLICIT_LABEL\" does. \"status\" is still a bit vague, so more bikeshedding is welcome.\n> \n> Yeah, it's very generic, but it's hard to find a short word for\n> \"can-be-used-as-a-bare-column-label-ness\".\n\nThe construction colname AS colalias brings to mind the words \"pseudonym\" and \"alias\". The distinction we're trying to draw here is between implicit pseudonyms and explicit ones, but \"alias\" is shorter and simpler, so I like that better than \"pseudonym\". Both are labels, so adding \"label\" to the name doesn't really get us anything. The constructions \"implicit alias\" vs. \"explicit alias\" seem to me to be an improvement, along with their other forms like \"ImplicitAlias\", or \"implicit_alias\", etc., so I've used those in version 4.\n\nThe word \"status\" here really means something like \"plicity\" (implicit vs. 
explicit), but \"plicity\" isn't a word, so I used \"aliastype\" instead.\n\nI've replaced uses of \"bare\" with \"implicit\" or \"implicit_alias\" or similar.\n\n> \n>> This patch does not attempt to remove pre-existing postfix operators from existing databases, so users upgrading to the new major version who have custom postfix operators will find that pg_upgrade chokes trying to recreate the postfix operator. That's not great, but perhaps there is nothing automated that we could do for them that would be any better.\n> \n> I'm thinking it would be good to have something like\n> \n> select oid from pg_operator where oprright = 0 and oid >= FirstNormalObjectId;\n> \n> in the pre-upgrade check.\n\nDone. Testing an upgrade of a 9.1 test install, relying on the regression database having left over user defined postfix operators, gives this result:\n\npg_upgrade --old-bindir=/Users/mark.dilger/pg91/bin --old-datadir=/Users/mark.dilger/pg91/test_data --new-bindir=/Users/mark.dilger/pgtest/test_install/bin --new-datadir=/Users/mark.dilger/pgtest/test_data\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nChecking database user is the install user ok\nChecking database connection settings ok\nChecking for prepared transactions ok\nChecking for reg* data types in user tables ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for user defined postfix operators fatal\n\nYour installation contains user defined postfix operators, which is not\nsupported anymore. Consider dropping the postfix operators and replacing\nthem with prefix operators or function calls.\nA list of user defined postfix operators is in the file:\n postfix_ops.txt\n\nFailure, exiting\n\n\nWith the contents of postfix_ops.txt:\n\nIn database: regression\n (oid=27113) public.#@# (pg_catalog.int8)\n (oid=27114) public.#%# (pg_catalog.int8)\n\nwhich should be enough for a user to identify which operator is meant. I just invented that format. 
Let me know if there is a preferred way to lay out that information. \n\n> \n> Other comments:\n> \n> 0001:\n> \n> + errhint(\"postfix operator support has been discontinued\")));\n> \n> This language seems more appropriate for release notes -- I would word\n> the hint in the present, as in \"postfix operators are not supported\".\n> Ditto the words \"discontinuation\", \"has been removed\", and \"no longer\n> works\" elsewhere in the patch.\n\nChanged the hint to say \"Postfix operators are not supported.\" \n\nChanged the regression test comment and code comments to not use the objectionable language you mention.\n\n> +SELECT -5!;\n> +SELECT -0!;\n> +SELECT 0!;\n> +SELECT 100!;\n> \n> I think one negative and one non-negative case is enough to confirm\n> the syntax error.\n\nOk, done.\n\n> - gram.y still contains \"POSTFIXOP\" and \"postfix-operator\".\n> \n> - parse_expr.c looks like it has some now-unreachable code.\n\nGood point. I've removed or renamed that stuff. Some of it went away, but some stuff was shared between general postfix operators and ANY/ALL, so that just got renamed accordingly.\n\n> 0002:\n> \n> + * All keywords can be used explicitly as a column label in expressions\n> + * like 'SELECT 1234 AS keyword', but only some keywords can be used\n> + * implicitly as column labels in expressions like 'SELECT 1234 keyword'.\n> + * Those that can be used implicitly should be listed here.\n> \n> In my mind, \"AS\" is the thing that's implied when not present, so we\n> should reword this to use the \"bare\" designation when talking about\n> the labels. I think there are contexts elsewhere where the implicit\n> column label is \"col1, col2, col3...\". 
I can't remember offhand where\n> that is though.\n\nPer my rambling above, I think what's really implied or explicit when \"AS\" is missing or present is that we're making an alias, so \"implicit alias\" and \"explicit alias\" sound correct to me.\n\n> - * kwlist.h's table from one source of truth.)\n> + * kwlist.h's table from a common master list.)\n> \n> Off topic.\n\nRemoved. This appears to have been an unintentional revert of an unrelated commit.\n\n> 0003:\n> \n> First off, I get a crash when trying\n> \n> select * from pg_get_keywords();\n> \n> and haven't investigated further. I don't think the returned types\n> match, though.\n\nFixed.\n\n> Continuing on, I think 2 and 3 can be squashed together. If anything,\n> it should make revisiting cosmetic decisions easier.\n\nSquashed together.\n\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 4, \"bare\",\n> + BOOLOID, -1, 0);\n> \n> Perhaps something a bit meatier for the user-visible field name. I\n> don't have a great suggestion.\n\nI've changed it from \"bool bare\" to \"text aliastype\". (I still wish \"plicity\" were a word.) Rather than true/false, it returns \"implicit\"/\"explicit\".\n\n> - proname => 'pg_get_keywords', procost => '10', prorows => '400',\n> + proname => 'pg_get_keywords', procost => '10', prorows => '450',\n> \n> Off topic for this patch. Not sure it matters much, either.\n\nWell, I did touch that function a bit, adding a new column, and the number of rows returned is exactly 450, so if I'm not going to update it, who will? The count may increase over time if other keywords are added, but I doubt anybody who adds a single keyword would bother updating prorows here.\n\nI agree that it doesn't matter much. If you don't buy into the paragraph above, I'll remove it for the next patch version.\n\n> \"EXPLICIT_LABEL\" -- continuing my line of thought above, all labels\n> are explicit, that's why they're called labels.\n\nRight. Labels are explicit. 
\n\n> Brainstorm:\n> \n> EXPLICIT_AS_LABEL\n> EXPLICIT_AS\n> NON_BARE_LABEL\n> *shrug*\n\nChanged to EXPLICIT_ALIAS.\n\n> + # parser/kwlist.h lists each keyword as either bare or\n> + # explicit, but ecpg neither needs nor has any such\n> \n> PL/pgSQL also uses this script, so maybe just phrase it to exclude the\n> core keyword list.\n\nDone.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 25 Aug 2020 20:12:00 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 6:12 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> The construction colname AS colalias brings to mind the words \"pseudonym\" and \"alias\". The distinction we're trying to draw here is between implicit pseudoyms and explicit ones, but \"alias\" is shorter and simpler, so I like that better than \"pseudonym\". Both are labels, so adding \"label\" to the name doesn't really get us anything. The constructions \"implicit alias\" vs. \"explicit alias\" seem to me to be an improvement, along with their other forms like \"ImplicitAlias\", or \"implicit_alias\", etc., so I've used those in version 4.\n\n> The word \"status\" here really means something like \"plicity\" (implict vs. explicit), but \"plicity\" isn't a word, so I used \"aliastype\" instead.\n\nSeems fine.\n\n> A list of user defined postfix operators is in the file:\n> postfix_ops.txt\n>\n> Failure, exiting\n>\n>\n> With the contents of postfix_ops.txt:\n>\n> In database: regression\n> (oid=27113) public.#@# (pg_catalog.int8)\n> (oid=27114) public.#%# (pg_catalog.int8)\n>\n> which should be enough for a user to identify which operator is meant. I just invented that format. 
Let me know if there is a preferred way to lay out that information.\n\nNot sure if there's a precedent here, and seems fine to me.\n\n+ /*\n+ * If neither argument is specified, do not mention postfix operators, as\n+ * the user is unlikely to have meant to create one. It is more likely\n+ * they simply neglected to mention the args.\n+ */\n if (!OidIsValid(typeId1) && !OidIsValid(typeId2))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),\n- errmsg(\"at least one of leftarg or rightarg must be specified\")));\n+ errmsg(\"operator arguments must be specified\")));\n+\n+ /*\n+ * But if only the right arg is missing, they probably do intend to create\n+ * a postfix operator, so give them a hint about why that does not work.\n+ */\n+ if (!OidIsValid(typeId2))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),\n+ errmsg(\"operator right argument must be specified\"),\n+ errhint(\"Postfix operators are not supported.\")));\n\nThis is just a nitpick -- I think the comments in this section would\nflow better if order of checks were reversed, although the code might\nnot. I don't feel too strongly about it.\n\n- * between POSTFIXOP and Op. We can safely assign the same priority to\n- * various unreserved keywords as needed to resolve ambiguities (this can't\n- * have any bad effects since obviously the keywords will still behave the\n- * same as if they weren't keywords). We need to do this:\n+ * greater than Op. We can safely assign the same priority to various\n+ * unreserved keywords as needed to resolve ambiguities (this can't have any\n+ * bad effects since obviously the keywords will still behave the same as if\n+ * they weren't keywords). We need to do this:\n\nI believe it's actually \"lower than Op\", and since POSTFIXOP is gone\nit doesn't seem to matter how low it is. In fact, I found that the\nlines with INDENT and UNBOUNDED now work as the lowest precedence\ndeclarations. 
Maybe that's worth something?\n\nFollowing on Peter E.'s example upthread, GENERATED can be removed\nfrom precedence, and I also found the same is true for PRESERVE and\nSTRIP_P.\n\nI've attached a patch which applies on top of 0001 to demonstrate\nthis. There might possibly still be syntax errors for things not\ncovered in the regression test, but there are no s/r conflicts at\nleast.\n\n-{ oid => '389', descr => 'deprecated, use ! instead',\n+{ oid => '389', descr => 'factorial',\n\nHmm, no objection, but it could be argued that we should just go ahead\nand remove \"!!\" also, keeping only \"factorial()\". If we're going to\nbreak a small amount of code using the normal math expression, it\nseems silly to use a non-standard one that we deprecated before 2011\n(cf. 908ab802864). On the other hand, removing it doesn't buy us\nanything.\n\nSome leftovers...\n\n...in catalog/namespace.c:\n\nOpernameGetOprid()\n * Pass oprleft = InvalidOid for a prefix op, oprright = InvalidOid for\n * a postfix op.\n\nOpernameGetCandidates()\n * The returned items always have two args[] entries --- one or the other\n * will be InvalidOid for a prefix or postfix oprkind. nargs is 2, too.\n\n...in nodes/print.c:\n\n/* we print prefix and postfix ops the same... */\n\n\n> > 0002:\n> >\n> > + * All keywords can be used explicitly as a column label in expressions\n> > + * like 'SELECT 1234 AS keyword', but only some keywords can be used\n> > + * implicitly as column labels in expressions like 'SELECT 1234 keyword'.\n> > + * Those that can be used implicitly should be listed here.\n> >\n> > In my mind, \"AS\" is the thing that's implied when not present, so we\n> > should reword this to use the \"bare\" designation when talking about\n> > the labels. I think there are contexts elsewhere where the implicit\n> > column label is \"col1, col2, col3...\". 
I can't remember offhand where\n> > that is though.\n>\n> Per my rambling above, I think what's really implied or explicit when \"AS\" is missing or present is that we're making an alias, so \"implicit alias\" and \"explicit alias\" sound correct to me.\n\nSounds fine.\n\n> > - proname => 'pg_get_keywords', procost => '10', prorows => '400',\n> > + proname => 'pg_get_keywords', procost => '10', prorows => '450',\n> >\n> > Off topic for this patch. Not sure it matters much, either.\n>\n> Well, I did touch that function a bit, adding a new column, and the number of rows returned is exactly 450, so if I'm not going to update it, who will? The count may increase over time if other keywords are added, but I doubt anybody who adds a single keyword would bother updating prorows here.\n>\n> I agree that it doesn't matter much. If you don't buy into the paragraph above, I'll remove it for the next patch version.\n\nNo strong feelings -- if it were me, I'd put in a separate\n\"by-the-way\" patch at the end, and the committer can squash at their\ndiscretion. But not really worth a separate thread.\n\n# select aliastype, count(*) from pg_get_keywords() group by 1;\n aliastype | count\n-----------+-------\n explicit | 39\n implicit | 411\n(2 rows)\n\nNice!\n\nThe binary has increased by ~16kB, mostly because of the new keyword\nlist in the grammar, but that's pretty small, all things considered.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 26 Aug 2020 16:33:20 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On Aug 26, 2020, at 6:33 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> + /*\n> + * If neither argument is specified, do not mention postfix operators, as\n> + * the user is unlikely to have meant to create one. 
It is more likely\n> + * they simply neglected to mention the args.\n> + */\n> if (!OidIsValid(typeId1) && !OidIsValid(typeId2))\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),\n> - errmsg(\"at least one of leftarg or rightarg must be specified\")));\n> + errmsg(\"operator arguments must be specified\")));\n> +\n> + /*\n> + * But if only the right arg is missing, they probably do intend to create\n> + * a postfix operator, so give them a hint about why that does not work.\n> + */\n> + if (!OidIsValid(typeId2))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_FUNCTION_DEFINITION),\n> + errmsg(\"operator right argument must be specified\"),\n> + errhint(\"Postfix operators are not supported.\")));\n> \n> This is just a nitpick -- I think the comments in this section would\n> flow better if order of checks were reversed, although the code might\n> not. I don't feel too strongly about it.\n\nI don't want to reorder the code, but combining the two code comments together allows the comment to flow more as you indicate. Done for v5.\n\n> \n> - * between POSTFIXOP and Op. We can safely assign the same priority to\n> - * various unreserved keywords as needed to resolve ambiguities (this can't\n> - * have any bad effects since obviously the keywords will still behave the\n> - * same as if they weren't keywords). We need to do this:\n> + * greater than Op. We can safely assign the same priority to various\n> + * unreserved keywords as needed to resolve ambiguities (this can't have any\n> + * bad effects since obviously the keywords will still behave the same as if\n> + * they weren't keywords). We need to do this:\n> \n> I believe it's actually \"lower than Op\",\n\nRight. I have fixed the comment. Thanks for noticing.\n\n> and since POSTFIXOP is gone\n> it doesn't seem to matter how low it is. In fact, I found that the\n> lines with INDENT and UNBOUNDED now work as the lowest precedence\n> declarations. 
Maybe that's worth something?\n> \n> Following on Peter E.'s example upthread, GENERATED can be removed\n> from precedence, and I also found the same is true for PRESERVE and\n> STRIP_P.\n> \n> I've attached a patch which applies on top of 0001 to demonstrate\n> this. There might possibly still be syntax errors for things not\n> covered in the regression test, but there are no s/r conflicts at\n> least.\n\nI don't have any problem with the changes you made in your patch, but building on your changes I also found that the following cleanup causes no apparent problems:\n\n-%nonassoc UNBOUNDED /* ideally should have same precedence as IDENT */\n-%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n+%nonassoc UNBOUNDED IDENT\n+%nonassoc PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n\nWhich does what the old comment apparently wanted.\n\n> \n> -{ oid => '389', descr => 'deprecated, use ! instead',\n> +{ oid => '389', descr => 'factorial',\n> \n> Hmm, no objection, but it could be argued that we should just go ahead\n> and remove \"!!\" also, keeping only \"factorial()\". If we're going to\n> break a small amount of code using the normal math expression, it\n> seems silly to use a non-standard one that we deprecated before 2011\n> (cf. 908ab802864). On the other hand, removing it doesn't buy us\n> anything.\n\nYeah, I don't have strong feelings about this. We should decide soon, though, because we should have the deprecation warnings resolved in time for v13, even if this patch hasn't been applied yet.\n\n> Some leftovers...\n> \n> ...in catalog/namespace.c:\n> \n> OpernameGetOprid()\n> * Pass oprleft = InvalidOid for a prefix op, oprright = InvalidOid for\n> * a postfix op.\n> \n> OpernameGetCandidates()\n> * The returned items always have two args[] entries --- one or the other\n> * will be InvalidOid for a prefix or postfix oprkind. nargs is 2, too.\n> \n> ...in nodes/print.c:\n> \n> /* we print prefix and postfix ops the same... 
*/\n\nCleaned up.\n\n>>> 0002:\n>>> \n>>> + * All keywords can be used explicitly as a column label in expressions\n>>> + * like 'SELECT 1234 AS keyword', but only some keywords can be used\n>>> + * implicitly as column labels in expressions like 'SELECT 1234 keyword'.\n>>> + * Those that can be used implicitly should be listed here.\n>>> \n>>> In my mind, \"AS\" is the thing that's implied when not present, so we\n>>> should reword this to use the \"bare\" designation when talking about\n>>> the labels. I think there are contexts elsewhere where the implicit\n>>> column label is \"col1, col2, col3...\". I can't remember offhand where\n>>> that is though.\n>> \n>> Per my rambling above, I think what's really implied or explicit when \"AS\" is missing or present is that we're making an alias, so \"implicit alias\" and \"explicit alias\" sound correct to me.\n> \n> Sounds fine.\n> \n>>> - proname => 'pg_get_keywords', procost => '10', prorows => '400',\n>>> + proname => 'pg_get_keywords', procost => '10', prorows => '450',\n>>> \n>>> Off topic for this patch. Not sure it matters much, either.\n>> \n>> Well, I did touch that function a bit, adding a new column, and the number of rows returned is exactly 450, so if I'm not going to update it, who will? The count may increase over time if other keywords are added, but I doubt anybody who adds a single keyword would bother updating prorows here.\n>> \n>> I agree that it doesn't matter much. If you don't buy into the paragraph above, I'll remove it for the next patch version.\n> \n> No strong feelings -- if it were me, I'd put in a separate\n> \"by-the-way\" patch at the end, and the committer can squash at their\n> discretion. 
But not really worth a separate thread.\n\nOk, I've split this out into \n\n> \n> # select aliastype, count(*) from pg_get_keywords() group by 1;\n> aliastype | count\n> -----------+-------\n> explicit | 39\n> implicit | 411\n> (2 rows)\n> \n> Nice!\n\nI think the number of people porting from other RDBMSs to PostgreSQL who are helped by this patch scales with the number of keywords moved to the \"implicit\" category, and we've done pretty well here.\n\nI wonder if we can get more comments for or against this patch, at least in principle, in the very near future, to help determine whether the deprecation notices should go into v13?\n\n> The binary has increased by ~16kB, mostly because of the new keyword\n> list in the grammar, but that's pretty small, all things considered.\n\nRight, thanks for checking the size increase.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Aug 2020 08:57:53 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 11:57 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I wonder if we can get more comments for or against this patch, at least in principle, in the very near future, to help determine whether the deprecation notices should go into v13?\n\nSpeaking of that, has somebody written a specific patch for that?\nLike, exactly what are we proposing that this deprecation warning is\ngoing to say?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 26 Aug 2020 13:55:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Wed, Aug 26, 2020 at 6:57 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Aug 26, 2020, at 6:33 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> > and since POSTFIXOP is gone\n> > it doesn't seem to matter how low it is. In fact, I found that the\n> > lines with IDENT and UNBOUNDED now work as the lowest precedence\n> > declarations. 
Some additional input would be good here.\n\nWhile looking for a place to put a v13 deprecation notice, I found\nsome more places in the docs which need updating:\n\nref/create_operator.sgml\n\n\"At least one of LEFTARG and RIGHTARG must be defined. For binary\noperators, both must be defined. For right unary operators, only\nLEFTARG should be defined, while for left unary operators only\nRIGHTARG should be defined.\"\n\nref/create_opclass.sgml\n\n\"In an OPERATOR clause, the operand data type(s) of the operator, or\nNONE to signify a left-unary or right-unary operator.\"\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 27 Aug 2020 12:24:37 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 8:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 26, 2020 at 11:57 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > I wonder if we can get more comments for or against this patch, at least in principle, in the very near future, to help determine whether the deprecation notices should go into v13?\n>\n> Speaking of that, has somebody written a specific patch for that?\n> Like, exactly what are we proposing that this deprecation warning is\n> going to say?\n\nWell, for starters it'll say the obvious, but since we have a concrete\ntimeframe, maybe a <note> tag to make it more visible, like in the\nattached, compressed to avoid confusing the cfbot.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 27 Aug 2020 14:11:51 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Thu, Aug 27, 2020 at 7:12 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> Well, for starters it'll say the obvious, but since we have a concrete\n> timeframe, maybe a <note> tag to make it more visible, like in the\n> attached, compressed to avoid confusing the cfbot.\n\nYeah, that looks like a good spot. I think we should also add\nsomething to the documentation of the factorial operator, mentioning\nthat it will be going away. Perhaps we can advise people to write !!3\ninstead of 3! for forward-compatibility, or maybe we should instead\nsuggest numeric_fac(3).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Aug 2020 09:50:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Yeah, that looks like a good spot. I think we should also add\n> something to the documentation of the factorial operator, mentioning\n> that it will be going away. Perhaps we can advise people to write !!3\n> instead of 3! for forward-compatibility, or maybe we should instead\n> suggest numeric_fac(3).\n\nWell, the !! operator itself has been \"deprecated\" for a long time:\n\nregression=# \\do+ !!\n List of operators\n Schema | Name | Left arg type | Right arg type | Result type | Function | Description \n------------+------+---------------+----------------+-------------+-------------+---------------------------\n pg_catalog | !! | | bigint | numeric | numeric_fac | deprecated, use ! instead\n pg_catalog | !! 
| | tsquery | tsquery | tsquery_not | NOT tsquery\n(2 rows)\n\nI'm a bit inclined to kill them both off and standardize on factorial()\n(not numeric_fac).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Aug 2020 10:04:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On Aug 27, 2020, at 7:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Yeah, that looks like a good spot. I think we should also add\n>> something to the documentation of the factorial operator, mentioning\n>> that it will be going away. Perhaps we can advise people to write !!3\n>> instead of 3! for forward-compatibility, or maybe we should instead\n>> suggest numeric_fac(3).\n> \n> Well, the !! operator itself has been \"deprecated\" for a long time:\n> \n> regression=# \\do+ !!\n> List of operators\n> Schema | Name | Left arg type | Right arg type | Result type | Function | Description \n> ------------+------+---------------+----------------+-------------+-------------+---------------------------\n> pg_catalog | !! | | bigint | numeric | numeric_fac | deprecated, use ! instead\n> pg_catalog | !! | | tsquery | tsquery | tsquery_not | NOT tsquery\n> (2 rows)\n> \n> I'm a bit inclined to kill them both off and standardize on factorial()\n> (not numeric_fac).\n> \n> \t\t\tregards, tom lane\n\nJust for historical context, it seems that when you committed 908ab80286401bb20a519fa7dc7a837631f20369 in 2011, you were choosing one operator per underlying proc to be the canonical operator name, and deprecating all other operators based on the same proc. You chose postfix ! as the canonical operator for numeric_fac and deprecated prefix !!, but I think I can infer from that commit that if postfix ! did not exist, prefix !! 
would have been the canonical operator and would not have been deprecated.\n\nThe main reason I did not remove prefix !! in this patch series is that the patch is about removing postfix operator support, and so it seemed off topic. But if there is general agreement to remove prefix !!, I'll put that in the next patch.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 27 Aug 2020 07:14:26 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Thu, Aug 27, 2020 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, the !! operator itself has been \"deprecated\" for a long time:\n>\n> regression=# \\do+ !!\n> List of operators\n> Schema | Name | Left arg type | Right arg type | Result type | Function | Description\n> ------------+------+---------------+----------------+-------------+-------------+---------------------------\n> pg_catalog | !! | | bigint | numeric | numeric_fac | deprecated, use ! instead\n> pg_catalog | !! | | tsquery | tsquery | tsquery_not | NOT tsquery\n> (2 rows)\n>\n> I'm a bit inclined to kill them both off and standardize on factorial()\n> (not numeric_fac).\n\nWorks for me. !! hasn't been marked as deprecated in the\ndocumentation, only the operator comment, which probably not many\npeople look at. But I don't see a problem updating the documentation\nnow to say:\n\n- !! is going away, use factorial()\n- ! is going away, use factorial()\n- postfix operators are going away\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Aug 2020 10:19:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Thu, Aug 27, 2020 at 5:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm a bit inclined to kill them both off and standardize on factorial()\n> (not numeric_fac).\n\n+1\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Aug 2020 11:43:58 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Wed, Aug 26, 2020 at 6:57 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I don't have any problem with the changes you made in your patch, but building on your changes I also found that the following cleanup causes no apparent problems:\n>\n> -%nonassoc UNBOUNDED /* ideally should have same precedence as IDENT */\n> -%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n> +%nonassoc UNBOUNDED IDENT\n> +%nonassoc PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n\nThinking about this some more, I don't think we need to do any\nprecedence refactoring in order to apply the functional change of\nthese patches. We could leave that for follow-on patches once we\nfigure out the best way forward, which could take some time.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 28 Aug 2020 11:44:54 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Fri, Aug 28, 2020 at 4:44 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> On Thu, Aug 27, 2020 at 5:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm a bit inclined to kill them both off and standardize on factorial()\n> > (not numeric_fac).\n>\n> +1\n\nHere's a modified version of John's patch that also describes ! 
and !!\nas deprecated. It looked too wordy to me to recommend what should be\nused instead, so I have not done that.\n\nComments?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 28 Aug 2020 11:00:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Fri, Aug 28, 2020 at 11:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Aug 28, 2020 at 4:44 AM John Naylor <john.naylor@2ndquadrant.com> wrote:\n> > On Thu, Aug 27, 2020 at 5:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I'm a bit inclined to kill them both off and standardize on factorial()\n> > > (not numeric_fac).\n> >\n> > +1\n>\n> Here's a modified version of John's patch that also describes ! and !!\n> as deprecated. It looked too wordy to me to recommend what should be\n> used instead, so I have not done that.\n>\n> Comments?\n\nNever mind, I see there's a new thread for this. Sorry for the noise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 11:01:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On Aug 27, 2020, at 2:24 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Wed, Aug 26, 2020 at 6:57 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \n>> \n>>> On Aug 26, 2020, at 6:33 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n>> \n>>> and since POSTFIXOP is gone\n>>> it doesn't seem to matter how low it is. In fact, I found that the\n>>> lines with IDENT and UNBOUNDED now work as the lowest precedence\n>>> declarations. 
Maybe that's worth something?\n>>> \n>>> Following on Peter E.'s example upthread, GENERATED can be removed\n>>> from precedence, and I also found the same is true for PRESERVE and\n>>> STRIP_P.\n>>> \n>>> I've attached a patch which applies on top of 0001 to demonstrate\n>>> this. There might possibly still be syntax errors for things not\n>>> covered in the regression test, but there are no s/r conflicts at\n>>> least.\n>> \n>> I don't have any problem with the changes you made in your patch, but building on your changes I also found that the following cleanup causes no apparent problems:\n>> \n>> -%nonassoc UNBOUNDED /* ideally should have same precedence as IDENT */\n>> -%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n>> +%nonassoc UNBOUNDED IDENT\n>> +%nonassoc PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP\n>> \n>> Which does what the old comment apparently wanted.\n> \n> This changes the context of the comment at the top of the block:\n> \n> * To support target_el without AS, we must give IDENT an explicit priority\n> * lower than Op. We can safely assign the same priority to various\n> * unreserved keywords as needed to resolve ambiguities (this can't have any\n> \n> This also works:\n> \n> -%nonassoc UNBOUNDED /* ideally should have same\n> precedence as IDENT */\n> -%nonassoc IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING\n> CUBE ROLLUP\n> +%nonassoc UNBOUNDED IDENT PARTITION RANGE ROWS GROUPS CUBE ROLLUP\n> +%nonassoc PRECEDING FOLLOWING\n> \n> Not sure if either is better. Some additional input would be good here.\n\nYou wrote in a later email:\n\n> Thinking about this some more, I don't think we need to do any\n> precedence refactoring in order to apply the functional change of\n> these patches. 
We could leave that for follow-on patches once we\n> figure out the best way forward, which could take some time.\n\nSo I tried to leave the precedence stuff alone as much as possible in this next patch set. I agree such refactoring can be done separately, and at a later time.\n\n> \n> While looking for a place to put a v13 deprecation notice, I found\n> some more places in the docs which need updating:\n> \n> ref/create_operator.sgml\n> \n> \"At least one of LEFTARG and RIGHTARG must be defined. For binary\n> operators, both must be defined. For right unary operators, only\n> LEFTARG should be defined, while for left unary operators only\n> RIGHTARG should be defined.\"\n> \n> ref/create_opclass.sgml\n> \n> \"In an OPERATOR clause, the operand data type(s) of the operator, or\n> NONE to signify a left-unary or right-unary operator.\"\n\nSome changes were made on another thread [1] for the deprecation notices, committed recently by Tom, and I think this patch set is compatible with what was done there. This patch set is intended for commit against master, targeted for PostgreSQL 14, so the deprecation notices are removed along with the things that were deprecated. The references to right-unary operators that you call out, above, have been removed.\n\n\n\n\n\n\n\n[1] https://postgr.es/m/BE2DF53D-251A-4E26-972F-930E523580E9@enterprisedb.com\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 1 Sep 2020 12:00:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Tue, Sep 1, 2020 at 10:00 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> Some changes were made on another thread [1] for the deprecation notices, committed recently by Tom, and I think this patch set is compatible with what was done there. 
This patch set is intended for commit against master, targeted for PostgreSQL 14, so the deprecation notices are removed along with the things that were deprecated. The references to right-unary operators that you call out, above, have been removed.\n\nHi Mark,\n\nLooks good. Just a couple things I found in 0001:\n\nThe factorial operators should now be removed from func.sgml.\n\nFor pg_dump, should we issue a pg_log_warning() (or stronger)\nsomewhere if user-defined postfix operators are found? I'm looking at\nthe example of \"WITH OIDS\" in pg_dump.c.\n\nNitpick: these can be removed, since we already test factorial() in this file:\n\n-SELECT 4!;\n-SELECT !!3;\n+SELECT factorial(4);\n+SELECT factorial(3);\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 2 Sep 2020 11:33:57 +0300", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "> On Sep 2, 2020, at 1:33 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Tue, Sep 1, 2020 at 10:00 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> Some changes were made on another thread [1] for the deprecation notices, committed recently by Tom, and I think this patch set is compatible with what was done there. This patch set is intended for commit against master, targeted for PostgreSQL 14, so the deprecation notices are removed along with the things that were deprecated. The references to right-unary operators that you call out, above, have been removed.\n> \n> Hi Mark,\n> \n> Looks good. Just a couple things I found in 0001:\n> \n> The factorial operators should now be removed from func.sgml.\n\nRight you are. Removed in v7.\n\n> For pg_dump, should we issue a pg_log_warning() (or stronger)\n> somewhere if user-defined postfix operators are found? 
I'm looking at\n> the example of \"WITH OIDS\" in pg_dump.c.\n\nSince newer pg_dump binaries can be used to dump data from older servers, and since users might then load that dump back into an older server, I think doing anything stronger than a pg_log_warning() would be incorrect. I did not find precedents under comparable circumstances for taking stronger actions than pg_log_warning. I assume we can't, for example, omit the operator from the dump, nor can we abort the process.\n\nA pg_log_warning has been added in v7.\n\nDumping right-unary (postfix) operators should work (with a warning) in v7. I think pg_dump in v6 was broken in this regard.\n \n> Nitpick: these can be removed, since we already test factorial() in this file:\n> \n> -SELECT 4!;\n> -SELECT !!3;\n> +SELECT factorial(4);\n> +SELECT factorial(3);\n\nI was on the fence between removing those (as you suggest) vs. converting them to function calls (as v6 did). They are removed in v7. \n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 3 Sep 2020 08:49:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Thu, Sep 3, 2020 at 11:50 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> [v7]\n\nOk, I've marked it ready for committer.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 7 Sep 2020 11:43:00 -0400", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Thu, Sep 3, 2020 at 11:50 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Since newer pg_dump binaries can be used to dump data from older servers, and since users might then load that dump back into an older server, I think doing anything stronger than a pg_log_warning() would be incorrect. I did not find precedents under comparable circumstances for taking stronger actions than pg_log_warning. I assume we can't, for example, omit the operator from the dump, nor can we abort the process.\n\nI'm not sure that this is the right solution. Generally, the\nrecommendation is that you should use the pg_dump that corresponds to\nthe server version where you want to do the reload, so if you're\nhoping to dump 9.6 and restore on 11, you should be using the pg_dump\nfrom 11, not 14. So my thought would be that if there are user-defined\npostfix operators, pg_dump ought to error out. However, that could be\ninconvenient for people who are using pg_dump in ways that are maybe\nnot what we would recommend but which may happen to work but for this\nissue, so I'm not sure. On the third hand, though, we think that there\nare very few user-defined postfix operators out there, so if we just\ngive an error, we probably won't be inconveniencing many people.\n\nI'm not sure who is going to commit this work, and that person may\nhave a different preference than me. However, if it's me, I'd like to\nsee the removal of the existing postfix operators broken off into its\nown patch, separate from the removal of the underlying facility to\nhave postfix operators.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Sep 2020 11:06:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Sep 3, 2020 at 11:50 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Since newer pg_dump binaries can be used to dump data from older servers, and since users might then load that dump back into an older server, I think doing anything stronger than a pg_log_warning() would be incorrect. I did not find precedents under comparable circumstances for taking stronger actions than pg_log_warning. I assume we can't, for example, omit the operator from the dump, nor can we abort the process.\n\n> I'm not sure that this is the right solution. Generally, the\n> recommendation is that you should use the pg_dump that corresponds to\n> the server version where you want to do the reload, so if you're\n> hoping to dump 9.6 and restore on 11, you should be using the pg_dump\n> from 11, not 14. So my thought would be that if there are user-defined\n> postfix operators, pg_dump ought to error out. However, that could be\n> inconvenient for people who are using pg_dump in ways that are maybe\n> not what we would recommend but which may happen to work but for this\n> issue, so I'm not sure. On the third hand, though, we think that there\n> are very few user-defined postfix operators out there, so if we just\n> give an error, we probably won't be inconveniencing many people.\n\nMy inclination is to simply not change pg_dump. There is no need to break\nthe use-case of loading the output back into the server version it came\nfrom, if we don't have to. If the output is getting loaded into a server\nthat lacks postfix operators, that server can throw the error. There's no\nreal gain in having pg_dump prejudge the issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Sep 2020 11:36:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "\n\n> On Sep 11, 2020, at 8:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Thu, Sep 3, 2020 at 11:50 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> Since newer pg_dump binaries can be used to dump data from older servers, and since users might then load that dump back into an older server, I think doing anything stronger than a pg_log_warning() would be incorrect. I did not find precedents under comparable circumstances for taking stronger actions than pg_log_warning. I assume we can't, for example, omit the operator from the dump, nor can we abort the process.\n> \n>> I'm not sure that this is the right solution. Generally, the\n>> recommendation is that you should use the pg_dump that corresponds to\n>> the server version where you want to do the reload, so if you're\n>> hoping to dump 9.6 and restore on 11, you should be using the pg_dump\n>> from 11, not 14. So my thought would be that if there are user-defined\n>> postfix operators, pg_dump ought to error out. However, that could be\n>> inconvenient for people who are using pg_dump in ways that are maybe\n>> not what we would recommend but which may happen to work but for this\n>> issue, so I'm not sure. On the third hand, though, we think that there\n>> are very few user-defined postfix operators out there, so if we just\n>> give an error, we probably won't be inconveniencing many people.\n> \n> My inclination is to simply not change pg_dump. There is no need to break\n> the use-case of loading the output back into the server version it came\n> from, if we don't have to. If the output is getting loaded into a server\n> that lacks postfix operators, that server can throw the error. 
There's no\n> real gain in having pg_dump prejudge the issue.\n\nI think some kind of indication that the dump won't be loadable is useful if they're planning to move the dump file across an expensive link, or if they intend to blow away the old data directory to make room for the new. Whether that indication should be in the form of a warning or an error is less clear to me. Whatever we do here, I think it sets a precedent for how such situations are handled in the future, so maybe focusing overmuch on the postfix operator issue is less helpful than on the broader concept. What, for example, would we do if we someday dropped GiST support? Print a warning when dumping a database with GiST indexes? Omit the indexes? Abort the dump?\n\nThe docs at https://www.postgresql.org/docs/12/app-pgdump.html say:\n\n> Because pg_dump is used to transfer data to newer versions of PostgreSQL, the output of pg_dump can be expected to load into PostgreSQL server versions newer than pg_dump's version.\n<snip>\n> Also, it is not guaranteed that pg_dump's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version.\n\n\nI think somewhere around here the docs need to call out what happens when the older major version supported a feature that has been dropped from the newer major version.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Sep 2020 09:39:36 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Sep 11, 2020, at 8:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My inclination is to simply not change pg_dump. There is no need to break\n>> the use-case of loading the output back into the server version it came\n>> from, if we don't have to. 
If the output is getting loaded into a server\n>> that lacks postfix operators, that server can throw the error. There's no\n>> real gain in having pg_dump prejudge the issue.\n\n> I think some kind of indication that the dump won't be loadable is\n> useful if they're planning to move the dump file across an expensive\n> link, or if they intend to blow away the old data directory to make room\n> for the new. Whether that indication should be in the form of a warning\n> or an error is less clear to me.\n\nI think definitely not an error, because that breaks a plausible (even if\nnot recommended) use-case.\n\n> Whatever we do here, I think it sets a precedent for how such situations\n> are handled in the future, so maybe focusing overmuch on the postfix\n> operator issue is less helpful than on the broader concept. What, for\n> example, would we do if we someday dropped GiST support?\n\nI'm not sure that there is or should be a one-size-fits-all policy.\nWe do actually have multiple precedents already:\n\n* DefineIndex substitutes \"gist\" for \"rtree\" to allow transparent updating\nof dumps from DBs that used the old rtree AM.\n\n* Up till very recently (84eca14bc), ResolveOpClass had similar hacks to\nsubstitute for old opclass names.\n\n* bb03010b9 and e58a59975 got rid of other server-side hacks for\nconverting old dump files.\n\nSo generally the preference is to make the server deal with conversion\nissues; and this must be so, since what you have to work with may be a\ndump taken with an old pg_dump. In this case, though, it doesn't seem\nlike there's any plausible way for the server to translate old DDL.\n\nAs for the pg_dump side, aside from the WITH OIDS precedent you mentioned,\nthere was till recently (d9fa17aa7) code to deal with unconvertible\npre-7.1 aggregates. That code issued a pg_log_warning and then ignored\n(didn't dump) the aggregate. 
I think it didn't have much choice about\nthe latter step because, if memory serves, there simply wasn't any way to\nrepresent those old aggregates in the new CREATE AGGREGATE syntax; so we\ncouldn't leave it to the server to decide whether to throw error or not.\n(It's also possible, given how far back that was, that we simply weren't\nbeing very considerate of upgrade issues. It's old enough that I would\nnot take it as great precedent. But it is a precedent.)\n\nThe behavior of WITH OIDS is to issue a pg_log_warning and then ignore\nthe property. I do not much care for this, although I see the point that\nwe don't want to stick WITH OIDS into the CREATE TABLE because then the\nCREATE would fail, leaving the dump completely unusable on newer servers.\nMy choice would have been to write CREATE TABLE without that option and\nthen add ALTER TABLE ... WITH OIDS. In this way the dump script does\nwhat it should when restoring into an old server, while if you load into\na new server you hear about it --- and you can ignore the error if you\nwant.\n\nI think the right thing for postfix operators is probably to issue\npg_log_warning and then dump the object anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Sep 2020 14:25:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On Sep 11, 2020, at 11:25 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Sep 11, 2020, at 8:36 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> My inclination is to simply not change pg_dump. There is no need to break\n>>> the use-case of loading the output back into the server version it came\n>>> from, if we don't have to. If the output is getting loaded into a server\n>>> that lacks postfix operators, that server can throw the error. 
There's no\n>>> real gain in having pg_dump prejudge the issue.\n> \n>> I think some kind of indication that the dump won't be loadable is\n>> useful if they're planning to move the dump file across an expensive\n>> link, or if they intend to blow away the old data directory to make room\n>> for the new. Whether that indication should be in the form of a warning\n>> or an error is less clear to me.\n> \n> I think definitely not an error, because that breaks a plausible (even if\n> not recommended) use-case.\n> \n>> Whatever we do here, I think it sets a precedent for how such situations\n>> are handled in the future, so maybe focusing overmuch on the postfix\n>> operator issue is less helpful than on the broader concept. What, for\n>> example, would we do if we someday dropped GiST support?\n> \n> I'm not sure that there is or should be a one-size-fits-all policy.\n> We do actually have multiple precedents already:\n> \n> * DefineIndex substitutes \"gist\" for \"rtree\" to allow transparent updating\n> of dumps from DBs that used the old rtree AM.\n> \n> * Up till very recently (84eca14bc), ResolveOpClass had similar hacks to\n> substitute for old opclass names.\n> \n> * bb03010b9 and e58a59975 got rid of other server-side hacks for\n> converting old dump files.\n> \n> So generally the preference is to make the server deal with conversion\n> issues; and this must be so, since what you have to work with may be a\n> dump taken with an old pg_dump. In this case, though, it doesn't seem\n> like there's any plausible way for the server to translate old DDL.\n> \n> As for the pg_dump side, aside from the WITH OIDS precedent you mentioned,\n> there was till recently (d9fa17aa7) code to deal with unconvertible\n> pre-7.1 aggregates. That code issued a pg_log_warning and then ignored\n> (didn't dump) the aggregate. 
I think it didn't have much choice about\n> the latter step because, if memory serves, there simply wasn't any way to\n> represent those old aggregates in the new CREATE AGGREGATE syntax; so we\n> couldn't leave it to the server to decide whether to throw error or not.\n> (It's also possible, given how far back that was, that we simply weren't\n> being very considerate of upgrade issues. It's old enough that I would\n> not take it as great precedent. But it is a precedent.)\n> \n> The behavior of WITH OIDS is to issue a pg_log_warning and then ignore\n> the property. I do not much care for this, although I see the point that\n> we don't want to stick WITH OIDS into the CREATE TABLE because then the\n> CREATE would fail, leaving the dump completely unusable on newer servers.\n> My choice would have been to write CREATE TABLE without that option and\n> then add ALTER TABLE ... WITH OIDS. In this way the dump script does\n> what it should when restoring into an old server, while if you load into\n> a new server you hear about it --- and you can ignore the error if you\n> want.\n> \n> I think the right thing for postfix operators is probably to issue\n> pg_log_warning and then dump the object anyway.\n\nThat happens to be the patch behavior as it stands now.\n\nAnother option would be to have pg_dump take a strictness mode option. I don't think the option should have anything to do with postfix operators specifically, but be more general like --dump-incompatible-objects vs. --omit-incompatible-objects vs. --error-on-incompatible-objects vs. --do-your-best-to-fixup-incompatible-objects, with one of those being the default (and with all of them having better names). 
If --error-on-incompatible-objects were the default, that would behave as Robert recommended upthread.\n\nI can totally see an objection to the added complexity of such options, so I'm really just putting this out on the list for comment.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Sep 2020 12:23:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Fri, Sep 11, 2020 at 3:23 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Another option would be to have pg_dump take a strictness mode option. I don't think the option should have anything to do with postfix operators specifically, but be more general like --dump-incompatible-objects vs. --omit-incompatible-objects vs. --error-on-incompatible-objects vs. --do-your-best-to-fixup-incompatible-objects, with one of those being the default (and with all of them having better names). If --error-on-incompatible-objects were the default, that would behave as Robert recommended upthread.\n>\n> I can totally see an objection to the added complexity of such options, so I'm really just putting this out on the list for comment.\n\nI'm not opposed to Tom's proposal. I just wanted to raise the issue\nfor discussion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Sep 2020 15:54:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On Sep 11, 2020, at 12:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Sep 11, 2020 at 3:23 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Another option would be to have pg_dump take a strictness mode option. 
I don't think the option should have anything to do with postfix operators specifically, but be more general like --dump-incompatible-objects vs. --omit-incompatible-objects vs. --error-on-incompatible-objects vs. --do-your-best-to-fixup-incompatible-objects, with one of those being the default (and with all of them having better names). If --error-on-incompatible-objects were the default, that would behave as Robert recommended upthread.\n>> \n>> I can totally see an objection to the added complexity of such options, so I'm really just putting this out on the list for comment.\n> \n> I'm not opposed to Tom's proposal. I just wanted to raise the issue\n> for discussion.\n\nAh, ok. I don't feel any need for changes, either. I'll leave the patch as it stands now.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 11 Sep 2020 12:56:59 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Sep 11, 2020, at 12:54 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Fri, Sep 11, 2020 at 3:23 PM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> Another option would be to have pg_dump take a strictness mode option. I don't think the option should have anything to do with postfix operators specifically, but be more general like --dump-incompatible-objects vs. --omit-incompatible-objects vs. --error-on-incompatible-objects vs. --do-your-best-to-fixup-incompatible-objects, with one of those being the default (and with all of them having better names). 
If --error-on-incompatible-objects were the default, that would behave as Robert recommended upthread.\n>>> I can totally see an objection to the added complexity of such options, so I'm really just putting this out on the list for comment.\n\n>> I'm not opposed to Tom's proposal. I just wanted to raise the issue\n>> for discussion.\n\n> Ah, ok. I don't feel any need for changes, either. I'll leave the\n> patch as it stands now.\n\nWe're in violent agreement it seems.\n\nAt some point it might be worth doing something like what Mark suggests\nabove, but this patch shouldn't be tasked with it. In any case, since\npg_dump does not know what the target server version really is, it's\ngoing to be hard for it to authoritatively distinguish what will work\nor not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Sep 2020 16:51:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm not sure who is going to commit this work, and that person may\n> have a different preference than me. However, if it's me, I'd like to\n> see the removal of the existing postfix operators broken off into its\n> own patch, separate from the removal of the underlying facility to\n> have postfix operators.\n\nI've pushed a subset of the v7-0001 patch to meet Robert's preference.\nContinuing to look at the rest of it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 16:19:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "So I've finished up applying 0001 and started to look at 0002\n... and I find the terminology you've chosen to be just really\nopaque and confusing. 
\"aliastype\" being \"implicit\" or \"explicit\"\nis not going to make any sense to anyone until they read the\nmanual, and it probably still won't make sense after that.\n\nIn the first place, the terminology we use for these things\nis usually \"column label\", not \"alias\"; see e.g.\nhttps://www.postgresql.org/docs/devel/queries-select-lists.html#QUERIES-COLUMN-LABELS\nLikewise, gram.y itself refers to the construct as a ColLabel.\nAliases are things that appear in the FROM clause.\n\nIn the second place, \"implicit\" vs \"explicit\" just doesn't make\nany sense to me. You could maybe say that the AS is implicit\nwhen you omit it, but the column label is surely not implicit;\nit's right there where you wrote it.\n\nI confess to not having paid very close attention to this thread\nlately, but the last I'd noticed the terminology proposed for\ninternal use was \"bare column label\", which I think is much better.\nAs for what to expose in pg_get_keywords, I think something like\n\"label_requires_as bool\" would be immediately understandable.\nIf you really want it to be an enum sort of thing, maybe the output\ncolumn title could be \"collabel\" with values \"bare\" or \"requires_AS\".\n\nSo I'm thinking about making these changes in gram.y:\n\nImplicitAlias -> BareColLabel\nimplicit_alias_keyword -> bare_label_keyword\n\nand corresponding terminology changes elsewhere.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Sep 2020 11:29:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "\n\n> On Sep 18, 2020, at 8:29 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> So I've finished up applying 0001 and started to look at 0002\n> ... and I find the terminology you've chosen to be just really\n> opaque and confusing. 
\"aliastype\" being \"implicit\" or \"explicit\"\n> is not going to make any sense to anyone until they read the\n> manual, and it probably still won't make sense after that.\n> \n> In the first place, the terminology we use for these things\n> is usually \"column label\", not \"alias\"; see e.g.\n> https://www.postgresql.org/docs/devel/queries-select-lists.html#QUERIES-COLUMN-LABELS\n> Likewise, gram.y itself refers to the construct as a ColLabel.\n> Aliases are things that appear in the FROM clause.\n> \n> In the second place, \"implicit\" vs \"explicit\" just doesn't make\n> any sense to me. You could maybe say that the AS is implicit\n> when you omit it, but the column label is surely not implicit;\n> it's right there where you wrote it.\n> \n> I confess to not having paid very close attention to this thread\n> lately, but the last I'd noticed the terminology proposed for\n> internal use was \"bare column label\", which I think is much better.\n> As for what to expose in pg_get_keywords, I think something like\n> \"label_requires_as bool\" would be immediately understandable.\n> If you really want it to be an enum sort of thing, maybe the output\n> column title could be \"collabel\" with values \"bare\" or \"requires_AS\".\n> \n> So I'm thinking about making these changes in gram.y:\n> \n> ImplicitAlias -> BareColLabel\n> implicit_alias_keyword -> bare_label_keyword\n> \n> and corresponding terminology changes elsewhere.\n\nThat sounds ok to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 18 Sep 2020 08:43:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Fri, Sep 18, 2020 at 11:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I confess to not having paid very close attention to this thread\n> lately, but the last I'd noticed the terminology proposed for\n> internal use was \"bare column label\", which I think is much better.\n\nI agree.\n\n> As for what to expose in pg_get_keywords, I think something like\n> \"label_requires_as bool\" would be immediately understandable.\n> If you really want it to be an enum sort of thing, maybe the output\n> column title could be \"collabel\" with values \"bare\" or \"requires_AS\".\n\nIt's sort of possible to be confused by \"label requires as\" since \"as\"\nis being used as a known but isn't really one generally speaking, but\nwe can't very well quote it so I don't know how to make it more clear.\n\n> So I'm thinking about making these changes in gram.y:\n>\n> ImplicitAlias -> BareColLabel\n> implicit_alias_keyword -> bare_label_keyword\n>\n> and corresponding terminology changes elsewhere.\n\n+1.\n\nThanks for picking this up; I am pretty excited about this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Sep 2020 13:42:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 18, 2020 at 11:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As for what to expose in pg_get_keywords, I think something like\n>> \"label_requires_as bool\" would be immediately understandable.\n>> If you really want it to be an enum sort of thing, maybe the output\n>> column title could be \"collabel\" with values \"bare\" or \"requires_AS\".\n\n> It's sort of possible to be confused by \"label requires as\" since \"as\"\n> is being used as a known but isn't really one generally speaking, but\n> we can't very well quote it so I don't know how to make it more clear.\n\nAfter re-reading the description of pg_get_keywords, I was reminded that\nwhat it outputs now is intended to provide both a machine-friendly\ndescription of the keyword category (\"catcode\") and a human-friendly\ndescription (\"catdesc\"). So we really should do likewise for the\nlabel property. What I now propose is to add two output columns:\n\nbarelabel bool (t or f, obviously)\nbaredesc text (\"can be bare label\" or \"requires AS\", possibly localized)\n\nFeel free to bikeshed on those details.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Sep 2020 14:11:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "On Fri, Sep 18, 2020 at 2:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After re-reading the description of pg_get_keywords, I was reminded that\n> what it outputs now is intended to provide both a machine-friendly\n> description of the keyword category (\"catcode\") and a human-friendly\n> description (\"catdesc\"). So we really should do likewise for the\n> label property. 
What I now propose is to add two output columns:\n>\n> barelabel bool (t or f, obviously)\n> baredesc text (\"can be bare label\" or \"requires AS\", possibly localized)\n\nThat might be over-engineered in a vacuum, but it seems like it may be\ncleaner to stick with the existing precedent than to diverge from it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Sep 2020 14:34:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 18, 2020 at 2:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I now propose is to add two output columns:\n>> \n>> barelabel bool (t or f, obviously)\n>> baredesc text (\"can be bare label\" or \"requires AS\", possibly localized)\n\n> That might be over-engineered in a vacuum, but it seems like it may be\n> cleaner to stick with the existing precedent than to diverge from it.\n\nYeah, my recollection of the pg_get_keywords design is that we couldn't\nagree on whether to emit a machine-friendly description or a\nhuman-friendly one, so we compromised by doing both :-(. But the same\nfactors exist with this addition --- you can make an argument for\npreferring either boolean or text output.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Sep 2020 15:31:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "Pushed with the discussed terminological changes and some other\nfooling about, including fixing the documentation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 18 Sep 2020 16:48:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" 
}, { "msg_contents": "On Fri, Sep 18, 2020 at 4:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pushed with the discussed terminological changes and some other\n> fooling about, including fixing the documentation.\n\nAwesome. Thanks!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 18 Sep 2020 20:18:16 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" }, { "msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> I believe it's actually \"lower than Op\", and since POSTFIXOP is gone\n> it doesn't seem to matter how low it is. In fact, I found that the\n> lines with INDENT and UNBOUNDED now work as the lowest precedence\n> declarations. Maybe that's worth something?\n\n> Following on Peter E.'s example upthread, GENERATED can be removed\n> from precedence, and I also found the same is true for PRESERVE and\n> STRIP_P.\n\nNow that the main patch is pushed, I went back to revisit this precedence\nissue. I'm afraid to move the precedence of IDENT as much as you suggest\nhere. The comment for opt_existing_window_name says that it's expecting\nthe precedence of IDENT to be just below that of Op. 
If there's daylight\nin between, that could result in funny behavior for use of some of the\nunreserved words with other precedence levels in this context.\n\nHowever, I concur that we ought to be able to remove the explicit\nprecedences for GENERATED, NULL_P, PRESERVE, and STRIP_P, so I did that.\n\nAn interesting point is that it's actually possible to remove the\nprecedence declaration for IDENT itself (at least, that does not\ncreate any bison errors; I did not do any behavioral testing).\nI believe what we had that for originally was to control the precedence\nbehavior of the \"target_el: a_expr IDENT\" rule, and now that that\nrule doesn't end with IDENT, its behavior isn't governed by that.\nBut I think we're best off to keep the precedence assignment, as\na place to hang the precedences of PARTITION etc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 19 Sep 2020 15:20:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: factorial function/phase out postfix operators?" } ]
[ { "msg_contents": "Unless I'm missing something, the g_comment_start and g_comment_end variables\nin pg_dump.c seems to have been unused since 30ab5bd43d8f2082659191 (in the 7.2\ncycle) and can probably be safely removed by now. The attached passes make\ncheck.\n\ncheers ./daniel", "msg_date": "Mon, 18 May 2020 17:04:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Vintage unused variables in pg_dump.c " }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Unless I'm missing something, the g_comment_start and g_comment_end variables\n> in pg_dump.c seems to have been unused since 30ab5bd43d8f2082659191 (in the 7.2\n> cycle) and can probably be safely removed by now.\n\nIndeed. (Well, I didn't verify your statement about when they were\nlast used, but they're clearly dead now.) Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 May 2020 13:22:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Vintage unused variables in pg_dump.c" } ]
[ { "msg_contents": "Hi\n\nLast week I played with dbms_sql extension and some patterns of usage\ncursor in PL/SQL and PL/pgSQL. I found fact, so iteration over cursor (FOR\nstatement) doesn't support unbound cursors. I think so this limit is not\nnecessary. This statement can open portal for bound cursor or can iterate\nover before opened portal. When portal was opened inside FOR statement,\nthen it is closed inside this statement.\n\nImplementation is simple, usage is simple too:\n\nCREATE OR REPLACE FUNCTION public.forc02()\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\ndeclare\n c refcursor;\n r record;\nbegin\n open c for select * from generate_series(1,20) g(v);\n\n for r in c\n loop\n raise notice 'cycle body one %', r.v;\n exit when r.v >= 6;\n end loop;\n\n for r in c\n loop\n raise notice 'cycle body two %', r.v;\n end loop;\n\n close c;\nend\n$function$\n\nComments, notes?\n\nRegards\n\nPavel", "msg_date": "Mon, 18 May 2020 17:33:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - plpgsql - FOR over unbound cursor" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThe patch applies cleanly and AFAICS there are no issues with the patch.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 08 Jun 2020 11:39:18 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - FOR over unbound cursor" }, { "msg_contents": "po 8. 6. 
2020 v 13:40 odesílatel Asif Rehman <asifr.rehman@gmail.com>\nnapsal:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n>\n> The patch applies cleanly and AFAICS there are no issues with the patch.\n>\n> The new status of this patch is: Ready for Committer\n>\n\nThank you\n\nPavel", "msg_date": "Mon, 8 Jun 2020 13:45:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - FOR over unbound cursor" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> Last week I played with dbms_sql extension and some patterns of usage\n> cursor in PL/SQL and PL/pgSQL. I found fact, so iteration over cursor (FOR\n> statement) doesn't support unbound cursors. I think so this limit is not\n> necessary.\n\nI guess I don't understand why we should add this. What does it do\nthat can't be done better with a plain FOR-over-SELECT?\n\nThe example you give of splitting an iteration into two loops doesn't\ninspire me to think it's useful; it looks more like encouraging awful\nprogramming practice.\n\n> This statement can open portal for bound cursor or can iterate\n> over before opened portal. 
When portal was opened inside FOR statement,\n> then it is closed inside this statement.\n\nAnd this definition seems quite inconsistent and error-prone.\nThe point of a FOR loop, IMO, is to have a fairly self-contained\ndefinition of the set of iterations that will occur. This\neliminates that property, leaving you with something no cleaner\nthan a hand-built loop around a FETCH command.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Jul 2020 14:06:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal - plpgsql - FOR over unbound cursor" }, { "msg_contents": "st 1. 7. 2020 v 20:06 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > Last week I played with dbms_sql extension and some patterns of usage\n> > cursor in PL/SQL and PL/pgSQL. I found fact, so iteration over cursor\n> (FOR\n> > statement) doesn't support unbound cursors. I think so this limit is not\n> > necessary.\n>\n> I guess I don't understand why we should add this. What does it do\n> that can't be done better with a plain FOR-over-SELECT?\n>\n> The example you give of splitting an iteration into two loops doesn't\n> inspire me to think it's useful; it looks more like encouraging awful\n> programming practice.\n>\n\nThere are few points for this feature.\n\n1. possibility to use FOR cycle for refcursors. Refcursor can be passed as\nargument and it can be practical for some workflows with multiple steps -\npreparing, iterations, closing.\n\n2. symmetry - FETCH statement can be used for bound/unbound cursors. FOR\ncycle can be used only for bound cursors.\n\n3. It is one pattern (and I have not an idea how often) used by the dms_sql\npackage. You can get a refcursor as a result of some procedures, and next\nsteps you can iterate over this cursor. 
PL/SQL can use FOR cycle (and it is\nnot possible in PL/pgSQL).\n\n\n> > This statement can open portal for bound cursor or can iterate\n> > over before opened portal. When portal was opened inside FOR statement,\n> > then it is closed inside this statement.\n>\n> And this definition seems quite inconsistent and error-prone.\n> The point of a FOR loop, IMO, is to have a fairly self-contained\n> definition of the set of iterations that will occur. This\n> eliminates that property, leaving you with something no cleaner\n> than a hand-built loop around a FETCH command.\n>\n\nThis is 100% valid for bound cursors. We don't allow unbound cursors there\nnow, and we can define behaviour.\n\nI understand that this feature increases the complexity of FOR cycle, but I\nsee an interesting possibility to create a dynamic cursor somewhere and\niterate elsewhere. My motivation is little bit near to\nhttps://commitfest.postgresql.org/28/2376/\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Wed, 1 Jul 2020 22:26:23 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - plpgsql - FOR over unbound cursor" } ]
[ { "msg_contents": "Hi\n\nI am a member of a small UK based team with extensive database experience. We are considering a project using PostgresSQL source code which only uses the insert data capabilities.\n\nIs there a contact who we could speak with and discuss our project aims in principal.\n\nThanks\n\nLuke", "msg_date": "Mon, 18 May 2020 16:21:35 +0000", "msg_from": "Luke Porter <luke_porter@hotmail.com>", "msg_from_op": true, "msg_subject": "PostgresSQL project" }, { "msg_contents": "On 2020-05-18 18:21, Luke Porter wrote:\n> I am a member of a small UK based team with extensive database \n> experience. We are considering a project using PostgresSQL source code \n> which only uses the insert data capabilities.\n> \n> Is there a contact who we could speak with and discuss our project aims \n> in principal.\n\nIf yours is an open-source project that you eventually want to share \nwith the community, then you can discuss it on this mailing list.\n\nIf it is a closed-source, proprietary, or in-house project, then the \ncommunity isn't the right place to discuss it, but you could hire \nprofessional consultants to help you, depending on your needs.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 18 May 2020 23:40:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PostgresSQL project" }, { "msg_contents": "Hi Peter\n\nThanks for the prompt response.\n\nYes, this is an open source project which will be shared with the community. 
We will also consider hiring appropriate consultants.\n\nAt a summary level, we have a proven approach for how a relational database can provide comprehensive logical insert, update and delete functionality through an append only paradigm which effectively delivers a perfect audit ie a means to access the database at any point in the audit (time) and for all data to be in a relationally correct state.\n\nIn so doing, we are removing the need for the use of update and delete code (as presently used) with enhancements to the insert code module.\n\nWe presently achieve this affect by use of a generator which creates an application specific database environment for append only (with a normal current view schema being the input for the generator).\n\nThe specification for the requirements of the insert code module is very detailed. Our challenge is to identify appropriate PostgreSQL architects/programmers who have experience of the PostgreSQL database kernel. More specifically, to outline the general approach and work packages to go about this.\n\nRegards\n\nLuke\n\n\n\n-----Original Message-----\nFrom: Peter Eisentraut <peter.eisentraut@2ndadrant.com> \nSent: 18 May 2020 22:40\nTo: Luke Porter <luke_porter@hotmail.com>; pgsql-hackers@lists.postgresql.org\nSubject: Re: PostgresSQL project\n\nOn 2020-05-18 18:21, Luke Porter wrote:\n> I am a member of a small UK based team with extensive database \n> experience. 
We are considering a project using PostgresSQL source code \n> which only uses the insert data capabilities.\n> \n> Is there a contact who we could speak with and discuss our project \n> aims in principal.\n\nIf yours is an open-source project that you eventually want to share with the community, then you can discuss it on this mailing list.\n\nIf it is a closed-source, proprietary, or in-house project, then the community isn't the right place to discuss it, but you could hire professional consultants to help you, depending on your needs.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 07:01:31 +0000", "msg_from": "Luke Porter <luke_porter@hotmail.com>", "msg_from_op": true, "msg_subject": "RE: PostgresSQL project" } ]
[ { "msg_contents": "Hi,\n\nAttached is a draft of the release announcement for the PostgreSQL 13\nBeta 1 release this week.\n\nThe goal of this release announcement is to make people aware of the new\nfeatures that are introduced in PostgreSQL 13 and, importantly, get them\nto start testing. I have tried to include a broad array of features that\ncan noticeably impact people's usage of PostgreSQL. Note that the order\nof the features in the announcement are not in any particular ranking\n(though I do call out VACUUM as being one of the \"most anticipated\nfeatures\"), but are my efforts to try and tell a story about the release.\n\nPlease let me know your thoughts, comments, corrections, etc. and also\nif there are any glaring omissions. I know this is a bit longer than a\ntypical release announcement, but please let me know your feedback\nbefore the end of Wed. May 20 AOE (i.e. before the release ships).\n\nThanks for your review!\n\nJonathan", "msg_date": "Mon, 18 May 2020 22:29:21 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "On Mon, May 18, 2020 at 7:29 PM Jonathan S. 
Katz <jkatz@postgresql.org>\nwrote:\n\n> Attached is a draft of the release announcement for the PostgreSQL 13\n> Beta 1 release this week.\n>\n\nWe could call out the additional commits that Tom has done for wait event\nrenaming, re: compatibility - next to \"Rename some recovery-related wait\nevents\".\n\nThese are the relevant commits, I think:\n\nRename SLRU structures and associated LWLocks.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5da14938f7bfb96b648ee3c47e7ea2afca5bcc4a\n\nRename assorted LWLock tranches.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=36ac359d3621578cefc2156a3917024cdd3b1829\n\nDrop the redundant \"Lock\" suffix from LWLock wait event names.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=14a91010912632cae322b06fce0425faedcf7353\n\nMop-up for wait event naming issues.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3048898e73c75f54bb259323382e0e7f6368cb6f\n\nThanks,\nLukas\n\n-- \nLukas Fittl", "msg_date": "Tue, 19 May 2020 09:58:56 -0700", "msg_from": "Lukas Fittl <lukas@fittl.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "Hi,\n\nOn 5/18/20 10:29 PM, Jonathan S. Katz wrote:\n> Hi,\n> \n> Attached is a draft of the release announcement for the PostgreSQL 13\n> Beta 1 release this week.\n> \n> The goal of this release announcement is to make people aware of the new\n> features that are introduced in PostgreSQL 13 and, importantly, get them\n> to start testing. I have tried to include a broad array of features that\n> can noticeably impact people's usage of PostgreSQL. Note that the order\n> of the features in the announcement are not in any particular ranking\n> (though I do call out VACUUM as being one of the \"most anticipated\n> features\"), but are my efforts to try and tell a story about the release.\n> \n> Please let me know your thoughts, comments, corrections, etc. and also\n> if there are any glaring omissions. 
I know this is a bit longer than a\n> typical release announcement, but please let me know your feedback\n> before the end of Wed. May 20 AOE (i.e. before the release ships).\n\nThanks everyone for your responses (a note on that in a sec). I have\nattached an update to the release, which will be close to and/or the\nfinal copy that goes out tomorrow.\n\nBased on the feedback received, I accepted & declined changes based on\na) how it impacts our users at large b) if an expression was actually\nwrong vs. personal preference of explanation. For (a), there is an\nexisting note in the announcement to read the release notes as there may\nbe some features that are more interesting to the reader than what may\nbe described. As mentioned, the goal is to try to have the release\nannouncement as a springboard into what people can use/test.\n\nI also received an interesting amount of off-list feedback this\niteration. While I generally do not mind chatting (this week an\nunfortunate exception and I apologize if I did not reply back to you), I\ndo ask as part of the collaborative process to try to keep feedback on\nlist so as to limit duplication of comments.\n\nAnyway, I am happy to receive and incorporate additional feedback up\nuntil we launch tomorrow ~12 UTC.\n\nThanks! Happy Beta eve :)\n\nJonathan", "msg_date": "Wed, 20 May 2020 18:11:08 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "On Wed, May 20, 2020 at 06:11:08PM -0400, Jonathan S. 
Katz wrote:\n> This release includes more ways to monitor actibity within a PostgreSQL\n\nactivity\n\n> partition its \"accounts\" table, making it easier to benchmark workloads that\n> contains partitions.\n\ncontain\n\nNo need to respond :)\n\nThanks,\nJustin\n\n\n", "msg_date": "Wed, 20 May 2020 17:42:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "On 5/20/20 6:42 PM, Justin Pryzby wrote:\n> On Wed, May 20, 2020 at 06:11:08PM -0400, Jonathan S. Katz wrote:\n>> This release includes more ways to monitor actibity within a PostgreSQL\n> \n> activity\n\n...that one is embarrassing. Thanks.\n\n> \n>> partition its \"accounts\" table, making it easier to benchmark workloads that\n>> contains partitions.\n> \n> contain\n\nAdjusted.\n\n> No need to respond :)\n\nHappy to :) A lot of the early craziness of the week has now subsided.\n\nThanks!\n\nJonathan", "msg_date": "Wed, 20 May 2020 18:44:20 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "Hi Jon,\n\nI noticed a couple minor inconsistencies:\n\n\".datetime\" -> elsewhere functions are formatted as `.datetime()`\n\nlibpq -> `libpq`\n\nThe link to the release notes on its own line is the same as the\ninline link, if that makes sense. In other places with links on their\nown line, the full URL is in the link text.\n\nAlso, for \"indexes that contain many repeat values\", \"repeated\" might\nsound better here. 
It's one of those things that jumped out at me at\nfirst reading, but when trying both in my head, it seems ok.\n\nRegarding \"streaming `pg_basebackup`s\", I'm used to the general term\n\"base backups\" in this usage, which seems a distinct concept from the\nname of the invoked command.\n\nThanks!\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 12:12:13 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "Hi John,\n\nOn 5/21/20 12:12 AM, John Naylor wrote:\n> Hi Jon,\n> \n> I noticed a couple minor inconsistencies:\n> \n> \".datetime\" -> elsewhere functions are formatted as `.datetime()`\n> \n> libpq -> `libpq`\n> \n> The link to the release notes on its own line is the same as the\n> inline link, if that makes sense. In other places with links on their\n> own line, the full URL is in the link text.\n> \n> Also, for \"indexes that contain many repeat values\", \"repeated\" might\n> sound better here. It's one of those things that jumped out at me at\n> first reading, but when trying both in my head, it seems ok.\n> \n> Regarding \"streaming `pg_basebackup`s\", I'm used to the general term\n> \"base backups\" in this usage, which seems a distinct concept from the\n> name of the invoked command.\n\nThanks for the suggestions. I ended up incorporating all of them.\n\nStay tuned for the release...\n\nJonathan", "msg_date": "Thu, 21 May 2020 07:44:02 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "Congrats to all for the release of a new major version!\n\nTwo questions:\n- Why is VACUUM together with FETCH FIRST WITH TIES, CREATE TABLE LIKE,\nALTER VIEW, ALTER TABLE, etc in Utility Commands section?\n Shouldn't there be a separate section for SQL changes? (or keep one\nsection but rename the Utility to include all?)\n\n> Add FOREIGN to ALTER statements, if appropriate (Luis Carril)\n\n> WHAT IS THIS ABOUT?\n- The \"WHAT IS THIS ABOUT?\" should be removed, in my opinion.\n\nAgain, congrats for another release of the best database in the world.\n\nPantelis Theodosiou\n\nOn Thu, May 21, 2020 at 12:44 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> Hi John,\n>\n> On 5/21/20 12:12 AM, John Naylor wrote:\n> > Hi Jon,\n> >\n> > I noticed a couple minor inconsistencies:\n> >\n> > \".datetime\" -> elsewhere functions are formatted as `.datetime()`\n> >\n> > libpq -> `libpq`\n> >\n> > The link to the release notes on its own line is the same as the\n> > inline link, if that makes sense. In other places with links on their\n> > own line, the full URL is in the link text.\n> >\n> > Also, for \"indexes that contain many repeat values\", \"repeated\" might\n> > sound better here. It's one of those things that jumped out at me at\n> > first reading, but when trying both in my head, it seems ok.\n> >\n> > Regarding \"streaming `pg_basebackup`s\", I'm used to the general term\n> > \"base backups\" in this usage, which seems a distinct concept from the\n> > name of the invoked command.\n>\n> Thanks for the suggestions. I ended up incorporating all of them.\n>\n> Stay tuned for the release...\n>\n> Jonathan\n>\n>\n\nCongrats to all for the release of a new major version!Two questions:- Why is VACUUM together with FETCH FIRST WITH TIES, CREATE TABLE LIKE, ALTER VIEW, ALTER TABLE, etc in Utility Commands section?  
Shouldn't there be a separate section for SQL changes? (or keep one section but rename the Utility to include all?)> Add FOREIGN to ALTER statements, if appropriate (Luis Carril)> WHAT IS THIS ABOUT?- The \"WHAT IS THIS ABOUT?\" should be removed, in my opinion.Again, congrats for another release of the best database in the world.Pantelis TheodosiouOn Thu, May 21, 2020 at 12:44 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:Hi John,\n\nOn 5/21/20 12:12 AM, John Naylor wrote:\n> Hi Jon,\n> \n> I noticed a couple minor inconsistencies:\n> \n> \".datetime\" -> elsewhere functions are formatted as `.datetime()`\n> \n> libpq -> `libpq`\n> \n> The link to the release notes on its own line is the same as the\n> inline link, if that makes sense. In other places with links on their\n> own line, the full URL is in the link text.\n> \n> Also, for \"indexes that contain many repeat values\", \"repeated\" might\n> sound better here. It's one of those things that jumped out at me at\n> first reading, but when trying both in my head, it seems ok.\n> \n> Regarding \"streaming `pg_basebackup`s\", I'm used to the general term\n> \"base backups\" in this usage, which seems a  distinct concept from the\n> name of the invoked command.\n\nThanks for the suggestions. I ended up incorporating all of them.\n\nStay tuned for the release...\n\nJonathan", "msg_date": "Thu, 21 May 2020 15:20:40 +0100", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "On Thu, May 21, 2020 at 3:20 PM Pantelis Theodosiou <ypercube@gmail.com>\nwrote:\n\n> Congrats to all for the release of a new major version!\n>\n> Two questions:\n> - Why is VACUUM together with FETCH FIRST WITH TIES, CREATE TABLE LIKE,\n> ALTER VIEW, ALTER TABLE, etc in Utility Commands section?\n> Shouldn't there be a separate section for SQL changes? 
(or keep one\n> section but rename the Utility to include all?)\n>\n> > Add FOREIGN to ALTER statements, if appropriate (Luis Carril)\n>\n> > WHAT IS THIS ABOUT?\n> - The \"WHAT IS THIS ABOUT?\" should be removed, in my opinion.\n>\n> Again, congrats for another release of the best database in the world.\n>\n> Pantelis Theodosiou\n>\n> On Thu, May 21, 2020 at 12:44 PM Jonathan S. Katz <jkatz@postgresql.org>\n> wrote:\n>\n>>\n>> Thanks for the suggestions. I ended up incorporating all of them.\n>>\n>> Stay tuned for the release...\n>>\n>> Jonathan\n>>\n>>\nApologies, I realized a minute too late that my comments are about the\nRelease Notes and not the Announcement.\nHowever, since the link to Notes makes them no visible to more eyes, they\ncould be checked again.\n\nPantelis Theodosiou\n\nOn Thu, May 21, 2020 at 3:20 PM Pantelis Theodosiou <ypercube@gmail.com> wrote:Congrats to all for the release of a new major version!Two questions:- Why is VACUUM together with FETCH FIRST WITH TIES, CREATE TABLE LIKE, ALTER VIEW, ALTER TABLE, etc in Utility Commands section?  Shouldn't there be a separate section for SQL changes? (or keep one section but rename the Utility to include all?)> Add FOREIGN to ALTER statements, if appropriate (Luis Carril)> WHAT IS THIS ABOUT?- The \"WHAT IS THIS ABOUT?\" should be removed, in my opinion.Again, congrats for another release of the best database in the world.Pantelis TheodosiouOn Thu, May 21, 2020 at 12:44 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\nThanks for the suggestions. I ended up incorporating all of them.\n\nStay tuned for the release...\n\nJonathan\nApologies, I realized a minute too late that my comments are about the Release Notes and not the Announcement. 
However, since the link to Notes makes them no visible to more eyes, they could be checked again.Pantelis Theodosiou", "msg_date": "Thu, 21 May 2020 15:50:12 +0100", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" }, { "msg_contents": "On 19/05/2020 04:29, Jonathan S. Katz wrote:\n> Hi,\n>\n> Attached is a draft of the release announcement for the PostgreSQL 13\n> Beta 1 release this week.\n>\n>\nHi,\n\nMaybe I'm too late, but in this paragraph :\n\n\n> `psql` now includes the `\\warn` command that is similar to the `\\echo`\ncommand\n> in terms of outputting data, except `\\warn` sends it to stderr. And in\ncase you\n> need additional guidance on any of the PostgreSQL commands, the\n`--help` flag\n> now includes a link to\n[https://www.postgresql.org](https://www.postgresql.org).\n\nis it --help shouldn't be /help ?\n\nSame thing in the release note\n(https://www.postgresql.org/docs/13/release-13.html) :\n\n> Add the PostgreSQL home\npage to command-line |--help| output (Peter\nEisentraut)\n\nas it probalbly refer to 27f3dea64833d68c1fa08c1e5d26176a579f69c8, isn't\nit ?\n\nregards,\n\n-- \nSébastien", "msg_date": "Fri, 26 Jun 2020 11:12:45 +0200", "msg_from": "=?UTF-8?Q?S=c3=a9bastien_Lardi=c3=a8re?= <sebastien@lardiere.net>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 13 Beta 1 Release Announcement Draft" } ]
[ { "msg_contents": "Hi,\n\nI noticed that if a row level policy is defined on an extension\nobject, even in the extension creation script, pg_dump dumps a\nseparate CREATE POLICY statement for such policies. That makes the\ndump unrestorable because the CREATE EXTENSION and CREATE POLICY then\nconflicts.\n\nHere is a simple example. I just abused the pageinspect contrib module\nto demonstrate the problem.\n\n```\ndiff --git a/contrib/pageinspect/pageinspect--1.5.sql\nb/contrib/pageinspect/pageinspect--1.5.sql\nindex 1e40c3c97e..f04d70d1c1 100644\n--- a/contrib/pageinspect/pageinspect--1.5.sql\n+++ b/contrib/pageinspect/pageinspect--1.5.sql\n@@ -277,3 +277,9 @@ CREATE FUNCTION gin_leafpage_items(IN page bytea,\n RETURNS SETOF record\n AS 'MODULE_PATHNAME', 'gin_leafpage_items'\n LANGUAGE C STRICT PARALLEL SAFE;\n+\n+-- sample table\n+CREATE TABLE pf_testtab (a int, b int);\n+-- sample policy\n+CREATE POLICY p1 ON pf_testtab\n+FOR SELECT USING (true);\n```\n\nIf I now take a dump of a database with pageinspect extension created,\nthe dump has the following.\n\n```\n\n--\n-- Name: pageinspect; Type: EXTENSION; Schema: -; Owner:\n--\n\nCREATE EXTENSION IF NOT EXISTS pageinspect WITH SCHEMA public;\n\n--\n-- Name: pf_testtab p1; Type: POLICY; Schema: public; Owner: pavan\n--\n\nCREATE POLICY p1 ON public.pf_testtab FOR SELECT USING (true);\n\n```\n\nThat's a problem. The CREATE POLICY statement fails during restore\nbecause CREATE EXTENSION already creates the policy.\n\nAre we missing recording dependency on extension for row level\npolicies? 
Or somehow pg_dump should skip dumping those policies?\n\nThanks,\nPavan\n\n--\n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 12:01:35 +0530", "msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>", "msg_from_op": true, "msg_subject": "pg_dump dumps row level policies on extension tables" }, { "msg_contents": "On Tue, 19 May 2020 at 15:31, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n> Hi,\n>\n> I noticed that if a row level policy is defined on an extension\n> object, even in the extension creation script, pg_dump dumps a\n> separate CREATE POLICY statement for such policies. That makes the\n> dump unrestorable because the CREATE EXTENSION and CREATE POLICY then\n> conflicts.\n>\n> Here is a simple example. I just abused the pageinspect contrib module\n> to demonstrate the problem.\n>\n> ```\n> diff --git a/contrib/pageinspect/pageinspect--1.5.sql\n> b/contrib/pageinspect/pageinspect--1.5.sql\n> index 1e40c3c97e..f04d70d1c1 100644\n> --- a/contrib/pageinspect/pageinspect--1.5.sql\n> +++ b/contrib/pageinspect/pageinspect--1.5.sql\n> @@ -277,3 +277,9 @@ CREATE FUNCTION gin_leafpage_items(IN page bytea,\n> RETURNS SETOF record\n> AS 'MODULE_PATHNAME', 'gin_leafpage_items'\n> LANGUAGE C STRICT PARALLEL SAFE;\n> +\n> +-- sample table\n> +CREATE TABLE pf_testtab (a int, b int);\n> +-- sample policy\n> +CREATE POLICY p1 ON pf_testtab\n> +FOR SELECT USING (true);\n> ```\n>\n> If I now take a dump of a database with pageinspect extension created,\n> the dump has the following.\n>\n> ```\n>\n> --\n> -- Name: pageinspect; Type: EXTENSION; Schema: -; Owner:\n> --\n>\n> CREATE EXTENSION IF NOT EXISTS pageinspect WITH SCHEMA public;\n>\n> --\n> -- Name: pf_testtab p1; Type: POLICY; Schema: public; Owner: pavan\n> --\n>\n> CREATE POLICY p1 ON public.pf_testtab FOR SELECT USING (true);\n>\n> ```\n>\n> That's a problem. 
The CREATE POLICY statement fails during restore\n> because CREATE EXTENSION already creates the policy.\n>\n> Are we missing recording dependency on extension for row level\n> policies? Or somehow pg_dump should skip dumping those policies?\n>\n\nI think we don't support this case as the comment in\ncheckExtensionMembership() describes:\n\n /*\n * In 9.6 and above, mark the member object to have any non-initial ACL,\n * policies, and security labels dumped.\n *\n * Note that any initial ACLs (see pg_init_privs) will be removed when we\n * extract the information about the object. We don't provide support for\n * initial policies and security labels and it seems unlikely for those to\n * ever exist, but we may have to revisit this later.\n *\n * Prior to 9.6, we do not include any extension member components.\n *\n * In binary upgrades, we still dump all components of the members\n * individually, since the idea is to exactly reproduce the database\n * contents rather than replace the extension contents with something\n * different.\n */\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jun 2020 20:08:50 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump dumps row level policies on extension tables" } ]
[ { "msg_contents": "Here is a series of patches to do some refactoring in the grammar around \nthe commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ... \nADD/DROP. In the grammar, these commands (with some exceptions) \nbasically just take a reference to an object and later look it up in C \ncode. Some of that was already generalized individually for each \ncommand (drop_type_any_name, drop_type_name, etc.). This patch combines \nit into common lists for all these commands.\n\nAdvantages:\n\n- Avoids having to list each object type at least four times.\n\n- Object types not supported by security labels or extensions are now \nexplicitly listed and give a proper error message. Previously, this was \njust encoded in the grammar itself and specifying a non-supported object \ntype would just give a parse error.\n\n- Reduces lines of code in gram.y.\n\n- Removes some old cruft.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 19 May 2020 08:43:27 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "some grammar refactoring" }, { "msg_contents": "On Tue, May 19, 2020 at 2:43 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Here is a series of patches to do some refactoring in the grammar around\n> the commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ...\n> ADD/DROP. In the grammar, these commands (with some exceptions)\n> basically just take a reference to an object and later look it up in C\n> code. Some of that was already generalized individually for each\n> command (drop_type_any_name, drop_type_name, etc.). 
This patch combines\n> it into common lists for all these commands.\n\nI haven't reviewed the code, but +1 for the idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 19 May 2020 08:52:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On Tue, May 19, 2020 at 12:13 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> Here is a series of patches to do some refactoring in the grammar around\n> the commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ...\n> ADD/DROP. In the grammar, these commands (with some exceptions)\n> basically just take a reference to an object and later look it up in C\n> code. Some of that was already generalized individually for each\n> command (drop_type_any_name, drop_type_name, etc.). This patch combines\n> it into common lists for all these commands.\n>\n> Advantages:\n>\n> - Avoids having to list each object type at least four times.\n>\n> - Object types not supported by security labels or extensions are now\n> explicitly listed and give a proper error message. Previously, this was\n> just encoded in the grammar itself and specifying a non-supported object\n> type would just give a parse error.\n>\n> - Reduces lines of code in gram.y.\n>\n> - Removes some old cruft.\n>\n>\nI liked the idea.\n\nI had quick glance through the patches and also did quick review and\ntesting.\nI haven't found any issue with the patch.\n\n-- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nRushabh Lathia", "msg_date": "Thu, 21 May 2020 14:19:55 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On 2020-05-19 08:43, Peter Eisentraut wrote:\n> Here is a series of patches to do some refactoring in the grammar around\n> the commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ...\n> ADD/DROP. In the grammar, these commands (with some exceptions)\n> basically just take a reference to an object and later look it up in C\n> code. Some of that was already generalized individually for each\n> command (drop_type_any_name, drop_type_name, etc.). 
This patch combines\n> it into common lists for all these commands.\n\nWhile most of this patch set makes no behavior changes by design, I \nshould point out this little change hidden in the middle:\n\n Remove deprecated syntax from CREATE/DROP LANGUAGE\n\n Remove the option to specify the language name as a single-quoted\n string. This has been obsolete since ee8ed85da3b. Removing it allows\n better grammar refactoring.\n\n The syntax of the CREATE FUNCTION LANGUAGE clause is not changed.\n\n(ee8ed85da3b is in PG 7.2.)\n\nI expect this to be uncontroversial, but it should be pointed out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 10:48:14 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "\n\n> On May 18, 2020, at 11:43 PM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> Here is a series of patches to do some refactoring in the grammar around the commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ... ADD/DROP. In the grammar, these commands (with some exceptions) basically just take a reference to an object and later look it up in C code. Some of that was already generalized individually for each command (drop_type_any_name, drop_type_name, etc.). This patch combines it into common lists for all these commands.\n> \n> Advantages:\n> \n> - Avoids having to list each object type at least four times.\n> \n> - Object types not supported by security labels or extensions are now explicitly listed and give a proper error message. 
Previously, this was just encoded in the grammar itself and specifying a non-supported object type would just give a parse error.\n> \n> - Reduces lines of code in gram.y.\n> \n> - Removes some old cruft.\n\nI like the general direction you are going with this, but the decision in v1-0006 to move the error for invalid object types out of gram.y and into extension.c raises an organizational question. At some places in gram.y, there is C code that checks parsed tokens and ereports if they are invalid, in some sense extending the grammar right within gram.y. In many other places, including what you are doing in this patch, the token is merely stored in a Stmt object with the error checking delayed until command processing. For tokens which need to be checked against the catalogs, that decision makes perfect sense. But for ones where all the information necessary to validate the token exists in the parser, it is not clear to me why it gets delayed until command processing. Is there a design principle behind when these checks are done in gram.y vs. when they are delayed to the command processing? I'm guessing in v1-0006 that you are doing it this way because there are multiple places in gram.y where tokens would need to be checked, and by delaying the check until ExecAlterExtensionContentsStmt, you can put the check all in one place. Is that all it is?\n\nI have had reason in the past to want to reorganize gram.y to have all these types of checks in a single, consistent format and location, rather than scattered through gram.y and backend/commands/. Does anybody else have an interest in this?\n\nMy interest in this stems from the fact that bison can be run to generate data files that can then be used in reverse to generate random SQL. The more the parsing logic is visible to bison, the more useful the generated data files are. 
But a single, consistent design for extra-grammatical error checks could help augment those files fairly well, too.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 22 May 2020 09:53:21 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On 2020-05-22 18:53, Mark Dilger wrote:\r\n> I like the general direction you are going with this, but the decision in v1-0006 to move the error for invalid object types out of gram.y and into extension.c raises an organizational question. At some places in gram.y, there is C code that checks parsed tokens and ereports if they are invalid, in some sense extending the grammar right within gram.y. In many other places, including what you are doing in this patch, the token is merely stored in a Stmt object with the error checking delayed until command processing. For tokens which need to be checked against the catalogs, that decision makes perfect sense. But for ones where all the information necessary to validate the token exists in the parser, it is not clear to me why it gets delayed until command processing. Is there a design principle behind when these checks are done in gram.y vs. when they are delayed to the command processing? I'm guessing in v1-0006 that you are doing it this way because there are multiple places in gram.y where tokens would need to be checked, and by delaying the check until ExecAlterExtensionContentsStmt, you can put the check all in one place. Is that all it is?\r\n\r\nWe have been for some time moving to a style where we rely on switch \r\nstatements around OBJECT_* constants to (a) decide what is allowed with \r\ncertain object types, and (b) make sure we have an explicit decision on \r\neach object type and don't forget any. This has worked well, I think.\r\n\r\nThis is more of that. 
Before this patch, it would have been pretty hard \r\nto find out which object types are supported with extensions or security \r\nlabels, except by very carefully reading the grammar.\r\n\r\nMoreover, you now get a proper error message for unsupported object \r\ntypes rather than just a generic parse error.\r\n\r\n> I have had reason in the past to want to reorganize gram.y to have all these types of checks in a single, consistent format and location, rather than scattered through gram.y and backend/commands/. Does anybody else have an interest in this?\r\n> \r\n> My interest in this stems from the fact that bison can be run to generate data files that can then be used in reverse to generate random SQL. The more the parsing logic is visible to bison, the more useful the generated data files are. But a single, consistent design for extra-grammatical error checks could help augment those files fairly well, too.\r\n\r\nIt's certainly already the case that the grammar accepts statements that \r\nend up being invalid, even if you ignore catalog lookup. I don't think \r\nmy patch moves the needle on this in a significant way.\r\n\r\n-- \r\nPeter Eisentraut http://www.2ndQuadrant.com/\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\r\n", "msg_date": "Mon, 25 May 2020 11:55:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "\n\n> On May 25, 2020, at 2:55 AM, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-05-22 18:53, Mark Dilger wrote:\n>> I like the general direction you are going with this, but the decision in v1-0006 to move the error for invalid object types out of gram.y and into extension.c raises an organizational question. At some places in gram.y, there is C code that checks parsed tokens and ereports if they are invalid, in some sense extending the grammar right within gram.y. 
In many other places, including what you are doing in this patch, the token is merely stored in a Stmt object with the error checking delayed until command processing. For tokens which need to be checked against the catalogs, that decision makes perfect sense. But for ones where all the information necessary to validate the token exists in the parser, it is not clear to me why it gets delayed until command processing. Is there a design principle behind when these checks are done in gram.y vs. when they are delayed to the command processing? I'm guessing in v1-0006 that you are doing it this way because there are multiple places in gram.y where tokens would need to be checked, and by delaying the check until ExecAlterExtensionContentsStmt, you can put the check all in one place. Is that all it is?\n> \n> We have been for some time moving to a style where we rely on switch statements around OBJECT_* constants to (a) decide what is allowed with certain object types, and (b) make sure we have an explicit decision on each object type and don't forget any. This has worked well, I think.\n\nYes, I think so, too. I like that overall design.\n\n> This is more of that.\n\nYes, it is.\n\n> Before this patch, it would have been pretty hard to find out which object types are supported with extensions or security labels, except by very carefully reading the grammar.\n\nFair enough.\n\n> Moreover, you now get a proper error message for unsupported object types rather than just a generic parse error.\n\nSounds great.\n\n>> I have had reason in the past to want to reorganize gram.y to have all these types of checks in a single, consistent format and location, rather than scattered through gram.y and backend/commands/. Does anybody else have an interest in this?\n>> My interest in this stems from the fact that bison can be run to generate data files that can then be used in reverse to generate random SQL. 
The more the parsing logic is visible to bison, the more useful the generated data files are. But a single, consistent design for extra-grammatical error checks could help augment those files fairly well, too.\n> \n> It's certainly already the case that the grammar accepts statements that end up being invalid, even if you ignore catalog lookup. I don't think my patch moves the needle on this in a significant way.\n\nI don't think it moves the needle too much, either. But since your patch is entirely a refactoring patch and not a feature patch, I thought it would be fair to ask larger questions about how the code should be structured. I like using enums and switch statements and getting better error messages, but there doesn't seem to be any fundamental reason why that should be in the command execution step. It feels like a layering violation to me.\n\nI don't object to this patch getting committed. A subsequent patch to consolidate all the grammar checks into src/backend/parser and out of src/backend/commands won't be blocked by this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 25 May 2020 12:09:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On 2020-05-25 21:09, Mark Dilger wrote:\n> I don't think it moves the needle too much, either. But since your patch is entirely a refactoring patch and not a feature patch, I thought it would be fair to ask larger questions about how the code should be structured. I like using enums and switch statements and getting better error messages, but there doesn't seem to be any fundamental reason why that should be in the command execution step. It feels like a layering violation to me.\n\nMost utility commands don't have an intermediate parse analysis pass. \nThey just go straight from the grammar to the execution. 
Maybe that \ncould be rethought, but that's the way it is now.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 10:28:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On Tue, May 26, 2020 at 4:28 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-05-25 21:09, Mark Dilger wrote:\n> > I don't think it moves the needle too much, either. But since your patch is entirely a refactoring patch and not a feature patch, I thought it would be fair to ask larger questions about how the code should be structured. I like using enums and switch statements and getting better error messages, but there doesn't seem to be any fundamental reason why that should be in the command execution step. It feels like a layering violation to me.\n>\n> Most utility commands don't have an intermediate parse analysis pass.\n> They just go straight from the grammar to the execution. Maybe that\n> could be rethought, but that's the way it is now.\n\nI think it can and should be rethought at some point. The present\nsplit leads to a lot of weird coding. We've had security\nvulnerabilities that were due to things like passing the same RangeVar\nto two different places, leading to two different lookups for the name\nthat could be induced to return different OIDs. It also leads to a lot\nof fuzzy thinking about where locks are taken, in which order, and how\nmany times, and with what strength. The code for queries seems to have\nbeen thought through a lot more carefully, because the existence of\nprepared queries makes mistakes a lot more noticeable. 
I hope some day\nsomeone will be motivated to improve the situation for DDL as well,\nthough it will probably be a thankless task.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 26 May 2020 16:06:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 26, 2020 at 4:28 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> Most utility commands don't have an intermediate parse analysis pass.\n>> They just go straight from the grammar to the execution. Maybe that\n>> could be rethought, but that's the way it is now.\n\n> I think it can and should be rethought at some point.\n\nThe other problem is that the ones that do have explicit parse analysis\ntend to be doing it at the wrong time. I've fixed some ALTER TABLE\nproblems by rearranging that, but we still have open bugs that are due\nto this type of mistake, eg [1]. I agree that we need a rethink, and\nwe need it badly.\n\nIf this patch is changing when any parse-analysis-like actions happen,\nthen I would say that it needs very careful review --- much more than\nthe \"refactoring\" label would suggest. 
Maybe it's making things better,\nor maybe it doesn't matter; but this area is a minefield.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16272-6e32da020e9a9381%40postgresql.org\n\n\n", "msg_date": "Tue, 26 May 2020 21:49:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: some grammar refactoring" }, { "msg_contents": "On 2020-05-19 08:43, Peter Eisentraut wrote:\n> Here is a series of patches to do some refactoring in the grammar around\n> the commands COMMENT, DROP, SECURITY LABEL, and ALTER EXTENSION ...\n> ADD/DROP.\n\nThese patches have been committed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 14 Jun 2020 07:46:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: some grammar refactoring" } ]
[ { "msg_contents": "Hi all,\n\nWhile digging into my backlog, I have found this message from Peter E\nmentioning about $subject:\nhttps://www.postgresql.org/message-id/e6aac026-174c-9952-689f-6bee76f9ab68@2ndquadrant.com\n\nIt seems to me that it would be a good idea to make those checks more\nconsistent, and attached is a patch.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 19 May 2020 16:09:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On 2020-05-19 09:09, Michael Paquier wrote:\n> While digging into my backlog, I have found this message from Peter E\n> mentioning about $subject:\n> https://www.postgresql.org/message-id/e6aac026-174c-9952-689f-6bee76f9ab68@2ndquadrant.com\n> \n> It seems to me that it would be a good idea to make those checks more\n> consistent, and attached is a patch.\n\nThat thread didn't resolve why check_canonical_path() is necessary \nthere. Maybe the existing uses could be removed?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 13:02:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> While digging into my backlog, I have found this message from Peter E\n> mentioning about $subject:\n> https://www.postgresql.org/message-id/e6aac026-174c-9952-689f-6bee76f9ab68@2ndquadrant.com\n\n> It seems to me that it would be a good idea to make those checks more\n> consistent, and attached is a patch.\n\nHm, I'm pretty certain that data_directory does not need this because\ncanonicalization is done elsewhere; the most that you could accomplish\nthere is to cause problems. 
Dunno about the rest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 May 2020 09:32:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Tue, May 19, 2020 at 09:32:15AM -0400, Tom Lane wrote:\n> Hm, I'm pretty certain that data_directory does not need this because\n> canonicalization is done elsewhere; the most that you could accomplish\n> there is to cause problems. Dunno about the rest.\n\nHmm. I missed that this is getting done in SelectConfigFiles() first\nby the postmaster so that's not necessary, which also does the work\nfor hba_file and ident_file. config_file does not need that either as\nAbsoluteConfigLocation() does the same work via ParseConfigFile(). So\nperhaps we could add a comment or such about that? Attached is an\nidea.\n\nThe rest is made of PromoteTriggerFile, pg_krb_server_keyfile,\nssl_cert_file, ssl_key_file, ssl_ca_file, ssl_crl_file and\nssl_dh_params_file where loaded values are taken as-is, so applying\ncanonicalization would be helpful there, no?\n--\nMichael", "msg_date": "Wed, 20 May 2020 16:03:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Tue, May 19, 2020 at 01:02:12PM +0200, Peter Eisentraut wrote:\n> That thread didn't resolve why check_canonical_path() is necessary there.\n> Maybe the existing uses could be removed?\n\nThis would impact log_directory, external_pid_file,\nstats_temp_directory, where it is still useful to show to the user\ncleaned up names, no? 
See for example 2594cf0.\n--\nMichael", "msg_date": "Wed, 20 May 2020 16:13:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On 2020-05-20 09:13, Michael Paquier wrote:\n> On Tue, May 19, 2020 at 01:02:12PM +0200, Peter Eisentraut wrote:\n>> That thread didn't resolve why check_canonical_path() is necessary there.\n>> Maybe the existing uses could be removed?\n> \n> This would impact log_directory, external_pid_file,\n> stats_temp_directory, where it is still useful to show to the user\n> cleaned up names, no? See for example 2594cf0.\n\nI don't understand why we need to alter the file names specified by the \nuser. They presumably wrote them that way for a reason and they \nprobably like them that way.\n\nThere are specific situations where we need to do that to know whether a \npath is in the data directory or the same as some other one etc. But \nunless there is a reason like that, I think we should just leave them.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 20 May 2020 10:05:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "At Wed, 20 May 2020 10:05:29 +0200, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-05-20 09:13, Michael Paquier wrote:\n> > On Tue, May 19, 2020 at 01:02:12PM +0200, Peter Eisentraut wrote:\n> >> That thread didn't resolve why check_canonical_path() is necessary\n> >> there.\n> >> Maybe the existing uses could be removed?\n> > This would impact log_directory, external_pid_file,\n> > stats_temp_directory, where it is still useful to show to the user\n> > cleaned up names, no? 
See for example 2594cf0.\n> \n> I don't understand why we need to alter the file names specified by\n> the user. They presumably wrote them that way for a reason and they\n> probably like them that way.\n> \n> There are specific situations where we need to do that to know whether\n> a path is in the data directory or the same as some other one etc.\n> But unless there is a reason like that, I think we should just leave\n> them.\n\nI completely agree with Peter here. I would be surprised to see\nsystem views show different strings from what I wrote in the config\nfiles. I also think that it ought to be documented if we store a tweaked\nstring for a user input.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 21 May 2020 10:11:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "\n\n> On May 20, 2020, at 12:03 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, May 19, 2020 at 09:32:15AM -0400, Tom Lane wrote:\n>> Hm, I'm pretty certain that data_directory does not need this because\n>> canonicalization is done elsewhere; the most that you could accomplish\n>> there is to cause problems. Dunno about the rest.\n> \n> Hmm. I missed that this is getting done in SelectConfigFiles() first\n> by the postmaster so that's not necessary, which also does the work\n> for hba_file and ident_file. config_file does not need that either as\n> AbsoluteConfigLocation() does the same work via ParseConfigFile(). So\n> perhaps we could add a comment or such about that? 
Attached is an\n> idea.\n> \n> The rest is made of PromoteTriggerFile, pg_krb_server_keyfile,\n> ssl_cert_file, ssl_key_file, ssl_ca_file, ssl_crl_file and\n> ssl_dh_params_file where loaded values are taken as-is, so applying\n> canonicalization would be helpful there, no?\n\nBefore this patch, there are three GUCs that get check_canonical_path treatment:\n\n log_directory\n external_pid_file\n stats_temp_directory\n\nAfter the patch, these also get the treatment (though Peter seems to be objecting to the change):\n\n promote_trigger_file\n krb_server_keyfile\n ssl_cert_file\n ssl_key_file\n ssl_ca_file\n ssl_crl_file\n ssl_dh_params_file\n\nand these still don't, with comments about how they are already canonicalized when the config file is loaded:\n\n data_directory\n config_file\n hba_file\n ident_file\n\nA little poking around shows that in SelectConfigFiles(), these four directories were set by SetConfigOption(). I don't see a problem with the code, but the way this stuff is spread around makes it easy for somebody adding a new GUC file path to do it wrong. I don't have much opinion about Peter's preference that paths be left alone, but I'd prefer some comments in guc.c explaining it all. 
The only cleanup that occurs to me is to reorder ConfigureNamesString[] to have all the path options back-to-back, with the four that are set by SelectConfigFiles() at the top with comments about why they are special, and the rest after that with comments about why they need or do not need canonicalization.\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 May 2020 14:24:17 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Tue, May 19, 2020 at 7:02 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> That thread didn't resolve why check_canonical_path() is necessary\n> there. Maybe the existing uses could be removed?\n\nThis first sentence of this reply seems worthy of particular\nattention. We have to know what problem this is intended to fix before\nwe try to decide in which cases it's needed. Otherwise, whether we add\nit everywhere or remove it everywhere, we'll only know that it's\nconsistent, not that it's correct.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 29 May 2020 13:24:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On 2020-May-28, Mark Dilger wrote:\n\n> A little poking around shows that in SelectConfigFiles(), these four\n> directories were set by SetConfigOption(). I don't see a problem with\n> the code, but the way this stuff is spread around makes it easy for\n> somebody adding a new GUC file path to do it wrong. I don't have much\n> opinion about Peter's preference that paths be left alone, but I'd\n> prefer some comments in guc.c explaining it all. 
The only cleanup\n> that occurs to me is to reorder ConfigureNamesString[] to have all the\n> path options back-to-back, with the four that are set by\n> SelectConfigFiles() at the top with comments about why they are\n> special, and the rest after that with comments about why they need or\n> do not need canonicalization.\n\nNo need for reorganization, I think, just have a comment on top of each\nentry that doesn't use canonicalization such as \"no canonicalization,\nas explained in ...\" where that refers to a single largish comment that\nexplains what canonicalization is, why you use it, and why you don't.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 14:14:44 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On 2020-05-29 19:24, Robert Haas wrote:\n> On Tue, May 19, 2020 at 7:02 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> That thread didn't resolve why check_canonical_path() is necessary\n>> there. Maybe the existing uses could be removed?\n> \n> This first sentence of this reply seems worthy of particular\n> attention. We have to know what problem this is intended to fix before\n> we try to decide in which cases it's needed. Otherwise, whether we add\n> it everywhere or remove it everywhere, we'll only know that it's\n> consistent, not that it's correct.\n\nThe archeology reveals that these calls were originally added to \ncanonicalize the data_directory and config_file settings (7b0f060d54), \nbut that was then moved out of guc.c to be done early during postmaster \nstartup (337ffcddba). 
The remaining calls of check_canonical_path() in \nguc.c appear to be leftovers from a previous regime.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 11:04:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Tue, Jun 2, 2020 at 5:04 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The archeology reveals that these calls were originally added to\n> canonicalize the data_directory and config_file settings (7b0f060d54),\n> but that was then moved out of guc.c to be done early during postmaster\n> startup (337ffcddba). The remaining calls of check_canonical_path() in\n> guc.c appear to be leftovers from a previous regime.\n\nThanks for looking into it. Sounds like it can just be ripped out,\nthen, unless someone knows of a reason to do otherwise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 3 Jun 2020 12:16:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jun 2, 2020 at 5:04 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> The archeology reveals that these calls were originally added to\n>> canonicalize the data_directory and config_file settings (7b0f060d54),\n>> but that was then moved out of guc.c to be done early during postmaster\n>> startup (337ffcddba). The remaining calls of check_canonical_path() in\n>> guc.c appear to be leftovers from a previous regime.\n\n> Thanks for looking into it. 
Sounds like it can just be ripped out,\n> then, unless someone knows of a reason to do otherwise.\n\nIn the abstract, I agree with Peter's point that we shouldn't alter\nuser-given strings without need. However, I think there's strong\nreason for canonicalizing the data directory and config file locations.\nWe access those both before and after chdir'ing into the datadir, so\nwe'd better have absolute paths to them --- and at least for the\ndatadir, it's documented that you can initially give it as a path\nrelative to wherever you started the postmaster from. If the other\nfiles are only accessed after the chdir happens then we could likely\ndo without canonicalizing them. But ... do we know which directory\nthe user (thought he) specified them with reference to? Forced\ncanonicalization does have the advantage that it's clear to all\nonlookers how we are interpreting the paths.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jun 2020 14:45:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Wed, Jun 03, 2020 at 02:45:50PM -0400, Tom Lane wrote:\n> In the abstract, I agree with Peter's point that we shouldn't alter\n> user-given strings without need. However, I think there's strong\n> reason for canonicalizing the data directory and config file locations.\n> We access those both before and after chdir'ing into the datadir, so\n> we'd better have absolute paths to them --- and at least for the\n> datadir, it's documented that you can initially give it as a path\n> relative to wherever you started the postmaster from. If the other\n> files are only accessed after the chdir happens then we could likely\n> do without canonicalizing them. But ... do we know which directory\n> the user (thought he) specified them with reference to? 
Forced\n> canonicalization does have the advantage that it's clear to all\n> onlookers how we are interpreting the paths.\n\nEven with the last point... It looks like there is little love for\nthis patch. So it seems to me that this brings the discussion down to\ntwo points: shouldn't we document why canonicalization is not done for\ndata_directory, config_file, hba_file and ident_file with some\ncomments in guc.c? Then, why do we apply it to external_pid_file,\nLog_directory and stats_temp_directory knowing that we chdir to PGDATA\nin the postmaster before they get used (as far as I can see)?\n--\nMichael", "msg_date": "Thu, 4 Jun 2020 14:19:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On 2020-06-03 20:45, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Tue, Jun 2, 2020 at 5:04 AM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> The archeology reveals that these calls where originally added to\n>>> canonicalize the data_directory and config_file settings (7b0f060d54),\n>>> but that was then moved out of guc.c to be done early during postmaster\n>>> startup (337ffcddba). The remaining calls of check_canonical_path() in\n>>> guc.c appear to be leftovers from a previous regime.\n> \n>> Thanks for looking into it. Sounds like it can just be ripped out,\n>> then, unless someone knows of a reason to do otherwise.\n> \n> In the abstract, I agree with Peter's point that we shouldn't alter\n> user-given strings without need. However, I think there's strong\n> reason for canonicalizing the data directory and config file locations.\n\nIt is not proposed to change that. 
It is only debated whether the same \ncanonicalization should be applied to other GUCs that represent paths.\n\n> We access those both before and after chdir'ing into the datadir, so\n> we'd better have absolute paths to them --- and at least for the\n> datadir, it's documented that you can initially give it as a path\n> relative to wherever you started the postmaster from. If the other\n> files are only accessed after the chdir happens then we could likely\n> do without canonicalizing them. But ... do we know which directory\n> the user (thought he) specified them with reference to? Forced\n> canonicalization does have the advantage that it's clear to all\n> onlookers how we are interpreting the paths.\n\nThis (and some other messages in this thread) appears to assume that \ncanonicalize_path() turns relative paths into absolute paths, but \nAFAICT, it does not do that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 4 Jun 2020 19:03:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> This (and some other messages in this thread) appears to assume that \n> canonicalize_path() turns relative paths into absolute paths, but \n> AFAICT, it does not do that.\n\nAh, fair point --- I'd been assuming that we were applying\ncanonicalize_path as cleanup for an absolute-ification operation,\nbut you are right that check_canonical_path does not do that.\n\nDigging around, though, I notice a different motivation.\nIn assign_pgstat_temp_directory we have\n\n\t/* check_canonical_path already canonicalized newval for us */\n\t...\n\ttname = guc_malloc(ERROR, strlen(newval) + 12); /* /global.tmp */\n\tsprintf(tname, \"%s/global.tmp\", newval);\n\tfname = guc_malloc(ERROR, 
strlen(newval) + 13); /* /global.stat */\n\tsprintf(fname, \"%s/global.stat\", newval);\n\nand I believe what the comment is on about is that these path derivation\noperations are unreliable if newval isn't in canonical form. I seem\nto remember for example that in some Windows configurations, mixing\nslashes and backslashes doesn't work.\n\nSo the real point here is that we could use the user's string unmodified\nas long as we only use it exactly as-is, but cases where we derive other\npathnames from it require more work.\n\nOf course, we could leave the GUC string alone and only canonicalize while\nforming derived paths, but that seems mighty error-prone. In any case,\njust ripping out the check_canonical_path usages without any other mop-up\nwill break things.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jun 2020 13:22:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "Re-reading this thread it seems to me that the conclusion is to mark the patch\nReturned with Feedback in this commitfest, and possibly expand documentation or\ncomments on path canonicalization in the code at some point.\n\nDoes that seem fair?\n\ncheers ./daniel\n\n", "msg_date": "Thu, 9 Jul 2020 14:19:06 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" }, { "msg_contents": "On Thu, Jul 09, 2020 at 02:19:06PM +0200, Daniel Gustafsson wrote:\n> Re-reading this thread it seems to me that the conclusion is to mark the patch\n> Returned with Feedback in this commitfest, and possibly expand documentation or\n> comments on path canonicalization in the code at some point.\n> \n> Does that seem fair?\n\nYes, that's fair as there is visibly a consensus that we could perhaps\nremove some of the canonicalization, but not expand it. 
There is\nnothing preventing to live with the current things in place either, so\ndone.\n--\nMichael", "msg_date": "Fri, 10 Jul 2020 09:32:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Expand the use of check_canonical_path() for more GUCs" } ]
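The derived-path concern raised in the thread above (sprintf'ing "%s/global.tmp" onto a GUC value only works if the value is in canonical form) can be illustrated with a small sketch. This is not PostgreSQL's canonicalize_path(); canonicalize_sketch is a hypothetical stand-in that shows only the two properties the derived paths rely on, forward slashes and no trailing slash:

```c
#include <string.h>

/* Hypothetical sketch, NOT PostgreSQL's canonicalize_path(): normalize
 * backslashes to forward slashes and strip one trailing slash, so that
 * sprintf(buf, "%s/global.tmp", path) yields a clean derived path. */
static void
canonicalize_sketch(char *path)
{
	size_t		len;
	char	   *p;

	for (p = path; *p; p++)
	{
		if (*p == '\\')
			*p = '/';
	}
	len = strlen(path);
	if (len > 1 && path[len - 1] == '/')
		path[len - 1] = '\0';
}
```

A Windows-style "C:\pgdata\" becomes "C:/pgdata", after which appending "/global.tmp" no longer mixes separator styles.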
[ { "msg_contents": "Since commit 74a308cf5221f we use explicit_bzero on pgpass and connhost\npassword in libpq, but not sslpassword which seems an oversight. The attached\nperforms an explicit_bzero before freeing like the pattern for other password\nvariables.\n\ncheers ./daniel", "msg_date": "Tue, 19 May 2020 14:33:40 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "explicit_bzero for sslpassword" }, { "msg_contents": "On Tue, May 19, 2020 at 02:33:40PM +0200, Daniel Gustafsson wrote:\n> Since commit 74a308cf5221f we use explicit_bzero on pgpass and connhost\n> password in libpq, but not sslpassword which seems an oversight. The attached\n> performs an explicit_bzero before freeing like the pattern for other password\n> variables.\n\nGood catch, let's fix that. I would like to apply your suggested fix,\nbut let's see first if others have any comments.\n--\nMichael", "msg_date": "Wed, 20 May 2020 14:56:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: explicit_bzero for sslpassword" }, { "msg_contents": "On 2020-05-20 07:56, Michael Paquier wrote:\n> On Tue, May 19, 2020 at 02:33:40PM +0200, Daniel Gustafsson wrote:\n>> Since commit 74a308cf5221f we use explicit_bzero on pgpass and connhost\n>> password in libpq, but not sslpassword which seems an oversight. The attached\n>> performs an explicit_bzero before freeing like the pattern for other password\n>> variables.\n> \n> Good catch, let's fix that. 
I would like to apply your suggested fix,\n> but let's see first if others have any comments.\n\nLooks correct to me.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 20 May 2020 10:06:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: explicit_bzero for sslpassword" }, { "msg_contents": "On Wed, May 20, 2020 at 10:06:55AM +0200, Peter Eisentraut wrote:\n> Looks correct to me.\n\nThanks for confirming, Peter. Got this one applied.\n--\nMichael", "msg_date": "Thu, 21 May 2020 16:29:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: explicit_bzero for sslpassword" } ]
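The committed fix relies on explicit_bzero() (or PostgreSQL's fallback for platforms that lack it). As a rough illustration of why a plain memset() before free() is not enough, here is a minimal sketch of the volatile-function-pointer fallback technique; wipe_secret and bzero_memset are hypothetical names for this example, not libpq's API:

```c
#include <stddef.h>
#include <string.h>

/* Calling memset() through a volatile function pointer keeps the
 * compiler from proving the store is dead and optimizing it away,
 * which it may otherwise do for a buffer freed right afterwards. */
static void *(*const volatile bzero_memset) (void *, int, size_t) = memset;

static void
wipe_secret(char *buf, size_t len)
{
	if (buf != NULL)
		bzero_memset(buf, 0, len);
}
```

Usage mirrors the libpq pattern: wipe the password bytes, then free the allocation.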
[ { "msg_contents": "Definition of pg_atomic_compare_exchange_u64 requires alignment of \nexpected pointer on 8-byte boundary.\n\npg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n                                uint64 *expected, uint64 newval)\n{\n#ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n     AssertPointerAlignment(ptr, 8);\n     AssertPointerAlignment(expected, 8);\n#endif\n\n\nI wonder if there are platforms  where such restriction is actually needed.\nAnd if so, looks like our ./src/test/regress/regress.c is working only \noccasionally:\n\nstatic void\ntest_atomic_uint64(void)\n{\n     pg_atomic_uint64 var;\n     uint64        expected;\n     ...\n         if (!pg_atomic_compare_exchange_u64(&var, &expected, 1))\n\nbecause there is no warranty that \"expected\" variable will be aligned on \nstack at 8 byte boundary (at least at Win32).\n\n\n", "msg_date": "Tue, 19 May 2020 16:07:29 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Problem with pg_atomic_compare_exchange_u64 at 32-bit platformwd" }, { "msg_contents": "On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n> Definition of pg_atomic_compare_exchange_u64 requires alignment of expected\n> pointer on 8-byte boundary.\n> \n> pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n> �� ��� ��� ��� ��� ��� ��� ��� uint64 *expected, uint64 newval)\n> {\n> #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n> �� �AssertPointerAlignment(ptr, 8);\n> �� �AssertPointerAlignment(expected, 8);\n> #endif\n> \n> \n> I wonder if there are platforms� where such restriction is actually needed.\n\nIn general, sparc Linux does SIGBUS on unaligned access. 
Other platforms\nfunction but suffer performance penalties.\n\n> And if so, looks like our ./src/test/regress/regress.c is working only\n> occasionally:\n> \n> static void\n> test_atomic_uint64(void)\n> {\n> ��� pg_atomic_uint64 var;\n> ��� uint64�� ��� �expected;\n> ��� ...\n> ��� ��� if (!pg_atomic_compare_exchange_u64(&var, &expected, 1))\n> \n> because there is no warranty that \"expected\" variable will be aligned on\n> stack at 8 byte boundary (at least at Win32).\n\nsrc/tools/msvc sets ALIGNOF_LONG_LONG_INT=8, so it believes that win32 does\nguarantee 8-byte alignment of both automatic variables. Is it wrong?\n\n\n", "msg_date": "Tue, 19 May 2020 20:05:00 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platformwd" }, { "msg_contents": "Hi, \n\nOn May 19, 2020 8:05:00 PM PDT, Noah Misch <noah@leadboat.com> wrote:\n>On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n>> Definition of pg_atomic_compare_exchange_u64 requires alignment of\n>expected\n>> pointer on 8-byte boundary.\n>> \n>> pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n>>                                uint64 *expected, uint64 newval)\n>> {\n>> #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n>>     AssertPointerAlignment(ptr, 8);\n>>     AssertPointerAlignment(expected, 8);\n>> #endif\n>> \n>> \n>> I wonder if there are platforms  where such restriction is actually\n>needed.\n>\n>In general, sparc Linux does SIGBUS on unaligned access. Other\n>platforms\n>function but suffer performance penalties.\n\nIndeed. Cross cacheline atomics are e.g. really expensive on x86. 
Essentially requiring a full blown bus lock iirc.\n\n\n>> And if so, looks like our ./src/test/regress/regress.c is working\n>only\n>> occasionally:\n>> \n>> static void\n>> test_atomic_uint64(void)\n>> {\n>>     pg_atomic_uint64 var;\n>>     uint64        expected;\n>>     ...\n>>         if (!pg_atomic_compare_exchange_u64(&var, &expected, 1))\n>> \n>> because there is no warranty that \"expected\" variable will be aligned\n>on\n>> stack at 8 byte boundary (at least at Win32).\n>\n>src/tools/msvc sets ALIGNOF_LONG_LONG_INT=8, so it believes that win32\n>does\n>guarantee 8-byte alignment of both automatic variables. Is it wrong?\n\nGenerally the definition of the atomics should ensure the required alignment. E.g. using alignment attributes to the struct.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Tue, 19 May 2020 22:10:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platformwd" }, { "msg_contents": "\n\nOn 20.05.2020 06:05, Noah Misch wrote:\n> On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n>> Definition of pg_atomic_compare_exchange_u64 requires alignment of expected\n>> pointer on 8-byte boundary.\n>>\n>> pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n>>                                uint64 *expected, uint64 newval)\n>> {\n>> #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n>>     AssertPointerAlignment(ptr, 8);\n>>     AssertPointerAlignment(expected, 8);\n>> #endif\n>>\n>>\n>> I wonder if there are platforms  where such restriction is actually needed.\n> In general, sparc Linux does SIGBUS on unaligned access. 
Other platforms\n> function but suffer performance penalties.\nWell, if platform enforces strict alignment, then addressed value should \nbe properly aligned in any case, shouldn't it?\nSo my question is whether there are platforms which allows unaligned \naccess for normal (non-atomic) memory operations\nbut requires them for atomic operations.\n\n>\n>> And if so, looks like our ./src/test/regress/regress.c is working only\n>> occasionally:\n>>\n>> static void\n>> test_atomic_uint64(void)\n>> {\n>>     pg_atomic_uint64 var;\n>>     uint64        expected;\n>>     ...\n>>         if (!pg_atomic_compare_exchange_u64(&var, &expected, 1))\n>>\n>> because there is no warranty that \"expected\" variable will be aligned on\n>> stack at 8 byte boundary (at least at Win32).\n> src/tools/msvc sets ALIGNOF_LONG_LONG_INT=8, so it believes that win32 does\n> guarantee 8-byte alignment of both automatic variables. Is it wrong?\n\nYes, by default \"long long\" and \"double\" types are aligned on 8-byte \nboundary at 32-bit Windows (but not at 32-bit Linux).\nBu it is only about alignment of fields inside struct.\nSo if you define structure:\n\ntypedef struct {\n      int x;\n      long long y;\n} foo;\n\nthen sizeof(foo) will be really 16 at Win32.'\nBut Win32 doesn't enforce alignment of stack frames on 8-byte boundary.\nIt means that if you define local variable \"y\":\n\nvoid f() {\n      int x;\n      long long y;\n      printf(\"%p\\n\", &y);\n}\n\nthen its address must not be aligned on 8 at 32-bit platform.\nThis is why \"expected\" in test_atomic_uint64 may not be aligned on \n8-byte boundary and we can get assertion failure.\n\n\n\n", "msg_date": "Wed, 20 May 2020 10:23:37 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" }, { "msg_contents": "\n\nOn 20.05.2020 08:10, Andres Freund wrote:\n> Hi,\n>\n> On May 19, 2020 8:05:00 PM PDT, Noah Misch 
<noah@leadboat.com> wrote:\n>> On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n>>> Definition of pg_atomic_compare_exchange_u64 requires alignment of\n>> expected\n>>> pointer on 8-byte boundary.\n>>>\n>>> pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n>>>                                uint64 *expected, uint64 newval)\n>>> {\n>>> #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n>>>     AssertPointerAlignment(ptr, 8);\n>>>     AssertPointerAlignment(expected, 8);\n>>> #endif\n>>>\n>>>\n>>> I wonder if there are platforms  where such restriction is actually\n>> needed.\n>>\n>> In general, sparc Linux does SIGBUS on unaligned access. Other\n>> platforms\n>> function but suffer performance penalties.\n> Indeed. Cross cacheline atomics are e.g. really expensive on x86. Essentially requiring a full blown bus lock iirc.\n>\nPlease notice that here we talk about alignment not of atomic pointer \nitself, but of pointer to the expected value.\nAt Intel CMPXCHG instruction read and write expected value throw AX \nregister.\nSo alignment of pointer to expected value in \npg_atomic_compare_exchange_u64 is not needed in this case.\n\nAnd my question was whether there are some platforms where \nimplementation of compare-exchange 64-bit primitive\nrequires stronger alignment of \"expected\" pointer than one enforced by \noriginal alignment rules for this platform.\n\n\n>\n> Generally the definition of the atomics should ensure the required alignment. E.g. 
using alignment attributes to the struct.\n\nOnce again, we are speaking not about alignment of \"pg_atomic_uint64 *ptr\"\nwhich is really enforced by alignment of pg_atomic_uint64 struct, but \nabout alignment of \"uint64 *expected\"\nwhich is not guaranteed.\n\nActually, if you allocate pg_atomic_uint64 on stack at 32-bit platform, \nthen it may be also not properly aligned!\nBut since there is completely no sense in local atomic variables, it is \nnot a problem.\n\n\n\n\n", "msg_date": "Wed, 20 May 2020 10:32:18 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" }, { "msg_contents": "On Wed, May 20, 2020 at 10:23:37AM +0300, Konstantin Knizhnik wrote:\n> On 20.05.2020 06:05, Noah Misch wrote:\n> >On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n> >>Definition of pg_atomic_compare_exchange_u64 requires alignment of expected\n> >>pointer on 8-byte boundary.\n> >>\n> >>pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n> >>                                uint64 *expected, uint64 newval)\n> >>{\n> >>#ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n> >>    AssertPointerAlignment(ptr, 8);\n> >>    AssertPointerAlignment(expected, 8);\n> >>#endif\n> >>\n> >>\n> >>I wonder if there are platforms  where such restriction is actually needed.\n> >In general, sparc Linux does SIGBUS on unaligned access. Other platforms\n> >function but suffer performance penalties.\n> Well, if platform enforces strict alignment, then addressed value should be\n> properly aligned in any case, shouldn't it?\n\nNo. 
One can always cast a char* to a uint64* and get a misaligned read when\ndereferencing the resulting pointer.\n\n> >>And if so, looks like our ./src/test/regress/regress.c is working only\n> >>occasionally:\n> >>\n> >>static void\n> >>test_atomic_uint64(void)\n> >>{\n> >>     pg_atomic_uint64 var;\n> >>     uint64        expected;\n> >>     ...\n> >>         if (!pg_atomic_compare_exchange_u64(&var, &expected, 1))\n> >>\n> >>because there is no warranty that \"expected\" variable will be aligned on\n> >>stack at 8 byte boundary (at least at Win32).\n> >src/tools/msvc sets ALIGNOF_LONG_LONG_INT=8, so it believes that win32 does\n> >guarantee 8-byte alignment of both automatic variables. Is it wrong?\n> \n> Yes, by default \"long long\" and \"double\" types are aligned on 8-byte\n> boundary at 32-bit Windows (but not at 32-bit Linux).\n> Bu it is only about alignment of fields inside struct.\n> So if you define structure:\n> \n> typedef struct {\n>      int x;\n>      long long y;\n> } foo;\n> \n> then sizeof(foo) will be really 16 at Win32.'\n> But Win32 doesn't enforce alignment of stack frames on 8-byte boundary.\n> It means that if you define local variable \"y\":\n> \n> void f() {\n>      int x;\n>      long long y;\n>      printf(\"%p\\n\", &y);\n> }\n> \n> then its address must not be aligned on 8 at 32-bit platform.\n> This is why \"expected\" in test_atomic_uint64 may not be aligned on 8-byte\n> boundary and we can get assertion failure.\n\nCan you construct a patch that adds some automatic variables to a regress.c\nfunction and causes an assertion inside pg_atomic_compare_exchange_u64() to\nfail on some machine you have? I don't think win32 behaves as you say. 
If\nyou can make a test actually fail using the technique you describe, that would\nremove all doubt.\n\n\n", "msg_date": "Wed, 20 May 2020 00:36:33 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" }, { "msg_contents": "On 20.05.2020 10:36, Noah Misch wrote:\n> On Wed, May 20, 2020 at 10:23:37AM +0300, Konstantin Knizhnik wrote:\n>> On 20.05.2020 06:05, Noah Misch wrote:\n>>> On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n>>>> Definition of pg_atomic_compare_exchange_u64 requires alignment of expected\n>>>> pointer on 8-byte boundary.\n>>>>\n>>>> pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n>>>>                                uint64 *expected, uint64 newval)\n>>>> {\n>>>> #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n>>>>     AssertPointerAlignment(ptr, 8);\n>>>>     AssertPointerAlignment(expected, 8);\n>>>> #endif\n>>>>\n>>>>\n>>>> I wonder if there are platforms  where such restriction is actually needed.\n>>> In general, sparc Linux does SIGBUS on unaligned access. Other platforms\n>>> function but suffer performance penalties.\n>> Well, if platform enforces strict alignment, then addressed value should be\n>> properly aligned in any case, shouldn't it?\n> No. 
One can always cast a char* to a uint64* and get a misaligned read when\n> dereferencing the resulting pointer.\n\nYes, certainly we can \"fool\" compiler using type casts:\n\nchar buf[8];\n*(int64_t*)buf = 1;\n\nBut I am speaking about normal (safe) access to variables:\n\nlong long x;\n\nIn this case \"x\" compiler enforces proper alignment of \"x\" for the \ntarget platform.\nWe are not adding AssertPointerAlignment to any function which has \npointer arguments, aren' we?\nI understand we do we require struct alignment pointer to atomic \nvariables even at the platforms which do not require it\n(as Andreas explained, if value cross cacheline, it will cause \nsignificant slowdown).\nBut my question was whether we need string alignment of expected value?\n\n\n>> void f() {\n>>      int x;\n>>      long long y;\n>>      printf(\"%p\\n\", &y);\n>> }\n>>\n>> then its address must not be aligned on 8 at 32-bit platform.\n>> This is why \"expected\" in test_atomic_uint64 may not be aligned on 8-byte\n>> boundary and we can get assertion failure.\n> Can you construct a patch that adds some automatic variables to a regress.c\n> function and causes an assertion inside pg_atomic_compare_exchange_u64() to\n> fail on some machine you have? I don't think win32 behaves as you say. 
If\n> you can make a test actually fail using the technique you describe, that would\n> remove all doubt.\nI do not have access to Win32.\nBut I think that if you just add some 4-byte variable before \"expected\" \ndefinition, then you will get this  assertion failure (proposed patch is \nattached).\nPlease notice that PG_HAVE_ATOMIC_U64_SIMULATION should not be defined \nand Postgres is build with --enable-cassert and CLAGS=-O0\n\nAlso please notice that my report is not caused just by hypothetical \nproblem which I found out looking at Postgres code.\nWe actually get this assertion failure in pg_atomic_compare_exchange_u64 \nat Win32 (not in regress.c).", "msg_date": "Wed, 20 May 2020 10:59:44 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" }, { "msg_contents": "On Wed, May 20, 2020 at 10:59:44AM +0300, Konstantin Knizhnik wrote:\n> On 20.05.2020 10:36, Noah Misch wrote:\n> >On Wed, May 20, 2020 at 10:23:37AM +0300, Konstantin Knizhnik wrote:\n> >>On 20.05.2020 06:05, Noah Misch wrote:\n> >>>On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n> >>>>Definition of pg_atomic_compare_exchange_u64 requires alignment of expected\n> >>>>pointer on 8-byte boundary.\n> >>>>\n> >>>>pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n> >>>> �� ��� ��� ��� ��� ��� ��� ��� uint64 *expected, uint64 newval)\n> >>>>{\n> >>>>#ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n> >>>> �� �AssertPointerAlignment(ptr, 8);\n> >>>> �� �AssertPointerAlignment(expected, 8);\n> >>>>#endif\n> >>>>\n> >>>>\n> >>>>I wonder if there are platforms� where such restriction is actually needed.\n> >>>In general, sparc Linux does SIGBUS on unaligned access. 
Other platforms\n> >>>function but suffer performance penalties.\n> >>Well, if platform enforces strict alignment, then addressed value should be\n> >>properly aligned in any case, shouldn't it?\n> >No. One can always cast a char* to a uint64* and get a misaligned read when\n> >dereferencing the resulting pointer.\n> \n> Yes, certainly we can \"fool\" compiler using type casts:\n> \n> char buf[8];\n> *(int64_t*)buf = 1;\n\nPostgreSQL does things like that, so the assertions aren't frivolous.\n\n> But I am speaking about normal (safe) access to variables:\n> \n> long long x;\n> \n> In this case \"x\" compiler enforces proper alignment of \"x\" for the target\n> platform.\n> We are not adding AssertPointerAlignment to any function which has pointer\n> arguments, aren' we?\n\nMost functions don't have such assertions. That doesn't make it wrong for\nthis function to have them.\n\n> I understand we do we require struct alignment pointer to atomic variables\n> even at the platforms which do not require it\n> (as Andreas explained, if value cross cacheline, it will cause significant\n> slowdown).\n> But my question was whether we need string alignment of expected value?\n\nI expect at least some platforms need strict alignment, though I haven't tried\nto prove it.\n\n> >>void f() {\n> >>     int x;\n> >>     long long y;\n> >>     printf(\"%p\\n\", &y);\n> >>}\n> >>\n> >>then its address must not be aligned on 8 at 32-bit platform.\n> >>This is why \"expected\" in test_atomic_uint64 may not be aligned on 8-byte\n> >>boundary and we can get assertion failure.\n> >Can you construct a patch that adds some automatic variables to a regress.c\n> >function and causes an assertion inside pg_atomic_compare_exchange_u64() to\n> >fail on some machine you have? I don't think win32 behaves as you say. 
If\n> >you can make a test actually fail using the technique you describe, that would\n> >remove all doubt.\n> I do not have access to Win32.\n> But I think that if you just add some 4-byte variable before \"expected\"\n> definition, then you will get this� assertion failure (proposed patch is\n> attached).\n> Please notice that PG_HAVE_ATOMIC_U64_SIMULATION should not be defined and\n> Postgres is build with --enable-cassert and CLAGS=-O0\n> \n> Also please notice that my report is not caused just by hypothetical problem\n> which I found out looking at Postgres code.\n> We actually get this assertion failure in pg_atomic_compare_exchange_u64 at\n> Win32 (not in regress.c).\n\nGiven https://postgr.es/m/flat/20150108204635.GK6299%40alap3.anarazel.de was\nnecessary, that is plausible. Again, if you can provide a test case that you\nhave confirmed reproduces it, that will remove all doubt. You refer to a \"we\"\nthat has access to a system that reproduces it.\n\n\n", "msg_date": "Thu, 21 May 2020 23:24:23 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" }, { "msg_contents": "Hi,\n\nOn 2020-05-20 10:32:18 +0300, Konstantin Knizhnik wrote:\n> On 20.05.2020 08:10, Andres Freund wrote:\n> > On May 19, 2020 8:05:00 PM PDT, Noah Misch <noah@leadboat.com> wrote:\n> > > On Tue, May 19, 2020 at 04:07:29PM +0300, Konstantin Knizhnik wrote:\n> > > > Definition of pg_atomic_compare_exchange_u64 requires alignment of\n> > > expected\n> > > > pointer on 8-byte boundary.\n> > > >\n> > > > pg_atomic_compare_exchange_u64(volatile pg_atomic_uint64 *ptr,\n> > > > �� ��� ��� ��� ��� ��� ��� ��� uint64 *expected, uint64 newval)\n> > > > {\n> > > > #ifndef PG_HAVE_ATOMIC_U64_SIMULATION\n> > > > �� �AssertPointerAlignment(ptr, 8);\n> > > > �� �AssertPointerAlignment(expected, 8);\n> > > > #endif\n> > > >\n> > > >\n> > > > I wonder if there are platforms� where such restriction is 
actually\n> > > needed.\n> > >\n> > > In general, sparc Linux does SIGBUS on unaligned access. Other\n> > > platforms\n> > > function but suffer performance penalties.\n> > Indeed. Cross cacheline atomics are e.g. really expensive on x86. Essentially requiring a full blown bus lock iirc.\n> >\n> Please notice that here we talk about alignment not of atomic pointer\n> itself, but of pointer to the expected value.\n\nThat wasn't particularly clear in your first email... In hindsight I\ncan see that you meant that, but I'm not surprised to not have\nunderstood that the on the first read either.\n\n\n> At Intel CMPXCHG instruction read and write expected value throw AX\n> register.\n> So alignment of pointer to expected value in pg_atomic_compare_exchange_u64\n> is not needed in this case.\n\nx86 also supports doing a CMPXCHG crossing a cacheline boundary, it's\njust terrifyingly expensive...\n\n\nI can imagine this being a problem on a 32bit platforms, but on 64bit\nplatforms, it seems only an insane platform ABI would only have 4 byte\nalignment on 64bit integers. That'd cause so much unnecessarily split\ncachlines... That's separate from the ISA actually supporting doing such\nreads efficiently, of course.\n\n\nBut that still leaves the alignment check on expected to be too strong\non 32 bit platforms where 64bit alignment is only 4 bytes. I'm doubtful\nthat's it's a good idea to use a comparison value potentially split\nacross cachelines for an atomic operation that's potentially\ncontended. But also, I don't particularly care about 32 bit performance.\n\nI think we should probably just drop the second assert down to\nALIGNOF_INT64. Would require a new configure stanza, but that's easy\nenough to do. It's just adding\nAC_CHECK_ALIGNOF(PG_INT64_TYPE)\n\nDoing that change made me think about replace the conditional long long\nint alignof logic in configure.in, and just unconditionally do the a\ncheck for PG_INT64_TYPE, seems nicer. 
That made me look at Solution.pm\ndue to ALIGNOF_LONG_LONG, and it's interesting:\n\t# Every symbol in pg_config.h.in must be accounted for here. Set\n\t# to undef if the symbol should not be defined.\n\tmy %define = (\n...\n\t\tALIGNOF_LONG_LONG_INT => 8,\n...\n\t\tPG_INT64_TYPE => 'long long int',\n\nso currently our msvc build actually claims that the alignment\nrequirements are what the code tests. And that's not just since\npg_config.h is autogenerated, it was that way before too:\n\n/* The alignment requirement of a `long long int'. */\n#define ALIGNOF_LONG_LONG_INT 8\n/* Define to the name of a signed 64-bit integer type. */\n#define PG_INT64_TYPE long long int\n\nand has been for a while.\n\n\n> And my question was whether there are some platforms where implementation of\n> compare-exchange 64-bit primitive\n> requires stronger alignment of \"expected\" pointer than one enforced by\n> original alignment rules for this platform.\n\nIIRC there's a few older platforms that have single-copy-atomicity for 8\nbyte values, but do *not* have it for ones not aligned to 8 byte\nplatforms. Despite not having such an ABI alignment.\n\nIt's not impossible to come up with a case where that could matter (if\nexpected pointed into some shared memory that could be read by others),\nbut it's hard to take them serious.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 28 May 2020 18:46:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Problem with pg_atomic_compare_exchange_u64 at 32-bit platforms" } ]
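The AssertPointerAlignment() checks debated throughout this thread boil down to a power-of-two address test: an address is aligned to boundary N exactly when its low-order bits are zero. A minimal sketch (ALIGNED_TO is a hypothetical macro for illustration, not PostgreSQL's):

```c
#include <stdint.h>

/* True iff address p is aligned to the power-of-two boundary bndr.
 * Subtracting 1 from the boundary yields a mask of the low bits that
 * must all be zero for the address to be aligned. */
#define ALIGNED_TO(p, bndr) \
	((((uintptr_t) (p)) & ((uintptr_t) (bndr) - 1)) == 0)
```

For example, address 16 passes an 8-byte check while address 12 fails it but passes a 4-byte check, which is exactly the 32-bit stack situation discussed above.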
[ { "msg_contents": "Hi,\n\nI've been running some TPC-H benchmarks on master, to check if there's\nsomething unexpected, and I ran into some annoying issues with Q17 and\nQ20. I'll use Q17 as it's a bit simpler.\n\nI think there are two related problem - with costing and with excessive\nI/O due to using logical tapes.\n\nLet's talk about the costing first. On 75GB scale (with disabled parallel\nquery), the execution plan looks like this:\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------\n Limit (cost=16997740.10..16997740.12 rows=1 width=32)\n -> Aggregate (cost=16997740.10..16997740.12 rows=1 width=32)\n -> Nested Loop (cost=14204895.82..16997574.11 rows=66397 width=8)\n Join Filter: (part.p_partkey = lineitem.l_partkey)\n -> Hash Join (cost=14204895.25..16251060.84 rows=6640 width=40)\n Hash Cond: (lineitem_1.l_partkey = part.p_partkey)\n -> HashAggregate (cost=13977751.34..15945557.39 rows=6206695 width=36)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 128\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9)\n -> Hash (cost=227058.33..227058.33 rows=6846 width=4)\n -> Seq Scan on part (cost=0.00..227058.33 rows=6846 width=4)\n Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n -> Index Scan using idx_lineitem_part_supp on lineitem (cost=0.57..112.30 rows=10 width=17)\n Index Cond: (l_partkey = lineitem_1.l_partkey)\n Filter: (l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n (16 rows)\n\nand if I disale hash aggregate (or spill to disk), it changes to this:\n\n QUERY PLAN \n -------------------------------------------------------------------------------------------------------------------------\n Limit (cost=44577524.39..44577524.40 rows=1 width=32)\n -> Aggregate (cost=44577524.39..44577524.40 rows=1 width=32)\n -> Merge Join (cost=41772792.17..44577358.39 rows=66397 width=8)\n Merge Cond: (lineitem_1.l_partkey 
= part.p_partkey)\n Join Filter: (lineitem.l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n -> GroupAggregate (cost=41772791.17..43305665.51 rows=6206695 width=36)\n Group Key: lineitem_1.l_partkey\n -> Sort (cost=41772791.17..42252715.81 rows=191969856 width=9)\n Sort Key: lineitem_1.l_partkey\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9)\n -> Materialize (cost=1.00..1191105.89 rows=205371 width=21)\n -> Nested Loop (cost=1.00..1190592.46 rows=205371 width=21)\n -> Index Scan using part_pkey on part (cost=0.43..329262.21 rows=6846 width=4)\n Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n -> Index Scan using idx_lineitem_part_supp on lineitem (cost=0.57..125.51 rows=31 width=17)\n Index Cond: (l_partkey = part.p_partkey)\n (16 rows)\n\nThe problem is that the hashagg plan runs in ~1400 seconds, while the\ngroupagg only takes ~360. And per explain analyze, the difference really\nis in the aggregation - if we subtract the seqscan, the sort+groupagg\ntakes about 310s:\n\n -> GroupAggregate (cost=41772791.17..43305665.51 rows=6206695 width=36) (actual time=283378.004..335611.192 rows=6398981 loops=1)\n Group Key: lineitem_1.l_partkey\n -> Sort (cost=41772791.17..42252715.81 rows=191969856 width=9) (actual time=283377.977..306182.393 rows=191969841 loops=1)\n Sort Key: lineitem_1.l_partkey\n Sort Method: external merge Disk: 3569544kB\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.019..28253.076 rows=192000551 loops=1)\n\nwhile the hashagg takes ~1330s:\n\n -> HashAggregate (cost=13977751.34..15945557.39 rows=6206695 width=36) (actual time=202952.170..1354546.897 rows=6400000 loops=1)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 128\n Peak Memory Usage: 4249 kB\n Disk Usage: 26321840 kB\n HashAgg Batches: 16512\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.007..22205.617 
rows=192000551 loops=1)\n\nAnd that's while only writing 26GB, compared to 35GB in the sorted plan,\nand with cost being ~16M vs. ~43M (so roughly inverse).\n\nOK, let's make the hashagg plan more expensive - that'll fix it, right?.\nBut how do you do that? I might lower the work_mem, say from 4MB to 1MB,\nwhich gets us from ~16M\n\n -> HashAggregate (cost=13977751.34..15945557.39 rows=6206695 width=36)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 128\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9)\n\nto ~20M (I'm a bit surprised that the planned partitions dropped 4x, but\nI suspect there's an explanation for that).\n\n -> HashAggregate (cost=17727162.59..20632321.45 rows=6206695 width=36)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 32\n -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9)\n\nAnyway, this did not really solve anything, apparently. The cost is\nstill much lower than for groupagg, and moreover I don't want to lower\nwork_mem - I want to increase cost for a given work_mem value. And it\nalso increases the sort cost estimate, of course.\n\nAs I'll show in a minute, I believe most of this is due to I/O pattern\nfor the logical tapes, which is very different between sort and hashagg.\nSo it'd be natural to consider seq_page_cost/random_page_cost on the\ntemp tablespace, but that's not how it works - we just ignore that :-(\n\n\nWhy do I think this is due to a difference in I/O pattern on the logical\ntape set? I've moved the temporary tablespace to a separate SSD device,\nand used iosnoop [1] to collect all I/O requests associated with this\nquery. Attached are four charts showing blocks (sectors) accessed over\ntime, both for the groupagg and hashagg plans.\n\n\n1) sort + groupagg\n\nFor groupagg (tempio-sort.png) the chart looks a bit chaotic, but it's\nreasonable - it shows the sort does merges, etc. 
Nothing particularly\nsurprising, IMHO.\n\nIt's also interesting to look at statistics of block sizes, and deltas\nof the blocks, for different request types. The most common\nblock sizes look something like this (the last column is percentage\nof all requests with the same request type):\n\n type | bytes | count | pct \n ------+---------+-------+-------\n RA | 131072 | 26034 | 59.92\n RA | 16384 | 6160 | 14.18\n RA | 8192 | 3636 | 8.37\n RA | 32768 | 3406 | 7.84\n RA | 65536 | 3270 | 7.53\n RA | 24576 | 361 | 0.83\n ...\n W | 1310720 | 8070 | 34.26\n W | 262144 | 1213 | 5.15\n W | 524288 | 1056 | 4.48\n W | 1056768 | 689 | 2.93\n W | 786432 | 292 | 1.24\n W | 802816 | 199 | 0.84\n ...\n\nThe writes are buffered and so are done by kworkers, which seem to be\nable to coalesce them into fairly large chunks (e.g. 34% are 1280kB).\nThe reads come from the postgres backend, and generally are 128kB reads.\n\nThe deltas (in 512B sectors) are mostly consistent with this:\n\n type | block_delta | count | pct \n ------+-------------+-------+-------\n RA | 256 | 13432 | 30.91\n RA | 16 | 3291 | 7.57\n RA | 32 | 3272 | 7.53\n RA | 64 | 3266 | 7.52\n RA | 128 | 2877 | 6.62\n RA | 1808 | 1278 | 2.94\n RA | -2320 | 483 | 1.11\n RA | 28928 | 386 | 0.89\n ...\n W | 2560 | 7856 | 33.35\n W | 2064 | 4921 | 20.89\n W | 2080 | 586 | 2.49\n W | 30960 | 300 | 1.27\n W | 2160 | 253 | 1.07\n W | 1024 | 248 | 1.05\n ...\n\nI believe this suggests most of the I/O is pretty sequential. E.g. 31%\nof the reads are 256 sectors (128kB) apart, which is proportional to the\n128kB reads.\n\n\n2) hashagg\n\nThe I/O pattern is illustrated by the tempio-hash.png chart, and it's\nclearly very different from the sort one. We're reading over and over\nin a zig-zag way. 
I'm pretty sure there are ~128 cycles matching the\nnumber of partitions in the explain analyze output, which end up being\ninterleaved in the temporary file.\n\nBut even at the partition level this is not very very sequential - there\nare two \"zoom\" charts showing smaller parts in more detail, and there's\nvery obvious nested zig-zag pattern.\n\nAlso, let's look at the block / delta stats:\n\n type | bytes | count | pct\n ------+---------+---------+--------\n RA | 8192 | 3087724 | 95.42\n RA | 24576 | 69511 | 2.15\n RA | 16384 | 49297 | 1.52\n RA | 32768 | 15589 | 0.48\n ...\n W | 8192 | 321089 | 65.72\n W | 16384 | 74097 | 15.17\n W | 24576 | 27785 | 5.69\n W | 1310720 | 16860 | 3.45\n W | 32768 | 13823 | 2.83\n W | 40960 | 7771 | 1.59\n W | 49152 | 4767 | 0.98\n ...\n\nWell, that's not great - we're not really coalescing writes or reads,\neverything is pretty much 8kB block. Especially the writes are somewhat\nsurprising/concerning, because it shows the kernel is unable to combine\nthe requests etc.\n\nThe deltas look very different too:\n\n type | block_delta | count | pct\n ------+-------------+-------+-------\n RA | 2016 | 72399 | 2.24\n RA | 2032 | 72351 | 2.24\n RA | 1984 | 72183 | 2.23\n RA | 2000 | 71964 | 2.22\n RA | 2048 | 71718 | 2.22\n RA | 2064 | 71387 | 2.21\n RA | 1968 | 71363 | 2.21\n RA | 1952 | 70412 | 2.18\n RA | 2080 | 70189 | 2.17\n RA | 2096 | 69568 | 2.15\n RA | 1936 | 69109 | 2.14\n RA | 1920 | 67660 | 2.09\n RA | 2112 | 67248 | 2.08\n RA | 1904 | 66026 | 2.04\n ...\n\nThere's no clear winner matching the block size, or anything. In fact,\nit does oscillate around 2000 sectors, i.e. 1MB. And 128 partitions\nmultiplied by 8kB block per partition is ... 1MB (tadaaaa!).\n\nThis however makes any read-ahead attempts ineffective :-(\n\nAnd let me repeat - this is on a machine with temp tablespace moved to\nan SSD, so the random I/O is not entirely terrible. 
On a different box\nwith temp tablespace on 3x SATA RAID, the impact is much worse.\n\n\nThis kinda makes me question whether logical tapes are the right tool\nfor hashagg. I've read the explanation in logtape.c why it's about the\nsame amount of I/O as using separate files, but IMO that only really\nworks for I/O patterns similar to merge sort - the more I think about\nthis, the more I'm convinced we should just do what hashjoin is doing.\n\nBut maybe I'm wrong, and logical tapes are the best thing we can do\nhere. But in that case I think we need to improve the costing, so that\nit reflects the very different I/O pattern.\n\n\n[1] https://github.com/brendangregg/perf-tools/blob/master/iosnoop\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 19 May 2020 17:12:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, 2020-05-19 at 17:12 +0200, Tomas Vondra wrote:\n> I think there are two related problems - with costing and with\n> excessive\n> I/O due to using logical tapes.\n\nThank you for the detailed analysis. I am still digesting this\ninformation.\n\n> This kinda makes me question whether logical tapes are the right tool\n> for hashagg. I've read the explanation in logtape.c why it's about\n> the\n> same amount of I/O as using separate files, but IMO that only really\n> works for I/O patterns similar to merge sort - the more I think about\n> this, the more I'm convinced we should just do what hashjoin is\n> doing.\n\nFundamentally, sort writes sequentially and reads randomly; while\nHashAgg writes randomly and reads sequentially. \n\nIf the random writes of HashAgg end up fragmented too much on disk,\nthen clearly the sequential reads are not so sequential anyway. 
The\nonly way to avoid fragmentation on disk is to preallocate for the\ntape/file.\n\nBufFile (relying more on the OS) would probably do a better job of\npreallocating the disk space in a useful way; whereas logtape.c makes\nit easier to manage buffers and the overall number of files created\n(thereby allowing higher fanout of partitions).\n\nWe have a number of possibilities here:\n\n1. Improve costing to reflect that HashAgg is creating more random IOs\nthan Sort.\n2. Reduce the partition fanout in the hopes that the OS does a better\njob with readahead.\n3. Switch back to BufFile, in which case we probably need to reduce the\nfanout for other reasons.\n4. Change logtape.c to allow preallocation or to write in larger\nblocks.\n5. Change BufFile to allow more control over buffer usage, and switch\nto that.\n\n#1 or #2 are the least invasive, and I think we can get a satisfactory\nsolution by combining those.\n\nI saw good results with the high fanout and low work_mem when there is\nstill a lot of system memory. That's a nice benefit, but perhaps it's\nsafer to use a lower fanout (which will lead to recursion) until we get\na better handle on the IO patterns.\n\nPerhaps you can try recompiling with a lower max partitions and rerun\nthe query? How much would we have to lower it for either the cost to\napproach reality or the OS readahead to become effective?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 19 May 2020 09:27:34 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 19, 2020 at 09:27:34AM -0700, Jeff Davis wrote:\n>On Tue, 2020-05-19 at 17:12 +0200, Tomas Vondra wrote:\n>> I think there are two related problem - with costing and with\n>> excessive\n>> I/O due to using logical tapes.\n>\n>Thank you for the detailed analysis. 
I am still digesting this\n>information.\n>\n>> This kinda makes me question whether logical tapes are the right tool\n>> for hashagg. I've read the explanation in logtape.c why it's about\n>> the\n>> same amount of I/O as using separate files, but IMO that only really\n>> works for I/O patterns similar to merge sort - the more I think about\n>> this, the more I'm convinced we should just do what hashjoin is\n>> doing.\n>\n>Fundamentally, sort writes sequentially and reads randomly; while\n>HashAgg writes randomly and reads sequentially.\n>\n\nNot sure. I think the charts and stats of iosnoop data show that an\nawful lot of reads during sort are actually pretty sequential. Moreover,\nsort manages to read the data in much larger blocks - 128kB instead of\njust 8kB (which is what hashagg seems to be doing).\n\nI wonder why that is and if we could achieve that for hashagg too ...\n\n>If the random writes of HashAgg end up fragmented too much on disk,\n>then clearly the sequential reads are not so sequential anyway. The\n>only way to avoid fragmentation on disk is to preallocate for the\n>tape/file.\n>\n\nAnd is there a way to pre-allocate larger chunks? Presumably we could\nassign the blocks to tape in larger chunks (e.g. 128kB, i.e. 16 x 8kB)\ninstead of just single block. I haven't seen anything like that in\ntape.c, though ...\n\n>BufFile (relying more on the OS) would probably do a better job of\n>preallocating the disk space in a useful way; whereas logtape.c makes\n>it easier to manage buffers and the overall number of files created\n>(thereby allowing higher fanout of partitions).\n>\n>We have a number of possibilities here:\n>\n>1. Improve costing to reflect that HashAgg is creating more random IOs\n>than Sort.\n\nI think we'll need to do something about this, but I think we should try\nimproving the behavior first and then model the costing based on that.\n\n>2. 
Reduce the partition fanout in the hopes that the OS does a better\n>job with readahead.\n\nI doubt this will make a significant difference. I think the problem is\nthe partitions end up interleaved way too much in the temp file, and I\ndon't see how a lower fanout would fix that.\n\nBTW what do you mean when you say \"fanout\"? Do you mean how fast we\nincrease the number of partitions, or some parameter in particular?\n\n>3. Switch back to BufFile, in which case we probably need to reduce the\n>fanout for other reasons.\n\nMaybe, although that seems pretty invasive post beta1.\n\n>4. Change logtape.c to allow preallocation or to write in larger\n>blocks.\n\nI think this is what I suggested above (allocating 16 blocks at a time,\nor something). I wonder how wasteful this would be, but I think not very\nmuch. Essentially, with 1024 partitions and pre-allocating space in\n128kB chunks, that means 128MB may end up unused, which seems ok-ish,\nand I guess we could further restrict that by starting with lower value\nand gradually increasing the number. Or something like that ...\n\n>5. Change BufFile to allow more control over buffer usage, and switch\n>to that.\n>\n\nMaybe. I don't recall what exactly is the issue with buffer usage, but I\nthink it has the same invasiveness issue as (3). OTOH it's what hashjoin\ndoes, and we've lived with it for ages ...\n\n>#1 or #2 are the least invasive, and I think we can get a satisfactory\n>solution by combining those.\n>\n\nOK. I think tweaking the costing (and essentially reverting to what 12\ndoes for those queries) is perfectly reasonable. But if we can actually\nget some speedup thanks to hashagg, even better.\n\n>I saw good results with the high fanout and low work_mem when there is\n>still a lot of system memory. 
That's a nice benefit, but perhaps it's\n>safer to use a lower fanout (which will lead to recursion) until we get\n>a better handle on the IO patterns.\n>\n\nI don't know how much we can rely on that - once we push some of the\ndata from page cache, it has the issues I described. The trouble is\npeople may not have enough memory to keep everything in cache, otherwise\nthey might just as well bump up work_mem and not spill at all.\n\n>Perhaps you can try recompiling with a lower max partitions and rerun\n>the query? How much would we have to lower it for either the cost to\n>approach reality or the OS readahead to become effective?\n>\n\nI can try that, of course. Which parameters should I tweak / how?\n\nI can also try running it with BufFile, in case you prepare a WIP patch.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 19 May 2020 19:53:20 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, 2020-05-19 at 19:53 +0200, Tomas Vondra wrote:\n> \n> And if there a way to pre-allocate larger chunks? Presumably we could\n> assign the blocks to tape in larger chunks (e.g. 128kB, i.e. 16 x\n> 8kB)\n> instead of just single block. I haven't seen anything like that in\n> tape.c, though ...\n\nIt turned out to be simple (at least a POC) so I threw together a\npatch. I just added a 32-element array of block numbers to each tape.\nWhen we need a new block, we retrieve a block number from that array;\nor if it's empty, we fill it by calling ltsGetFreeBlock() 32 times.\n\nI reproduced the problem on a smaller scale (330M groups, ~30GB of\nmemory on a 16GB box). Work_mem=64MB. 
The query is a simple distinct.\n\nUnpatched master:\n Sort: 250s\n HashAgg: 310s\nPatched master:\n Sort: 245s\n HashAgg: 262s\n\nThat's a nice improvement for such a simple patch. We can tweak the\nnumber of blocks to preallocate, or do other things like double from a\nsmall number up to a maximum. Also, a proper patch would probably\nrelease the blocks back as free when the tape was rewound.\n\nAs long as the number of block numbers to preallocate is not too large,\nI don't think we need to change the API. It seems fine for sort to do\nthe same thing, even though there's not any benefit.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 19 May 2020 21:15:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 19, 2020 at 09:15:40PM -0700, Jeff Davis wrote:\n>On Tue, 2020-05-19 at 19:53 +0200, Tomas Vondra wrote:\n>>\n>> And if there a way to pre-allocate larger chunks? Presumably we could\n>> assign the blocks to tape in larger chunks (e.g. 128kB, i.e. 16 x\n>> 8kB)\n>> instead of just single block. I haven't seen anything like that in\n>> tape.c, though ...\n>\n>It turned out to be simple (at least a POC) so I threw together a\n>patch. I just added a 32-element array of block numbers to each tape.\n>When we need a new block, we retrieve a block number from that array;\n>or if it's empty, we fill it by calling ltsGetFreeBlock() 32 times.\n>\n>I reproduced the problem on a smaller scale (330M groups, ~30GB of\n>memory on a 16GB box). Work_mem=64MB. The query is a simple distinct.\n>\n>Unpatched master:\n> Sort: 250s\n> HashAgg: 310s\n>Patched master:\n> Sort: 245s\n> HashAgg: 262s\n>\n>That's a nice improvement for such a simple patch. We can tweak the\n>number of blocks to preallocate, or do other things like double from a\n>small number up to a maximum. 
Also, a proper patch would probably\n>release the blocks back as free when the tape was rewound.\n>\n>As long as the number of block numbers to preallocate is not too large,\n>I don't think we need to change the API. It seems fine for sort to do\n>the same thing, even though there's not any benefit.\n>\n\nI gave it a try on the machine with temp tablespace on SSD, and I can\nconfirm it improves performance. I've tried with different work_mem\nvalues and I've also increased the number of pre-allocated blocks to 64\nand 128 blocks, and the numbers look like this:\n\nmaster\n\n sort hash\n ----------------------------\n 4MB 335 1331\n 128MB 220 1208\n\n\npatched (32)\n\n sort hash\n ----------------------------\n 4MB 344 685\n 128MB 217 641\n\n\npatched (64)\n\n sort hash\n ----------------------------\n 4MB 329 545\n 128MB 214 493\n\npatched (128)\n\n sort hash\n ----------------------------\n 4MB 331 478\n 128MB 222 434\n\n\nI agree that's pretty nice. I wonder how far would we need to go before\nreaching a plateau. I'll try this on the other machine with temporary\ntablespace on SATA, but that'll take longer.\n\nThe I/O pattern changed significantly - it's not visible on the charts,\nso I'm not attaching them. 
But the statistics of block sizes and \"gaps\"\nare pretty clear.\n\n\nsize of I/O requests\n--------------------\n\na) master\n\n type | bytes | count | pct \n ------+---------+---------+--------\n RA | 8192 | 2905948 | 95.83\n RA | 24576 | 63470 | 2.09\n RA | 16384 | 40155 | 1.32\n W | 8192 | 149295 | 52.85\n W | 16384 | 51781 | 18.33\n W | 24576 | 22247 | 7.88\n W | 1310720 | 15493 | 5.48\n W | 32768 | 11856 | 4.20\n\nb) patched, 32 blocks\n\n type | bytes | count | pct\n ------+---------+--------+--------\n RA | 131072 | 247686 | 41.75\n RA | 8192 | 95746 | 16.14\n RA | 16384 | 82314 | 13.87\n RA | 32768 | 82146 | 13.85\n RA | 65536 | 82126 | 13.84\n W | 1310720 | 16815 | 52.19\n W | 262144 | 3628 | 11.26\n W | 524288 | 2791 | 8.66\n\nc) patched, 64 blocks\n\n type | bytes | count | pct\n ------+---------+--------+--------\n RA | 131072 | 213556 | 56.18\n RA | 8192 | 47663 | 12.54\n RA | 16384 | 39358 | 10.35\n RA | 32768 | 39308 | 10.34\n RA | 65536 | 39304 | 10.34\n W | 1310720 | 18132 | 65.27\n W | 524288 | 3722 | 13.40\n W | 262144 | 581 | 2.09\n W | 1048576 | 405 | 1.46\n W | 8192 | 284 | 1.02\n\nd) patched, 128 blocks\n\n type | bytes | count | pct\n ------+---------+--------+--------\n RA | 131072 | 200816 | 70.93\n RA | 8192 | 23640 | 8.35\n RA | 16384 | 19324 | 6.83\n RA | 32768 | 19279 | 6.81\n RA | 65536 | 19273 | 6.81\n W | 1310720 | 18000 | 65.91\n W | 524288 | 2074 | 7.59\n W | 1048576 | 660 | 2.42\n W | 8192 | 409 | 1.50\n W | 786432 | 354 | 1.30\n\nClearly, the I/O requests are much larger - both reads and writes\nshifted from 8kB to much larger ones, and the larger the number of\nblocks the more significant the shift is. 
This means the workload is\ngetting more \"sequential\" and the write combining / read-ahead becomes\nmore effective.\n\n\ndeltas between I/O requests\n---------------------------\n\nI'll only show reads to save space, it's about the same for writes.\n\na) master\n\n type | block_delta | count | pct \n ------+-------------+--------+-------\n RA | 256 | 569237 | 18.77\n RA | 240 | 475182 | 15.67\n RA | 272 | 437260 | 14.42\n RA | 224 | 328604 | 10.84\n RA | 288 | 293628 | 9.68\n RA | 208 | 199530 | 6.58\n RA | 304 | 181695 | 5.99\n RA | 192 | 109472 | 3.61\n RA | 320 | 105211 | 3.47\n RA | 336 | 57423 | 1.89\n\nb) patched, 32 blocks\n\n type | block_delta | count | pct \n ------+-------------+--------+-------\n RA | 256 | 165071 | 27.82\n RA | 32 | 82129 | 13.84\n RA | 64 | 82122 | 13.84\n RA | 128 | 82077 | 13.83\n RA | 16 | 82042 | 13.83\n RA | 7440 | 45168 | 7.61\n RA | 7952 | 9838 | 1.66\n\nc) patched, 64 blocks\n\n type | block_delta | count | pct\n ------+-------------+--------+-------\n RA | 256 | 173737 | 45.70\n RA | 32 | 39301 | 10.34\n RA | 64 | 39299 | 10.34\n RA | 128 | 39291 | 10.34\n RA | 16 | 39250 | 10.32\n RA | 15120 | 21202 | 5.58\n RA | 15376 | 4448 | 1.17\n\nd) patched, 128 blocks\n\n type | block_delta | count | pct\n ------+-------------+--------+-------\n RA | 256 | 180955 | 63.91\n RA | 32 | 19274 | 6.81\n RA | 64 | 19273 | 6.81\n RA | 128 | 19264 | 6.80\n RA | 16 | 19203 | 6.78\n RA | 30480 | 9835 | 3.47\n\nThe way I understand it, this needs to be interpreted together with\nblock size stats - in a perfectly sequential workload the two stats\nwould match. For master that's clearly not the case - the most common\nread request size is 8kB, but the most common delta is 128kB (256\nsectors, which is the read-ahead for the SSD device). 
The patched\nresults are much closer, mostly thanks to switching to 128kB reads.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 02:12:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 19, 2020 at 05:12:02PM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>The problem is that the hashagg plan runs in ~1400 seconds, while the\n>groupagg only takes ~360. And per explain analyze, the difference really\n>is in the aggregation - if we subtract the seqscan, the sort+groupagg\n>takes about 310s:\n>\n> -> GroupAggregate (cost=41772791.17..43305665.51 rows=6206695 width=36) (actual time=283378.004..335611.192 rows=6398981 loops=1)\n> Group Key: lineitem_1.l_partkey\n> -> Sort (cost=41772791.17..42252715.81 rows=191969856 width=9) (actual time=283377.977..306182.393 rows=191969841 loops=1)\n> Sort Key: lineitem_1.l_partkey\n> Sort Method: external merge Disk: 3569544kB\n> -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.019..28253.076 rows=192000551 loops=1)\n>\n>while the hashagg takes ~1330s:\n>\n> -> HashAggregate (cost=13977751.34..15945557.39 rows=6206695 width=36) (actual time=202952.170..1354546.897 rows=6400000 loops=1)\n> Group Key: lineitem_1.l_partkey\n> Planned Partitions: 128\n> Peak Memory Usage: 4249 kB\n> Disk Usage: 26321840 kB\n> HashAgg Batches: 16512\n> -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.007..22205.617 rows=192000551 loops=1)\n>\n>And that's while only writing 26GB, compared to 35GB in the sorted plan,\n>and with cost being ~16M vs. ~43M (so roughly inverse).\n>\n\nI've noticed I've actually made a mistake here - it's not 26GB vs. 35GB\nin hash vs. sort, it's 26GB vs. 3.5GB. 
That is, the sort-based plan\nwrites out *way less* data to the temp file.\n\nThe reason is revealed by explain verbose:\n\n -> GroupAggregate\n Output: lineitem_1.l_partkey, (0.2 * avg(lineitem_1.l_quantity))\n Group Key: lineitem_1.l_partkey\n -> Sort\n Output: lineitem_1.l_partkey, lineitem_1.l_quantity\n Sort Key: lineitem_1.l_partkey\n -> Seq Scan on public.lineitem lineitem_1\n Output: lineitem_1.l_partkey, lineitem_1.l_quantity\n\n -> HashAggregate\n Output: lineitem_1.l_partkey, (0.2 * avg(lineitem_1.l_quantity))\n Group Key: lineitem_1.l_partkey\n -> Seq Scan on public.lineitem lineitem_1\n Output: lineitem_1.l_orderkey, lineitem_1.l_partkey,\n lineitem_1.l_suppkey, lineitem_1.l_linenumber,\n lineitem_1.l_quantity, lineitem_1.l_extendedprice,\n lineitem_1.l_discount, lineitem_1.l_tax,\n lineitem_1.l_returnflag, lineitem_1.l_linestatus,\n lineitem_1.l_shipdate, lineitem_1.l_commitdate,\n lineitem_1.l_receiptdate, lineitem_1.l_shipinstruct,\n lineitem_1.l_shipmode, lineitem_1.l_comment\n\nIt seems that in the hashagg case we're not applying projection in the\nseqscan, forcing us to serialize way much data (the whole lineitem\ntable, essentially).\n\nIt's probably still worth tweaking the I/O pattern, I think.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 15:41:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 03:41:22PM +0200, Tomas Vondra wrote:\n>On Tue, May 19, 2020 at 05:12:02PM +0200, Tomas Vondra wrote:\n>>\n>>...\n>>\n>>The problem is that the hashagg plan runs in ~1400 seconds, while the\n>>groupagg only takes ~360. 
And per explain analyze, the difference really\n>>is in the aggregation - if we subtract the seqscan, the sort+groupagg\n>>takes about 310s:\n>>\n>> -> GroupAggregate (cost=41772791.17..43305665.51 rows=6206695 width=36) (actual time=283378.004..335611.192 rows=6398981 loops=1)\n>> Group Key: lineitem_1.l_partkey\n>> -> Sort (cost=41772791.17..42252715.81 rows=191969856 width=9) (actual time=283377.977..306182.393 rows=191969841 loops=1)\n>> Sort Key: lineitem_1.l_partkey\n>> Sort Method: external merge Disk: 3569544kB\n>> -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.019..28253.076 rows=192000551 loops=1)\n>>\n>>while the hashagg takes ~1330s:\n>>\n>> -> HashAggregate (cost=13977751.34..15945557.39 rows=6206695 width=36) (actual time=202952.170..1354546.897 rows=6400000 loops=1)\n>> Group Key: lineitem_1.l_partkey\n>> Planned Partitions: 128\n>> Peak Memory Usage: 4249 kB\n>> Disk Usage: 26321840 kB\n>> HashAgg Batches: 16512\n>> -> Seq Scan on lineitem lineitem_1 (cost=0.00..5519079.56 rows=191969856 width=9) (actual time=0.007..22205.617 rows=192000551 loops=1)\n>>\n>>And that's while only writing 26GB, compared to 35GB in the sorted plan,\n>>and with cost being ~16M vs. ~43M (so roughly inverse).\n>>\n>\n>I've noticed I've actually made a mistake here - it's not 26GB vs. 35GB\n>in hash vs. sort, it's 26GB vs. 3.5GB. 
That is, the sort-based plan\n>writes out *way less* data to the temp file.\n>\n>The reason is revealed by explain verbose:\n>\n> -> GroupAggregate\n> Output: lineitem_1.l_partkey, (0.2 * avg(lineitem_1.l_quantity))\n> Group Key: lineitem_1.l_partkey\n> -> Sort\n> Output: lineitem_1.l_partkey, lineitem_1.l_quantity\n> Sort Key: lineitem_1.l_partkey\n> -> Seq Scan on public.lineitem lineitem_1\n> Output: lineitem_1.l_partkey, lineitem_1.l_quantity\n>\n> -> HashAggregate\n> Output: lineitem_1.l_partkey, (0.2 * avg(lineitem_1.l_quantity))\n> Group Key: lineitem_1.l_partkey\n> -> Seq Scan on public.lineitem lineitem_1\n> Output: lineitem_1.l_orderkey, lineitem_1.l_partkey,\n> lineitem_1.l_suppkey, lineitem_1.l_linenumber,\n> lineitem_1.l_quantity, lineitem_1.l_extendedprice,\n> lineitem_1.l_discount, lineitem_1.l_tax,\n> lineitem_1.l_returnflag, lineitem_1.l_linestatus,\n> lineitem_1.l_shipdate, lineitem_1.l_commitdate,\n> lineitem_1.l_receiptdate, lineitem_1.l_shipinstruct,\n> lineitem_1.l_shipmode, lineitem_1.l_comment\n>\n>It seems that in the hashagg case we're not applying projection in the\n>seqscan, forcing us to serialize way much data (the whole lineitem\n>table, essentially).\n>\n>It's probably still worth tweaking the I/O pattern, I think.\n>\n\nOK, it seems the attached trivial fix (simply changing CP_LABEL_TLIST to\nCP_SMALL_TLIST) addresses this for me. 
I've only tried it on the patched\nversion that pre-allocates 128 blocks, and the results seem pretty nice:\n\n sort hash hash+tlist\n ------------------------------------------\n 4MB 331 478 188\n 128MB 222 434 210\n\nwhich I guess is what we wanted ...\n\nI'll give it a try on the other machine (temp on SATA), but I don't see\nwhy would it not behave similarly nicely.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 21 May 2020 16:30:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 02:12:55AM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>I agree that's pretty nice. I wonder how far would we need to go before\n>reaching a plateau. I'll try this on the other machine with temporary\n>tablespace on SATA, but that'll take longer.\n>\n\nOK, I've managed to get some numbers from the other machine, with 75GB\ndata set and temp tablespace on SATA RAID. I haven't collected I/O data\nusing iosnoop this time, because we already know how that changes from\nthe other machine. I've also only done this with 128MB work_mem, because\nof how long a single run takes, and with 128 blocks pre-allocation.\n\nThe patched+tlist means both pre-allocation and with the tlist tweak\nI've posted to this thread a couple minutes ago:\n\n master patched patched+tlist\n -----------------------------------------------------\n sort 485 472 462\n hash 24686 3060 559\n\nSo the pre-allocation makes it 10x faster, and the tlist tweak makes it\n5x faster. 
Not bad, I guess.\n\nNote: I've slightly tweaked read-ahead on the RAID device(s) on those\npatched runs, but the effect was pretty negligible (compared to other\npatched runs with the old read-ahead setting).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 16:45:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 10:45 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> So the pre-allocation makes it 10x faster, and the tlist tweak makes it\n> 5x faster. Not bad, I guess.\n\nThat is pretty great stuff, Tomas.\n\nFWIW, I agree that CP_SMALL_TLIST seems like the right thing here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 21 May 2020 13:05:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, 2020-05-21 at 16:30 +0200, Tomas Vondra wrote:\n> OK, it seems the attached trivial fix (simply changing CP_LABEL_TLIST\n> to\n> CP_SMALL_TLIST) addresses this for me.\n\nGreat!\n\nThere were a couple plan changes where it introduced a Subquery Scan.\nI'm not sure that I understand why it's doing that, can you verify that\nit is a reasonable thing to do?\n\nAside from that, feel free to commit it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 21 May 2020 11:19:01 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 11:19:01AM -0700, Jeff Davis wrote:\n>On Thu, 2020-05-21 at 16:30 +0200, Tomas Vondra wrote:\n>> OK, it seems the attached 
trivial fix (simply changing CP_LABEL_TLIST\n>> to\n>> CP_SMALL_TLIST) addresses this for me.\n>\n>Great!\n>\n>There were a couple plan changes where it introduced a Subquery Scan.\n>I'm not sure that I understand why it's doing that, can you verify that\n>it is a reasonable thing to do?\n>\n>Aside from that, feel free to commit it.\n>\n\nIt's doing that because we're doing projection everywhere, even in cases\nwhen it may not be necessary - but I think that's actually OK.\n\nAt first I thought we might only do it conditionally when we expect to\nspill to disk, but that'd not work for cases when we only realize we\nneed to spill to disk during execution.\n\nSo I think the plan changes are correct and expected.\n\nI think we should do the pre-allocation patch too. I haven't tried yet\nbut I believe the tlist fix alone won't do nearly as good.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 20:34:05 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 08:34:05PM +0200, Tomas Vondra wrote:\n>On Thu, May 21, 2020 at 11:19:01AM -0700, Jeff Davis wrote:\n>\n> ...\n>\n>I think we should do the pre-allocation patch too. 
I haven't tried yet\n>but I believe the tlist fix alone won't do nearly as good.\n>\n\nI've done some measurements on the smaller (SSD) machine, and the\ncomparison looks like this:\n\n sort hash hash+prealloc+tlist hash+tlist\n --------------------------------------------------------\n 4MB 331 478 188 330\n 128MB 222 434 210 350\n\n\nThe last column is master with the tlist tweak alone - it's better than\nhashagg on master alone, but it's not nearly as good as with both tlist\nand prealloc patches.\n\nI can't test this on the larger box with SATA temporary tablespace at\nthe moment (other tests are running), but I believe the difference will\nbe even more pronounced there.\n\nI don't think we're under a lot of pressure - beta1 is out anyway, so we\nhave time to do proper testing first.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 20:54:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, 2020-05-21 at 20:54 +0200, Tomas Vondra wrote:\n> The last column is master with the tlist tweak alone - it's better\n> than\n> hashagg on master alone, but it's not nearly as good as with both\n> tlist\n> and prealloc patches.\n\nRight, I certainly think we should do the prealloc change, as well.\n\nI'm tweaking the patch to be a bit more flexible. I'm thinking we\nshould start the preallocation list size ~8 and then double it up to\n~128 (depending on your results). 
That would reduce the waste in case\nwe have a large number of small partitions.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 21 May 2020 12:04:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 19, 2020 at 09:15:40PM -0700, Jeff Davis wrote:\n>On Tue, 2020-05-19 at 19:53 +0200, Tomas Vondra wrote:\n>>\n>> And if there a way to pre-allocate larger chunks? Presumably we could\n>> assign the blocks to tape in larger chunks (e.g. 128kB, i.e. 16 x\n>> 8kB)\n>> instead of just single block. I haven't seen anything like that in\n>> tape.c, though ...\n>\n>It turned out to be simple (at least a POC) so I threw together a\n>patch. I just added a 32-element array of block numbers to each tape.\n>When we need a new block, we retrieve a block number from that array;\n>or if it's empty, we fill it by calling ltsGetFreeBlock() 32 times.\n>\n\nI think the PoC patch goes in the right direction. I have two ideas how\nto improve it a bit:\n\n1) Instead of assigning the pages one by one, we can easily extend the\nAPI to allow getting a range of blocks, so that we don't need to call\nltsGetFreeBlock in a loop. Instead we could call ltsGetFreeBlockRange\nwith the requested number of blocks. And we could keep just a min/max\nof free blocks, not an array with fixed number of elements.\n\n2) We could make it self-tuning, by increasing the number of blocks\nwe pre-allocate. So every time we exhaust the range, we double the\nnumber of blocks (with a reasonable maximum, like 1024 or so). Or we\nmight just increment it by 32, or something.\n\nIIUC the danger of pre-allocating blocks is that we might not fill them,\nresulting in temp file much larger than necessary. It might be harmless\non some (most?) 
current filesystems that don't actually allocate space\nfor blocks that are never written, but it also confuses our accounting\nof temporary file sizes. So we should try to limit that, and growing the\nnumber of pre-allocated blocks over time seems reasonable.\n\nBoth (1) and (2) seem fairly simple, not much more complex than the\ncurrent PoC patch.\n\nI also wonder if we could collect / report useful statistics about I/O\non the temporary file, not just the size. I mean, how many pages we've\nwritten/read, how sequential it was, etc. But some of that is probably\nonly visible at the OS level (e.g. we have no insight into how the\nkernel combines writes in page cache, etc.). This is clearly a matter for\nv14, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 21:13:18 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 12:04:19PM -0700, Jeff Davis wrote:\n>On Thu, 2020-05-21 at 20:54 +0200, Tomas Vondra wrote:\n>> The last column is master with the tlist tweak alone - it's better\n>> than\n>> hashagg on master alone, but it's not nearly as good as with both\n>> tlist\n>> and prealloc patches.\n>\n>Right, I certainly think we should do the prealloc change, as well.\n>\n>I'm tweaking the patch to be a bit more flexible. I'm thinking we\n>should start the preallocation list size ~8 and then double it up to\n>~128 (depending on your results). That would reduce the waste in case\n>we have a large number of small partitions.\n>\n\nYou're reading my mind ;-)\n\nI don't think 128 is necessarily the maximum we should use - it's just\nthat I haven't tested higher values. I wouldn't be surprised if higher\nvalues made it a bit faster. 
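Just to pin down the policy we're both describing, here's a minimal sketch of the doubling scheme (the names are made up for illustration - this is not the actual tape.c code):

```c
/* Sketch of the refill policy for a tape's preallocation list: start
 * small, double on every refill, and cap the request size.  The cap
 * (128 here) is the knob under discussion and could well be higher. */
#define PREALLOC_MIN 8
#define PREALLOC_MAX 128

/* How many blocks to request from the free-block pool the next time
 * this tape's preallocated-block array runs dry. */
int
next_prealloc_request(int last_request)
{
    if (last_request == 0)
        return PREALLOC_MIN;        /* first refill for this tape */
    if (last_request * 2 > PREALLOC_MAX)
        return PREALLOC_MAX;
    return last_request * 2;
}
```

With those constants a tape that spills only a handful of blocks never claims more than 8 at a time, while a busy tape converges to 128-block requests after a few refills - so the cap is the obvious place to experiment.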
But we can test and tune that, I agree with\ngrowing the number of pre-allocated blocks over time.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 21:17:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, 2020-05-21 at 21:13 +0200, Tomas Vondra wrote:\n> 1) Instead of assigning the pages one by one, we can easily extend\n> the\n> API to allow getting a range of blocks, so that we don't need to call\n> ltsGetFreeBlock in a loop. Instead we could call ltsGetFreeBlockRange\n> with the requested number of blocks.\n\nltsGetFreeBlock() just draws one element from a minheap. Is there some\nmore efficient way to get many elements from a minheap at once?\n\n> And we could keep just a min/max\n> of free blocks, not an array with fixed number of elements.\n\nI don't quite know what you mean. 
Can you elaborate?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 21 May 2020 12:40:23 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 12:40:23PM -0700, Jeff Davis wrote:\n>On Thu, 2020-05-21 at 21:13 +0200, Tomas Vondra wrote:\n>> 1) Instead of assigning the pages one by one, we can easily extend\n>> the\n>> API to allow getting a range of blocks, so that we don't need to call\n>> ltsGetFreeBlock in a loop. Instead we could call ltsGetFreeBlockRange\n>> with the requested number of blocks.\n>\n>ltsGetFreeBlock() just draws one element from a minheap. Is there some\n>more efficient way to get many elements from a minheap at once?\n>\n>> And we could keep just a min/max\n>> of free blocks, not an array with fixed number of elements.\n>\n>I don't quite know what you mean. Can you elaborate?\n>\n\nAh, I forgot there's an internal minheap thing - I thought we're just\nincrementing some internal counter or something like that, but with the\nminheap we can't just get a range of blocks. So just disregard that,\nyou're right we need the array.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 23:02:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, 2020-05-21 at 21:13 +0200, Tomas Vondra wrote:\n> 2) We could make it self-tuning, by increasing the number of blocks\n> we pre-allocate. So every time we exhaust the range, we double the\n> number of blocks (with a reasonable maximum, like 1024 or so). Or we\n> might just increment it by 32, or something.\n\nAttached a new version that uses the doubling behavior, and cleans it\nup a bit. It also returns the unused prealloc blocks back to\nlts->freeBlocks when the tape is rewound for reading.\n\n> IIUC the danger of pre-allocating blocks is that we might not fill\n> them,\n> resulting in temp file much larger than necessary. It might be\n> harmless\n> on some (most?) current filesystems that don't actually allocate\n> space\n> for blocks that are never written, but it also confuses our\n> accounting\n> of temporary file sizes. So we should try to limit that, and growing\n> the\n> number of pre-allocated blocks over time seems reasonable.\n\nThere's another danger here: it doesn't matter how well the filesystem\ndeals with sparse writes, because ltsWriteBlock fills in the holes with\nzeros anyway. 
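To spell out the arithmetic of that hazard, here's a toy model of the zero-filling rule (an illustration only, not the real logtape.c implementation):

```c
/* Toy model: writing block `blkno` to a file that currently holds
 * *nblocks blocks forces a physical write of every hole in between,
 * because the file is extended with zero blocks up to the target.
 * Returns the number of blocks physically written and updates
 * *nblocks. */
long
blocks_physically_written(long blkno, long *nblocks)
{
    long writes = 1;                   /* the payload block itself */

    if (blkno > *nblocks)
        writes += blkno - *nblocks;    /* zero-filled holes */
    if (blkno >= *nblocks)
        *nblocks = blkno + 1;
    return writes;
}
```

So preallocating a 128-block stretch at the end of the file and then filling only its last block still costs 128 physical block writes.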
That's potentially a significant amount of wasted IO\neffort if we aren't careful.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 21 May 2020 14:16:37 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 02:16:37PM -0700, Jeff Davis wrote:\n>On Thu, 2020-05-21 at 21:13 +0200, Tomas Vondra wrote:\n>> 2) We could make it self-tuning, by increasing the number of blocks\n>> we pre-allocate. So every time we exhaust the range, we double the\n>> number of blocks (with a reasonable maximum, like 1024 or so). Or we\n>> might just increment it by 32, or something.\n>\n>Attached a new version that uses the doubling behavior, and cleans it\n>up a bit. It also returns the unused prealloc blocks back to lts-\n>freeBlocks when the tape is rewound for reading.\n>\n\nAh, the returning is a nice idea, that should limit the overhead quite a\nbit, I think.\n\n>> IIUC the danger of pre-allocating blocks is that we might not fill\n>> them,\n>> resulting in temp file much larger than necessary. It might be\n>> harmless\n>> on some (most?) current filesystems that don't actually allocate\n>> space\n>> for blocks that are never written, but it also confuses our\n>> accounting\n>> of temporary file sizes. So we should try to limit that, and growing\n>> the\n>> number of pre-allocated blocks over time seems reasonable.\n>\n>There's another danger here: it doesn't matter how well the filesystem\n>deals with sparse writes, because ltsWriteBlock fills in the holes with\n>zeros anyway. That's potentially a significant amount of wasted IO\n>effort if we aren't careful.\n>\n\nTrue. I'll give it a try on both machines and report some numbers. 
Might\ntake a couple of days.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 21 May 2020 23:41:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 21, 2020 at 11:41:22PM +0200, Tomas Vondra wrote:\n>On Thu, May 21, 2020 at 02:16:37PM -0700, Jeff Davis wrote:\n>>On Thu, 2020-05-21 at 21:13 +0200, Tomas Vondra wrote:\n>>>2) We could make it self-tuning, by increasing the number of blocks\n>>>we pre-allocate. So every time we exhaust the range, we double the\n>>>number of blocks (with a reasonable maximum, like 1024 or so). Or we\n>>>might just increment it by 32, or something.\n>>\n>>Attached a new version that uses the doubling behavior, and cleans it\n>>up a bit. It also returns the unused prealloc blocks back to lts-\n>>freeBlocks when the tape is rewound for reading.\n>>\n>\n>Ah, the returning is a nice idea, that should limit the overhead quite a\n>bit, I think.\n>\n>>>IIUC the danger of pre-allocating blocks is that we might not fill\n>>>them,\n>>>resulting in temp file much larger than necessary. It might be\n>>>harmless\n>>>on some (most?) current filesystems that don't actually allocate\n>>>space\n>>>for blocks that are never written, but it also confuses our\n>>>accounting\n>>>of temporary file sizes. So we should try to limit that, and growing\n>>>the\n>>>number of pre-allocated blocks over time seems reasonable.\n>>\n>>There's another danger here: it doesn't matter how well the filesystem\n>>deals with sparse writes, because ltsWriteBlock fills in the holes with\n>>zeros anyway. That's potentially a significant amount of wasted IO\n>>effort if we aren't careful.\n>>\n>\n>True. I'll give it a try on both machines and report some numbers. 
Might\n>take a couple of days.\n>\n\nOK, so I do have some numbers to share. I think there's a clear\nconclusion that the two patches are a huge improvement, but there's also\nsomething fishy about planning of parallel queries.\n\nFirstly, I have two machines that I used for testing:\n\n1) small one: i5-2500k (4 cores), 8GB RAM, SSD RAID for data, SSD for\ntemporary tablespace, using TPC-H 32GB data set\n\n2) big one: 2x xeon e5-2620v3 (8 cores), 64GB RAM, NVME SSD for data,\ntemporary tablespace on SATA RAID0 (3 x 7.2k), using TPC-H 75GB\n\n\nserial queries (no parallelism)\n===============================\n\nResults with parallel query disabled on the two machines look like this:\n\n1) small one (SSD)\n\n algorithm master prealloc tlist prealloc-tlist\n --------------------------------------------------\n hash 1365 437 368 213\n sort 226 214 224 215\n\nThe sort row simply means \"enable_hashagg = off\" and AFAIK the patches\nshould not have a lot of influence here - the prealloc does, but it's\nfairly negligible.\n\nIt's not always exactly on par, I've seen cases where hash or sort were\na bit faster (probably depending on work_mem), but I think we can ignore\nthat for now.\n\n\n2) big one (SATA)\n\n algorithm master tlist prealloc prealloc+tlist\n --------------------------------------------------\n hash 25534 5120 2402 540\n sort 460 460 465 485\n\nThe effect is even more pronounced, thanks to poor handling of random\nI/O by the SATA RAID device. 
It's not exactly on par with sort, but it's\nclose enough ...\n\n\nparallel queries\n================\n\nAnd now the fun begins ...\n\n\n1) small one (SSD, max_parallel_workers_per_gather = 2)\n\n algorithm master tlist prealloc prealloc+tlist\n --------------------------------------------------\n hash 693 390 177 128\n sort 103 99 101 99\n\nThis looks pretty nice - the patches have the expected effect, it got\nfaster than with just a single CPU etc.\n\n\n2) big one (SATA, max_parallel_workers_per_gather = 16)\n\n algorithm master tlist prealloc prealloc+tlist\n --------------------------------------------------\n hash ? 25000 ? 3132\n sort 248 234 216 200\n\nWell, not that nice :-( The hash queries take so much time that I've\ndecided not to wait for them and the two numbers are actually just\nestimates (after processing just a couple of logical tapes).\n\nPlus it actually gets slower than with serial execution, so what's the\nproblem here? Especially considering it worked OK on the small machine?\n\nAt first I thought it's something about SSD vs. SATA, but it seems to be\nmore about how we construct the plans, because the plans between the two\nmachines are very different. 
And it seems to depend on the number of\nworkers per gather - for a low number of workers the plan looks like this\n(the plans are attached in plans.txt in case the formatting gets broken\nby your client):\n\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------\n Limit\n -> Aggregate\n -> Hash Join\n Hash Cond: (part.p_partkey = lineitem_1.l_partkey)\n Join Filter: (lineitem.l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n -> Gather\n Workers Planned: 2\n -> Nested Loop\n -> Parallel Seq Scan on part\n Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n -> Index Scan using idx_lineitem_part_supp on lineitem\n Index Cond: (l_partkey = part.p_partkey)\n -> Hash\n -> Finalize HashAggregate\n Group Key: lineitem_1.l_partkey\n -> Gather\n Workers Planned: 2\n -> Partial HashAggregate\n Group Key: lineitem_1.l_partkey\n -> Parallel Seq Scan on lineitem lineitem_1\n (20 rows)\n\nbut then if I crank the number of workers up, it switches to this:\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------\n Limit\n -> Finalize Aggregate\n -> Gather\n Workers Planned: 5\n -> Partial Aggregate\n -> Nested Loop\n Join Filter: (part.p_partkey = lineitem.l_partkey)\n -> Hash Join\n Hash Cond: (part.p_partkey = lineitem_1.l_partkey)\n -> Parallel Seq Scan on part\n Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n -> Hash\n -> HashAggregate\n Group Key: lineitem_1.l_partkey\n -> Seq Scan on lineitem lineitem_1\n -> Index Scan using idx_lineitem_part_supp on lineitem\n Index Cond: (l_partkey = lineitem_1.l_partkey)\n Filter: (l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n (18 rows)\n\n\nNotice that in the first plan, the hashagg is on top of a parallel-aware\npath - so each worker builds hashagg only on a subset of data, and also\nspills only a fraction of the input 
rows (so that all workers combined\nspill roughly the \"whole\" table).\n\nIn the second plan, the hashagg is on the non-partitioned side of the\njoin, so each worker builds a hash aggregate on the *whole* set of\ninput rows. Which means that (a) we need much more disk space for temp\nfiles, making it unlikely to fit into page cache and (b) there's a lot\nof contention for I/O, making it much more random.\n\nNow, I haven't seen the second plan with sort-based aggregation, no\nmatter how I set the number of workers it always looks like this:\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------\n Limit\n -> Aggregate\n -> Merge Join\n Merge Cond: (lineitem_1.l_partkey = part.p_partkey)\n Join Filter: (lineitem.l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n -> Finalize GroupAggregate\n Group Key: lineitem_1.l_partkey\n -> Gather Merge\n Workers Planned: 8\n -> Partial GroupAggregate\n Group Key: lineitem_1.l_partkey\n -> Sort\n Sort Key: lineitem_1.l_partkey\n -> Parallel Seq Scan on lineitem lineitem_1\n -> Materialize\n -> Gather Merge\n Workers Planned: 6\n -> Nested Loop\n -> Parallel Index Scan using part_pkey on part\n Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n -> Index Scan using idx_lineitem_part_supp on lineitem\n Index Cond: (l_partkey = part.p_partkey)\n (22 rows)\n\nHow come we don't have the same issue here? 
Is there something in the\noptimizer that prevents us from creating the \"silly\" plans with\ngroupagg, and we should do the same thing for hashagg?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 May 2020 04:10:45 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Mon, May 25, 2020 at 04:10:45AM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>parallel queries\n>================\n>\n>And now the fun begins ...\n>\n>\n>1) small one (SSD, max_parallel_workers_per_gather = 2)\n>\n> algorithm master tlist prealloc prealloc+tlist\n> --------------------------------------------------\n> hash 693 390 177 128\n> sort 103 99 101 99\n>\n>This looks pretty nice - the patches have the expected effect, it got\n>faster than with just a single CPU etc.\n>\n>\n>2) big one (SATA, max_parallel_workers_per_gather = 16)\n>\n> algorithm master tlist prealloc prealloc+tlist\n> --------------------------------------------------\n> hash ? 25000 ? 3132\n> sort 248 234 216 200\n>\n>Well, not that nice :-( The hash queries take so much time that I've\n>decided not to wait for them and the two numbers are actually just\n>estimates (after processing just a couple of logical tapes).\n>\n>Plus it actually gets slower than with serial execution, so what's the\n>problem here? Especially considering it worked OK on the small machine?\n>\n>At first I thought it's something about SSD vs. SATA, but it seems to be\n>more about how we construct the plans, because the plans between the two\n>machines are very different. 
And it seems to be depend by the number of\n>workers per gather - for low number of workers the plan looks like this\n>(the plans are attached in plans.txt in case the formatting gets broken\n>by your client):\n>\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------\n> Limit\n> -> Aggregate\n> -> Hash Join\n> Hash Cond: (part.p_partkey = lineitem_1.l_partkey)\n> Join Filter: (lineitem.l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n> -> Gather\n> Workers Planned: 2\n> -> Nested Loop\n> -> Parallel Seq Scan on part\n> Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n> -> Index Scan using idx_lineitem_part_supp on lineitem\n> Index Cond: (l_partkey = part.p_partkey)\n> -> Hash\n> -> Finalize HashAggregate\n> Group Key: lineitem_1.l_partkey\n> -> Gather\n> Workers Planned: 2\n> -> Partial HashAggregate\n> Group Key: lineitem_1.l_partkey\n> -> Parallel Seq Scan on lineitem lineitem_1\n> (20 rows)\n>\n>but then if I crank the number of workers up, it switches to this:\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Limit\n> -> Finalize Aggregate\n> -> Gather\n> Workers Planned: 5\n> -> Partial Aggregate\n> -> Nested Loop\n> Join Filter: (part.p_partkey = lineitem.l_partkey)\n> -> Hash Join\n> Hash Cond: (part.p_partkey = lineitem_1.l_partkey)\n> -> Parallel Seq Scan on part\n> Filter: ((p_brand = 'Brand#22'::bpchar) AND (p_container = 'LG BOX'::bpchar))\n> -> Hash\n> -> HashAggregate\n> Group Key: lineitem_1.l_partkey\n> -> Seq Scan on lineitem lineitem_1\n> -> Index Scan using idx_lineitem_part_supp on lineitem\n> Index Cond: (l_partkey = lineitem_1.l_partkey)\n> Filter: (l_quantity < ((0.2 * avg(lineitem_1.l_quantity))))\n> (18 rows)\n>\n>\n>Notice that in the first plan, the hashagg is on top of parallel-aware\n>path - so each workers builds hashagg only on a subset 
of data, and also\n>spills only a fraction of the input rows (so that all workers combined\n>spill rouhly the \"whole\" table).\n>\n\nOK, I've done an experiment and re-ran the test with\n\n max_parallel_workers_per_gather = 5\n\nwhich is the highest value still giving the \"good\" plan, and the results\nlook like this:\n\n master tlist prealloc prealloc+tlist\n ----------------------------------------------------\n hash 10535 1044 1723 407\n sort 198 196 192 219\n\nwhich is obviously *way* better than the numbers with more workers:\n\n> algorithm master tlist prealloc prealloc+tlist\n> --------------------------------------------------\n> hash ? 25000 ? 3132\n> sort 248 234 216 200\n\nIt's still ~2x slower than the sort, so presumably we'll need to tweak\nthe costing somehow. I do believe this is still due to differences in I/O\npatterns, with parallel hashagg probably being a bit more random (I'm\ndeducing that from SSD not being affected by this).\n\nI'd imagine this is because given the same work_mem value, sort tends to\ncreate \"sorted chunks\" that are then merged into larger runs, making it\nmore sequential. OTOH hashagg likely makes it more random with smaller\nwork_mem values - more batches making it more interleaved / random.\n\n\nThis does not explain why we end up with the \"bad\" plans, though.\n\nAttached are two files showing how the plan changes for different number\nof workers per gather, both for groupagg and hashagg. 
For groupagg the\nplan shape does not change at all, for hashagg it starts as \"good\" and\nthen between 5 and 6 switches to the \"bad\" one.\n\nThere's another interesting thing I just noticed - as we increase the\nnumber of workers, the cost estimate actually starts to grow at some\npoint:\n\n workers | plan cost\n 0 | 23594267\n\t 1 | 20155545\n\t 2 | 19785306\n\t 5 | 22718176 <-\n\t 6 | 23063639\n\t10 | 22990594\n\t12 | 22972363\n\nAFAIK this happens because we pick the number of workers simply based on\nthe size of the input relation, which ignores the cost due to sending data\nfrom workers to leaders (parallel_tuple_cost). Which in this case is\nquite significant, because each worker produces a large number of groups.\nI don't think this is causing the issue, though, because the sort plans\nbehave the same way. (I wonder if we could/should consider a different\nnumber of workers, somehow.)\n\nWe probably can't see these plans on 12 simply because hashagg would\nneed more memory than work_mem (especially in parallel mode), so we\nsimply reject them.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 May 2020 14:17:22 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Mon, 2020-05-25 at 04:10 +0200, Tomas Vondra wrote:\n> algorithm master prealloc tlist prealloc-tlist\n> --------------------------------------------------\n> hash 1365 437 368 213\n> sort 226 214 224 215\n> \n> The sort row simply means \"enable_hashagg = off\" and AFAIK the\n> patches\n> should not have a lot of influence here - the prealloc does, but it's\n> fairly negligible.\n\nI also saw a small speedup from the prealloc patch for Sort. 
I wrote it\noff initially, but I'm wondering if there's something going on there.\nPerhaps drawing K elements from the minheap at once is better for\ncaching? If so, that's good news, because it means the prealloc list is\na win-win.\n\n> -> Finalize HashAggregate\n> Group Key: lineitem_1.l_partkey\n> -> Gather\n> Workers Planned: 2\n> -> Partial HashAggregate\n> Group Key:\n> lineitem_1.l_partkey\n> -> Parallel Seq Scan on\n> lineitem lineitem_1\n> (20 rows)\n\nAlthough each worker here only gets half the tuples, it will get\n(approximately) all of the *groups*. This may partly explain why the\nplanner moves away from this plan when there are more workers: the\nnumber of hashagg batches doesn't go down much with more workers.\n\nIt also might be interesting to know the estimate for the number of\ngroups relative to the size of the table. If those two are close, it\nmight look to the planner like processing the whole input in each\nworker isn't much worse than processing all of the groups in each\nworker.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 25 May 2020 11:36:42 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Mon, 2020-05-25 at 14:17 +0200, Tomas Vondra wrote:\n> It's still ~2x slower than the sort, so presumably we'll need to\n> tweak\n> the costing somehow.\n\nOne thing to think about is that the default random_page_cost is only\n4X seq_page_cost. We know that's complete fiction, but it's meant to\npaper over the OS caching effects. It seems like that shortcut may be\nwhat's hurting us now.\n\nHashAgg counts 1/2 of the page accesses as random, whereas Sort only\ncounts 1/4 as random. If the random_page_cost were closer to reality,\nHashAgg would already be penalized substantially. 
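To put rough numbers on that, the blended per-page cost is just a weighted average (illustrative arithmetic only, not the actual costing code):

```c
/* Blended per-page disk cost for a node that expects the given
 * fraction of its page accesses to be random.  Per the discussion
 * above, HashAgg charges 1/2 of pages as random and Sort charges
 * 1/4 - this helper just makes the weighting explicit. */
double
blended_page_cost(double random_fraction,
                  double random_page_cost,
                  double seq_page_cost)
{
    return random_fraction * random_page_cost +
           (1.0 - random_fraction) * seq_page_cost;
}
```

With the default random_page_cost = 4 and seq_page_cost = 1 that works out to 2.5 (HashAgg) vs. 1.75 (Sort) per page - only about a 1.4x gap - while at random_page_cost = 40 it becomes 20.5 vs. 10.75, nearly 2x.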
It might be\ninteresting to test with higher values of random_page_cost and see what\nthe planner does.\n\nIf we want to be a bit more conservative, I'm fine with adding a\ngeneral penalty against a HashAgg that we expect to spill (multiply the\ndisk costs by some factor). We can consider removing the penalty in\nv14.\n\n> I do belive this is still due to differences in I/O\n> patterns, with parallel hashagg probably being a bit more random (I'm\n> deducing that from SSD not being affected by this).\n\nDo you think the difference in IO patterns is due to a difference in\nhandling reads vs. writes in the kernel? Or do you think that 128\nblocks is not enough to amortize the cost of a seek for that device?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 25 May 2020 12:49:45 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Mon, May 25, 2020 at 11:36:42AM -0700, Jeff Davis wrote:\n>On Mon, 2020-05-25 at 04:10 +0200, Tomas Vondra wrote:\n>> algorithm master prealloc tlist prealloc-tlist\n>> --------------------------------------------------\n>> hash 1365 437 368 213\n>> sort 226 214 224 215\n>>\n>> The sort row simply means \"enable_hashagg = off\" and AFAIK the\n>> patches\n>> should not have a lot of influence here - the prealloc does, but it's\n>> fairly negligible.\n>\n>I also say a small speedup from the prealloc patch for Sort. I wrote if\n>off initially, but I'm wondering if there's something going on there.\n>Perhaps drawing K elements from the minheap at once is better for\n>caching? 
If so, that's good news, because it means the prealloc list is\n>a win-win.\n>\n\nTrue.\n\n\n>> -> Finalize HashAggregate\n>> Group Key: lineitem_1.l_partkey\n>> -> Gather\n>> Workers Planned: 2\n>> -> Partial HashAggregate\n>> Group Key:\n>> lineitem_1.l_partkey\n>> -> Parallel Seq Scan on\n>> lineitem lineitem_1\n>> (20 rows)\n>\n>Although each worker here only gets half the tuples, it will get\n>(approximately) all of the *groups*. This may partly explain why the\n>planner moves away from this plan when there are more workers: the\n>number of hashagg batches doesn't go down much with more workers.\n>\n>It also might be interesting to know the estimate for the number of\n>groups relative to the size of the table. If those two are close, it\n>might look to the planner like processing the whole input in each\n>worker isn't much worse than processing all of the groups in each\n>worker.\n>\n\nHmmm, yeah. The number of groups per worker is another moving part. But\nisn't the number of groups per worker pretty much the same (and equal to\nthe total number of groups) in all plans? I mean all the plans (both\nhash and sort based) have this:\n\n -> Finalize HashAggregate (cost=18313351.98..18654393.43 rows=14949762 width=36)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 64\n -> Gather (cost=14993231.96..17967638.74 rows=14949762 width=36)\n Workers Planned: 1\n -> Partial HashAggregate (cost=14992231.96..16471662.54 rows=14949762 width=36)\n Group Key: lineitem_1.l_partkey\n Planned Partitions: 64\n -> Parallel Seq Scan on lineitem lineitem_1 (cost=0.00..11083534.91 rows=264715991 width=9)\n\nI think it's rather that we actually expect this number of rows from\neach worker, so the total cost is\n\n parallel_tuple_cost * num_of_workers * num_of_groups\n\nBut we just ignore this inherent cost when picking the number of\nworkers, don't we? 
Because we don't know how many rows will be produced\nand passed to the Gather in the end.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 00:04:07 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Mon, May 25, 2020 at 12:49:45PM -0700, Jeff Davis wrote:\n>On Mon, 2020-05-25 at 14:17 +0200, Tomas Vondra wrote:\n>> It's still ~2x slower than the sort, so presumably we'll need to\n>> tweak\n>> the costing somehow.\n>\n>One thing to think about is that the default random_page_cost is only\n>4X seq_page_cost. We know that's complete fiction, but it's meant to\n>paper over the OS caching effects. It seems like that shortcut may be\n>what's hurting us now.\n>\n>HashAgg counts 1/2 of the page accesses as random, whereas Sort only\n>counts 1/4 as random. If the random_page_cost were closer to reality,\n>HashAgg would already be penalized substantially. It might be\n>interesting to test with higher values of random_page_cost and see what\n>the planner does.\n>\n\nHmmm, good point. I've tried bumping the random_page_cost up (to 40) and\nthe bogus plans disappeared - instead, the parallel plans switch from\nhashagg to groupagg at 4 workers. There's a lot of parameters affecting\nthis, though (e.g. higher work_mem -> more workers use hashagg).\n\nOne thing annoys me, though. The costing just ignores costs set on the\n(temporary) tablespace, which confused me at first because\n\n ALTER TABLESPACE temp SET (random_page_cost = 40);\n\nhad absolutely no effect. 
It's not the fault of this patch and it's\nactually understandable (we can have multiple temp tablespaces, and we \ndon't know which one will end up being used).\n\nI wonder if it would be a good idea to have \"temp\" version of those cost\nvariables, applied to all temporary tablespaces ... Currently I have to\ndo the inverse thing - tweak this for all regular tablespaces.\n\n>If we want to be a bit more conservative, I'm fine with adding a\n>general penalty against a HashAgg that we expect to spill (multiply the\n>disk costs by some factor). We can consider removing the penalty in\n>v14.\n>\n\nNot sure, I need to look at the costing a bit.\n\nI see all page writes are random while all page reads are sequential.\nShouldn't this consider the number of tapes and that we pre-allocate\nblocks? Essentially, we split the whole file into chunks of 128 blocks\nthat can be read sequentially, and switches between blocks are random.\n\nNot sure if that's necessary, though. If increasing random_page_cost\npushes the plans to groupagg, then maybe that's good enough.\n\nLet's push the two fixes that we already have. These extra questions\nclearly need more investigation and testing, and I'm not even sure it's\nsomething we should pursue for v13.\n\n>> I do believe this is still due to differences in I/O\n>> patterns, with parallel hashagg probably being a bit more random (I'm\n>> deducing that from SSD not being affected by this).\n>\n>Do you think the difference in IO patterns is due to a difference in\n>handling reads vs. writes in the kernel? Or do you think that 128\n>blocks is not enough to amortize the cost of a seek for that device?\n>\n\nI don't know. 
I kinda imagined it was due to the workers interfering\nwith each other, but that should affect the sort the same way, right?\nI don't have any data to support this, at the moment - I can repeat\nthe iosnoop tests and analyze the data, of course.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 00:59:09 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 26, 2020 at 10:59 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Mon, May 25, 2020 at 12:49:45PM -0700, Jeff Davis wrote:\n> >Do you think the difference in IO patterns is due to a difference in\n> >handling reads vs. writes in the kernel? Or do you think that 128\n> >blocks is not enough to amortize the cost of a seek for that device?\n>\n> I don't know. I kinda imagined it was due to the workers interfering\n> with each other, but that should affect the sort the same way, right?\n> I don't have any data to support this, at the moment - I can repeat\n> the iosnoop tests and analyze the data, of course.\n\nAbout the reads vs writes question: I know that reading and writing\ntwo interleaved sequential \"streams\" through the same fd confuses the\nread-ahead/write-behind heuristics on FreeBSD UFS (I mean: w(1),\nr(42), w(2), r(43), w(3), r(44), ...) so the performance is terrible\non spinning media. Andrew Gierth reported that as a problem for\nsequential scans that are also writing back hint bits, and vacuum.\nHowever, in a quick test on a Linux 4.19 XFS system, using a program\nto generate interleaving read and write streams 1MB apart, I could see\nthat it was still happily generating larger clustered I/Os. I have no\nclue for other operating systems. 
That said, even on Linux, reads and\nwrites still have to compete for scant IOPS on slow-seek media (albeit\nhopefully in larger clustered I/Os)...\n\nJumping over large interleaving chunks with no prefetching from other\ntapes *must* produce stalls though... and if you crank up the read\nahead size to be a decent percentage of the contiguous chunk size, I\nguess you must also waste I/O bandwidth on unwanted data past the end\nof each chunk, no?\n\nIn an off-list chat with Jeff about whether Hash Join should use\nlogtape.c for its partitions too, the first thought I had was that to\nbe competitive with separate files, perhaps you'd need to write out a\nlist of block ranges for each tape (rather than just next pointers on\neach block), so that you have the visibility required to control\nprefetching explicitly. I guess that would be a bit like the list of\nphysical extents that Linux commands like filefrag(8) and xfs_bmap(8)\ncan show you for regular files. (Other thoughts included worrying\nabout how to make it allocate and stream blocks in parallel queries,\n...!?#$)\n\n\n", "msg_date": "Tue, 26 May 2020 17:02:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 26, 2020 at 05:02:41PM +1200, Thomas Munro wrote:\n>On Tue, May 26, 2020 at 10:59 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Mon, May 25, 2020 at 12:49:45PM -0700, Jeff Davis wrote:\n>> >Do you think the difference in IO patterns is due to a difference in\n>> >handling reads vs. writes in the kernel? Or do you think that 128\n>> >blocks is not enough to amortize the cost of a seek for that device?\n>>\n>> I don't know. 
I kinda imagined it was due to the workers interfering\n>> with each other, but that should affect the sort the same way, right?\n>> I don't have any data to support this, at the moment - I can repeat\n>> the iosnoop tests and analyze the data, of course.\n>\n>About the reads vs writes question: I know that reading and writing\n>two interleaved sequential \"streams\" through the same fd confuses the\n>read-ahead/write-behind heuristics on FreeBSD UFS (I mean: w(1),\n>r(42), w(2), r(43), w(3), r(44), ...) so the performance is terrible\n>on spinning media. Andrew Gierth reported that as a problem for\n>sequential scans that are also writing back hint bits, and vacuum.\n>However, in a quick test on a Linux 4.19 XFS system, using a program\n>to generate interleaving read and write streams 1MB apart, I could see\n>that it was still happily generating larger clustered I/Os. I have no\n>clue for other operating systems. That said, even on Linux, reads and\n>writes still have to compete for scant IOPS on slow-seek media (albeit\n>hopefully in larger clustered I/Os)...\n>\n\nTrue. I've repeated the tests with collection of iosnoop stats, both for\nthe good (partitioned hashagg) and bad (complete hashagg in each worker)\nplans. Ignoring the fact that the bad plan does much more I/O in general\n(about 32GB write + 35GB reads vs. 
5.4GB + 7.6GB), request size stats\nare almost exactly the same:\n\n1) good plan (partitioned hashagg, ~180 seconds)\n\n type | bytes | count | pct\n ------+---------+-------+-------\n RA | 131072 | 39392 | 71.62\n RA | 8192 | 5666 | 10.30\n RA | 16384 | 3080 | 5.60\n RA | 32768 | 2888 | 5.25\n RA | 65536 | 2870 | 5.22\n RA | 262144 | 710 | 1.29\n W | 1310720 | 3138 | 32.01\n W | 360448 | 633 | 6.46\n W | 688128 | 538 | 5.49\n W | 692224 | 301 | 3.07\n W | 364544 | 247 | 2.52\n W | 696320 | 182 | 1.86\n W | 8192 | 164 | 1.67\n W | 700416 | 116 | 1.18\n W | 368640 | 102 | 1.04\n\n2) bad plan (complete hashagg, ~500 seconds)\n\n type | bytes | count | pct\n ------+---------+--------+--------\n RA | 131072 | 258924 | 68.54\n RA | 8192 | 31357 | 8.30\n RA | 16384 | 27596 | 7.31\n RA | 32768 | 26434 | 7.00\n RA | 65536 | 26415 | 6.99\n RM | 4096 | 532 | 100.00\n W | 1310720 | 15346 | 34.64\n W | 8192 | 911 | 2.06\n W | 360448 | 816 | 1.84\n W | 16384 | 726 | 1.64\n W | 688128 | 545 | 1.23\n W | 32768 | 544 | 1.23\n W | 40960 | 486 | 1.10\n W | 524288 | 457 | 1.03\n\nSo in both cases, majority of read requests (~70%) are 128kB, with\nadditional ~25% happening in request larger than 8kB. In terms of I/O,\nthat's more than 90% of read I/O.\n\nThere is some difference in the \"I/O delta\" stats, showing how far the\nqueued I/O requests are. The write stats look almost exactly the same,\nbut for reads it looks like this:\n\n1) good plan\n\n type | block_delta | count | pct \n ------+-------------+-------+-------\n RA | 256 | 7555 | 13.74\n RA | 64 | 2297 | 4.18\n RA | 32 | 685 | 1.25\n RA | 128 | 613 | 1.11\n RA | 16 | 612 | 1.11\n\n2) bad plans\n\n type | block_delta | count | pct\n ------+-------------+-------+-------\n RA | 64 | 18817 | 4.98\n RA | 30480 | 9778 | 2.59\n RA | 256 | 9437 | 2.50\n\nIdeally this should match the block size stats (it's in sectors, so 256\nis 128kB). 
Unfortunately this does not work all that great - even for\nthe \"good\" plan it's only about 14% vs. 70% (of 128kB blocks). In the\nserial plan (disabled parallelism) this was ~70% vs. 75%, much closer.\n\nAnyway, I think this shows that the read-ahead works pretty well even\nwith multiple workers - otherwise there wouldn't be that many 128kB\nrequests. The poor correlation with 128kB deltas is unfortunate, but I\ndon't think we can really fix that.\n\nThis was on linux (5.6.0) with ext4, but I don't think the filesystem\nmatters that much - the read-ahead happens in page cache I think.\n\n>Jumping over large interleaving chunks with no prefetching from other\n>tapes *must* produce stalls though... and if you crank up the read\n>ahead size to be a decent percentage of the contiguous chunk size, I\n>guess you must also waste I/O bandwidth on unwanted data past the end\n>of each chunk, no?\n>\n>In an off-list chat with Jeff about whether Hash Join should use\n>logtape.c for its partitions too, the first thought I had was that to\n>be competitive with separate files, perhaps you'd need to write out a\n>list of block ranges for each tape (rather than just next pointers on\n>each block), so that you have the visibility required to control\n>prefetching explicitly. I guess that would be a bit like the list of\n>physical extents that Linux commands like filefrag(8) and xfs_bmap(8)\n>can show you for regular files. (Other thoughts included worrying\n>about how to make it allocate and stream blocks in parallel queries,\n>...!?#$)\n\nI was wondering how useful would it be to do explicit prefetch too.\n\nI'm not familiar with logtape internals but IIRC the blocks are linked\nby each block having a pointer to the prev/next block, which means we\ncan't prefetch more than one block ahead I think. 
But maybe I'm wrong,\nor maybe fetching even just one block ahead would help ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 May 2020 16:15:24 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, 2020-05-26 at 16:15 +0200, Tomas Vondra wrote:\n> I'm not familiar with logtape internals but IIRC the blocks are\n> linked\n> by each block having a pointer to the prev/next block, which means we\n> can't prefetch more than one block ahead I think. But maybe I'm\n> wrong,\n> or maybe fetching even just one block ahead would help ...\n\nWe'd have to get creative. Keeping a directory in the LogicalTape\nstructure might work, but I'm worried the memory requirements would be\ntoo high.\n\nOne idea is to add a \"prefetch block\" to the TapeBlockTrailer (perhaps\nonly in the forward direction?). We could modify the prealloc list so\nthat we always know the next K blocks that will be allocated to the\ntape. All for v14, of course, but I'd be happy to hack together a\nprototype to collect data.\n\nDo you have any other thoughts on the current prealloc patch for v13,\nor is it about ready for commit?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 26 May 2020 11:40:07 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 26, 2020 at 11:40:07AM -0700, Jeff Davis wrote:\n>On Tue, 2020-05-26 at 16:15 +0200, Tomas Vondra wrote:\n>> I'm not familiar with logtape internals but IIRC the blocks are\n>> linked\n>> by each block having a pointer to the prev/next block, which means we\n>> can't prefetch more than one block ahead I think. 
But maybe I'm\n>> wrong,\n>> or maybe fetching even just one block ahead would help ...\n>\n>We'd have to get creative. Keeping a directory in the LogicalTape\n>structure might work, but I'm worried the memory requirements would be\n>too high.\n>\n>One idea is to add a \"prefetch block\" to the TapeBlockTrailer (perhaps\n>only in the forward direction?). We could modify the prealloc list so\n>that we always know the next K blocks that will be allocated to the\n>tape. All for v14, of course, but I'd be happy to hack together a\n>prototype to collect data.\n>\n\nYeah. I agree prefetching is definitely out of v13 scope. It might be\ninteresting to try how useful it would be, if you're willing to spend\nsome time on a prototype.\n\n>\n>Do you have any other thoughts on the current prealloc patch for v13,\n>or is it about ready for commit?\n>\n\nI think it's pretty much ready to go.\n\nI have some doubts about the maximum value (128 probably means\nread-ahead values above 256 are probably pointless, although I have not\ntested that). But it's still a huge improvement with 128, so let's get\nthat committed.\n\nI've been thinking about actually computing the expected number of\nblocks per tape, and tying the maximum to that, somehow. But that's\nsomething we can look at in the future.\n\nAs for the tlist fix, I think that's mostly ready too - the one thing we\nshould do is probably only doing it for AGG_HASHED. For AGG_SORTED it's\nnot really necessary.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 21:15:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n> Yeah. I agree prefetching is definitely out of v13 scope. 
It might be\n> interesting to try how useful would it be, if you're willing to spend\n> some time on a prototype.\n\nI think a POC would be pretty quick; I'll see if I can hack something\ntogether.\n\n> I think it's pretty much ready to go.\n\nCommitted with max of 128 preallocated blocks. Minor revisions.\n\n> \n> As for the tlist fix, I think that's mostly ready too - the one thing\n> we\n> should do is probably only doing it for AGG_HASHED. For AGG_SORTED\n> it's\n> not really necessary.\n\nMelanie previously posted a patch to avoid spilling unneeded columns,\nbut it introduced more code:\n\n\nhttps://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n\nand it seems that Heikki also looked at it. Perhaps we should get an\nacknowledgement from one of them that your one-line change is the right\napproach?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Tue, 26 May 2020 17:40:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 26, 2020 at 5:40 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n> >\n> > As for the tlist fix, I think that's mostly ready too - the one thing\n> > we\n> > should do is probably only doing it for AGG_HASHED. For AGG_SORTED\n> > it's\n> > not really necessary.\n>\n> Melanie previously posted a patch to avoid spilling unneeded columns,\n> but it introduced more code:\n>\n>\n>\n> https://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n>\n> and it seems that Heikki also looked at it. 
Perhaps we should get an\n> acknowledgement from one of them that your one-line change is the right\n> approach?\n>\n>\nI spent some time looking at it today, and, it turns out I was wrong.\n\nI thought that there was a case I had found where CP_SMALL_TLIST did not\neliminate as many columns as could be eliminated for the purposes of\nspilling, but, that turned out not to be the case.\n\nI changed CP_LABEL_TLIST to CP_SMALL_TLIST in\ncreate_groupingsets_plan(), create_agg_plan(), etc and tried a bunch of\ndifferent queries and this 2-3 line change worked for all the cases I\ntried. Is that where you made the change?\nAnd then are you proposing to set it based on the aggstrategy to either\nCP_LABEL_TLIST or CP_SMALL_TLIST here?\n\n-- \nMelanie Plageman\n\nOn Tue, May 26, 2020 at 5:40 PM Jeff Davis <pgsql@j-davis.com> wrote:On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n> \n> As for the tlist fix, I think that's mostly ready too - the one thing\n> we\n> should do is probably only doing it for AGG_HASHED. For AGG_SORTED\n> it's\n> not really necessary.\n\nMelanie previously posted a patch to avoid spilling unneeded columns,\nbut it introduced more code:\n\n\nhttps://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n\nand it seems that Heikki also looked at it. Perhaps we should get an\nacknowledgement from one of them that your one-line change is the right\napproach?\nI spent some time looking at it today, and, it turns out I was wrong.I thought that there was a case I had found where CP_SMALL_TLIST did noteliminate as many columns as could be eliminated for the purposes ofspilling, but, that turned out not to be the case.I changed CP_LABEL_TLIST to CP_SMALL_TLIST increate_groupingsets_plan(), create_agg_plan(), etc and tried a bunch ofdifferent queries and this 2-3 line change worked for all the cases Itried. 
Is that where you made the change?And then are you proposing to set it based on the aggstrategy to eitherCP_LABEL_TLIST or CP_SMALL_TLIST here? -- Melanie Plageman", "msg_date": "Tue, 26 May 2020 18:42:50 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, May 26, 2020 at 06:42:50PM -0700, Melanie Plageman wrote:\n>On Tue, May 26, 2020 at 5:40 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n>> On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n>> >\n>> > As for the tlist fix, I think that's mostly ready too - the one thing\n>> > we\n>> > should do is probably only doing it for AGG_HASHED. For AGG_SORTED\n>> > it's\n>> > not really necessary.\n>>\n>> Melanie previously posted a patch to avoid spilling unneeded columns,\n>> but it introduced more code:\n>>\n>>\n>>\n>> https://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n>>\n>> and it seems that Heikki also looked at it. Perhaps we should get an\n>> acknowledgement from one of them that your one-line change is the right\n>> approach?\n>>\n>>\n>I spent some time looking at it today, and, it turns out I was wrong.\n>\n>I thought that there was a case I had found where CP_SMALL_TLIST did not\n>eliminate as many columns as could be eliminated for the purposes of\n>spilling, but, that turned out not to be the case.\n>\n>I changed CP_LABEL_TLIST to CP_SMALL_TLIST in\n>create_groupingsets_plan(), create_agg_plan(), etc and tried a bunch of\n>different queries and this 2-3 line change worked for all the cases I\n>tried. Is that where you made the change?\n\nI've only made the change in create_agg_plan, because that's what was in\nthe query plan I was investigating. 
You may be right that the same fix\nis needed in additional places, though.\n\n>And then are you proposing to set it based on the aggstrategy to either\n>CP_LABEL_TLIST or CP_SMALL_TLIST here?\n>\n\nYes, something like that. The patch I shared on 5/21 just changed\nthat, but I'm wondering if that could add overhead for sorted\naggregation, which already does the projection thanks to the sort.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 27 May 2020 11:07:04 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Wed, May 27, 2020 at 11:07:04AM +0200, Tomas Vondra wrote:\n>On Tue, May 26, 2020 at 06:42:50PM -0700, Melanie Plageman wrote:\n>>On Tue, May 26, 2020 at 5:40 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>>\n>>>On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n>>>>\n>>>> As for the tlist fix, I think that's mostly ready too - the one thing\n>>>> we\n>>>> should do is probably only doing it for AGG_HASHED. For AGG_SORTED\n>>>> it's\n>>>> not really necessary.\n>>>\n>>>Melanie previously posted a patch to avoid spilling unneeded columns,\n>>>but it introduced more code:\n>>>\n>>>\n>>>\n>>>https://www.postgresql.org/message-id/CAAKRu_aefEsv+UkQWqu+ioEnoiL2LJu9Diuh9BR8MbyXuZ0j4A@mail.gmail.com\n>>>\n>>>and it seems that Heikki also looked at it. 
Perhaps we should get an\n>>>acknowledgement from one of them that your one-line change is the right\n>>>approach?\n>>>\n>>>\n>>I spent some time looking at it today, and, it turns out I was wrong.\n>>\n>>I thought that there was a case I had found where CP_SMALL_TLIST did not\n>>eliminate as many columns as could be eliminated for the purposes of\n>>spilling, but, that turned out not to be the case.\n>>\n>>I changed CP_LABEL_TLIST to CP_SMALL_TLIST in\n>>create_groupingsets_plan(), create_agg_plan(), etc and tried a bunch of\n>>different queries and this 2-3 line change worked for all the cases I\n>>tried. Is that where you made the change?\n>\n>I've only made the change in create_agg_plan, because that's what was in\n>the query plan I was investigating. You may be right that the same fix\n>is needed in additional places, though.\n>\n\nAttached is a patch adding CP_SMALL_TLIST to create_agg_plan and\ncreate_groupingsets_plan. I've looked at the other places that I think\nseem like they might benefit from it (create_upper_unique_plan,\ncreate_group_plan) but I think we don't need to modify those - there'll\neither be a Sort or HashAgg, which will take care of the projection.\n\nOr do you think some other places need CP_SMALL_TLIST?\n\n\n>>And then are you proposing to set it based on the aggstrategy to either\n>>CP_LABEL_TLIST or CP_SMALL_TLIST here?\n>>\n>\n>Yes, something like that. The patch I shared on 5/21 just changed\n>that, but I'm wondering if that could add overhead for sorted\n>aggregation, which already does the projection thanks to the sort.\n>\n\nI ended up tweaking the tlist only for AGG_MIXED and AGG_HASHED. We\nclearly don't need it for AGG_PLAIN or AGG_SORTED. 
This way we don't\nbreak regression tests by adding unnecessary \"Subquery Scan\" nodes.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 28 May 2020 20:57:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Tue, 2020-05-26 at 17:40 -0700, Jeff Davis wrote:\n> On Tue, 2020-05-26 at 21:15 +0200, Tomas Vondra wrote:\n> > Yeah. I agree prefetching is definitely out of v13 scope. It might\n> > be\n> > interesting to try how useful would it be, if you're willing to\n> > spend\n> > some time on a prototype.\n> \n> I think a POC would be pretty quick; I'll see if I can hack something\n> together.\n\nAttached (intended for v14).\n\nI changed the list from a simple array to a circular buffer so that we\ncan keep enough preallocated block numbers in it to do prefetching.\n\nOn SSD I didn't see any improvement, but perhaps it will do better on\nmagnetic storage.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 28 May 2020 17:48:11 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, 2020-05-28 at 20:57 +0200, Tomas Vondra wrote:\n> Attached is a patch adding CP_SMALL_TLIST to create_agg_plan and\n> create_groupingsets_plan.\n\nLooks good, except one question:\n\nWhy would aggstrategy ever be MIXED when in create_agg_path? 
Wouldn't\nthat only happen in create_groupingsets_path?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 28 May 2020 18:14:55 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Thu, May 28, 2020 at 06:14:55PM -0700, Jeff Davis wrote:\n>On Thu, 2020-05-28 at 20:57 +0200, Tomas Vondra wrote:\n>> Attached is a patch adding CP_SMALL_TLIST to create_agg_plan and\n>> create_groupingsets_plan.\n>\n>Looks good, except one question:\n>\n>Why would aggstrategy ever be MIXED when in create_agg_path? Wouldn't\n>that only happen in create_groupingsets_path?\n>\n\nAh, right. Yeah, we only need to check for AGG_HASH here. Moreover,\nAGG_MIXED probably does not need the tlist tweak, because the input\nshould be pre-sorted as with AGG_SORTED.\n\nAnd we should probably do similar check in the create_groupinsets_path,\nI guess. At first I thought we can't do that before inspecting rollups,\nwhich only happens later in the function, but now I see there's\naggstrategy in GroupingSetsPath too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 29 May 2020 15:04:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Fri, 2020-05-29 at 15:04 +0200, Tomas Vondra wrote:\n> Ah, right. Yeah, we only need to check for AGG_HASH here. Moreover,\n> AGG_MIXED probably does not need the tlist tweak, because the input\n> should be pre-sorted as with AGG_SORTED.\n> \n> And we should probably do similar check in the\n> create_groupinsets_path,\n> I guess. 
At first I thought we can't do that before inspecting\n> rollups,\n> which only happens later in the function, but now I see there's\n> aggstrategy in GroupingSetsPath too.\n\nLooks good.\n\n\tJeff\n\n\n\n\n", "msg_date": "Fri, 29 May 2020 13:12:40 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "Hello\n\nIs this patch the only thing missing before this open item can be\nconsidered closed?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jun 2020 17:19:43 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Fri, Jun 05, 2020 at 05:19:43PM -0400, Alvaro Herrera wrote:\n>Hello\n>\n>Is this patch the only thing missing before this open item can be\n>considered closed?\n>\n\nI've already pushed this as 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0,\nsorry for not mentioning it in this thread explicitly.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 6 Jun 2020 00:20:57 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On 2020-Jun-06, Tomas Vondra wrote:\n\n> On Fri, Jun 05, 2020 at 05:19:43PM -0400, Alvaro Herrera wrote:\n>\n> > Is this patch the only thing missing before this open item can be\n> > considered closed?\n> \n> I've already pushed this as 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0,\n> sorry for not mentioning it in this thread explicitly.\n\nThat's great to know, thanks. 
The other bit necessary to answer my\nquestion is whether we need to do anything else in this area -- if\nno, then we can mark the open item as closed:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items#Open_Issues\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jun 2020 18:51:34 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" }, { "msg_contents": "On Fri, Jun 05, 2020 at 06:51:34PM -0400, Alvaro Herrera wrote:\n>On 2020-Jun-06, Tomas Vondra wrote:\n>\n>> On Fri, Jun 05, 2020 at 05:19:43PM -0400, Alvaro Herrera wrote:\n>>\n>> > Is this patch the only thing missing before this open item can be\n>> > considered closed?\n>>\n>> I've already pushed this as 4cad2534da6d17067d98cf04be2dfc1bda8f2cd0,\n>> sorry for not mentioning it in this thread explicitly.\n>\n>That's great to know, thanks. The other bit necessary to answer my\n>question is whether we need to do anything else in this area -- if\n>no, then we can mark the open item as closed:\n>https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items#Open_Issues\n>\n\nHmmm, good question.\n\nThere was some discussion about maybe tweaking the costing model to make\nit a bit more pessimistic (assuming more random I/O or something like\nthat), but I'm not sure it's still needed. Increasing random_page_cost\nfor the temp tablespace did the trick for me.\n\nSo I'd say we can mark it as closed ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Sat, 6 Jun 2020 01:17:26 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with hashagg spill I/O pattern and costing" } ]
[ { "msg_contents": "Hello hackers,\n\nParallel sequential scan relies on the kernel detecting sequential\naccess, but we don't make the job easy. The resulting striding\npattern works terribly on strict next-block systems like FreeBSD UFS,\nand degrades rapidly when you add too many workers on sliding window\nsystems like Linux.\n\nDemonstration using FreeBSD on UFS on a virtual machine, taking ball\npark figures from iostat:\n\n create table t as select generate_series(1, 200000000)::int i;\n\n set max_parallel_workers_per_gather = 0;\n select count(*) from t;\n -> execution time 13.3s, average read size = ~128kB, ~500MB/s\n\n set max_parallel_workers_per_gather = 1;\n select count(*) from t;\n -> execution time 24.9s, average read size = ~32kB, ~250MB/s\n\nNote the small read size, which means that there was no read\nclustering happening at all: that's the logical block size of this\nfilesystem.\n\nThat explains some complaints I've heard about PostgreSQL performance\non that filesystem: parallel query destroys I/O performance.\n\nAs a quick experiment, I tried teaching the block allocator to\nallocate ranges of up to 64 blocks at a time, ramping up incrementally,\nand ramping down at the end, and I got:\n\n set max_parallel_workers_per_gather = 1;\n select count(*) from t;\n -> execution time 7.5s, average read size = ~128kB, ~920MB/s\n\n set max_parallel_workers_per_gather = 3;\n select count(*) from t;\n -> execution time 5.2s, average read size = ~128kB, ~1.2GB/s\n\nI've attached the quick and dirty patch I used for that.", "msg_date": "Wed, 20 May 2020 13:53:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, May 20, 2020 at 7:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hello hackers,\n>\n> Parallel sequential scan relies on the kernel detecting sequential\n> access, but we don't make the job easy. 
The resulting striding\npattern works terribly on strict next-block systems like FreeBSD UFS,\nand degrades rapidly when you add too many workers on sliding window\nsystems like Linux.\n\nDemonstration using FreeBSD on UFS on a virtual machine, taking ball\npark figures from iostat:\n\n create table t as select generate_series(1, 200000000)::int i;\n\n set max_parallel_workers_per_gather = 0;\n select count(*) from t;\n -> execution time 13.3s, average read size = ~128kB, ~500MB/s\n\n set max_parallel_workers_per_gather = 1;\n select count(*) from t;\n -> execution time 24.9s, average read size = ~32kB, ~250MB/s\n\nNote the small read size, which means that there was no read\nclustering happening at all: that's the logical block size of this\nfilesystem.\n\nThat explains some complaints I've heard about PostgreSQL performance\non that filesystem: parallel query destroys I/O performance.\n\nAs a quick experiment, I tried teaching the block allocator to\nallocate ranges of up to 64 blocks at a time, ramping up incrementally,\nand ramping down at the end, and I got:\n\n set max_parallel_workers_per_gather = 1;\n select count(*) from t;\n -> execution time 7.5s, average read size = ~128kB, ~920MB/s\n\n set max_parallel_workers_per_gather = 3;\n select count(*) from t;\n -> execution time 5.2s, average read size = ~128kB, ~1.2GB/s\n\nI've attached the quick and dirty patch I used for that.", "msg_date": "Wed, 20 May 2020 13:53:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, May 20, 2020 at 7:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hello hackers,\n>\n> Parallel sequential scan relies on the kernel detecting sequential\n> access, but we don't make the job easy. The resulting striding\n> pattern works terribly on strict next-block systems like FreeBSD UFS,\n> and degrades rapidly when you add too many workers on sliding window\n> systems like Linux.\n>\n> Demonstration using FreeBSD on UFS on a virtual machine, taking ball\n> park figures from iostat:\n>\n> create table t as select generate_series(1, 200000000)::int i;\n>\n> set max_parallel_workers_per_gather = 0;\n> select count(*) from t;\n> -> execution time 13.3s, average read size = ~128kB, ~500MB/s\n>\n> set max_parallel_workers_per_gather = 1;\n> select count(*) from t;\n> -> execution time 24.9s, average read size = ~32kB, ~250MB/s\n>\n> Note the small read size, which means that there was no read\n> clustering happening at all: that's the logical block size of this\n> filesystem.\n>\n> That explains some complaints I've heard about PostgreSQL performance\n> on that filesystem: parallel query destroys I/O performance.\n>\n> As a quick experiment, I tried teaching the block allocator to\n> allocate ranges of up to 64 blocks at a time, ramping up incrementally,\n> and ramping down at the end, and I got:\n>\n\nGood experiment. IIRC, we have discussed a similar idea during the\ndevelopment of this feature but we haven't seen any better results by\nallocating in ranges on the systems we have tried. So, we went with\nthe current approach which is more granular and seems to allow better\nparallelism. I feel we need to ensure that we don't regress\nparallelism in existing cases, otherwise, the idea sounds promising to\nme.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 May 2020 07:53:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, May 20, 2020 at 2:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Good experiment. IIRC, we have discussed a similar idea during the\n> development of this feature but we haven't seen any better results by\n> allocating in ranges on the systems we have tried. So, we went with\n> the current approach which is more granular and seems to allow better\n> parallelism. I feel we need to ensure that we don't regress\n> parallelism in existing cases, otherwise, the idea sounds promising to\n> me.\n\nYeah, Linux seems to do pretty well at least with smallish numbers of\nworkers, and when you use large numbers you can probably tune your way\nout of the problem. ZFS seems to do fine. I wonder how well the\nother OSes cope.\n\n\n", "msg_date": "Wed, 20 May 2020 15:08:24 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em qua., 20 de mai. de 2020 às 00:09, Thomas Munro <thomas.munro@gmail.com>\nescreveu:\n\n> On Wed, May 20, 2020 at 2:23 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > Good experiment. 
I wonder how well the\n> other OSes cope.\n>\nWindows 10 (64bits, i5, 8GB, SSD)\n\npostgres=# set max_parallel_workers_per_gather = 0;\nSET\nTime: 2,537 ms\npostgres=# select count(*) from t;\n count\n-----------\n 200000000\n(1 row)\n\n\nTime: 47767,916 ms (00:47,768)\npostgres=# set max_parallel_workers_per_gather = 1;\nSET\nTime: 4,889 ms\npostgres=# select count(*) from t;\n count\n-----------\n 200000000\n(1 row)\n\n\nTime: 32645,448 ms (00:32,645)\n\nHow display \" -> execution time 5.2s, average read size =\"?\n\nregards,\nRanier VIlela\n\nEm qua., 20 de mai. de 2020 às 00:09, Thomas Munro <thomas.munro@gmail.com> escreveu:On Wed, May 20, 2020 at 2:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Good experiment.  IIRC, we have discussed a similar idea during the\n> development of this feature but we haven't seen any better results by\n> allocating in ranges on the systems we have tried.  So, we want with\n> the current approach which is more granular and seems to allow better\n> parallelism.  I feel we need to ensure that we don't regress\n> parallelism in existing cases, otherwise, the idea sounds promising to\n> me.\n\nYeah, Linux seems to do pretty well at least with smallish numbers of\nworkers, and when you use large numbers you can probably tune your way\nout of the problem.  ZFS seems to do fine.  
I wonder how well the\n> other OSes cope.\n>\nWindows 10 (64bits, i5, 8GB, SSD)\n\npostgres=# set max_parallel_workers_per_gather = 0;\nSET\nTime: 2,537 ms\npostgres=# select count(*) from t;\n count\n-----------\n 200000000\n(1 row)\n\n\nTime: 47767,916 ms (00:47,768)\npostgres=# set max_parallel_workers_per_gather = 1;\nSET\nTime: 4,889 ms\npostgres=# select count(*) from t;\n count\n-----------\n 200000000\n(1 row)\n\n\nTime: 32645,448 ms (00:32,645)\n\nHow display \" -> execution time 5.2s, average read size =\"?\n\nregards,\nRanier VIlela\n\n", "msg_date": "Wed, 20 May 2020 08:02:42 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, May 20, 2020 at 11:03 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Time: 47767,916 ms (00:47,768)\n> Time: 32645,448 ms (00:32,645)\n\nJust to make sure kernel caching isn't helping here, maybe try making\nthe table 2x or 4x bigger? My test was on a virtual machine with only\n4GB RAM, so the table couldn't be entirely cached.\n\n> How display \" -> execution time 5.2s, average read size =\"?\n\nExecution time is what you showed, and average read size should be\ninside the Windows performance window somewhere (not sure what it's\ncalled).\n\n\n", "msg_date": "Thu, 21 May 2020 09:48:42 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em qua., 20 de mai. de 2020 às 18:49, Thomas Munro <thomas.munro@gmail.com>\nescreveu:\n\n> On Wed, May 20, 2020 at 11:03 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > Time: 47767,916 ms (00:47,768)\n> > Time: 32645,448 ms (00:32,645)\n>\n> Just to make sure kernel caching isn't helping here, maybe try making\n> the table 2x or 4x bigger? 
My test was on a virtual machine with only\n> 4GB RAM, so the table couldn't be entirely cached.\n>\n4x bigger.\nPostgres defaults settings.\n\npostgres=# create table t as select generate_series(1, 800000000)::int i;\nSELECT 800000000\npostgres=# \\timing\nTiming is on.\npostgres=# set max_parallel_workers_per_gather = 0;\nSET\nTime: 8,622 ms\npostgres=# select count(*) from t;\n count\n-----------\n 800000000\n(1 row)\n\n\nTime: 227238,445 ms (03:47,238)\npostgres=# set max_parallel_workers_per_gather = 1;\nSET\nTime: 20,975 ms\npostgres=# select count(*) from t;\n count\n-----------\n 800000000\n(1 row)\n\n\nTime: 138027,351 ms (02:18,027)\n\nregards,\nRanier Vilela\n\n", "msg_date": "Wed, 20 May 2020 20:14:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, May 21, 2020 at 11:15 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> postgres=# set max_parallel_workers_per_gather = 0;\n> Time: 227238,445 ms (03:47,238)\n> postgres=# set max_parallel_workers_per_gather = 1;\n> Time: 138027,351 ms (02:18,027)\n\nOk, so it looks like NT/NTFS isn't suffering from this problem.\nThanks for testing!\n\n\n", "msg_date": "Thu, 21 May 2020 11:47:47 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em qua., 20 de mai. de 2020 às 20:48, Thomas Munro <thomas.munro@gmail.com>\nescreveu:\n\n> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > postgres=# set max_parallel_workers_per_gather = 0;\n> > Time: 227238,445 ms (03:47,238)\n> > postgres=# set max_parallel_workers_per_gather = 1;\n> > Time: 138027,351 ms (02:18,027)\n>\n> Ok, so it looks like NT/NTFS isn't suffering from this problem.\n> Thanks for testing!\n>\nMaybe it wasn’t clear, the tests were done with your patch applied.\n\nregards,\nRanier Vilela\n\n", "msg_date": "Wed, 20 May 2020 20:50:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, May 21, 2020 at 11:51 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Em qua., 20 de mai. de 2020 às 20:48, Thomas Munro <thomas.munro@gmail.com> escreveu:\n>> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > postgres=# set max_parallel_workers_per_gather = 0;\n>> > Time: 227238,445 ms (03:47,238)\n>> > postgres=# set max_parallel_workers_per_gather = 1;\n>> > Time: 138027,351 ms (02:18,027)\n>>\n>> Ok, so it looks like NT/NTFS isn't suffering from this problem.\n>> Thanks for testing!\n>\n> Maybe it wasn’t clear, the tests were done with your patch applied.\n\nOh! And how do the times look without it?\n\n\n", "msg_date": "Thu, 21 May 2020 12:02:36 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em qua., 20 de mai. de 2020 às 21:03, Thomas Munro <thomas.munro@gmail.com>\nescreveu:\n\n> On Thu, May 21, 2020 at 11:51 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > Em qua., 20 de mai. 
de 2020 às 20:48, Thomas Munro <\n> thomas.munro@gmail.com> escreveu:\n> >> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >> > postgres=# set max_parallel_workers_per_gather = 0;\n> >> > Time: 227238,445 ms (03:47,238)\n> >> > postgres=# set max_parallel_workers_per_gather = 1;\n> >> > Time: 138027,351 ms (02:18,027)\n> >>\n> >> Ok, so it looks like NT/NTFS isn't suffering from this problem.\n> >> Thanks for testing!\n> >\n> > Maybe it wasn’t clear, the tests were done with your patch applied.\n>\n> Oh! And how do the times look without it?\n>\nVanila Postgres (latest)\n\ncreate table t as select generate_series(1, 800000000)::int i;\n set max_parallel_workers_per_gather = 0;\nTime: 210524,317 ms (03:30,524)\nset max_parallel_workers_per_gather = 1;\nTime: 146982,737 ms (02:26,983)\n\nregards,\nRanier Vilela\n\n", "msg_date": "Wed, 20 May 2020 22:37:36 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, May 21, 2020 at 1:38 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >> On Thu, May 21, 2020 at 11:15 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >> > postgres=# set max_parallel_workers_per_gather = 0;\n>> >> > Time: 227238,445 ms (03:47,238)\n>> >> > postgres=# set max_parallel_workers_per_gather = 1;\n>> >> > Time: 138027,351 ms (02:18,027)\n\n> Vanila Postgres (latest)\n>\n> create table t as select generate_series(1, 800000000)::int i;\n> set max_parallel_workers_per_gather = 0;\n> Time: 210524,317 ms (03:30,524)\n> set max_parallel_workers_per_gather = 1;\n> Time: 146982,737 ms (02:26,983)\n\nThanks. So it seems like Linux, Windows and anything using ZFS are\nOK, which probably explains why we hadn't heard complaints about it.\n\n\n", "msg_date": "Thu, 21 May 2020 14:31:37 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 21 May 2020 at 14:32, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. So it seems like Linux, Windows and anything using ZFS are\n> OK, which probably explains why we hadn't heard complaints about it.\n\nI tried out a different test on a Windows 8.1 machine I have here. I\nwas concerned that the test that was used here ends up with tuples\nthat are too narrow and that the executor would spend quite a bit of\ntime going between nodes and performing the actual aggregation. 
I\nthought it might be good to add some padding so that there are far\nfewer tuples on the page.\n\nI ended up with:\n\ncreate table t (a int, b text);\n-- create a table of 100GB in size.\ninsert into t select x,md5(x::text) from\ngenerate_series(1,1000000*1572.7381809)x; -- took 1 hr 18 mins\nvacuum freeze t;\n\nquery = select count(*) from t;\nDisk = Samsung SSD 850 EVO mSATA 1TB.\n\nMaster:\nworkers = 0 : Time: 269104.281 ms (04:29.104) 380MB/s\nworkers = 1 : Time: 741183.646 ms (12:21.184) 138MB/s\nworkers = 2 : Time: 656963.754 ms (10:56.964) 155MB/s\n\nPatched:\n\nworkers = 0 : Should be the same as before as the code for this didn't change.\nworkers = 1 : Time: 300299.364 ms (05:00.299) 340MB/s\nworkers = 2 : Time: 270213.726 ms (04:30.214) 379MB/s\n\n(A better query would likely have been just: SELECT * FROM t WHERE a =\n1; but I'd run the test by the time I thought of that.)\n\nSo, this shows that Windows, at least 8.1, does suffer from this too.\n\nFor the patch. I know you just put it together quickly, but I don't\nthink you can do that ramp up the way you have. It looks like there's\na risk of torn reads and torn writes and I'm unsure how much that\ncould affect the test results here. It looks like there's a risk that\na worker gets some garbage number of pages to read rather than what\nyou think it will. Also, I also don't quite understand the need for a\nramp-up in pages per serving. Shouldn't you instantly start at some\nsize and hold that, then only maybe ramp down at the end so that\nworkers all finish at close to the same time? However, I did have\nother ideas which I'll explain below.\n\n From my previous work on that function to add the atomics. I did think\nthat it would be better to dish out more than 1 page at a time.\nHowever, there is the risk that the workload is not evenly distributed\nbetween the workers. My thoughts were that we could divide the total\npages by the number of workers then again by 100 and dish out blocks\nbased on that. 
That way workers will get about 100th of their fair\nshare of pages at once, so assuming there's an even amount of work to\ndo per serving of pages, then the last worker should only run on at\nmost 1% longer. Perhaps that 100 should be 1000, then the run on time\nfor the last worker is just 0.1%. Perhaps the serving size can also\nbe capped at some maximum like 64. We'll certainly need to ensure it's\nat least 1! I imagine that will eliminate the need for any ramp down\nof pages per serving near the end of the scan.\n\nDavid\n\n\n", "msg_date": "Thu, 21 May 2020 17:06:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 21 May 2020 at 17:06, David Rowley <dgrowleyml@gmail.com> wrote:\n> For the patch. I know you just put it together quickly, but I don't\n> think you can do that ramp up the way you have. It looks like there's\n> a risk of torn reads and torn writes and I'm unsure how much that\n> could affect the test results here.\n\nOops. On closer inspection, I see that memory is per worker, not\nglobal to the scan.\n\n\n", "msg_date": "Fri, 22 May 2020 09:59:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, May 22, 2020 at 10:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 21 May 2020 at 17:06, David Rowley <dgrowleyml@gmail.com> wrote:\n> > For the patch. I know you just put it together quickly, but I don't\n> > think you can do that ramp up the way you have. It looks like there's\n> > a risk of torn reads and torn writes and I'm unsure how much that\n> > could affect the test results here.\n>\n> Oops. On closer inspection, I see that memory is per worker, not\n> global to the scan.\n\nRight, I think it's safe. 
I think you were probably right that\nramp-up isn't actually useful though, it's only the end of the scan\nthat requires special treatment so we don't get unfair allocation as\nthe work runs out, due to coarse grain. I suppose that even if you\nhave a scheme that falls back to fine grained allocation for the final\nN pages, it's still possible that a highly distracted process (most\nlikely the leader given its double duties) can finish up sitting on a\nlarge range of pages and eventually have to process them all at the\nend after the other workers have already knocked off and gone for a\npint.\n\n\n", "msg_date": "Fri, 22 May 2020 10:27:15 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Hi Thomas,\n\nSome more data points:\n\ncreate table t_heap as select generate_series(1, 100000000) i;\n\nQuery: select count(*) from t_heap;\nshared_buffers=32MB (so that I don't have to clear buffers, OS page\ncache)\nOS: FreeBSD 12.1 with UFS on GCP\n4 vCPUs, 4GB RAM Intel Skylake\n22G Google PersistentDisk\nTime is measured with \\timing on.\n\nWithout your patch:\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 33.88s\n 1 57.62s\n 2 62.01s\n 6 222.94s\n\nWith your patch:\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 29.04s\n 1 29.17s\n 2 28.78s\n 6 291.27s\n\nI checked with explain analyze to ensure that the number of workers\nplanned = max_parallel_workers_per_gather\n\nApart from the last result (max_parallel_workers_per_gather=6), all\nthe other results seem favorable.\nCould the last result be down to the fact that the number of workers\nplanned exceeded the number of vCPUs?\n\nI also wanted to evaluate Zedstore with your patch.\nI used the same setup as above.\nNo discernible difference though, maybe I'm missing something:\n\nWithout your patch:\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 25.86s\n 1 15.70s\n 2 12.60s\n 6 12.41s\n\n\nWith 
your patch:\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 26.96s\n 1 15.73s\n 2 12.46s\n 6 12.10s\n--\nSoumyadeep\n\n\nOn Thu, May 21, 2020 at 3:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, May 22, 2020 at 10:00 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> > On Thu, 21 May 2020 at 17:06, David Rowley <dgrowleyml@gmail.com> wrote:\n> > > For the patch. I know you just put it together quickly, but I don't\n> > > think you can do that ramp up the way you have. It looks like there's\n> > > a risk of torn reads and torn writes and I'm unsure how much that\n> > > could affect the test results here.\n> >\n> > Oops. On closer inspection, I see that memory is per worker, not\n> > global to the scan.\n>\n> Right, I think it's safe. I think you were probably right that\n> ramp-up isn't actually useful though, it's only the end of the scan\n> that requires special treatment so we don't get unfair allocation as\n> the work runs out, due to course grain. I suppose that even if you\n> have a scheme that falls back to fine grained allocation for the final\n> N pages, it's still possible that a highly distracted process (most\n> likely the leader given its double duties) can finish up sitting on a\n> large range of pages and eventually have to process them all at the\n> end after the other workers have already knocked off and gone for a\n> pint.\n>\n>\n>\n", "msg_date": "Thu, 21 May 2020 18:14:37 -0700", "msg_from": "Soumyadeep Chakraborty <sochakraborty@pivotal.io>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, May 22, 2020 at 1:14 PM Soumyadeep Chakraborty\n<sochakraborty@pivotal.io> wrote:\n> Some more data points:\n\nThanks!\n\n> max_parallel_workers_per_gather Time(seconds)\n> 0 29.04s\n> 1 29.17s\n> 2 28.78s\n> 6 291.27s\n>\n> I checked with explain analyze to ensure that the number of workers\n> planned = max_parallel_workers_per_gather\n>\n> Apart from the last result (max_parallel_workers_per_gather=6), all\n> the other results seem favorable.\n> Could the last result be down to the fact that the number of workers\n> planned exceeded the number of vCPUs?\n\nInteresting. I guess it has to do with patterns emerging from various\nparameters like that magic number 64 I hard coded into the test patch,\nand other unknowns in your storage stack. I see a small drop off that\nI can't explain yet, but not that.\n\n> I also wanted to evaluate Zedstore with your patch.\n> I used the same setup as above.\n> No discernible difference though, maybe I'm missing something:\n\nIt doesn't look like it's using table_block_parallelscan_nextpage() as\na block allocator so it's not affected by the patch. 
It has its own\nthing zs_parallelscan_nextrange(), which does\npg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids,\nZS_PARALLEL_CHUNK_SIZE), and that macro is 0x100000.\n\n\n", "msg_date": "Fri, 22 May 2020 14:26:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, May 19, 2020 at 10:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Good experiment. IIRC, we have discussed a similar idea during the\n> development of this feature but we haven't seen any better results by\n> allocating in ranges on the systems we have tried. So, we want with\n> the current approach which is more granular and seems to allow better\n> parallelism. I feel we need to ensure that we don't regress\n> parallelism in existing cases, otherwise, the idea sounds promising to\n> me.\n\nI think there's a significant difference. The idea I remember being\ndiscussed at the time was to divide the relation into equal parts at\nthe very start and give one part to each worker. I think that carries\na lot of risk of some workers finishing much sooner than others. This\nidea, AIUI, is to divide the relation into chunks that are small\ncompared to the size of the relation, but larger than 1 block. That\ncarries some risk of an unequal division of work, as has already been\nnoted, but it's much less, especially if we use smaller chunk sizes\nonce we get close to the end, as proposed here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 22 May 2020 14:30:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, May 21, 2020 at 6:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Right, I think it's safe. 
I think you were probably right that\n> ramp-up isn't actually useful though, it's only the end of the scan\n> that requires special treatment so we don't get unfair allocation as\n> the work runs out, due to course grain.\n\nThe ramp-up seems like it might be useful if the query involves a LIMIT.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 22 May 2020 14:31:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sat, May 23, 2020 at 12:00 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 19, 2020 at 10:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Good experiment. IIRC, we have discussed a similar idea during the\n> > development of this feature but we haven't seen any better results by\n> > allocating in ranges on the systems we have tried. So, we want with\n> > the current approach which is more granular and seems to allow better\n> > parallelism. I feel we need to ensure that we don't regress\n> > parallelism in existing cases, otherwise, the idea sounds promising to\n> > me.\n>\n> I think there's a significant difference. The idea I remember being\n> discussed at the time was to divide the relation into equal parts at\n> the very start and give one part to each worker.\n>\n\nI have checked the archives and found that we have done some testing\nby allowing each worker to work on a block-by-block basis and by\nhaving a fixed number of chunks for each worker. See the results [1]\n(the program used is attached in another email [2]). The conclusion\nwas that we didn't find much difference with any of those approaches.\nNow, the reason could be that because we have tested on a machine (I\nthink it was hydra (Power-7)) where the chunk-size doesn't matter but\nI think it can show some difference in the machines on which Thomas\nand David are testing. 
At that time there was also a discussion to\nchunk on the basis of \"each worker processes one 1GB-sized segment\"\nwhich Tom and Stephen seem to support [3]. I think an idea to divide\nthe relation into segments based on workers for a parallel scan has\nbeen used by other database (DynamoDB) as well [4] so it is not\ncompletely without merit. I understand that larger sized chunks can\nlead to unequal work distribution but they have their own advantages,\nso we might want to get the best of both the worlds where in the\nbeginning we have larger sized chunks and then slowly reduce the\nchunk-size towards the end of the scan. I am not sure what is the\nbest thing to do here but maybe some experiments can shed light on\nthis mystery.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JHCmN2X1LjQ4bOmLApt%2BbtOuid5Vqqk5G6dDFV69iyHg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1JyVNEBE8KuxKd3bJhkG6tSbpBYX_%2BZtP34ZSTCSucA1A%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/30549.1422459647%40sss.pgh.pa.us\n[4] - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Scan.html#Scan.ParallelScan\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 23 May 2020 10:04:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sat, 23 May 2020 at 06:31, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 21, 2020 at 6:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Right, I think it's safe. 
I think you were probably right that\n> > ramp-up isn't actually useful though, it's only the end of the scan\n> > that requires special treatment so we don't get unfair allocation as\n> > the work runs out, due to coarse grain.\n>\n> The ramp-up seems like it might be useful if the query involves a LIMIT.\n\nThat's true, but I think the intelligence there would need to go\nbeyond, \"if there's a LIMIT clause, do ramp-up\", as we might have\nalready fully ramped up well before the LIMIT is reached.\n\nDavid\n\n\n", "msg_date": "Mon, 25 May 2020 06:17:34 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "> It doesn't look like it's using table_block_parallelscan_nextpage() as\n> a block allocator so it's not affected by the patch. It has its own\n> thing zs_parallelscan_nextrange(), which does\n> pg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids,\n> ZS_PARALLEL_CHUNK_SIZE), and that macro is 0x100000.\n\nMy apologies, I was too hasty. Indeed, you are correct. Zedstore's\nunit of work is chunks of the logical zstid space. There is a\ncorrelation between the zstid and blocks: zstids near each other are\nlikely to lie in the same block or in neighboring blocks. It would be\ninteresting to try something like this patch for Zedstore.\n\nRegards,\nSoumyadeep\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:14:59 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sat, May 23, 2020 at 12:00 AM Robert Haas\n<robertmhaas(at)gmail(dot)com> wrote:\n> I think there's a significant difference. The idea I remember being\n> discussed at the time was to divide the relation into equal parts at\n> the very start and give one part to each worker. 
I think that carries\n> a lot of risk of some workers finishing much sooner than others.\n\nWas the idea of work-stealing considered? Here is what I have been\nthinking about:\n\nEach worker would be assigned a contiguous chunk of blocks at init time.\nThen if a worker is finished with its work, it can inspect other\nworkers' remaining work and \"steal\" some of the blocks from the end of\nthe victim worker's allocation.\n\nConsiderations for such a scheme:\n\n1. Victim selection: Who will be the victim worker? It can be selected at\nrandom if nothing else.\n\n2. How many blocks to steal? Stealing half of the victim's remaining\nblocks seems to be fair.\n\n3. Stealing threshold: We should disallow stealing if the amount of\nremaining work is not enough in the victim worker.\n\n4. Additional parallel state: Representing the chunk of \"work\". I guess\none variable for the current block and one for the last block in the\nchunk allocated. The latter would have to be protected with atomic\nfetches as it would be decremented by the stealing worker.\n\n5. Point 4 implies that there might be more atomic fetch operations as\ncompared to this patch. Idk if that is a lesser evil than the workers\nbeing idle..probably not? 
A way to salvage that a little would be to\nforego atomic fetches when the amount of work remaining is less than the\nthreshold discussed in 3 as there is no possibility of work stealing then.\n\n\nRegards,\n\nSoumyadeep\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:18:31 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jun 3, 2020 at 3:18 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> Idk if that is a lesser evil than the workers\n> being idle..probably not?\n\nApologies, I meant that the extra atomic fetches is probably a lesser\nevil than the workers being idle.\n\nSoumyadeep\n\n\n", "msg_date": "Wed, 3 Jun 2020 16:21:57 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 21 May 2020 at 17:06, David Rowley <dgrowleyml@gmail.com> wrote:\n> create table t (a int, b text);\n> -- create a table of 100GB in size.\n> insert into t select x,md5(x::text) from\n> generate_series(1,1000000*1572.7381809)x; -- took 1 hr 18 mins\n> vacuum freeze t;\n>\n> query = select count(*) from t;\n> Disk = Samsung SSD 850 EVO mSATA 1TB.\n>\n> Master:\n> workers = 0 : Time: 269104.281 ms (04:29.104) 380MB/s\n> workers = 1 : Time: 741183.646 ms (12:21.184) 138MB/s\n> workers = 2 : Time: 656963.754 ms (10:56.964) 155MB/s\n>\n> Patched:\n>\n> workers = 0 : Should be the same as before as the code for this didn't change.\n> workers = 1 : Time: 300299.364 ms (05:00.299) 340MB/s\n> workers = 2 : Time: 270213.726 ms (04:30.214) 379MB/s\n>\n> (A better query would likely have been just: SELECT * FROM t WHERE a =\n> 1; but I'd run the test by the time I thought of that.)\n>\n> So, this shows that Windows, at least 8.1, does suffer from this too.\n\nI repeated this test on an up-to-date Windows 10 machine to see if 
the\nlater kernel is any better at the readahead.\n\nResults for the same test are:\n\nMaster:\n\nmax_parallel_workers_per_gather = 0: Time: 148481.244 ms (02:28.481)\n(706.2MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 327556.121 ms (05:27.556)\n(320.1MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 329055.530 ms (05:29.056)\n(318.6MB/sec)\n\nPatched:\n\nmax_parallel_workers_per_gather = 0: Time: 141363.991 ms (02:21.364)\n(741.7MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 144982.202 ms (02:24.982)\n(723.2MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 143355.656 ms (02:23.356)\n(731.4MB/sec)\n\nDavid\n\n\n", "msg_date": "Wed, 10 Jun 2020 17:07:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jun 10, 2020 at 5:06 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I repeated this test on an up-to-date Windows 10 machine to see if the\n> later kernel is any better at the readahead.\n>\n> Results for the same test are:\n>\n> Master:\n>\n> max_parallel_workers_per_gather = 0: Time: 148481.244 ms (02:28.481)\n> (706.2MB/sec)\n> max_parallel_workers_per_gather = 1: Time: 327556.121 ms (05:27.556)\n> (320.1MB/sec)\n> max_parallel_workers_per_gather = 2: Time: 329055.530 ms (05:29.056)\n> (318.6MB/sec)\n>\n> Patched:\n>\n> max_parallel_workers_per_gather = 0: Time: 141363.991 ms (02:21.364)\n> (741.7MB/sec)\n> max_parallel_workers_per_gather = 1: Time: 144982.202 ms (02:24.982)\n> (723.2MB/sec)\n> max_parallel_workers_per_gather = 2: Time: 143355.656 ms (02:23.356)\n> (731.4MB/sec)\n\nThanks!\n\nI also heard from Andres that he likes this patch with his AIO\nprototype, because of the way request merging works. So it seems like\nthere are several reasons to want it.\n\nBut ... where should we get the maximum step size from? 
A GUC?\n\n\n", "msg_date": "Wed, 10 Jun 2020 17:20:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 10 Jun 2020 at 17:21, Thomas Munro <thomas.munro@gmail.com> wrote:\n> I also heard from Andres that he likes this patch with his AIO\n> prototype, because of the way request merging works. So it seems like\n> there are several reasons to want it.\n>\n> But ... where should we get the maximum step size from? A GUC?\n\nI guess we'd need to determine if other step sizes were better under\nany conditions. I guess one condition would be if there was a LIMIT\nclause. I could check if setting it to 1024 makes any difference, but\nI'm thinking it won't since I got fairly consistent results on all\nworker settings with the patched version.\n\nDavid\n\n\n", "msg_date": "Wed, 10 Jun 2020 17:39:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 10 Jun 2020 at 17:39, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 10 Jun 2020 at 17:21, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I also heard from Andres that he likes this patch with his AIO\n> > prototype, because of the way request merging works. So it seems like\n> > there are several reasons to want it.\n> >\n> > But ... where should we get the maximum step size from? A GUC?\n>\n> I guess we'd need to determine if other step sizes were better under\n> any conditions. I guess one condition would be if there was a LIMIT\n> clause. I could check if setting it to 1024 makes any difference, but\n> I'm thinking it won't since I got fairly consistent results on all\n> worker settings with the patched version.\n\nI did another round of testing on the same machine trying some step\nsizes larger than 64 blocks. 
I can confirm that it does improve the\nsituation further going bigger than 64.\n\nI got up as far as 16384, but made a couple of additional changes for\nthat run only. Instead of increasing the ramp-up 1 block at a time, I\ninitialised phsw_step_size to 1 and multiplied it by 2 until I reached\nthe chosen step size. With numbers that big, ramping up 1 block at a\ntime was slow enough that I'd never have reached the target step size\n\nHere are the results of the testing:\n\nMaster:\n\nmax_parallel_workers_per_gather = 0: Time: 148481.244 ms (02:28.481)\n(706.2MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 327556.121 ms (05:27.556)\n(320.1MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 329055.530 ms (05:29.056)\n(318.6MB/sec)\n\nPatched stepsize = 64:\n\nmax_parallel_workers_per_gather = 0: Time: 141363.991 ms (02:21.364)\n(741.7MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 144982.202 ms (02:24.982)\n(723.2MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 143355.656 ms (02:23.356)\n(731.4MB/sec)\n\nPatched stepsize = 1024:\n\nmax_parallel_workers_per_gather = 0: Time: 152599.159 ms (02:32.599)\n(687.1MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 104227.232 ms (01:44.227)\n(1006.04MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 97149.343 ms (01:37.149)\n(1079.3MB/sec)\n\nPatched stepsize = 8192:\n\nmax_parallel_workers_per_gather = 0: Time: 143524.038 ms (02:23.524)\n(730.59MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 102899.288 ms (01:42.899)\n(1019.0MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 91148.340 ms (01:31.148)\n(1150.4MB/sec)\n\nPatched stepsize = 16384 (power 2 ramp-up)\n\nmax_parallel_workers_per_gather = 0: Time: 144598.502 ms (02:24.599)\n(725.16MB/sec)\nmax_parallel_workers_per_gather = 1: Time: 97344.160 ms (01:37.344)\n(1077.1MB/sec)\nmax_parallel_workers_per_gather = 2: Time: 88025.420 ms (01:28.025)\n(1191.2MB/sec)\n\nI thought about what you mentioned about a GUC, and I think it's a bad\nidea to do that. 
I think it would be better to choose based on the\nrelation size. For smaller relations, we want to keep the step size\nsmall. Someone may enable parallel query on such a small relation if\nthey're doing something like calling an expensive function on the\nresults, so we do need to avoid going large for small relations.\n\nI considered something like:\n\ncreate function nextpower2(a bigint) returns bigint as $$ declare n\nbigint := 1; begin while n < a loop n := n * 2; end loop; return n;\nend; $$ language plpgsql;\nselect pg_size_pretty(power(2,p)::numeric * 8192) rel_size,\nnextpower2(power(2,p)::bigint / 1024) as stepsize from\ngenerate_series(1,32) p;\n rel_size | stepsize\n----------+----------\n 16 kB | 1\n 32 kB | 1\n 64 kB | 1\n 128 kB | 1\n 256 kB | 1\n 512 kB | 1\n 1024 kB | 1\n 2048 kB | 1\n 4096 kB | 1\n 8192 kB | 1\n 16 MB | 2\n 32 MB | 4\n 64 MB | 8\n 128 MB | 16\n 256 MB | 32\n 512 MB | 64\n 1024 MB | 128\n 2048 MB | 256\n 4096 MB | 512\n 8192 MB | 1024\n 16 GB | 2048\n 32 GB | 4096\n 64 GB | 8192\n 128 GB | 16384\n 256 GB | 32768\n 512 GB | 65536\n 1024 GB | 131072\n 2048 GB | 262144\n 4096 GB | 524288\n 8192 GB | 1048576\n 16 TB | 2097152\n 32 TB | 4194304\n\nSo with that algorithm with this 100GB table that I've been using in\nmy test, we'd go with a step size of 16384. I think we'd want to avoid\ngoing any more than that. The above code means we'll do between just\nbelow 0.1% and 0.2% of the relation per step. If I divided the number\nof blocks by say 128 instead of 1024, then that would be about 0.78%\nand 1.56% of the relation each time. 
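As a standalone C sketch of that same heuristic (illustrative only; names here are hypothetical, and in PostgreSQL itself the loop would be pg_nextpower2_32() from pg_bitutils.h):

```c
#include <stdint.h>

/*
 * Round v up to the next power of two; next_power2(0) returns 1, which
 * gives the minimum step size of one block. This mirrors the plpgsql
 * nextpower2() above.
 */
static uint64_t
next_power2(uint64_t v)
{
	uint64_t	n = 1;

	while (n < v)
		n *= 2;
	return n;
}

/*
 * Aim for somewhere between 512 and 1024 chunks regardless of relation
 * size, by dividing the relation's block count by 1024 and rounding up
 * to a power of two.
 */
static uint64_t
parallel_step_size(uint64_t rel_nblocks)
{
	return next_power2(rel_nblocks / 1024);
}
```

For the 100GB table used in these tests (about 12.8 million 8kB blocks), this gives the step size of 16384 blocks mentioned above.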
It's not unrealistic today that\nsomeone might throw that many workers at a job, so, I'd say dividing\nby 1024 or even 2048 would likely be about right.\n\nDavid\n\n\n", "msg_date": "Thu, 11 Jun 2020 00:33:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jun 10, 2020 at 6:04 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 10 Jun 2020 at 17:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Wed, 10 Jun 2020 at 17:21, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > I also heard from Andres that he likes this patch with his AIO\n> > > prototype, because of the way request merging works. So it seems like\n> > > there are several reasons to want it.\n> > >\n> > > But ... where should we get the maximum step size from? A GUC?\n> >\n> > I guess we'd need to determine if other step sizes were better under\n> > any conditions. I guess one condition would be if there was a LIMIT\n> > clause. I could check if setting it to 1024 makes any difference, but\n> > I'm thinking it won't since I got fairly consistent results on all\n> > worker settings with the patched version.\n>\n> I did another round of testing on the same machine trying some step\n> sizes larger than 64 blocks. I can confirm that it does improve the\n> situation further going bigger than 64.\n>\n\nCan we try the same test with 4, 8, 16 workers as well? 
I don't\nforesee any problem with a higher number of workers but it might be\nbetter to once check that if it is not too much additional work.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Jun 2020 18:54:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 11 Jun 2020 at 01:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Can we try the same test with 4, 8, 16 workers as well? I don't\n> foresee any problem with a higher number of workers but it might be\n> better to once check that if it is not too much additional work.\n\nI ran the tests again with up to 7 workers. The CPU here only has 8\ncores (a laptop), so I'm not sure if there's much sense in going\nhigher than that?\n\nCPU = Intel i7-8565U. 16GB RAM.\n\nNote that I did the power2 ramp-up with each of the patched tests this\ntime. Thomas' version ramps up 1 page at a time, which is ok when\nonly ramping up to 64 pages, but not for these higher numbers I'm\ntesting with. 
(Patch attached)\n\nResults attached in a graph format, or in text below:\n\nMaster:\n\nworkers=0: Time: 141175.935 ms (02:21.176) (742.7MB/sec)\nworkers=1: Time: 316854.538 ms (05:16.855) (330.9MB/sec)\nworkers=2: Time: 323471.791 ms (05:23.472) (324.2MB/sec)\nworkers=3: Time: 321637.945 ms (05:21.638) (326MB/sec)\nworkers=4: Time: 308689.599 ms (05:08.690) (339.7MB/sec)\nworkers=5: Time: 289014.709 ms (04:49.015) (362.8MB/sec)\nworkers=6: Time: 267785.27 ms (04:27.785) (391.6MB/sec)\nworkers=7: Time: 248735.817 ms (04:08.736) (421.6MB/sec)\n\nPatched 64: (power 2 ramp-up)\n\nworkers=0: Time: 152752.558 ms (02:32.753) (686.5MB/sec)\nworkers=1: Time: 149940.841 ms (02:29.941) (699.3MB/sec)\nworkers=2: Time: 136534.043 ms (02:16.534) (768MB/sec)\nworkers=3: Time: 119387.248 ms (01:59.387) (878.3MB/sec)\nworkers=4: Time: 114080.131 ms (01:54.080) (919.2MB/sec)\nworkers=5: Time: 111472.144 ms (01:51.472) (940.7MB/sec)\nworkers=6: Time: 108290.608 ms (01:48.291) (968.3MB/sec)\nworkers=7: Time: 104349.947 ms (01:44.350) (1004.9MB/sec)\n\nPatched 1024: (power 2 ramp-up)\n\nworkers=0: Time: 146106.086 ms (02:26.106) (717.7MB/sec)\nworkers=1: Time: 109832.773 ms (01:49.833) (954.7MB/sec)\nworkers=2: Time: 98921.515 ms (01:38.922) (1060MB/sec)\nworkers=3: Time: 94259.243 ms (01:34.259) (1112.4MB/sec)\nworkers=4: Time: 93275.637 ms (01:33.276) (1124.2MB/sec)\nworkers=5: Time: 93921.452 ms (01:33.921) (1116.4MB/sec)\nworkers=6: Time: 93988.386 ms (01:33.988) (1115.6MB/sec)\nworkers=7: Time: 92096.414 ms (01:32.096) (1138.6MB/sec)\n\nPatched 8192: (power 2 ramp-up)\n\nworkers=0: Time: 143367.057 ms (02:23.367) (731.4MB/sec)\nworkers=1: Time: 103138.918 ms (01:43.139) (1016.7MB/sec)\nworkers=2: Time: 93368.573 ms (01:33.369) (1123.1MB/sec)\nworkers=3: Time: 89464.529 ms (01:29.465) (1172.1MB/sec)\nworkers=4: Time: 89921.393 ms (01:29.921) (1166.1MB/sec)\nworkers=5: Time: 93575.401 ms (01:33.575) (1120.6MB/sec)\nworkers=6: Time: 93636.584 ms (01:33.637) 
(1119.8MB/sec)\nworkers=7: Time: 93682.21 ms (01:33.682) (1119.3MB/sec)\n\nPatched 16384 (power 2 ramp-up)\n\nworkers=0: Time: 144598.502 ms (02:24.599) (725.2MB/sec)\nworkers=1: Time: 97344.16 ms (01:37.344) (1077.2MB/sec)\nworkers=2: Time: 88025.42 ms (01:28.025) (1191.2MB/sec)\nworkers=3: Time: 97711.521 ms (01:37.712) (1073.1MB/sec)\nworkers=4: Time: 88877.913 ms (01:28.878) (1179.8MB/sec)\nworkers=5: Time: 96985.978 ms (01:36.986) (1081.2MB/sec)\nworkers=6: Time: 92368.543 ms (01:32.369) (1135.2MB/sec)\nworkers=7: Time: 87498.156 ms (01:27.498) (1198.4MB/sec)\n\nDavid", "msg_date": "Thu, 11 Jun 2020 13:47:49 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 11, 2020 at 7:18 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 01:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Can we try the same test with 4, 8, 16 workers as well? I don't\n> > foresee any problem with a higher number of workers but it might be\n> > better to once check that if it is not too much additional work.\n>\n> I ran the tests again with up to 7 workers. The CPU here only has 8\n> cores (a laptop), so I'm not sure if there's much sense in going\n> higher than that?\n>\n\nI think it proves your point that there is a value in going for step\nsize greater than 64. However, I think the difference at higher sizes\nis not significant. For example, the difference between 8192 and\n16384 doesn't seem much if we leave higher worker count where the data\ncould be a bit misleading due to variation. 
I could see that there is\na clear and significant difference till 1024 but after that difference\nis not much.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 07:38:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 11 Jun 2020 at 14:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 7:18 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 11 Jun 2020 at 01:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Can we try the same test with 4, 8, 16 workers as well? I don't\n> > > foresee any problem with a higher number of workers but it might be\n> > > better to once check that if it is not too much additional work.\n> >\n> > I ran the tests again with up to 7 workers. The CPU here only has 8\n> > cores (a laptop), so I'm not sure if there's much sense in going\n> > higher than that?\n> >\n>\n> I think it proves your point that there is a value in going for step\n> size greater than 64. However, I think the difference at higher sizes\n> is not significant. For example, the difference between 8192 and\n> 16384 doesn't seem much if we leave higher worker count where the data\n> could be a bit misleading due to variation. I could see that there is\n> a clear and significant difference till 1024 but after that difference\n> is not much.\n\nI guess the danger with going too big is that we have some Seqscan\nfilter that causes some workers to do very little to nothing with the\nrows, despite discarding them and other workers are left with rows\nthat are not filtered and require some expensive processing. Keeping\nthe number of blocks on the smaller side would reduce the chances of\nsomeone being hit by that. 
The algorithm I proposed above still can\nbe capped by doing something like nblocks = Min(1024,\npg_nextpower2_32(pbscan->phs_nblocks / 1024)); That way we'll end up\nwith:\n\n\n rel_size | stepsize\n----------+----------\n 16 kB | 1\n 32 kB | 1\n 64 kB | 1\n 128 kB | 1\n 256 kB | 1\n 512 kB | 1\n 1024 kB | 1\n 2048 kB | 1\n 4096 kB | 1\n 8192 kB | 1\n 16 MB | 2\n 32 MB | 4\n 64 MB | 8\n 128 MB | 16\n 256 MB | 32\n 512 MB | 64\n 1024 MB | 128\n 2048 MB | 256\n 4096 MB | 512\n 8192 MB | 1024\n 16 GB | 1024\n 32 GB | 1024\n 64 GB | 1024\n 128 GB | 1024\n 256 GB | 1024\n 512 GB | 1024\n 1024 GB | 1024\n 2048 GB | 1024\n 4096 GB | 1024\n 8192 GB | 1024\n 16 TB | 1024\n 32 TB | 1024\n(32 rows)\n\nDavid\n\n\n", "msg_date": "Thu, 11 Jun 2020 15:05:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 11, 2020 at 8:35 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 14:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 11, 2020 at 7:18 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Thu, 11 Jun 2020 at 01:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Can we try the same test with 4, 8, 16 workers as well? I don't\n> > > > foresee any problem with a higher number of workers but it might be\n> > > > better to once check that if it is not too much additional work.\n> > >\n> > > I ran the tests again with up to 7 workers. The CPU here only has 8\n> > > cores (a laptop), so I'm not sure if there's much sense in going\n> > > higher than that?\n> > >\n> >\n> > I think it proves your point that there is a value in going for step\n> > size greater than 64. However, I think the difference at higher sizes\n> > is not significant. 
For example, the difference between 8192 and\n> > 16384 doesn't seem much if we leave higher worker count where the data\n> > could be a bit misleading due to variation. I could see that there is\n> > a clear and significant difference till 1024 but after that difference\n> > is not much.\n>\n> I guess the danger with going too big is that we have some Seqscan\n> filter that causes some workers to do very little to nothing with the\n> rows, despite discarding them and other workers are left with rows\n> that are not filtered and require some expensive processing. Keeping\n> the number of blocks on the smaller side would reduce the chances of\n> someone being hit by that.\n>\n\nRight and good point.\n\n> The algorithm I proposed above still can\n> be capped by doing something like nblocks = Min(1024,\n> pg_nextpower2_32(pbscan->phs_nblocks / 1024)); That way we'll end up\n> with:\n>\n\nI think something on these lines would be a good idea especially\nkeeping step-size proportional to relation size. However, I am not\ncompletely sure if doubling the step-size with equal increase in\nrelation size (ex. what is happening between 16MB~8192MB) is the best\nidea. Why not double the step-size when relation size increases by\nfour times? Will some more tests help us to identify this? I also\ndon't know what is the right answer here so just trying to brainstorm.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 09:33:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 11 Jun 2020 at 16:03, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think something on these lines would be a good idea especially\n> keeping step-size proportional to relation size. However, I am not\n> completely sure if doubling the step-size with equal increase in\n> relation size (ex. 
what is happening between 16MB~8192MB) is the best\n> idea. Why not double the step-size when relation size increases by\n> four times? Will some more tests help us to identify this? I also\n> don't know what is the right answer here so just trying to brainstorm.\n\nBrainstorming sounds good. I'm by no means under any illusion that the\nformula is correct.\n\nBut, why four times? The way I did it tries to keep the number of\nchunks roughly the same each time. I think the key is the number of\nchunks more than the size of the chunks. Having fewer chunks increases\nthe chances of an imbalance of work between workers, and with what you\nmention, the number of chunks will vary more than what I have proposed.\n\nThe code I showed above will produce something between 512-1024 chunks\nfor all cases until we reach 2^20 pages, then we start capping the chunk\nsize to 1024. I could probably get onboard with making it depend on\nthe number of parallel workers, but perhaps it would be better just to\ndivide by, say, 16384 rather than 1024, as I proposed above. That way\nwe'll be more fine-grained, but we'll still read in larger than 1024\nchunk sizes when the relation gets beyond 128GB.\n\nDavid\n\n\n", "msg_date": "Thu, 11 Jun 2020 16:43:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 11, 2020 at 10:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 16:03, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think something on these lines would be a good idea especially\n> > keeping step-size proportional to relation size. However, I am not\n> > completely sure if doubling the step-size with equal increase in\n> > relation size (ex. what is happening between 16MB~8192MB) is the best\n> > idea. Why not double the step-size when relation size increases by\n> > four times? Will some more tests help us to identify this? 
I also\n> > don't know what is the right answer here so just trying to brainstorm.\n>\n> Brainstorming sounds good. I'm by no means under any illusion that the\n> formula is correct.\n>\n> But, why four times?\n>\n\nJust trying to see if we can optimize such that we use bigger\nstep-size for bigger relations and smaller step-size for smaller\nrelations.\n\n> The way I did it tries to keep the number of\n> chunks roughly the same each time. I think the key is the number of\n> chunks more than the size of the chunks. Having fewer chunks increases\n> the chances of an imbalance of work between workers, and with what you\n> mention, the number of chunks will vary more than what I have proposed\n>\n\nBut, I think it will lead to more number of chunks for smaller relations.\n\n> The code I showed above will produce something between 512-1024 chunks\n> for all cases until we 2^20 pages, then we start capping the chunk\n> size to 1024. I could probably get onboard with making it depend on\n> the number of parallel workers, but perhaps it would be better just to\n> divide by, say, 16384 rather than 1024, as I proposed above. That way\n> we'll be more fine-grained, but we'll still read in larger than 1024\n> chunk sizes when the relation gets beyond 128GB.\n>\n\nI think increasing step-size might be okay for very large relations.\n\nAnother point I am thinking is that whatever formula we come up here\nmight not be a good fit for every case. For ex. as you mentioned\nabove that larger step-size can impact the performance based on\nqualification, similarly there could be other things like having a\ntarget list or qual having some function which takes more time for\ncertain tuples and lesser for others especially if function evaluation\nis based on some column values. 
So, can we think of providing a\nrel_option for step-size?\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 17:04:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, 11 Jun 2020 at 23:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Another point I am thinking is that whatever formula we come up here\n> might not be a good fit for every case. For ex. as you mentioned\n> above that larger step-size can impact the performance based on\n> qualification, similarly there could be other things like having a\n> target list or qual having some function which takes more time for\n> certain tuples and lesser for others especially if function evaluation\n> is based on some column values. So, can we think of providing a\n> rel_option for step-size?\n\nI think someone at some point is not going to like the automatic\nchoice. So perhaps a reloption to allow users to overwrite it is a\ngood idea. -1 should most likely mean use the automatic choice based\non relation size. I think for parallel seq scans that filter a large\nportion of the records most likely need some sort of index, but there\nare perhaps some genuine cases for not having one. e.g perhaps the\nquery is just not run often enough for an index to be worthwhile. 
In\nthat case, the performance is likely less critical, but at least the\nreloption would allow users to get the old behaviour.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Jun 2020 08:54:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, Jun 12, 2020 at 2:24 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 23:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Another point I am thinking is that whatever formula we come up here\n> > might not be a good fit for every case. For ex. as you mentioned\n> > above that larger step-size can impact the performance based on\n> > qualification, similarly there could be other things like having a\n> > target list or qual having some function which takes more time for\n> > certain tuples and lesser for others especially if function evaluation\n> > is based on some column values. So, can we think of providing a\n> > rel_option for step-size?\n>\n> I think someone at some point is not going to like the automatic\n> choice. So perhaps a reloption to allow users to overwrite it is a\n> good idea. -1 should most likely mean use the automatic choice based\n> on relation size. I think for parallel seq scans that filter a large\n> portion of the records most likely need some sort of index, but there\n> are perhaps some genuine cases for not having one. e.g perhaps the\n> query is just not run often enough for an index to be worthwhile. 
In\n> that case, the performance is likely less critical, but at least the\n> reloption would allow users to get the old behaviour.\n>\n\nmakes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jun 2020 08:55:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 11, 2020 at 4:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I think someone at some point is not going to like the automatic\n> choice. So perhaps a reloption to allow users to overwrite it is a\n> good idea. -1 should most likely mean use the automatic choice based\n> on relation size. I think for parallel seq scans that filter a large\n> portion of the records most likely need some sort of index, but there\n> are perhaps some genuine cases for not having one. e.g perhaps the\n> query is just not run often enough for an index to be worthwhile. In\n> that case, the performance is likely less critical, but at least the\n> reloption would allow users to get the old behaviour.\n\nLet me play the devil's advocate here. I feel like if the step size is\nlimited by the relation size and there is ramp-up and ramp-down, or\nmaybe even if you don't have all 3 of those but perhaps say 2 of them,\nthe chances of there being a significant downside from using this seem\nquite small. At that point I wonder whether you really need an option.\nIt's true that someone might not like it, but there are all sorts of\nthings that at least one person doesn't like and one can't cater to\nall of them.\n\nTo put that another way, in what scenario do we suppose that a\nreasonable person would wish to use this reloption? If there's none,\nwe don't need it. 
If there is one, can we develop a mitigation that\nsolves their problem automatically instead?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Jun 2020 13:58:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, Jun 12, 2020 at 11:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 4:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I think someone at some point is not going to like the automatic\n> > choice. So perhaps a reloption to allow users to overwrite it is a\n> > good idea. -1 should most likely mean use the automatic choice based\n> > on relation size. I think for parallel seq scans that filter a large\n> > portion of the records most likely need some sort of index, but there\n> > are perhaps some genuine cases for not having one. e.g perhaps the\n> > query is just not run often enough for an index to be worthwhile. In\n> > that case, the performance is likely less critical, but at least the\n> > reloption would allow users to get the old behaviour.\n>\n> Let me play the devil's advocate here. I feel like if the step size is\n> limited by the relation size and there is ramp-up and ramp-down, or\n> maybe even if you don't have all 3 of those but perhaps say 2 of them,\n> the chances of there being a significant downside from using this seem\n> quite small. 
At that point I wonder whether you really need an option.\n> It's true that someone might not like it, but there are all sorts of\n> things that at least one person doesn't like and one can't cater to\n> all of them.\n>\n> To put that another way, in what scenario do we suppose that a\n> reasonable person would wish to use this reloption?\n>\n\nThe performance can vary based on qualification where some workers\ndiscard more rows as compared to others, with the current system with\nstep-size as one, the probability of unequal work among workers is\nquite low as compared to larger step-sizes.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 13 Jun 2020 11:43:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> The performance can vary based on qualification where some workers\n> discard more rows as compared to others, with the current system with\n> step-size as one, the probability of unequal work among workers is\n> quite low as compared to larger step-sizes.\n\nIt seems like this would require incredibly bad luck, though. If the\nstep size is less than 1/1024 of the relation size, and we ramp down\nfor, say, the last 5% of the relation, then the worst case is that\nchunk 972 of 1024 is super-slow compared to all the other blocks, so\nthat it takes longer to process chunk 972 only than it does to process\nchunks 973-1024 combined. It is not impossible, but that chunk has to\nbe like 50x worse than all the others, which doesn't seem like\nsomething that is going to happen often enough to be worth worrying\nabout very much. I'm not saying it will never happen. I'm just\nskeptical about the benefit of adding a GUC or reloption for a corner\ncase like this. 
I think people will fiddle with it when it isn't\nreally needed, and won't realize it exists in the scenarios where it\nwould have helped. And then, because we have the setting, we'll have\nto keep it around forever, even as we improve the algorithm in other\nways, which could become a maintenance burden. I think it's better to\ntreat stuff like this as an implementation detail rather than\nsomething we expect users to adjust.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 15 Jun 2020 11:29:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, 16 Jun 2020 at 03:29, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The performance can vary based on qualification where some workers\n> > discard more rows as compared to others, with the current system with\n> > step-size as one, the probability of unequal work among workers is\n> > quite low as compared to larger step-sizes.\n>\n> It seems like this would require incredibly bad luck, though. If the\n> step size is less than 1/1024 of the relation size, and we ramp down\n> for, say, the last 5% of the relation, then the worst case is that\n> chunk 972 of 1024 is super-slow compared to all the other blocks, so\n> that it takes longer to process chunk 972 only than it does to process\n> chunks 973-1024 combined. It is not impossible, but that chunk has to\n> be like 50x worse than all the others, which doesn't seem like\n> something that is going to happen often enough to be worth worrying\n> about very much. I'm not saying it will never happen. I'm just\n> skeptical about the benefit of adding a GUC or reloption for a corner\n> case like this. 
I think people will fiddle with it when it isn't\n> really needed, and won't realize it exists in the scenarios where it\n> would have helped.\n\nI'm trying to think of likely scenarios where \"lots of work at the\nend\" is going to be common. I can think of queue processing, but\neverything I can think about there requires an UPDATE to the processed\nflag, which won't be using parallel query anyway. There's then\nprocessing something based on some period of time like \"the last\nhour\", \"today\". For append-only tables the latest information is\nlikely to be at the end of the heap. For that, anyone that's getting\na SeqScan on a large relation should likely have added an index. If a\nbtree is too costly, then BRIN is pretty perfect for that case.\n\nFWIW, I'm not really keen on adding a reloption or a GUC. I've also\nvoiced here that I'm not even keen on the ramp-up.\n\nTo summarise what's all been proposed so far:\n\n1. Use a constant (e.g. 64) as the parallel step size\n2. Ramp up the step size over time\n3. Ramp down the step size towards the end of the scan.\n4. Auto-determine a good step size based on the size of the relation.\n5. Add a GUC to allow users to control or override the step size.\n6. Add a reloption to allow users to control or override the step size.\n\n\nHere are my thoughts on each of those:\n\n#1 is a bad idea as there are legitimate use-cases for using parallel\nquery on small tables. e.g. calling some expensive parallel-safe\nfunction. Small tables are more likely to be cached.\n#2 I don't quite understand why this is useful.\n#3 I understand this is to try to make it so workers all complete\naround about the same time.\n#4 We really should be doing it this way.\n#5 Having a global knob to control something that is very specific to\nthe size of a relation does not make much sense to me.\n#6. I imagine someone will have some weird use-case that works better\nwhen parallel workers get 1 page at a time. 
I'm not convinced that\nthey're not doing something else wrong.\n\nSo my vote is for 4 with possibly 3, if we can come up with something\nsmart enough * that works well in parallel. I think there's less of a\nneed for this if we divided the relation into more chunks, e.g. 8192\nor 16384.\n\n* Perhaps when there are less than 2 full chunks remaining, workers\ncan just take half of what is left. Or more specifically\nMax(pg_next_power2(remaining_blocks) / 2, 1), which ideally would work\nout allocating an amount of pages proportional to the amount of beer\neach mathematician receives in the \"An infinite number of\nmathematicians walk into a bar\" joke, obviously with the exception\nthat we stop dividing when we get to 1. However, I'm not quite sure\nhow well that can be made to work with multiple bartenders working in\nparallel.\n\nDavid\n\n\n", "msg_date": "Tue, 16 Jun 2020 09:09:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Mon, Jun 15, 2020 at 8:59 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jun 13, 2020 at 2:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > The performance can vary based on qualification where some workers\n> > discard more rows as compared to others, with the current system with\n> > step-size as one, the probability of unequal work among workers is\n> > quite low as compared to larger step-sizes.\n>\n> It seems like this would require incredibly bad luck, though.\n>\n\nI agree that won't be a common scenario but apart from that also I am\nnot sure if we can conclude that the proposed patch won't cause any\nregressions. See one of the tests [1] done by Soumyadeep where the\npatch has caused regression in one of the cases, now we can either try\nto improve the patch and see we didn't cause any regressions or assume\nthat those are some minority cases which we don't care. 
Another point\nis that this thread has started with a theory that this idea can give\nbenefits on certain filesystems and AFAICS we have tested it on one\nother type of system, so not sure if that is sufficient.\n\nHaving said that, I just want to clarify that I am positive about this\nwork but just not very sure that it is a universal win based on the\ntesting done till now.\n\n[1] - https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB%3DxyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jun 2020 16:27:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Mon, Jun 15, 2020 at 5:09 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> To summarise what's all been proposed so far:\n>\n> 1. Use a constant (e.g. 64) as the parallel step size\n> 2. Ramp up the step size over time\n> 3. Ramp down the step size towards the end of the scan.\n> 4. Auto-determine a good step size based on the size of the relation.\n> 5. Add a GUC to allow users to control or override the step size.\n> 6. Add a reloption to allow users to control or override the step size.\n>\n> Here are my thoughts on each of those:\n>\n> #1 is a bad idea as there are legitimate use-cases for using parallel\n> query on small tables. e.g. calling some expensive parallel-safe\n> function. Small tables are more likely to be cached.\n\nI agree.\n\n> #2 I don't quite understand why this is useful.\n\nI was thinking that if the query had a small LIMIT, you'd want to\navoid handing out excessively large chunks, but actually that seems\nlike it might just be fuzzy thinking on my part. 
We're not committing\nto scanning the entirety of the chunk just because we've assigned it\nto a worker.\n\n> #3 I understand this is to try to make it so workers all complete\n> around about the same time.\n> #4 We really should be doing it this way.\n> #5 Having a global knob to control something that is very specific to\n> the size of a relation does not make much sense to me.\n> #6. I imagine someone will have some weird use-case that works better\n> when parallel workers get 1 page at a time. I'm not convinced that\n> they're not doing something else wrong.\n\nAgree with all of that.\n\n> So my vote is for 4 with possibly 3, if we can come up with something\n> smart enough * that works well in parallel. I think there's less of a\n> need for this if we divided the relation into more chunks, e.g. 8192\n> or 16384.\n\nI agree with that too.\n\n> * Perhaps when there are less than 2 full chunks remaining, workers\n> can just take half of what is left. Or more specifically\n> Max(pg_next_power2(remaining_blocks) / 2, 1), which ideally would work\n> out allocating an amount of pages proportional to the amount of beer\n> each mathematician receives in the \"An infinite number of\n> mathematicians walk into a bar\" joke, obviously with the exception\n> that we stop dividing when we get to 1. However, I'm not quite sure\n> how well that can be made to work with multiple bartenders working in\n> parallel.\n\nThat doesn't sound nearly aggressive enough to me. I mean, let's\nsuppose that we're concerned about the scenario where one chunk takes\n50x as long as all the other chunks. Well, if we have 1024 chunks\ntotal, and we hit the problem chunk near the beginning, there will be\nno problem. In effect, there are 1073 units of work instead of 1024,\nand we accidentally assigned one guy 50 units of work when we thought\nwe were assigning 1 unit of work. 
If there's enough work left that we\ncan assign each other worker 49 units more than what we would have\ndone had that chunk been the same cost as all the others, then there's\nno problem. So for instance if there are 4 workers, we can still even\nthings out if we hit the problematic chunk more than ~150 chunks from\nthe end. If we're closer to the end than that, there's no way to avoid\nthe slow chunk delaying the overall completion time, and the problem\ngets worse as the problem chunk gets closer to the end.\n\nHow can we improve? Well, if when we're less than 150 chunks from the\nend, we reduce the chunk size by 2x, then instead of having 1 chunk\nthat is 50x as expensive, hopefully we'll have 2 smaller chunks that\nare each 50x as expensive. They'll get assigned to 2 different\nworkers, and the remaining 2 workers now need enough extra work from\nother chunks to even out the work distribution, which should still be\npossible. It gets tough though if breaking the one expensive chunk in\nhalf produces 1 regular-price half-chunk and one half-chunk that is\n50x as expensive as all the others. Because we have <150 chunks left,\nthere's no way to keep everybody else busy until the sole expensive\nhalf-chunk completes. In a sufficiently-extreme scenario, assigning\neven a single full block to a worker is too much, and you really want\nto handle the tuples out individually.\n\nAnyway, if we don't do anything special until we get down to the last\n2 chunks, it's only going to make a difference when one of those last\n2 chunks happens to be the expensive one. If say the third-to-last\nchunk is the expensive one, subdividing the last 2 chunks lets all the\nworkers who didn't get the expensive chunk fight over the scraps, but\nthat's not an improvement. If anything it's worse, because there's\nmore communication overhead and you don't gain anything vs. 
just\nassigning each chunk to a worker straight up.\n\nIn a real example we don't know that we have a single expensive chunk\n-- each chunk just has its own cost, and they could all be the same,\nor some could be much more expensive. When we have a lot of work left,\nwe can be fairly cavalier in handing out larger chunks of it with the\nfull confidence that even if some of those chunks turn out to be way\nmore expensive than others, we'll still be able to equalize the\nfinishing times by our choice of how to distribute the remaining work.\nBut as there's less and less work left, I think you need to hand out\nthe work in smaller increments to maximize the chances of obtaining an\nequal work distribution.\n\nSo maybe one idea is to base the chunk size on the number of blocks\nremaining to be scanned. Say that the chunk size is limited to 1/512\nof the *remaining* blocks in the relation, probably with some upper\nlimit. I doubt going beyond 1GB makes any sense just because that's\nhow large the files are, for example, and that might be too big for\nother reasons. But let's just use that as an example. Say you have a\n20TB relation. You hand out 1GB segments until you get down to 512GB\nremaining. Then you hand out 512MB segments until you get down to\n256GB remaining, and then 256MB segments until you get down to 128GB\nremaining, and so on. Once you get down to the last 4MB you're handing\nout individual blocks, just as you would do from the beginning if the\nwhole relation size was 4MB.\n\nThis kind of thing is a bit overcomplicated and doesn't really help if\nthe first 1GB you hand out at the very beginning turns out to be the\n1GB chunk of death, and it takes a bazillion times longer than\nanything else, and it's just going to be the last worker to finish no\nmatter what you do about anything else. The increasing granularity\nnear the end is just fighting over scraps in that case. 
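To make the 1/512-of-remaining rule concrete, here is a quick simulation (purely illustrative — the helper name is mine, and the 1GB cap is expressed as 131072 pages of 8kB):

```python
def next_chunk(remaining_blocks, divisor=512, cap_blocks=131072):
    # Hand out at most 1/divisor of the remaining blocks, capped at
    # 131072 pages (1GB of 8kB blocks), but never less than one block.
    return max(min(remaining_blocks // divisor, cap_blocks), 1)

# Simulate handing out the 8kB pages of a 20TB relation.
sizes, remaining = [], 20 * 2**40 // 8192
while remaining > 0:
    chunk = next_chunk(remaining)
    sizes.append(chunk)
    remaining -= chunk

print(sizes[0] * 8192 // 2**20)  # first chunks are 1024MB each
print(sizes[-1])                 # the scan finishes handing out single blocks
```

The chunk sizes stay at 1GB until less than 512GB remains, then fall off geometrically and end with single-block hand-outs, which is the ramp-down behaviour described above.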
The only thing\nyou can do to avoid this kind of problem is use a lower maximum chunk\nsize from the beginning, and I think we might want to consider doing\nthat, because I suspect that the incremental benefits from 64MB chunks\nto 1GB chunks are pretty small, for example.\n\nBut, in more normal cases where you have some somewhat-expensive\nchunks mixed in with the regular-price chunks, I think this sort of\nthing should work pretty well. If you never give out more that 1/512\nof the remaining blocks, then you can still achieve an equal work\ndistribution as long as you don't hit a chunk whose cost relative to\nothers is more than 512/(# of processes you have - 1). So for example\nwith 6 processes, you need a single chunk that's more than 100x as\nexpensive as the others to break it. That can definitely happen,\nbecause we can construct arbitrarily bad cases for this sort of thing,\nbut hopefully they wouldn't come up all that frequently...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Jun 2020 11:20:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, Jun 16, 2020 at 6:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I agree that won't be a common scenario but apart from that also I am\n> not sure if we can conclude that the proposed patch won't cause any\n> regressions. See one of the tests [1] done by Soumyadeep where the\n> patch has caused regression in one of the cases, now we can either try\n> to improve the patch and see we didn't cause any regressions or assume\n> that those are some minority cases which we don't care. 
Another point\n> is that this thread has started with a theory that this idea can give\n> benefits on certain filesystems and AFAICS we have tested it on one\n> other type of system, so not sure if that is sufficient.\n\nYeah, it seems like those cases might need some more investigation,\nbut they're also not necessarily an argument for a configuration\nsetting. It's not so much that I dislike the idea of being able to\nconfigure something here; it's really that I don't want a reloption\nthat feels like magic. For example, we know that work_mem can be\nreally hard to configure because there may be no value that's high\nenough to make your queries run fast during normal periods but low\nenough to avoid running out of memory during busy periods. That kind\nof thing sucks, and we should avoid creating more such cases.\n\nOne problem here is that the best value might depend not only on the\nrelation but on the individual query. A GUC could be changed\nper-query, but different tables in the query might need different\nvalues. Changing a reloption requires locking, and you wouldn't want\nto have to keep changing it for each different query. Now if we figure\nout that something is hardware-dependent -- like we come up with a\ngood formula that adjusts the value automatically most of the time,\nbut say it needs to be more on SSDs than on spinning disks or the\nother way around, well then that's a good candidate for some kind of\nsetting, maybe a tablespace option. 
But if it seems to depend on the\nquery, we need a better idea, not a user-configurable setting.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Jun 2020 13:14:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 17 Jun 2020 at 03:20, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 15, 2020 at 5:09 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > * Perhaps when there are less than 2 full chunks remaining, workers\n> > can just take half of what is left. Or more specifically\n> > Max(pg_next_power2(remaining_blocks) / 2, 1), which ideally would work\n> > out allocating an amount of pages proportional to the amount of beer\n> > each mathematician receives in the \"An infinite number of\n> > mathematicians walk into a bar\" joke, obviously with the exception\n> > that we stop dividing when we get to 1. However, I'm not quite sure\n> > how well that can be made to work with multiple bartenders working in\n> > parallel.\n>\n> That doesn't sound nearly aggressive enough to me. I mean, let's\n> suppose that we're concerned about the scenario where one chunk takes\n> 50x as long as all the other chunks. Well, if we have 1024 chunks\n> total, and we hit the problem chunk near the beginning, there will be\n> no problem. In effect, there are 1073 units of work instead of 1024,\n> and we accidentally assigned one guy 50 units of work when we thought\n> we were assigning 1 unit of work. If there's enough work left that we\n> can assign each other worker 49 units more than what we would have\n> done had that chunk been the same cost as all the others, then there's\n> no problem. So for instance if there are 4 workers, we can still even\n> things out if we hit the problematic chunk more than ~150 chunks from\n> the end. 
If we're closer to the end than that, there's no way to avoid\n> the slow chunk delaying the overall completion time, and the problem\n> gets worse as the problem chunk gets closer to the end.\n\nI've got something like that in the attached. Currently, I've set the\nnumber of chunks to 2048 and I'm starting the ramp down when 64 chunks\nremain, which means we'll start the ramp-down when there's about 3.1%\nof the scan remaining. I didn't see the point of going with the larger\nnumber of chunks and having ramp-down code.\n\nAttached is the patch and an .sql file with a function which can be\nused to demonstrate what chunk sizes the patch will choose and demo\nthe ramp-down.\n\ne.g.\n# select show_parallel_scan_chunks(1000000, 2048, 64);\n\nIt would be really good if people could test this using the test case\nmentioned in [1]. We really need to get a good idea of how this\nbehaves on various operating systems.\n\nWith a 32TB relation, the code will make the chunk size 16GB. Perhaps\nI should change the code to cap that at 1GB.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrfJfYH51_WY-iQqPw8yGR4fDoTxAQKqn%2BSa7NTKEVWtg%40mail.gmail.com", "msg_date": "Thu, 18 Jun 2020 22:15:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 18, 2020 at 6:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> With a 32TB relation, the code will make the chunk size 16GB. 
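As a sanity check on that 16GB figure, the sizing amounts to roughly the following sketch (assuming the standard 8kB BLCKSZ; the function is my own illustration, not code lifted from the patch):

```python
def chunk_size(rel_pages, target_chunks=2048):
    # Divide the relation into ~target_chunks pieces, rounding the
    # per-chunk page count up to the next power of 2.
    size = 1
    while size * target_chunks < rel_pages:
        size *= 2
    return size

pages_32tb = 32 * 2**40 // 8192                # 4294967296 pages of 8kB
print(chunk_size(pages_32tb) * 8192 // 2**30)  # 16 (GB per chunk)
```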
Perhaps\n> I should change the code to cap that at 1GB.\n\nIt seems pretty hard to believe there's any significant advantage to a\nchunk size >1GB, so I would be in favor of that change.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Jun 2020 11:26:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, 19 Jun 2020 at 03:26, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 6:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > With a 32TB relation, the code will make the chunk size 16GB. Perhaps\n> > I should change the code to cap that at 1GB.\n>\n> It seems pretty hard to believe there's any significant advantage to a\n> chunk size >1GB, so I would be in favor of that change.\n\nI could certainly make that change. With the standard page size, 1GB\nis 131072 pages and a power of 2. That would change for non-standard\npage sizes, so we'd need to decide if we want to keep the chunk size a\npower of 2, or just cap it exactly at whatever number of pages 1GB is.\n\nI'm not sure how much of a difference it'll make, but I also just want\nto note that synchronous scans can mean we'll start the scan anywhere\nwithin the table, so capping to 1GB does not mean we read an entire\nextent. It's more likely to span 2 extents.\n\nDavid\n\n\n", "msg_date": "Fri, 19 Jun 2020 11:34:15 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, 19 Jun 2020 at 11:34, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 19 Jun 2020 at 03:26, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Jun 18, 2020 at 6:15 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > With a 32TB relation, the code will make the chunk size 16GB. 
Perhaps\n> > > I should change the code to cap that at 1GB.\n> >\n> > It seems pretty hard to believe there's any significant advantage to a\n> > chunk size >1GB, so I would be in favor of that change.\n>\n> I could certainly make that change. With the standard page size, 1GB\n> is 131072 pages and a power of 2. That would change for non-standard\n> page sizes, so we'd need to decide if we want to keep the chunk size a\n> power of 2, or just cap it exactly at whatever number of pages 1GB is.\n>\n> I'm not sure how much of a difference it'll make, but I also just want\n> to note that synchronous scans can mean we'll start the scan anywhere\n> within the table, so capping to 1GB does not mean we read an entire\n> extent. It's more likely to span 2 extents.\n\nHere's a patch which caps the maximum chunk size to 131072. If\nsomeone doubles the page size then that'll be 2GB instead of 1GB. I'm\nnot personally worried about that.\n\nI tested the performance on a Windows 10 laptop using the test case from [1]\n\nMaster:\n\nworkers=0: Time: 141175.935 ms (02:21.176)\nworkers=1: Time: 316854.538 ms (05:16.855)\nworkers=2: Time: 323471.791 ms (05:23.472)\nworkers=3: Time: 321637.945 ms (05:21.638)\nworkers=4: Time: 308689.599 ms (05:08.690)\nworkers=5: Time: 289014.709 ms (04:49.015)\nworkers=6: Time: 267785.270 ms (04:27.785)\nworkers=7: Time: 248735.817 ms (04:08.736)\n\nPatched:\n\nworkers=0: Time: 155985.204 ms (02:35.985)\nworkers=1: Time: 112238.741 ms (01:52.239)\nworkers=2: Time: 105861.813 ms (01:45.862)\nworkers=3: Time: 91874.311 ms (01:31.874)\nworkers=4: Time: 92538.646 ms (01:32.539)\nworkers=5: Time: 93012.902 ms (01:33.013)\nworkers=6: Time: 94269.076 ms (01:34.269)\nworkers=7: Time: 90858.458 ms (01:30.858)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrfJfYH51_WY-iQqPw8yGR4fDoTxAQKqn%2BSa7NTKEVWtg%40mail.gmail.com", "msg_date": "Fri, 19 Jun 2020 14:10:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, 
"msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Thu, Jun 18, 2020 at 10:10 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Here's a patch which caps the maximum chunk size to 131072. If\n> someone doubles the page size then that'll be 2GB instead of 1GB. I'm\n> not personally worried about that.\n\nMaybe use RELSEG_SIZE?\n\n> I tested the performance on a Windows 10 laptop using the test case from [1]\n>\n> Master:\n>\n> workers=0: Time: 141175.935 ms (02:21.176)\n> workers=1: Time: 316854.538 ms (05:16.855)\n> workers=2: Time: 323471.791 ms (05:23.472)\n> workers=3: Time: 321637.945 ms (05:21.638)\n> workers=4: Time: 308689.599 ms (05:08.690)\n> workers=5: Time: 289014.709 ms (04:49.015)\n> workers=6: Time: 267785.270 ms (04:27.785)\n> workers=7: Time: 248735.817 ms (04:08.736)\n>\n> Patched:\n>\n> workers=0: Time: 155985.204 ms (02:35.985)\n> workers=1: Time: 112238.741 ms (01:52.239)\n> workers=2: Time: 105861.813 ms (01:45.862)\n> workers=3: Time: 91874.311 ms (01:31.874)\n> workers=4: Time: 92538.646 ms (01:32.539)\n> workers=5: Time: 93012.902 ms (01:33.013)\n> workers=6: Time: 94269.076 ms (01:34.269)\n> workers=7: Time: 90858.458 ms (01:30.858)\n\nNice results. I wonder if these stack with the gains Thomas was\ndiscussing with his DSM-from-the-main-shmem-segment patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Jun 2020 16:00:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sat, 20 Jun 2020 at 08:00, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 10:10 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Here's a patch which caps the maximum chunk size to 131072. If\n> > someone doubles the page size then that'll be 2GB instead of 1GB. 
I'm\n> > not personally worried about that.\n>\n> Maybe use RELSEG_SIZE?\n\nI was hoping to keep the guarantees that the chunk size is always a\npower of 2. If, for example, someone configured PostgreSQL\n--with-segsize=3, then RELSEG_SIZE would be 393216 with the standard\nBLCKSZ.\n\nNot having it a power of 2 does mean the ramp-down is more uneven when\nthe sizes become very small:\n\npostgres=# select 393216>>x from generate_Series(0,18)x;\n ?column?\n----------\n 393216\n 196608\n 98304\n 49152\n 24576\n 12288\n 6144\n 3072\n 1536\n 768\n 384\n 192\n 96\n 48\n 24\n 12\n 6\n 3\n 1\n(19 rows)\n\nPerhaps that's not a problem though, but then again, perhaps just\nkeeping it at 131072 regardless of RELSEG_SIZE and BLCKSZ is also ok.\nThe benchmarks I did on Windows [1] showed that the returns diminished\nonce we started making the step size some decent amount, so my thoughts\nare that I've set PARALLEL_SEQSCAN_MAX_CHUNK_SIZE to something large\nenough that it'll make no difference to the performance anyway. So\nthere's probably not much point in giving it too much thought.\n\nPerhaps pg_nextpower2_32(RELSEG_SIZE) would be okay though.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvopPkA+q5y_k_6CUV4U6DPhmz771VeUMuzLs3D3mWYMOg@mail.gmail.com\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:52:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, 19 Jun 2020 at 14:10, David Rowley <dgrowleyml@gmail.com> wrote:\n> Here's a patch which caps the maximum chunk size to 131072. If\n> someone doubles the page size then that'll be 2GB instead of 1GB. I'm\n> not personally worried about that.\n>\n> I tested the performance on a Windows 10 laptop using the test case from [1]\n\nI also tested this on an AMD machine running Ubuntu 20.04 on kernel\nversion 5.4.0-37. 
I used the same 100GB table I mentioned in [1], but\nwith the query \"select * from t where a < 0;\", which saves having to\ndo any aggregate work.\n\nThere seems to be quite a big win with Linux too. See the attached\ngraphs. Both graphs are based on the same results, just the MB/sec\none takes the query time in milliseconds and converts that into MB/sec\nfor the 100 GB table. i.e. 100*1024/(<milliseconds> /1000)\n\nThe machine is a 64core / 128 thread AMD machine (3990x) with a 1TB\nSamsung 970 Pro evo plus SSD, 64GB RAM\n\n> [1] https://www.postgresql.org/message-id/CAApHDvrfJfYH51_WY-iQqPw8yGR4fDoTxAQKqn%2BSa7NTKEVWtg%40mail.gmail.com", "msg_date": "Mon, 22 Jun 2020 16:54:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Mon, 22 Jun 2020 at 16:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> I also tested this on an AMD machine running Ubuntu 20.04 on kernel\n> version 5.4.0-37. I used the same 100GB table I mentioned in [1], but\n> with the query \"select * from t where a < 0;\", which saves having to\n> do any aggregate work.\n\nI just wanted to add a note here that Thomas and I just discussed this\na bit offline. He recommended I try setting the kernel readahead a bit\nhigher.\n\nIt was set to 128kB, so I cranked it up to 2MB with:\n\nsudo blockdev --setra 4096 /dev/nvme0n1p2\n\nI didn't want to run the full test again as it took quite a long time,\nso I just tried with 32 workers.\n\nThe first two results here are taken from the test results I just\nposted 1 hour ago.\n\nMaster readahead=128kB = 89921.283 ms\nv2 patch readahead=128kB = 36085.642 ms\nmaster readahead=2MB = 60984.905 ms\nv2 patch readahead=2MB = 22611.264 ms\n\nThere must be a fairly large element of reading from the page cache\nthere since 22.6 seconds means 4528MB/sec over the 100GB table. 
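As a quick check of that throughput arithmetic, using the conversion formula mentioned earlier (the helper is purely illustrative):

```python
def mb_per_sec(table_gb, elapsed_ms):
    # Convert a table size in GB and an elapsed time in milliseconds
    # into an effective read rate in MB/sec.
    return table_gb * 1024 / (elapsed_ms / 1000)

print(mb_per_sec(100, 22611.264))  # ~4528 MB/sec, the figure quoted above
```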
The\nmaximum for a PCIe 3.0 x4 slot is 3940MB/s\n\nDavid\n\n\n", "msg_date": "Mon, 22 Jun 2020 17:52:48 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Sun, Jun 21, 2020 at 6:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Perhaps that's not a problem though, but then again, perhaps just\n> keeping it at 131072 regardless of RELSEG_SIZE and BLCKSZ is also ok.\n> The benchmarks I did on Windows [1] showed that the returns diminished\n> once we started making the step size some decent amount so my thoughts\n> are that I've set PARALLEL_SEQSCAN_MAX_CHUNK_SIZE to something large\n> enough that it'll make no difference to the performance anyway. So\n> there's probably not much point in giving it too much thought.\n>\n> Perhaps pg_nextpower2_32(RELSEG_SIZE) would be okay though.\n\nI guess I don't care that much; it was just a thought. Maybe tying it\nto RELSEG_SIZE is a bad idea anyway. After all, what if we find cases\nwhere 1GB is too much? Like, how much benefit do we get from making it\n1GB rather than 64MB, say? I don't think we should be making this\nvalue big just because we can.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jun 2020 10:20:02 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em seg., 22 de jun. de 2020 às 02:53, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 22 Jun 2020 at 16:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I also tested this an AMD machine running Ubuntu 20.04 on kernel\n> > version 5.4.0-37. 
I used the same 100GB table I mentioned in [1], but\n> > with the query \"select * from t where a < 0;\", which saves having to\n> > do any aggregate work.\n>\n> I just wanted to add a note here that Thomas and I just discussed this\n> a bit offline. He recommended I try setting the kernel readhead a bit\n> higher.\n>\n> It was set to 128kB, so I cranked it up to 2MB with:\n>\n> sudo blockdev --setra 4096 /dev/nvme0n1p2\n>\n> I didn't want to run the full test again as it took quite a long time,\n> so I just tried with 32 workers.\n>\n> The first two results here are taken from the test results I just\n> posted 1 hour ago.\n>\n> Master readhead=128kB = 89921.283 ms\n> v2 patch readhead=128kB = 36085.642 ms\n> master readhead=2MB = 60984.905 ms\n> v2 patch readhead=2MB = 22611.264 ms\n>\n\nHi, redoing the tests with v2 here.\nnotebook with i5, 8GB, 256 GB (SSD)\nWindows 10 64 bits (2004\nmsvc 2019 64 bits\nPostgresql head (with v2 patch)\nConfiguration: none\nConnection local ipv4 (not localhost)\n\ncreate table t (a int, b text);\ninsert into t select x,md5(x::text) from\ngenerate_series(1,1000000*1572.7381809)x;\nvacuum freeze t;\n\nset max_parallel_workers_per_gather = 0;\nTime: 354211,826 ms (05:54,212)\nset max_parallel_workers_per_gather = 1;\nTime: 332805,773 ms (05:32,806)\nset max_parallel_workers_per_gather = 2;\nTime: 282566,711 ms (04:42,567)\nset max_parallel_workers_per_gather = 3;\nTime: 263383,945 ms (04:23,384)\nset max_parallel_workers_per_gather = 4;\nTime: 255728,259 ms (04:15,728)\nset max_parallel_workers_per_gather = 5;\nTime: 238288,720 ms (03:58,289)\nset max_parallel_workers_per_gather = 6;\nTime: 238647,792 ms (03:58,648)\nset max_parallel_workers_per_gather = 7;\nTime: 231295,763 ms (03:51,296)\nset max_parallel_workers_per_gather = 8;\nTime: 232502,828 ms (03:52,503)\nset max_parallel_workers_per_gather = 9;\nTime: 230970,604 ms (03:50,971)\nset max_parallel_workers_per_gather = 10;\nTime: 232104,182 ms (03:52,104)\n\nset 
max_parallel_workers_per_gather = 8;\npostgres=# explain select count(*) from t;\n                                         QUERY PLAN\n-------------------------------------------------------------------------------------------\n Finalize Aggregate  (cost=15564556.43..15564556.44 rows=1 width=8)\n   ->  Gather  (cost=15564555.60..15564556.41 rows=8 width=8)\n         Workers Planned: 8\n         ->  Partial Aggregate  (cost=15563555.60..15563555.61 rows=1\nwidth=8)\n               ->  Parallel Seq Scan on t  (cost=0.00..15072074.88\nrows=196592288 width=0)\n(5 rows)\n\nQuestions:\n1. Why acquire and release lock in retry: loop.\n\nWouldn't that be better?\n\n /* Grab the spinlock. */\n SpinLockAcquire(&pbscan->phs_mutex);\n\nretry:\n/*\n* If the scan's startblock has not yet been initialized, we must do so\n* now. If this is not a synchronized scan, we just start at block 0, but\n* if it is a synchronized scan, we must get the starting position from\n* the synchronized scan machinery. We can't hold the spinlock while\n* doing that, though, so release the spinlock, get the information we\n* need, and retry. If nobody else has initialized the scan in the\n* meantime, we'll fill in the value we fetched on the second time\n* through.\n*/\nif (pbscan->phs_startblock == InvalidBlockNumber)\n{\nif (!pbscan->base.phs_syncscan)\npbscan->phs_startblock = 0;\nelse if (sync_startpage != InvalidBlockNumber)\npbscan->phs_startblock = sync_startpage;\nelse\n{\nsync_startpage = ss_get_location(rel, pbscan->phs_nblocks);\ngoto retry;\n}\n}\n SpinLockRelease(&pbscan->phs_mutex);\n}\n\nAcquire lock once, before retry?\n\n2. Is there any configuration to improve performance?\n\nregards,\nRanier Vilela", "msg_date": "Mon, 22 Jun 2020 13:45:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Ranier,\n\nThis topic is largely unrelated to the current thread. Also...\n\nOn Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Questions:\n> 1. 
Why acquire and release lock in retry: loop.\n\nThis is a super-bad idea. Note the coding rule mentioned in spin.h.\nThere are many discussions on this mailing list about the importance of\nkeeping the critical section for a spinlock to a few instructions.\nCalling another function that *itself acquires an LWLock* is\ndefinitely not OK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:32:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em seg., 22 de jun. de 2020 às 16:33, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> Ranier,\n>\n> This topic is largely unrelated to the current thread. Also...\n>\nWell, I was trying to improve the patch for the current thread.\nOr perhaps, you are referring to something else, which I may not have\nunderstood.\n\n\n>\n> On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > Questions:\n> > 1. Why acquire and release lock in retry: loop.\n>\n> This is a super-bad idea. 
Note the coding rule mentioned in spin.h.\n> There are many discussion on this mailing list about the importance of\n> keeping the critical section for a spinlock to a few instructions.\n> Calling another function that *itself acquires an LWLock* is\n> definitely not OK.\n>\nPerhaps, I was not clear and it is another misunderstanding.\nI am not suggesting a function to acquire the lock.\nBy the way, I did the tests with this change and it worked perfectly.\nBut, as it is someone else's patch, I asked why to learn.\nBy the way, my suggestion is with less instructions than the patch.\nThe only change I asked is why to acquire and release the lock repeatedly\nwithin the goto retry, when you already have it.\nIf I can acquire the lock before retry: and release it only at the end when\nI leave table_block_parallelscan_startblock_init,\nwhy not do it.\nI will attach the suggested excerpt so that I have no doubts about what I\nam asking.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 22 Jun 2020 17:27:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, Jun 19, 2020 at 2:10 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Here's a patch which caps the maximum chunk size to 131072. If\n> someone doubles the page size then that'll be 2GB instead of 1GB. I'm\n> not personally worried about that.\n\nI wonder how this interacts with the sync scan feature. It has a\nconflicting goal...\n\n\n", "msg_date": "Tue, 23 Jun 2020 09:52:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, 23 Jun 2020 at 09:52, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jun 19, 2020 at 2:10 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Here's a patch which caps the maximum chunk size to 131072. 
If\n> > someone doubles the page size then that'll be 2GB instead of 1GB. I'm\n> > not personally worried about that.\n>\n> I wonder how this interacts with the sync scan feature. It has a\n> conflicting goal...\n\nOf course, syncscan relies on subsequent scanners finding buffers\ncached, either in (ideally) shared buffers or the kernel cache. The\nscans need to be roughly synchronised for that to work. If we go and\nmake the chunk size too big, then that'll reduce the chances useful\nbuffers being found by subsequent scans. It sounds like a good reason\nto try and find the smallest chunk size that allows readahead to work\nwell. The benchmarks I did on Windows certainly show that there are\ndiminishing returns when the chunk size gets larger, so capping it at\nsome number of megabytes would probably be a good idea. It would just\ntake a bit of work to figure out how many megabytes that should be.\nLikely it's going to depend on the size of shared buffers and how much\nmemory the machine has got, but also what other work is going on that\nmight be evicting buffers at the same time. Perhaps something in the\nrange of 2-16MB would be ok. I can do some tests with that and see if\nI can get the same performance as with the larger chunks.\n\nDavid\n\n\n", "msg_date": "Tue, 23 Jun 2020 10:50:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, 23 Jun 2020 at 07:33, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Questions:\n> > 1. Why acquire and release lock in retry: loop.\n>\n> This is a super-bad idea. 
Note the coding rule mentioned in spin.h.\n> There are many discussion on this mailing list about the importance of\n> keeping the critical section for a spinlock to a few instructions.\n> Calling another function that *itself acquires an LWLock* is\n> definitely not OK.\n\nJust a short history lesson for Ranier to help clear up any confusion:\n\nBack before 3cda10f41 there was some merit in improving the\nperformance of these functions. Before that, we used to dish out pages\nunder a lock. With that old method, if given enough workers and a\nsimple enough query, we could start to see workers waiting on the lock\njust to obtain the next block number they're to work on. After the\natomics were added in that commit, we didn't really see that again.\n\nWhat we're trying to fix here is the I/O pattern that these functions\ninduce and that's all we should be doing here. Changing this is\ntricky to get right as we need to consider so many operating systems\nand how they deal with I/O readahead.\n\nDavid\n\n\n", "msg_date": "Tue, 23 Jun 2020 14:29:12 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Em seg., 22 de jun. de 2020 às 23:29, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 23 Jun 2020 at 07:33, Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Jun 22, 2020 at 12:47 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> > > Questions:\n> > > 1. Why acquire and release lock in retry: loop.\n> >\n> > This is a super-bad idea. 
Note the coding rule mentioned in spin.h.\n> > There are many discussion on this mailing list about the importance of\n> > keeping the critical section for a spinlock to a few instructions.\n> > Calling another function that *itself acquires an LWLock* is\n> > definitely not OK.\n>\n> Just a short history lesson for Ranier to help clear up any confusion:\n>\n> Back before 3cda10f41 there was some merit in improving the\n> performance of these functions. Before that, we used to dish out pages\n> under a lock. With that old method, if given enough workers and a\n> simple enough query, we could start to see workers waiting on the lock\n> just to obtain the next block number they're to work on. After the\n> atomics were added in that commit, we didn't really see that again.\n>\nIt is a good explanation. I already imagined it could be to help other\nprocesses, but I still wasn't sure.\nHowever, I did a test with this modification (lock before retry), and it\nworked.\n\n\n>\n> What we're trying to fix here is the I/O pattern that these functions\n> induce and that's all we should be doing here. Changing this is\n> tricky to get right as we need to consider so many operating systems\n> and how they deal with I/O readahead.\n>\nYes, I understand that focus here is I/O.\n\nSorry, by the noise.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 23 Jun 2020 08:05:03 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, 23 Jun 2020 at 10:50, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 23 Jun 2020 at 09:52, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Fri, Jun 19, 2020 at 2:10 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > Here's a patch which caps the maximum chunk size to 131072. If\n> > > someone doubles the page size then that'll be 2GB instead of 1GB. I'm\n> > > not personally worried about that.\n> >\n> > I wonder how this interacts with the sync scan feature. 
It has a\n> > conflicting goal...\n>\n> Of course, syncscan relies on subsequent scanners finding buffers\n> cached, either in (ideally) shared buffers or the kernel cache. The\n> scans need to be roughly synchronised for that to work. If we go and\n> make the chunk size too big, then that'll reduce the chances useful\n> buffers being found by subsequent scans. It sounds like a good reason\n> to try and find the smallest chunk size that allows readahead to work\n> well. The benchmarks I did on Windows certainly show that there are\n> diminishing returns when the chunk size gets larger, so capping it at\n> some number of megabytes would probably be a good idea. It would just\n> take a bit of work to figure out how many megabytes that should be.\n> Likely it's going to depend on the size of shared buffers and how much\n> memory the machine has got, but also what other work is going on that\n> might be evicting buffers at the same time. Perhaps something in the\n> range of 2-16MB would be ok. I can do some tests with that and see if\n> I can get the same performance as with the larger chunks.\n\nI did some further benchmarking on both Windows 10 and on Linux with\nthe 5.4.0-37 kernel running on Ubuntu 20.04. I started by reducing\nPARALLEL_SEQSCAN_MAX_CHUNK_SIZE down to 256 and ran the test multiple\ntimes, each time doubling the PARALLEL_SEQSCAN_MAX_CHUNK_SIZE. On the\nLinux test, I used the standard kernel readahead of 128kB. Thomas and I\ndiscovered earlier that increasing that increases the throughput all\nround.\n\nThese tests were done with the PARALLEL_SEQSCAN_NCHUNKS as 2048, which\nmeans with the 100GB table I used for testing, the uncapped chunk size\nof 8192 blocks would be selected (aka 16MB). The performance is quite\na bit slower when the chunk size is capped to 256 blocks and it does\nincrease again with larger maximum chunk sizes, but the returns do get\nsmaller and smaller with each doubling of\nPARALLEL_SEQSCAN_MAX_CHUNK_SIZE. 
Uncapped, or 8192 did give the best\nperformance on both Windows and Linux. I didn't test with anything\nhigher than that.\n\nSo, based on these results, it seems 16MBs is not a bad value to cap\nthe chunk size at. If that turns out to be true for other tests too,\nthen likely 16MB is not too unrealistic a value to cap the size of the\nblock chunks to.\n\nPlease see the attached v2_on_linux.png and v2_on_windows.png for the\nresults of that.\n\nI also performed another test to see how the performance looks with\nboth synchronize_seqscans on and off. To test this I decided that a\n100GB table on a 64GB RAM machine was just not large enough, so I\nincreased the table size to 800GB. I set parallel_workers for the\nrelation to 10 and ran:\n\ndrowley@amd3990x:~$ cat bench.sql\nselect * from t where a < 0;\npgbench -n -f bench.sql -T 10800 -P 600 -c 6 -j 6 postgres\n\n(This query returns 0 rows).\n\nSo each query had 11 backends (including the main process) and there\nwere 6 of those running concurrently. i.e. 66 backends busy working on\nthe problem in total.\n\nThe results of that were:\n\nAuto chunk size selection without any cap (for an 800GB table that's\n65536 blocks)\n\nsynchronize_seqscans = on: latency average = 372738.134 ms (2197.7 MB/s) <-- bad\nsynchronize_seqscans = off: latency average = 320204.028 ms (2558.3 MB/s)\n\nSo here it seems that synchronize_seqscans = on slows things down.\n\nTrying again after capping the number of blocks per chunk to 8192:\n\nsynchronize_seqscans = on: latency average = 321969.172 ms (2544.3 MB/s)\nsynchronize_seqscans = off: latency average = 321389.523 ms (2548.9 MB/s)\n\nSo the performance there is about the same.\n\nI was surprised to see that synchronize_seqscans = off didn't slow\ndown the performance by about 6x. 
So I tested to see what master does,\nand:\n\nsynchronize_seqscans = on: latency average = 1070226.162 ms (765.4MB/s)\nsynchronize_seqscans = off: latency average = 1085846.859 ms (754.4MB/s)\n\nIt does pretty poorly in both cases.\n\nThe full results of that test are in the attached\n800gb_table_synchronize_seqscans_test.txt file.\n\nIn summary, based on these tests, I don't think we're making anything\nworse in regards to synchronize_seqscans if we cap the maximum number\nof blocks to allocate to each worker at once to 8192. Perhaps there's\nsome argument for using something smaller than that for servers with\nvery little RAM, but I don't personally think so as it still depends\non the table size and It's hard to imagine tables in the hundreds of\nGBs on servers that struggle with chunk allocations of 16MB. The\ntable needs to be at least ~70GB to get a 8192 chunk size with the\ncurrent v2 patch settings.\n\nDavid", "msg_date": "Wed, 24 Jun 2020 15:52:46 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, Jun 23, 2020 at 11:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> In summary, based on these tests, I don't think we're making anything\n> worse in regards to synchronize_seqscans if we cap the maximum number\n> of blocks to allocate to each worker at once to 8192. Perhaps there's\n> some argument for using something smaller than that for servers with\n> very little RAM, but I don't personally think so as it still depends\n> on the table size and It's hard to imagine tables in the hundreds of\n> GBs on servers that struggle with chunk allocations of 16MB. The\n> table needs to be at least ~70GB to get a 8192 chunk size with the\n> current v2 patch settings.\n\nNice research. That makes me happy. 
I had a feeling the maximum useful\nchunk size ought to be more in this range than the larger values we\nwere discussing before, but I didn't even think about the effect on\nsynchronized scans.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 11:33:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, Jun 26, 2020 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jun 23, 2020 at 11:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > In summary, based on these tests, I don't think we're making anything\n> > worse in regards to synchronize_seqscans if we cap the maximum number\n> > of blocks to allocate to each worker at once to 8192. Perhaps there's\n> > some argument for using something smaller than that for servers with\n> > very little RAM, but I don't personally think so as it still depends\n> > on the table size and It's hard to imagine tables in the hundreds of\n> > GBs on servers that struggle with chunk allocations of 16MB. The\n> > table needs to be at least ~70GB to get a 8192 chunk size with the\n> > current v2 patch settings.\n>\n> Nice research. That makes me happy. I had a feeling the maximum useful\n> chunk size ought to be more in this range than the larger values we\n> were discussing before, but I didn't even think about the effect on\n> synchronized scans.\n\n+1. This seems about right to me. 
We can always reopen the\ndiscussion if someone shows up with evidence in favour of a tweak to\nthe formula, but this seems to address the basic problem pretty well,\nand also fits nicely with future plans for AIO and DIO.\n\n\n", "msg_date": "Tue, 14 Jul 2020 19:13:13 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, 14 Jul 2020 at 19:13, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jun 26, 2020 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Tue, Jun 23, 2020 at 11:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > In summary, based on these tests, I don't think we're making anything\n> > > worse in regards to synchronize_seqscans if we cap the maximum number\n> > > of blocks to allocate to each worker at once to 8192. Perhaps there's\n> > > some argument for using something smaller than that for servers with\n> > > very little RAM, but I don't personally think so as it still depends\n> > > on the table size and It's hard to imagine tables in the hundreds of\n> > > GBs on servers that struggle with chunk allocations of 16MB. The\n> > > table needs to be at least ~70GB to get a 8192 chunk size with the\n> > > current v2 patch settings.\n> >\n> > Nice research. That makes me happy. I had a feeling the maximum useful\n> > chunk size ought to be more in this range than the larger values we\n> > were discussing before, but I didn't even think about the effect on\n> > synchronized scans.\n>\n> +1. This seems about right to me. We can always reopen the\n> discussion if someone shows up with evidence in favour of a tweak to\n> the formula, but this seems to address the basic problem pretty well,\n> and also fits nicely with future plans for AIO and DIO.\n\nThank you both of you for having a look at the results.\n\nI'm now pretty happy with this too. 
I do understand that we've not\nexactly exhaustively tested all our supported operating systems.\nHowever, we've seen some great speedups with Windows 10 and Linux with\nSSDs. Thomas saw great speedups with FreeBSD with the original patch\nusing chunk sizes of 64 blocks. (I wonder if it's worth verifying that\nit increases further with the latest patch with the same test you did\nin the original email on this thread?)\n\nI'd like to propose that if anyone wants to do further testing on\nother operating systems with SSDs or HDDs then it would be good if\nthat could be done within a 1 week from this email. There are various\nbenchmarking ideas on this thread for inspiration.\n\nIf we've not seen any performance regressions within 1 week, then I\npropose that we (pending final review) push this to allow wider\ntesting. It seems we're early enough in the PG14 cycle that there's a\nlarge window of time for us to do something about any reported\nperformance regressions that come in.\n\nI also have in mind that Amit was keen to see a GUC or reloption to\nallow users to control this. My thoughts on that are still that it\nwould be possible to craft a case where we scan an entire heap to get\na very small number of rows that are all located in the same area in\nthe table and then call some expensive function on those rows. The\nchunk size ramp down code will help reduce the chances of one worker\nrunning on much longer than its co-workers, but not eliminate the\nchances. Even the code as it stands today could suffer from this to a\nlesser extent if all the matching rows are on a single page. My\ncurrent thoughts are that this just seems unlikely and that the\ngranularity of 1 block for cases like this was never that great\nanyway. I suppose a more ideal plan shape would \"Distribute\" matching\nrows to allow another set of workers to pick these rows up one-by-one\nand process them. 
Our to-date lack of such an operator probably counts\na little towards the fact that one parallel worker being tied up with\na large amount of work is not that common. Based on those thoughts,\nI'd like to avoid any GUC/reloption until we see evidence that it's\nreally needed.\n\nAny objections to any of the above?\n\nDavid\n\n\n", "msg_date": "Wed, 15 Jul 2020 12:24:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jul 15, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 14 Jul 2020 at 19:13, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Fri, Jun 26, 2020 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Tue, Jun 23, 2020 at 11:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > > In summary, based on these tests, I don't think we're making anything\n> > > > worse in regards to synchronize_seqscans if we cap the maximum number\n> > > > of blocks to allocate to each worker at once to 8192. Perhaps there's\n> > > > some argument for using something smaller than that for servers with\n> > > > very little RAM, but I don't personally think so as it still depends\n> > > > on the table size and It's hard to imagine tables in the hundreds of\n> > > > GBs on servers that struggle with chunk allocations of 16MB. The\n> > > > table needs to be at least ~70GB to get a 8192 chunk size with the\n> > > > current v2 patch settings.\n> > >\n> > > Nice research. That makes me happy. I had a feeling the maximum useful\n> > > chunk size ought to be more in this range than the larger values we\n> > > were discussing before, but I didn't even think about the effect on\n> > > synchronized scans.\n> >\n> > +1. This seems about right to me. 
We can always reopen the\n> > discussion if someone shows up with evidence in favour of a tweak to\n> > the formula, but this seems to address the basic problem pretty well,\n> > and also fits nicely with future plans for AIO and DIO.\n>\n> Thank you both of you for having a look at the results.\n>\n> I'm now pretty happy with this too. I do understand that we've not\n> exactly exhaustively tested all our supported operating systems.\n> However, we've seen some great speedups with Windows 10 and Linux with\n> SSDs. Thomas saw great speedups with FreeBSD with the original patch\n> using chunk sizes of 64 blocks. (I wonder if it's worth verifying that\n> it increases further with the latest patch with the same test you did\n> in the original email on this thread?)\n>\n> I'd like to propose that if anyone wants to do further testing on\n> other operating systems with SSDs or HDDs then it would be good if\n> that could be done within a 1 week from this email. There are various\n> benchmarking ideas on this thread for inspiration.\n>\n\nYeah, I agree it would be good if we could do what you said.\n\n> If we've not seen any performance regressions within 1 week, then I\n> propose that we (pending final review) push this to allow wider\n> testing.\n\nI think Soumyadeep has reported a regression case [1] with the earlier\nversion of the patch. I am not sure if we have verified that the\nsituation improves with the latest version of the patch. 
I request\nSoumyadeep to please try once with the latest patch.\n\n[1] - https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB%3DxyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA%40mail.gmail.com\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 08:21:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 15 Jul 2020 at 14:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 15, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > If we've not seen any performance regressions within 1 week, then I\n> > propose that we (pending final review) push this to allow wider\n> > testing.\n>\n> I think Soumyadeep has reported a regression case [1] with the earlier\n> version of the patch. I am not sure if we have verified that the\n> situation improves with the latest version of the patch. I request\n> Soumyadeep to please try once with the latest patch.\n\nYeah, it would be good to see some more data points on that test.\nJumping from 2 up to 6 workers just leaves us to guess where the\nperformance started to become bad. It would be good to know if the\nregression is repeatable or if it was affected by some other process.\n\nI see the disk type on that report was Google PersistentDisk. I don't\npretend to be any sort of expert on network filesystems, but I guess a\nregression would be possible in that test case if say there was an\nadditional layer of caching of very limited size between the kernel\ncache and the disks, maybe on a remote machine. If it were doing some\nsort of prefetching to try to reduce latency and requests to the\nactual disks then perhaps going up to 6 workers with 64 chunk size (as\nThomas' patch used at that time) caused more cache misses on that\ncache due to the requests exceeding what had already been prefetched.\nThat's just a stab in the dark. 
Maybe someone with knowledge of these\nnetwork file systems can come up with a better theory.\n\nIt would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET\ntrack_io_timing = on; for each value of max_parallel_workers.\n\nDavid\n\n> [1] - https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB%3DxyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA%40mail.gmail.com\n\n\n", "msg_date": "Wed, 15 Jul 2020 15:52:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:\r\n\r\n>On Wed, 15 Jul 2020 at 14:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>>\r\n>> On Wed, Jul 15, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\r\n>>> If we've not seen any performance regressions within 1 week, then I \r\n>>> propose that we (pending final review) push this to allow wider \r\n>>> testing.\r\n>>\r\n>> I think Soumyadeep has reported a regression case [1] with the earlier \r\n>> version of the patch. I am not sure if we have verified that the \r\n>> situation improves with the latest version of the patch. I request \r\n>> Soumyadeep to please try once with the latest patch.\r\n>...\r\n>Yeah, it would be good to see some more data points on that test.\r\n>Jumping from 2 up to 6 workers just leaves us to guess where the performance\r\n>started to become bad. >It would be good to know if the regression is\r\n>repeatable or if it was affected by some other process.\r\n>...\r\n>It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET track_io_timing = on;\r\n>for each value of >max_parallel_workers.\r\n\r\nHi,\r\n\r\nIf I'm following the thread correctly, we may have gains on this patch\r\nof Thomas and David, but we need to test its effects on different\r\nfilesystems. 
It's also been clarified by David through benchmark tests\r\nthat synchronize_seqscans is not affected as long as the parallel\r\nscan's chunk size is capped at 8192 blocks.\r\n\r\nI also agree that having a control on this through a GUC can be\r\nbeneficial for users; however, that can be discussed in another\r\nthread or in future development.\r\n\r\nDavid Rowley wrote:\r\n>I'd like to propose that if anyone wants to do further testing on\r\n>other operating systems with SSDs or HDDs then it would be good if\r\n>that could be done within a 1 week from this email. There are various\r\n>benchmarking ideas on this thread for inspiration.\r\n\r\nI'd like to join in testing it, this one using an HDD, and at the bottom\r\nare the results. Due to my machine's limitations, I only tested\r\n0~6 workers; even if I increase max_parallel_workers_per_gather\r\nbeyond that, the query planner still caps the workers at 6.\r\nI also set track_io_timing to on as per David's recommendation.\r\n\r\nTested on:\r\nXFS filesystem, HDD virtual machine\r\nRHEL4, 64-bit,\r\n4 CPUs, Intel Core Processor (Haswell, IBRS)\r\nPostgreSQL 14devel on x86_64-pc-linux-gnu\r\n\r\n\r\n----Test Case (Soumyadeep's) [1]\r\n\r\nshared_buffers = 32MB (to use OS page cache)\r\n\r\ncreate table t_heap as select generate_series(1, 100000000) i; --about 3.4GB size\r\n\r\nSET track_io_timing = on;\r\n\r\n\\timing\r\n\r\nset max_parallel_workers_per_gather = 0; --0 to 6\r\n\r\nSELECT count(*) from t_heap;\r\nEXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\r\n\r\n[Summary]\r\nI used the same query from the thread. 
However, the sql query execution time\r\nand query planner execution time results between the master and patched do\r\nnot vary much.\r\nOTOH, in terms of I/O stats, I observed similar regression in both master\r\nand patched as we increase max_parallel_workers_per_gather.\r\n\r\nIt could also be possible that each benchmark setting for max_parallel_workers_per_gather\r\nis affected by previous result . IOW, later benchmark runs benefit from the data cached by\r\nprevious runs on OS level. \r\nAny advice? Please refer to tables below for results.\r\n\r\n(MASTER/UNPATCHED)\r\n| Parallel Workers | SQLExecTime | PlannerExecTime | Buffers | \r\n|------------------|--------------|------------------|-----------------------------| \r\n| 0 | 12942.606 ms | 37031.786 ms | shared hit=32 read=442446 | \r\n| 1 | 4959.567 ms | 17601.813 ms | shared hit=128 read=442350 | \r\n| 2 | 3273.610 ms | 11766.441 ms | shared hit=288 read=442190 | \r\n| 3 | 2449.342 ms | 9057.236 ms | shared hit=512 read=441966 | \r\n| 4 | 2482.404 ms | 8853.702 ms | shared hit=800 read=441678 | \r\n| 5 | 2430.944 ms | 8777.630 ms | shared hit=1152 read=441326 | \r\n| 6 | 2493.416 ms | 8798.200 ms | shared hit=1568 read=440910 | \r\n\r\n(PATCHED V2)\r\n| Parallel Workers | SQLExecTime | PlannerExecTime | Buffers | \r\n|------------------|-------------|------------------|-----------------------------| \r\n| 0 | 9283.193 ms | 34471.050 ms | shared hit=2624 read=439854 | \r\n| 1 | 4872.728 ms | 17449.725 ms | shared hit=2528 read=439950 | \r\n| 2 | 3240.301 ms | 11556.243 ms | shared hit=2368 read=440110 | \r\n| 3 | 2419.512 ms | 8709.572 ms | shared hit=2144 read=440334 | \r\n| 4 | 2746.820 ms | 8768.812 ms | shared hit=1856 read=440622 | \r\n| 5 | 2424.687 ms | 8699.762 ms | shared hit=1504 read=440974 | \r\n| 6 | 2581.999 ms | 8627.627 ms | shared hit=1440 read=441038 | \r\n\r\n(I/O Read Stat)\r\n| Parallel Workers | I/O (Master) | I/O (Patched) | \r\n|------------------|---------------|---------------| 
\r\n| 0 | read=1850.233 | read=1071.209 | \r\n| 1 | read=1246.939 | read=1115.361 | \r\n| 2 | read=1079.837 | read=1090.425 | \r\n| 3 | read=1342.133 | read=1094.115 | \r\n| 4 | read=1478.821 | read=1355.966 | \r\n| 5 | read=1691.244 | read=1679.446 | \r\n| 6 | read=1952.384 | read=1881.733 | \r\n\r\nI hope this helps in a way.\r\n\r\nRegards,\r\nKirk Jamison\r\n\r\n[1] https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB=xyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA@mail.gmail.com\r\n", "msg_date": "Fri, 17 Jul 2020 06:05:17 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Fri, Jul 17, 2020 at 11:35 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:\n>\n> >On Wed, 15 Jul 2020 at 14:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Wed, Jul 15, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >>> If we've not seen any performance regressions within 1 week, then I\n> >>> propose that we (pending final review) push this to allow wider\n> >>> testing.\n> >>\n> >> I think Soumyadeep has reported a regression case [1] with the earlier\n> >> version of the patch. I am not sure if we have verified that the\n> >> situation improves with the latest version of the patch. I request\n> >> Soumyadeep to please try once with the latest patch.\n> >...\n> >Yeah, it would be good to see some more data points on that test.\n> >Jumping from 2 up to 6 workers just leaves us to guess where the performance\n> >started to become bad. 
>It would be good to know if the regression is\n> >repeatable or if it was affected by some other process.\n> >...\n> >It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET track_io_timing = on;\n> >for each value of >max_parallel_workers.\n>\n> Hi,\n>\n> If I'm following the thread correctly, we may have gains on this patch\n> of Thomas and David, but we need to test its effects on different\n> filesystems. It's also been clarified by David through benchmark tests\n> that synchronize_seqscans is not affected as long as the set cap per\n> chunk size of parallel scan is at 8192.\n>\n> I also agree that having a control on this through GUC can be\n> beneficial for users, however, that can be discussed in another\n> thread or development in the future.\n>\n> David Rowley wrote:\n> >I'd like to propose that if anyone wants to do further testing on\n> >other operating systems with SSDs or HDDs then it would be good if\n> >that could be done within a 1 week from this email. There are various\n> >benchmarking ideas on this thread for inspiration.\n>\n> I'd like to join on testing it, this one using HDD, and at the bottom\n> are the results. 
Due to my machine limitations, I only tested\n> 0~6 workers, that even if I increase max_parallel_workers_per_gather\n> more than that, the query planner would still cap the workers at 6.\n> I also set the track_io_timing to on as per David's recommendation.\n>\n> Tested on:\n> XFS filesystem, HDD virtual machine\n> RHEL4, 64-bit,\n> 4 CPUs, Intel Core Processor (Haswell, IBRS)\n> PostgreSQL 14devel on x86_64-pc-linux-gnu\n>\n>\n> ----Test Case (Soumyadeep's) [1]\n>\n> shared_buffers = 32MB (to use OS page cache)\n>\n> create table t_heap as select generate_series(1, 100000000) i; --about 3.4GB size\n>\n> SET track_io_timing = on;\n>\n> \\timing\n>\n> set max_parallel_workers_per_gather = 0; --0 to 6\n>\n> SELECT count(*) from t_heap;\n> EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\n>\n> [Summary]\n> I used the same query from the thread. However, the sql query execution time\n> and query planner execution time results between the master and patched do\n> not vary much.\n> OTOH, in terms of I/O stats, I observed similar regression in both master\n> and patched as we increase max_parallel_workers_per_gather.\n>\n> It could also be possible that each benchmark setting for max_parallel_workers_per_gather\n> is affected by previous result . IOW, later benchmark runs benefit from the data cached by\n> previous runs on OS level.\n>\n\nYeah, I think to some extent that is visible in results because, after\npatch, at 0 workers, the execution time is reduced significantly\nwhereas there is not much difference at other worker counts. I think\nfor non-parallel case (0 workers), there shouldn't be any difference.\nAlso, I am not sure if there is any reason why after patch the number\nof shared hits is improved, probably due to caching effects?\n\n> Any advice?\n\nI think recreating the database and restarting the server after each\nrun might help in getting consistent results. 
Also, you might want to\ntake median of three runs.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 Jul 2020 14:48:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Friday, July 17, 2020 6:18 PM (GMT+9), Amit Kapila wrote:\r\n\r\n> On Fri, Jul 17, 2020 at 11:35 AM k.jamison@fujitsu.com <k.jamison@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Wednesday, July 15, 2020 12:52 PM (GMT+9), David Rowley wrote:\r\n> >\r\n> > >On Wed, 15 Jul 2020 at 14:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >>\r\n> > >> On Wed, Jul 15, 2020 at 5:55 AM David Rowley <dgrowleyml@gmail.com>\r\n> wrote:\r\n> > >>> If we've not seen any performance regressions within 1 week, then\r\n> > >>> I propose that we (pending final review) push this to allow wider\r\n> > >>> testing.\r\n> > >>\r\n> > >> I think Soumyadeep has reported a regression case [1] with the\r\n> > >> earlier version of the patch. I am not sure if we have verified\r\n> > >> that the situation improves with the latest version of the patch.\r\n> > >> I request Soumyadeep to please try once with the latest patch.\r\n> > >...\r\n> > >Yeah, it would be good to see some more data points on that test.\r\n> > >Jumping from 2 up to 6 workers just leaves us to guess where the\r\n> > >performance started to become bad. >It would be good to know if the\r\n> > >regression is repeatable or if it was affected by some other process.\r\n> > >...\r\n> > >It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET\r\n> > >track_io_timing = on; for each value of >max_parallel_workers.\r\n> >\r\n> > Hi,\r\n> >\r\n> > If I'm following the thread correctly, we may have gains on this patch\r\n> > of Thomas and David, but we need to test its effects on different\r\n> > filesystems. 
It's also been clarified by David through benchmark tests\r\n> > that synchronize_seqscans is not affected as long as the set cap per\r\n> > chunk size of parallel scan is at 8192.\r\n> >\r\n> > I also agree that having a control on this through GUC can be\r\n> > beneficial for users, however, that can be discussed in another thread\r\n> > or development in the future.\r\n> >\r\n> > David Rowley wrote:\r\n> > >I'd like to propose that if anyone wants to do further testing on\r\n> > >other operating systems with SSDs or HDDs then it would be good if\r\n> > >that could be done within a 1 week from this email. There are various\r\n> > >benchmarking ideas on this thread for inspiration.\r\n> >\r\n> > I'd like to join on testing it, this one using HDD, and at the bottom\r\n> > are the results. Due to my machine limitations, I only tested\r\n> > 0~6 workers, that even if I increase max_parallel_workers_per_gather\r\n> > more than that, the query planner would still cap the workers at 6.\r\n> > I also set the track_io_timing to on as per David's recommendation.\r\n> >\r\n> > Tested on:\r\n> > XFS filesystem, HDD virtual machine\r\n> > RHEL4, 64-bit,\r\n> > 4 CPUs, Intel Core Processor (Haswell, IBRS) PostgreSQL 14devel on\r\n> > x86_64-pc-linux-gnu\r\n> >\r\n> >\r\n> > ----Test Case (Soumyadeep's) [1]\r\n> >\r\n> > shared_buffers = 32MB (to use OS page cache)\r\n> >\r\n> > create table t_heap as select generate_series(1, 100000000) i; --about\r\n> 3.4GB size\r\n> >\r\n> > SET track_io_timing = on;\r\n> >\r\n> > \\timing\r\n> >\r\n> > set max_parallel_workers_per_gather = 0; --0 to 6\r\n> >\r\n> > SELECT count(*) from t_heap;\r\n> > EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\r\n> >\r\n> > [Summary]\r\n> > I used the same query from the thread. 
However, the sql query\r\n> > execution time and query planner execution time results between the\r\n> > master and patched do not vary much.\r\n> > OTOH, in terms of I/O stats, I observed similar regression in both\r\n> > master and patched as we increase max_parallel_workers_per_gather.\r\n> >\r\n> > It could also be possible that each benchmark setting for\r\n> > max_parallel_workers_per_gather is affected by previous result . IOW,\r\n> > later benchmark runs benefit from the data cached by previous runs on OS level.\r\n> >\r\n> \r\n> Yeah, I think to some extent that is visible in results because, after patch, at 0\r\n> workers, the execution time is reduced significantly whereas there is not much\r\n> difference at other worker counts. I think for non-parallel case (0 workers),\r\n> there shouldn't be any difference.\r\n> Also, I am not sure if there is any reason why after patch the number of shared hits\r\n> is improved, probably due to caching effects?\r\n> \r\n> > Any advice?\r\n> \r\n> I think recreating the database and restarting the server after each run might help\r\n> in getting consistent results. Also, you might want to take median of three runs.\r\n\r\nThank you for the advice. I repeated the test as per your advice and average of 3 runs\r\nper worker/s planned.\r\nIt still shows the following similar performance results between Master and Patch V2.\r\nI wonder why there's no difference though.\r\n\r\nThe test on my machine is roughly like this:\r\n\r\ncreatedb test\r\npsql -d test\r\ncreate table t_heap as select generate_series(1, 100000000) i;\r\n\\q\r\n\r\npg_ctl restart\r\npsql -d test\r\nSET track_io_timing = on;\r\nSET max_parallel_workers_per_gather = 0;\r\nSHOW max_parallel_workers_per_gather;\r\nEXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\r\n\\timing\r\nSELECT count(*) from t_heap;\r\n\r\ndrop table t_heap;\r\n\\q\r\ndropdb test\r\npg_ctl restart\r\n\r\nBelow are the results. 
Again, almost no discernible difference between the master and patch.\r\nAlso, the results when max_parallel_workers_per_gather is more than 4 could be inaccurate\r\ndue to my machine's limitation of only having 4 vCPUs. Even so, the query planner capped it at\r\n6 workers.\r\n\r\nQuery Planner I/O Timings (track_io_timing = on) in ms :\r\n| Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (Master) | I/O WRITE (Patch) | \r\n|--------|-------------------|------------------|--------------------|-------------------| \r\n| 0 | \"1,130.777\" | \"1,250.821\" | \"01,698.051\" | \"01,733.439\" | \r\n| 1 | \"1,603.016\" | \"1,660.767\" | \"02,312.248\" | \"02,291.661\" | \r\n| 2 | \"2,036.269\" | \"2,107.066\" | \"02,698.216\" | \"02,796.893\" | \r\n| 3 | \"2,298.811\" | \"2,307.254\" | \"05,695.991\" | \"05,894.183\" | \r\n| 4 | \"2,098.642\" | \"2,135.960\" | \"23,837.088\" | \"26,537.158\" | \r\n| 5 | \"1,956.536\" | \"1,997.464\" | \"45,891.851\" | \"48,049.338\" | \r\n| 6 | \"2,201.816\" | \"2,219.001\" | \"61,937.828\" | \"67,809.486\" |\r\n\r\nQuery Planner Execution Time (ms):\r\n| Worker | QueryPlanner (Master) | QueryPlanner (Patch) | \r\n|--------|-----------------------|----------------------| \r\n| 0.000 | \"40,454.252\" | \"40,521.578\" | \r\n| 1.000 | \"21,332.067\" | \"21,205.068\" | \r\n| 2.000 | \"14,266.756\" | \"14,385.539\" | \r\n| 3.000 | \"11,597.936\" | \"11,722.055\" | \r\n| 4.000 | \"12,937.468\" | \"13,439.247\" | \r\n| 5.000 | \"14,383.083\" | \"14,782.866\" | \r\n| 6.000 | \"14,671.336\" | \"15,507.581\" |\r\n\r\nBased on the results above, the I/O latency increases as the number of workers\r\nincreases. Despite that, the query planner execution time is almost the same\r\nwhen 2 or more workers are used (14~11s). 
Same results between Master and Patch V2.\r\n\r\nAs for buffers, same results are shown per worker (both Master and Patch).\r\n| Worker | Buffers | \r\n|--------|--------------------------------------------------| \r\n| 0 | shared read=442478 dirtied=442478 written=442446 | \r\n| 1 | shared read=442478 dirtied=442478 written=442414 | \r\n| 2 | shared read=442478 dirtied=442478 written=442382 | \r\n| 3 | shared read=442478 dirtied=442478 written=442350 | \r\n| 4 | shared read=442478 dirtied=442478 written=442318 | \r\n| 5 | shared read=442478 dirtied=442478 written=442286 | \r\n| 6 | shared read=442478 dirtied=442478 written=442254 |\r\n\r\n\r\nSQL Query Execution Time (ms) :\r\n| Worker | SQL (Master) | SQL (Patch) | \r\n|--------|--------------|--------------| \r\n| 0 | \"10,418.606\" | \"10,377.377\" | \r\n| 1 | \"05,427.460\" | \"05,402.727\" | \r\n| 2 | \"03,662.998\" | \"03,650.277\" | \r\n| 3 | \"02,718.837\" | \"02,692.871\" | \r\n| 4 | \"02,759.802\" | \"02,693.370\" | \r\n| 5 | \"02,761.834\" | \"02,682.590\" | \r\n| 6 | \"02,711.434\" | \"02,726.332\" |\r\n\r\nThe SQL query execution time definitely benefitted from previous run of query planner,\r\nso the results are faster. But again, both Master and Patched have almost the same results.\r\nNonetheless, the execution time is almost consistent when\r\nmax_parallel_workers_per_gather is 2 (default) and above.\r\n\r\nI am definitely missing something. Perhaps I think I could not understand why there's no\r\nI/O difference between the Master and Patched (V2). Or has it been already improved\r\neven without this patch?\r\n\r\nKind regards,\r\nKirk Jamison\r\n", "msg_date": "Tue, 21 Jul 2020 02:36:14 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, Jul 21, 2020 at 8:06 AM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> Thank you for the advice. 
I repeated the test as per your advice and average of 3 runs\n> per worker/s planned.\n> It still shows the following similar performance results between Master and Patch V2.\n> I wonder why there's no difference though.\n>\n> The test on my machine is roughly like this:\n>\n> createdb test\n> psql -d test\n> create table t_heap as select generate_series(1, 100000000) i;\n> \\q\n>\n> pg_ctl restart\n> psql -d test\n> SET track_io_timing = on;\n> SET max_parallel_workers_per_gather = 0;\n> SHOW max_parallel_workers_per_gather;\n> EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\n> \\timing\n> SELECT count(*) from t_heap;\n>\n> drop table t_heap;\n> \\q\n> dropdb test\n> pg_ctl restart\n>\n> Below are the results. Again, almost no discernible difference between the master and patch.\n> Also, the results when max_parallel_workers_per_gather is more than 4 could be inaccurate\n> due to my machine's limitation of only having v4 CPUs. Even so, query planner capped it at\n> 6 workers.\n>\n> Query Planner I/O Timings (track_io_timing = on) in ms :\n> | Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (Master) | I/O WRITE (Patch) |\n> |--------|-------------------|------------------|--------------------|-------------------|\n> | 0 | \"1,130.777\" | \"1,250.821\" | \"01,698.051\" | \"01,733.439\" |\n> | 1 | \"1,603.016\" | \"1,660.767\" | \"02,312.248\" | \"02,291.661\" |\n> | 2 | \"2,036.269\" | \"2,107.066\" | \"02,698.216\" | \"02,796.893\" |\n> | 3 | \"2,298.811\" | \"2,307.254\" | \"05,695.991\" | \"05,894.183\" |\n> | 4 | \"2,098.642\" | \"2,135.960\" | \"23,837.088\" | \"26,537.158\" |\n> | 5 | \"1,956.536\" | \"1,997.464\" | \"45,891.851\" | \"48,049.338\" |\n> | 6 | \"2,201.816\" | \"2,219.001\" | \"61,937.828\" | \"67,809.486\" |\n>\n> Query Planner Execution Time (ms):\n> | Worker | QueryPlanner (Master) | QueryPlanner (Patch) |\n> |--------|-----------------------|----------------------|\n> | 0.000 | \"40,454.252\" | \"40,521.578\" |\n> | 1.000 | 
\"21,332.067\" | \"21,205.068\" |\n> | 2.000 | \"14,266.756\" | \"14,385.539\" |\n> | 3.000 | \"11,597.936\" | \"11,722.055\" |\n> | 4.000 | \"12,937.468\" | \"13,439.247\" |\n> | 5.000 | \"14,383.083\" | \"14,782.866\" |\n> | 6.000 | \"14,671.336\" | \"15,507.581\" |\n>\n> Based from the results above, the I/O latency increases as number of workers\n> also increase. Despite that, the query planner execution time is almost closely same\n> when 2 or more workers are used (14~11s). Same results between Master and Patch V2.\n>\n> As for buffers, same results are shown per worker (both Master and Patch).\n> | Worker | Buffers |\n> |--------|--------------------------------------------------|\n> | 0 | shared read=442478 dirtied=442478 written=442446 |\n> | 1 | shared read=442478 dirtied=442478 written=442414 |\n> | 2 | shared read=442478 dirtied=442478 written=442382 |\n> | 3 | shared read=442478 dirtied=442478 written=442350 |\n> | 4 | shared read=442478 dirtied=442478 written=442318 |\n> | 5 | shared read=442478 dirtied=442478 written=442286 |\n> | 6 | shared read=442478 dirtied=442478 written=442254 |\n>\n>\n> SQL Query Execution Time (ms) :\n> | Worker | SQL (Master) | SQL (Patch) |\n> |--------|--------------|--------------|\n> | 0 | \"10,418.606\" | \"10,377.377\" |\n> | 1 | \"05,427.460\" | \"05,402.727\" |\n> | 2 | \"03,662.998\" | \"03,650.277\" |\n> | 3 | \"02,718.837\" | \"02,692.871\" |\n> | 4 | \"02,759.802\" | \"02,693.370\" |\n> | 5 | \"02,761.834\" | \"02,682.590\" |\n> | 6 | \"02,711.434\" | \"02,726.332\" |\n>\n> The SQL query execution time definitely benefitted from previous run of query planner,\n> so the results are faster. But again, both Master and Patched have almost the same results.\n> Nonetheless, the execution time is almost consistent when\n> max_parallel_workers_per_gather is 2 (default) and above.\n>\n> I am definitely missing something. 
Perhaps I think I could not understand why there's no\n> I/O difference between the Master and Patched (V2). Or has it been already improved\n> even without this patch?\n>\n\nI don't think it is strange that you are not seeing much difference\nbecause as per the initial email by Thomas this patch is not supposed\nto give benefits on all systems. I think we wanted to check that the\npatch should not regress performance in cases where it doesn't give\nbenefits. I think it might be okay to run with a higher number of\nworkers than you have CPUs in the system as we wanted to check if such\ncases regress as shown by Soumyadeep above [1]. Can you once try with\n8 and or 10 workers as well?\n\n[1] - https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB%3DxyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 08:48:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:\r\n> On Tue, Jul 21, 2020 at 8:06 AM k.jamison@fujitsu.com <k.jamison@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > Thank you for the advice. 
I repeated the test as per your advice and\r\n> > average of 3 runs per worker/s planned.\r\n> > It still shows the following similar performance results between Master and\r\n> Patch V2.\r\n> > I wonder why there's no difference though.\r\n> >\r\n> > The test on my machine is roughly like this:\r\n> >\r\n> > createdb test\r\n> > psql -d test\r\n> > create table t_heap as select generate_series(1, 100000000) i; \\q\r\n> >\r\n> > pg_ctl restart\r\n> > psql -d test\r\n> > SET track_io_timing = on;\r\n> > SET max_parallel_workers_per_gather = 0; SHOW\r\n> > max_parallel_workers_per_gather; EXPLAIN (ANALYZE, BUFFERS) SELECT\r\n> > count(*) from t_heap; \\timing SELECT count(*) from t_heap;\r\n> >\r\n> > drop table t_heap;\r\n> > \\q\r\n> > dropdb test\r\n> > pg_ctl restart\r\n> >\r\n> > Below are the results. Again, almost no discernible difference between the\r\n> master and patch.\r\n> > Also, the results when max_parallel_workers_per_gather is more than 4\r\n> > could be inaccurate due to my machine's limitation of only having v4\r\n> > CPUs. 
Even so, query planner capped it at\r\n> > 6 workers.\r\n> >\r\n> > Query Planner I/O Timings (track_io_timing = on) in ms :\r\n> > | Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (Master) |\r\n> > | I/O WRITE (Patch) |\r\n> >\r\n> |--------|-------------------|------------------|--------------------|-------------\r\n> ------|\r\n> > | 0 | \"1,130.777\" | \"1,250.821\" | \"01,698.051\" |\r\n> \"01,733.439\" |\r\n> > | 1 | \"1,603.016\" | \"1,660.767\" | \"02,312.248\" |\r\n> \"02,291.661\" |\r\n> > | 2 | \"2,036.269\" | \"2,107.066\" | \"02,698.216\" |\r\n> \"02,796.893\" |\r\n> > | 3 | \"2,298.811\" | \"2,307.254\" | \"05,695.991\" |\r\n> \"05,894.183\" |\r\n> > | 4 | \"2,098.642\" | \"2,135.960\" | \"23,837.088\" |\r\n> \"26,537.158\" |\r\n> > | 5 | \"1,956.536\" | \"1,997.464\" | \"45,891.851\" |\r\n> \"48,049.338\" |\r\n> > | 6 | \"2,201.816\" | \"2,219.001\" | \"61,937.828\" |\r\n> \"67,809.486\" |\r\n> >\r\n> > Query Planner Execution Time (ms):\r\n> > | Worker | QueryPlanner (Master) | QueryPlanner (Patch) |\r\n> > |--------|-----------------------|----------------------|\r\n> > | 0.000 | \"40,454.252\" | \"40,521.578\" |\r\n> > | 1.000 | \"21,332.067\" | \"21,205.068\" |\r\n> > | 2.000 | \"14,266.756\" | \"14,385.539\" |\r\n> > | 3.000 | \"11,597.936\" | \"11,722.055\" |\r\n> > | 4.000 | \"12,937.468\" | \"13,439.247\" |\r\n> > | 5.000 | \"14,383.083\" | \"14,782.866\" |\r\n> > | 6.000 | \"14,671.336\" | \"15,507.581\" |\r\n> >\r\n> > Based from the results above, the I/O latency increases as number of\r\n> > workers also increase. Despite that, the query planner execution time\r\n> > is almost closely same when 2 or more workers are used (14~11s). 
Same\r\n> results between Master and Patch V2.\r\n> >\r\n> > As for buffers, same results are shown per worker (both Master and Patch).\r\n> > | Worker | Buffers |\r\n> > |--------|--------------------------------------------------|\r\n> > | 0 | shared read=442478 dirtied=442478 written=442446 |\r\n> > | 1 | shared read=442478 dirtied=442478 written=442414 |\r\n> > | 2 | shared read=442478 dirtied=442478 written=442382 |\r\n> > | 3 | shared read=442478 dirtied=442478 written=442350 |\r\n> > | 4 | shared read=442478 dirtied=442478 written=442318 |\r\n> > | 5 | shared read=442478 dirtied=442478 written=442286 |\r\n> > | 6 | shared read=442478 dirtied=442478 written=442254 |\r\n> >\r\n> >\r\n> > SQL Query Execution Time (ms) :\r\n> > | Worker | SQL (Master) | SQL (Patch) |\r\n> > |--------|--------------|--------------|\r\n> > | 0 | \"10,418.606\" | \"10,377.377\" |\r\n> > | 1 | \"05,427.460\" | \"05,402.727\" |\r\n> > | 2 | \"03,662.998\" | \"03,650.277\" |\r\n> > | 3 | \"02,718.837\" | \"02,692.871\" |\r\n> > | 4 | \"02,759.802\" | \"02,693.370\" |\r\n> > | 5 | \"02,761.834\" | \"02,682.590\" |\r\n> > | 6 | \"02,711.434\" | \"02,726.332\" |\r\n> >\r\n> > The SQL query execution time definitely benefitted from previous run\r\n> > of query planner, so the results are faster. But again, both Master and Patched\r\n> have almost the same results.\r\n> > Nonetheless, the execution time is almost consistent when\r\n> > max_parallel_workers_per_gather is 2 (default) and above.\r\n> >\r\n> > I am definitely missing something. Perhaps I think I could not\r\n> > understand why there's no I/O difference between the Master and\r\n> > Patched (V2). Or has it been already improved even without this patch?\r\n> >\r\n> \r\n> I don't think it is strange that you are not seeing much difference because as per\r\n> the initial email by Thomas this patch is not supposed to give benefits on all\r\n> systems. 
I think we wanted to check that the patch should not regress\r\n> performance in cases where it doesn't give benefits. I think it might be okay to\r\n> run with a higher number of workers than you have CPUs in the system as we\r\n> wanted to check if such cases regress as shown by Soumyadeep above [1]. Can\r\n> you once try with\r\n> 8 and or 10 workers as well?\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB%3DxyJ192EZCN\r\n> wGfcCa_WJ5GHVM7Sv8oenuA%40mail.gmail.com\r\n\r\nYou are right. Kindly excuse me on that part, which only means it may or may not have any\r\nbenefits on the filesystem I am using. But for other fs, as we can see from David's benchmarks \r\nsignificant results/benefits.\r\n\r\nFollowing your advice on regression test case, I increased the number of workers,\r\nbut the query planner still capped the workers at 6, so the results from 6 workers onwards\r\nare almost the same.\r\nI don't see significant difference between master and patched on my machine\r\nas per my test results below. 
(Just for reconfirmation)\r\n\r\nQuery Planner I/O Timings (ms):\r\n| Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (Master) | I/O WRITE (Patch) | \r\n|--------|-------------------|------------------|--------------------|-------------------| \r\n| 0 | \"1,130.78\" | \"1,250.82\" | \"1,698.05\" | \"1,733.44\" | \r\n| 1 | \"1,603.02\" | \"1,660.77\" | \"2,312.25\" | \"2,291.66\" | \r\n| 2 | \"2,036.27\" | \"2,107.07\" | \"2,698.22\" | \"2,796.89\" | \r\n| 3 | \"2,298.81\" | \"2,307.25\" | \"5,695.99\" | \"5,894.18\" | \r\n| 4 | \"2,098.64\" | \"2,135.96\" | \"23,837.09\" | \"26,537.16\" | \r\n| 5 | \"1,956.54\" | \"1,997.46\" | \"45,891.85\" | \"48,049.34\" | \r\n| 6 | \"2,201.82\" | \"2,219.00\" | \"61,937.83\" | \"67,809.49\" | \r\n| 8 | \"2,117.80\" | \"2,169.67\" | \"60,671.22\" | \"68,676.36\" | \r\n| 16 | \"2,052.73\" | \"2,134.86\" | \"60,635.17\" | \"66,462.82\" | \r\n| 32 | \"2,036.00\" | \"2,200.98\" | \"60,833.92\" | \"67,702.49\" |\r\n\r\nQuery Planner Execution Time (ms):\r\n| Worker | QueryPlanner (Master) | QueryPlanner (Patch) | \r\n|--------|-----------------------|----------------------| \r\n| 0 | \"40,454.25\" | \"40,521.58\" | \r\n| 1 | \"21,332.07\" | \"21,205.07\" | \r\n| 2 | \"14,266.76\" | \"14,385.54\" | \r\n| 3 | \"11,597.94\" | \"11,722.06\" | \r\n| 4 | \"12,937.47\" | \"13,439.25\" | \r\n| 5 | \"14,383.08\" | \"14,782.87\" | \r\n| 6 | \"14,671.34\" | \"15,507.58\" | \r\n| 8 | \"14,679.50\" | \"15,615.69\" | \r\n| 16 | \"14,474.78\" | \"15,274.61\" | \r\n| 32 | \"14,462.11\" | \"15,470.68\" |\r\n\r\n| Worker | Buffers | \r\n|--------|--------------------------------------------------| \r\n| 0 | shared read=442478 dirtied=442478 written=442446 | \r\n| 1 | shared read=442478 dirtied=442478 written=442414 | \r\n| 2 | shared read=442478 dirtied=442478 written=442382 | \r\n| 3 | shared read=442478 dirtied=442478 written=442350 | \r\n| 4 | shared read=442478 dirtied=442478 written=442318 | \r\n| 5 | shared read=442478 dirtied=442478 
written=442286 | \r\n| 6 | shared read=442478 dirtied=442478 written=442254 | \r\n| 8 | shared read=442478 dirtied=442478 written=442254 | \r\n| 16 | shared read=442478 dirtied=442478 written=442254 | \r\n| 32 | shared read=442478 dirtied=442478 written=442254 |\r\n\r\nI also re-ran the query and measured the execution time (ms) with \\timing\r\n| Worker | SQL (Master) | SQL (Patch) | \r\n|--------|--------------|-------------| \r\n| 0 | 15476.458 | 15278.772 | \r\n| 1 | 8292.702 | 8426.435 | \r\n| 2 | 6256.673 | 6232.456 | \r\n| 3 | 6357.217 | 6340.013 | \r\n| 4 | 7591.311 | 7913.881 | \r\n| 5 | 8165.315 | 8070.592 | \r\n| 6 | 8065.578 | 8200.076 | \r\n| 8 | 7988.302 | 8609.138 | \r\n| 16 | 8025.170 | 8469.895 | \r\n| 32 | 8019.393 | 8645.150 |\r\n\r\nAgain tested on:\r\nXFS filesystem, HDD virtual machine, 8GB RAM\r\nRHEL4, 64-bit,\r\n4 CPUs, Intel Core Processor (Haswell, IBRS)\r\nPostgreSQL 14devel on x86_64-pc-linux-gnu\r\n\r\nSo I guess it does not affect the filesystem that I am using. So I think it's OK.\r\n\r\nKind regards,\r\nKirk Jamison\r\n\r\n", "msg_date": "Tue, 21 Jul 2020 09:38:46 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, Jul 21, 2020 at 3:08 PM k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n>\n> On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:\n> > On Tue, Jul 21, 2020 at 8:06 AM k.jamison@fujitsu.com <k.jamison@fujitsu.com>\n> > wrote:\n> > >\n> > > I am definitely missing something. Perhaps I think I could not\n> > > understand why there's no I/O difference between the Master and\n> > > Patched (V2). Or has it been already improved even without this patch?\n> > >\n> >\n> > I don't think it is strange that you are not seeing much difference because as per\n> > the initial email by Thomas this patch is not supposed to give benefits on all\n> > systems. 
I think we wanted to check that the patch should not regress\n> > performance in cases where it doesn't give benefits. I think it might be okay to\n> > run with a higher number of workers than you have CPUs in the system as we\n> > wanted to check if such cases regress as shown by Soumyadeep above [1]. Can\n> > you once try with\n> > 8 and or 10 workers as well?\n> >\n>\n> You are right. Kindly excuse me on that part, which only means it may or may not have any\n> benefits on the filesystem I am using. But for other fs, as we can see from David's benchmarks\n> significant results/benefits.\n>\n> Following your advice on regression test case, I increased the number of workers,\n> but the query planner still capped the workers at 6, so the results from 6 workers onwards\n> are almost the same.\n>\n\nI am slightly confused if the number of workers are capped at 6, then\nwhat exactly the data at 32 worker count means? If you want query\nplanner to choose more number of workers, then I think either you need\nto increase the data or use Alter Table <tbl_name> Set\n(parallel_workers = <num_workers_you_want>);\n\n> I don't see significant difference between master and patched on my machine\n> as per my test results below. (Just for reconfirmation)\n>\n\nI see the difference of about 7-8% at higher (32) client-count. Am, I\nmissing something?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 Jul 2020 16:02:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Hi Kirk,\n\nThank you for doing some testing on this. 
It's very useful to get some\nsamples from other hardware / filesystem / os combinations.\n\nOn Tue, 21 Jul 2020 at 21:38, k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n> Query Planner I/O Timings (ms):\n> | Worker | I/O READ (Master) | I/O READ (Patch) | I/O WRITE (Master) | I/O WRITE (Patch) |\n> |--------|-------------------|------------------|--------------------|-------------------|\n> | 0 | \"1,130.78\" | \"1,250.82\" | \"1,698.05\" | \"1,733.44\" |\n\n\n> | Worker | Buffers |\n> |--------|--------------------------------------------------|\n> | 0 | shared read=442478 dirtied=442478 written=442446 |\n\nI'm thinking the scale of this test might be a bit too small for the\nmachine you're using to test. When you see \"shared read\" in the\nEXPLAIN (ANALYZE, BUFFERS) output, it does not necessarily mean that\nthe page had to be read from disk. We use buffered I/O, so the page\ncould just have been fetched from the kernel's cache.\n\nIf we do some maths here on the timing. It took 1130.78 milliseconds\nto read 442478 pages, which, assuming the standard page size of 8192\nbytes, that's 3457 MB in 1130.78 milliseconds, or 3057 MB/sec. Is\nthat a realistic throughput for this machine in terms of I/O? Or do\nyou think that some of these pages might be coming from the Kernel's\ncache?\n\nI understand that Amit wrote:\n\nOn Fri, 17 Jul 2020 at 21:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think recreating the database and restarting the server after each\n> run might help in getting consistent results. Also, you might want to\n> take median of three runs.\n\nPlease also remember, if you're recreating the database after having\nrestarted the machine that creating the table will likely end up\ncaching some of the pages either in shared buffers or the Kernel's\ncache. It would be better to leave the database intact and just reboot\nthe machine. 
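(As a back-of-the-envelope check, the throughput arithmetic worked through
above can be reproduced with a small sketch. The only assumption is
PostgreSQL's default 8 kB block size; the helper name is illustrative.)

```python
# Sanity-check sketch: read rate implied by EXPLAIN (ANALYZE, BUFFERS)
# "shared read" page counts combined with track_io_timing's read time.
BLCKSZ = 8192  # default PostgreSQL block size in bytes

def implied_read_rate_mb_s(pages_read, io_read_ms):
    """MB/s implied by reading pages_read blocks in io_read_ms of I/O time."""
    megabytes = pages_read * BLCKSZ / (1024 * 1024)
    return megabytes / (io_read_ms / 1000.0)

# Kirk's worker-0 numbers: 442478 pages read in 1130.78 ms of I/O time.
rate = implied_read_rate_mb_s(442478, 1130.78)
```

If the implied rate is well beyond what the disk can plausibly sustain,
the reads were most likely served from the kernel's page cache rather
than the device.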
I didn't really like that option with my tests so I just\nincreased the size of the table beyond any size that my machines could\nhave cached. With the 16GB RAM Windows laptop, I used a 100GB table\nand with the 64GB workstation, I used an 800GB table. I think my test\nusing SELECT * FROM t WHERE a < 0; with a table that has a padding\ncolumn is likely going to be a more accurate test. Providing there is\nno rows with a < 0 in the table then the executor will spend almost\nall of the time in nodeSeqscan.c trying to find a row with a < 0.\nThere's no additional overhead of aggregation doing the count(*).\nHaving the additional padding column means that we read more data per\nevaluation of the a < 0 expression. Also, having a single column\ntable is not that realistic.\n\nI'm pretty keen to see this machine running something closer to the\ntest I mentioned in [1] but the benchmark query I mentioned in [2]\nwith the \"t\" table being at least twice the size of RAM in the\nmachine. Larger would be better though. With such a scaled test, I\ndon't think there's much need to reboot the machine in between. Just\nrun a single query first to warm up the cache before timing anything.\nHaving the table a few times larger than RAM will mean that we can be\ncertain that the disk was actually used during the test. 
The more data\nwe can be certain came from disk the more we can trust that the\nresults are meaningful.\n\nThanks again for testing this.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrfJfYH51_WY-iQqPw8yGR4fDoTxAQKqn+Sa7NTKEVWtg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvo+LEGKMcavOiPYK8NEbgP-LrXns2TJ1n_XNRJVE9X+Cw@mail.gmail.com\n\n\n", "msg_date": "Wed, 22 Jul 2020 11:55:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jul 22, 2020 at 5:25 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I understand that Amit wrote:\n>\n> On Fri, 17 Jul 2020 at 21:18, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think recreating the database and restarting the server after each\n> > run might help in getting consistent results. Also, you might want to\n> > take median of three runs.\n>\n> Please also remember, if you're recreating the database after having\n> restarted the machine that creating the table will likely end up\n> caching some of the pages either in shared buffers or the Kernel's\n> cache.\n>\n\nYeah, that is true but every time before the test the same amount of\ndata should be present in shared buffers (or OS cache) if any which\nwill help in getting consistent results. However, it is fine to\nreboot the machine as well if that is a convenient way.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 Jul 2020 09:26:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Yeah, that is true but every time before the test the same amount of\n> data should be present in shared buffers (or OS cache) if any which\n> will help in getting consistent results. 
However, it is fine to\n> reboot the machine as well if that is a convenient way.\n\nWe really should have an extension (pg_prewarm?) that knows how to\nkick stuff out PostgreSQL's shared buffers and the page cache\n(POSIX_FADV_DONTNEED).\n\n\n", "msg_date": "Wed, 22 Jul 2020 16:32:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tuesday, July 21, 2020 7:33 PM, Amit Kapila wrote:\r\n> On Tue, Jul 21, 2020 at 3:08 PM k.jamison@fujitsu.com <k.jamison@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Tuesday, July 21, 2020 12:18 PM, Amit Kapila wrote:\r\n> > > On Tue, Jul 21, 2020 at 8:06 AM k.jamison@fujitsu.com\r\n> > > <k.jamison@fujitsu.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > I am definitely missing something. Perhaps I think I could not\r\n> > > > understand why there's no I/O difference between the Master and\r\n> > > > Patched (V2). Or has it been already improved even without this patch?\r\n> > > >\r\n> > >\r\n> > > I don't think it is strange that you are not seeing much difference\r\n> > > because as per the initial email by Thomas this patch is not\r\n> > > supposed to give benefits on all systems. I think we wanted to\r\n> > > check that the patch should not regress performance in cases where\r\n> > > it doesn't give benefits. I think it might be okay to run with a\r\n> > > higher number of workers than you have CPUs in the system as we\r\n> > > wanted to check if such cases regress as shown by Soumyadeep above\r\n> > > [1]. Can you once try with\r\n> > > 8 and or 10 workers as well?\r\n> > >\r\n> >\r\n> > You are right. Kindly excuse me on that part, which only means it may\r\n> > or may not have any benefits on the filesystem I am using. 
But for\r\n> > other fs, as we can see from David's benchmarks significant results/benefits.\r\n> >\r\n> > Following your advice on regression test case, I increased the number\r\n> > of workers, but the query planner still capped the workers at 6, so\r\n> > the results from 6 workers onwards are almost the same.\r\n> >\r\n> \r\n> I am slightly confused if the number of workers are capped at 6, then what exactly\r\n> the data at 32 worker count means? If you want query planner to choose more\r\n> number of workers, then I think either you need to increase the data or use Alter\r\n> Table <tbl_name> Set (parallel_workers = <num_workers_you_want>);\r\n\r\nOops I'm sorry, the \"workers\" labelled in those tables actually mean max_parallel_workers_per_gather\r\nand not parallel_workers. In the query planner, I thought the _per_gather corresponds or controls\r\nthe workers planned/launched values, and those are the numbers that I used in the tables.\r\n\r\nI used the default max_parallel_workers & max_worker_proceses which is 8 by default in postgresql.conf.\r\nIOW, I ran all those tests with maximum of 8 processes set. But my query planner capped both the\r\nWorkers Planned and Launched at 6 for some reason when increasing the value for\r\nmax_parallel_workers_per_gather. \r\n\r\nHowever, when I used the ALTER TABLE SET (parallel_workers = N) based from your suggestion,\r\nthe query planner acquired that set value only for \"Workers Planned\", but not for \"Workers Launched\". \r\nThe behavior of query planner is also different when I also set the value of max_worker_processes\r\nand max_parallel_workers to parallel_workers + 1.\r\n\r\nFor example (ran on Master),\r\n1. 
Set same value as parallel_workers, but \"Workers Launched\" and \"Workers Planned\" do not match.\r\nmax_worker_processes = 8\r\nmax_parallel_workers = 8\r\nALTER TABLE t_heap SET (parallel_workers = 8);\r\nALTER TABLE\r\nSET max_parallel_workers_per_gather = 8;\r\nSET\r\ntest=# EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------------\r\n Finalize Aggregate (cost=619778.66..619778.67 rows=1 width=8) (actual time=16316.295..16316.295 rows=1 loops=1)\r\n Buffers: shared read=442478 dirtied=442478 written=442222\r\n -> Gather (cost=619777.83..619778.64 rows=8 width=8) (actual time=16315.528..16316.668 rows=8 loops=1)\r\n Workers Planned: 8\r\n Workers Launched: 7\r\n Buffers: shared read=442478 dirtied=442478 written=442222\r\n -> Partial Aggregate (cost=618777.83..618777.84 rows=1 width=8) (actual time=16305.092..16305.092 rows=1 loops=8)\r\n Buffers: shared read=442478 dirtied=442478 written=442222\r\n -> Parallel Seq Scan on t_heap (cost=0.00..583517.86 rows=14103986 width=0) (actual time=0.725..14290.117 rows=12500000 loops=8)\r\n Buffers: shared read=442478 dirtied=442478 written=442222\r\n Planning Time: 5.327 ms\r\n Buffers: shared hit=17 read=10\r\n Execution Time: 16316.915 ms\r\n(13 rows)\r\n\r\n2. 
Match the workers launched and workers planned values (parallel_workers + 1)\r\nmax_worker_processes = 9\r\nmax_parallel_workers = 9\r\n\r\nALTER TABLE t_heap SET (parallel_workers = 8);\r\nALTER TABLE;\r\nSET max_parallel_workers_per_gather = 8;\r\nSET\r\n\r\ntest=# EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) from t_heap;\r\n QUERY PLAN\r\n--------------------------------------------------------------------------------------------------------------------------------------------------\r\n Finalize Aggregate (cost=619778.66..619778.67 rows=1 width=8) (actual time=16783.944..16783.944 rows=1 loops=1)\r\n Buffers: shared read=442478 dirtied=442478 written=442190\r\n -> Gather (cost=619777.83..619778.64 rows=8 width=8) (actual time=16783.796..16785.474 rows=9 loops=1)\r\n Workers Planned: 8\r\n Workers Launched: 8\r\n Buffers: shared read=442478 dirtied=442478 written=442190\r\n -> Partial Aggregate (cost=618777.83..618777.84 rows=1 width=8) (actual time=16770.218..16770.218 rows=1 loops=9)\r\n Buffers: shared read=442478 dirtied=442478 written=442190\r\n -> Parallel Seq Scan on t_heap (cost=0.00..583517.86 rows=14103986 width=0) (actual time=6.004..14967.329 rows=11111111 loops=9)\r\n Buffers: shared read=442478 dirtied=442478 written=442190\r\n Planning Time: 4.755 ms\r\n Buffers: shared hit=17 read=10\r\n Execution Time: 16785.719 ms\r\n(13 rows)\r\n\r\n\r\n\r\nKind regards,\r\nKirk Jamison\r\n\r\n", "msg_date": "Wed, 22 Jul 2020 04:40:53 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 22 Jul 2020 at 16:40, k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n> I used the default max_parallel_workers & max_worker_proceses which is 8 by default in postgresql.conf.\n> IOW, I ran all those tests with maximum of 8 processes set. 
But my query planner capped both the\n> Workers Planned and Launched at 6 for some reason when increasing the value for\n> max_parallel_workers_per_gather.\n\nmax_parallel_workers_per_gather just imposes a limit on the planner as\nto the maximum number of parallel workers it may choose for a given\nparallel portion of a plan. The actual number of workers the planner\nwill decide is best to use is based on the size of the relation. More\npages = more workers. It sounds like in this case the planner didn't\nthink it was worth using more than 6 workers.\n\nThe parallel_workers reloption, when not set to -1 overwrites the\nplanner's decision on how many workers to use. It'll just always try\nto use \"parallel_workers\".\n\n> However, when I used the ALTER TABLE SET (parallel_workers = N) based from your suggestion,\n> the query planner acquired that set value only for \"Workers Planned\", but not for \"Workers Launched\".\n> The behavior of query planner is also different when I also set the value of max_worker_processes\n> and max_parallel_workers to parallel_workers + 1.\n\nWhen it comes to execution, the executor is limited to how many\nparallel worker processes are available to execute the plan. If all\nworkers happen to be busy with other tasks then it may find itself\nhaving to process the entire query in itself without any help from\nworkers. Or there may be workers available, just not as many as the\nplanner picked to execute the query.\n\nThe number of available workers is configured with the\n\"max_parallel_workers\". That's set in postgresql.conf. 
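(An aside on the "more pages = more workers" rule mentioned above: a
simplified, hypothetical sketch of the planner's size-based heuristic —
the real compute_parallel_worker() also considers index pages and other
caps — shows why a 442478-page table tops out at 6 workers under the
default min_parallel_table_scan_size of 8MB, i.e. 1024 blocks.)

```python
# Simplified sketch of the planner's size-based parallel-worker choice.
# Assumes the default min_parallel_table_scan_size (8MB = 1024 pages);
# ignores index pages and the max_parallel_workers_per_gather cap.
def workers_for_heap(heap_pages, min_scan_pages=1024):
    if heap_pages < min_scan_pages:
        return 0  # table too small for any parallelism
    workers, threshold = 1, min_scan_pages
    # Each additional worker requires the table to be 3x larger.
    while heap_pages >= threshold * 3:
        workers += 1
        threshold *= 3
    return workers
```

For Kirk's 442478-page table this sketch yields 6 workers, matching the
cap he observed before setting the parallel_workers reloption.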
PostgreSQL\nwon't complain if you try to set a relation's parallel_workers\nreloption to a number higher than the \"max_parallel_workers\" GUC.\n\"max_parallel_workers\" is further limited by \"max_worker_processes\".\nLikely you'll want to set both those to at least 32 for this test,\nthen just adjust the relation's parallel_workers setting for each\ntest.\n\nDavid\n\n\n", "msg_date": "Wed, 22 Jul 2020 17:20:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wednesday, July 22, 2020 2:21 PM (GMT+9), David Rowley wrote:\r\n\r\n> On Wed, 22 Jul 2020 at 16:40, k.jamison@fujitsu.com <k.jamison@fujitsu.com>\r\n> wrote:\r\n> > I used the default max_parallel_workers & max_worker_proceses which is 8 by\r\n> default in postgresql.conf.\r\n> > IOW, I ran all those tests with maximum of 8 processes set. But my\r\n> > query planner capped both the Workers Planned and Launched at 6 for\r\n> > some reason when increasing the value for max_parallel_workers_per_gather.\r\n> \r\n> max_parallel_workers_per_gather just imposes a limit on the planner as to the\r\n> maximum number of parallel workers it may choose for a given parallel portion of\r\n> a plan. The actual number of workers the planner will decide is best to use is\r\n> based on the size of the relation. More pages = more workers. It sounds like in\r\n> this case the planner didn't think it was worth using more than 6 workers.\r\n> \r\n> The parallel_workers reloption, when not set to -1 overwrites the planner's\r\n> decision on how many workers to use. 
It'll just always try to use\r\n> \"parallel_workers\".\r\n>\r\n> > However, when I used the ALTER TABLE SET (parallel_workers = N) based\r\n> > from your suggestion, the query planner acquired that set value only for\r\n> \"Workers Planned\", but not for \"Workers Launched\".\r\n> > The behavior of query planner is also different when I also set the\r\n> > value of max_worker_processes and max_parallel_workers to parallel_workers\r\n> + 1.\r\n> \r\n> When it comes to execution, the executor is limited to how many parallel worker\r\n> processes are available to execute the plan. If all workers happen to be busy with\r\n> other tasks then it may find itself having to process the entire query in itself\r\n> without any help from workers. Or there may be workers available, just not as\r\n> many as the planner picked to execute the query.\r\n\r\nEven though I read the documentation [1][2] on parallel query, I might not have\r\nunderstood it clearly yet. So thank you very much for explaining simpler how the \r\nrelation size, GUCs, and reloption affect the query planner's behavior\r\nSo in this test case, I shouldn't force the workers to have same values for workers\r\nplanned and workers launched, is it correct? To just let the optimizer do its own decision.\r\n\r\n> The number of available workers is configured with the\r\n> \"max_parallel_workers\". That's set in postgresql.conf. PostgreSQL\r\n> won't complain if you try to set a relation's parallel_workers reloption to a number\r\n> higher than the \"max_parallel_workers\" GUC.\r\n> \"max_parallel_workers\" is further limited by \"max_worker_processes\".\r\n> Likely you'll want to set both those to at least 32 for this test, then just adjust the\r\n> relation's parallel_workers setting for each test.\r\n> \r\nThank you for the advice. 
For the same test case [3], I will use the following configuration:\r\nshared_buffers = 32MB\r\nmax_parallel_workers =32\r\nmax_worker_processes = 32\r\n\r\nMaybe the relation size is also small as you mentioned, that the query optimizer decided\r\nto only use 6 workers in my previous test. So let me see first if the results would vary\r\nagain with the above configuration and testing different values for parallel_workers.\r\n\r\nKind regards,\r\nKirk Jamison\r\n\r\n[1] https://www.postgresql.org/docs/13/how-parallel-query-works.html\r\n[2] https://www.postgresql.org/docs/current/runtime-config-resource.html\r\n[3] https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB=xyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA@mail.gmail.com\r\n\r\n", "msg_date": "Wed, 22 Jul 2020 06:17:33 +0000", "msg_from": "\"k.jamison@fujitsu.com\" <k.jamison@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 22 Jul 2020 at 18:17, k.jamison@fujitsu.com\n<k.jamison@fujitsu.com> wrote:\n> Even though I read the documentation [1][2] on parallel query, I might not have\n> understood it clearly yet. So thank you very much for explaining simpler how the\n> relation size, GUCs, and reloption affect the query planner's behavior\n> So in this test case, I shouldn't force the workers to have same values for workers\n> planned and workers launched, is it correct? To just let the optimizer do its own decision.\n\nWhat you want to do is force the planner's hand with each test as to\nhow many parallel workers it uses by setting the reloption\nparallel_workers to the number of workers you want to test. 
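(Spelled out, the per-test procedure being suggested here might be
scripted roughly as below. This is a hypothetical sketch only: the table
name "t" and the worker counts are illustrative, and the statements would
be fed to psql in practice.)

```python
# Build the SQL for one benchmarking pass: pin the relation's
# parallel_workers reloption, run the query, then reset the reloption.
def build_test_script(worker_counts, table="t"):
    stmts = []
    for n in worker_counts:
        stmts.append(f"ALTER TABLE {table} SET (parallel_workers = {n});")
        stmts.append(
            f"EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM {table};")
    stmts.append(f"ALTER TABLE {table} RESET (parallel_workers);")
    return stmts
```

Each EXPLAIN run can then be checked to confirm Workers Planned matches
Workers Launched.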
Just make\nsure the executor has enough workers to launch by setting\nmax_parallel_workers and max_worker_processes to something high enough\nto conduct the tests without the executor failing to launch any\nworkers.\n\n> Maybe the relation size is also small as you mentioned, that the query optimizer decided\n> to only use 6 workers in my previous test. So let me see first if the results would vary\n> again with the above configuration and testing different values for parallel_workers.\n\nThe parallel_worker reloption will overwrite the planner's choice of\nhow many parallel workers to use. Just make sure the executor has\nenough to use. You'll be able to determine that from the Workers\nPlanned matching Workers Launched.\n\nDavid\n\n\n", "msg_date": "Wed, 22 Jul 2020 20:18:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Hi David,\n\nApologies for the delay, I had missed these emails.\n\nOn Tue, Jul 14, 2020 at 8:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> It would be good to know if the\n> regression is repeatable or if it was affected by some other process.\n\n\nThese are the latest results on the same setup as [1].\n(TL;DR: the FreeBSD VM with Google Persistent Disk is too unstable -\nthere is too much variability in performance to conclude that there is a\nregression)\n\n\nWith 100000000 rows:\n\nmaster (606c384598):\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 20.09s\n 1 9.77s\n 2 9.92s\n 6 9.55s\n\n\nv2 patch (applied on 606c384598):\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 18.34s\n 1 9.68s\n 2 9.15s\n 6 9.11s\n\n\nThe above results were averaged across 3 runs with little or no\ndeviation between runs. 
The absolute values are very different from the\nresults reported in [1].\n\nSo, I tried to repro the regression as I had reported in [1] with\n150000000 rows:\n\nmaster (449e14a561)\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 42s, 42s\n 1 395s, 393s\n 2 404s, 403s\n 6 403s, 403s\n\nThomas' patch (applied on 449e14a561):\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 43s,43s\n 1 203s, 42s\n 2 42s, 42s\n 6 44s, 43s\n\n\nv2 patch (applied on 449e14a561):\n\nmax_parallel_workers_per_gather Time(seconds)\n 0 274s, 403s\n 1 419s, 419s\n 2 448s, 448s\n 6 137s, 419s\n\n\nAs you can see, I got wildly different results with 150000000 rows (even\nbetween runs of the same experiment)\nI don't think that the environment is stable enough to tell if there is\nany regression.\n\nWhat I can say is that there are no processes apart from Postgres\nrunning on the system. Also, the environment is pretty constrained -\njust 1G of free hard drive space before the start of every run, when I\nhave 150000000 rows, apart from the mere 32M of shared buffers and only\n4G of RAM.\n\nI don't know much about Google Persistent Disk to be very honest.\nBasically, I just provisioned one when I provisioned a GCP VM for testing on\nFreeBSD, as Thomas had mentioned that FreeBSD UFS is a bad case for\nparallel seq scan.\n\n> It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET\n> track_io_timing = on; for each value of max_parallel_workers.\n\nAs for running EXPLAIN ANALYZE, running that on this system incurs a\nnon-trivial amount of overhead. The overhead is simply staggering. 
This\nis the result of pg_test_timing in the FreeBSD GCP VM:\n\n$ /usr/local/pgsql/bin/pg_test_timing -d 50\nTesting timing overhead for 50 seconds.\nPer loop time including overhead: 4329.80 ns\nHistogram of timing durations:\n < us % of total count\n 1 0.00000 0\n 2 0.00000 0\n 4 3.08896 356710\n 8 95.97096 11082616\n 16 0.37748 43591\n 32 0.55502 64093\n 64 0.00638 737\n 128 0.00118 136\n 256 0.00002 2\n\nAs a point of comparison, on my local Ubuntu workstation:\n\n$ /usr/local/pgsql/bin/pg_test_timing -d 50\nTesting timing overhead for 50 seconds.\nPer loop time including overhead: 22.65 ns\nHistogram of timing durations:\n < us % of total count\n 1 97.73691 2157634382\n 2 2.26246 49945854\n 4 0.00039 8711\n 8 0.00016 3492\n 16 0.00008 1689\n 32 0.00000 63\n 64 0.00000 1\n\nThis is why I opted to simply use \\timing on.\n\nRegards,\n\nSoumyadeep (VMware)\n\n[1] https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB=xyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA@mail.gmail.com\n\n\n", "msg_date": "Wed, 22 Jul 2020 11:00:50 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Tue, Jul 21, 2020 at 9:33 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah, that is true but every time before the test the same amount of\n> > data should be present in shared buffers (or OS cache) if any which\n> > will help in getting consistent results. However, it is fine to\n> > reboot the machine as well if that is a convenient way.\n>\n> We really should have an extension (pg_prewarm?) that knows how to\n> kick stuff out PostgreSQL's shared buffers and the page cache\n> (POSIX_FADV_DONTNEED).\n>\n>\n+1. 
Clearing the OS page cache on FreeBSD is non-trivial during testing.\nYou can't do this on FreeBSD:\nsync; echo 3 > /proc/sys/vm/drop_caches\n\nAlso, it would be nice to evict only those pages from the OS page cache\nthat are Postgres pages instead of having to drop everything.\n\nRegards,\nSoumyadeep (VMware)\n\n\n", "msg_date": "Wed, 22 Jul 2020 12:02:38 -0700", "msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, Jul 22, 2020 at 10:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 22, 2020 at 3:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah, that is true but every time before the test the same amount of\n> > data should be present in shared buffers (or OS cache) if any which\n> > will help in getting consistent results. However, it is fine to\n> > reboot the machine as well if that is a convenient way.\n>\n> We really should have an extension (pg_prewarm?) that knows how to\n> kick stuff out PostgreSQL's shared buffers and the page cache\n> (POSIX_FADV_DONTNEED).\n>\n\n+1. Such an extension would be quite helpful for performance benchmarks.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Jul 2020 16:30:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "Hi Soumyadeep,\n\nThanks for re-running the tests.\n\nOn Thu, 23 Jul 2020 at 06:01, Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> On Tue, Jul 14, 2020 at 8:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET\n> > track_io_timing = on; for each value of max_parallel_workers.\n>\n> As for running EXPLAIN ANALYZE, running that on this system incurs a\n> non-trivial amount of overhead. 
The overhead is simply staggering.\n\nI didn't really intend for that to be used to get an accurate overall\ntiming for the query. It was more to get an indication of the reads\nare actually hitting the disk or not.\n\nI mentioned to Kirk in [1] that his read speed might be a bit higher\nthan what his disk can actually cope with. I'm not too sure on the\nHDD he mentions, but if it's a single HDD then reading at an average\nspeed of 3457 MB/sec seems quite a bit too fast. It seems more likely,\nin his cases, that those reads are mostly coming from the kernel's\ncache.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoDzAzXEp+Ay2CfT3U=ZcD5NLD7K9_Y936bnHjzs5jkHw@mail.gmail.com\n\n\n", "msg_date": "Fri, 24 Jul 2020 19:12:24 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" }, { "msg_contents": "On Wed, 15 Jul 2020 at 12:24, David Rowley <dgrowleyml@gmail.com> wrote:\n> If we've not seen any performance regressions within 1 week, then I\n> propose that we (pending final review) push this to allow wider\n> testing. It seems we're early enough in the PG14 cycle that there's a\n> large window of time for us to do something about any reported\n> performance regressions that come in.\n\nI did that final review which ended up in quite a few cosmetic changes.\n\nFunctionality-wise, it's basically that of the v2 patch with the\nPARALLEL_SEQSCAN_MAX_CHUNK_SIZE set to 8192.\n\nI mentioned that we might want to revisit giving users some influence\non the chunk size, but we'll only do so once we see some conclusive\nevidence that it's worthwhile.\n\nDavid\n\n\n", "msg_date": "Sun, 26 Jul 2020 21:09:07 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parallel Seq Scan vs kernel read ahead" } ]
[ { "msg_contents": "Hello,\n\nI've discovered something today that I didn't really expect. When a user\ndumps a database with the --schema flag of pg_dump, extensions in this\nschema aren't dumped. As far as I can tell, the documentation isn't clear\nabout this (\"Dump only schemas matching pattern; this selects both the\nschema itself, and all its contained objects.\"), though the source code\ndefinitely is (\"We dump all user-added extensions by default, or none of\nthem if include_everything is false (i.e., a --schema or --table switch was\ngiven).\", in pg_dump.c).\n\nI was wondering the reason behind this choice. If anyone knows, I'd be\nhappy to hear about it.\n\nI see two things:\n* it's been overlooked, and we should dump all the extensions available in\na schema if this schema has been selected through the --schema flag.\n* it's kind of like the large objects handling, and I'd pretty interested\nin adding a --extensions (as the same way there is a --blobs flag).\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.\n\n", "msg_date": "Wed, 20 May 2020 10:06:01 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Extensions not dumped when --schema is used" },
{ "msg_contents": "On Wed, 2020-05-20 at 10:06 +0200, Guillaume Lelarge wrote:\n> I've discovered something today that I didn't really expect.\n> When a user dumps a database with the --schema flag of pg_dump,\n> extensions in this schema aren't dumped. As far as I can tell,\n> the documentation isn't clear about this (\"Dump only schemas\n> matching pattern; this selects both the schema itself, and all\n> its contained objects.\"), though the source code definitely is\n> (\"We dump all user-added extensions by default, or none of them\n> if include_everything is false (i.e., a --schema or --table\n> switch was given).\", in pg_dump.c).\n> \n> I was wondering the reason behind this choice. If anyone knows,\n> I'd be happy to hear about it.\n> \n> I see two things:\n> * it's been overlooked, and we should dump all the extensions\n> available in a schema if this schema has been selected through\n> the --schema flag.\n> * it's kind of like the large objects handling, and I'd pretty\n> interested in adding a --extensions (as the same way there is a\n> --blobs flag).\n\nI am not sure if there is a good reason for the current behavior,\nbut I'd favor the second solution. 
I think as extensions as belonging\nto the database rather than the schema; the schema is just where the\nobjects are housed.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 20 May 2020 10:45:26 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "> On 20 May 2020, at 10:06, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n\n> I was wondering the reason behind this choice. If anyone knows, I'd be happy to hear about it.\n\nExtensions were dumped unconditionally in the beginning, but it was changed to\nmatch how procedural language definitions were dumped.\n\n> * it's been overlooked, and we should dump all the extensions available in a schema if this schema has been selected through the --schema flag.\n\nFor reference, --schema-only will include all the extensions, but not\n--schema=foo and not \"--schema-only --schema=foo\".\n\nExtensions don't belong to a schema per se, the namespace oid in pg_extension\nmarks where most of the objects are contained but not necessarily all of them.\nGiven that, it makes sense to not include extensions for --schema. However,\nthat can be considered sort of an implementation detail which may not be\nentirely obvious to users (especially since you yourself is a power-user).\n\n> * it's kind of like the large objects handling, and I'd pretty interested in adding a --extensions (as the same way there is a --blobs flag).\n\nAn object in a schema might have attributes that depend on an extension in\norder to restore, so it makes sense to provide a way to include them for a\n--schema dump. 
The question is what --extensions should do: only dump any\nextensions that objects in the schema depend on; require a pattern and only\ndump matching extensions; dump all extensions (probably not) or something else?\n\ncheers ./daniel\n\n", "msg_date": "Wed, 20 May 2020 11:26:36 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mer. 20 mai 2020 à 11:26, Daniel Gustafsson <daniel@yesql.se> a écrit :\n\n> > On 20 May 2020, at 10:06, Guillaume Lelarge <guillaume@lelarge.info>\n> wrote:\n>\n> > I was wondering the reason behind this choice. If anyone knows, I'd be\n> happy to hear about it.\n>\n> Extensions were dumped unconditionally in the beginning, but it was\n> changed to\n> match how procedural language definitions were dumped.\n>\n>\nThat makes sense. Thank you.\n\n> * it's been overlooked, and we should dump all the extensions available\n> in a schema if this schema has been selected through the --schema flag.\n>\n> For reference, --schema-only will include all the extensions, but not\n> --schema=foo and not \"--schema-only --schema=foo\".\n>\n>\nYes.\n\nExtensions don't belong to a schema per se, the namespace oid in\n> pg_extension\n> marks where most of the objects are contained but not necessarily all of\n> them.\n> Given that, it makes sense to not include extensions for --schema.\n> However,\n> that can be considered sort of an implementation detail which may not be\n> entirely obvious to users (especially since you yourself is a power-user).\n>\n>\nI agree.\n\n> * it's kind of like the large objects handling, and I'd pretty interested\n> in adding a --extensions (as the same way there is a --blobs flag).\n>\n> An object in a schema might have attributes that depend on an extension in\n> order to restore, so it makes sense to provide a way to include them for a\n> --schema dump.\n\n\nThat's actually how I figured this out. 
A customer can't restore his dump\nbecause of a missing extension (pg_trgm to be precise).\n\n The question is what --extensions should do: only dump any\n> extensions that objects in the schema depend on; require a pattern and only\n> dump matching extensions; dump all extensions (probably not) or something\n> else?\n>\n>\nActually, \"dump all extensions\" (#3) would make sense to me, and has my\nvote. Otherwise, and though it would imply much more work, \"only dump any\nextension that objects in the schema depend on\" (#1) comes second in my\nopinion. Using the pattern means something more manual for users, and I\nreally think it would be a bad idea. People dump databases, schemas, and\ntables. Theu usually don't know which extensions those objects depend on.\nBut, anyway, I would work on any of these solutions, depending on what most\npeople think is best.\n\nThanks.\n\n\n-- \nGuillaume.\n\n", "msg_date": "Wed, 20 May 2020 11:38:21 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" },
{ "msg_contents": "> On 20 May 2020, at 11:38, Guillaume Lelarge <guillaume@lelarge.info> wrote:\n\n> Actually, \"dump all extensions\" (#3) would make sense to me, and has my vote.\n\nWouldn't that open for another set of problems when running with --schema=bar\nand getting errors on restoring for relocatable extensions like these:\n\n\tCREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA foo;\n\nOnly dumping extensions depended on has the same problem of course.\n\n> People dump databases, schemas, and tables. Theu usually don't know which extensions those objects depend on.\n\nThat I totally agree with, question is how we best can help users here.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 20 May 2020 11:55:24 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" },
{ "msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> Le mer. 20 mai 2020 à 11:26, Daniel Gustafsson <daniel@yesql.se> a écrit :\n>> The question is what --extensions should do: only dump any\n>> extensions that objects in the schema depend on; require a pattern and only\n>> dump matching extensions; dump all extensions (probably not) or something\n>> else?\n\n> Actually, \"dump all extensions\" (#3) would make sense to me, and has my\n> vote.\n\nI think that makes no sense at all. By definition, a dump that's been\nrestricted with --schema, --table, or any similar switch is incomplete\nand may not restore on its own. Typical examples include foreign key\nreferences to tables in other schemas, views using functions in other\nschemas, etc etc. 
I see no reason for extension dependencies to be\ntreated differently from those cases.\n\nIn any use of selective dump, it's the user's job to select a set of\nobjects that she wants dumped (or restored). Trying to second-guess that\nis mostly going to make the feature less usable for power-user cases.\n\nAs a counterexample, what if you want the dump to be restorable on a\nsystem that doesn't have all of the extensions available on the source?\nYou carefully pick out the tables that you need, which don't require the\nunavailable extensions ... and then pg_dump decides you don't know what\nyou're doing and includes all the problematic extensions anyway.\n\nI could get behind an \"--extensions=PATTERN\" switch to allow selective\naddition of extensions to a selective dump, but I don't want to see us\noverruling the user's choices about what to dump.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 10:39:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mer. 20 mai 2020 à 16:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > Le mer. 20 mai 2020 à 11:26, Daniel Gustafsson <daniel@yesql.se> a\n> écrit :\n> >> The question is what --extensions should do: only dump any\n> >> extensions that objects in the schema depend on; require a pattern and\n> only\n> >> dump matching extensions; dump all extensions (probably not) or\n> something\n> >> else?\n>\n> > Actually, \"dump all extensions\" (#3) would make sense to me, and has my\n> > vote.\n>\n> I think that makes no sense at all. By definition, a dump that's been\n> restricted with --schema, --table, or any similar switch is incomplete\n> and may not restore on its own. Typical examples include foreign key\n> references to tables in other schemas, views using functions in other\n> schemas, etc etc. 
I see no reason for extension dependencies to be\n> treated differently from those cases.\n>\n>\nAgreed.\n\nIn any use of selective dump, it's the user's job to select a set of\n> objects that she wants dumped (or restored). Trying to second-guess that\n> is mostly going to make the feature less usable for power-user cases.\n>\n>\nAgreed, though right now he has no way to do this for extensions.\n\nAs a counterexample, what if you want the dump to be restorable on a\n> system that doesn't have all of the extensions available on the source?\n> You carefully pick out the tables that you need, which don't require the\n> unavailable extensions ... and then pg_dump decides you don't know what\n> you're doing and includes all the problematic extensions anyway.\n>\n>\nThat's true.\n\nI could get behind an \"--extensions=PATTERN\" switch to allow selective\n> addition of extensions to a selective dump, but I don't want to see us\n> overruling the user's choices about what to dump.\n>\n>\nWith all your comments, I can only agree to your views. I'll try to work on\nthis anytime soon.\n\n\n-- \nGuillaume.\n\n", "msg_date": "Sat, 23 May 2020 14:53:53 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" },
{ "msg_contents": "Hi,\n\nLe sam. 23 mai 2020 à 14:53, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Le mer. 20 mai 2020 à 16:39, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>\n>> Guillaume Lelarge <guillaume@lelarge.info> writes:\n>> > Le mer. 20 mai 2020 à 11:26, Daniel Gustafsson <daniel@yesql.se> a\n>> écrit :\n>> >> The question is what --extensions should do: only dump any\n>> >> extensions that objects in the schema depend on; require a pattern and\n>> only\n>> >> dump matching extensions; dump all extensions (probably not) or\n>> something\n>> >> else?\n>>\n>> > Actually, \"dump all extensions\" (#3) would make sense to me, and has my\n>> > vote.\n>>\n>> I think that makes no sense at all. 
By definition, a dump that's been\n>> restricted with --schema, --table, or any similar switch is incomplete\n>> and may not restore on its own. Typical examples include foreign key\n>> references to tables in other schemas, views using functions in other\n>> schemas, etc etc. I see no reason for extension dependencies to be\n>> treated differently from those cases.\n>>\n>>\n> Agreed.\n>\n> In any use of selective dump, it's the user's job to select a set of\n>> objects that she wants dumped (or restored). Trying to second-guess that\n>> is mostly going to make the feature less usable for power-user cases.\n>>\n>>\n> Agreed, though right now he has no way to do this for extensions.\n>\n> As a counterexample, what if you want the dump to be restorable on a\n>> system that doesn't have all of the extensions available on the source?\n>> You carefully pick out the tables that you need, which don't require the\n>> unavailable extensions ... and then pg_dump decides you don't know what\n>> you're doing and includes all the problematic extensions anyway.\n>>\n>>\n> That's true.\n>\n> I could get behind an \"--extensions=PATTERN\" switch to allow selective\n>> addition of extensions to a selective dump, but I don't want to see us\n>> overruling the user's choices about what to dump.\n>>\n>>\n> With all your comments, I can only agree to your views. I'll try to work\n> on this anytime soon.\n>\n>\n\"Anytime soon\" was a long long time ago, and I eventually completely forgot\nthis, sorry. As nobody worked on it yet, I took a shot at it. See attached\npatch.\n\nI don't know if I should add this right away in the commit fest app. 
If\nyes, I guess it should go on the next commit fest (2021-03), right?\n\n\n-- \nGuillaume.", "msg_date": "Mon, 25 Jan 2021 14:34:02 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n> \"Anytime soon\" was a long long time ago, and I eventually completely forgot this, sorry. As nobody worked on it yet, I took a shot at it. See attached patch.\n\nGreat!\n\nI didn't reviewed it thoroughly yet, but after a quick look it sounds\nsensible. I'd prefer to see some tests added, and it looks like a\ntest for plpgsql could be added quite easily.\n\n> I don't know if I should add this right away in the commit fest app. If yes, I guess it should go on the next commit fest (2021-03), right?\n\nCorrect, and please add it on the commit fest!\n\n\n", "msg_date": "Tue, 26 Jan 2021 12:10:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mar. 26 janv. 2021 à 05:10, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n> <guillaume@lelarge.info> wrote:\n> >\n> > \"Anytime soon\" was a long long time ago, and I eventually completely\n> forgot this, sorry. As nobody worked on it yet, I took a shot at it. See\n> attached patch.\n>\n> Great!\n>\n> I didn't reviewed it thoroughly yet, but after a quick look it sounds\n> sensible. I'd prefer to see some tests added, and it looks like a\n> test for plpgsql could be added quite easily.\n>\n>\nI tried that all afternoon yesterday, but failed to do so. My had still\nhurts, but I'll try again though it may take some time.\n\n> I don't know if I should add this right away in the commit fest app. 
If\n> yes, I guess it should go on the next commit fest (2021-03), right?\n>\n> Correct, and please add it on the commit fest!\n>\n\nDone, see https://commitfest.postgresql.org/32/2956/.\n\n\n-- \nGuillaume.\n\nLe mar. 26 janv. 2021 à 05:10, Julien Rouhaud <rjuju123@gmail.com> a écrit :On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n> \"Anytime soon\" was a long long time ago, and I eventually completely forgot this, sorry. As nobody worked on it yet, I took a shot at it. See attached patch.\n\nGreat!\n\nI didn't reviewed it thoroughly yet, but after a quick look it sounds\nsensible.  I'd prefer to see some tests added, and it looks like a\ntest for plpgsql could be added quite easily.\nI tried that all afternoon yesterday, but failed to do so. My had still hurts, but I'll try again though it may take some time. \n> I don't know if I should add this right away in the commit fest app. If yes, I guess it should go on the next commit fest (2021-03), right?\n\nCorrect, and please add it on the commit fest!\nDone, see https://commitfest.postgresql.org/32/2956/.-- Guillaume.", "msg_date": "Tue, 26 Jan 2021 13:41:44 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mar. 26 janv. 2021 à 13:41, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Le mar. 26 janv. 2021 à 05:10, Julien Rouhaud <rjuju123@gmail.com> a\n> écrit :\n>\n>> On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n>> <guillaume@lelarge.info> wrote:\n>> >\n>> > \"Anytime soon\" was a long long time ago, and I eventually completely\n>> forgot this, sorry. As nobody worked on it yet, I took a shot at it. See\n>> attached patch.\n>>\n>> Great!\n>>\n>> I didn't reviewed it thoroughly yet, but after a quick look it sounds\n>> sensible. 
I'd prefer to see some tests added, and it looks like a\n>> test for plpgsql could be added quite easily.\n>>\n>>\n> I tried that all afternoon yesterday, but failed to do so. My had still\n> hurts, but I'll try again though it may take some time.\n>\n>\ns/My had/My head/ ..\n\n> I don't know if I should add this right away in the commit fest app. If\n>> yes, I guess it should go on the next commit fest (2021-03), right?\n>>\n>> Correct, and please add it on the commit fest!\n>>\n>\n> Done, see https://commitfest.postgresql.org/32/2956/.\n>\n>\n\n-- \nGuillaume.\n\nLe mar. 26 janv. 2021 à 13:41, Guillaume Lelarge <guillaume@lelarge.info> a écrit :Le mar. 26 janv. 2021 à 05:10, Julien Rouhaud <rjuju123@gmail.com> a écrit :On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n<guillaume@lelarge.info> wrote:\n>\n> \"Anytime soon\" was a long long time ago, and I eventually completely forgot this, sorry. As nobody worked on it yet, I took a shot at it. See attached patch.\n\nGreat!\n\nI didn't reviewed it thoroughly yet, but after a quick look it sounds\nsensible.  I'd prefer to see some tests added, and it looks like a\ntest for plpgsql could be added quite easily.\nI tried that all afternoon yesterday, but failed to do so. My had still hurts, but I'll try again though it may take some time. s/My had/My head/ .. \n> I don't know if I should add this right away in the commit fest app. If yes, I guess it should go on the next commit fest (2021-03), right?\n\nCorrect, and please add it on the commit fest!\nDone, see https://commitfest.postgresql.org/32/2956/. 
-- Guillaume.", "msg_date": "Tue, 26 Jan 2021 13:42:33 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch applies cleanly and looks fine to me. However consider this scenario.\r\n\r\n- CREATE SCHEMA foo;\r\n- CREATE EXTENSION file_fdw WITH SCHEMA foo;\r\n- pg_dump --file=/tmp/test.sql --exclude-schema=foo postgres\r\n\r\nThis will still include the extension 'file_fdw' in the backup script. Shouldn't it be excluded as well?\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Wed, 03 Feb 2021 17:32:19 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Hi,\n\nThanks for the review.\n\nLe mer. 3 févr. 2021 à 18:33, Asif Rehman <asifr.rehman@gmail.com> a écrit :\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> The patch applies cleanly and looks fine to me. However consider this\n> scenario.\n>\n> - CREATE SCHEMA foo;\n> - CREATE EXTENSION file_fdw WITH SCHEMA foo;\n> - pg_dump --file=/tmp/test.sql --exclude-schema=foo postgres\n>\n> This will still include the extension 'file_fdw' in the backup script.\n> Shouldn't it be excluded as well?\n>\n> The new status of this patch is: Waiting on Author\n>\n\nThis behaviour is already there without my patch, and I think it's a valid\nbehaviour. An extension doesn't belong to a schema. Its objects do, but the\nextension doesn't.\n\n\n-- \nGuillaume.\n\nHi,Thanks for the review.Le mer. 
3 févr. 2021 à 18:33, Asif Rehman <asifr.rehman@gmail.com> a écrit :The following review has been posted through the commitfest application:\nmake installcheck-world:  tested, passed\nImplements feature:       tested, passed\nSpec compliant:           not tested\nDocumentation:            not tested\n\nThe patch applies cleanly and looks fine to me. However consider this scenario.\n\n- CREATE SCHEMA foo;\n- CREATE EXTENSION file_fdw WITH SCHEMA foo;\n- pg_dump  --file=/tmp/test.sql --exclude-schema=foo postgres\n\nThis will still include the extension 'file_fdw' in the backup script. Shouldn't it be excluded as well?\n\nThe new status of this patch is: Waiting on Author\nThis behaviour is already there without my patch, and I think it's a valid behaviour. An extension doesn't belong to a schema. Its objects do, but the extension doesn't.-- Guillaume.", "msg_date": "Wed, 3 Feb 2021 18:42:11 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mar. 26 janv. 2021 à 13:42, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Le mar. 26 janv. 2021 à 13:41, Guillaume Lelarge <guillaume@lelarge.info>\n> a écrit :\n>\n>> Le mar. 26 janv. 2021 à 05:10, Julien Rouhaud <rjuju123@gmail.com> a\n>> écrit :\n>>\n>>> On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n>>> <guillaume@lelarge.info> wrote:\n>>> >\n>>> > \"Anytime soon\" was a long long time ago, and I eventually completely\n>>> forgot this, sorry. As nobody worked on it yet, I took a shot at it. See\n>>> attached patch.\n>>>\n>>> Great!\n>>>\n>>> I didn't reviewed it thoroughly yet, but after a quick look it sounds\n>>> sensible. I'd prefer to see some tests added, and it looks like a\n>>> test for plpgsql could be added quite easily.\n>>>\n>>>\n>> I tried that all afternoon yesterday, but failed to do so. 
My had still\n>> hurts, but I'll try again though it may take some time.\n>>\n>>\n> s/My had/My head/ ..\n>\n>\nI finally managed to get a working TAP test for my patch. I have no idea if\nit's good, and if it's enough. Anyway, new version of the patch attached.\n\n\n-- \nGuillaume.", "msg_date": "Thu, 18 Feb 2021 11:13:06 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Thu, Feb 18, 2021 at 11:13:06AM +0100, Guillaume Lelarge wrote:\n> I finally managed to get a working TAP test for my patch. I have no idea if\n> it's good, and if it's enough. Anyway, new version of the patch attached.\n\nAs presented in this patch, specifying both --extension and\n--table/--schema means that pg_dump will dump both tables and\nextensions matching the pattern passed down. But shouldn't extensions\nnot be dumped if --table or --schema is used? Combining --schema with\n--table implies that the schema part is ignored, for instance.\n\nThe comment at the top of selectDumpableExtension() is incorrect,\nmissing the new case added by --extension.\n\nThe use of strict_names looks right to me, but you have missed a\nrefresh of the documentation of --strict-names that applies to\n--extension with this patch.\n\n+ dump_cmd => [\n+ 'pg_dump', \"--file=$tempdir/test_schema_plus_extensions.sql\",\n+ '--schema=dump_test', '--extension=plpgsql', '--no-sync',\nWhat's the goal of this test. plpgsql is a system extension so it\nwill never be dumped. It seems to me that any tests related to this \npatch should be added to src/test/modules/test_pg_dump/, which\nincludes a custom extension in the test, that could be dumped.\n\n+ dumped. Multiple schemas can be selected by writing multiple\n+ <option>-e</option> switches. 
The <replaceable\ns/schemas/extensions/.\n--\nMichael", "msg_date": "Sat, 20 Feb 2021 21:25:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> As presented in this patch, specifying both --extension and\n> --table/--schema means that pg_dump will dump both tables and\n> extensions matching the pattern passed down. But shouldn't extensions\n> not be dumped if --table or --schema is used? Combining --schema with\n> --table implies that the schema part is ignored, for instance.\n\nI haven't read the patch, but the behavior I would expect is:\n\n1. If --extension=pattern is given, then extensions matching the\npattern are included in the dump, regardless of other switches.\n(Conversely, use of --extension doesn't affect choices about what\nother objects are dumped.)\n\n2. Without --extension, the behavior is backward compatible,\nie, dump extensions in an include_everything dump but not\notherwise.\n\nMaybe we could have a separate discussion as to which switches turn\noff include_everything, but that seems independent of this patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:31:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le sam. 20 févr. 2021 à 13:25, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Thu, Feb 18, 2021 at 11:13:06AM +0100, Guillaume Lelarge wrote:\n> > I finally managed to get a working TAP test for my patch. I have no idea\n> if\n> > it's good, and if it's enough. Anyway, new version of the patch attached.\n>\n> As presented in this patch, specifying both --extension and\n> --table/--schema means that pg_dump will dump both tables and\n> extensions matching the pattern passed down. 
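[Editorial aside: the two rules Tom lists above amount to a small selection predicate. The sketch below is a toy model with invented names, not pg_dump's actual C code, which implements the same policy.]

```python
import fnmatch

def extension_is_dumped(ext_name, ext_patterns, include_everything):
    """Toy model of the two rules above (invented names, not pg_dump code)."""
    if ext_patterns:
        # Rule 1: -e/--extension given -> dump exactly the matching
        # extensions, regardless of other switches.
        return any(fnmatch.fnmatchcase(ext_name, p) for p in ext_patterns)
    # Rule 2: no -e -> backward compatible: extensions are dumped only in
    # an include-everything dump.
    return include_everything
```

For instance, under this model `extension_is_dumped("citext", [], False)` is False (a --table dump without -e still omits the extension), which is exactly the gap the patch closes.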
But shouldn't extensions\n> not be dumped if --table or --schema is used? Combining --schema with\n> --table implies that the schema part is ignored, for instance.\n>\n>\nActually, that's the whole point of the patch. Imagine someone who wants to\nbackup a table with a citext column. With the --table option, this user\nwill only have the table, without the extension that would allow to restore\nthe dump. He will have to remember to create the extension before. The new\noption allows him to add the CREATE EXTENSION he needs into his dump.\n\nThe comment at the top of selectDumpableExtension() is incorrect,\n> missing the new case added by --extension.\n>\n> The use of strict_names looks right to me, but you have missed a\n> refresh of the documentation of --strict-names that applies to\n> --extension with this patch.\n>\n> + dump_cmd => [\n> + 'pg_dump', \"--file=$tempdir/test_schema_plus_extensions.sql\",\n> + '--schema=dump_test', '--extension=plpgsql', '--no-sync',\n> What's the goal of this test. plpgsql is a system extension so it\n> will never be dumped. It seems to me that any tests related to this\n> patch should be added to src/test/modules/test_pg_dump/, which\n> includes a custom extension in the test, that could be dumped.\n>\n> + dumped. Multiple schemas can be selected by writing multiple\n> + <option>-e</option> switches. The <replaceable\n> s/schemas/extensions/.\n>\n\nYou're right on all these points. I'm on vacation this week, but I'll build\na v3 with these issues fixed when I'll get back from vacation.\n\nThanks.\n\n\n-- \nGuillaume.", "msg_date": "Sat, 20 Feb 2021 22:38:36 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le sam. 
20 févr. 2021 à 17:31, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Michael Paquier <michael@paquier.xyz> writes:\n> > As presented in this patch, specifying both --extension and\n> > --table/--schema means that pg_dump will dump both tables and\n> > extensions matching the pattern passed down. But shouldn't extensions\n> > not be dumped if --table or --schema is used? Combining --schema with\n> > --table implies that the schema part is ignored, for instance.\n>\n> I haven't read the patch, but the behavior I would expect is:\n>\n> 1. If --extension=pattern is given, then extensions matching the\n> pattern are included in the dump, regardless of other switches.\n> (Conversely, use of --extension doesn't affect choices about what\n> other objects are dumped.)\n>\n> 2. Without --extension, the behavior is backward compatible,\n> ie, dump extensions in an include_everything dump but not\n> otherwise.\n>\n>\nYes, that's what it's supposed to do.\n\nMaybe we could have a separate discussion as to which switches turn\n> off include_everything, but that seems independent of this patch.\n>\n>\n+1\n\n\n-- \nGuillaume.", "msg_date": "Sat, 20 Feb 2021 22:39:24 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Sat, Feb 20, 2021 at 10:39:24PM +0100, Guillaume Lelarge wrote:\n> Le sam. 20 févr. 2021 à 17:31, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>> I haven't read the patch, but the behavior I would expect is:\n>>\n>> 1. If --extension=pattern is given, then extensions matching the\n>> pattern are included in the dump, regardless of other switches.\n>> (Conversely, use of --extension doesn't affect choices about what\n>> other objects are dumped.)\n>>\n>> 2. Without --extension, the behavior is backward compatible,\n>> ie, dump extensions in an include_everything dump but not\n>> otherwise.\n>\n> Yes, that's what it's supposed to do.\n\nOkay, that sounds fine to me.  Thanks for confirming.\n--\nMichael", "msg_date": "Sun, 21 Feb 2021 08:14:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Thu, Feb 18, 2021 at 11:13:06AM +0100, Guillaume Lelarge wrote:\n> Le mar. 26 janv. 2021 � 13:42, Guillaume Lelarge <guillaume@lelarge.info> a\n> �crit :\n> \n> > Le mar. 26 janv. 2021 � 13:41, Guillaume Lelarge <guillaume@lelarge.info>\n> > a �crit :\n> >\n> >> Le mar. 26 janv. 
2021 � 05:10, Julien Rouhaud <rjuju123@gmail.com> a\n> >> �crit :\n> >>\n> >>> On Mon, Jan 25, 2021 at 9:34 PM Guillaume Lelarge\n> >>> <guillaume@lelarge.info> wrote:\n> >>> >\n> >>> > \"Anytime soon\" was a long long time ago, and I eventually completely\n> >>> forgot this, sorry. As nobody worked on it yet, I took a shot at it. See\n> >>> attached patch.\n> >>>\n> >>> Great!\n> >>>\n> >>> I didn't reviewed it thoroughly yet, but after a quick look it sounds\n> >>> sensible. I'd prefer to see some tests added, and it looks like a\n> >>> test for plpgsql could be added quite easily.\n> >>>\n> >>>\n> >> I tried that all afternoon yesterday, but failed to do so. My had still\n> >> hurts, but I'll try again though it may take some time.\n> >>\n> >>\n> > s/My had/My head/ ..\n> >\n> >\n> I finally managed to get a working TAP test for my patch. I have no idea if\n> it's good, and if it's enough. Anyway, new version of the patch attached.\n> \n> \n> -- \n> Guillaume.\n\nThanks for doing this work!\n\n> diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml\n> index bcbb7a25fb..95d45fabfb 100644\n> --- a/doc/src/sgml/ref/pg_dump.sgml\n> +++ b/doc/src/sgml/ref/pg_dump.sgml\n> @@ -215,6 +215,38 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>-e <replaceable class=\"parameter\">pattern</replaceable></option></term>\n> + <term><option>--extension=<replaceable class=\"parameter\">pattern</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Dump only extensions matching <replaceable\n> + class=\"parameter\">pattern</replaceable>. When this option is not\n> + specified, all non-system extensions in the target database will be\n> + dumped. 
Multiple schemas can be selected by writing multiple\n ^^^^^^^^^^^^^^^^\nI think this should read \"Multiple extensions\".\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 22 Feb 2021 00:15:03 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Sun, Feb 21, 2021 at 08:14:45AM +0900, Michael Paquier wrote:\n> Okay, that sounds fine to me.  Thanks for confirming.\n\nGuillaume, it has been a couple of weeks since your last update.  Are\nyou planning to send a new version of the patch?\n--\nMichael", "msg_date": "Mon, 15 Mar 2021 14:00:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le lun. 15 mars 2021 à 06:00, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Sun, Feb 21, 2021 at 08:14:45AM +0900, Michael Paquier wrote:\n> > Okay, that sounds fine to me.  Thanks for confirming.\n>\n> Guillaume, it has been a couple of weeks since your last update.  Are\n> you planning to send a new version of the patch?\n>\n\nThis is on my TODO list, but I can't find the time right now. If someone's\ninterested in this patch, I have no problem with him/her working on it.\nOtherwise, I'll do it as soon as possible (meaning probably in two weeks).", "msg_date": "Mon, 15 Mar 2021 09:19:19 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Mon, Mar 15, 2021 at 09:19:19AM +0100, Guillaume Lelarge wrote:\n> Le lun. 15 mars 2021 à 06:00, Michael Paquier <michael@paquier.xyz> a\n> écrit :\n> \n> > On Sun, Feb 21, 2021 at 08:14:45AM +0900, Michael Paquier wrote:\n> > > Okay, that sounds fine to me.  Thanks for confirming.\n> >\n> > Guillaume, it has been a couple of weeks since your last update.  Are\n> > you planning to send a new version of the patch?\n> >\n> \n> This is on my TODO list, but I can't find the time right now. If someone's\n> interested in this patch, I have no problem with him/her working on it.\n> Otherwise, I'll do it as soon as possible (meaning probably in two weeks).\n\nThat will (almost) be the end of the last commitfest for pg14.\n\nIs that a feature you really want to see in pg14? If yes and if you're sure\nyou won't have time to work on the patch within 2 weeks I can take care of\naddressing all comments.\n\n\n", "msg_date": "Mon, 15 Mar 2021 18:21:55 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Mon, Mar 15, 2021 at 06:21:55PM +0800, Julien Rouhaud wrote:\n> Is that a feature you really want to see in pg14? 
If yes and if you're sure\n> you won't have time to work on the patch within 2 weeks I can take care of\n> addressing all comments.\n\nA lot of things will depend on the feature freeze date, but the sooner\na version is available, the sooner I could look at what is proposed.\nRecalling my memories of the discussion, there was not much to\naddress, and the logic was rather clear.\n--\nMichael", "msg_date": "Mon, 15 Mar 2021 19:25:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le lun. 15 mars 2021 à 18:25, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Mon, Mar 15, 2021 at 06:21:55PM +0800, Julien Rouhaud wrote:\n> > Is that a feature you really want to see in pg14? If yes and if you're\n> sure\n> > you won't have time to work on the patch within 2 weeks I can take care\n> of\n> > addressing all comments.\n>\n> A lot of things will depend on the feature freeze date, but the sooner\n> a version is available, the sooner I could look at what is proposed.\n>\n\nI totally agree.\n\nRecalling my memories of the discussion, there was not much to\n> address, and the logic was rather clear.\n>\n\nyes, I just had a look it's only a matter of adapting the tests to\ntest_pg_dump and fixing a few typos AFAICS.\n\n>\n\nLe lun. 15 mars 2021 à 18:25, Michael Paquier <michael@paquier.xyz> a écrit :On Mon, Mar 15, 2021 at 06:21:55PM +0800, Julien Rouhaud wrote:\n> Is that a feature you really want to see in pg14?  If yes and if you're sure\n> you won't have time to work on the patch within 2 weeks I can take care of\n> addressing all comments.\n\nA lot of things will depend on the feature freeze date, but the sooner\na version is available, the sooner I could look at what is proposed.I totally agree. 
\nRecalling my memories of the discussion, there was not much to\naddress, and the logic was rather clear.yes, I just had a look it's only a matter of adapting the tests to test_pg_dump and fixing a few typos AFAICS.", "msg_date": "Mon, 15 Mar 2021 18:31:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le lun. 15 mars 2021 à 11:32, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> Le lun. 15 mars 2021 à 18:25, Michael Paquier <michael@paquier.xyz> a\n> écrit :\n>\n>> On Mon, Mar 15, 2021 at 06:21:55PM +0800, Julien Rouhaud wrote:\n>> > Is that a feature you really want to see in pg14? If yes and if you're\n>> sure\n>> > you won't have time to work on the patch within 2 weeks I can take care\n>> of\n>> > addressing all comments.\n>>\n>> A lot of things will depend on the feature freeze date, but the sooner\n>> a version is available, the sooner I could look at what is proposed.\n>>\n>\n> I totally agree.\n>\n> Recalling my memories of the discussion, there was not much to\n>> address, and the logic was rather clear.\n>>\n>\n> yes, I just had a look it's only a matter of adapting the tests to\n> test_pg_dump and fixing a few typos AFAICS.\n>\n\nJeez, you're answering too fast. I was working on my own answer when Julien\nsent another reply 😅\n\nAnyways. Yeah, I know we're near feature freeze. This feature would be nice\nto have, but I don't feel strongly about it. I think this feature is\ncurrently lacking in PostgreSQL but I don't much care if it makes it to 14\nor any future release. If you have time to work on the pg_dump test suite\nand are interested, then sure, go ahead, I'm fine with this. Otherwise I'll\ndo it in a few weeks and if it means it'll land in v15, then be it. That's\nnot an issue in itself.\n\nThough, I really appreciate the concern. Thanks.\n\n>\n\nLe lun. 
15 mars 2021 à 11:32, Julien Rouhaud <rjuju123@gmail.com> a écrit :Le lun. 15 mars 2021 à 18:25, Michael Paquier <michael@paquier.xyz> a écrit :On Mon, Mar 15, 2021 at 06:21:55PM +0800, Julien Rouhaud wrote:\n> Is that a feature you really want to see in pg14?  If yes and if you're sure\n> you won't have time to work on the patch within 2 weeks I can take care of\n> addressing all comments.\n\nA lot of things will depend on the feature freeze date, but the sooner\na version is available, the sooner I could look at what is proposed.I totally agree. \nRecalling my memories of the discussion, there was not much to\naddress, and the logic was rather clear.yes, I just had a look it's only a matter of adapting the tests to test_pg_dump and fixing a few typos AFAICS. Jeez, you're answering too fast. I was working on my own answer when Julien sent another reply 😅Anyways. Yeah, I know we're near feature freeze. This feature would be nice to have, but I don't feel strongly about it. I think this feature is currently lacking in PostgreSQL but I don't much care if it makes it to 14 or any future release. If you have time to work on the pg_dump test suite and are interested, then sure, go ahead, I'm fine with this. Otherwise I'll do it in a few weeks and if it means it'll land in v15, then be it. That's not an issue in itself.Though, I really appreciate the concern. Thanks.", "msg_date": "Mon, 15 Mar 2021 11:37:02 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Mon, Mar 15, 2021 at 11:37:02AM +0100, Guillaume Lelarge wrote:\n> Anyways. Yeah, I know we're near feature freeze. This feature would be nice\n> to have, but I don't feel strongly about it. I think this feature is\n> currently lacking in PostgreSQL but I don't much care if it makes it to 14\n> or any future release. 
If you have time to work on the pg_dump test suite\n> and are interested, then sure, go ahead, I'm fine with this. Otherwise I'll\n> do it in a few weeks and if it means it'll land in v15, then be it. That's\n> not an issue in itself.\n\nOkay. So I have looked at that stuff in details, and after fixing\nall the issues reported upthread in the code, docs and tests I am\nfinishing with the attached. The tests have been moved out of\nsrc/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\npositive and negative tests (used the trick with plpgsql for the\nlatter to avoid the dump of the extension test_pg_dump or any data\nrelated to it).\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 12:02:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Tue, Mar 30, 2021 at 12:02:45PM +0900, Michael Paquier wrote:\n> Okay. So I have looked at that stuff in details, and after fixing\n> all the issues reported upthread in the code, docs and tests I am\n> finishing with the attached. The tests have been moved out of\n> src/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\n> positive and negative tests (used the trick with plpgsql for the\n> latter to avoid the dump of the extension test_pg_dump or any data\n> related to it).\n\nI have double-checked this stuff this morning, and did not notice any\nissues. So, applied.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 09:37:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "Le mer. 31 mars 2021 à 02:37, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Tue, Mar 30, 2021 at 12:02:45PM +0900, Michael Paquier wrote:\n> > Okay. 
So I have looked at that stuff in details, and after fixing\n> > all the issues reported upthread in the code, docs and tests I am\n> > finishing with the attached. The tests have been moved out of\n> > src/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\n> > positive and negative tests (used the trick with plpgsql for the\n> > latter to avoid the dump of the extension test_pg_dump or any data\n> > related to it).\n>\n> I have double-checked this stuff this morning, and did not notice any\n> issues. So, applied.\n>\n\nThanks a lot. I've seen your email yesterday but had too much work going on\nto find the time to test your patch. Anyway, I'll take a look at how you\ncoded the TAP test to better understand that part and to be able to do it\nmyself next time.\n\nThanks again.\n\nLe mer. 31 mars 2021 à 02:37, Michael Paquier <michael@paquier.xyz> a écrit :On Tue, Mar 30, 2021 at 12:02:45PM +0900, Michael Paquier wrote:\n> Okay.  So I have looked at that stuff in details, and after fixing\n> all the issues reported upthread in the code, docs and tests I am\n> finishing with the attached.  The tests have been moved out of\n> src/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\n> positive and negative tests (used the trick with plpgsql for the\n> latter to avoid the dump of the extension test_pg_dump or any data\n> related to it).\n\nI have double-checked this stuff this morning, and did not notice any\nissues.  So, applied.Thanks a lot. I've seen your email yesterday but had too much work going on to find the time to test your patch. 
Anyway, I'll take a look at how you coded the TAP test to better understand that part and to be able to do it myself next time.Thanks again.", "msg_date": "Wed, 31 Mar 2021 08:14:43 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Wed, Mar 31, 2021 at 09:37:44AM +0900, Michael Paquier wrote:\n> On Tue, Mar 30, 2021 at 12:02:45PM +0900, Michael Paquier wrote:\n> > Okay. So I have looked at that stuff in details, and after fixing\n> > all the issues reported upthread in the code, docs and tests I am\n> > finishing with the attached. The tests have been moved out of\n> > src/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\n> > positive and negative tests (used the trick with plpgsql for the\n> > latter to avoid the dump of the extension test_pg_dump or any data\n> > related to it).\n> \n> I have double-checked this stuff this morning, and did not notice any\n> issues. So, applied.\n\nI noticed the patch's behavior for relations that are members of non-dumped\nextensions and are also registered using pg_extension_config_dump(). It\ndepends on the schema:\n\n- If extschema='public', \"pg_dump -e plpgsql\" makes no mention of the\n relations.\n- If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n commands to dump the relation data. This surprised me. (The\n --schema=public argument causes selectDumpableNamespace() to set\n nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n- If extschema is not any sort of built-in schema, \"pg_dump -e plpgsql\"\n includes commands to dump the relation data. This surprised me.\n\nI'm attaching a test case patch that demonstrates this. 
Is this behavior\nintentional?", "msg_date": "Sun, 4 Apr 2021 15:08:02 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Sun, Apr 04, 2021 at 03:08:02PM -0700, Noah Misch wrote:\n> On Wed, Mar 31, 2021 at 09:37:44AM +0900, Michael Paquier wrote:\n> > On Tue, Mar 30, 2021 at 12:02:45PM +0900, Michael Paquier wrote:\n> > > Okay. So I have looked at that stuff in details, and after fixing\n> > > all the issues reported upthread in the code, docs and tests I am\n> > > finishing with the attached. The tests have been moved out of\n> > > src/bin/pg_dump/ to src/test/modules/test_pg_dump/, and include both\n> > > positive and negative tests (used the trick with plpgsql for the\n> > > latter to avoid the dump of the extension test_pg_dump or any data\n> > > related to it).\n> > \n> > I have double-checked this stuff this morning, and did not notice any\n> > issues. So, applied.\n> \n> I noticed the patch's behavior for relations that are members of non-dumped\n> extensions and are also registered using pg_extension_config_dump(). It\n> depends on the schema:\n> \n> - If extschema='public', \"pg_dump -e plpgsql\" makes no mention of the\n> relations.\n> - If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n> commands to dump the relation data. This surprised me. (The\n> --schema=public argument causes selectDumpableNamespace() to set\n> nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n> - If extschema is not any sort of built-in schema, \"pg_dump -e plpgsql\"\n> includes commands to dump the relation data. This surprised me.\n> \n> I'm attaching a test case patch that demonstrates this. 
Is this behavior\n> intentional?\n\nI think this is a bug in $SUBJECT.\n\n\n", "msg_date": "Wed, 7 Apr 2021 19:42:11 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Wed, Apr 07, 2021 at 07:42:11PM -0700, Noah Misch wrote:\n> I think this is a bug in $SUBJECT.\n\nSorry for the late reply. I intend to answer to that and this is\nregistered as an open item, but I got busy with some other things.\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 12:12:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Sun, Apr 04, 2021 at 03:08:02PM -0700, Noah Misch wrote:\n> I noticed the patch's behavior for relations that are members of non-dumped\n> extensions and are also registered using pg_extension_config_dump(). It\n> depends on the schema:\n> \n> - If extschema='public', \"pg_dump -e plpgsql\" makes no mention of the\n> relations.\n\nThis one is expected to me. The caller of pg_dump is not specifying\nthe extension that should be dumped, hence it looks logic to me to not\ndump the tables marked as pg_extension_config_dump() part as an\nextension not listed.\n\n> - If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n> commands to dump the relation data. This surprised me. (The\n> --schema=public argument causes selectDumpableNamespace() to set\n> nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n\nThis one would be expected to me. Per the discussion of upthread, we\nwant --schema and --extension to be two separate and exclusive\nswitches. So, once the caller specifies --schema we should dump the\ncontents of the schema, even if its extension is not listed with\n--extension. Anyway, the behavior to select if a schema can be dumped\nor not is not really related to this new code, right? 
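[Editorial aside: the three observations reported above reduce to one predicate — whether the config table's data is dumped depends only on its schema, never on whether the extension was selected. A toy reproduction with invented names, not pg_dump's real logic:]

```python
def config_table_data_is_dumped(ext_schema, listed_schemas=(),
                                builtin_schemas=("public", "pg_catalog")):
    """Toy reproduction of the reported (pre-fix) behavior: the --extension
    list plays no role; only the schema decides (invented names)."""
    if listed_schemas:
        # --schema=... given: data is dumped when the schema is listed.
        return ext_schema in listed_schemas
    # No --schema: data is dumped unless the schema is a built-in one.
    return ext_schema not in builtin_schemas
```

The surprise is visible in the second and third cases: the data escapes into the dump even though `-e` never selected the owning extension.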
And \"public\" is\na mixed beast, being treated as a system object and a user object by\nselectDumpableNamespace().\n\n> - If extschema is not any sort of built-in schema, \"pg_dump -e plpgsql\"\n> includes commands to dump the relation data. This surprised me.\n\nHmm. But you are right that this one is inconsistent with the first\ncase where the extension is not listed. I would have said that as the\nextension is not directly specified and that the schema is not passed\ndown either then we should not dump it at all, but this combination\nactually does so. Maybe we should add an extra logic into\nselectDumpableNamespace(), close to the end of it, to decide if,\ndepending on the contents of the extensions to include, we should dump\nits associated schema or not? Another solution would be to make use\nof schema_include_oids to filter out the schemas we don't want, but\nthat would mean that --extension gets priority over --schema or\n--table but we did ot want that per the original discussion.\n--\nMichael", "msg_date": "Tue, 13 Apr 2021 14:43:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Tue, Apr 13, 2021 at 02:43:11PM +0900, Michael Paquier wrote:\n> On Sun, Apr 04, 2021 at 03:08:02PM -0700, Noah Misch wrote:\n> > I noticed the patch's behavior for relations that are members of non-dumped\n> > extensions and are also registered using pg_extension_config_dump(). It\n> > depends on the schema:\n> > \n> > - If extschema='public', \"pg_dump -e plpgsql\" makes no mention of the\n> > relations.\n> \n> This one is expected to me. 
The caller of pg_dump is not specifying\n> the extension that should be dumped, hence it looks logic to me to not\n> dump the tables marked as pg_extension_config_dump() part as an\n> extension not listed.\n\nAgreed.\n\n> > - If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n> > commands to dump the relation data. This surprised me. (The\n> > --schema=public argument causes selectDumpableNamespace() to set\n> > nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n> \n> This one would be expected to me. Per the discussion of upthread, we\n> want --schema and --extension to be two separate and exclusive\n> switches. So, once the caller specifies --schema we should dump the\n> contents of the schema, even if its extension is not listed with\n> --extension.\n\nI may disagree with this later, but I'm setting it aside for the moment.\n\n> Anyway, the behavior to select if a schema can be dumped\n> or not is not really related to this new code, right? And \"public\" is\n> a mixed beast, being treated as a system object and a user object by\n> selectDumpableNamespace().\n\nCorrect.\n\n> > - If extschema is not any sort of built-in schema, \"pg_dump -e plpgsql\"\n> > includes commands to dump the relation data. This surprised me.\n> \n> Hmm. But you are right that this one is inconsistent with the first\n> case where the extension is not listed. I would have said that as the\n> extension is not directly specified and that the schema is not passed\n> down either then we should not dump it at all, but this combination\n> actually does so. Maybe we should add an extra logic into\n> selectDumpableNamespace(), close to the end of it, to decide if,\n> depending on the contents of the extensions to include, we should dump\n> its associated schema or not? 
Another solution would be to make use\n> of schema_include_oids to filter out the schemas we don't want, but\n> that would mean that --extension gets priority over --schema or\n> --table but we did not want that per the original discussion.\n\nNo, neither of those solutions apply. \"pg_dump -e plpgsql\" selects all\nschemas. That is consistent with its documentation; plain \"pg_dump\" has long\nselected all schemas, and the documentation for \"-e\" does not claim that \"-e\"\nchanges the selection of non-extension objects. We're not going to solve the\nproblem by making selectDumpableNamespace() select some additional aspect of\nschema foo, because it's already selecting every available aspect. Like you\nsay, we're also not going to solve the problem by removing some existing\naspect of schema foo from selection, because that would remove dump material\nunrelated to any extension.\n\nThis isn't a problem of selecting schemas for inclusion in the dump. This is\na problem of associating extensions with their pg_extension_config_dump()\nrelations and omitting those extension-member relations when \"-e\" causes\nomission of the extension.\n\n\n", "msg_date": "Tue, 13 Apr 2021 08:00:34 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Tue, Apr 13, 2021 at 08:00:34AM -0700, Noah Misch wrote:\n> On Tue, Apr 13, 2021 at 02:43:11PM +0900, Michael Paquier wrote:\n>>> - If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n>>> commands to dump the relation data. This surprised me. (The\n>>> --schema=public argument causes selectDumpableNamespace() to set\n>>> nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n>> \n>> This one would be expected to me. Per the discussion of upthread, we\n>> want --schema and --extension to be two separate and exclusive\n>> switches. 
So, once the caller specifies --schema we should dump the\n>> contents of the schema, even if its extension is not listed with\n>> --extension.\n> \n> I may disagree with this later, but I'm setting it aside for the moment.\n>\n> This isn't a problem of selecting schemas for inclusion in the dump. This is\n> a problem of associating extensions with their pg_extension_config_dump()\n> relations and omitting those extension-member relations when \"-e\" causes\n> omission of the extension.\n\nAt code level, the decision to dump the data of any extension's\ndumpable table is done in processExtensionTables(). I have to admit\nthat your feeling here keeps the code simpler than what I have been\nthinking if we apply an extra filtering based on the list of\nextensions in this code path. So I can see the value in your argument\nto not dump at all the data of an extension's dumpable table as long\nas its extension is not listed, and this, even if its schema is\nexplicitly listed.\n\nSo I got down to make the behavior more consistent with the patch\nattached. This passes your case. It is worth noting that if a\ntable is part of a schema created by an extension, but that the table\nis not dependent on the extension, we would still dump its data if\nusing --schema with this table's schema while the extension is not\npart of the list from --extension. 
In the attached, that's just the\nextra test with without_extension_implicit_schema.\n\n(By the way, good catch with the duplicated --no-sync in the new\ntests.)\n\nWhat do you think?\n--\nMichael", "msg_date": "Wed, 14 Apr 2021 10:38:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Wed, Apr 14, 2021 at 10:38:17AM +0900, Michael Paquier wrote:\n> On Tue, Apr 13, 2021 at 08:00:34AM -0700, Noah Misch wrote:\n> > On Tue, Apr 13, 2021 at 02:43:11PM +0900, Michael Paquier wrote:\n> >>> - If extschema='public', \"pg_dump -e plpgsql --schema=public\" includes\n> >>> commands to dump the relation data. This surprised me. (The\n> >>> --schema=public argument causes selectDumpableNamespace() to set\n> >>> nsinfo->dobj.dump=DUMP_COMPONENT_ALL instead of DUMP_COMPONENT_ACL.)\n\n> > This isn't a problem of selecting schemas for inclusion in the dump. This is\n> > a problem of associating extensions with their pg_extension_config_dump()\n> > relations and omitting those extension-member relations when \"-e\" causes\n> > omission of the extension.\n> \n> At code level, the decision to dump the data of any extension's\n> dumpable table is done in processExtensionTables(). I have to admit\n> that your feeling here keeps the code simpler than what I have been\n> thinking if we apply an extra filtering based on the list of\n> extensions in this code path. So I can see the value in your argument\n> to not dump at all the data of an extension's dumpable table as long\n> as its extension is not listed, and this, even if its schema is\n> explicitly listed.\n> \n> So I got down to make the behavior more consistent with the patch\n> attached. 
This passes your case.\n\nYes.\n\n> It is worth noting that if a\n> table is part of a schema created by an extension, but that the table\n> is not dependent on the extension, we would still dump its data if\n> using --schema with this table's schema while the extension is not\n> part of the list from --extension. In the attached, that's just the\n> extra test with without_extension_implicit_schema.\n\nThat's consistent with v13, and it seems fine. I've not used a non-test\nextension that creates a schema.\n\n> --- a/src/test/modules/test_pg_dump/t/001_base.pl\n> +++ b/src/test/modules/test_pg_dump/t/001_base.pl\n> @@ -208,6 +208,30 @@ my %pgdump_runs = (\n> \t\t\t'pg_dump', '--no-sync', \"--file=$tempdir/without_extension.sql\",\n> \t\t\t'--extension=plpgsql', 'postgres',\n> \t\t],\n> +\t},\n> +\n> +\t# plgsql in the list of extensions blocks the dump of extension\n> +\t# test_pg_dump.\n> +\twithout_extension_explicit_schema => {\n> +\t\tdump_cmd => [\n> +\t\t\t'pg_dump',\n> +\t\t\t'--no-sync',\n> +\t\t\t\"--file=$tempdir/without_extension_explicit_schema.sql\",\n> +\t\t\t'--extension=plpgsql',\n> +\t\t\t'--schema=public',\n> +\t\t\t'postgres',\n> +\t\t],\n> +\t},\n> +\n> +\twithout_extension_implicit_schema => {\n> +\t\tdump_cmd => [\n> +\t\t\t'pg_dump',\n> +\t\t\t'--no-sync',\n> +\t\t\t\"--file=$tempdir/without_extension_implicit_schema.sql\",\n> +\t\t\t'--extension=plpgsql',\n> +\t\t\t'--schema=regress_pg_dump_schema',\n> +\t\t\t'postgres',\n> +\t\t],\n> \t},);\n\nThe name \"without_extension_explicit_schema\" arose because that test differs\nfrom the \"without_extension\" test by adding --schema=public. The test named\n\"without_extension_implicit_schema\" differs from \"without_extension\" by adding\n--schema=regress_pg_dump_schema, so the word \"implicit\" feels not-descriptive\nof the test. I recommend picking a different name. 
Other than that, the\nchange looks good.\n\n\n", "msg_date": "Wed, 14 Apr 2021 05:31:15 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Wed, Apr 14, 2021 at 05:31:15AM -0700, Noah Misch wrote:\n> The name \"without_extension_explicit_schema\" arose because that test differs\n> from the \"without_extension\" test by adding --schema=public. The test named\n> \"without_extension_implicit_schema\" differs from \"without_extension\" by adding\n> --schema=regress_pg_dump_schema, so the word \"implicit\" feels not-descriptive\n> of the test. I recommend picking a different name. Other than that, the\n> change looks good.\n\nThanks for the review. I have picked up \"internal\" instead, as\nthat's the schema created within the extension itself, and applied the\npatch.\n--\nMichael", "msg_date": "Thu, 15 Apr 2021 16:58:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extensions not dumped when --schema is used" }, { "msg_contents": "On Thu, Apr 15, 2021 at 09:58, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Wed, Apr 14, 2021 at 05:31:15AM -0700, Noah Misch wrote:\n> > The name \"without_extension_explicit_schema\" arose because that test\n> differs\n> > from the \"without_extension\" test by adding --schema=public. The test\n> named\n> > \"without_extension_implicit_schema\" differs from \"without_extension\" by\n> adding\n> > --schema=regress_pg_dump_schema, so the word \"implicit\" feels\n> not-descriptive\n> > of the test. I recommend picking a different name. Other than that, the\n> > change looks good.\n>\n> Thanks for the review. I have picked up \"internal\" instead, as\n> that's the schema created within the extension itself, and applied the\n> patch.\n>\n\nThanks for the work on this. 
I didn't understand everything on the issue,\nwhich is why I didn't say a thing, but I followed the thread, and very much\nappreciated the fix.\n\n\n-- \nGuillaume.", "msg_date": "Thu, 15 Apr 2021 10:28:49 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Extensions not dumped when --schema is used" } ]
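To summarize the behavior the thread above settled on, the pg_dump flag combinations under discussion can be written out as a small sketch. This is purely illustrative (the dump file names are placeholders and the database name follows the thread's examples); it only assembles the command lines and does not execute them, since running them would require a live cluster with the test_pg_dump extension installed:

```python
# Illustrative only: the pg_dump invocations discussed in the thread,
# assembled as argument lists.  None of these commands is executed here.
cases = {
    # --extension=plpgsql alone: data of test_pg_dump's
    # pg_extension_config_dump() tables must not be dumped,
    # because their extension is not listed.
    "without_extension": ["--extension=plpgsql"],
    # Per the final behavior, naming the schema explicitly still does
    # not dump table data belonging to an unlisted extension.
    "without_extension_explicit_schema": ["--extension=plpgsql",
                                          "--schema=public"],
    "without_extension_internal_schema": ["--extension=plpgsql",
                                          "--schema=regress_pg_dump_schema"],
}

commands = {
    name: ["pg_dump", "--no-sync", f"--file={name}.sql", *flags, "postgres"]
    for name, flags in cases.items()
}

for name, cmd in commands.items():
    print(name, "=>", " ".join(cmd))
```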
[ { "msg_contents": "If I run pgindent on a built tree on macos, I get this error\n\nFailure in ./src/backend/utils/probes.h: Error@375: Stuff missing from \nend of file\n\nThe file in question is built by the dtrace command. I have attached it \nhere.\n\nIs this something to fix in pgindent? Or should this file be excluded, \nsince it's generated?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 20 May 2020 11:52:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pgindent vs dtrace on macos" }, { "msg_contents": "> On 20 May 2020, at 11:52, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> Or should this file be excluded, since it's generated?\n\nThat would get my vote. Generated files where we don't control the generator\ncan be excluded.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 20 May 2020 12:01:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> If I run pgindent on a built tree on macos, I get this error\n> Failure in ./src/backend/utils/probes.h: Error@375: Stuff missing from \n> end of file\n> The file in question is built by the dtrace command. I have attached it \n> here.\n> Is this something to fix in pgindent? Or should this file be excluded, \n> since it's generated?\n\nHm, there's nothing obviously wrong with the file. But since it's\ngenerated by code not under our control, we should exclude it.\nAnd given that, it's probably not worth figuring out why it breaks\npgindent.\n\nOn a closely related point: I was confused for awhile on Monday\nafternoon, wondering why the built tarballs didn't match my local\ntree. 
I eventually realized that when I'd run pgindent on Saturday,\nit had reformatted some generated files such as\nsrc/bin/psql/sql_help.h, causing those not to match the freshly-made\nones in the tarball. I wonder if we should make an effort to ensure\nthat our generated .h and .c files always satisfy pgindent. If not,\nwe probably should exclude them too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 20 May 2020 09:56:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "On 2020-05-20 15:56, Tom Lane wrote:\n> On a closely related point: I was confused for awhile on Monday\n> afternoon, wondering why the built tarballs didn't match my local\n> tree. I eventually realized that when I'd run pgindent on Saturday,\n> it had reformatted some generated files such as\n> src/bin/psql/sql_help.h, causing those not to match the freshly-made\n> ones in the tarball. I wonder if we should make an effort to ensure\n> that our generated .h and .c files always satisfy pgindent.\n\nWe should generally try to do that, if only so that they don't appear \nweird and random when looking at them.\n\nI think in the past it would have been very difficult for a generation \nscript to emulate pgindent's weird un-indentation logic on trailing \nlines, but that shouldn't be a problem anymore.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 11:29:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "> On 22 May 2020, at 11:29, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-05-20 15:56, Tom Lane wrote:\n\n>> I wonder if we should make an effort to ensure\n>> that our generated .h and .c files always satisfy pgindent.\n> \n> We should generally try 
to do that, if only so that they don't appear weird and random when looking at them.\n\nThe attached patch fixes the generation of sql_help.h and perl_opmask.h to make\nsure they conform to pgindent. Those were the only file I got diffs in after a\npgindent run apart from fmgrprotos.h which gave the below:\n\n@@ -912,7 +912,7 @@\n extern Datum interval_mul(PG_FUNCTION_ARGS);\n extern Datum pg_typeof(PG_FUNCTION_ARGS);\n extern Datum ascii(PG_FUNCTION_ARGS);\n-extern Datum chr(PG_FUNCTION_ARGS);\n+extern Datum chr (PG_FUNCTION_ARGS);\n extern Datum repeat(PG_FUNCTION_ARGS);\n extern Datum similar_escape(PG_FUNCTION_ARGS);\n extern Datum mul_d_interval(PG_FUNCTION_ARGS);\n@@ -968,7 +968,7 @@\n extern Datum bitsubstr_no_len(PG_FUNCTION_ARGS);\n extern Datum numeric_in(PG_FUNCTION_ARGS);\n extern Datum numeric_out(PG_FUNCTION_ARGS);\n-extern Datum numeric(PG_FUNCTION_ARGS);\n+extern Datum numeric (PG_FUNCTION_ARGS);\n extern Datum numeric_abs(PG_FUNCTION_ARGS);\n extern Datum numeric_sign(PG_FUNCTION_ARGS);\n extern Datum numeric_round(PG_FUNCTION_ARGS);\n\nNot sure what pgindent is doing there, but it seems hard to address in the\ngenerator.\n\nprobes.h is also added to the exclusion list in the patch. On that note, I\nwonder if we should add the plperl .xs generated files as exclusions too since\nwe don't control that generator?\n\ncheers ./daniel", "msg_date": "Wed, 16 Sep 2020 23:17:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The attached patch fixes the generation of sql_help.h and perl_opmask.h to make\n> sure they conform to pgindent. Those were the only file I got diffs in after a\n> pgindent run apart from fmgrprotos.h which gave the below:\n\nHmm, I seem to recall there were more when this happened to me back in\nMay. 
But in any case, fixing these is an improvement.\n\n> Not sure what pgindent is doing there, but it seems hard to address in the\n> generator.\n\nI think the issue is that pgindent believes \"numeric\" and \"chr\" are\ntypedefs. (The regex code can be blamed for \"chr\", but I'm not quite\nsure where \"numeric\" is coming from.) Observe that it also messes up\nthe definitions of those two functions, not only their extern\ndeclarations.\n\nWe could try adding those names to the typedef exclusion list in pgindent,\nbut that could easily make things worse not better overall. On balance\nI'd say this particular behavior is a pgindent bug, and if anybody is hot\nto remove the discrepancy then they ought to try to fix pgindent not the\nfmgrprotos generator.\n\n> probes.h is also added to the exclusion list in the patch.\n\nCheck.\n\n> On that note, I\n> wonder if we should add the plperl .xs generated files as exclusions too since\n> we don't control that generator?\n\nNot an issue I don't think; pgindent won't touch extensions other than\n.c and .h.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 19:55:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "On 2020-Sep-16, Tom Lane wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n> > Not sure what pgindent is doing there, but it seems hard to address in the\n> > generator.\n> \n> I think the issue is that pgindent believes \"numeric\" and \"chr\" are\n> typedefs. 
(The regex code can be blamed for \"chr\", but I'm not quite\n> sure where \"numeric\" is coming from.)\n\nIt's in src/interfaces/ecpg/include/pgtypes_numeric.h\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 16 Sep 2020 21:03:34 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "I wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> The attached patch fixes the generation of sql_help.h and perl_opmask.h to make\n>> sure they conform to pgindent. Those were the only file I got diffs in after a\n>> pgindent run apart from fmgrprotos.h which gave the below:\n\n> Hmm, I seem to recall there were more when this happened to me back in\n> May. But in any case, fixing these is an improvement.\n\nExperimenting with this patch soon found one additional case: sql_help.c,\nalso emitted by create_help.pl, also needs some whitespace help.\nI do not recall if there are other places, but fixing these is\nsurely a step forward. I fixed the sql_help.c output and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 20:34:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Sep-16, Tom Lane wrote:\n>> I think the issue is that pgindent believes \"numeric\" and \"chr\" are\n>> typedefs. (The regex code can be blamed for \"chr\", but I'm not quite\n>> sure where \"numeric\" is coming from.)\n\n> It's in src/interfaces/ecpg/include/pgtypes_numeric.h\n\nIt strikes me that a low-cost workaround would be to rename these\nC functions. 
There's no law that their C names must match the\nSQL names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 20:39:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": ">>> The attached patch fixes the generation of sql_help.h and perl_opmask.h to make\n>>> sure they conform to pgindent. Those were the only file I got diffs in after a\n>>> pgindent run apart from fmgrprotos.h which gave the below:\n> \n>> Hmm, I seem to recall there were more when this happened to me back in\n>> May. But in any case, fixing these is an improvement.\n> \n> Experimenting with this patch soon found one additional case: sql_help.c,\n> also emitted by create_help.pl, also needs some whitespace help.\n> I do not recall if there are other places, but fixing these is\n> surely a step forward. I fixed the sql_help.c output and pushed it.\n\nThanks! I think a bug for .c and .h files with matching names in my small\nscript testing for discrepancies hid that one.\n\n>> On that note, I\n>> wonder if we should add the plperl .xs generated files as exclusions too since\n>> we don't control that generator?\n> \n> Not an issue I don't think; pgindent won't touch extensions other than\n> .c and .h.\n\nSorry for being unclear, I meant the generated .c counterpart of the .xs file.\nSo something like the below:\n\n--- a/src/tools/pgindent/exclude_file_patterns\n+++ b/src/tools/pgindent/exclude_file_patterns\n@@ -5,6 +5,8 @@\n /ecpg/test/expected/\n /snowball/libstemmer/\n /pl/plperl/ppport\\.h$\n+/pl/plperl/SPI\\.c$\n+/pl/plperl/Util\\.c$\n /jit/llvmjit\\.h$\n /utils/probes\\.h$\n /tmp_check/\n\ncheers ./daniel\n\n", "msg_date": "Thu, 17 Sep 2020 10:15:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On that note, I\n>>> wonder if we should 
add the plperl .xs generated files as exclusions too since\n>>> we don't control that generator?\n\n>> Not an issue I don't think; pgindent won't touch extensions other than\n>> .c and .h.\n\n> Sorry for being unclear, I meant the generated .c counterpart of the .xs file.\n\nOh, I see. Not sure. For myself, I only care about files that survive\n\"make distclean\" and get into a tarball, which those don't. On the\nother hand, if we fixed perlchunks.h and plperl_opmask.h then it's hard\nto argue with worrying about SPI.c and Util.c, as those all have the\nsame lifespan.\n\nI tried redoing the experiment of pgindenting all the tarball member\nfiles, and found that we still have diffs in these generated files:\n\nsrc/backend/utils/sort/qsort_tuple.c\nsrc/common/kwlist_d.h \nsrc/interfaces/ecpg/preproc/c_kwlist_d.h \nsrc/interfaces/ecpg/preproc/ecpg_kwlist_d.h\nsrc/pl/plpgsql/src/pl_reserved_kwlist_d.h \nsrc/pl/plpgsql/src/pl_unreserved_kwlist_d.h\nsrc/pl/plpgsql/src/plerrcodes.h\nsrc/pl/plpython/spiexceptions.h\nsrc/pl/tcl/pltclerrcodes.h\n\nTo my eyes, what pgindent does to the *kwlist_d.h files is rather ugly,\nso I'm inclined to make them exclusions rather than adjust the generator\nscript. The others seem like we could tweak the relevant generators\nfairly easily.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 11:39:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "I went ahead and added SPI.c, Util.c, and the *kwlist_d.h headers\nto the exclusion list. I then tried to run pgindent in a completely\nbuilt-out development directory (not distclean'ed, which is the way\nI'd always used it before). This found a few more exclusions we\nneed to have if we want to allow for that usage. 
Pushed the lot.\n\nWe still have to deal with\n\nsrc/backend/utils/sort/qsort_tuple.c\nsrc/pl/plpgsql/src/plerrcodes.h\nsrc/pl/plpython/spiexceptions.h\nsrc/pl/tcl/pltclerrcodes.h\n\nif we want to be entirely clean about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Sep 2020 14:20:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "I wrote:\n> We still have to deal with\n> src/backend/utils/sort/qsort_tuple.c\n> src/pl/plpgsql/src/plerrcodes.h\n> src/pl/plpython/spiexceptions.h\n> src/pl/tcl/pltclerrcodes.h\n> if we want to be entirely clean about this.\n\nI took care of those, so I think we're done here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 13:59:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "> On 21 Sep 2020, at 19:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> We still have to deal with\n>> src/backend/utils/sort/qsort_tuple.c\n>> src/pl/plpgsql/src/plerrcodes.h\n>> src/pl/plpython/spiexceptions.h\n>> src/pl/tcl/pltclerrcodes.h\n>> if we want to be entirely clean about this.\n> \n> I took care of those, so I think we're done here.\n\nAha, thanks! They were still on my TODO but I got distracted by a shiny object\nor two. Thanks for fixing.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 21 Sep 2020 20:14:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Oh wait, I forgot about the fmgrprotos.h discrepancy.\n\nI wrote:\n> It strikes me that a low-cost workaround would be to rename these\n> C functions. 
There's no law that their C names must match the\n> SQL names.\n\nHere's a proposed patch to fix it that way.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 21 Sep 2020 14:55:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "On 2020-Sep-21, Tom Lane wrote:\n\n> Oh wait, I forgot about the fmgrprotos.h discrepancy.\n> \n> I wrote:\n> > It strikes me that a low-cost workaround would be to rename these\n> > C functions. There's no law that their C names must match the\n> > SQL names.\n> \n> Here's a proposed patch to fix it that way.\n\npgtypes_numeric.h still contains\n\ntypedef struct\n{\n int ndigits; /* number of digits in digits[] - can be 0! */\n int weight; /* weight of first digit */\n int rscale; /* result scale */\n int dscale; /* display scale */\n int sign; /* NUMERIC_POS, NUMERIC_NEG, or NUMERIC_NAN */\n NumericDigit *buf; /* start of alloc'd space for digits[] */\n NumericDigit *digits; /* decimal digits */\n} numeric;\n\n... isn't this more likely to create a typedef entry than merely a\nfunction name?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 21 Sep 2020 16:08:04 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Sep-21, Tom Lane wrote:\n>> Here's a proposed patch to fix it that way.\n\n> pgtypes_numeric.h still contains\n> typedef struct\n> {\n> } numeric;\n\n> ... isn't this more likely to create a typedef entry than merely a\n> function name?\n\nWell, yeah, it *is* a typedef. My proposal is to rename the C function\nto avoid the conflict, rather than renaming the typedef. Given the\nsmall number of direct calls (none), that's a lot less work. 
Also,\nI think pgtypes_numeric.h is exposed to ecpg client code, so changing\nthat typedef's name could be quite problematic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Sep 2020 15:21:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "On 2020-Sep-21, Tom Lane wrote:\n\n> > ... isn't this more likely to create a typedef entry than merely a\n> > function name?\n> \n> Well, yeah, it *is* a typedef. My proposal is to rename the C function\n> to avoid the conflict, rather than renaming the typedef. Given the\n> small number of direct calls (none), that's a lot less work. Also,\n> I think pgtypes_numeric.h is exposed to ecpg client code, so changing\n> that typedef's name could be quite problematic.\n\nAh, of course.\n\nThe idea of adding the names to pgindent's %blacklist results in severe\nuglification, particularly in the regex code, so +1 for your workaround.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 21 Sep 2020 16:37:34 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "> On 21 Sep 2020, at 20:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Oh wait, I forgot about the fmgrprotos.h discrepancy.\n> \n> I wrote:\n>> It strikes me that a low-cost workaround would be to rename these\n>> C functions. There's no law that their C names must match the\n>> SQL names.\n> \n> Here's a proposed patch to fix it that way.\n\n+1 on this patch. Do you think it's worth adding a note about this in the\ndocumentation to save the next one staring at this a few minutes? 
Something\nalong the lines of:\n\n--- a/src/tools/pgindent/README\n+++ b/src/tools/pgindent/README\n@@ -110,6 +110,9 @@ Sometimes, if pgindent or perltidy produces odd-looking output, it's because\n of minor bugs like extra commas. Don't hesitate to clean that up while\n you're at it.\n\n+If an exported function shares a name with a typedef, the header file with the\n+prototype can get incorrect spacing for the function.\n+\n\ncheers ./daniel\n\n", "msg_date": "Tue, 22 Sep 2020 10:25:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 21 Sep 2020, at 20:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a proposed patch to fix it that way.\n\n> +1 on this patch. Do you think it's worth adding a note about this in the\n> documentation to save the next one staring at this a few minutes? Something\n> along the lines of:\n\n> +If an exported function shares a name with a typedef, the header file with the\n> +prototype can get incorrect spacing for the function.\n\nAFAIK, a function name that's the same as a typedef name will get\nmisformatted whether it's exported or not; pgindent doesn't really\nknow the difference. But yeah, we could add something about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Sep 2020 10:18:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" }, { "msg_contents": "After further thought about this, I concluded that a much better idea\nis to just exclude fmgrprotos.h from the pgindent run. While renaming\nthese two functions may or may not be worthwhile, it doesn't provide\nany sort of permanent fix for fmgrprotos.h, because new typedef conflicts\ncould arise at any time. 
(The fact that the typedef list isn't fully\nunder our control, thanks to contributions from system headers, makes\nthis a bigger risk than it might appear.)\n\nSo I did that, and also added a README comment along the lines you\nsuggested.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Sep 2020 11:35:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent vs dtrace on macos" } ]
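The exclusion mechanism used throughout the thread above — lines in src/tools/pgindent/exclude_file_patterns matched as patterns against file paths — can be sketched roughly as follows. This is only an illustration of the idea, not pgindent's actual Perl implementation; the pattern list reproduces entries quoted in the thread:

```python
import re

# Rough sketch of how pgindent's exclude_file_patterns entries behave:
# each entry is a regular expression, and a file is skipped when any
# pattern matches somewhere in its path.  (pgindent itself is Perl;
# this Python version is purely illustrative.)
exclude_patterns = [
    r'/pl/plperl/ppport\.h$',
    r'/pl/plperl/SPI\.c$',
    r'/pl/plperl/Util\.c$',
    r'/jit/llvmjit\.h$',
    r'/utils/probes\.h$',
    r'/tmp_check/',
]

def is_excluded(path):
    return any(re.search(pattern, path) for pattern in exclude_patterns)

# The dtrace-generated header that started the thread is now skipped:
print(is_excluded('src/backend/utils/probes.h'))       # True
print(is_excluded('src/backend/utils/adt/numeric.c'))  # False
```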
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/12/tutorial-join.html\nDescription:\n\nThe tutorial about joins makes the following statement about the explicit\nJOIN operator:\r\n\r\n> This syntax is not as commonly used as the one above\r\n\r\nI think in 2020 this claim is no longer true, and I would love to see the\nmanual prefer the \"modern\" explicit JOIN operator rather than sticking to\nthe ancient implicit joins in the WHERE clause.", "msg_date": "Wed, 20 May 2020 10:07:03 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Thu, May 21, 2020 at 1:37 AM PG Doc comments form\n<noreply@postgresql.org> wrote:\n> The following documentation comment has been logged on the website:\n>\n> Page: https://www.postgresql.org/docs/12/tutorial-join.html\n> Description:\n>\n> The tutorial about joins makes the following statement about the explicit\n> JOIN operator:\n>\n> > This syntax is not as commonly used as the one above\n>\n> I think in 2020 this claim is no longer true, and I would love to see the\n> manual prefer the \"modern\" explicit JOIN operator rather than sticking to\n> the ancient implicit joins in the WHERE clause.\n\n+1\n\nThe \"new\" syntax is 28 years old, from SQL 92. I don't see too many\nSQL 86 joins. 
Would you like to write a documentation patch?\n\n\n", "msg_date": "Thu, 21 May 2020 09:56:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 20.05.20 23:56, Thomas Munro wrote:\n> On Thu, May 21, 2020 at 1:37 AM PG Doc comments form\n> <noreply@postgresql.org> wrote:\n>> The following documentation comment has been logged on the website:\n>>\n>> Page: https://www.postgresql.org/docs/12/tutorial-join.html\n>> Description:\n>>\n>> The tutorial about joins makes the following statement about the explicit\n>> JOIN operator:\n>>\n>>> This syntax is not as commonly used as the one above\n>> I think in 2020 this claim is no longer true, and I would love to see the\n>> manual prefer the \"modern\" explicit JOIN operator rather than sticking to\n>> the ancient implicit joins in the WHERE clause.\n> +1\n>\n> The \"new\" syntax is 28 years old, from SQL 92. I don't see too many\n> SQL 86 joins. Would you like to write a documentation patch?\n>\n>\nThe attached patch\n\n- prefers the explicit join-syntax over the implicit one and explains \nthe keywords of the explicit syntax\n\n- uses a more accurate definition of 'join'\n\n- separates <programlisting> and <screen> tags\n\n- shifts <indexterm> definitions outside of <para> to get a better \nrendering in PDF\n\n- adds a note concerning IDs and foreign keys\n\n\n--\n\nJ. Purtz", "msg_date": "Wed, 27 May 2020 10:29:03 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Wed, May 27, 2020 at 8:29 PM Jürgen Purtz <juergen@purtz.de> wrote:\n> > The \"new\" syntax is 28 years old, from SQL 92. I don't see too many\n> > SQL 86 joins. 
Would you like to write a documentation patch?\n> >\n> >\n> The attached patch\n>\n> - prefers the explicit join-syntax over the implicit one and explains\n> the keywords of the explicit syntax\n>\n> - uses a more accurate definition of 'join'\n>\n> - separates <programlisting> and <screen> tags\n>\n> - shifts <indexterm> definitions outside of <para> to get a better\n> rendering in PDF\n>\n> - adds a note concerning IDs and foreign keys\n\nHi Jürgen,\n\nPlease add to the commitfest app, so we don't lose track of it.\n\n\n", "msg_date": "Sat, 18 Jul 2020 11:48:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 2020-05-27 10:29, Jürgen Purtz wrote:\n> The attached patch\n> \n> - prefers the explicit join-syntax over the implicit one and explains\n> the keywords of the explicit syntax\n> \n> - uses a more accurate definition of 'join'\n> \n> - separates <programlisting> and <screen> tags\n> \n> - shifts <indexterm> definitions outside of <para> to get a better\n> rendering in PDF\n> \n> - adds a note concerning IDs and foreign keys\n\nI have committed some parts of this patch:\n\n > - separates <programlisting> and <screen> tags\n\n > - shifts <indexterm> definitions outside of <para> to get a better\n > rendering in PDF\n\nas well as the change of W1/W2 to w1/w2. (Note that there is also \nsrc/tutorial/basics.source that should be adjusted in the same way.)\n\nFor the remaining patch I have a couple of concerns:\n\n > <para>\n > Attempt to determine the semantics of this query when the\n > - <literal>WHERE</literal> clause is omitted.\n > + <literal>ON</literal> clause is omitted.\n > </para>\n > </formalpara>\n\nThis no longer works.\n\nIn general, I agree that some more emphasis on the JOIN syntax is okay. 
\nBut I think the order in which the tutorial has taught it so far is \nokay: First you do it the manual way, then you learn the more abstract way.\n\n > + <note>\n > + <para>\n > + The examples shown here combine rows via city names.\n > + This should help to understand the concept. Professional\n > + solutions prefer to use numerical IDs and foreign keys\n > + to join tables.\n > + </para>\n > + </note>\n\nWhile there are interesting debates to be had about natural vs. \nsurrogate keys, I don't think we should imply that one of them is \nunprofessional and then leave it at that and give no further guidance. \nI think we should leave this out.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 08:52:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 04.09.20 08:52, Peter Eisentraut wrote:\n>\n> For the remaining patch I have a couple of concerns:\n>\n> >      <para>\n> >       Attempt to determine the semantics of this query when the\n> > -     <literal>WHERE</literal> clause is omitted.\n> > +     <literal>ON</literal> clause is omitted.\n> >      </para>\n> >     </formalpara>\n>\n> This no longer works.\n>\nOk, but I don't have any better suggestion than to delete this para.\n> In general, I agree that some more emphasis on the JOIN syntax is \n> okay. But I think the order in which the tutorial has taught it so far \n> is okay: First you do it the manual way, then you learn the more \n> abstract way.\n\nIn this context, I wouldn't use the terms 'manual' and 'abstract', it's \nmore about 'implicit' and 'explicit' syntax. 
The 'explicit' syntax does \nnot only emphasis the aspect of 'joining' tables, it also differentiates \nbetween the usage of following AND/OR/NOT key words as join conditions \nor as additional restrictions (the results are identical but not the \nsemantic). Because the purpose of this patch is the preference of the \nexplicit syntax, we shall show this syntax first.\n\n>\n> > +   <note>\n> > +    <para>\n> > +     The examples shown here combine rows via city names.\n> > +     This should help to understand the concept. Professional\n> > +     solutions prefer to use numerical IDs and foreign keys\n> > +     to join tables.\n> > +    </para>\n> > +   </note>\n>\n> While there are interesting debates to be had about natural vs. \n> surrogate keys, I don't think we should imply that one of them is \n> unprofessional and then leave it at that and give no further guidance. \n> I think we should leave this out.\n>\nOk, deleted.\n\n--\n\nJürgen Purtz", "msg_date": "Fri, 4 Sep 2020 11:36:39 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Fri, Sep 4, 2020 at 2:36 AM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> On 04.09.20 08:52, Peter Eisentraut wrote:\n> >\n> > For the remaining patch I have a couple of concerns:\n>\n\nThis patch should not be changing the formatting choices for these queries,\njust the addition of a JOIN clause and modification of the WHERE clause.\nSpecifically, SELECT is left-aligned while all subsequent clauses indent\nunder it. Forced alignment by adding extra spaces isn't done here either.\nI have not altered those in the attached.\n\nDid some word-smithing on the first paragraph. 
The part about the\ncross-join was hurt by \"in some way\" and \"may be\" is not needed.\n\nPointing out that values from both tables doesn't seem like an improvement\nwhen the second item covers that and it is more specific in noting that the\ncity name that is joined on appears twice - once from each table.\n\nON expression is more precise and the reader should be ok with the term.\n\nRemoval of the exercise is good. Not the time to discuss cross join\nanyway. Given that \"ON true\" works the cross join form isn't even required.\n\nIn the FROM clause form I would not add table prefixes to the column\nnames. They are not part of the form changing. If discussion about table\nprefixing is desired it should be done explicitly and by itself. They are\nused later on, I didn't check to see whether that was covered or might be\nconfusing.\n\nI suggested a wording for why to use join syntax that doesn't involve\nlegacy and points out its merit compared to sticking a join expression into\nthe where clause.\n\nThe original patch missed having the syntax for the first left outer join\nconform to the multi-line query writing standard you introduced. I did not\nchange.\n\nThe \"AND\" ON clause should just go with (not changed):\n\nON (w1.temp_lo < w2.temp_lo\n AND w1.temp_hi > w2.temp_high);\n\nAttaching my suggestions made on top of the attached original\n0002-query.patch\n\nDavid J.", "msg_date": "Wed, 21 Oct 2020 16:40:18 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 22.10.20 01:40, David G. 
Johnston wrote:\n> On Fri, Sep 4, 2020 at 2:36 AM Jürgen Purtz <juergen@purtz.de \n> <mailto:juergen@purtz.de>> wrote:\n>\n> On 04.09.20 08:52, Peter Eisentraut wrote:\n> >\n> > For the remaining patch I have a couple of concerns:\n>\n>\n> This patch should not be changing the formatting choices for these \n> queries, just the addition of a JOIN clause and modification of the \n> WHERE clause.  Specifically, SELECT is left-aligned while all \n> subsequent clauses indent under it.  Forced alignment by adding extra \n> spaces isn't done here either.  I have not altered those in the attached.\n>\n> Did some word-smithing on the first paragraph.  The part about the \n> cross-join was hurt by \"in some way\" and \"may be\" is not needed.\n>\n> Pointing out that values from both tables doesn't seem like an \n> improvement when the second item covers that and it is more specific \n> in noting that the city name that is joined on appears twice - once \n> from each table.\n>\n> ON expression is more precise and the reader should be ok with the term.\n>\n> Removal of the exercise is good.  Not the time to discuss cross join \n> anyway.  Given that \"ON true\" works the cross join form isn't even \n> required.\n>\n> In the FROM clause form I would not add table prefixes to the column \n> names.  They are not part of the form changing.  If discussion about \n> table prefixing is desired it should be done explicitly and by \n> itself.  They are used later on, I didn't check to see whether that \n> was covered or might be confusing.\n>\n> I suggested a wording for why to use join syntax that doesn't involve \n> legacy and points out its merit compared to sticking a join expression \n> into the where clause.\n>\n> The original patch missed having the syntax for the first left outer \n> join conform to the multi-line query writing standard you introduced.  
\n> I did not change.\n>\n> The \"AND\" ON clause should just go with (not changed):\n>\n> ON (w1.temp_lo < w2.temp_lo\n>     AND w1.temp_hi > w2.temp_high);\n>\n> Attaching my suggestions made on top of the attached original \n> 0002-query.patch\n>\n> David J.\n>\n(Hopefully) I have integrated all of David's suggestions as well as the \nfollowing rules:\n\n- Syntax formatting with the previously used 4 spaces plus newline for JOIN\n\n- Table aliases only when necessary or explicitly discussed\n\nThe discussion about the explicit vs. implicit syntax is added to the \n\"As join expressions serve a specific purpose ... \" sentence and creates \na paragraph of its own.\n\nThe patch is build on top of master.\n\n--\n\nJ. Purtz", "msg_date": "Thu, 22 Oct 2020 15:32:00 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "čt 22. 10. 2020 v 15:32 odesílatel Jürgen Purtz <juergen@purtz.de> napsal:\n\n> On 22.10.20 01:40, David G. Johnston wrote:\n>\n> On Fri, Sep 4, 2020 at 2:36 AM Jürgen Purtz <juergen@purtz.de> wrote:\n>\n>> On 04.09.20 08:52, Peter Eisentraut wrote:\n>> >\n>> > For the remaining patch I have a couple of concerns:\n>>\n>\n> This patch should not be changing the formatting choices for these\n> queries, just the addition of a JOIN clause and modification of the WHERE\n> clause. Specifically, SELECT is left-aligned while all subsequent clauses\n> indent under it. Forced alignment by adding extra spaces isn't done here\n> either. I have not altered those in the attached.\n>\n> Did some word-smithing on the first paragraph. 
The part about the\n> cross-join was hurt by \"in some way\" and \"may be\" is not needed.\n>\n> Pointing out that values from both tables doesn't seem like an improvement\n> when the second item covers that and it is more specific in noting that the\n> city name that is joined on appears twice - once from each table.\n>\n> ON expression is more precise and the reader should be ok with the term.\n>\n> Removal of the exercise is good. Not the time to discuss cross join\n> anyway. Given that \"ON true\" works the cross join form isn't even required.\n>\n> In the FROM clause form I would not add table prefixes to the column\n> names. They are not part of the form changing. If discussion about table\n> prefixing is desired it should be done explicitly and by itself. They are\n> used later on, I didn't check to see whether that was covered or might be\n> confusing.\n>\n> I suggested a wording for why to use join syntax that doesn't involve\n> legacy and points out its merit compared to sticking a join expression into\n> the where clause.\n>\n> The original patch missed having the syntax for the first left outer join\n> conform to the multi-line query writing standard you introduced. I did not\n> change.\n>\n> The \"AND\" ON clause should just go with (not changed):\n>\n> ON (w1.temp_lo < w2.temp_lo\n> AND w1.temp_hi > w2.temp_high);\n>\n> Attaching my suggestions made on top of the attached original\n> 0002-query.patch\n>\n> David J.\n>\n> (Hopefully) I have integrated all of David's suggestions as well as the\n> following rules:\n>\n> - Syntax formatting with the previously used 4 spaces plus newline for JOIN\n>\n> - Table aliases only when necessary or explicitly discussed\n>\n> The discussion about the explicit vs. implicit syntax is added to the \"As\n> join expressions serve a specific purpose ... \" sentence and creates a\n> paragraph of its own.\n>\n> The patch is build on top of master.\n>\n\nWhy do you use parenthesis for ON clause? It is useless. 
SQL is not C or\nJAVA.\n\nRegards\n\nPavel\n\n--\n> J. Purtz\n>\n>\n", "msg_date": "Thu, 22 Oct 2020 17:14:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Thu, Oct 22, 2020 at 8:14 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Why do you use parenthesis for ON clause? It is useless. SQL is not C or\n> JAVA.\n>\n>\nAt this point in my career it's just a personal habit. 
I never programmed\nC, done most of my development in Java so maybe that's a subconscious\ninfluence?\n\nI suspect it is partly because I seldom need to use \"ON\" but instead join\nwith \"USING\" which does require the parentheses, so when I need to use ON I\njust keep them.\n\nI agree they are unnecessary in the example and should be removed to be\nconsistent.\n\nDavid J.\n", "msg_date": "Thu, 22 Oct 2020 09:27:08 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "čt 22. 10. 2020 v 18:27 odesílatel David G. Johnston <\ndavid.g.johnston@gmail.com> napsal:\n\n> On Thu, Oct 22, 2020 at 8:14 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Why do you use parenthesis for ON clause? It is useless. SQL is not C or\n>> JAVA.\n>>\n>>\n> At this point in my career it's just a personal habit. I never programmed\n> C, done most of my development in Java so maybe that's a subconscious\n> influence?\n>\n> I suspect it is partly because I seldom need to use \"ON\" but instead join\n> with \"USING\" which does require the parentheses, so when I need to use ON I\n> just keep them.\n>\n> I agree they are unnecessary in the example and should be removed to be\n> consistent.\n>\n\n:)\n\n\n\n> David J.\n>\n>\n", "msg_date": "Thu, 22 Oct 2020 18:31:07 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 22.10.20 17:14, Pavel Stehule wrote:\n>\n> Why do you use parenthesis for ON clause?  It is useless. SQL is not C \n> or JAVA.\n\n\nTwo more general answers:\n- Why do people use tabs, spaces, and newlines to format their code even \nthough it's not necessary? SQL is a language to develop applications. \nAnd what are the main costs of an application? It's not the time which \nit takes to develop them. It's the time for their maintenance. During \nthe course of one or more decades, different persons will have to read \nthe code, add additional features, and fix bugs. They need some time to \nread and understand the existing code. This task can be accelerated if \nthe code is easy to read. Therefore, it's a good habit of developers to \nsometimes spend some extra characters to the code than is required -  \nnot only comments. An example: there are clear precedence rules for \nBoolean operators NOT/AND/OR. In an extensive statement it may be \nhelpful - for the developer himself as well as for anybody else -to use \nnewlines and parentheses at places where they are not necessary to keep \nan overview of the intention of the statement. 
In such cases, \ncode-optimization is the duty of the compiler, not of the developer.\n- In my professional life as a software developer, I have seen about 15 \ndifferent languages. But only in rare cases, they have offered new \nfeatures or concepts. To overcome this Babylonian linguistic diversity I \ntend to use such syntactical constructs which are common to many of them \neven, even if they are not necessary for the concrete language.\n\nAnd the concrete answer: Omitting the parentheses for the join condition \nraises the danger that its Boolean operators are mixed with the Boolean \noperators of the WHERE condition. The result at runtime is the same, but \na reader will understand the intention of the statement faster if the \nparentheses exists.\n\n--\n\nJ. Purtz\n\n\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:14:07 +0200", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "pá 23. 10. 2020 v 11:14 odesílatel Jürgen Purtz <juergen@purtz.de> napsal:\n\n> On 22.10.20 17:14, Pavel Stehule wrote:\n> >\n> > Why do you use parenthesis for ON clause? It is useless. SQL is not C\n> > or JAVA.\n>\n>\n> Two more general answers:\n> - Why do people use tabs, spaces, and newlines to format their code even\n> though it's not necessary? SQL is a language to develop applications.\n> And what are the main costs of an application? It's not the time which\n> it takes to develop them. It's the time for their maintenance. During\n> the course of one or more decades, different persons will have to read\n> the code, add additional features, and fix bugs. They need some time to\n> read and understand the existing code. This task can be accelerated if\n> the code is easy to read. Therefore, it's a good habit of developers to\n> sometimes spend some extra characters to the code than is required -\n> not only comments. 
An example: there are clear precedence rules for\n> Boolean operators NOT/AND/OR. In an extensive statement it may be\n> helpful - for the developer himself as well as for anybody else -to use\n> newlines and parentheses at places where they are not necessary to keep\n> an overview of the intention of the statement. In such cases,\n> code-optimization is the duty of the compiler, not of the developer.\n> - In my professional life as a software developer, I have seen about 15\n> different languages. But only in rare cases, they have offered new\n> features or concepts. To overcome this Babylonian linguistic diversity I\n> tend to use such syntactical constructs which are common to many of them\n> even, even if they are not necessary for the concrete language.\n>\n> And the concrete answer: Omitting the parentheses for the join condition\n> raises the danger that its Boolean operators are mixed with the Boolean\n> operators of the WHERE condition. The result at runtime is the same, but\n> a reader will understand the intention of the statement faster if the\n> parentheses exists.\n>\n\nI strongly disagree.\n\nIf there are some boolean predicates, then parenthesis has sense. Without\nthese predicates the parenthesis decrease readability. This is the sense of\nJOIN syntax to separate predicates. \n\nI have a different problem - when I see parentheses where they should not\nbe, I am searching for a reason, and It is unfriendly where there is not\nany reason. I can understand if somebody uses useless parentheses in their\nproduct, but we talk about official documentation, and then we should\nrespect the character of language.\n\nRegards\n\nPavel\n\n\n\n> --\n>\n> J. Purtz\n>\n>\n", "msg_date": "Fri, 23 Oct 2020 11:23:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nThe commitfest is nearing the end and this thread was inactive for a while. As far as I see something got committed and now the discussion is stuck in arguing about parenthesis.\r\nFWIW, I think it is a matter of personal taste. Maybe we can compromise on simply leaving this part unchanged.\r\n\r\nIf you are planning to continue working on it, please move it to the next CF.", "msg_date": "Mon, 30 Nov 2020 19:45:31 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 30.11.20 20:45, Anastasia Lubennikova wrote:\n> As far as I see something got committed and now the discussion is stuck in arguing about parenthesis.\n> FWIW, I think it is a matter of personal taste. Maybe we can compromise on simply leaving this part unchanged.\n\nWith or without parenthesis is a little more than a personal taste, but \nit's a very tiny detail. I'm happy with either of the two variants.\n\n--\n\nJ. 
Purtz\n\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 21:15:11 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Mon, Nov 30, 2020 at 1:15 PM Jürgen Purtz <juergen@purtz.de> wrote:\n\n> On 30.11.20 20:45, Anastasia Lubennikova wrote:\n> > As far as I see something got committed and now the discussion is stuck\n> in arguing about parenthesis.\n> > FWIW, I think it is a matter of personal taste. Maybe we can compromise\n> on simply leaving this part unchanged.\n>\n> With or without parenthesis is a little more than a personal taste, but\n> it's a very tiny detail. I'm happy with either of the two variants.\n>\n>\nSorry, I managed to overlook the most recent patch.\n\nI admitted my use of parentheses was incorrect and I don't see anyone else\ndefending them.  Please remove them.\n\nMinor typos:\n\n\"the database compare\" -> needs an \"s\" (compares)\n\n\"In this case, the definition how to compare their rows.\" -> remove,\nredundant with the first sentence\n\n\"The results from the older implicit syntax, and the newer explicit JOIN/ON\nsyntax, are identical\" -> move the commas around to what is shown here\n\nDavid J.\n", "msg_date": "Mon, 30 Nov 2020 13:25:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 30.11.20 21:25, David G. Johnston wrote:\n> Sorry, I managed to overlook the most recent patch.\n>\n> I admitted my use of parentheses was incorrect and I don't see anyone \n> else defending them.  Please remove them.\n>\n> Minor typos:\n>\n> \"the database compare\" -> needs an \"s\" (compares)\n>\n> \"In this case, the definition how to compare their rows.\" -> remove, \n> redundant with the first sentence\n>\n> \"The results from the older implicit syntax, and the newer explicit \n> JOIN/ON syntax, are identical\" -> move the commas around to what is \n> shown here\n>\n> David J.\n>\n>\nOK. Patch attached.\n\n--\n\nJ. Purtz", "msg_date": "Tue, 1 Dec 2020 09:38:20 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 12/1/20 3:38 AM, Jürgen Purtz wrote:\n> On 30.11.20 21:25, David G. Johnston wrote:\n>> Sorry, I managed to overlook the most recent patch.\n>>\n>> I admitted my use of parentheses was incorrect and I don't see anyone \n>> else defending them.  
Please remove them.\n>>\n>> Minor typos:\n>>\n>> \"the database compare\" -> needs an \"s\" (compares)\n>>\n>> \"In this case, the definition how to compare their rows.\" -> remove, \n>> redundant with the first sentence\n>>\n>> \"The results from the older implicit syntax, and the newer explicit \n>> JOIN/ON syntax, are identical\" -> move the commas around to what is \n>> shown here\n>>\n>>\n> OK. Patch attached.\n\nPeter, you committed some of this patch originally. Do you think the \nrest of the patch is now in shape to be committed?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Wed, 10 Mar 2021 08:06:16 -0500", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On Thu, Mar 11, 2021 at 2:06 AM David Steele <david@pgmasters.net> wrote:\n> On 12/1/20 3:38 AM, Jürgen Purtz wrote:\n> > OK. Patch attached.\n\n+ Queries which access multiple tables (including repeats) at once are called\n\nI'd write \"Queries that\" here (that's is a transatlantic difference in\nusage; I try to proofread these things in American mode for\nconsistency with the rest of the language in this project, which I\nprobably don't entirely succeed at but this one I've learned...).\n\nMaybe instead of \"(including repeats)\" it could say \"(or multiple\ninstances of the same table)\"?\n\n+ For example, to return all the weather records together with the\nlocation of the\n+ associated city, the database compares the <structfield>city</structfield>\n column of each row of the <structname>weather</structname> table with the\n <structfield>name</structfield> column of all rows in the\n<structname>cities</structname>\n table, and select the pairs of rows where these values match.\n\nHere \"select\" should agree with \"the database\" and take an -s, no?\n\n+ This syntax pre-dates the <literal>JOIN</literal> and <literal>ON</literal>\n+ keywords. 
The tables are simply listed in the <literal>FROM</literal>,\n+ comma-separated, and the comparison expression added to the\n+ <literal>WHERE</literal> clause.\n\nCould we mention SQL92 somewhere? Like maybe \"This syntax pre-dates\nthe JOIN and ON keywords, which were introduced by SQL-92\". (That's a\n\"non-restrictive which\", I think the clue is the comma?)\n\n\n", "msg_date": "Mon, 15 Mar 2021 15:47:15 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "po 15. 3. 2021 v 3:48 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Thu, Mar 11, 2021 at 2:06 AM David Steele <david@pgmasters.net> wrote:\n> > On 12/1/20 3:38 AM, Jürgen Purtz wrote:\n> > > OK. Patch attached.\n>\n> + Queries which access multiple tables (including repeats) at once are\n> called\n>\n> I'd write \"Queries that\" here (that's is a transatlantic difference in\n> usage; I try to proofread these things in American mode for\n> consistency with the rest of the language in this project, which I\n> probably don't entirely succeed at but this one I've learned...).\n>\n> Maybe instead of \"(including repeats)\" it could say \"(or multiple\n> instances of the same table)\"?\n>\n> + For example, to return all the weather records together with the\n> location of the\n> + associated city, the database compares the\n> <structfield>city</structfield>\n> column of each row of the <structname>weather</structname> table with\n> the\n> <structfield>name</structfield> column of all rows in the\n> <structname>cities</structname>\n> table, and select the pairs of rows where these values match.\n>\n> Here \"select\" should agree with \"the database\" and take an -s, no?\n>\n> + This syntax pre-dates the <literal>JOIN</literal> and\n> <literal>ON</literal>\n> + keywords. 
The tables are simply listed in the\n> <literal>FROM</literal>,\n> + comma-separated, and the comparison expression added to the\n> + <literal>WHERE</literal> clause.\n>\n> Could we mention SQL92 somewhere? Like maybe \"This syntax pre-dates\n> the JOIN and ON keywords, which were introduced by SQL-92\". (That's a\n> \"non-restrictive which\", I think the clue is the comma?)\n>\n\nprevious syntax should be mentioned too. An reader can find this syntax\nthousands applications\n\nPavel", "msg_date": "Mon, 15 Mar 2021 05:28:44 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 15.03.21 03:47, Thomas Munro wrote:\n> On Thu, Mar 11, 2021 at 2:06 AM David Steele <david@pgmasters.net> wrote:\n>> On 12/1/20 3:38 AM, Jürgen Purtz wrote:\n>>> OK. Patch attached.\n> +    Queries which access multiple tables (including repeats) at once are called\n>\n> I'd write \"Queries that\" here (that's is a transatlantic difference in\n> usage; I try to proofread these things in American mode for\n> consistency with the rest of the language in this project, which I\n> probably don't entirely succeed at but this one I've learned...).\n>\n> Maybe instead of \"(including repeats)\" it could say \"(or multiple\n> instances of the same table)\"?\n>\n> +    For example, to return all the weather records together with the\n> location of the\n> +    associated city, the database compares the <structfield>city</structfield>\n>      column of each row of the <structname>weather</structname> table with the\n>      <structfield>name</structfield> column of all rows in the\n> <structname>cities</structname>\n>      table, and select the pairs of rows where these values match.\n>\n> Here \"select\" should agree with \"the database\" and take an -s, no?\n>\n> +    This syntax pre-dates the <literal>JOIN</literal> and <literal>ON</literal>\n> +    keywords.  The tables are simply listed in the <literal>FROM</literal>,\n> +    comma-separated, and the comparison expression added to the\n> +    <literal>WHERE</literal> clause.\n>\n> Could we mention SQL92 somewhere?  
Like maybe \"This syntax pre-dates\n> the JOIN and ON keywords, which were introduced by SQL-92\". (That's a\n> \"non-restrictive which\", I think the clue is the comma?)\n\n+1. All proposed changes integrated.\n--\nKind regards, Jürgen Purtz", "msg_date": "Mon, 15 Mar 2021 09:06:51 +0100", "msg_from": "Jürgen Purtz <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" }, { "msg_contents": "On 15.03.21 09:06, Jürgen Purtz wrote:\n> +1. All proposed changes integrated.\n\ncommitted\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:00:34 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Change JOIN tutorial to focus more on explicit joins" } ]
[ { "msg_contents": "Hackers,\n\nOver on [1], Heikki mentioned about the usefulness of caching results\nfrom parameterized subplans so that they could be used again for\nsubsequent scans which have the same parameters as a previous scan.\nOn [2], I mentioned that parameterized nested loop joins could see\nsimilar gains with such a cache. I suggested there that instead of\nadding code that only allows this to work for subplans, that instead,\nwe add a new node type that can handle the caching for us. We can\nthen just inject that node type in places where it seems beneficial.\n\nI've attached a patch which implements this. The new node type is\ncalled \"Result Cache\". I'm not particularly wedded to keeping that\nname, but if I change it, I only want to do it once. I've got a few\nother names I mind, but I don't feel strongly or confident enough in\nthem to go and do the renaming.\n\nHow the caching works:\n\nFirst off, it's only good for plugging in on top of parameterized\nnodes that are rescanned with different parameters. The cache itself\nuses a hash table using the simplehash.h implementation. The memory\nconsumption is limited to work_mem. The code maintains an LRU list and\nwhen we need to add new entries but don't have enough space to do so,\nwe free off older items starting at the top of the LRU list. When we\nget a cache hit, we move that entry to the end of the LRU list so that\nit'll be the last to be evicted.\n\nWhen should we cache:\n\nFor nested loop joins, the decision is made purely based on cost. The\ncosting model looks at the expected number of calls, the distinct\nvalue estimate and work_mem size. It then determines how many items\ncan be cached and then goes on to estimate an expected cache hit ratio\nand also an eviction ratio. 
It adjusts the input costs according to\nthose ratios and adds some additional charges for caching and cache\nlookups.\n\nFor subplans, since we plan subplans before we're done planning the\nouter plan, there's very little information to go on about the number\nof times that the cache will be looked up. For now, I've coded things\nso the cache is always used for EXPR_SUBLINK type subplans. There may\nbe other types of subplan that could support caching too, but I've not\nreally gone through them all yet to determine which. I certainly know\nthere's some that we can't cache for.\n\nWhy caching might be good:\n\nWith hash joins, it's sometimes not so great that we have to hash the\nentire inner plan and only probe a very small number of values. If we\nwere able to only fill the hash table with values that are needed,\nthen then a lot of time and memory could be saved. Effectively, the\npatch does exactly this with the combination of a parameterized nested\nloop join with a Result Cache node above the inner scan.\n\nFor subplans, the gains can be more because often subplans are much\nmore expensive to execute than what might go on the inside of a\nparameterized nested loop join.\n\nCurrent problems and some ways to make it better:\n\nThe patch does rely heavily on good ndistinct estimates. One\nunfortunate problem is that if the planner has no statistics for\nwhatever it's trying to estimate for, it'll default to returning\nDEFAULT_NUM_DISTINCT (200). That may cause the Result Cache to appear\nmuch more favourable than it should. One way I can think to work\naround that would be to have another function similar to\nestimate_num_groups() which accepts a default value which it will\nreturn if it was unable to find statistics to use. In this case, such\na function could just be called passing the number of input rows as\nthe default, which would make the costing code think each value is\nunique, which would not be favourable for caching. 
I've not done\nanything like that in what I've attached here. That solution would\nalso do nothing if the ndistinct estimate was available, but was just\nincorrect, as it often is.\n\nThere are currently a few compiler warnings with the patch due to the\nscope of the simplehash.h hash table. Because the scope is static\nrather than extern there's a load of unused function warnings. Not\nsure yet the best way to deal with this. I don't want to change the\nscope to extern just to keep compilers quiet.\n\nAlso during cache_reduce_memory(), I'm performing a hash table lookup\nfollowed by a hash table delete. I already have the entry to delete,\nbut there's no simplehash.h function that allows deletion by element\npointer, only by key. This wastes a hash table lookup. I'll likely\nmake an adjustment to the simplehash.h code to export the delete code\nas a separate function to fix this.\n\nDemo:\n\n# explain (analyze, costs off) select relname,(select count(*) from\npg_class c2 where c1.relkind = c2.relkind) from pg_class c1;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Seq Scan on pg_class c1 (actual time=0.069..0.470 rows=391 loops=1)\n SubPlan 1\n -> Result Cache (actual time=0.001..0.001 rows=1 loops=391)\n Cache Key: c1.relkind\n Cache Hits: 387 Cache Misses: 4 Cache Evictions: 0 Cache\nOverflows: 0\n -> Aggregate (actual time=0.062..0.062 rows=1 loops=4)\n -> Seq Scan on pg_class c2 (actual time=0.007..0.056\nrows=98 loops=4)\n Filter: (c1.relkind = relkind)\n Rows Removed by Filter: 293\n Planning Time: 0.047 ms\n Execution Time: 0.536 ms\n(11 rows)\n\n# set enable_resultcache=0; -- disable result caching\nSET\n# explain (analyze, costs off) select relname,(select count(*) from\npg_class c2 where c1.relkind = c2.relkind) from pg_class c1;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Seq Scan on pg_class c1 (actual time=0.070..24.619 rows=391 
loops=1)\n SubPlan 1\n -> Aggregate (actual time=0.062..0.062 rows=1 loops=391)\n -> Seq Scan on pg_class c2 (actual time=0.009..0.056\nrows=120 loops=391)\n Filter: (c1.relkind = relkind)\n Rows Removed by Filter: 271\n Planning Time: 0.042 ms\n Execution Time: 24.653 ms\n(8 rows)\n\n-- Demo with parameterized nested loops\ncreate table hundredk (hundredk int, tenk int, thousand int, hundred\nint, ten int, one int);\ninsert into hundredk select x%100000,x%10000,x%1000,x%100,x%10,1 from\ngenerate_Series(1,100000) x;\ncreate table lookup (a int);\ninsert into lookup select x from generate_Series(1,100000)x,\ngenerate_Series(1,100);\ncreate index on lookup(a);\nvacuum analyze lookup, hundredk;\n\n# explain (analyze, costs off) select count(*) from hundredk hk inner\njoin lookup l on hk.thousand = l.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (actual time=1876.710..1876.710 rows=1 loops=1)\n -> Nested Loop (actual time=0.013..1371.690 rows=9990000 loops=1)\n -> Seq Scan on hundredk hk (actual time=0.005..8.451\nrows=100000 loops=1)\n -> Result Cache (actual time=0.000..0.005 rows=100 loops=100000)\n Cache Key: hk.thousand\n Cache Hits: 99000 Cache Misses: 1000 Cache Evictions:\n0 Cache Overflows: 0\n -> Index Only Scan using lookup_a_idx on lookup l\n(actual time=0.002..0.011 rows=100 loops=1000)\n Index Cond: (a = hk.thousand)\n Heap Fetches: 0\n Planning Time: 0.113 ms\n Execution Time: 1876.741 ms\n(11 rows)\n\n# set enable_resultcache=0;\nSET\n# explain (analyze, costs off) select count(*) from hundredk hk inner\njoin lookup l on hk.thousand = l.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Aggregate (actual time=2401.351..2401.352 rows=1 loops=1)\n -> Merge Join (actual time=28.412..1890.905 rows=9990000 loops=1)\n Merge Cond: (l.a = hk.thousand)\n -> Index Only Scan using lookup_a_idx 
on lookup l (actual\ntime=0.005..10.170 rows=99901 loops=1)\n Heap Fetches: 0\n -> Sort (actual time=28.388..576.783 rows=9990001 loops=1)\n Sort Key: hk.thousand\n Sort Method: quicksort Memory: 7760kB\n -> Seq Scan on hundredk hk (actual time=0.005..11.039\nrows=100000 loops=1)\n Planning Time: 0.123 ms\n Execution Time: 2401.379 ms\n(11 rows)\n\nCache Overflows:\n\nYou might have noticed \"Cache Overflow\" in the EXPLAIN ANALYZE output.\nThis happens if a single scan of the inner node exhausts the cache\nmemory. In this case, all the other entries will already have been\nevicted in an attempt to make space for the current scan's tuples.\nHowever, if we see an overflow then the size of the results from a\nsingle scan alone must have exceeded work_mem. There might be some\ntweaking to do here as it seems a shame that a single overly larger\nscan would flush the entire cache. I doubt it would be too hard to\nlimit the flushing to some percentage of work_mem. Similar to how\nlarge seqscans don't entirely flush shared_buffers.\n\nCurrent Status:\n\nI've spent quite a bit of time getting this working. I'd like to take\na serious go at making this happen for PG14. For now, it all seems to\nwork. I have some concerns about bad statistics causing nested loop\njoins to be favoured more than they were previously due to the result\ncache further lowering the cost of them when the cache hit ratio is\nthought to be high.\n\nFor now, the node type is parallel_safe, but not parallel_aware. I can\nsee that a parallel_aware version would be useful, but I've not done\nthat here. Anything in that area will not be part of my initial\neffort. 
The unfortunate part about that is the actual hit ratio will\ndrop with more parallel workers since the caches of each worker are\nseparate.\n\nSome tests show a 10x speedup on TPC-H Q2.\n\nI'm interested in getting feedback on this before doing much further work on it.\n\nDoes it seem like something we might want for PG14?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/daceb327-9a20-51f4-fe6c-60b898692305%40iki.fi\n[2] https://www.postgresql.org/message-id/CAKJS1f8oNXQ-LqjK%3DBOFDmxLc_7s3uFr_g4qi7Ncrjig0JOCiA%40mail.gmail.com", "msg_date": "Wed, 20 May 2020 23:44:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 20 May 2020 at 12:44, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Hackers,\n>\n> Over on [1], Heikki mentioned about the usefulness of caching results\n> from parameterized subplans so that they could be used again for\n> subsequent scans which have the same parameters as a previous scan.\n> On [2], I mentioned that parameterized nested loop joins could see\n> similar gains with such a cache. I suggested there that instead of\n> adding code that only allows this to work for subplans, that instead,\n> we add a new node type that can handle the caching for us. We can\n> then just inject that node type in places where it seems beneficial.\n>\n\nVery cool\n\n\n> I've attached a patch which implements this. The new node type is\n> called \"Result Cache\". I'm not particularly wedded to keeping that\n> name, but if I change it, I only want to do it once. I've got a few\n> other names I mind, but I don't feel strongly or confident enough in\n> them to go and do the renaming.\n>\n> How the caching works:\n>\n> First off, it's only good for plugging in on top of parameterized\n> nodes that are rescanned with different parameters. The cache itself\n> uses a hash table using the simplehash.h implementation. 
The memory\n> consumption is limited to work_mem. The code maintains an LRU list and\n> when we need to add new entries but don't have enough space to do so,\n> we free off older items starting at the top of the LRU list.  When we\n> get a cache hit, we move that entry to the end of the LRU list so that\n> it'll be the last to be evicted.\n>\n> When should we cache:\n>\n> For nested loop joins, the decision is made purely based on cost.\n\n\nI thought the main reason to do this was the case when the nested loop\nsubplan was significantly underestimated and we realize during execution\nthat we should have built a hash table. So including this based on cost\nalone seems to miss a trick.\n\n\n> The patch does rely heavily on good ndistinct estimates.\n\n\nExactly. We know we seldom get those with many-way joins.\n\nSo +1 for adding this technique. My question is whether it should be added\nas an optional facility of a parameterised sub plan, rather than an\nalways-needed full-strength node. That way the choice of whether to use it\ncan happen at execution time once we notice that we've been called too many\ntimes.\n\n-- \nSimon Riggs                http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nMission Critical Databases", "msg_date": "Wed, 20 May 2020 13:56:34 +0100", "msg_from": "Simon Riggs <simon@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 21 May 2020 at 00:56, Simon Riggs <simon@2ndquadrant.com> wrote:\n> I thought the main reason to do this was the case when the nested loop subplan was significantly underestimated and we realize during execution that we should have built a hash table. So including this based on cost alone seems to miss a trick.\n\nIsn't that mostly because the planner tends to choose a\nnon-parameterized nested loop when it thinks the outer side of the\njoin has just 1 row? If so, I'd say that's a separate problem as\nResult Cache only deals with parameterized nested loops. Perhaps the\nproblem you mention could be fixed by adding some \"uncertainty degree\"\nto the selectivity estimate function and have it return that along\nwith the selectivity. We'd likely not want to choose an\nunparameterized nested loop when the uncertainly level is high.\nMultiplying the selectivity of different selectivity estimates could\nraise the uncertainty level a magnitude.\n\nFor plans where the planner chooses to use a non-parameterized nested\nloop due to having just 1 row on the outer side of the loop, it's\ntaking a huge risk. The cost of putting the 1 row on the inner side of\na hash join would bearly cost anything extra during execution.\nHashing 1 row is pretty cheap and performing a lookup on that hashed\nrow is not much more expensive than evaluating the qual of the nested\nloop. Really just requires the additional hash function calls.  Having\nthe uncertainty degree I mentioned above would allow us to only have\nthe planner do that when the uncertainty degree indicates it's not\nworth the risk.\n\nDavid\n\n\n", "msg_date": "Fri, 22 May 2020 11:53:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "> My question is whether it should be added as an optional facility of a\n> parameterised sub plan, rather than an always-needed full-strength node.\n> That way the choice of whether to use it can happen at execution time once\n> we notice that we've been called too many times.\n>\n>\nActually I am not sure about what does the \"parameterized sub plan\" mean (I\ntreat is a SubPlan Node), so please correct me if I misunderstand you:)\nBecause\nthe inner plan in nest loop not a SubPlan node actually. so if bind the\nfacility to SubPlan node, we may loss the chances for nest loop. And when\nwe\nconsider the usage for nest loop, we can consider the below example, where\nthis\nfeature will be more powerful.\n\n\nselect j1o.i, j2_v.sum_5\nfrom j1 j1o\ninner join lateral\n(select im100, sum(im5) as sum_5\nfrom j2\nwhere j1o.im100 = im100\nand j1o.i = 1\ngroup by im100) j2_v\non true\nwhere j1o.i = 1;\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 22 May 2020 08:11:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 22 May 2020 at 12:12, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Actually I am not sure about what does the \"parameterized sub plan\" mean (I\n> treat is a SubPlan Node), so please correct me if I misunderstand you:) Because\n> the inner plan in nest loop not a SubPlan node actually. so if bind the\n> facility to SubPlan node, we may loss the chances for nest loop.\n\nA parameterized subplan would be a subquery that contains column\nreference to a query above its own level. The executor changes that\ncolumn reference into a parameter and the subquery will need to be\nrescanned each time the parameter's value changes.\n\n> And when we\n> consider the usage for nest loop, we can consider the below example, where this\n> feature will be more powerful.\n\nI didn't quite get the LATERAL support quite done in the version I\nsent. For now, I'm not considering adding a Result Cache node if there\nare lateral vars in any location other than the inner side of the\nnested loop join. I think it'll just be a few lines to make it work\nthough. 
I wanted to get some feedback before going to too much more\ntrouble to make all cases work.\n\nI've now added this patch to the first commitfest of PG14.\n\nDavid\n\n\n", "msg_date": "Mon, 25 May 2020 19:53:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Today I tested the correctness & performance of this patch based on TPC-H\nworkload, the environment is setup based on [1]. Correctness is tested by\nstoring the result into another table when this feature is not introduced\nand\nthen enable this feature and comparing the result with the original ones. No\nissue is found at this stage.\n\nI also checked the performance gain for TPC-H workload, totally 4 out of\nthe 22\nqueries uses this new path, 3 of them are subplan, 1 of them is nestloop.\nAll of\nchanges gets a better result. You can check the attachments for reference.\nnormal.log is the data without this feature, patched.log is the data with\nthe\nfeature. The data doesn't show the 10x performance gain, I think that's\nmainly\ndata size related.\n\nAt the code level, I mainly checked nestloop path and\ncost_resultcache_rescan,\neverything looks good to me. I'd like to check the other parts in the\nfollowing days.\n\n[1] https://ankane.org/tpc-h\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 2 Jun 2020 17:04:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 2 Jun 2020 at 21:05, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Today I tested the correctness & performance of this patch based on TPC-H\n> workload, the environment is setup based on [1]. 
Correctness is tested by\n> storing the result into another table when this feature is not introduced and\n> then enable this feature and comparing the result with the original ones. No\n> issue is found at this stage.\n\nThank you for testing it out.\n\n> I also checked the performance gain for TPC-H workload, totally 4 out of the 22\n> queries uses this new path, 3 of them are subplan, 1 of them is nestloop. All of\n> changes gets a better result. You can check the attachments for reference.\n> normal.log is the data without this feature, patched.log is the data with the\n> feature. The data doesn't show the 10x performance gain, I think that's mainly\n> data size related.\n\nThanks for running those tests. I had a quick look at the results and\nI think to say that all 4 are better is not quite right. One is\nactually a tiny bit slower and one is only faster due to a plan\nchange. Here's my full analysis.\n\nQ2 uses a result cache for the subplan and has about a 37.5% hit ratio\nwhich reduces the execution time of the query down to 67% of the\noriginal.\nQ17 uses a result cache for the subplan and has about a 96.5% hit\nratio which reduces the execution time of the query down to 24% of the\noriginal time.\nQ18 uses a result cache for 2 x nested loop joins and has a 0% hit\nratio. The execution time is reduced to 91% of the original time only\nbecause the planner uses a different plan, which just happens to be\nfaster by chance.\nQ20 uses a result cache for the subplan and has a 0% hit ratio. The\nexecution time is 100.27% of the original time. There are 8620 cache\nmisses.\nAll other queries use the same plan with and without the patch.\n\n> At the code level, I mainly checked nestloop path and cost_resultcache_rescan,\n> everything looks good to me. 
I'd like to check the other parts in the following days.\n\nGreat.\n\n> [1] https://ankane.org/tpc-h\n\n\n", "msg_date": "Wed, 3 Jun 2020 08:54:41 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": ">\n>\n> Thanks for running those tests. I had a quick look at the results and\n> I think to say that all 4 are better is not quite right. One is\n> actually a tiny bit slower and one is only faster due to a plan\n> change.\n>\n>\nYes.. Thanks for pointing it out.\n\n\n> Q18 uses a result cache for 2 x nested loop joins and has a 0% hit\n> ratio. The execution time is reduced to 91% of the original time only\n> because the planner uses a different plan, which just happens to be\n> faster by chance.\n> Q20 uses a result cache for the subplan and has a 0% hit\n> ratio. The\n> execution time is 100.27% of the original time. There are 8620 cache\n> misses.\n>\n\nLooks like the case here is some statistics issue or cost model issue. I'd\nlike to check more about that. But before that, I have uploaded the steps[1] I\nused\nin case you want to reproduce it locally.\n\n[1] https://github.com/zhihuiFan/tpch-postgres\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 3 Jun 2020 10:36:20 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, Jun 3, 2020 at 10:36 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>> Thanks for running those tests. I had a quick look at the results and\n>> I think to say that all 4 are better is not quite right. One is\n>> actually a tiny bit slower and one is only faster due to a plan\n>> change.\n>>\n>>\n> Yes.. Thanks for pointing it out.\n>\n>\n>> Q18 uses a result cache for 2 x nested loop joins and has a 0% hit\n>> ratio. The execution time is reduced to 91% of the original time only\n>> because the planner uses a different plan, which just happens to be\n>> faster by chance.\n>>\n>\nThis case should be caused by wrong row estimations on the condition\no_orderkey in (select l_orderkey from lineitem group by l_orderkey having\nsum(l_quantity) > 312). The estimation is 123766 rows, but the fact is 10\nrows.\nThis estimation is hard and I don't think we should address this issue in\nthis\npatch.\n\n\nQ20 uses a result cache for the subplan and has a 0% hit ratio. The\n>> execution time is 100.27% of the original time. There are 8620 cache\n>> misses.\n>>\n>\n>\nThis is by design for the current implementation.\n\n> For subplans, since we plan subplans before we're done planning the\n> outer plan, there's very little information to go on about the number\n> of times that the cache will be looked up. For now, I've coded things\n> so the cache is always used for EXPR_SUBLINK type subplans. \"\n\nI first tried to see if we can have a row estimation before the subplan\nis created and it looks very complex. 
The subplan was created during\npreprocess_qual_conditions; at that time, we hadn't even created the base\nRelOptInfo, to say nothing of the join_rel, whose row estimation happens\nmuch later.\n\nThen I checked whether we can delay the cache decision until we have the row\nestimation;\nExecInitSubPlan may be a candidate. At this time we can't add a new\nResultCache node, but we can add a cache function to the SubPlan node, cost\nbased. However the num_of_distinct values for the parameterized variable can't\nbe\ncalculated, which I still leave as an open issue.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 12 Jun 2020 12:10:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 12 Jun 2020 at 16:10, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I first tried to see if we can have a row estimation before the subplan\n> is created and it looks very complex. The subplan was created during\n> preprocess_qual_conditions; at that time, we hadn't even created the base\n> RelOptInfo, to say nothing of the join_rel, whose row estimation happens\n> much later.\n>\n> Then I checked whether we can delay the cache decision until we have the row estimation;\n> ExecInitSubPlan may be a candidate. At this time we can't add a new\n> ResultCache node, but we can add a cache function to the SubPlan node, cost\n> based. However the num_of_distinct values for the parameterized variable can't be\n> calculated, which I still leave as an open issue.\n\nI don't really like the idea of stuffing this feature into some\nexisting node type. Doing so would seem pretty magical when looking\nat an EXPLAIN ANALYZE. 
There is of course overhead to pulling tuples\nthrough an additional node in the plan, but if you use that as an\nargument then there's some room to argue that we should only have 1\nexecutor node type to get rid of that overhead.\n\nTom mentioned in [1] that he's reconsidering his original thoughts on\nleaving the AlternativeSubPlan selection decision until execution\ntime. If that were done late in planning, as Tom mentioned, then it\nwould be possible to give a more accurate cost to the Result Cache as\nwe'd have built the outer plan by that time and would be able to\nestimate the number of distinct calls to the correlated subplan. As\nthat feature is today we'd be unable to delay making the decision\nuntil execution time as we don't have the required details to know how\nmany distinct calls there will be to the Result Cache node.\n\nFor now, I'm planning on changing things around a little in the Result\nCache node to allow faster deletions from the cache. As of now, we\nmust perform 2 hash lookups to perform a single delete. This is\nbecause we must perform the lookup to fetch the entry from the MRU\nlist key, then an additional lookup in the hash delete code. I plan\non changing the hash delete code to expose another function that\nallows us to delete an item directly if we've already looked it up.\nThis should make a small reduction in the overheads of the node.\nPerhaps if the overhead is very small (say < 1%) when the cache is of\nno use then it might not be such a bad thing to just have a Result\nCache for correlated subplans regardless of estimates. With the TPCH\nQ20 test, it appeared as if the overhead was 0.27% for that particular\nsubplan. A more simple subplan would execute more quickly resulting\nthe Result Cache overhead being a more significant portion of the\noverall subquery execution. 
I'd need to perform a worst-case overhead\ntest to get an indication of what the percentage is.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/1992952.1592785225@sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 30 Jun 2020 11:57:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 30 Jun 2020 at 11:57, David Rowley <dgrowleyml@gmail.com> wrote:\n> For now, I'm planning on changing things around a little in the Result\n> Cache node to allow faster deletions from the cache. As of now, we\n> must perform 2 hash lookups to perform a single delete. This is\n> because we must perform the lookup to fetch the entry from the MRU\n> list key, then an additional lookup in the hash delete code. I plan\n> on changing the hash delete code to expose another function that\n> allows us to delete an item directly if we've already looked it up.\n> This should make a small reduction in the overheads of the node.\n> Perhaps if the overhead is very small (say < 1%) when the cache is of\n> no use then it might not be such a bad thing to just have a Result\n> Cache for correlated subplans regardless of estimates. With the TPCH\n> Q20 test, it appeared as if the overhead was 0.27% for that particular\n> subplan. A more simple subplan would execute more quickly resulting\n> the Result Cache overhead being a more significant portion of the\n> overall subquery execution. I'd need to perform a worst-case overhead\n> test to get an indication of what the percentage is.\n\nI made the changes that I mention to speedup the cache deletes. The\npatch is now in 3 parts. The first two parts are additional work and\nthe final part is the existing work with some small tweaks.\n\n0001: Alters estimate_num_groups() to allow it to pass back a flags\nvariable to indicate if the estimate used DEFAULT_NUM_DISTINCT. 
The\nidea here is to try and avoid using a Result Cache for a Nested Loop\njoin when the statistics are likely to be unreliable. Because\nDEFAULT_NUM_DISTINCT is 200, if we estimate that number of distinct\nvalues then a Result Cache is likely to look highly favourable in some\nsituations where it very well may not be. I've not given this patch a\nhuge amount of thought, but so far I don't see anything too\nunreasonable about it. I'm prepared to be wrong about that though.\n\n0002 Makes some adjustments to simplehash.h to expose a function which\nallows direct deletion of a hash table element when we already have a\npointer to the bucket. I think this is a pretty good change as it\nreuses more simplehash.h code than without the patch.\n\n0003 Is the result cache code. I've done another pass over this\nversion and fixed a few typos and added a few comments. I've not yet\nadded support for LATERAL joins. I plan to do that soon. For now, I\njust wanted to get something out there as I saw that the patch did\nneed rebased.\n\nI did end up testing the overheads of having a Result Cache node on a\nvery simple subplan that'll never see a cache hit. The overhead is\nquite a bit more than the 0.27% that we saw with TPCH Q20.\n\nUsing a query that gets zero cache hits:\n\n$ cat bench.sql\nselect relname,(select oid from pg_class c2 where c1.oid = c2.oid)\nfrom pg_Class c1 offset 1000000000;\n\nenable_resultcache = on:\n\n$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 0.474 ms\ntps = 2110.431529 (including connections establishing)\ntps = 2110.503284 (excluding connections establishing)\n\nenable_resultcache = off:\n\n$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 0.379 ms\ntps = 2640.534303 (including connections establishing)\ntps = 2640.620552 (excluding connections establishing)\n\nWhich is about a 25% overhead in this very simple case. 
With more\ncomplex subqueries that overhead will drop significantly, but for that\nsimple one, it does seem quite a bit too high to be adding a Result\nCache unconditionally for all correlated subqueries. I think based on\nthat it's worth looking into the AlternativeSubPlan option that I\nmentioned earlier.\n\nI've attached the v2 patch series.\n\nDavid", "msg_date": "Thu, 2 Jul 2020 22:57:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 2 Jul 2020 at 22:57, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached the v2 patch series.\n\nThere was a bug in v2 that caused the caching not to work properly\nwhen a unique join skipped to the next outer row after finding the\nfirst match. The cache was not correctly marked as complete in that\ncase. Normally we only mark the cache entry complete when we read the\nscan to completion. Unique joins are a special case where we can mark\nit as complete early.\n\nI've also made a few more changes to reduce the size of the\nResultCacheEntry struct, taking it from 40 bytes down to 24. That\nmatters quite a bit when the cached tuple is very narrow. One of the\ntests in resultcache.out reports a 15% increase in cache hits because\nwe can now fit more entries in the cache.\n\nI also improved the costing regarding the estimate of how many cache\nentries we could fit in work_mem. Previously I was not accounting for\nthe size of the cache data structures in memory. v2 only accounted for\nthe tuples themselves. 
It's important to count these; if we don't, the costing could think we\ncan fit more entries than we actually can, which would throw off the\nestimated number of cache evictions and could result in preferring a\nresult cache plan when we perhaps shouldn't.\n\nI've attached v4.\n\nI've also attached a bunch of benchmark results which were based on v3\nof the patch. I didn't send out v3, but the results of v4 should be\nalmost the same for this test. The script to run the benchmark is\ncontained in the resultcachebench.txt file. The benchmark just mocks\nup a \"parts\" table and a \"sales\" table. The parts table has 1 million\nrows in the 1 million test, as does the sales table. This goes up to\n10 million and 100 million in the other two tests. What varies with\neach bar in the chart is the number of distinct parts in the sales\ntable. I just started with 1 part then doubled that up to ~1 million.\nThe unpatched version always uses a Hash Join, which is wasteful since\nonly a subset of parts are looked up. In the 1 million test the\nplanner switches to using a Hash Join in the patched version at 65k\nparts. It waits until the 1 million distinct parts test to switch\nover in the 10 million and 100 million tests. The hash join costs are\nhigher in that case due to multi-batching, which is why the crossover\npoint is higher on the larger scale tests. I used 256MB work_mem for\nall tests. Looking closely at the 10 million test, you can see that\nthe hash join starts taking longer from 128 parts onward. The hash\ntable is the same each time here, so I can only suspect that the\nslowdown between 64 and 128 parts is due to CPU cache thrashing when\ngetting the correct buckets from the overly large hash table. 
This is\nnot really visible in the patched version as the resultcache hash\ntable is much smaller.\n\nDavid", "msg_date": "Wed, 8 Jul 2020 00:32:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 8 Jul 2020 at 00:32, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached v4.\n\nThomas pointed out to me earlier that, per the CFbot, v4 was\ngenerating a new compiler warning. Andres pointed out to me that I\ncould fix the warnings of the unused functions in simplehash.h by\nchanging the scope from static to static inline.\n\nThe attached v5 patch set fixes that.\n\nDavid", "msg_date": "Wed, 8 Jul 2020 15:37:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-05-20 23:44:27 +1200, David Rowley wrote:\n> I've attached a patch which implements this. The new node type is\n> called \"Result Cache\". I'm not particularly wedded to keeping that\n> name, but if I change it, I only want to do it once. I've got a few\n> other names I mind, but I don't feel strongly or confident enough in\n> them to go and do the renaming.\n\nI'm not convinced it's a good idea to introduce a separate executor node\nfor this. There's a fair bit of overhead in them, and they will only be\nbelow certain types of nodes afaict. It seems like it'd be better to\npull the required calls into the nodes that do parametrized scans of\nsubsidiary nodes. 
Have you considered that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Jul 2020 09:53:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-05-20 23:44:27 +1200, David Rowley wrote:\n> > I've attached a patch which implements this. The new node type is\n> > called \"Result Cache\". I'm not particularly wedded to keeping that\n> > name, but if I change it, I only want to do it once. I've got a few\n> > other names I mind, but I don't feel strongly or confident enough in\n> > them to go and do the renaming.\n>\n> I'm not convinced it's a good idea to introduce a separate executor node\n> for this. There's a fair bit of overhead in them, and they will only be\n> below certain types of nodes afaict. It seems like it'd be better to\n> pull the required calls into the nodes that do parametrized scans of\n> subsidiary nodes. Have you considered that?\n\nI see 41 different node types mentioned in ExecReScan(). 
I don't\nreally think it would be reasonable to change all those.\n\nHere are a couple of examples, one with a Limit below the Result Cache\nand one with a GroupAggregate.\n\npostgres=# explain (costs off) select * from pg_Class c1 where relname\n= (select relname from pg_Class c2 where c1.relname = c2.relname\noffset 1 limit 1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Seq Scan on pg_class c1\n Filter: (relname = (SubPlan 1))\n SubPlan 1\n -> Result Cache\n Cache Key: c1.relname\n -> Limit\n -> Index Only Scan using pg_class_relname_nsp_index\non pg_class c2\n Index Cond: (relname = c1.relname)\n(8 rows)\n\n\npostgres=# explain (costs off) select * from pg_Class c1 where relname\n= (select relname from pg_Class c2 where c1.relname = c2.relname group\nby 1 having count(*) > 1);\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Seq Scan on pg_class c1\n Filter: (relname = (SubPlan 1))\n SubPlan 1\n -> Result Cache\n Cache Key: c1.relname\n -> GroupAggregate\n Group Key: c2.relname\n Filter: (count(*) > 1)\n -> Index Only Scan using pg_class_relname_nsp_index\non pg_class c2\n Index Cond: (relname = c1.relname)\n(10 rows)\n\nAs for putting the logic somewhere like ExecReScan() then the first\nparagraph in [1] are my thoughts on that.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvr-yx9DEJ1Lc9aAy8QZkgEZkTP=3hBRBe83Vwo=kAndcA@mail.gmail.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 10:25:14 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 8 Jul 2020 at 15:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> The attached v5 patch set fixes that.\n\nI've attached an updated set of patches for this per recent conflict.\n\nI'd like to push the 0002 patch quite soon as I think it's an\nimprovement to simplehash.h 
regardless of if we get Result Cache. It\nreuses the SH_LOOKUP function for deletes. Also, if we ever get around\nto giving up performing a lookup if we get too far away from the\noptimal bucket, then that would only need to appear in one location\nrather than in two.\n\nAndres, or anyone, any objections to me pushing 0002?\n\nDavid", "msg_date": "Tue, 4 Aug 2020 10:05:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-08-04 10:05:25 +1200, David Rowley wrote:\n> I'd like to push the 0002 patch quite soon as I think it's an\n> improvement to simplehash.h regardless of if we get Result Cache. It\n> reuses the SH_LOOKUP function for deletes. Also, if we ever get around\n> to giving up performing a lookup if we get too far away from the\n> optimal bucket, then that would only need to appear in one location\n> rather than in two.\n\n> Andres, or anyone, any objections to me pushing 0002?\n\nI think it'd be good to add a warning that, unless one is very careful,\nno other hashtable modifications are allowed between lookup and\nmodification. E.g. something like\na = foobar_lookup();foobar_insert();foobar_delete();\nwill occasionally go wrong...\n\n\n> -\t\t/* TODO: return false; if distance too big */\n> +/*\n> + * Perform hash table lookup on 'key', delete the entry belonging to it and\n> + * return true. 
Returns false if no item could be found relating to 'key'.\n> + */\n> +SH_SCOPE bool\n> +SH_DELETE(SH_TYPE * tb, SH_KEY_TYPE key)\n> +{\n> +\tSH_ELEMENT_TYPE *entry = SH_LOOKUP(tb, key);\n> \n> -\t\tcurelem = SH_NEXT(tb, curelem, startelem);\n> +\tif (likely(entry != NULL))\n> +\t{\n> +\t\t/*\n> +\t\t * Perform deletion and also the relocation of subsequent items which\n> +\t\t * are not in their optimal position but can now be moved up.\n> +\t\t */\n> +\t\tSH_DELETE_ITEM(tb, entry);\n> +\t\treturn true;\n> \t}\n> +\n> +\treturn false;\t\t/* Can't find 'key' */\n> }\n\nYou mentioned on IM that there's a slowdown with gcc. I wonder if this\ncould partially be responsible. Does SH_DELETE inline LOOKUP and\nDELETE_ITEM? And does the generated code end up reloading entry-> or\ntb-> members?\n\nWhen the SH_SCOPE isn't static *, then IIRC gcc on unixes can't rely on\nthe called function actually being the function defined in the same\ntranslation unit (unless -fno-semantic-interposition is specified).\n\n\nHm, but you said that this happens in tidbitmap.c, and there all\nreferenced functions are local statics. So that's not quite the\nexplanation I was thinking it was...\n\n\nHm. Also wonder whether we currently (i.e. in the existing code)\nunnecessarily end up reloading tb->data a bunch of times, because we do\nthe access to ->data as\n\t\tSH_ELEMENT_TYPE *entry = &tb->data[curelem];\n\nThink we should instead store tb->data in a local variable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Aug 2020 10:44:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, May 20, 2020 at 7:44 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a patch which implements this. The new node type is\n> called \"Result Cache\". 
I'm not particularly wedded to keeping that\n> name, but if I change it, I only want to do it once. I've got a few\n> other names I mind, but I don't feel strongly or confident enough in\n> them to go and do the renaming.\n\nThis is cool work; I am going to bikeshed on the name for a minute. I\ndon't think Result Cache is terrible, but I have two observations:\n\n1. It might invite confusion with a feature of some other database\nsystems where they cache the results of entire queries and try to\nreuse the entire result set.\n\n2. The functionality reminds me a bit of a Materialize node, except\nthat instead of overflowing to disk, we throw away cache entries, and\ninstead of caching just one result, we potentially cache many.\n\nI can't really think of a way to work Materialize into the node name\nand I'm not sure it would be the right idea anyway. But I wonder if\nmaybe a name like \"Parameterized Cache\" would be better? That would\navoid confusion with any other use of the phrase \"result cache\"; also,\nan experienced PostgreSQL user might be more likely to guess how a\n\"Parameterized Cache\" is different from a \"Materialize\" node than they\nwould be if it were called a \"Result Cache\".\n\nJust my $0.02,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 5 Aug 2020 16:13:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, May 20, 2020 at 4:44 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Does it seem like something we might want for PG14?\n\nMinor terminology issue: \"Hybrid Hash Join\" is a specific hash join\nalgorithm which is unrelated to what you propose to do here. 
I hope\nthat confusion can be avoided, possibly by not using the word hybrid\nin the name.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 Aug 2020 13:40:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 6 Aug 2020 at 08:13, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> This is cool work; I am going to bikeshed on the name for a minute. I\n> don't think Result Cache is terrible, but I have two observations:\n\nThanks\n\n> 1. It might invite confusion with a feature of some other database\n> systems where they cache the results of entire queries and try to\n> reuse the entire result set.\n\nYeah. I think \"Cache\" is good to keep, but I'm pretty much in favour\nof swapping \"Result\" for something else. It's a bit too close to the\n\"Result\" node in name, but too distant for everything else.\n\n> 2. The functionality reminds me a bit of a Materialize node, except\n> that instead of overflowing to disk, we throw away cache entries, and\n> instead of caching just one result, we potentially cache many.\n>\n> I can't really think of a way to work Materialize into the node name\n> and I'm not sure it would be the right idea anyway. But I wonder if\n> maybe a name like \"Parameterized Cache\" would be better?\n\nYeah, I think that name is better. The only downside as far as I can\nsee is the length of it.\n\nI'll hold off a bit before doing any renaming though to see what other\npeople think. I just feel bikeshedding on the name is something that's\ngoing to take up quite a bit of time and effort with this. 
I plan to\nrename it at most once.\n\nThanks for the comments\n\nDavid\n\n\n", "msg_date": "Thu, 6 Aug 2020 10:00:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 6 Aug 2020 at 05:44, Andres Freund <andres@anarazel.de> wrote:\n> > Andres, or anyone, any objections to me pushing 0002?\n>\n> I think it'd be good to add a warning that, unless one is very careful,\n> no other hashtable modifications are allowed between lookup and\n> modification. E.g. something like\n> a = foobar_lookup();foobar_insert();foobar_delete();\n> will occasionally go wrong...\n\nGood point. I agree. An insert could grow the table. Additionally,\nanother delete could shuffle elements back to a more optimal position\nso we couldn't do any inserts or deletes between the lookup of the\nitem to delete and the actual delete.\n\n> > - /* TODO: return false; if distance too big */\n> > +/*\n> > + * Perform hash table lookup on 'key', delete the entry belonging to it and\n> > + * return true. Returns false if no item could be found relating to 'key'.\n> > + */\n> > +SH_SCOPE bool\n> > +SH_DELETE(SH_TYPE * tb, SH_KEY_TYPE key)\n> > +{\n> > + SH_ELEMENT_TYPE *entry = SH_LOOKUP(tb, key);\n> >\n> > - curelem = SH_NEXT(tb, curelem, startelem);\n> > + if (likely(entry != NULL))\n> > + {\n> > + /*\n> > + * Perform deletion and also the relocation of subsequent items which\n> > + * are not in their optimal position but can now be moved up.\n> > + */\n> > + SH_DELETE_ITEM(tb, entry);\n> > + return true;\n> > }\n> > +\n> > + return false; /* Can't find 'key' */\n> > }\n>\n> You meantioned on IM that there's a slowdowns with gcc. I wonder if this\n> could partially be responsible. Does SH_DELETE inline LOOKUP and\n> DELETE_ITEM? 
And does the generated code end up reloading entry-> or\n> tb-> members?\n\nYeah both the SH_LOOKUP and SH_DELETE_ITEM are inlined.\n\nI think the difference might be coming from the fact that I have to\ncalculate the bucket index from the bucket pointer using:\n\n /* Calculate the index of 'entry' */\n curelem = entry - &tb->data[0];\n\nThere is some slight change of instructions due to the change in the\nhash lookup part of SH_DELETE, but for the guts of the code that's\ngenerated for SH_DELETE_ITEM, there's a set of instructions that are\njust additional:\n\nsubq %r10, %rax\nsarq $4, %rax\nimull $-1431655765, %eax, %eax\nleal 1(%rax), %r8d\n\nFor testing sake, I changed the curelem = entry - &tb->data[0]; to\njust be curelem = 10; and those 4 instructions disappear.\n\nI can't really work out what the imull constant means. In binary, that\nnumber is 10101010101010101010101010101011\n\nI wonder if it might be easier if I just leave SH_DELETE alone and\njust add a new function to delete with the known element.\n\n> When the SH_SCOPE isn't static *, then IIRC gcc on unixes can't rely on\n> the called function actually being the function defined in the same\n> translation unit (unless -fno-semantic-interposition is specified).\n>\n>\n> Hm, but you said that this happens in tidbitmap.c, and there all\n> referenced functions are local statics. So that's not quite the\n> explanation I was thinking it was...\n>\n>\n> Hm. Also wonder whether we currently (i.e. the existing code) we\n> unnecessarily end up reloading tb->data a bunch of times, because we do\n> the access to ->data as\n> SH_ELEMENT_TYPE *entry = &tb->data[curelem];\n>\n> Think we should instead store tb->data in a local variable.\n\nAt the start of SH_DELETE_ITEM I tried doing:\n\nSH_ELEMENT_TYPE *buckets = tb->data;\n\nthen referencing that local var instead of tb->data in the body of the\nloop. No meaningful improvements to the assembly. 
It just seems to\nadjust which registers are used.\n\nWith the local var I see:\n\naddq %r9, %rdx\n\nbut in the version without the local variable I see:\n\naddq 24(%rdi), %rdx\n\nthe data array is 24 bytes into the SH_TYPE struct. So it appears like\nwe just calculate the offset to that field by adding 24 to the tb\nfield without the local var and just load the value from the register\nthat's storing the local var otherwise.\n\nDavid\n\n\n", "msg_date": "Fri, 7 Aug 2020 00:41:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-05-20 23:44:27 +1200, David Rowley wrote:\n> > > I've attached a patch which implements this. The new node type is\n> > > called \"Result Cache\". I'm not particularly wedded to keeping that\n> > > name, but if I change it, I only want to do it once. I've got a few\n> > > other names I mind, but I don't feel strongly or confident enough in\n> > > them to go and do the renaming.\n> >\n> > I'm not convinced it's a good idea to introduce a separate executor node\n> > for this. There's a fair bit of overhead in them, and they will only be\n> > below certain types of nodes afaict. It seems like it'd be better to\n> > pull the required calls into the nodes that do parametrized scans of\n> > subsidiary nodes. Have you considered that?\n> \n> I see 41 different node types mentioned in ExecReScan(). I don't\n> really think it would be reasonable to change all those.\n\nBut that's because we dispatch ExecReScan mechanically down to every\nsingle executor node. That doesn't determine how many nodes would need\nto modify to include explicit caching? 
What am I missing?\n\nWouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\nintegration?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Aug 2020 17:21:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 11 Aug 2020 at 12:21, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> > On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n> > > I'm not convinced it's a good idea to introduce a separate executor node\n> > > for this. There's a fair bit of overhead in them, and they will only be\n> > > below certain types of nodes afaict. It seems like it'd be better to\n> > > pull the required calls into the nodes that do parametrized scans of\n> > > subsidiary nodes. Have you considered that?\n> >\n> > I see 41 different node types mentioned in ExecReScan(). I don't\n> > really think it would be reasonable to change all those.\n>\n> But that's because we dispatch ExecReScan mechanically down to every\n> single executor node. That doesn't determine how many nodes would need\n> to modify to include explicit caching? What am I missing?\n>\n> Wouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\n> integration?\n\nhmm, I think you're right there about those two node types. I'm just\nnot sure you're right about overloading these node types to act as a\ncache. How would you inform users via EXPLAIN ANALYZE of how many\ncache hits/misses occurred? 
What would you use to disable it for an\nescape hatch for when the planner makes a bad choice about caching?\n\nDavid\n\n\n", "msg_date": "Tue, 11 Aug 2020 17:23:42 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-08-11 17:23:42 +1200, David Rowley wrote:\n> On Tue, 11 Aug 2020 at 12:21, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> > > On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n> > > > I'm not convinced it's a good idea to introduce a separate executor node\n> > > > for this. There's a fair bit of overhead in them, and they will only be\n> > > > below certain types of nodes afaict. It seems like it'd be better to\n> > > > pull the required calls into the nodes that do parametrized scans of\n> > > > subsidiary nodes. Have you considered that?\n> > >\n> > > I see 41 different node types mentioned in ExecReScan(). I don't\n> > > really think it would be reasonable to change all those.\n> >\n> > But that's because we dispatch ExecReScan mechanically down to every\n> > single executor node. That doesn't determine how many nodes would need\n> > to modify to include explicit caching? What am I missing?\n> >\n> > Wouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\n> > integration?\n> \n> hmm, I think you're right there about those two node types. I'm just\n> not sure you're right about overloading these node types to act as a\n> cache.\n\nI'm not 100% either, to be clear. I am just acutely aware that adding\nentire nodes is pretty expensive, and that there's, afaict, no need to\nhave arbitrary (i.e. 
pointer to function) type callbacks to point to the\ncache.\n\n> How would you inform users via EXPLAIN ANALYZE of how many\n> cache hits/misses occurred?\n\nSimilar to how we display memory for sorting etc.\n\n\n> What would you use to disable it for an\n> escape hatch for when the planner makes a bad choice about caching?\n\nIsn't that *easier* when embedding it into the node? There's no nice way\nto remove an intermediary executor node entirely, but it's trivial to\nhave an if statement like\nif (node->cache && upsert_cache(node->cache, param))\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Aug 2020 22:44:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 11 Aug 2020 at 17:44, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-08-11 17:23:42 +1200, David Rowley wrote:\n> > On Tue, 11 Aug 2020 at 12:21, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> > > > On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n> > > > > I'm not convinced it's a good idea to introduce a separate executor node\n> > > > > for this. There's a fair bit of overhead in them, and they will only be\n> > > > > below certain types of nodes afaict. It seems like it'd be better to\n> > > > > pull the required calls into the nodes that do parametrized scans of\n> > > > > subsidiary nodes. Have you considered that?\n> > > >\n> > > > I see 41 different node types mentioned in ExecReScan(). I don't\n> > > > really think it would be reasonable to change all those.\n> > >\n> > > But that's because we dispatch ExecReScan mechanically down to every\n> > > single executor node. That doesn't determine how many nodes would need\n> > > to modify to include explicit caching? 
What am I missing?\n> > >\n> > > Wouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\n> > > integration?\n> >\n> > hmm, I think you're right there about those two node types. I'm just\n> > not sure you're right about overloading these node types to act as a\n> > cache.\n>\n> I'm not 100% either, to be clear. I am just acutely aware that adding\n> entire nodes is pretty expensive, and that there's, afaict, no need to\n> have arbitrary (i.e. pointer to function) type callbacks to point to the\n> cache.\n\nPerhaps you're right, but I'm just not convinced of it. I feel\nthere's a certain air of magic involved in any node that has a good\nname and reputation for doing one thing that we suddenly add new\nfunctionality to which causes it to perform massively differently.\n\nA counterexample to your argument is that Materialize is a node type.\nThere's only a limits number of places where that node is used. One of\nthose places can be on the inside of a non-parameterized nested loop.\nYour argument of having Nested Loop do caching would also indicate\nthat Materialize should be part of Nested Loop instead of a node\nitself. There's a few other places Materialize is used, e.g scrollable\ncursors, but in that regard, you could say that the caching should be\nhandled in ExecutePlan(). I just don't think it should be, as I don't\nthink Result Cache should be part of any other node or code.\n\nAnother problem I see with overloading nodeSubplan and nodeNestloop\nis, we don't really document our executor nodes today, so unless this\npatch starts a new standard for that, then there's not exactly a good\nplace to mention that parameterized nested loops may now cache results\nfrom the inner scan.\n\nI do understand what you mean with the additional node overhead. I saw\nthat in my adventures of INNER JOIN removals a few years ago. 
I hope\nthe fact that I've tried to code the planner so that for nested loops,\nit only uses a Result Cache node when it thinks it'll speed things up.\nThat decision is of course based on having good statistics, which\nmight not be the case. I don't quite have that luxury with subplans\ndue to lack of knowledge of the outer plan when planning the subquery.\n\n> > How would you inform users via EXPLAIN ANALYZE of how many\n> > cache hits/misses occurred?\n>\n> Similar to how we display memory for sorting etc.\n\nI was more thinking of how bizarre it would be to see Nested Loop and\nSubPlan report cache statistics. It may appear quite magical to users\nto see EXPLAIN ANALYZE mentioning that their Nested Loop is now\nreporting something about cache hits.\n\n> > What would you use to disable it for an\n> > escape hatch for when the planner makes a bad choice about caching?\n>\n> Isn't that *easier* when embedding it into the node? There's no nice way\n> to remove an intermediary executor node entirely, but it's trivial to\n> have an if statement like\n> if (node->cache && upsert_cache(node->cache, param))\n\nI was more meaning that it might not make sense to keep the\nenable_resultcache GUC if the caching were part of the existing nodes.\nI think people are pretty used to the enable_* GUCs corresponding to\nan executor whose name roughly matches the name of the GUC. In this\ncase, without a Result Cache node, enable_resultcache would not assist\nin self-documenting. However, perhaps 2 new GUCs instead,\nenable_nestloop_caching and enable_subplan_caching. We're currently\nshort of any other enable_* GUCs that are node modifiers. We did have\nenable_hashagg_disk until a few weeks ago. 
Nobody seemed to like that,\nbut perhaps there were other reasons for people not to like it other\nthan it was a node modifier GUC.\n\nI'm wondering if anyone else has any thoughts on this?\n\nDavid\n\n\n", "msg_date": "Tue, 18 Aug 2020 21:42:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 25 May 2020 at 19:53, David Rowley <dgrowleyml@gmail.com> wrote:\n> I didn't quite get the LATERAL support quite done in the version I\n> sent. For now, I'm not considering adding a Result Cache node if there\n> are lateral vars in any location other than the inner side of the\n> nested loop join. I think it'll just be a few lines to make it work\n> though. I wanted to get some feedback before going to too much more\n> trouble to make all cases work.\n\nI've now changed the patch so that it supports adding a Result Cache\nnode to LATERAL joins.\n\ne.g.\n\nregression=# explain analyze select count(*) from tenk1 t1, lateral\n(select x from generate_Series(1,t1.twenty) x) gs;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=150777.53..150777.54 rows=1 width=8) (actual\ntime=22.191..22.191 rows=1 loops=1)\n -> Nested Loop (cost=0.01..125777.53 rows=10000000 width=0)\n(actual time=0.010..16.980 rows=95000 loops=1)\n -> Seq Scan on tenk1 t1 (cost=0.00..445.00 rows=10000\nwidth=4) (actual time=0.003..0.866 rows=10000 loops=1)\n -> Result Cache (cost=0.01..10.01 rows=1000 width=0)\n(actual time=0.000..0.001 rows=10 loops=10000)\n Cache Key: t1.twenty\n Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n -> Function Scan on generate_series x\n(cost=0.00..10.00 rows=1000 width=0) (actual time=0.001..0.002 rows=10\nloops=20)\n Planning Time: 0.046 ms\n Execution Time: 22.208 ms\n(9 rows)\n\nTime: 22.704 ms\nregression=# set 
enable_resultcache=0;\nSET\nTime: 0.367 ms\nregression=# explain analyze select count(*) from tenk1 t1, lateral\n(select x from generate_Series(1,t1.twenty) x) gs;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=225445.00..225445.01 rows=1 width=8) (actual\ntime=35.578..35.579 rows=1 loops=1)\n -> Nested Loop (cost=0.00..200445.00 rows=10000000 width=0)\n(actual time=0.008..30.196 rows=95000 loops=1)\n -> Seq Scan on tenk1 t1 (cost=0.00..445.00 rows=10000\nwidth=4) (actual time=0.002..0.905 rows=10000 loops=1)\n -> Function Scan on generate_series x (cost=0.00..10.00\nrows=1000 width=0) (actual time=0.001..0.002 rows=10 loops=10000)\n Planning Time: 0.031 ms\n Execution Time: 35.590 ms\n(6 rows)\n\nTime: 36.027 ms\n\nv7 patch series attached.\n\nI also modified the 0002 patch so instead of modifying simplehash.h's\nSH_DELETE function to have it call SH_LOOKUP and the newly added\nSH_DELETE_ITEM function, I've just added an entirely new\nSH_DELETE_ITEM and left SH_DELETE untouched. 
Trying to remove the\ncode duplication without having a negative effect on performance was\ntricky and it didn't save enough code to seem worthwhile enough.\n\nI also did a round of polishing work, fixed a spelling mistake in a\ncomment and reworded a few other comments to make some meaning more\nclear.\n\nDavid", "msg_date": "Wed, 19 Aug 2020 14:45:55 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 18 Aug 2020 at 21:42, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 11 Aug 2020 at 17:44, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-08-11 17:23:42 +1200, David Rowley wrote:\n> > > On Tue, 11 Aug 2020 at 12:21, Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> > > > > On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de> wrote:\n> > > > > > I'm not convinced it's a good idea to introduce a separate executor node\n> > > > > > for this. There's a fair bit of overhead in them, and they will only be\n> > > > > > below certain types of nodes afaict. It seems like it'd be better to\n> > > > > > pull the required calls into the nodes that do parametrized scans of\n> > > > > > subsidiary nodes. Have you considered that?\n> > > > >\n> > > > > I see 41 different node types mentioned in ExecReScan(). I don't\n> > > > > really think it would be reasonable to change all those.\n> > > >\n> > > > But that's because we dispatch ExecReScan mechanically down to every\n> > > > single executor node. That doesn't determine how many nodes would need\n> > > > to modify to include explicit caching? What am I missing?\n> > > >\n> > > > Wouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\n> > > > integration?\n> > >\n> > > hmm, I think you're right there about those two node types. 
I'm just\n> > > not sure you're right about overloading these node types to act as a\n> > > cache.\n> >\n> > I'm not 100% either, to be clear. I am just acutely aware that adding\n> > entire nodes is pretty expensive, and that there's, afaict, no need to\n> > have arbitrary (i.e. pointer to function) type callbacks to point to the\n> > cache.\n>\n> Perhaps you're right, but I'm just not convinced of it. I feel\n> there's a certain air of magic involved in any node that has a good\n> name and reputation for doing one thing that we suddenly add new\n> functionality to which causes it to perform massively differently.\n>\n\n[ my long babble removed]\n\n> I'm wondering if anyone else has any thoughts on this?\n\nJust for anyone following along at home. The two variations would\nroughly look like:\n\nCurrent method:\n\nregression=# explain (analyze, costs off, timing off, summary off)\nselect count(*) from tenk1 t1 inner join tenk1 t2 on\nt1.twenty=t2.unique1;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Aggregate (actual rows=1 loops=1)\n -> Nested Loop (actual rows=10000 loops=1)\n -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n -> Result Cache (actual rows=1 loops=10000)\n Cache Key: t1.twenty\n Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n -> Index Scan using tenk1_unique1 on tenk1 t2 (actual\nrows=1 loops=20)\n Index Cond: (unique1 = t1.twenty)\n(8 rows)\n\nAndres' suggestion:\n\nregression=# explain (analyze, costs off, timing off, summary off)\nselect count(*) from tenk1 t1 inner join tenk1 t2 on\nt1.twenty=t2.unique1;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Aggregate (actual rows=1 loops=1)\n -> Nested Loop (actual rows=10000 loops=1)\n Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0\nOverflows: 0\n -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n -> Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)\n 
Index Cond: (unique1 = t1.twenty)\n(6 rows)\n\nand for subplans:\n\nCurrent method:\n\nregression=# explain (analyze, costs off, timing off, summary off)\nselect twenty, (select count(*) from tenk1 t2 where t1.twenty =\nt2.twenty) from tenk1 t1;\n QUERY PLAN\n---------------------------------------------------------------------\n Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n SubPlan 1\n -> Result Cache (actual rows=1 loops=10000)\n Cache Key: t1.twenty\n Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n -> Aggregate (actual rows=1 loops=20)\n -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n Filter: (t1.twenty = twenty)\n Rows Removed by Filter: 9500\n(9 rows)\n\nAndres' suggestion:\n\nregression=# explain (analyze, costs off, timing off, summary off)\nselect twenty, (select count(*) from tenk1 t2 where t1.twenty =\nt2.twenty) from tenk1 t1;\n QUERY PLAN\n---------------------------------------------------------------------\n Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n SubPlan 1\n Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n -> Aggregate (actual rows=1 loops=20)\n -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n Filter: (t1.twenty = twenty)\n Rows Removed by Filter: 9500\n(7 rows)\n\nI've spoken to one other person off-list about this and they suggested\nthat they prefer Andres' suggestion on performance grounds that it's\nless overhead to pull tuples through the plan and cheaper executor\nstartup/shutdowns due to fewer nodes.\n\nI don't object to making the change. I just object to making it only\nto put it back again later when someone else speaks up that they'd\nprefer to keep nodes modular and not overload them in obscure ways.\n\nSo other input is welcome. Is it too weird to overload SubPlan and\nNested Loop this way? 
Or okay to do that if it squeezes out a dozen\nor so nanoseconds per tuple?\n\nI did some analysis into the overhead of pulling tuples through an\nadditional executor node in [1].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B%3DZRh-rxy9qxfPA5Gw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 19 Aug 2020 15:48:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "st 19. 8. 2020 v 5:48 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Tue, 18 Aug 2020 at 21:42, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Tue, 11 Aug 2020 at 17:44, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2020-08-11 17:23:42 +1200, David Rowley wrote:\n> > > > On Tue, 11 Aug 2020 at 12:21, Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > >\n> > > > > On 2020-07-09 10:25:14 +1200, David Rowley wrote:\n> > > > > > On Thu, 9 Jul 2020 at 04:53, Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > > > > I'm not convinced it's a good idea to introduce a separate\n> executor node\n> > > > > > > for this. There's a fair bit of overhead in them, and they\n> will only be\n> > > > > > > below certain types of nodes afaict. It seems like it'd be\n> better to\n> > > > > > > pull the required calls into the nodes that do parametrized\n> scans of\n> > > > > > > subsidiary nodes. Have you considered that?\n> > > > > >\n> > > > > > I see 41 different node types mentioned in ExecReScan(). I don't\n> > > > > > really think it would be reasonable to change all those.\n> > > > >\n> > > > > But that's because we dispatch ExecReScan mechanically down to\n> every\n> > > > > single executor node. That doesn't determine how many nodes would\n> need\n> > > > > to modify to include explicit caching? 
What am I missing?\n> > > > >\n> > > > > Wouldn't we need roughly just nodeNestloop.c and nodeSubplan.c\n> > > > > integration?\n> > > >\n> > > > hmm, I think you're right there about those two node types. I'm just\n> > > > not sure you're right about overloading these node types to act as a\n> > > > cache.\n> > >\n> > > I'm not 100% either, to be clear. I am just acutely aware that adding\n> > > entire nodes is pretty expensive, and that there's, afaict, no need to\n> > > have arbitrary (i.e. pointer to function) type callbacks to point to\n> the\n> > > cache.\n> >\n> > Perhaps you're right, but I'm just not convinced of it. I feel\n> > there's a certain air of magic involved in any node that has a good\n> > name and reputation for doing one thing that we suddenly add new\n> > functionality to which causes it to perform massively differently.\n> >\n>\n> [ my long babble removed]\n>\n> > I'm wondering if anyone else has any thoughts on this?\n>\n> Just for anyone following along at home. 
The two variations would\n> roughly look like:\n>\n> Current method:\n>\n> regression=# explain (analyze, costs off, timing off, summary off)\n> select count(*) from tenk1 t1 inner join tenk1 t2 on\n> t1.twenty=t2.unique1;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------\n> Aggregate (actual rows=1 loops=1)\n> -> Nested Loop (actual rows=10000 loops=1)\n> -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> -> Result Cache (actual rows=1 loops=10000)\n> Cache Key: t1.twenty\n> Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> -> Index Scan using tenk1_unique1 on tenk1 t2 (actual\n> rows=1 loops=20)\n> Index Cond: (unique1 = t1.twenty)\n> (8 rows)\n>\n> Andres' suggestion:\n>\n> regression=# explain (analyze, costs off, timing off, summary off)\n> select count(*) from tenk1 t1 inner join tenk1 t2 on\n> t1.twenty=t2.unique1;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------\n> Aggregate (actual rows=1 loops=1)\n> -> Nested Loop (actual rows=10000 loops=1)\n> Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0\n> Overflows: 0\n> -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> -> Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1\n> loops=20)\n> Index Cond: (unique1 = t1.twenty)\n> (6 rows)\n>\n> and for subplans:\n>\n> Current method:\n>\n> regression=# explain (analyze, costs off, timing off, summary off)\n> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n> t2.twenty) from tenk1 t1;\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> SubPlan 1\n> -> Result Cache (actual rows=1 loops=10000)\n> Cache Key: t1.twenty\n> Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> -> Aggregate (actual rows=1 loops=20)\n> -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n> Filter: (t1.twenty = twenty)\n> Rows Removed by Filter: 9500\n> (9 
rows)\n>\n> Andres' suggestion:\n>\n> regression=# explain (analyze, costs off, timing off, summary off)\n> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n> t2.twenty) from tenk1 t1;\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> SubPlan 1\n> Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows:\n> 0\n> -> Aggregate (actual rows=1 loops=20)\n> -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n> Filter: (t1.twenty = twenty)\n> Rows Removed by Filter: 9500\n> (7 rows)\n>\n> I've spoken to one other person off-list about this and they suggested\n> that they prefer Andres' suggestion on performance grounds that it's\n> less overhead to pull tuples through the plan and cheaper executor\n> startup/shutdowns due to fewer nodes.\n>\n\nI didn't do performance tests, that should be necessary, but I think\nAndres' variant is a little bit more readable.\n\nThe performance is most important, but readability of EXPLAIN is\ninteresting too.\n\nRegards\n\nPavel\n\n\n>\n>\n> I don't object to making the change. I just object to making it only\n> to put it back again later when someone else speaks up that they'd\n> prefer to keep nodes modular and not overload them in obscure ways.\n>\n> So other input is welcome. Is it too weird to overload SubPlan and\n> Nested Loop this way? Or okay to do that if it squeezes out a dozen\n> or so nanoseconds per tuple?\n>\n> I did some analysis into the overhead of pulling tuples through an\n> additional executor node in [1].\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B%3DZRh-rxy9qxfPA5Gw%40mail.gmail.com\n>\n>\n>\n
", "msg_date": "Wed, 19 Aug 2020 06:17:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I don't object to making the change. I just object to making it only\n> to put it back again later when someone else speaks up that they'd\n> prefer to keep nodes modular and not overload them in obscure ways.\n\n> So other input is welcome. Is it too weird to overload SubPlan and\n> Nested Loop this way? Or okay to do that if it squeezes out a dozen\n> or so nanoseconds per tuple?\n\nIf you need somebody to blame it on, blame it on me - but I agree\nthat that is an absolutely horrid abuse of NestLoop. We might as\nwell reduce explain.c to a one-liner that prints \"Here Be Dragons\",\nbecause no one will understand what this display is telling them.\n\nI'm also quite skeptical that adding overhead to nodeNestloop.c\nto support this would actually be a net win once you account for\nwhat happens in plans where the caching is of no value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Aug 2020 00:23:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 19 Aug 2020 at 16:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I don't object to making the change. 
I just object to making it only\n> > to put it back again later when someone else speaks up that they'd\n> > prefer to keep nodes modular and not overload them in obscure ways.\n>\n> > So other input is welcome. Is it too weird to overload SubPlan and\n> > Nested Loop this way? Or okay to do that if it squeezes out a dozen\n> > or so nanoseconds per tuple?\n>\n> If you need somebody to blame it on, blame it on me - but I agree\n> that that is an absolutely horrid abuse of NestLoop. We might as\n> well reduce explain.c to a one-liner that prints \"Here Be Dragons\",\n> because no one will understand what this display is telling them.\n\nThanks for chiming in. I'm relieved it's not me vs everyone else anymore.\n\n> I'm also quite skeptical that adding overhead to nodeNestloop.c\n> to support this would actually be a net win once you account for\n> what happens in plans where the caching is of no value.\n\nAgreed.\n\nDavid\n\n\n", "msg_date": "Thu, 20 Aug 2020 09:59:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 19 Aug 2020 at 16:18, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> st 19. 8. 
2020 v 5:48 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n>> Current method:\n>>\n>> regression=# explain (analyze, costs off, timing off, summary off)\n>> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n>> t2.twenty) from tenk1 t1;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------\n>> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n>> SubPlan 1\n>> -> Result Cache (actual rows=1 loops=10000)\n>> Cache Key: t1.twenty\n>> Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n>> -> Aggregate (actual rows=1 loops=20)\n>> -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n>> Filter: (t1.twenty = twenty)\n>> Rows Removed by Filter: 9500\n>> (9 rows)\n>>\n>> Andres' suggestion:\n>>\n>> regression=# explain (analyze, costs off, timing off, summary off)\n>> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n>> t2.twenty) from tenk1 t1;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------\n>> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n>> SubPlan 1\n>> Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n>> -> Aggregate (actual rows=1 loops=20)\n>> -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n>> Filter: (t1.twenty = twenty)\n>> Rows Removed by Filter: 9500\n>> (7 rows)\n\n> I didn't do performance tests, that should be necessary, but I think Andres' variant is a little bit more readable.\n\nThanks for chiming in on this. I was just wondering about the\nreadability part and what makes the one with the Result Cache node\nless readable? 
I can think of a couple of reasons you might have this\nview and just wanted to double-check what it is.\n\nDavid\n\n\n", "msg_date": "Thu, 20 Aug 2020 10:04:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On 2020-Aug-19, David Rowley wrote:\n\n> Andres' suggestion:\n> \n> regression=# explain (analyze, costs off, timing off, summary off)\n> select count(*) from tenk1 t1 inner join tenk1 t2 on\n> t1.twenty=t2.unique1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------\n> Aggregate (actual rows=1 loops=1)\n> -> Nested Loop (actual rows=10000 loops=1)\n> Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> -> Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)\n> Index Cond: (unique1 = t1.twenty)\n> (6 rows)\n\nI think it doesn't look terrible in the SubPlan case -- it kinda makes\nsense there -- but for nested loop it appears really strange.\n\nOn the performance aspect, I wonder what the overhead is, particularly\nconsidering Tom's point of making these nodes more expensive for cases\nwith no caching. 
And also, as the JIT saga continues, aren't we going\nto get plan trees recompiled too, at which point it won't matter much?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Aug 2020 18:58:11 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 20 Aug 2020 at 10:58, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On the performance aspect, I wonder what the overhead is, particularly\n> considering Tom's point of making these nodes more expensive for cases\n> with no caching.\n\nIt's likely small. I've not written any code but only thought about it\nand I think it would be something like if (node->tuplecache != NULL).\nI imagine that in simple cases the branch predictor would likely\nrealise the likely prediction fairly quickly and predict with 100%\naccuracy, once learned. But it's perhaps possible that some other\nbranch shares the same slot in the branch predictor and causes some\nconflicting predictions. The size of the branch predictor cache is\nlimited, of course. Certainly introducing new branches that\nmispredict and cause a pipeline stall during execution would be a very\nbad thing for performance. I'm unsure what would happen if there's\nsay, 2 Nested loops, one with caching = on and one with caching = off\nwhere the number of tuples between the two is highly variable. I'm\nnot sure a branch predictor would handle that well given that the two\nbranches will be at the same address but have different predictions.\nHowever, if the predictor was to hash in the stack pointer too, then\nthat might not be a problem. 
Perhaps someone with a better\nunderstanding of modern branch predictors can share their insight\nthere.\n\n> And also, as the JIT saga continues, aren't we going\n> to get plan trees recompiled too, at which point it won't matter much?\n\nI was thinking batch execution would be our solution to the node\noverhead problem. We'll get there one day... we just need to finish\nwith the infinite other optimisations there are to do first.\n\nDavid\n\n\n", "msg_date": "Thu, 20 Aug 2020 11:56:20 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "čt 20. 8. 2020 v 0:04 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Wed, 19 Aug 2020 at 16:18, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > st 19. 8. 2020 v 5:48 odesílatel David Rowley <dgrowleyml@gmail.com>\n> napsal:\n> >> Current method:\n> >>\n> >> regression=# explain (analyze, costs off, timing off, summary off)\n> >> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n> >> t2.twenty) from tenk1 t1;\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------------\n> >> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> >> SubPlan 1\n> >> -> Result Cache (actual rows=1 loops=10000)\n> >> Cache Key: t1.twenty\n> >> Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> >> -> Aggregate (actual rows=1 loops=20)\n> >> -> Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n> >> Filter: (t1.twenty = twenty)\n> >> Rows Removed by Filter: 9500\n> >> (9 rows)\n> >>\n> >> Andres' suggestion:\n> >>\n> >> regression=# explain (analyze, costs off, timing off, summary off)\n> >> select twenty, (select count(*) from tenk1 t2 where t1.twenty =\n> >> t2.twenty) from tenk1 t1;\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------------\n> >> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> >> SubPlan 
1\n> >> Cache Key: t1.twenty  Hits: 9980  Misses: 20  Evictions: 0\n> Overflows: 0\n> >>     ->  Aggregate (actual rows=1 loops=20)\n> >>           ->  Seq Scan on tenk1 t2 (actual rows=500 loops=20)\n> >>                 Filter: (t1.twenty = twenty)\n> >>                 Rows Removed by Filter: 9500\n> >> (7 rows)\n>\n> > I didn't do performance tests, that should be necessary, but I think\n> Andres' variant is a little bit more readable.\n>\n> Thanks for chiming in on this. I was just wondering about the\n> readability part and what makes the one with the Result Cache node\n> less readable? I can think of a couple of reasons you might have this\n> view and just wanted to double-check what it is.\n>\n\nIt is more compact - less rows, less nesting levels\n\n\n> David\n>\n
", "msg_date": "Thu, 20 Aug 2020 05:54:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-08-19 18:58:11 -0400, Alvaro Herrera wrote:\n> On 2020-Aug-19, David Rowley wrote:\n> \n> > Andres' suggestion:\n> > \n> > regression=# explain (analyze, costs off, timing off, summary off)\n> > select count(*) from tenk1 t1 inner join tenk1 t2 on\n> > t1.twenty=t2.unique1;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------\n> > Aggregate (actual rows=1 loops=1)\n> > -> Nested Loop (actual rows=10000 loops=1)\n> > Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> > -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> > -> Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)\n> > Index Cond: (unique1 = t1.twenty)\n> > (6 rows)\n> \n> I think it doesn't look terrible in the SubPlan case -- it kinda makes\n> sense there -- but for nested loop it 
appears really strange.\n\nWhile I'm against introducing a separate node for the caching, I'm *not*\nagainst displaying a different node type when caching is\npresent. E.g. it'd be perfectly reasonable from my POV to have a 'Cached\nNested Loop' join and a plain 'Nested Loop' node in the above node. I'd\nprobably still want to display the 'Cache Key' similar to your example,\nbut I don't see how it'd be better to display it with one more\nintermediary node.\n\n\n> On the performance aspect, I wonder what the overhead is, particularly\n> considering Tom's point of making these nodes more expensive for cases\n> with no caching.\n\nI doubt it, due to being a well predictable branch. But it's also easy\nenough to just have a different Exec* function for the caching and\nnon-caching case, should that turn out to be a problem.\n\n\n> And also, as the JIT saga continues, aren't we going to get plan trees\n> recompiled too, at which point it won't matter much?\n\nThat's a fair bit out, I think. And even then it'll only help for\nqueries that run long enough (eventually also often enough, if we get\nprepared statement JITing) to be worth JITing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 24 Aug 2020 13:26:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n> While I'm against introducing a separate node for the caching, I'm *not*\n> against displaying a different node type when caching is\n> present. E.g. it'd be perfectly reasonable from my POV to have a 'Cached\n> Nested Loop' join and a plain 'Nested Loop' node in the above node. I'd\n> probably still want to display the 'Cache Key' similar to your example,\n> but I don't see how it'd be better to display it with one more\n> intermediary node.\n\n...Well, this is difficult... 
For the record, in case anyone missed\nit, I'm pretty set on being against doing any node overloading for\nthis. I think it's a pretty horrid modularity violation regardless of\nwhat text appears in EXPLAIN. I think if we merge these nodes then we\nmay as well go further and merge in other simple nodes like LIMIT.\nThen after a few iterations of that, we end up with a single node\nin EXPLAIN that nobody can figure out what it does. \"Here Be Dragons\",\nas Tom said. That might seem like a bit of an exaggeration, but it is\nimportant to understand that this would start us down that path, and\nthe more steps you take down that path, the harder it is to return\nfrom it.\n\nLet's look at nodeProjectSet.c, for example, which I recall you spent\nquite a bit of time painfully extracting the scattered logic to get it\ninto a single reusable node (69f4b9c85). I understand your motivation\nwas for JIT compilation and not to modularise the code, however, I\nthink the byproduct of that change of having all that code in one\nexecutor node was a good change, and I'm certainly a fan of what it\nallowed you to achieve with JIT. I really wouldn't like to put anyone\nelse in a position of having to extract out some complex logic that we\nadd to existing nodes in some future version of PostgreSQL. It might\nseem quite innocent today, but add a few more years of development and\nI'm sure things will get buried a little deeper.\n\nI'm sure you know better than most that the day will come where we go\nand start rewriting all of our executor node code to implement\nsomething like batch execution. 
I'd imagine you'd agree that this job\nwould be easier if nodes were single-purpose, rather than overloaded\nwith a bunch of needless complexity that only Heath Robinson himself\ncould be proud of.\n\nI find it bizarre that on one hand, for non-parameterized nested\nloops, we can have the inner scan become materialized with a\nMaterialize node (I don't recall complaints about that) However, on\nthe other hand, for parameterized nested loops, we build the caching\ninto the Nested Loop node itself.\n\nFor the other arguments: I'm also struggling a bit to understand the\narguments that it makes EXPLAIN easier to read due to reduced nesting\ndepth. If that's the case, why don't we get rid of Hash below a Hash\nJoin? It seems nobody has felt strongly enough about that to go to the\ntrouble of writing the patch. We could certainly do work to reduce\nnesting depth in EXPLAIN provided you're not big on what the plan\nactually does. One line should be ok if you're not worried about\nwhat's happening to your tuples. Unfortunately, that does not seem\nvery useful as it tends to be that people who do look at EXPLAIN do\nactually want to know what the planner has decided to do and are\ninterested in what's happening to their tuples. Hiding away details\nthat can significantly impact the performance of the plan does not\nseem like a great direction to be moving in.\n\nAlso, just in case anyone is misunderstanding this Andres' argument.\nIt's entirely based on the performance impact of having an additional\nnode. However, given the correct planner choice, there will never be\na gross slowdown due to having the extra node. The costing, the way it\ncurrently is designed will only choose to use a Result Cache if it\nthinks it'll be cheaper to do so and cheaper means having enough cache\nhits for the caching overhead to be worthwhile. 
If we get a good\ncache hit ratio then the additional node overhead does not exist\nduring execution since we don't look any further than the cache during\na cache hit. It would only be a cache miss that requires pulling the\ntuples through an additional node. Given perfect statistics (which of\ncourse is impossible) and costs, we'll never slow down the execution\nof a plan by having a separate Result Cache node. In reality, poor\nstatistics, e.g, massive n_distinct underestimations, could cause\nslowdowns, but loading this into one node is not going to save us from\nthat. All that your design will save us from is that 12 nanosecond\nper-tuple hop (measured on a 5-year-old laptop) to an additional node\nduring cache misses. It seems like a strange thing to optimise for,\ngiven that the planner only chooses to use a Result Cache when there's\na good number of expected cache hits.\n\nI understand that you've voiced your feelings about this, but what I\nwant to know is, how strongly do you feel about overloading the node?\nWill you stand in my way if I want to push ahead with the separate\nnode? Will anyone else?\n\nDavid\n\n\n", "msg_date": "Tue, 25 Aug 2020 20:48:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On 25/08/2020 20:48, David Rowley wrote:\n> On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n>> While I'm against introducing a separate node for the caching, I'm *not*\n>> against displaying a different node type when caching is\n>> present. E.g. it'd be perfectly reasonable from my POV to have a 'Cached\n>> Nested Loop' join and a plain 'Nested Loop' node in the above node. I'd\n>> probably still want to display the 'Cache Key' similar to your example,\n>> but I don't see how it'd be better to display it with one more\n>> intermediary node.\n> ...Well, this is difficult... 
For the record, in case anyone missed\n> it, I'm pretty set on being against doing any node overloading for\n> this. I think it's a pretty horrid modularity violation regardless of\n> what text appears in EXPLAIN. I think if we merge these nodes then we\n> may as well go further and merge in other simple nodes like LIMIT.\n> Then after a few iterations of that, we end up with with a single node\n> in EXPLAIN that nobody can figure out what it does. \"Here Be Dragons\",\n> as Tom said. That might seem like a bit of an exaggeration, but it is\n> important to understand that this would start us down that path, and\n> the more steps you take down that path, the harder it is to return\n> from it.\n[...]\n>\n> I understand that you've voiced your feelings about this, but what I\n> want to know is, how strongly do you feel about overloading the node?\n> Will you stand in my way if I want to push ahead with the separate\n> node? Will anyone else?\n>\n> David\n>\n>\n From my own experience, and thinking about issues like this, I my \nthinking keeping them separate adds robustness wrt change. Presumably \ncommon code can be extracted out, to avoid excessive code duplication?\n\n-- Gavin\n\n\n\n", "msg_date": "Tue, 25 Aug 2020 22:57:02 +1200", "msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n> > While I'm against introducing a separate node for the caching, I'm *not*\n> > against displaying a different node type when caching is\n> > present. E.g. it'd be perfectly reasonable from my POV to have a 'Cached\n> > Nested Loop' join and a plain 'Nested Loop' node in the above node. 
I'd\n> > probably still want to display the 'Cache Key' similar to your example,\n> > but I don't see how it'd be better to display it with one more\n> > intermediary node.\n> \n> ...Well, this is difficult... For the record, in case anyone missed\n> it, I'm pretty set on being against doing any node overloading for\n> this. I think it's a pretty horrid modularity violation regardless of\n> what text appears in EXPLAIN. I think if we merge these nodes then we\n> may as well go further and merge in other simple nodes like LIMIT.\n\nHuh? That doesn't make any sense. LIMIT is applicable to every single\nnode type with the exception of hash. The caching you talk about is\napplicable only to node types that parametrize their sub-nodes, of which\nthere are exactly two instances.\n\nLimit doesn't shuttle through huge amounts of tuples normally. What you\ntalk about does.\n\n\n\n> Also, just in case anyone is misunderstanding this Andres' argument.\n> It's entirely based on the performance impact of having an additional\n> node.\n\nNot entirely, no. It's also just that it doesn't make sense to have two\nnodes setting parameters that then half magically picked up by a special\nsubsidiary node type and used as a cache key. This is pseudo modularity,\nnot real modularity. And makes it harder to display useful information\nin explain etc. And makes it harder to e.g. clear the cache in cases we\nknow that there's no further use of the current cache. At least without\npiercing the abstraction veil.\n\n\n> However, given the correct planner choice, there will never be\n> a gross slowdown due to having the extra node.\n\nThere'll be a significant reduction in increase in performance.\n\n\n> I understand that you've voiced your feelings about this, but what I\n> want to know is, how strongly do you feel about overloading the node?\n> Will you stand in my way if I want to push ahead with the separate\n> node? Will anyone else?\n\nI feel pretty darn strongly about this. 
If there's plenty people on your\nside I'll not stand in your way, but I think this is a bad design based on\npretty flimsy reasons.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 25 Aug 2020 08:52:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> > On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n> > > While I'm against introducing a separate node for the caching, I'm\n> *not*\n> > > against displaying a different node type when caching is\n> > > present. E.g. it'd be perfectly reasonable from my POV to have a\n> 'Cached\n> > > Nested Loop' join and a plain 'Nested Loop' node in the above node. I'd\n> > > probably still want to display the 'Cache Key' similar to your example,\n> > > but I don't see how it'd be better to display it with one more\n> > > intermediary node.\n> >\n> > ...Well, this is difficult... For the record, in case anyone missed\n> > it, I'm pretty set on being against doing any node overloading for\n> > this. I think it's a pretty horrid modularity violation regardless of\n> > what text appears in EXPLAIN. I think if we merge these nodes then we\n> > may as well go further and merge in other simple nodes like LIMIT.\n>\n> Huh? That doesn't make any sense. LIMIT is applicable to every single\n> node type with the exception of hash. The caching you talk about is\n> applicable only to node types that parametrize their sub-nodes, of which\n> there are exactly two instances.\n>\n> Limit doesn't shuttle through huge amounts of tuples normally. 
What you\n> talk about does.\n>\n>\n>\n> > Also, just in case anyone is misunderstanding this Andres' argument.\n> > It's entirely based on the performance impact of having an additional\n> > node.\n>\n> Not entirely, no. It's also just that it doesn't make sense to have two\n> nodes setting parameters that then half magically picked up by a special\n> subsidiary node type and used as a cache key. This is pseudo modularity,\n> not real modularity. And makes it harder to display useful information\n> in explain etc. And makes it harder to e.g. clear the cache in cases we\n> know that there's no further use of the current cache. At least without\n> piercing the abstraction veil.\n>\n>\n> > However, given the correct planner choice, there will never be\n> > a gross slowdown due to having the extra node.\n>\n> There'll be a significant reduction in increase in performance.\n\n\nIf this is a key blocking factor for this topic, I'd like to do a simple\nhack\nto put the cache function into the subplan node, then do some tests to\nshow the real difference. But it is better to decide how much difference\ncan be thought of as a big difference. And for education purposes,\nI'd like to understand where these differences come from. For my\ncurrent knowledge, my basic idea is it saves some function calls?\n\n\n>\n\n> > I understand that you've voiced your feelings about this, but what I\n> > want to know is, how strongly do you feel about overloading the node?\n> > Will you stand in my way if I want to push ahead with the separate\n> > node? Will anyone else?\n>\n> I feel pretty darn strongly about this. 
If there's plenty people on your\n> side I'll not stand in your way, but I think this is a bad design based on\n> pretty flimsy reasons.\n>\n>\nNice to see the different opinions from two great guys and interesting to\nsee how this can be resolved at last:)\n\n-- \nBest Regards\nAndy Fan\n
", "msg_date": "Wed, 26 Aug 2020 01:18:34 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> > On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n> > > While I'm against introducing a separate node for the caching, I'm\n> *not*\n> > > against displaying a different node type when caching is\n> > > present. E.g. it'd be perfectly reasonable from my POV to have a\n> 'Cached\n> > > Nested Loop' join and a plain 'Nested Loop' node in the above node. I'd\n> > > probably still want to display the 'Cache Key' similar to your example,\n> > > but I don't see how it'd be better to display it with one more\n> > > intermediary node.\n> >\n> > ...Well, this is difficult... For the record, in case anyone missed\n> > it, I'm pretty set on being against doing any node overloading for\n> > this. I think it's a pretty horrid modularity violation regardless of\n> > what text appears in EXPLAIN. I think if we merge these nodes then we\n> > may as well go further and merge in other simple nodes like LIMIT.\n>\n> Huh? That doesn't make any sense. LIMIT is applicable to every single\n> node type with the exception of hash. The caching you talk about is\n> applicable only to node types that parametrize their sub-nodes, of which\n> there are exactly two instances.\n>\n> Limit doesn't shuttle through huge amounts of tuples normally. 
What you\n> talk about does.\n>\n>\n>\n> > Also, just in case anyone is misunderstanding this Andres' argument.\n> > It's entirely based on the performance impact of having an additional\n> > node.\n>\n> Not entirely, no. It's also just that it doesn't make sense to have two\n> nodes setting parameters that then half magically picked up by a special\n> subsidiary node type and used as a cache key. This is pseudo modularity,\n> not real modularity. And makes it harder to display useful information\n> in explain etc. And makes it harder to e.g. clear the cache in cases we\n> know that there's no further use of the current cache. At least without\n> piercing the abstraction veil.\n>\n>\nSorry that I missed this when I replied to the last thread. I understand\nthis, I remain neutral about this.\n\n\n> > However, given the correct planner choice, there will never be\n> > a gross slowdown due to having the extra node.\n>\n> There'll be a significant reduction in increase in performance.\n>\n>\n> > I understand that you've voiced your feelings about this, but what I\n> > want to know is, how strongly do you feel about overloading the node?\n> > Will you stand in my way if I want to push ahead with the separate\n> > node? Will anyone else?\n>\n> I feel pretty darn strongly about this. If there's plenty people on your\n> side I'll not stand in your way, but I think this is a bad design based on\n> pretty flimsy reasons.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> On Tue, 25 Aug 2020 at 08:26, Andres Freund <andres@anarazel.de> wrote:\n> > While I'm against introducing a separate node for the caching, I'm *not*\n> > against displaying a different node type when caching is\n> > present. E.g. 
it'd be perfectly reasonable from my POV to have a 'Cached\n> > Nested Loop' join and a plain 'Nested Loop' node in the above node. I'd\n> > probably still want to display the 'Cache Key' similar to your example,\n> > but I don't see how it'd be better to display it with one more\n> > intermediary node.\n> \n> ...Well, this is difficult... For the record, in case anyone missed\n> it, I'm pretty set on being against doing any node overloading for\n> this.  I think it's a pretty horrid modularity violation regardless of\n> what text appears in EXPLAIN. I think if we merge these nodes then we\n> may as well go further and merge in other simple nodes like LIMIT.\n\nHuh? That doesn't make any sense. LIMIT is applicable to every single\nnode type with the exception of hash. The caching you talk about is\napplicable only to node types that parametrize their sub-nodes, of which\nthere are exactly two instances.\n\nLimit doesn't shuttle through huge amounts of tuples normally. What you\ntalk about does.\n\n\n\n> Also, just in case anyone is misunderstanding this Andres' argument.\n> It's entirely based on the performance impact of having an additional\n> node.\n\nNot entirely, no. It's also just that it doesn't make sense to have two\nnodes setting parameters that then half magically picked up by a special\nsubsidiary node type and used as a cache key. This is pseudo modularity,\nnot real modularity. And makes it harder to display useful information\nin explain etc. And makes it harder to e.g. clear the cache in cases we\nknow that there's no further use of the current cache. At least without\npiercing the abstraction veil.\n Sorry that I missed this when I replied to the last thread.  I understandthis, I remain neutral about this.  
\n\n> However, given the correct planner choice, there will never be\n> a gross slowdown due to having the extra node.\n\nThere'll be a significant reduction in increase in performance.\n\n\n> I understand that you've voiced your feelings about this, but what I\n> want to know is, how strongly do you feel about overloading the node?\n> Will you stand in my way if I want to push ahead with the separate\n> node?  Will anyone else?\n\nI feel pretty darn strongly about this. If there's plenty people on your\nside I'll not stand in your way, but I think this is a bad design based on\npretty flimsy reasons.\n\nGreetings,\n\nAndres Freund\n\n\n-- Best RegardsAndy Fan", "msg_date": "Wed, 26 Aug 2020 01:37:08 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 26 Aug 2020 at 05:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> On Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> On 2020-08-25 20:48:37 +1200, David Rowley wrote:\n>> > Also, just in case anyone is misunderstanding this Andres' argument.\n>> > It's entirely based on the performance impact of having an additional\n>> > node.\n>>\n>> Not entirely, no. It's also just that it doesn't make sense to have two\n>> nodes setting parameters that then half magically picked up by a special\n>> subsidiary node type and used as a cache key. This is pseudo modularity,\n>> not real modularity. And makes it harder to display useful information\n>> in explain etc. And makes it harder to e.g. clear the cache in cases we\n>> know that there's no further use of the current cache. 
At least without\n>> piercing the abstraction veil.\n>>\n>>\n>> > However, given the correct planner choice, there will never be\n>> > a gross slowdown due to having the extra node.\n>>\n>> There'll be a significant reduction in increase in performance.\n>\n>\n> If this is a key blocking factor for this topic, I'd like to do a simple hack\n> to put the cache function into the subplan node, then do some tests to\n> show the real difference. But it is better to decide how much difference\n> can be thought of as a big difference. And for education purposes,\n> I'd like to understand where these differences come from. For my\n> current knowledge, my basic idea is it saves some function calls?\n\nIf testing this, the cache hit ratio will be pretty key to the\nresults. You'd notice the overhead much less with a larger cache hit\nratio since you're not pulling the tuple from as deeply a nested node.\n I'm unsure how you'd determine what is a good cache hit ratio to\ntest it with. The lower the cache expected cache hit ratio, the higher\nthe cost of the Result Cache node will be, so the planner has less\nchance of choosing to use it. Maybe some experiments will find a\ncase where the planner picks a Result Cache plan with a low hit ratio\ncan be tested.\n\nSay you find a case with the hit ratio of 90%. Going by [1] I found\npulling a tuple through an additional node to cost about 12\nnanoseconds on an intel 4712HQ CPU. With a hit ratio of 90% we'll\nonly pull 10% of tuples through the additional node, so that's about\n1.2 nanoseconds per tuple, or 1.2 milliseconds per million tuples. It\nmight become hard to measure above the noise. 
More costly inner scans\nwill have the planner choose to Result Cache with lower estimated hit\nratios, but in that case, pulling the tuple through the additional\nnode during a cache miss will be less noticeable due to the more\ncostly inner side of the join.\n\nLikely you could test the overhead only in theory without going to the\ntrouble of adapting the code to make SubPlan and Nested Loop do the\ncaching internally. If you just modify ExecResultCache() to have it\nsimply return its subnode, then measure the performance with and\nwithout enable_resultcache, you should get an idea of the per-tuple\noverhead of pulling the tuple through the additional node on your CPU.\nAfter you know that number, you could put the code back to what the\npatches have and then experiment with a number of cases to find a case\nthat chooses Result Cache and gets a low hit ratio.\n\n\nFor example, from the plan I used in the initial email on this thread:\n\n -> Index Only Scan using lookup_a_idx on lookup l\n(actual time=0.002..0.011 rows=100 loops=1000)\n Index Cond: (a = hk.thousand)\n Heap Fetches: 0\n Planning Time: 0.113 ms\n Execution Time: 1876.741 ms\n\nI don't have the exact per tuple overhead on the machine I ran that\non, but it's an AMD 3990x CPU, so I'll guess the overhead is about 8\nnanoseconds per tuple, given I found it to be 12 nanoseconds on a 2014\nCPU If that's right, then the overhead is something like 8 * 100\n(rows) * 1000 (loops) = 800000 nanoseconds = 0.8 milliseconds. 
If I\ncompare that to the execution time of the query, it's about 0.04%.\n\nI imagine we'll need to find something with a much worse hit ratio so\nwe can actually measure the overhead.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B%3DZRh-rxy9qxfPA5Gw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 26 Aug 2020 12:14:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, Aug 26, 2020 at 8:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 26 Aug 2020 at 05:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >\n> > On Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> >>\n> >> On 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> >> > Also, just in case anyone is misunderstanding this Andres' argument.\n> >> > It's entirely based on the performance impact of having an additional\n> >> > node.\n> >>\n> >> Not entirely, no. It's also just that it doesn't make sense to have two\n> >> nodes setting parameters that then half magically picked up by a special\n> >> subsidiary node type and used as a cache key. This is pseudo modularity,\n> >> not real modularity. And makes it harder to display useful information\n> >> in explain etc. And makes it harder to e.g. clear the cache in cases we\n> >> know that there's no further use of the current cache. At least without\n> >> piercing the abstraction veil.\n> >>\n> >>\n> >> > However, given the correct planner choice, there will never be\n> >> > a gross slowdown due to having the extra node.\n> >>\n> >> There'll be a significant reduction in increase in performance.\n> >\n> >\n> > If this is a key blocking factor for this topic, I'd like to do a simple\n> hack\n> > to put the cache function into the subplan node, then do some tests to\n> > show the real difference. 
But it is better to decide how much difference\n> > can be thought of as a big difference. And for education purposes,\n> > I'd like to understand where these differences come from. For my\n> > current knowledge, my basic idea is it saves some function calls?\n>\n> If testing this, the cache hit ratio will be pretty key to the\n> results. You'd notice the overhead much less with a larger cache hit\n> ratio since you're not pulling the tuple from as deeply a nested node.\n> I'm unsure how you'd determine what is a good cache hit ratio to\n> test it with.\n\n\nI wanted to test the worst case where the cache hit ratio is 0. and then\ncompare the difference between putting the cache as a dedicated\nnode and in a SubPlan node. However, we have a better way\nto test the difference based on your below message.\n\n\n>\n\nThe lower the cache expected cache hit ratio, the higher\n> the cost of the Result Cache node will be, so the planner has less\n> chance of choosing to use it.\n>\n\nIIRC, we add the ResultCache for subplan nodes unconditionally now.\nThe main reason is we lack of ndistinct estimation during the subquery\nplanning. Tom suggested converting the AlternativeSubPlan to SubPlan\nin setrefs.c [1], and I also ran into a case that can be resolved if we do\nsuch conversion even earlier[2], the basic idea is we can do such\nconversation\nonce we can get the actual values for the subplan.\n\nsomething like\nif (bms_is_subset(subplan->deps_relids, rel->relids)\n{\n convert_alternativesubplans_to_subplan(rel);\n}\nyou can see if that can be helpful for ResultCache in this user case. my\npatch in [2] is still in a very PoC stage so it only takes care of subplan\nin\nrel->reltarget.\n\n\n> Say you find a case with the hit ratio of 90%. Going by [1] I found\n> pulling a tuple through an additional node to cost about 12\n> nanoseconds on an intel 4712HQ CPU. 
With a hit ratio of 90% we'll\n> only pull 10% of tuples through the additional node, so that's about\n> 1.2 nanoseconds per tuple, or 1.2 milliseconds per million tuples. It\n> might become hard to measure above the noise. More costly inner scans\n> will have the planner choose to Result Cache with lower estimated hit\n> ratios, but in that case, pulling the tuple through the additional\n> node during a cache miss will be less noticeable due to the more\n> costly inner side of the join.\n>\n> Likely you could test the overhead only in theory without going to the\n> trouble of adapting the code to make SubPlan and Nested Loop do the\n> caching internally. If you just modify ExecResultCache() to have it\n> simply return its subnode, then measure the performance with and\n> without enable_resultcache, you should get an idea of the per-tuple\n> overhead of pulling the tuple through the additional node on your CPU.\n>\n\nThanks for the hints. I think we can test it even easier with Limit node.\n\ncreate table test_pull_tuples(a int);\ninsert into test_pull_tuples select i from generate_seri\ninsert into test_pull_tuples select i from generate_series(1, 100000)i;\n-- test with pgbench.\nselect * from test_pull_tuples; 18.850 ms\nselect * from test_pull_tuples limit 100000; 20.500 ms\n\nBasically it is 16 nanoseconds per tuple on my Intel(R) Xeon(R) CPU\nE5-2650.\nPersonally I'd say the performance difference is negligible unless I see\nsome\ndifferent numbers.\n\n[1]\nhttps://www.postgresql.org/message-id/1992952.1592785225%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWoMRzZKk1vPstKTjS7sYeN43j8WtsAZy2pv73vm_E_6dA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Aug 26, 2020 at 8:14 AM David Rowley <dgrowleyml@gmail.com> wrote:On Wed, 26 Aug 2020 at 05:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> On Tue, Aug 25, 2020 at 11:53 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> On 2020-08-25 20:48:37 +1200, David Rowley 
wrote:\n>> > Also, just in case anyone is misunderstanding this Andres' argument.\n>> > It's entirely based on the performance impact of having an additional\n>> > node.\n>>\n>> Not entirely, no. It's also just that it doesn't make sense to have two\n>> nodes setting parameters that then half magically picked up by a special\n>> subsidiary node type and used as a cache key. This is pseudo modularity,\n>> not real modularity. And makes it harder to display useful information\n>> in explain etc. And makes it harder to e.g. clear the cache in cases we\n>> know that there's no further use of the current cache. At least without\n>> piercing the abstraction veil.\n>>\n>>\n>> > However, given the correct planner choice, there will never be\n>> > a gross slowdown due to having the extra node.\n>>\n>> There'll be a significant reduction in increase in performance.\n>\n>\n> If this is a key blocking factor for this topic, I'd like to do a simple hack\n> to put the cache function into the subplan node, then do some tests to\n> show the real difference.  But it is better to decide how much difference\n> can be thought of as a big difference.  And  for education purposes,\n> I'd like to understand where these differences come from.  For my\n> current knowledge,  my basic idea is it saves some function calls?\n\nIf testing this, the cache hit ratio will be pretty key to the\nresults. You'd notice the overhead much less with a larger cache hit\nratio since you're not pulling the tuple from as deeply a nested node.I'm unsure how you'd determine what is a good cache hit ratio to\ntest it with.I wanted to test the worst case where the cache hit ratio is 0. and thencompare the difference between putting the cache as a dedicatednode and in a SubPlan node.  However, we have a better wayto test the difference based on your below message.   
The lower the cache expected cache hit ratio, the higher\nthe cost of the Result Cache node will be, so the planner has less\nchance of choosing to use it.  IIRC, we add the ResultCache for subplan nodes unconditionally now. The main reason is we lack of ndistinct estimation during the subqueryplanning.  Tom suggested converting the AlternativeSubPlan to SubPlan in setrefs.c [1], and I also ran into a case that can be resolved if we dosuch conversion even earlier[2], the basic idea is we can do such conversationonce we can get the actual values for the subplan.  something like if (bms_is_subset(subplan->deps_relids,  rel->relids){   convert_alternativesubplans_to_subplan(rel); }you can see if that can be helpful for ResultCache in this user case.   mypatch in [2] is still in a very PoC stage so it only takes care of subplan inrel->reltarget.\n\nSay you find a case with the hit ratio of 90%.  Going by [1] I found\npulling a tuple through an additional node to cost about 12\nnanoseconds on an intel 4712HQ CPU.  With a hit ratio of 90% we'll\nonly pull 10% of tuples through the additional node, so that's about\n1.2 nanoseconds per tuple, or 1.2 milliseconds per million tuples. It\nmight become hard to measure above the noise. More costly inner scans\nwill have the planner choose to Result Cache with lower estimated hit\nratios, but in that case, pulling the tuple through the additional\nnode during a cache miss will be less noticeable due to the more\ncostly inner side of the join.\n\nLikely you could test the overhead only in theory without going to the\ntrouble of adapting the code to make SubPlan and Nested Loop do the\ncaching internally.  If you just modify ExecResultCache() to have it\nsimply return its subnode, then measure the performance with and\nwithout enable_resultcache, you should get an idea of the per-tuple\noverhead of pulling the tuple through the additional node on your CPU.Thanks for the hints.  I think we can test it even easier with Limit node. 
create table test_pull_tuples(a int);insert into test_pull_tuples select i from generate_seriinsert into test_pull_tuples select i from generate_series(1, 100000)i;-- test with pgbench.select * from test_pull_tuples;                           18.850 msselect * from test_pull_tuples limit 100000;       20.500 msBasically it is 16 nanoseconds per tuple on my Intel(R) Xeon(R) CPU E5-2650. Personally I'd say the performance difference is negligible unless I see somedifferent numbers.[1]  https://www.postgresql.org/message-id/1992952.1592785225%40sss.pgh.pa.us[2] https://www.postgresql.org/message-id/CAKU4AWoMRzZKk1vPstKTjS7sYeN43j8WtsAZy2pv73vm_E_6dA%40mail.gmail.com -- Best RegardsAndy Fan", "msg_date": "Wed, 26 Aug 2020 16:03:36 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 26 Aug 2020 at 03:52, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-08-25 20:48:37 +1200, David Rowley wrote:\n> > However, given the correct planner choice, there will never be\n> > a gross slowdown due to having the extra node.\n>\n> There'll be a significant reduction in increase in performance.\n\nSo I did a very rough-cut change to the patch to have the caching be\npart of Nested Loop. It can be applied on top of the other 3 v7\npatches.\n\nFor the performance, the test I did results in the performance\nactually being reduced from having the Result Cache as a separate\nnode. The reason for this is mostly because Nested Loop projects.\nEach time I fetch a MinimalTuple from the cache, the patch will deform\nit in order to store it in the virtual inner tuple slot for the nested\nloop. 
Having the Result Cache as a separate node can skip this step as\nit's result tuple slot is a TTSOpsMinimalTuple, so we can just store\nthe cached MinimalTuple right into the slot without any\ndeforming/copying.\n\nHere's an example of a query that's now slower:\n\nselect count(*) from hundredk hk inner join lookup100 l on hk.one = l.a;\n\nIn this case, the original patch does not have to deform the\nMinimalTuple from the cache as the count(*) does not require any Vars\nfrom it. With the rough patch that's attached, the MinimalTuple is\ndeformed in during the transformation during ExecCopySlot(). The\nslowdown exists no matter which column of the hundredk table I join to\n(schema in [1]).\n\nPerformance comparison is as follows:\n\nv7 (Result Cache as a separate node)\npostgres=# explain (analyze, timing off) select count(*) from hundredk\nhk inner join lookup l on hk.one = l.a;\n Execution Time: 652.582 ms\n\nv7 + attached rough patch\npostgres=# explain (analyze, timing off) select count(*) from hundredk\nhk inner join lookup l on hk.one = l.a;\n Execution Time: 843.566 ms\n\nI've not yet thought of any way to get rid of the needless\nMinimalTuple deform. 
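The shape of that trade-off, sketched in a language-neutral way (struct.pack() standing in for forming a MinimalTuple and struct.unpack() for the deform step; none of this is PostgreSQL code):

```python
import struct

ROW_FMT = "ii"   # stand-in for a two-column MinimalTuple layout
cache = {}       # cache key -> compact ("minimal") row bytes

def cache_store(key, a, b):
    # Forming the compact row happens once, at cache-miss time.
    cache[key] = struct.pack(ROW_FMT, a, b)

def hit_as_minimal(key):
    # Separate Result Cache node: its result slot already takes the
    # compact form, so a hit can hand the bytes back as-is.
    return cache[key]

def hit_into_virtual_slot(key):
    # Caching inside Nested Loop: every hit pays an unpack ("deform")
    # step to get column values into the virtual inner slot.
    return struct.unpack(ROW_FMT, cache[key])
```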
I suppose the cache could just have already\ndeformed tuples, but that requires more memory and would result in a\nworse cache hit ratio for workloads where the cache gets filled.\n\nI'm open to ideas to make the comparison fairer.\n\n(Renamed the patch file to .txt to stop the CFbot getting upset with me)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com", "msg_date": "Sat, 29 Aug 2020 02:54:58 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Sat, 29 Aug 2020 at 02:54, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm open to ideas to make the comparison fairer.\n\nWhile on that, it's not just queries that don't require the cached\ntuple to be deformed that are slower. Here's a couple of examples that\ndo require both patches to deform the cached tuple:\n\nSome other results that do result in both patches deforming tuples\nstill show that v7 is faster:\n\nQuery1:\n\nv7 + attached patch\npostgres=# explain (analyze, timing off) select count(l.a) from\nhundredk hk inner join lookup100 l on hk.one = l.a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=378570.41..378570.42 rows=1 width=8) (actual rows=1 loops=1)\n -> Nested Loop Cached (cost=0.43..353601.00 rows=9987763 width=4)\n(actual rows=10000000 loops=1)\n Cache Key: $0\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=4) (actual rows=100000 loops=1)\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.050 ms\n Execution Time: 928.698 ms\n(10 rows)\n\nv7 only:\npostgres=# explain (analyze, timing 
off) select count(l.a) from\nhundredk hk inner join lookup100 l on hk.one = l.a;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=152861.19..152861.20 rows=1 width=8) (actual rows=1 loops=1)\n -> Nested Loop (cost=0.45..127891.79 rows=9987763 width=4)\n(actual rows=10000000 loops=1)\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=4) (actual rows=100000 loops=1)\n -> Result Cache (cost=0.45..2.53 rows=100 width=4) (actual\nrows=100 loops=100000)\n Cache Key: hk.one\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Index Only Scan using lookup100_a_idx on lookup100\nl (cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.604 ms\n Execution Time: 897.958 ms\n(11 rows)\n\n\nQuery2:\n\nv7 + attached patch\npostgres=# explain (analyze, timing off) select * from hundredk hk\ninner join lookup100 l on hk.one = l.a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Nested Loop Cached (cost=0.43..353601.00 rows=9987763 width=28)\n(actual rows=10000000 loops=1)\n Cache Key: $0\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=24) (actual rows=100000 loops=1)\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.621 ms\n Execution Time: 883.610 ms\n(9 rows)\n\nv7 only:\npostgres=# explain (analyze, timing off) select * from hundredk hk\ninner join lookup100 l on hk.one = l.a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.45..127891.79 rows=9987763 width=28) 
(actual\nrows=10000000 loops=1)\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=24) (actual rows=100000 loops=1)\n -> Result Cache (cost=0.45..2.53 rows=100 width=4) (actual\nrows=100 loops=100000)\n Cache Key: hk.one\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.088 ms\n Execution Time: 870.601 ms\n(10 rows)\n\nDavid\n\n\n", "msg_date": "Sat, 29 Aug 2020 02:58:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, Aug 19, 2020 at 6:58 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Aug-19, David Rowley wrote:\n> > Andres' suggestion:\n> > regression=# explain (analyze, costs off, timing off, summary off)\n> > select count(*) from tenk1 t1 inner join tenk1 t2 on\n> > t1.twenty=t2.unique1;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------\n> > Aggregate (actual rows=1 loops=1)\n> > -> Nested Loop (actual rows=10000 loops=1)\n> > Cache Key: t1.twenty Hits: 9980 Misses: 20 Evictions: 0 Overflows: 0\n> > -> Seq Scan on tenk1 t1 (actual rows=10000 loops=1)\n> > -> Index Scan using tenk1_unique1 on tenk1 t2 (actual rows=1 loops=20)\n> > Index Cond: (unique1 = t1.twenty)\n> > (6 rows)\n>\n> I think it doesn't look terrible in the SubPlan case -- it kinda makes\n> sense there -- but for nested loop it appears really strange.\n\nI disagree. I don't know why anyone should find this confusing, except\nthat we're not used to seeing it. It seems to make a lot of sense that\nif you are executing the same plan tree with different parameters, you\nmight want to cache results to avoid recomputation. 
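At its core that's just memoization of a parameterized plan subtree; a sketch, with invented names rather than anything from the patch:

```python
class MemoizedParamPlan:
    """Illustrative only: wraps a parameterized "inner plan" and
    caches its result rows keyed on the parameter values."""

    def __init__(self, inner_plan):
        self.inner_plan = inner_plan   # callable: params -> rows
        self.cache = {}                # cache key (params) -> rows
        self.hits = 0
        self.misses = 0

    def rescan(self, *params):
        # Re-executing with previously seen parameters skips the
        # whole subtree; only new parameter values compute anything.
        if params in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[params] = list(self.inner_plan(*params))
        return list(self.cache[params])
```

Rescanning such a plan 10000 times over only 20 distinct parameter values gives 20 misses and 9980 hits, which is the same Hits/Misses accounting the EXPLAIN output earlier in this thread shows.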
So why wouldn't\nnodes that do this include a cache?\n\nThis is not necessarily a vote for Andres's proposal. I don't know\nwhether it's technically better to include the caching in the Nested\nLoop node or to make it a separate node, and I think we should do the\none that's better. Getting pushed into an inferior design because we\nthink the EXPLAIN output will be clearer does not make sense to me.\n\nI think David's points elsewhere on the thread about ProjectSet and\nMaterialize nodes are interesting. It was never very clear to me why\nProjectSet was handled separately in every node, adding quite a bit of\ncomplexity, and why Materialize was a separate node. Likewise, why are\nHash Join and Hash two separate nodes instead of just one? Why do we\nnot treat projection as a separate node type even when we're not\nprojecting a set? In general, I've never really understood why we\nchoose to include some functionality in other nodes and keep other\nthings separate. Is there even an organizing principle, or is it just\nhistorical baggage?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 Aug 2020 11:33:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Sat, Aug 29, 2020 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think David's points elsewhere on the thread about ProjectSet and\n> Materialize nodes are interesting.\n\nIndeed, I'm now finding it very difficult to look past the similarity with:\n\npostgres=# explain select count(*) from t t1 cross join t t2;\n QUERY PLAN\n----------------------------------------------------------------------------\n Aggregate (cost=1975482.56..1975482.57 rows=1 width=8)\n -> Nested Loop (cost=0.00..1646293.50 rows=131675625 width=0)\n -> Seq Scan on t t1 (cost=0.00..159.75 rows=11475 width=0)\n -> 
Materialize (cost=0.00..217.12 rows=11475 width=0)\n -> Seq Scan on t t2 (cost=0.00..159.75 rows=11475 width=0)\n(5 rows)\n\nI wonder what it would take to overcome the overheads of the separate\nResult Cache node, with techniques to step out of the way or something\nlike that.\n\n> [tricky philosophical questions about ancient and maybe in some cases arbitrary choices]\n\nAck.\n\n\n", "msg_date": "Mon, 31 Aug 2020 17:56:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Thanks for chipping in here.\n\nOn Mon, 31 Aug 2020 at 17:57, Thomas Munro <thomas.munro@gmail.com> wrote:\n> I wonder what it would take to overcome the overheads of the separate\n> Result Cache node, with techniques to step out of the way or something\n> like that.\n\nSo far it looks like there are more overheads to having the caching\ndone inside nodeNestloop.c. See [1]. Perhaps there's something that\ncan be done to optimise away the needless MinimalTuple deform that I\nmentioned there, but for now, performance-wise, we're better off\nhaving a separate node.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvo2acQSogMCa3hB7moRntXWHO8G+WSwhyty2+c8vYRq3A@mail.gmail.com\n\n\n", "msg_date": "Tue, 1 Sep 2020 09:59:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Sat, 29 Aug 2020 at 02:54, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 26 Aug 2020 at 03:52, Andres Freund <andres@anarazel.de> wrote:\n> > There'll be a significant reduction in increase in performance.\n>\n> So I did a very rough-cut change to the patch to have the caching be\n> part of Nested Loop. 
It can be applied on top of the other 3 v7\n> patches.\n>\n> For the performance, the test I did results in the performance\n> actually being reduced from having the Result Cache as a separate\n> node. The reason for this is mostly because Nested Loop projects.\n\nI spoke to Andres off-list this morning in regards to what can be done\nto remove this performance regression over the separate Result Cache\nnode version of the patch. He mentioned that I could create another\nProjectionInfo for when reading from the cache's slot and use that to\nproject with.\n\nI've hacked this up in the attached. It looks like another version of\nthe joinqual would also need to be created to that the MinimalTuple\nfrom the cache is properly deformed. I've not done this yet.\n\nThe performance does improve this time. Using the same two test\nqueries from [1], I get:\n\nv7 (Separate Result Cache node)\n\nQuery 1:\npostgres=# explain (analyze, timing off) select count(l.a) from\nhundredk hk inner join lookup100 l on hk.one = l.a;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=152861.19..152861.20 rows=1 width=8) (actual rows=1 loops=1)\n -> Nested Loop (cost=0.45..127891.79 rows=9987763 width=4)\n(actual rows=10000000 loops=1)\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=4) (actual rows=100000 loops=1)\n -> Result Cache (cost=0.45..2.53 rows=100 width=4) (actual\nrows=100 loops=100000)\n Cache Key: hk.one\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Index Only Scan using lookup100_a_idx on lookup100\nl (cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.045 ms\n Execution Time: 894.003 ms\n(11 rows)\n\nQuery 2:\npostgres=# explain (analyze, timing off) select * from hundredk hk\ninner join lookup100 l on hk.one = l.a;\n QUERY 
PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.45..127891.79 rows=9987763 width=28) (actual\nrows=10000000 loops=1)\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=24) (actual rows=100000 loops=1)\n -> Result Cache (cost=0.45..2.53 rows=100 width=4) (actual\nrows=100 loops=100000)\n Cache Key: hk.one\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.077 ms\n Execution Time: 854.950 ms\n(10 rows)\n\nv7 + hacks_V3 (caching done in Nested Loop)\n\nQuery 1:\nexplain (analyze, timing off) select count(l.a) from hundredk hk inner\njoin lookup100 l on hk.one = l.a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=378570.41..378570.42 rows=1 width=8) (actual rows=1 loops=1)\n -> Nested Loop Cached (cost=0.43..353601.00 rows=9987763 width=4)\n(actual rows=10000000 loops=1)\n Cache Key: $0\n Hits: 99999 Misses: 1 Evictions: 0 Overflows: 0\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=4) (actual rows=100000 loops=1)\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.103 ms\n Execution Time: 770.470 ms\n(10 rows)\n\nQuery 2\nexplain (analyze, timing off) select * from hundredk hk inner join\nlookup100 l on hk.one = l.a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Nested Loop Cached (cost=0.43..353601.00 rows=9987763 width=28)\n(actual rows=10000000 loops=1)\n Cache Key: $0\n Hits: 99999 Misses: 1 Evictions: 0 
Overflows: 0\n -> Seq Scan on hundredk hk (cost=0.00..1637.00 rows=100000\nwidth=24) (actual rows=100000 loops=1)\n -> Index Only Scan using lookup100_a_idx on lookup100 l\n(cost=0.43..2.52 rows=100 width=4) (actual rows=100 loops=1)\n Index Cond: (a = hk.one)\n Heap Fetches: 0\n Planning Time: 0.090 ms\n Execution Time: 779.181 ms\n(9 rows)\n\nAlso, I'd just like to reiterate that the attached is a very rough cut\nimplementation that I've put together just to use for performance\ncomparison in order to help move this conversation along. (I do know\nthat I'm breaking the const qualifier on PlanState's innerops.)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqt5U6VcKSm2G9Q1n4rsHejL-VX7QG9KToAQ0HyZymSzQ@mail.gmail.com", "msg_date": "Wed, 2 Sep 2020 16:02:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On 2020-Sep-02, David Rowley wrote:\n\n> v7 (Separate Result Cache node)\n> Query 1:\n> Execution Time: 894.003 ms\n> \n> Query 2:\n> Execution Time: 854.950 ms\n\n> v7 + hacks_V3 (caching done in Nested Loop)\n> Query 1:\n> Execution Time: 770.470 ms\n>\n> Query 2\n> Execution Time: 779.181 ms\n\nWow, this is a *significant* change.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 2 Sep 2020 09:49:54 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 3 Sep 2020 at 01:49, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Sep-02, David Rowley wrote:\n>\n> > v7 (Separate Result Cache node)\n> > Query 1:\n> > Execution Time: 894.003 ms\n> >\n> > Query 2:\n> > Execution Time: 854.950 ms\n>\n> > v7 + hacks_V3 (caching done in Nested Loop)\n> > Query 1:\n> > 
Execution Time: 770.470 ms\n> >\n> > Query 2\n> > Execution Time: 779.181 ms\n>\n> Wow, this is a *significant* change.\n\nYeah, it's more than I thought it was going to be. It seems I\nmisthought in [1] where I mentioned:\n\n> With a hit ratio of 90% we'll\n> only pull 10% of tuples through the additional node, so that's about\n> 1.2 nanoseconds per tuple, or 1.2 milliseconds per million tuples. It\n> might become hard to measure above the noise. More costly inner scans\n> will have the planner choose to Result Cache with lower estimated hit\n> ratios, but in that case, pulling the tuple through the additional\n> node during a cache miss will be less noticeable due to the more\n> costly inner side of the join.\n\nThis wasn't technically wrong. I just failed to consider that a cache\nhit when the cache is built into Nested Loop requires looking at no\nother node. The tuples are right there in the cache, 90% of the time,\nin this example. No need to execute any nodes to get at them.\n\nI have come around a bit to Andres' idea. But we'd need to display the\nnested loop node as something like \"Cacheable Nested Loop\" in EXPLAIN\nso that we could easily identify what's going on. Not sure if the word\n\"Hash\" would be better to inject in the name somewhere rather than\n\"Cacheable\".\n\nI've not done any further work to shift the patch any further in that\ndirection yet. I know it's going to be quite a bit of work and it\nsounds like there are still objections in both directions. I'd rather\neveryone agreed on something before I go to the trouble of trying to\nmake something committable with Andres' way.\n\nTom, I'm wondering if you'd still be against this if Nested Loop\nshowed a different name in EXPLAIN when it was using caching? Or are\nyou also concerned about adding unrelated code into nodeNestloop.c?\nIf so, I'm wondering if adding a completely new node like\nnodeNestcacheloop.c. 
But that's going to add lots of boilerplate code\nthat we'd get away with not having otherwise.\n\nIn the meantime, I did change a couple of things with the current\nseparate node version. It's just around how the path stuff works in\nthe planner. I'd previously modified try_nestloop_path() to try a\nResult Cache, but I noticed more recently that's not how it's done for\nMaterialize. So in the attached, I've just aligned it to how\nnon-parameterized Nested Loops with a Materialized inner side work.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrX9o35_WUoL5c5arJ0XbJFN-cDHckjL57-PR-Keeypdw@mail.gmail.com", "msg_date": "Tue, 15 Sep 2020 12:58:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 15 Sep 2020 at 12:58, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I've not done any further work to shift the patch any further in that\n> direction yet. I know it's going to be quite a bit of work and it\n> sounds like there are still objections in both directions. I'd rather\n> everyone agreed on something before I go to the trouble of trying to\n> make something committable with Andres' way.\n\nI spent some time converting the existing v8 to move the caching into\nthe Nested Loop node instead of having an additional Result Cache node\nbetween the Nested Loop and the inner index scan. To minimise the size\nof this patch I've dropped support for caching Subplans, for now.\n\nI'd say the quality of this patch is still first draft. I just spent\ntoday getting some final things working again and spent a few hours\ntrying to break it then another few hours running benchmarks on it and\ncomparing it to the v8 patch, (v8 uses a separate Result Cache node).\n\nI'd say most of the patch is pretty good, but the changes I've made in\nnodeNestloop.c will need to be changed a bit. 
All the caching logic\nis in a new file named execMRUTupleCache.c. nodeNestloop.c is just a\nconsumer of this. It can detect if the MRUTupleCache was a hit or a\nmiss depending on which slot the tuple is returned in. So far I'm just\nusing that to switch around the projection info and join quals for the\nones I initialised to work with the MinimalTupleSlot from the cache.\nI'm not yet sure exactly how this should be improved, I just know\nwhat's there is not so great.\n\nSo far benchmarking shows there's still a regression from the v8\nversion of the patch. This is using count(*). An earlier test [1] did\nshow speedups when we needed to deform tuples returned by the nested\nloop node. I've not yet repeated that test again. I was disappointed\nto see v9 slower than v8 after having spent about 3 days rewriting the\npatch\n\nThe setup for the test I did was:\n\ncreate table hundredk (hundredk int, tenk int, thousand int, hundred\nint, ten int, one int);\ninsert into hundredk select x%100000,x%10000,x%1000,x%100,x%10,1 from\ngenerate_Series(1,100000) x;\ncreate table lookup (a int);\ninsert into lookup select x from generate_Series(1,100000)x,\ngenerate_Series(1,100);\ncreate index on lookup(a);\nvacuum analyze lookup, hundredk;\n\nI then ran a query like;\nselect count(*) from hundredk hk inner join lookup l on hk.thousand = l.a;\n\nin pgbench for 60 seconds and then again after swapping the join\ncolumn to hk.hundred, hk.ten and hk.one so that fewer index lookups\nwere performed and more cache hits were seen.\n\nI did have enable_mergejoin = off when testing v8 and v9 on this test.\nThe planner seemed to favour merge join over nested loop without that.\n\nResults in hundred_rows_per_rescan.png.\n\nI then reduced the lookup table so it only has 1 row to lookup instead\nof 100 for each value.\n\ntruncate lookup;\ninsert into lookup select x from generate_Series(1,100000)x;\nvacuum analyze lookup;\n\nand ran the tests again. 
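The cache behaviour described in this message — a most-recently-used tuple cache keyed on the parameter values, with the Hits/Misses/Evictions counters that show up in the EXPLAIN ANALYZE output quoted earlier in the thread — can be sketched outside the executor roughly as follows. This is only an illustrative Python sketch with invented names, not the patch's actual C implementation in execMRUTupleCache.c:

```python
from collections import OrderedDict

class MRUTupleCacheSketch:
    """Illustrative MRU cache keyed on parameter values (not the patch's C code)."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()           # param key -> cached inner tuples
        self.hits = self.misses = self.evictions = 0

    def fetch(self, key, rescan):
        """Return the inner rows for 'key', rescanning only on a cache miss."""
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)      # keep most-recently-used order
            return self.entries[key]
        self.misses += 1
        rows = list(rescan(key))               # cache miss: run the inner scan
        self.entries[key] = rows
        while len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)   # evict the least recently used
            self.evictions += 1
        return rows
```

Under this reading, a plan line such as "Hits: 99999 Misses: 1" corresponds to one rescan of the parameterized inner scan followed by 99999 lookups served straight from the cache. The real code evicts against a memory budget rather than an entry count (hence the separate Overflows counter), which is simplified away here.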
Results in one_row_per_rescan.png.\n\nI also wanted to note that these small scale tests are not the best\ncase for this patch. I've seen much more significant gains when an\nunpatched Hash join's hash table filled the L3 cache and started\nhaving to wait for RAM. Since my MRU cache was much smaller than the\nHash join's hash table, it performed about 3x faster. What I'm trying\nto focus on here is the regression from v8 to v9. It seems to cast a\nbit more doubt as to whether v9 is any better than v8.\n\nI really would like to start moving this work towards a commit in the\nnext month or two. So any comments about v8 vs v9 would be welcome as\nI'm still uncertain which patch is best to pursue.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpDdQDFSM+u19ROinT0qw41OX=MW4-B2mO003v6-X0AjA@mail.gmail.com", "msg_date": "Tue, 20 Oct 2020 22:30:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 20 Oct 2020 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> So far benchmarking shows there's still a regression from the v8\n> version of the patch. This is using count(*). An earlier test [1] did\n> show speedups when we needed to deform tuples returned by the nested\n> loop node. I've not yet repeated that test again. I was disappointed\n> to see v9 slower than v8 after having spent about 3 days rewriting the\n> patch\n\nI did some further tests this time with some tuple deforming. Again,\nit does seem that v9 is slower than v8.\n\nGraphs attached\n\nLooking at profiles, I don't really see any obvious reason as to why\nthis is. 
I'm very much inclined to just pursue the v8 patch (separate\nResult Cache node) and just drop the v9 idea altogether.\n\nDavid", "msg_date": "Mon, 2 Nov 2020 20:43:54 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 2 Nov 2020 at 20:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 20 Oct 2020 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> I did some further tests this time with some tuple deforming. Again,\n> it does seem that v9 is slower than v8.\n>\n> Graphs attached\n>\n> Looking at profiles, I don't really see any obvious reason as to why\n> this is. I'm very much inclined to just pursue the v8 patch (separate\n> Result Cache node) and just drop the v9 idea altogether.\n\nNobody raised any objections, so I'll start taking a more serious look\nat the v8 version (the patch with the separate Result Cache node).\n\nOne thing that I had planned to come back to about now is the name\n\"Result Cache\". I admit to not thinking for too long on the best name\nand always thought it was something to come back to later when there's\nsome actual code to debate a better name for. \"Result Cache\" was\nalways a bit of a placeholder name.\n\nSome other names that I'd thought of were:\n\n\"MRU Hash\"\n\"MRU Cache\"\n\"Parameterized Tuple Cache\" (bit long)\n\"Parameterized Cache\"\n\"Parameterized MRU Cache\"\n\nI know Robert had shown some interest in using a different name. 
It\nwould be nice to settle on something most people are happy with soon.\n\nDavid\n\n\n", "msg_date": "Fri, 6 Nov 2020 11:12:33 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, Nov 6, 2020 at 6:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 2 Nov 2020 at 20:43, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Tue, 20 Oct 2020 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I did some further tests this time with some tuple deforming. Again,\n> > it does seem that v9 is slower than v8.\n> >\n> > Graphs attached\n> >\n> > Looking at profiles, I don't really see any obvious reason as to why\n> > this is. I'm very much inclined to just pursue the v8 patch (separate\n> > Result Cache node) and just drop the v9 idea altogether.\n>\n> Nobody raised any objections, so I'll start taking a more serious look\n> at the v8 version (the patch with the separate Result Cache node).\n>\n> One thing that I had planned to come back to about now is the name\n> \"Result Cache\". I admit to not thinking for too long on the best name\n> and always thought it was something to come back to later when there's\n> some actual code to debate a better name for. 
\"Result Cache\" was\n> always a bit of a placeholder name.\n>\n> Some other names that I'd thought of were:\n>\n> \"MRU Hash\"\n> \"MRU Cache\"\n> \"Parameterized Tuple Cache\" (bit long)\n> \"Parameterized Cache\"\n> \"Parameterized MRU Cache\"\n>\n>\nI think \"Tuple Cache\" would be OK which means it is a cache for tuples.\nTelling MRU/LRU would be too internal for an end user and \"Parameterized\"\nlooks redundant given that we have said \"Cache Key\" just below the node\nname.\n\nJust my $0.01.\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, Nov 6, 2020 at 6:13 AM David Rowley <dgrowleyml@gmail.com> wrote:On Mon, 2 Nov 2020 at 20:43, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 20 Oct 2020 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> I did some further tests this time with some tuple deforming.  Again,\n> it does seem that v9 is slower than v8.\n>\n> Graphs attached\n>\n> Looking at profiles, I don't really see any obvious reason as to why\n> this is.  I'm very much inclined to just pursue the v8 patch (separate\n> Result Cache node) and just drop the v9 idea altogether.\n\nNobody raised any objections, so I'll start taking a more serious look\nat the v8 version (the patch with the separate Result Cache node).\n\nOne thing that I had planned to come back to about now is the name\n\"Result Cache\".  I admit to not thinking for too long on the best name\nand always thought it was something to come back to later when there's\nsome actual code to debate a better name for. \"Result Cache\" was\nalways a bit of a placeholder name.\n\nSome other names that I'd thought of were:\n\n\"MRU Hash\"\n\"MRU Cache\"\n\"Parameterized Tuple Cache\" (bit long)\n\"Parameterized Cache\"\n\"Parameterized MRU Cache\"\nI think \"Tuple Cache\" would be OK which means it is a cache for tuples. Telling MRU/LRU would be too internal for an end user and \"Parameterized\"looks redundant given that we have said \"Cache Key\" just below the node name.Just my $0.01.  
-- Best RegardsAndy Fan", "msg_date": "Sun, 8 Nov 2020 21:26:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, Nov 2, 2020 at 3:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 20 Oct 2020 at 22:30, David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > So far benchmarking shows there's still a regression from the v8\n> > version of the patch. This is using count(*). An earlier test [1] did\n> > show speedups when we needed to deform tuples returned by the nested\n> > loop node. I've not yet repeated that test again. I was disappointed\n> > to see v9 slower than v8 after having spent about 3 days rewriting the\n> > patch\n>\n> I did some further tests this time with some tuple deforming. Again,\n> it does seem that v9 is slower than v8.\n>\n\nI run your test case on v8 and v9, I can produce a stable difference\nbetween them.\n\nv8:\nstatement latencies in milliseconds:\n 1603.611 select count(*) from hundredk hk inner join lookup l on\nhk.thousand = l.a;\n\nv9:\nstatement latencies in milliseconds:\n 1772.287 select count(*) from hundredk hk inner join lookup l on\nhk.thousand = l.a;\n\nthen I did a perf on the 2 version, Is it possible that you\ncalled tts_minimal_clear twice in\nthe v9 version? 
Both ExecClearTuple and ExecStoreMinimalTuple\ncalled tts_minimal_clear\non the same slot.\n\nWith the following changes:\n\ndiff --git a/src/backend/executor/execMRUTupleCache.c\nb/src/backend/executor/execMRUTupleCache.c\nindex 3553dc26cb..b82d8e98b8 100644\n--- a/src/backend/executor/execMRUTupleCache.c\n+++ b/src/backend/executor/execMRUTupleCache.c\n@@ -203,10 +203,9 @@ prepare_probe_slot(MRUTupleCache *mrucache,\nMRUCacheKey *key)\n TupleTableSlot *tslot = mrucache->tableslot;\n int numKeys = mrucache->nkeys;\n\n- ExecClearTuple(pslot);\n-\n if (key == NULL)\n {\n+ ExecClearTuple(pslot);\n /* Set the probeslot's values based on the current\nparameter values */\n for (int i = 0; i < numKeys; i++)\n pslot->tts_values[i] =\nExecEvalExpr(mrucache->param_exprs[i],\n@@ -641,7 +640,7 @@ ExecMRUTupleCacheFetch(MRUTupleCache *mrucache)\n {\n mrucache->state =\nMRUCACHE_FETCH_NEXT_TUPLE;\n\n-\nExecClearTuple(mrucache->cachefoundslot);\n+ //\nExecClearTuple(mrucache->cachefoundslot);\n slot =\nmrucache->cachefoundslot;\n\nExecStoreMinimalTuple(mrucache->last_tuple->mintuple, slot, false);\n return slot;\n@@ -740,7 +739,7 @@ ExecMRUTupleCacheFetch(MRUTupleCache *mrucache)\n return NULL;\n }\n\n- ExecClearTuple(mrucache->cachefoundslot);\n+ // ExecClearTuple(mrucache->cachefoundslot);\n slot = mrucache->cachefoundslot;\n\nExecStoreMinimalTuple(mrucache->last_tuple->mintuple, slot, false);\n return slot;\n\n\nv9 has the following result:\n 1608.048 select count(*) from hundredk hk inner join lookup l on\nhk.thousand = l.a;\n\n\n\n> Graphs attached\n>\n> Looking at profiles, I don't really see any obvious reason as to why\n> this is. 
I'm very much inclined to just pursue the v8 patch (separate\n> Result Cache node) and just drop the v9 idea altogether.\n>\n> David\n>\n\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Sun, 8 Nov 2020 22:52:43 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 9 Nov 2020 at 03:52, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> then I did a perf on the 2 version, Is it possible that you called tts_minimal_clear twice in\n> the v9 version? Both ExecClearTuple and ExecStoreMinimalTuple called tts_minimal_clear\n> on the same slot.\n>\n> With the following changes:\n\nThanks for finding that. After applying that fix I did a fresh set of\nbenchmarks on the latest master, latest master + v8 and latest master\n+ v9 using the attached script. (resultcachebench2.sh.txt)\n\nI ran this on my zen2 AMD64 machine and formatted the results into the\nattached resultcache_master_vs_v8_vs_v9.csv file\n\nIf I load this into PostgreSQL:\n\n# create table resultcache_bench (tbl text, target text, col text,\nlatency_master numeric(10,3), latency_v8 numeric(10,3), latency_v9\nnumeric(10,3));\n# copy resultcache_bench from\n'/path/to/resultcache_master_vs_v8_vs_v9.csv' with(format csv);\n\nand run:\n\n# select col,tbl,target, sum(latency_v8) v8, sum(latency_v9) v9,\nround(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9 from\nresultcache_bench group by 1,2,3 order by 2,1,3;\n\nI've attached the results of the above query. (resultcache_v8_vs_v9.txt)\n\nOut of the 24 tests done on each branch, only 6 of 24 are better on v9\ncompared to v8. So v8 wins on 75% of the tests. v9 never wins using\nthe lookup1 table (1 row per lookup). It only runs on 50% of the\nlookup100 queries (100 inner rows per outer row).
However, despite the\ndraw in won tests for the lookup100 test, v8 takes less time overall,\nas indicated by the following query:\n\npostgres=# select round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9\nfrom resultcache_bench where tbl='lookup100';\n v8_vs_v9\n----------\n 99.3\n(1 row)\n\nDitching the WHERE clause and simply doing:\n\npostgres=# select round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9\nfrom resultcache_bench;\n v8_vs_v9\n----------\n 96.2\n(1 row)\n\nindicates that v8 is 3.8% faster than v9. Altering that query\naccordingly indicates v8 is 11.5% faster than master and v9 is only 7%\nfaster than master.\n\nOf course, scaling up the test will yield both versions being even\nmore favourable then master, but the point here is comparing v8 to v9.\n\nDavid", "msg_date": "Mon, 9 Nov 2020 15:07:34 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, Nov 9, 2020 at 10:07 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 9 Nov 2020 at 03:52, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > then I did a perf on the 2 version, Is it possible that you called\n> tts_minimal_clear twice in\n> > the v9 version? Both ExecClearTuple and ExecStoreMinimalTuple called\n> tts_minimal_clear\n> > on the same slot.\n> >\n> > With the following changes:\n>\n> Thanks for finding that. After applying that fix I did a fresh set of\n> benchmarks on the latest master, latest master + v8 and latest master\n> + v9 using the attached script. 
(resultcachebench2.sh.txt)\n>\n> I ran this on my zen2 AMD64 machine and formatted the results into the\n> attached resultcache_master_vs_v8_vs_v9.csv file\n>\n> If I load this into PostgreSQL:\n>\n> # create table resultcache_bench (tbl text, target text, col text,\n> latency_master numeric(10,3), latency_v8 numeric(10,3), latency_v9\n> numeric(10,3));\n> # copy resultcache_bench from\n> '/path/to/resultcache_master_vs_v8_vs_v9.csv' with(format csv);\n>\n> and run:\n>\n> # select col,tbl,target, sum(latency_v8) v8, sum(latency_v9) v9,\n> round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9 from\n> resultcache_bench group by 1,2,3 order by 2,1,3;\n>\n> I've attached the results of the above query. (resultcache_v8_vs_v9.txt)\n>\n> Out of the 24 tests done on each branch, only 6 of 24 are better on v9\n> compared to v8. So v8 wins on 75% of the tests.\n\n\nI think either version is OK for me and I like this patch overall. However\nI believe v9\nshould be no worse than v8 all the time, Is there any theory to explain\nyour result?\n\n\nv9 never wins using\n> the lookup1 table (1 row per lookup). It only runs on 50% of the\n> lookup100 queries (100 inner rows per outer row). However, despite the\n> draw in won tests for the lookup100 test, v8 takes less time overall,\n> as indicated by the following query:\n>\n> postgres=# select round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9\n> from resultcache_bench where tbl='lookup100';\n> v8_vs_v9\n> ----------\n> 99.3\n> (1 row)\n>\n> Ditching the WHERE clause and simply doing:\n>\n> postgres=# select round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9\n> from resultcache_bench;\n> v8_vs_v9\n> ----------\n> 96.2\n> (1 row)\n>\n> indicates that v8 is 3.8% faster than v9. 
Altering that query\n> accordingly indicates v8 is 11.5% faster than master and v9 is only 7%\n> faster than master.\n>\n> Of course, scaling up the test will yield both versions being even\n> more favourable then master, but the point here is comparing v8 to v9.\n>\n> David\n>\n\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Mon, 9 Nov 2020 11:28:59 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 9 Nov 2020 at 16:29, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I think either version is OK for me and I like this patch overall.\n\nThat's good to know. Thanks.\n\n> However I believe v9\n> should be no worse than v8 all the time, Is there any theory to explain\n> your result?\n\nNothing jumps out at me from looking at profiles. The only thing I\nnoticed was the tuple deforming is more costly with v9. I'm not sure\nwhy.\n\nThe other part of v9 that I don't have a good solution for yet is the\ncode around the swapping of the projection info for the Nested Loop.\nThe cache always uses a MinimalTupleSlot, but we may have a\nVirtualSlot when we get a cache miss. If we do then we need to\ninitialise 2 different projection infos so when we project from the\ncache that we have the step to deform the minimal tuple.
That step is\nnot required when the inner slot is a virtual slot.\n\nI did some further testing on performance. Basically, I increased the\nsize of the tests by 2 orders of magnitude. Instead of 100k rows, I\nused 10million rows. (See attached\nresultcache_master_vs_v8_vs_v9_big.csv)\n\nLoading that in with:\n\n# create table resultcache_bench2 (tbl text, target text, col text,\nlatency_master numeric(10,3), latency_v8 numeric(10,3), latency_v9\nnumeric(10,3));\n# copy resultcache_bench2 from\n'/path/to/resultcache_master_vs_v8_vs_v9_big.csv' with(format csv);\n\nI see that v8 still wins.\n\npostgres=# select round(avg(latency_v8/latency_master)*100,1) as\nv8_vs_master, round(avg(latency_v9/latency_master)*100,1) as\nv9_vs_master, round(avg(latency_v8/latency_v9)*100,1) as v8_vs_v9 from\nresultcache_bench2;\n v8_vs_master | v9_vs_master | v8_vs_v9\n--------------+--------------+----------\n 56.7 | 58.8 | 97.3\n\nExecution for all tests for v8 runs in 56.7% of master, but v9 runs in\n58.8% of master's time. Full results in\nresultcache_master_v8_vs_v9_big.txt. v9 wins in 7 of 24 tests this\ntime. The best example test for v8 shows that v8 takes 90.6% of the\ntime of v9, but in the tests where v9 is faster, it only has a 4.3%\nlead on v8 (95.7%). You can see that overall v8 is 2.7% faster than v9\nfor these tests.\n\nDavid", "msg_date": "Tue, 10 Nov 2020 10:38:41 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On 2020-Nov-10, David Rowley wrote:\n\n> On Mon, 9 Nov 2020 at 16:29, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> > However I believe v9\n> > should be no worse than v8 all the time, Is there any theory to explain\n> > your result?\n> \n> Nothing jumps out at me from looking at profiles. The only thing I\n> noticed was the tuple deforming is more costly with v9. 
I'm not sure\n> why.\n\nAre you taking into account the possibility that generated machine code\nis a small percent slower out of mere bad luck? I remember someone\nsuggesting that they can make code 2% faster or so by inserting random\nno-op instructions in the binary, or something like that. So if the\ndifference between v8 and v9 is that small, then it might be due to this\nkind of effect.\n\nI don't know what is a good technique to test this hypothesis.\n\n\n", "msg_date": "Mon, 9 Nov 2020 20:15:30 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Are you taking into account the possibility that generated machine code\n> is a small percent slower out of mere bad luck? I remember someone\n> suggesting that they can make code 2% faster or so by inserting random\n> no-op instructions in the binary, or something like that. So if the\n> difference between v8 and v9 is that small, then it might be due to this\n> kind of effect.\n\nYeah. I believe what this arises from is good or bad luck about relevant\ntight loops falling within or across cache lines, and that sort of thing.\nWe've definitely seen performance changes up to a couple percent with\nno apparent change to the relevant code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Nov 2020 18:49:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 10 Nov 2020 at 12:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Are you taking into account the possibility that generated machine code\n> > is a small percent slower out of mere bad luck? 
I remember someone\n> > suggesting that they can make code 2% faster or so by inserting random\n> > no-op instructions in the binary, or something like that. So if the\n> > difference between v8 and v9 is that small, then it might be due to this\n> > kind of effect.\n>\n> Yeah. I believe what this arises from is good or bad luck about relevant\n> tight loops falling within or across cache lines, and that sort of thing.\n> We've definitely seen performance changes up to a couple percent with\n> no apparent change to the relevant code.\n\nIt possibly is this issue.\n\nNormally how I build up my confidence in which is faster is why just\nrebasing on master as it advances and see if the winner ever changes.\nThe theory here is if one patch is consistently the fastest, then\nthere's more chance if there being a genuine reason for it.\n\nSo far I've only rebased v9 twice. Both times it was slower than v8.\nSince the benchmarks are all scripted, it's simple enough to kick off\nanother round to see if anything has changed.\n\nI do happen to prefer having the separate Result Cache node (v8), so\nfrom my point of view, even if the performance was equal, I'd rather\nhave v8. I understand that some others feel different though.\n\nDavid\n\n\n", "msg_date": "Tue, 10 Nov 2020 12:55:01 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, Nov 9, 2020 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Are you taking into account the possibility that generated machine code\n> > is a small percent slower out of mere bad luck? I remember someone\n> > suggesting that they can make code 2% faster or so by inserting random\n> > no-op instructions in the binary, or something like that. 
So if the\n> > difference between v8 and v9 is that small, then it might be due to this\n> > kind of effect.\n>\n> Yeah. I believe what this arises from is good or bad luck about relevant\n> tight loops falling within or across cache lines, and that sort of thing.\n> We've definitely seen performance changes up to a couple percent with\n> no apparent change to the relevant code.\n\nThat was Andrew Gierth. And it was 5% IIRC.\n\nIn theory it should be possible to control for this using a tool like\nstabilizer:\n\nhttps://github.com/ccurtsinger/stabilizer\n\nI am not aware of anybody having actually used the tool with Postgres,\nthough. It looks rather inconvenient.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Nov 2020 15:55:38 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Nov 10, 2020 at 7:55 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 10 Nov 2020 at 12:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > Are you taking into account the possibility that generated machine code\n> > > is a small percent slower out of mere bad luck? I remember someone\n> > > suggesting that they can make code 2% faster or so by inserting random\n> > > no-op instructions in the binary, or something like that. So if the\n> > > difference between v8 and v9 is that small, then it might be due to\n> this\n> > > kind of effect.\n> >\n> > Yeah. I believe what this arises from is good or bad luck about relevant\n> > tight loops falling within or across cache lines, and that sort of thing.\n> > We've definitely seen performance changes up to a couple percent with\n> > no apparent change to the relevant code.\n>\n> I do happen to prefer having the separate Result Cache node (v8), so\n> from my point of view, even if the performance was equal, I'd rather\n> have v8. 
I understand that some others feel different though.\n>\n>\nWhile I have interest about what caused the tiny difference, I admit that\nwhat direction\nthis patch should go is more important. Not sure if anyone is convinced\nthat\nv8 and v9 have a similar performance. The current data show it is similar.\nI want to\nprofile/read code more, but I don't know what part I should pay attention\nto. So I think\nany hints on why v9 should be better at a noticeable level in theory\nshould be very\nhelpful. After that, I'd like to read the code or profile more carefully.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 10 Nov 2020 10:38:43 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 10 Nov 2020 at 15:38, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> While I have interest about what caused the tiny difference, I admit that what direction\n> this patch should go is more important. Not sure if anyone is convinced that\n> v8 and v9 have a similar performance. The current data show it is similar. I want to\n> profile/read code more, but I don't know what part I should pay attention to. So I think\n> any hints on why v9 should be better at a noticeable level in theory should be very\n> helpful. After that, I'd like to read the code or profile more carefully.\n\nIt was thought by putting the cache code directly inside\nnodeNestloop.c that the overhead of fetching a tuple from a subnode\ncould be eliminated when we get a cache hit.\n\nA cache hit on v8 looks like:\n\nNest loop -> Fetch new outer row\nNest loop -> Fetch inner row\nResult Cache -> cache hit return first cached tuple\nNest loop -> eval qual and return tuple if matches\n\nWith v9 it's more like:\n\nNest Loop -> Fetch new outer row\nNest loop -> cache hit return first cached tuple\nNest loop -> eval qual and return tuple if matches\n\nSo 1 less hop between nodes.\n\nIn reality, the hop is not that expensive, so might not be a big\nenough factor to slow the execution down.\n\nThere's some extra complexity in v9 around the slot type of the inner\ntuple. A cache hit means the slot type is Minimal. But a miss means\nthe slot type is whatever type the inner node's slot is. 
So some code\nexists to switch the qual and projection info around depending on if\nwe get a cache hit or a miss.\n\nI did some calculations on how costly pulling a tuple through a node in [1].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9UXdk6ZYyqbJnjFO9a9hyHKGW7B%3DZRh-rxy9qxfPA5Gw%40mail.gmail.com\n\n\n", "msg_date": "Tue, 10 Nov 2020 17:30:13 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 10 Nov 2020 at 12:55, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 10 Nov 2020 at 12:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > Are you taking into account the possibility that generated machine code\n> > > is a small percent slower out of mere bad luck? I remember someone\n> > > suggesting that they can make code 2% faster or so by inserting random\n> > > no-op instructions in the binary, or something like that. So if the\n> > > difference between v8 and v9 is that small, then it might be due to this\n> > > kind of effect.\n> >\n> > Yeah. I believe what this arises from is good or bad luck about relevant\n> > tight loops falling within or across cache lines, and that sort of thing.\n> > We've definitely seen performance changes up to a couple percent with\n> > no apparent change to the relevant code.\n>\n> It possibly is this issue.\n>\n> Normally how I build up my confidence in which is faster is why just\n> rebasing on master as it advances and see if the winner ever changes.\n> The theory here is if one patch is consistently the fastest, then\n> there's more chance if there being a genuine reason for it.\n\nI kicked off a script last night that ran benchmarks on master, v8 and\nv9 of the patch on 1 commit per day for the past 30 days since\nyesterday. 
The idea here is that as the code changes that if the\nperformance differences are due to code alignment then there should be\nenough churn in 30 days to show if this is the case.\n\nThe quickly put together script is attached. It would need quite a bit\nof modification to run on someone else's machine.\n\nThis took about 20 hours to run. I found that v8 is faster on 28 out\nof 30 commits. In the two cases where v9 was faster, v9 took 99.8% and\n98.5% of the time of v8. In the 28 cases where v8 was faster it was\ngenerally about 2-4% faster, but a couple of times 8-10% faster. Full\nresults attached in .csv file. Also the query I ran to compare the\nresults once loaded into Postgres.\n\nDavid", "msg_date": "Thu, 12 Nov 2020 15:36:18 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi David:\n\nI did a review on the v8, it looks great to me. Here are some tiny things\nnoted,\njust FYI.\n\n 1. modified src/include/utils/selfuncs.h\n@@ -70,9 +70,9 @@\n * callers to provide further details about some assumptions which were\nmade\n * during the estimation.\n */\n-#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one of\n- * the DEFAULTs as defined above.\n- */\n+#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one\n+ * of the DEFAULTs as defined\n+ * above. */\n\nLooks nothing has changed.\n\n\n2. leading spaces is not necessary in comments.\n\n /*\n * ResultCacheTuple Stores an individually cached tuple\n */\ntypedef struct ResultCacheTuple\n{\nMinimalTuple mintuple; /* Cached tuple */\nstruct ResultCacheTuple *next; /* The next tuple with the same parameter\n* values or NULL if it's the last one */\n} ResultCacheTuple;\n\n\n3. 
We define ResultCacheKey as below.\n\n/*\n * ResultCacheKey\n * The hash table key for cached entries plus the LRU list link\n */\ntypedef struct ResultCacheKey\n{\nMinimalTuple params;\ndlist_node lru_node; /* Pointer to next/prev key in LRU list */\n} ResultCacheKey;\n\nSince we store it as a MinimalTuple, we need some FETCH_INNER_VAR step for\neach element during the ResultCacheHash_equal call. I am thinking if we can\nstore a \"Datum *\" directly to save these steps.\nexec_aggvalues/exec_aggnulls looks\na good candidate for me, except that the name looks not good. IMO, we can\nrename exec_aggvalues/exec_aggnulls and try to merge\nEEOP_AGGREF/EEOP_WINDOW_FUNC into a more generic step which can be\nreused in this case.\n\n4. I think the ExecClearTuple in prepare_probe_slot is not a must, since\nthe\ndata tts_values/tts_flags/tts_nvalid are all reset later, and tts_tid is not\nreal used in our case. Since both prepare_probe_slot\nand ResultCacheHash_equal are in pretty hot path, we may need to consider\nit.\n\nstatic inline void\nprepare_probe_slot(ResultCacheState *rcstate, ResultCacheKey *key)\n{\n...\nExecClearTuple(pslot);\n...\n}\n\n\nstatic void\ntts_virtual_clear(TupleTableSlot *slot)\n{\nif (unlikely(TTS_SHOULDFREE(slot)))\n{\nVirtualTupleTableSlot *vslot = (VirtualTupleTableSlot *) slot;\n\npfree(vslot->data);\nvslot->data = NULL;\n\nslot->tts_flags &= ~TTS_FLAG_SHOULDFREE;\n}\n\nslot->tts_nvalid = 0;\nslot->tts_flags |= TTS_FLAG_EMPTY;\nItemPointerSetInvalid(&slot->tts_tid);\n}\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 22 Nov 2020 21:21:06 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Sun, Nov 22, 2020 at 9:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi David:\n>\n> I did a review on the v8, it looks great to me. Here are some tiny\n> things noted,\n> just FYI.\n>\n> 1. modified src/include/utils/selfuncs.h\n> @@ -70,9 +70,9 @@\n> * callers to provide further details about some assumptions which were\n> made\n> * during the estimation.\n> */\n> -#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one of\n> - * the DEFAULTs as defined above.\n> - */\n> +#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one\n> + * of the DEFAULTs as defined\n> + * above. */\n>\n> Looks nothing has changed.\n>\n>\n> 2. leading spaces is not necessary in comments.\n>\n> /*\n> * ResultCacheTuple Stores an individually cached tuple\n> */\n> typedef struct ResultCacheTuple\n> {\n> MinimalTuple mintuple; /* Cached tuple */\n> struct ResultCacheTuple *next; /* The next tuple with the same parameter\n> * values or NULL if it's the last one */\n> } ResultCacheTuple;\n>\n>\n> 3. 
We define ResultCacheKey as below.\n>\n> /*\n> * ResultCacheKey\n> * The hash table key for cached entries plus the LRU list link\n> */\n> typedef struct ResultCacheKey\n> {\n> MinimalTuple params;\n> dlist_node lru_node; /* Pointer to next/prev key in LRU list */\n> } ResultCacheKey;\n>\n> Since we store it as a MinimalTuple, we need some FETCH_INNER_VAR step for\n> each element during the ResultCacheHash_equal call. I am thinking if we\n> can\n> store a \"Datum *\" directly to save these steps.\n> exec_aggvalues/exec_aggnulls looks\n> a good candidate for me, except that the name looks not good. IMO, we can\n> rename exec_aggvalues/exec_aggnulls and try to merge\n> EEOP_AGGREF/EEOP_WINDOW_FUNC into a more generic step which can be\n> reused in this case.\n>\n> 4. I think the ExecClearTuple in prepare_probe_slot is not a must, since\n> the\n> data tts_values/tts_flags/tts_nvalid are all reset later, and tts_tid is\n> not\n> real used in our case. Since both prepare_probe_slot\n> and ResultCacheHash_equal are in pretty hot path, we may need to consider\n> it.\n>\n> static inline void\n> prepare_probe_slot(ResultCacheState *rcstate, ResultCacheKey *key)\n> {\n> ...\n> ExecClearTuple(pslot);\n> ...\n> }\n>\n>\n> static void\n> tts_virtual_clear(TupleTableSlot *slot)\n> {\n> if (unlikely(TTS_SHOULDFREE(slot)))\n> {\n> VirtualTupleTableSlot *vslot = (VirtualTupleTableSlot *) slot;\n>\n> pfree(vslot->data);\n> vslot->data = NULL;\n>\n> slot->tts_flags &= ~TTS_FLAG_SHOULDFREE;\n> }\n>\n> slot->tts_nvalid = 0;\n> slot->tts_flags |= TTS_FLAG_EMPTY;\n> ItemPointerSetInvalid(&slot->tts_tid);\n> }\n>\n> --\n> Best Regards\n> Andy Fan\n>\n\n\nadd 2 more comments.\n\n1. I'd suggest adding Assert(false); in RC_END_OF_SCAN case to make the\nerror clearer.\n\ncase RC_END_OF_SCAN:\n/*\n* We've already returned NULL for this scan, but just in case\n* something call us again by mistake.\n*/\nreturn NULL;\n\n2. 
Currently we handle the (!cache_store_tuple(node, outerslot))) case by\nset it\n to RC_CACHE_BYPASS_MODE. The only reason for the cache_store_tuple\nfailure is\n we can't cache_reduce_memory. I guess if cache_reduce_memory\n failed once, it would not succeed later(no more tuples can be stored,\n nothing is changed). So I think we can record this state and avoid any\nnew\n cache_reduce_memory call.\n\n/*\n* If we failed to create the entry or failed to store the\n* tuple in the entry, then go into bypass mode.\n*/\nif (unlikely(entry == NULL ||\n!cache_store_tuple(node, outerslot)))\n\n to\n\nif (unlikely(entry == NULL || node->memory_cant_be_reduced ||\n!cache_store_tuple(node, outerslot)))\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 22 Nov 2020 23:23:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Thanks for having another look at this.\n\n> On Sun, Nov 22, 2020 at 9:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> add 2 more comments.\n>\n> 1. I'd suggest adding Assert(false); in RC_END_OF_SCAN case to make the error clearer.\n>\n> case RC_END_OF_SCAN:\n> /*\n> * We've already returned NULL for this scan, but just in case\n> * something call us again by mistake.\n> */\n> return NULL;\n\nThis just took some inspiration from nodeMaterial.c where it says:\n\n/*\n* If necessary, try to fetch another row from the subplan.\n*\n* Note: the eof_underlying state variable exists to short-circuit further\n* subplan calls. It's not optional, unfortunately, because some plan\n* node types are not robust about being called again when they've already\n* returned NULL.\n*/\n\nI'm not feeling a pressing need to put an Assert(false); in there as\nit's not what nodeMaterial.c does. nodeMaterial is nodeResultCache's\nsister node which can also be seen below Nested Loops.\n\n> 2. Currently we handle the (!cache_store_tuple(node, outerslot))) case by set it\n> to RC_CACHE_BYPASS_MODE. The only reason for the cache_store_tuple failure is\n> we can't cache_reduce_memory. 
I guess if cache_reduce_memory\n> failed once, it would not succeed later(no more tuples can be stored,\n> nothing is changed). So I think we can record this state and avoid any new\n> cache_reduce_memory call.\n>\n> /*\n> * If we failed to create the entry or failed to store the\n> * tuple in the entry, then go into bypass mode.\n> */\n> if (unlikely(entry == NULL ||\n> !cache_store_tuple(node, outerslot)))\n>\n> to\n>\n> if (unlikely(entry == NULL || node->memory_cant_be_reduced ||\n> !cache_store_tuple(node, outerslot)))\n\nThe reason for RC_CACHE_BYPASS_MODE is if there's a single set of\nparameters that have so many results that they, alone, don't fit in\nthe cache. We call cache_reduce_memory() whenever we go over our\nmemory budget. That function returns false if it was unable to free\nenough memory without removing the \"specialkey\", which in this case is\nthe current cache entry that's being populated. Later, when we're\ncaching some entry that isn't quite so large, we still want to be able\nto cache that. In that case, we'll have removed the remnants of the\noverly large entry that didn't fit to way for newer and, hopefully,\nsmaller entries. No problems. I'm not sure why there's a need for\nanother flag here.\n\nA bit more background.\n\nWhen caching a new entry, or finding an existing entry, we move that\nentry to the top of the MRU dlist. When adding entries or tuples to\nexisting entries, if we've gone over memory budget, then we remove\ncache entries from the MRU list starting at the tail (lease recently\nused). If we begin caching tuples for an entry and need to free some\nspace, then since we've put the current entry to the top of the MRU\nlist, it'll be the last one to be removed. However, it's still\npossible that we run through the entire MRU list and end up at the\nmost recently used item. So the entry we're populating can also be\nremoved if freeing everything else was still not enough to give us\nenough free memory. 
The code refers to this as a cache overflow. This\ncauses the state machine to move into RC_CACHE_BYPASS_MODE mode. We'll\njust read tuples directly from the subnode in that case, no need to\nattempt to cache them. They're not going to fit. We'll come out of\nRC_CACHE_BYPASS_MODE when doing the next rescan with a different set\nof parameters. This is our chance to try caching things again. The\ncode does that. There might be far fewer tuples for the next parameter\nwe're scanning for, or those tuples might be more narrow. So it makes\nsense to give caching them another try. Perhaps there's some point\nwhere we should give up doing that, but given good statistics, it's\nunlikely the planner would have thought a result cache would have been\nworth the trouble and would likely have picked some other way to\nexecute the plan. The planner does estimate the average size of a\ncache entry and calculates how many of those fit into a hash_mem. If\nthat number is too low then Result Caching the inner side won't be too\nappealing. Of course, calculating the average does not mean there are\nno outliers. We'll deal with the large side of the outliers with the\nbypass code.\n\nI currently don't really see what needs to be changed about that.\n\nDavid\n\n\n", "msg_date": "Fri, 27 Nov 2020 13:10:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, Nov 27, 2020 at 8:10 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thanks for having another look at this.\n>\n> > On Sun, Nov 22, 2020 at 9:21 PM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > add 2 more comments.\n> >\n> > 1. 
I'd suggest adding Assert(false); in RC_END_OF_SCAN case to make the\n> error clearer.\n> >\n> > case RC_END_OF_SCAN:\n> > /*\n> > * We've already returned NULL for this scan, but just in case\n> > * something call us again by mistake.\n> > */\n> > return NULL;\n>\n> This just took some inspiration from nodeMaterial.c where it says:\n>\n> /*\n> * If necessary, try to fetch another row from the subplan.\n> *\n> * Note: the eof_underlying state variable exists to short-circuit further\n> * subplan calls. It's not optional, unfortunately, because some plan\n> * node types are not robust about being called again when they've already\n> * returned NULL.\n> */\n>\n> I'm not feeling a pressing need to put an Assert(false); in there as\n> it's not what nodeMaterial.c does. nodeMaterial is nodeResultCache's\n> sister node which can also be seen below Nested Loops.\n>\n>\nOK, even though I am not quite understanding the above now, I will try to\nfigure it\nby myself. I'm OK with this decision.\n\n\n\n\n> > 2. Currently we handle the (!cache_store_tuple(node, outerslot))) case\n> by set it\n> > to RC_CACHE_BYPASS_MODE. The only reason for the cache_store_tuple\n> failure is\n> > we can't cache_reduce_memory. I guess if cache_reduce_memory\n> > failed once, it would not succeed later(no more tuples can be stored,\n> > nothing is changed). So I think we can record this state and avoid\n> any new\n> > cache_reduce_memory call.\n> >\n> > /*\n> > * If we failed to create the entry or failed to store the\n> > * tuple in the entry, then go into bypass mode.\n> > */\n> > if (unlikely(entry == NULL ||\n> > !cache_store_tuple(node, outerslot)))\n> >\n> > to\n> >\n> > if (unlikely(entry == NULL || node->memory_cant_be_reduced ||\n> > !cache_store_tuple(node, outerslot)))\n>\n> The reason for RC_CACHE_BYPASS_MODE is if there's a single set of\n> parameters that have so many results that they, alone, don't fit in\n> the cache. 
We call cache_reduce_memory() whenever we go over our\n> memory budget. That function returns false if it was unable to free\n> enough memory without removing the \"specialkey\", which in this case is\n> the current cache entry that's being populated. Later, when we're\n> caching some entry that isn't quite so large, we still want to be able\n> to cache that. In that case, we'll have removed the remnants of the\n> overly large entry that didn't fit to way for newer and, hopefully,\n> smaller entries. No problems. I'm not sure why there's a need for\n> another flag here.\n>\n>\nThanks for the explanation, I'm sure I made some mistakes before at\nthis part.\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 27 Nov 2020 10:36:45 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 12 Nov 2020 at 15:36, David Rowley <dgrowleyml@gmail.com> wrote:\n> I kicked off a script last night that ran benchmarks on master, v8 and\n> v9 of the patch on 1 commit per day for the past 30 days since\n> yesterday. 
The idea here is that as the code changes that if the\n> performance differences are due to code alignment then there should be\n> enough churn in 30 days to show if this is the case.\n>\n> The quickly put together script is attached. It would need quite a bit\n> of modification to run on someone else's machine.\n>\n> This took about 20 hours to run. I found that v8 is faster on 28 out\n> of 30 commits. In the two cases where v9 was faster, v9 took 99.8% and\n> 98.5% of the time of v8. In the 28 cases where v8 was faster it was\n> generally about 2-4% faster, but a couple of times 8-10% faster. Full\n> results attached in .csv file. Also the query I ran to compare the\n> results once loaded into Postgres.\n\nSince running those benchmarks, Andres spent a little bit of time\nlooking at the v9 patch and he pointed out that I can use the same\nprojection info in the nested loop code with and without a cache hit.\nI just need to ensure that inneropsfixed is false so that the\nexpression compilation includes a deform step when result caching is\nenabled. Making it work like that did make a small performance\nimprovement, but further benchmarking showed that it was still not as\nfast as the v8 patch (separate Result Cache node).\n\nDue to that, I want to push forward with having the separate Result\nCache node and just drop the idea of including the feature as part of\nthe Nested Loop node.\n\nI've attached an updated patch, v10. This is v8 with a few further\nchanges; I added the peak memory tracking and adjusted a few comments.\nI added a paragraph to explain what RC_CACHE_BYPASS_MODE is. I also\nnoticed that the code I'd written to build the cache lookup expression\nincluded a step to deform the outer tuple. This was unnecessary and\nslowed down the expression evaluation.\n\nI'm fairly happy with patches 0001 to 0003. However, I ended up\nstripping out the subplan caching code out of 0003 and putting it in\n0004. This part I'm not so happy with. 
The problem there is that when\nplanning a correlated subquery we don't have any context to determine\nhow many distinct values the subplan will be called with. For now, the\n0004 patch just always includes a Result Cache for correlated\nsubqueries. The reason I don't like that is that it could slow things\ndown when the cache never gets a hit. The additional cost of adding\ntuples to the cache is going to slow things down.\n\nI'm not yet sure the best way to make 0004 better. I don't think using\nAlternativeSubplans is a good choice as it means having to build two\nsubplans. Also determining the cheapest plan to use couldn't use the\nexisting logic that's in fix_alternative_subplan(). It might be best\nleft until we do some refactoring so that instead of building subplans\nas soon as we've run the planner, we have it keep a list of Paths\naround and then choose the best Path once the top-level plan has been\nplanned. That's a pretty big change.\n\nOn making another pass over this patchset, I feel there are two points\nthat might still raise a few eyebrows:\n\n1. In order to not have Nested Loops picked with an inner Result Cache\nwhen the inner index's parameters have no valid statistics, I modified\nestimate_num_groups() to add a new parameter that allows callers to\npass an EstimationInfo struct to have the function set a flag to\nindicate if DEFAULT_NUM_DISTINCT was used. Callers which don't care\nabout this can just pass NULL. I did once try adding a new parameter\nto clauselist_selectivity() in 2686ee1b. There was not much\nexcitement about that, and we ended up removing it again. I don't see\nany alternative here.\n\n2. Nobody really mentioned they didn't like the name Result Cache. I\nreally used that as a placeholder name until I came up with something\nbetter. I mentioned a few other names in [1]. 
If nobody is objecting\nto Result Cache, I'll just keep it named that way.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoj_sH1H3JVXgHuwnxf1FQbjRVOqqgxzOgJX13NiA9-cg@mail.gmail.com",
    "msg_date": "Sat, 5 Dec 2020 03:41:21 +1300",
    "msg_from": "David Rowley <dgrowleyml@gmail.com>",
    "msg_from_op": true,
    "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
  },
  {
    "msg_contents": "Thanks for working on the new version.\n\nOn Fri, Dec 4, 2020 at 10:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n>\n> I also\n> noticed that the code I'd written to build the cache lookup expression\n> included a step to deform the outer tuple. This was unnecessary and\n> slowed down the expression evaluation.\n>\n>\nI thought it would be something like my 3rd suggestion on [1], however after\nI read the code, it looked like no. Could you explain what changes it is?\nI probably missed something.\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvqvGZUPKHO%2B4Xp7Lm_q1OXBo2Yp1%3D5pVnEUcr4dgOXxEg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan",
    "msg_date": "Sat, 5 Dec 2020 22:52:45 +0800",
    "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
    "msg_from_op": false,
    "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans"
  },
  {
    "msg_contents": "Thanks for this review. 
I somehow missed addressing what's mentioned\nhere for the v10 patch. Comments below.\n\nOn Mon, 23 Nov 2020 at 02:21, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> 1. modified src/include/utils/selfuncs.h\n> @@ -70,9 +70,9 @@\n> * callers to provide further details about some assumptions which were made\n> * during the estimation.\n> */\n> -#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one of\n> - * the DEFAULTs as defined above.\n> - */\n> +#define SELFLAG_USED_DEFAULT (1 << 0) /* Estimation fell back on one\n> + * of the DEFAULTs as defined\n> + * above. */\n>\n> Looks nothing has changed.\n\nI accidentally took the changes made by pgindent into the wrong patch.\nFixed that in v10.\n\n> 2. leading spaces is not necessary in comments.\n>\n> /*\n> * ResultCacheTuple Stores an individually cached tuple\n> */\n> typedef struct ResultCacheTuple\n> {\n> MinimalTuple mintuple; /* Cached tuple */\n> struct ResultCacheTuple *next; /* The next tuple with the same parameter\n> * values or NULL if it's the last one */\n> } ResultCacheTuple;\n\nOK, I've changed that so that they're on 1 line instead of 3.\n\n> 3. We define ResultCacheKey as below.\n>\n> /*\n> * ResultCacheKey\n> * The hash table key for cached entries plus the LRU list link\n> */\n> typedef struct ResultCacheKey\n> {\n> MinimalTuple params;\n> dlist_node lru_node; /* Pointer to next/prev key in LRU list */\n> } ResultCacheKey;\n>\n> Since we store it as a MinimalTuple, we need some FETCH_INNER_VAR step for\n> each element during the ResultCacheHash_equal call. I am thinking if we can\n> store a \"Datum *\" directly to save these steps. exec_aggvalues/exec_aggnulls looks\n> a good candidate for me, except that the name looks not good. 
IMO, we can\n> rename exec_aggvalues/exec_aggnulls and try to merge\n> EEOP_AGGREF/EEOP_WINDOW_FUNC into a more generic step which can be\n> reused in this case.\n\nI think this is along the lines of what I'd been thinking about and\nmentioned internally to Thomas and Andres. I called it a MemTuple and\nit was basically a contiguous block of memory with Datum and isnull\narrays and any varlena attributes at the end of the contiguous\nallocation. These could quickly be copied into a VirtualSlot with\nzero deforming. I've not given this too much thought yet, but if I\nwas to do this I'd be aiming to store the cached tuple this way to so\nsave having to deform it each time we get a cache hit. We'd use more\nmemory storing entries this way, but if we're not expecting the Result\nCache to fill work_mem, then perhaps it's another approach that the\nplanner could decide on. Perhaps the cached tuple pointer could be a\nunion to allow us to store either without making the struct any\nlarger.\n\nHowever, FWIW, I'd prefer to think about this later though.\n\n> 4. I think the ExecClearTuple in prepare_probe_slot is not a must, since the\n> data tts_values/tts_flags/tts_nvalid are all reset later, and tts_tid is not\n> real used in our case. Since both prepare_probe_slot\n> and ResultCacheHash_equal are in pretty hot path, we may need to consider it.\n\nI agree that it would be nice not to do the ExecClearTuple(), but the\nonly way I can see to get rid of it also requires getting rid of the\nExecStoreVirtualTuple(). The problem is ExecStoreVirtualTuple()\nAsserts that the slot is empty, which it won't be the second time\naround unless we ExecClearTuple it. It seems that to make that work\nwe'd have to manually set slot->tts_nvalid. 
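To make that constraint concrete, here's a tiny self-contained toy model of\nthe emptiness Assert being discussed. To be clear, this is not the real\nexecutor code: ToySlot, toy_clear_tuple() and toy_store_virtual_tuple() are\nmade-up stand-ins for TupleTableSlot, ExecClearTuple() and\nExecStoreVirtualTuple(), with just enough behaviour to show why the clear\nmust happen before each store:\n\n
```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy stand-ins for TupleTableSlot / ExecClearTuple() /
 * ExecStoreVirtualTuple().  These are NOT the PostgreSQL definitions;
 * they only model the "slot must be empty before storing" rule.
 */
#define TOY_FLAG_EMPTY (1 << 1)

typedef struct ToySlot
{
	int		flags;
	int		nvalid;			/* models tts_nvalid */
	long	values[4];		/* models tts_values */
	bool	isnull[4];		/* models tts_isnull */
} ToySlot;

static void
toy_clear_tuple(ToySlot *slot)
{
	slot->nvalid = 0;
	slot->flags |= TOY_FLAG_EMPTY;
}

static void
toy_store_virtual_tuple(ToySlot *slot, int natts)
{
	/* the real function Asserts emptiness much like this */
	assert(slot->flags & TOY_FLAG_EMPTY);
	slot->flags &= ~TOY_FLAG_EMPTY;
	slot->nvalid = natts;
}

/* populate the same slot twice, as a rescan would */
static int
run_demo(void)
{
	ToySlot		probeslot = {TOY_FLAG_EMPTY, 0, {0}, {false}};

	for (int loop = 0; loop < 2; loop++)
	{
		/* without this clear, the second store would trip the assert */
		toy_clear_tuple(&probeslot);
		probeslot.values[0] = loop;
		probeslot.isnull[0] = false;
		toy_store_virtual_tuple(&probeslot, 1);
	}
	return probeslot.nvalid;
}
```
\nDropping the toy_clear_tuple() call makes the second iteration fail the\nassert, which mirrors why the clear/store pair has to stay unless the slot\nfields were reset by hand.\n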
I see other places in the\ncode doing this ExecClearTuple() / ExecStoreVirtualTuple() dance, so I\ndon't think it's going to be up to this patch to start making\noptimisations just for this 1 case.\n\nDavid\n\n\n", "msg_date": "Mon, 7 Dec 2020 12:46:12 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Sun, 6 Dec 2020 at 03:52, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Fri, Dec 4, 2020 at 10:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> I also\n>> noticed that the code I'd written to build the cache lookup expression\n>> included a step to deform the outer tuple. This was unnecessary and\n>> slowed down the expression evaluation.\n>>\n>\n> I thought it would be something like my 3rd suggestion on [1], however after\n> I read the code, it looked like no. Could you explain what changes it is?\n> I probably missed something.\n\nBasically, an extra argument in ExecBuildParamSetEqual() which allows\nthe TupleTableSlotOps for the left and right side to be set\nindividually. Previously I was passing a single TupleTableSlotOps of\nTTSOpsMinimalTuple. The probeslot is a TTSOpsVirtual tuple, so\npassing TTSOpsMinimalTuple causes the function to add a needless\nEEOP_OUTER_FETCHSOME step to the expression.\n\nDavid\n\n\n", "msg_date": "Mon, 7 Dec 2020 12:50:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "I've attached another patchset that addresses some comments left by\nZhihong Yu over on [1]. 
The version number got bumped to v12 instead\nof v11 as I still have a copy of the other version of the patch which\nI made some changes to and internally named v11.\n\nThe patchset has grown 1 additional patch which is the 0004 patch.\nThe review on the other thread mentioned that I should remove the code\nduplication for the full cache check that I had mostly duplicated\nbetween adding a new entry to the cache and adding tuple to an\nexisting entry. I'm still a bit unsure that I like merging this into\na helper function. One call needs the return value of the function to\nbe a boolean value to know if it's still okay to use the cache. The\nother need the return value to be the cache entry. The patch makes the\nhelper function return the entry and returns NULL to communicate the\nfalse value. I'm not a fan of the change and might drop it.\n\nThe 0005 patch is now the only one that I think needs more work to\nmake it good enough. This is Result Cache for subplans. I mentioned\nin [2] what my problem with that patch is.\n\nOn Mon, 7 Dec 2020 at 12:50, David Rowley <dgrowleyml@gmail.com> wrote:\n> Basically, an extra argument in ExecBuildParamSetEqual() which allows\n> the TupleTableSlotOps for the left and right side to be set\n> individually. Previously I was passing a single TupleTableSlotOps of\n> TTSOpsMinimalTuple. 
The probeslot is a TTSOpsVirtual tuple, so\n> passing TTSOpsMinimalTuple causes the function to add a needless\n> EEOP_OUTER_FETCHSOME step to the expression.\n\nI also benchmarked that change and did see that it gives a small but\nnotable improvement to the performance.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CALNJ-vRAgksPqjK-sAU+9gu3R44s_3jVPJ_5SDB++jjEkTntiA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvpGX7RN+sh7Hn9HWZQKp53SjKaL=GtDzYheHWiEd-8moQ@mail.gmail.com", "msg_date": "Tue, 8 Dec 2020 20:15:52 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 8 Dec 2020 at 20:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached another patchset that addresses some comments left by\n> Zhihong Yu over on [1]. The version number got bumped to v12 instead\n> of v11 as I still have a copy of the other version of the patch which\n> I made some changes to and internally named v11.\n\nIf anyone else wants to have a look at these, please do so soon. I'm\nplanning on starting to take a serious look at getting 0001-0003 in\nearly next week.\n\nDavid\n\n\n", "msg_date": "Thu, 10 Dec 2020 09:53:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On 09.12.2020 23:53, David Rowley wrote:\n> On Tue, 8 Dec 2020 at 20:15, David Rowley <dgrowleyml@gmail.com> wrote:\n>> I've attached another patchset that addresses some comments left by\n>> Zhihong Yu over on [1]. The version number got bumped to v12 instead\n>> of v11 as I still have a copy of the other version of the patch which\n>> I made some changes to and internally named v11.\n> If anyone else wants to have a look at these, please do so soon. 
I'm\n> planning on starting to take a serious look at getting 0001-0003 in\n> early next week.\n>\n> David\n>\nI tested the patched version of Postgres on JOBS benchmark:\n\nhttps://github.com/gregrahn/join-order-benchmark\n\nFor most queries performance is the same, some queries are executed \nfaster but\none query is 150 times slower:\n\n\nexplain analyze SELECT MIN(chn.name) AS character,\n        MIN(t.title) AS movie_with_american_producer\nFROM char_name AS chn,\n      cast_info AS ci,\n      company_name AS cn,\n      company_type AS ct,\n      movie_companies AS mc,\n      role_type AS rt,\n      title AS t\nWHERE ci.note LIKE '%(producer)%'\n   AND cn.country_code = '[us]'\n   AND t.production_year > 1990\n   AND t.id = mc.movie_id\n   AND t.id = ci.movie_id\n   AND ci.movie_id = mc.movie_id\n   AND chn.id = ci.person_role_id\n   AND rt.id = ci.role_id\n   AND cn.id = mc.company_id\n   AND ct.id = mc.company_type_id;\nexplain analyze SELECT MIN(cn.name) AS from_company,\n        MIN(lt.link) AS movie_link_type,\n        MIN(t.title) AS non_polish_sequel_movie\nFROM company_name AS cn,\n      company_type AS ct,\n      keyword AS k,\n      link_type AS lt,\n      movie_companies AS mc,\n      movie_keyword AS mk,\n      movie_link AS ml,\n      title AS t\nWHERE cn.country_code !='[pl]'\n   AND (cn.name LIKE '%Film%'\n        OR cn.name LIKE '%Warner%')\n   AND ct.kind ='production companies'\n   AND k.keyword ='sequel'\n   AND lt.link LIKE '%follow%'\n   AND mc.note IS NULL\n   AND t.production_year BETWEEN 1950 AND 2000\n   AND lt.id = ml.link_type_id\n   AND ml.movie_id = t.id\n   AND t.id = mk.movie_id\n   AND mk.keyword_id = k.id\n   AND t.id = mc.movie_id\n   AND mc.company_type_id = ct.id\n   AND mc.company_id = cn.id\n   AND ml.movie_id = mk.movie_id\n   AND ml.movie_id = mc.movie_id\n   AND mk.movie_id = mc.movie_id;\n\n\nQUERY 
PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------\n  Finalize Aggregate  (cost=300131.43..300131.44 rows=1 width=64) \n(actual time=522985.919..522993.614 rows=1 loops=1)\n    ->  Gather  (cost=300131.00..300131.41 rows=4 width=64) (actual \ntime=522985.908..522993.606 rows=5 loops=1)\n          Workers Planned: 4\n          Workers Launched: 4\n          ->  Partial Aggregate  (cost=299131.00..299131.01 rows=1 \nwidth=64) (actual time=522726.599..522726.606 rows=1 loops=5)\n                ->  Hash Join  (cost=38559.78..298508.36 rows=124527 \nwidth=33) (actual time=301521.477..522726.592 rows=2 loops=5)\n                      Hash Cond: (ci.role_id = rt.id)\n                      ->  Hash Join  (cost=38558.51..298064.76 \nrows=124527 width=37) (actual time=301521.418..522726.529 rows=2 loops=5)\n                            Hash Cond: (mc.company_type_id = ct.id)\n                            ->  Nested Loop (cost=38557.42..297390.45 \nrows=124527 width=41) (actual time=301521.392..522726.498 rows=2 loops=5)\n                                  ->  Nested Loop \n(cost=38556.98..287632.46 rows=255650 width=29) (actual \ntime=235.183..4596.950 rows=156421 loops=5)\n                                        Join Filter: (t.id = ci.movie_id)\n                                        ->  Parallel Hash Join \n(cost=38556.53..84611.99 rows=162109 width=29) (actual \ntime=234.991..718.934 rows=119250 loops\n=5)\n                                              Hash Cond: (t.id = \nmc.movie_id)\n                                              ->  Parallel Seq Scan on \ntitle t  (cost=0.00..43899.19 rows=435558 width=21) (actual \ntime=0.010..178.332 rows=34\n9806 loops=5)\n                                                    Filter: \n(production_year > 1990)\n                                                    Rows Removed by 
\nFilter: 155856\n                                              ->  Parallel Hash \n(cost=34762.05..34762.05 rows=303558 width=8) (actual \ntime=234.282..234.285 rows=230760 loops\n=5)\n                                                    Buckets: 2097152 \n(originally 1048576)  Batches: 1 (originally 1)  Memory Usage: 69792kB\n                                                    ->  Parallel Hash \nJoin  (cost=5346.12..34762.05 rows=303558 width=8) (actual \ntime=11.846..160.085 rows=230\n760 loops=5)\n                                                          Hash Cond: \n(mc.company_id = cn.id)\n                                                          -> Parallel \nSeq Scan on movie_companies mc  (cost=0.00..27206.55 rows=841655 \nwidth=12) (actual time\n=0.013..40.426 rows=521826 loops=5)\n                                                          -> Parallel \nHash  (cost=4722.92..4722.92 rows=49856 width=4) (actual \ntime=11.658..11.659 rows=16969\n  loops=5)\nBuckets: 131072  Batches: 1  Memory Usage: 4448kB\n->  Parallel Seq Scan on company_name cn  (cost=0.00..4722.92 rows=49856 \nwidth=4) (actual time\n=0.014..8.324 rows=16969 loops=5)\nFilter: ((country_code)::text = '[us]'::text)\nRows Removed by Filter: 30031\n                                        ->  Result Cache \n(cost=0.45..1.65 rows=2 width=12) (actual time=0.019..0.030 rows=1 \nloops=596250)\n                                              Cache Key: mc.movie_id\n                                              Hits: 55970  Misses: \n62602  Evictions: 0  Overflows: 0  Memory Usage: 6824kB\n                                              Worker 0:  Hits: 56042 \nMisses: 63657  Evictions: 0  Overflows: 0  Memory Usage: 6924kB\n                                              Worker 1:  Hits: 56067 \nMisses: 63659  Evictions: 0  Overflows: 0  Memory Usage: 6906kB\n                                              Worker 2:  Hits: 55947 \nMisses: 62171  Evictions: 0  Overflows: 0  Memory Usage: 6767kB\n             
                                 Worker 3:  Hits: 56150 \nMisses: 63985  Evictions: 0  Overflows: 0  Memory Usage: 6945kB\n                                              ->  Index Scan using \ncast_info_movie_id_idx on cast_info ci  (cost=0.44..1.64 rows=2 \nwidth=12) (actual time=0.03\n3..0.053 rows=1 loops=316074)\n                                                    Index Cond: \n(movie_id = mc.movie_id)\n                                                    Filter: \n((note)::text ~~ '%(producer)%'::text)\n                                                    Rows Removed by \nFilter: 25\n                                  ->  Result Cache (cost=0.44..0.59 \nrows=1 width=20) (actual time=3.311..3.311 rows=0 loops=782104)\n                                        Cache Key: ci.person_role_id\n                                        Hits: 5  Misses: 156294 \nEvictions: 0  Overflows: 0  Memory Usage: 9769kB\n                                        Worker 0:  Hits: 0  Misses: \n156768  Evictions: 0  Overflows: 0  Memory Usage: 9799kB\n                                        Worker 1:  Hits: 1  Misses: \n156444  Evictions: 0  Overflows: 0  Memory Usage: 9778kB\n                                        Worker 2:  Hits: 0  Misses: \n156222  Evictions: 0  Overflows: 0  Memory Usage: 9764kB\n                                        Worker 3:  Hits: 0  Misses: \n156370  Evictions: 0  Overflows: 0  Memory Usage: 9774kB\n                                        ->  Index Scan using \nchar_name_pkey on char_name chn  (cost=0.43..0.58 rows=1 width=20) \n(actual time=0.001..0.001 rows\n=0 loops=782098)\n                                              Index Cond: (id = \nci.person_role_id)\n                            ->  Hash  (cost=1.04..1.04 rows=4 width=4) \n(actual time=0.014..0.014 rows=4 loops=5)\n                                  Buckets: 1024  Batches: 1  Memory \nUsage: 9kB\n                                  ->  Seq Scan on company_type ct \n(cost=0.00..1.04 rows=4 width=4) 
(actual time=0.012..0.012 rows=4 loops=5)\n                      ->  Hash  (cost=1.12..1.12 rows=12 width=4) \n(actual time=0.027..0.028 rows=12 loops=5)\n                            Buckets: 1024  Batches: 1  Memory Usage: 9kB\n                            ->  Seq Scan on role_type rt \n(cost=0.00..1.12 rows=12 width=4) (actual time=0.022..0.023 rows=12 loops=5)\n  Planning Time: 2.398 ms\n  Execution Time: 523002.608 ms\n(55 rows)\n\nI attach file with times of query execution.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 10 Dec 2020 19:44:03 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Thanks a lot for testing this patch. It's good to see it run through a\nbenchmark that exercises quite a few join problems.\n\nOn Fri, 11 Dec 2020 at 05:44, Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> For most queries performance is the same, some queries are executed\n> faster but\n> one query is 150 times slower:\n>\n>\n> explain analyze SELECT MIN(chn.name) AS character,\n...\n> Execution Time: 523002.608 ms\n\n> I attach file with times of query execution.\n\nI noticed the time reported in results.csv is exactly the same as the\none in the EXPLAIN ANALYZE above. 
One thing to note there that it\nwould be a bit fairer if the benchmark was testing the execution time\nof the query instead of the time to EXPLAIN ANALYZE.\n\nOne of the reasons that the patch may look less favourable here is\nthat the timing overhead on EXPLAIN ANALYZE increases with additional\nnodes.\n\nIf I just put this to the test by using the tables and query from [1].\n\n# explain (analyze, costs off) select count(*) from hundredk hk inner\n# join lookup l on hk.thousand = l.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (actual time=1891.262..1891.263 rows=1 loops=1)\n -> Nested Loop (actual time=0.312..1318.087 rows=9990000 loops=1)\n -> Seq Scan on hundredk hk (actual time=0.299..15.753\nrows=100000 loops=1)\n -> Result Cache (actual time=0.000..0.004 rows=100 loops=100000)\n Cache Key: hk.thousand\n Hits: 99000 Misses: 1000 Evictions: 0 Overflows: 0\nMemory Usage: 3579kB\n -> Index Only Scan using lookup_a_idx on lookup l\n(actual time=0.003..0.012 rows=100 loops=1000)\n Index Cond: (a = hk.thousand)\n Heap Fetches: 0\n Planning Time: 3.471 ms\n Execution Time: 1891.612 ms\n(11 rows)\n\nYou can see here the query took 1.891 seconds to execute.\n\nSame query without EXPLAIN ANALYZE.\n\npostgres=# \\timing\nTiming is on.\npostgres=# select count(*) from hundredk hk inner\npostgres-# join lookup l on hk.thousand = l.a;\n count\n---------\n 9990000\n(1 row)\n\nTime: 539.449 ms\n\nOr is it more accurate to say it took just 0.539 seconds?\n\nGoing through the same query after disabling; enable_resultcache,\nenable_mergejoin, enable_nestloop, I can generate the following table\nwhich compares the EXPLAIN ANALYZE time to the \\timing on time.\n\npostgres=# select type,ea_time,timing_time, round(ea_time::numeric /\ntiming_time::numeric,3) as ea_overhead from results order by\ntiming_time;\n type | ea_time | timing_time | 
ea_overhead\n----------------+----------+-------------+-------------\n Nest loop + RC | 1891.612 | 539.449 | 3.507\n Merge join | 2411.632 | 1008.991 | 2.390\n Nest loop | 2484.82 | 1049.63 | 2.367\n Hash join | 4969.284 | 3272.424 | 1.519\n\nResult Cache will be hit a bit harder by this problem due to it having\nadditional nodes in the plan. The Hash Join query seems to suffer much\nless from this problem.\n\nHowever, saying that. It's certainly not the entire problem here:\n\nHits: 5 Misses: 156294 Evictions: 0 Overflows: 0 Memory Usage: 9769kB\n\nThe planner must have thought there'd be more hits than that or it\nwouldn't have thought Result Caching would be a good plan. Estimating\nthe cache hit ratio using n_distinct becomes much less reliable when\nthere are joins and filters. A.K.A the real world.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrPcQyQdWERGYWx8J+2DLUNgXu+fOSbQ1UscxrunyXyrQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 11 Dec 2020 11:03:05 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "@cfbot: rebased on 55dc86eca70b1dc18a79c141b3567efed910329d\n\nOn Tue, Dec 08, 2020 at 08:15:52PM +1300, David Rowley wrote:\n> From cfbfb8187f4e8303fe3358b5c909533ee6629efe Mon Sep 17 00:00:00 2001\n> From: \"dgrowley@gmail.com\" <dgrowley@gmail.com>\n> Date: Thu, 2 Jul 2020 16:06:36 +1200\n> Subject: [PATCH v12 1/5] Allow estimate_num_groups() to pass back further\n> details about the estimation\n\n> +#define SELFLAG_USED_DEFAULT\t\t(1 << 0)\t/* Estimation fell back on one\n...\n> +typedef struct EstimationInfo\n> +{\n> +\tint\t\t\tflags;\t\t\t/* Flags, as defined above to mark special\n> +\t\t\t\t\t\t\t\t * properties of the estimation. 
*/\n\nMaybe it should be a bits32 ?\n(Also, according to Michael, some people preferred 0x01 to 1<<0)\n\n> +\t/* Ensure we didn't mess up the tracking somehow */\n> +\tAssert(rcstate->mem_used >= 0);\n\nI think these assertions aren't useful since the type is unsigned:\n+ uint64 mem_used; /* bytes of memory used by cache */ \n\n> +\thash_mem_bytes = get_hash_mem() * 1024L;\n\nI think \"result cache nodes\" should be added here:\n\ndoc/src/sgml/config.sgml- <para>\ndoc/src/sgml/config.sgml- Hash-based operations are generally more sensitive to memory\ndoc/src/sgml/config.sgml- availability than equivalent sort-based operations. The\ndoc/src/sgml/config.sgml- memory available for hash tables is computed by multiplying\ndoc/src/sgml/config.sgml- <varname>work_mem</varname> by\ndoc/src/sgml/config.sgml: <varname>hash_mem_multiplier</varname>. This makes it\ndoc/src/sgml/config.sgml- possible for hash-based operations to use an amount of memory\ndoc/src/sgml/config.sgml- that exceeds the usual <varname>work_mem</varname> base\ndoc/src/sgml/config.sgml- amount.\ndoc/src/sgml/config.sgml- </para>\n\nLanguage fixen follow:\n\n> + * Initialize the hash table to empty.\n\nas empty\n\n> + * prepare_probe_slot\n> + *\t\tPopulate rcstate's probeslot with the values from the tuple stored\n> + *\t\tin 'key'. 
If 'key' is NULL, then perform the population by evalulating\n\nsp: evaluating\n\n> From d9c3f2cab13ec26bbd8d1245be6304c506e1f878 Mon Sep 17 00:00:00 2001\n> From: \"dgrowley@gmail.com\" <dgrowley@gmail.com>\n> Date: Tue, 8 Dec 2020 17:54:04 +1300\n> Subject: [PATCH v12 4/5] Remove code duplication in nodeResultCache.c\n\n> + * cache_check_mem\n> + *\t\tCheck if we've allocate more than our memory budget and, if so, reduce\n\nallocated\n\nXXX: what patch???\n\n> +\t * Set the number of bytes each cache entry should consume in the cache.\n> +\t * To provide us with better estimations on how many cache entries we can\n> +\t * store at once we make a call to the excutor here to ask it what memory\n\nspell: executor\nonce COMMA\n\n> +\t * inappropriate to do so. If we see that this has been done then we'll\ndone COMMA\n\n> +\t * Since we've already estimated the maximum number of entries we can\n> +\t * store at once and know the estimated number of distinct values we'll be\n> +\t * called with, well take this opportunity to set the path's est_entries.\n\nwe'll\n\n> +\t * This will ultimately determine the hash table size that the executor\n> +\t * will use. If we leave this at zero the executor will just choose the\nzero COMMA\n\n> +\t * Set the total_cost accounting for the expected cache hit ratio. We\n> +\t * also add on a cpu_operator_cost to account for a cache lookup. This\n> +\t * will happen regardless of if it's a cache hit or not.\n\n\"whether it's a cache hit or not\"\n\n> +\t * Additionally we charge a cpu_tuple_cost to account for cache lookups,\n> +\t * which we'll do regardless of if it was a cache hit or not.\n\nsame\n\n> + * get_resultcache_path\n> + *\t\tIf possible,.make and return a Result Cache path atop of 'inner_path'.\n\ndotmake\n\n> +SET work_mem TO '64kB';\n> +SET enable_mergejoin TO off;\n> +-- Ensure we get some evitions. 
We're unable to validate the hits and misses\n\nevictions\n\n-- \nJustin", "msg_date": "Wed, 27 Jan 2021 23:43:50 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Thanks for having a look at this.\n\nI've taken most of your suggestions. The things quoted below are just\nthe ones I didn't agree with or didn't understand.\n\nOn Thu, 28 Jan 2021 at 18:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Dec 08, 2020 at 08:15:52PM +1300, David Rowley wrote:\n> > +typedef struct EstimationInfo\n> > +{\n> > + int flags; /* Flags, as defined above to mark special\n> > + * properties of the estimation. */\n>\n> Maybe it should be a bits32 ?\n\nI've changed this to uint32. There are a few examples in the code\nbase of bit flags using int. e.g PlannedStmt.jitFlags and\n_mdfd_getseg()'s \"behavior\" parameter, there are also quite a few\nusing unsigned types.\n\n> (Also, according to Michael, some people preferred 0x01 to 1<<0)\n\nI'd rather keep the (1 << 0). I think that it gets much easier to\nread when we start using more significant bits. Granted the codebase\nhas lots of examples of each. I just picked the one I prefer. If\nthere's some consensus that we switch the bit-shifting to hex\nconstants for other bitflag defines then I'll change it.\n\n> I think \"result cache nodes\" should be added here:\n>\n> doc/src/sgml/config.sgml- <para>\n> doc/src/sgml/config.sgml- Hash-based operations are generally more sensitive to memory\n> doc/src/sgml/config.sgml- availability than equivalent sort-based operations. The\n> doc/src/sgml/config.sgml- memory available for hash tables is computed by multiplying\n> doc/src/sgml/config.sgml- <varname>work_mem</varname> by\n> doc/src/sgml/config.sgml: <varname>hash_mem_multiplier</varname>. 
This makes it\n> doc/src/sgml/config.sgml- possible for hash-based operations to use an amount of memory\n> doc/src/sgml/config.sgml- that exceeds the usual <varname>work_mem</varname> base\n> doc/src/sgml/config.sgml- amount.\n> doc/src/sgml/config.sgml- </para>\n\nI'd say it would be better to mention it in the previous paragraph.\nI've done that. It now looks like:\n\n Hash tables are used in hash joins, hash-based aggregation, result\n cache nodes and hash-based processing of <literal>IN</literal>\n subqueries.\n </para>\n\nLikely setops should be added to that list too, but not by this patch.\n\n> Language fixen follow:\n>\n> > + * Initialize the hash table to empty.\n>\n> as empty\n\nPerhaps, but I've kept the \"to empty\" as it's used in\nnodeRecursiveunion.c and nodeSetOp.c to do the same thing. If you\npropose a patch that gets transaction to change those instances then\nI'll switch this one too.\n\nI'm just in the middle of considering some other changes to the patch\nand will post an updated version once I'm done with that.\n\nDavid\n\n\n", "msg_date": "Fri, 29 Jan 2021 18:41:52 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 11 Dec 2020 at 05:44, Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> I tested the patched version of Postgres on JOBS benchmark:\n>\n> https://github.com/gregrahn/join-order-benchmark\n>\n> For most queries performance is the same, some queries are executed\n> faster but\n> one query is 150 times slower:\n\nI set up my AMD 3990x machine here to run the join order benchmark. I\nused a shared_buffers of 20GB so that all the data would fit in there.\nwork_mem was set to 256MB.\n\nI used imdbpy2sql.py to parse the imdb database files and load the\ndata into PostgreSQL. This seemed to work okay apart from the\nmovie_info_idx table appeared to be missing. 
Many of the 113 join\norder benchmark queries need this table. Without that table, only 71\nof the queries can run. I've not yet investigated why the table was\nnot properly created and loaded.\n\nI performed 5 different sets of tests using master at 9522085a, and\nmaster with the attached series of patches applied.\n\nTests:\n* Test 1 uses the standard setting of 100 for\ndefault_statistics_target and has parallel query disabled.\n* Test 2 again uses 100 for the default_statistics_target but enables\nparallel query.\n* Test 3 increases default_statistics_target to 10000 (then ANALYZE)\nand disables parallel query.\n* Test 4 as test 3 but with parallel query enabled.\n* Test 5 changes the cost model for Result Cache so that instead of\nusing a result cache based on the estimated number of cache hits, the\ncosting is simplified to inject a Result Cache node to a parameterised\nnested loop if the n_distinct estimate of the nested loop parameters\nis less than half the row estimate of the outer plan.\n\nI ran each query using pgbench for 20 seconds.\n\nTest 1:\n\n18 of the 71 queries used a Result Cache node. Overall the runtime of\nthose queries was reduced by 12.5% using v13 when compared to master.\n\nOver each of the 71 queries, the total time to parse/plan/execute each\nof the queries was reduced by 7.95%.\n\nTest 2:\n\nAgain 18 queries used a Result Cache. The speedup was about 2.2% for\njust those 18 and 2.1% over the 71 queries.\n\nTest 3:\n\n9 queries used a Result Cache. The speedup was 3.88% for those 9\nqueries and 0.79% over the 71 queries.\n\nTest 4:\n\n8 of the 71 queries used a Result Cache. The speedup was 4.61% over\nthose 8 queries and 4.53% over the 71 queries.\n\nTest 5:\n\nSaw 15 queries using a Result Cache node. These 15 ran 5.95% faster\nthan master and over all of the 71 queries, the benchmark was 0.32%\nfaster.\n\nI see some of the queries do take quite a bit of effort for the query\nplanner due to the large number of joins. 
Some of the faster to\nexecute queries here took a little longer due to this.\n\nThe reason I increased the statistics targets to 10k was down to the\nfact that I noticed that in test 2 that queries 15c and 15d became\nslower. After checking the n_distinct estimate for the Result Cache\nkey column I found that the estimate was significantly out when\ncompared to the actual n_distinct. Manually correcting the n_distinct\ncaused the planner to move away from using a Result Cache for those\nqueries. However, I thought I'd check if increasing the statistics\ntargets allowed a better n_distinct estimate due to the larger number\nof blocks being sampled. It did.\n\nI've attached a spreadsheet with the results of each of the tests.\n\nThe attached file v13_costing_hacks.patch.txt is the quick and dirty\npatch I put together to run test 5.\n\nDavid", "msg_date": "Wed, 3 Feb 2021 19:51:53 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 3 Feb 2021 at 19:51, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a spreadsheet with the results of each of the tests.\n>\n> The attached file v13_costing_hacks.patch.txt is the quick and dirty\n> patch I put together to run test 5.\n\nI've attached an updated set of patches. I'd forgotten to run make\ncheck-world with the 0005 patch and that was making the CF bot\ncomplain. I'm not intending 0005 for commit in the state that it's\nin, so I've just dropped it.\n\nI've also done some further performance testing with the attached set\nof patched, this time I focused solely on planner performance using\nthe Join Order Benchmark. Some of the queries in this benchmark do\ngive the planner quite a bit of exercise. 
Queries such as 29b take my\n1-year old, fairly powerful AMD hardware about 78 ms to make a plan\nfor.\n\nThe attached spreadsheet shows the details of the results of these\ntests. Skip to the \"Test6 no parallel 100 stats EXPLAIN only\" sheet.\n\nTo get these results I just ran pgbench for 10 seconds on each query\nprefixed with \"EXPLAIN \".\n\nTo summarise here, the planner performance gets a fair bit worse with\nthe patched code. With master, summing the average planning time over\neach of the queries resulted in a total planning time of 765.7 ms.\nAfter patching, that went up to 1097.5 ms. I was pretty disappointed\nabout that.\n\nOn looking into why the performance gets worse, there's a few factors.\nOne factor is that I'm adding a new path to consider and if that path\nsticks around then subsequent joins may consider that path. Changing\nthings around so I only ever add the best path, the time went down to\n1067.4 ms. add_path() does tend to ditch inferior paths anyway, so\nthis may not really be a good thing to do. Another thing that I picked\nup on was the code that checks if a Result Cache Path is legal to use,\nit must check if the inner side of the join has any volatile\nfunctions. If I just comment out those checks, then the total planning\ntime goes down to 985.6 ms. The estimate_num_groups() call that the\ncosting for the ResultCache path must do to estimate the cache hit\nratio is another factor. When replacing that call with a constant\nvalue the total planning time goes down to 905.7 ms.\n\nI can see perhaps ways that the volatile function checks could be\noptimised a bit further, but the other stuff really is needed, so it\nappears if we want this, then it seems like the planner is going to\nbecome slightly slower. That does not exactly fill me with joy. We\ncurrently have enable_partitionwise_aggregate and\nenable_partitionwise_join which are both disabled by default because\nof the possibility of slowing down the planner. 
One option could be\nto make enable_resultcache off by default too. I'm not really liking\nthe idea of that much though since anyone who leaves the setting that\nway won't ever get any gains from caching the inner side of\nparameterised nested loop results.\n\nThe idea I had to speed up the volatile function call checks was along\nsimilar lines to what parallel query does when it looks for parallel\nunsafe functions in the parse. Right now those checks are only done\nunder a few conditions where we think that parallel query might\nactually be used. (See standard_planner()). However, with Result\nCache, those could be used in many other cases too, so we don't really\nhave any means to short circuit those checks. There might be gains to\nbe had by checking the parse once rather than having to call\ncontains_volatile_functions in the various places we do call it. I\nthink both the parallel safety and volatile checks could then be done\nin the same tree traverse. Anyway. I've not done any hacking on this.\nIt's just an idea so far.\n\nDoes anyone have any particular thoughts on the planner slowdown?\n\nDavid", "msg_date": "Tue, 16 Feb 2021 23:15:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Feb 16, 2021 at 6:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 3 Feb 2021 at 19:51, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've attached a spreadsheet with the results of each of the tests.\n> >\n> > The attached file v13_costing_hacks.patch.txt is the quick and dirty\n> > patch I put together to run test 5.\n>\n> I've attached an updated set of patches. I'd forgotten to run make\n> check-world with the 0005 patch and that was making the CF bot\n> complain. 
I'm not intending 0005 for commit in the state that it's\n> in, so I've just dropped it.\n>\n> I've also done some further performance testing with the attached set\n> of patched, this time I focused solely on planner performance using\n> the Join Order Benchmark. Some of the queries in this benchmark do\n> give the planner quite a bit of exercise. Queries such as 29b take my\n> 1-year old, fairly powerful AMD hardware about 78 ms to make a plan\n> for.\n>\n> The attached spreadsheet shows the details of the results of these\n> tests. Skip to the \"Test6 no parallel 100 stats EXPLAIN only\" sheet.\n>\n> To get these results I just ran pgbench for 10 seconds on each query\n> prefixed with \"EXPLAIN \".\n>\n> To summarise here, the planner performance gets a fair bit worse with\n> the patched code. With master, summing the average planning time over\n> each of the queries resulted in a total planning time of 765.7 ms.\n> After patching, that went up to 1097.5 ms. I was pretty disappointed\n> about that.\n>\n> On looking into why the performance gets worse, there's a few factors.\n> One factor is that I'm adding a new path to consider and if that path\n> sticks around then subsequent joins may consider that path. Changing\n> things around so I only ever add the best path, the time went down to\n> 1067.4 ms. add_path() does tend to ditch inferior paths anyway, so\n> this may not really be a good thing to do. Another thing that I picked\n> up on was the code that checks if a Result Cache Path is legal to use,\n> it must check if the inner side of the join has any volatile\n> functions. If I just comment out those checks, then the total planning\n> time goes down to 985.6 ms. The estimate_num_groups() call that the\n> costing for the ResultCache path must do to estimate the cache hit\n> ratio is another factor. 
When replacing that call with a constant\n> value the total planning time goes down to 905.7 ms.\n>\n> I can see perhaps ways that the volatile function checks could be\n> optimised a bit further, but the other stuff really is needed, so it\n> appears if we want this, then it seems like the planner is going to\n> become slightly slower. That does not exactly fill me with joy. We\n> currently have enable_partitionwise_aggregate and\n> enable_partitionwise_join which are both disabled by default because\n> of the possibility of slowing down the planner. One option could be\n> to make enable_resultcache off by default too. I'm not really liking\n> the idea of that much though since anyone who leaves the setting that\n> way won't ever get any gains from caching the inner side of\n> parameterised nested loop results.\n>\n> The idea I had to speed up the volatile function call checks was along\n> similar lines to what parallel query does when it looks for parallel\n> unsafe functions in the parse. Right now those checks are only done\n> under a few conditions where we think that parallel query might\n> actually be used. (See standard_planner()). However, with Result\n> Cache, those could be used in many other cases too, so we don't really\n> have any means to short circuit those checks. There might be gains to\n> be had by checking the parse once rather than having to call\n> contains_volatile_functions in the various places we do call it. I\n> think both the parallel safety and volatile checks could then be done\n> in the same tree traverse. Anyway. I've not done any hacking on this.\n> It's just an idea so far.\n>\n>\n\n> Does anyone have any particular thoughts on the planner slowdown?\n>\n\nI used the same JOB test case and testing with 19c.sql, I can get a similar\nresult with you (There are huge differences between master and v14). I\nthink the reason is we are trying the result cache path on a very hot line (\nnest loop inner path), so the cost will be huge. 
I see\nget_resultcache_path\nhas some fastpath to not create_resultcache_path, but the limitation looks\ntoo broad. The below is a small adding on it, the planing time can be\nreduced from 79ms to 52ms for 19c.sql in my hardware.\n\n+ /*\n+ * If the inner path is cheap enough, no bother to try the result\n+ * cache path. 20 is just an arbitrary value. This may reduce some\n+ * planning time.\n+ */\n+ if (inner_path->total_cost < 20)\n+ return NULL;\n\n> I used imdbpy2sql.py to parse the imdb database files and load the\n> data into PostgreSQL. This seemed to work okay apart from the\n> movie_info_idx table appeared to be missing. Many of the 113 join\n> order benchmark queries need this table.\n\nI followed the steps in [1] and changed something with the attached patch.\nAt last I got 2367725 rows. But probably you are running into a different\nproblem since no change is for movie_info_idx table.\n\n[1] https://github.com/gregrahn/join-order-benchmark\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 21 Feb 2021 18:11:39 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:15:51PM +1300, David Rowley wrote:\n> To summarise here, the planner performance gets a fair bit worse with\n> the patched code. With master, summing the average planning time over\n> each of the queries resulted in a total planning time of 765.7 ms.\n> After patching, that went up to 1097.5 ms. I was pretty disappointed\n> about that.\n\nI have a couple ideas;\n\n - default enable_resultcache=off seems okay. In plenty of cases, planning\n time is unimportant. This is the \"low bar\" - if we can do better and enable\n it by default, that's great.\n\n - Maybe this should be integrated into nestloop rather than being a separate\n plan node. 
That means that it could be dynamically enabled during\n execution, maybe after a few loops or after checking that there's at least\n some minimal number of repeated keys and cache hits. cost_nestloop would\n consider whether to use a result cache or not, and explain would show the\n cache stats as a part of nested loop. In this case, I propose there'd still\n be a GUC to disable it.\n\n - Maybe cost_resultcache() can be split into initial_cost and final_cost\n parts, same as for nestloop ? I'm not sure how it'd work, since\n initial_cost is supposed to return a lower bound, and resultcache tries to\n make things cheaper. initial_cost would just add some operator/tuple costs\n to make sure that resultcache of a unique scan is more expensive than\n nestloop alone. estimate_num_groups is at least O(n) WRT\n rcpath->param_exprs, so maybe you charge 100*list_length(param_exprs) *\n cpu_operator_cost in initial_cost and then call estimate_num_groups in\n final_cost. We'd be estimating the cost of estimating the cost...\n\n - Maybe an initial implementation of this would only add a result cache if the\n best plan was already going to use a nested loop, even though a cached\n nested loop might be cheaper than other plans. 
This would avoid most\n planner costs, and give improved performance at execution time, but leaves\n something \"on the table\" for the future.\n\n> +cost_resultcache_rescan(PlannerInfo *root, ResultCachePath *rcpath,\n> +\t\t\tCost *rescan_startup_cost, Cost *rescan_total_cost)\n> +{\n> +\tdouble\t\ttuples = rcpath->subpath->rows;\n> +\tdouble\t\tcalls = rcpath->calls;\n...\n> +\t/* estimate on the distinct number of parameter values */\n> +\tndistinct = estimate_num_groups(root, rcpath->param_exprs, calls, NULL,\n> +\t\t\t\t\t&estinfo);\n\nShouldn't this pass \"tuples\" and not \"calls\" ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Feb 2021 19:21:33 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, Feb 22, 2021 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Feb 16, 2021 at 11:15:51PM +1300, David Rowley wrote:\n> > To summarise here, the planner performance gets a fair bit worse with\n> > the patched code. With master, summing the average planning time over\n> > each of the queries resulted in a total planning time of 765.7 ms.\n> > After patching, that went up to 1097.5 ms. I was pretty disappointed\n> > about that.\n>\n> I have a couple ideas;\n>\n> - default enable_resultcache=off seems okay. In plenty of cases, planning\n> time is unimportant. This is the \"low bar\" - if we can do better and\n> enable\n> it by default, that's great.\n>\n> - Maybe this should be integrated into nestloop rather than being a\n> separate\n> plan node. That means that it could be dynamically enabled during\n> execution, maybe after a few loops or after checking that there's at\n> least\n> some minimal number of repeated keys and cache hits. cost_nestloop\n> would\n> consider whether to use a result cache or not, and explain would show\n> the\n> cache stats as a part of nested loop.\n\n\n+1 for this idea now.. 
I am always confused why there is no such node in\nOracle\neven if it is so aggressive to do performance improvement and this function\nlooks very promising. After realizing the costs in planner, I think\nplanning time\nmight be an answer (BTW, I am still not sure Oracle did this).\n\nIn this case, I propose there'd still\n> be a GUC to disable it.\n>\n> - Maybe cost_resultcache() can be split into initial_cost and final_cost\n> parts, same as for nestloop ? I'm not sure how it'd work, since\n> initial_cost is supposed to return a lower bound, and resultcache tries\n> to\n> make things cheaper. initial_cost would just add some operator/tuple\n> costs\n> to make sure that resultcache of a unique scan is more expensive than\n> nestloop alone. estimate_num_groups is at least O(n) WRT\n> rcpath->param_exprs, so maybe you charge 100*list_length(param_exprs) *\n> cpu_operator_cost in initial_cost and then call estimate_num_groups in\n> final_cost. We'd be estimating the cost of estimating the cost...\n>\n> - Maybe an initial implementation of this would only add a result cache\n> if the\n> best plan was already going to use a nested loop, even though a cached\n> nested loop might be cheaper than other plans. 
This would avoid most\n>    planner costs, and give improved performance at execution time, but\n> leaves\n>    something \"on the table\" for the future.\n>\n> > +cost_resultcache_rescan(PlannerInfo *root, ResultCachePath *rcpath,\n> > +                     Cost *rescan_startup_cost, Cost *rescan_total_cost)\n> > +{\n> > +     double          tuples = rcpath->subpath->rows;\n> > +     double          calls = rcpath->calls;\n> ...\n> > +     /* estimate on the distinct number of parameter values */\n> > +     ndistinct = estimate_num_groups(root, rcpath->param_exprs, calls,\n> NULL,\n> > +                                     &estinfo);\n>\n> Shouldn't this pass \"tuples\" and not \"calls\" ?\n>\n> --\n> Justin\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 23 Feb 2021 09:21:52 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2021-02-16 23:15:51 +1300, David Rowley wrote:\n> There might be gains to be had by checking the parse once rather than\n> having to call contains_volatile_functions in the various places we do\n> call it. I think both the parallel safety and volatile checks could\n> then be done in the same tree traverse. Anyway. I've not done any\n> hacking on this. It's just an idea so far.\n\nISTM that it could be worth to that as part of preprocess_expression() -\nit's a pass that we unconditionally do pretty early, it already computes\nopfuncid, often already fetches the pg_proc entry (cf\nsimplify_function()), etc.\n\nExcept for the annoying issue that that we pervasively use Lists as\nexpressions, I'd argue that we should actually cache \"subtree\nvolatility\" in Expr nodes, similar to the way we use OpExpr.opfuncid\netc. 
That'd allow us to make contain_volatile_functions() very cheap in\nthe majority of cases, but we could still easily invalidate that state\nwhen necessary by setting \"exprhasvolatile\" to unknown (causing the next\ncontain_volatile_functions() to compute it from scratch).\n\nBut since we actually do use Lists as expressions (which do not inherit\nfrom Expr), we'd instead need to pass a new param to\npreprocess_expression() that stores the volatility somewhere in\nPlannerInfo? Seems a bit yucky to manage :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 17:39:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Except for the annoying issue that that we pervasively use Lists as\n> expressions, I'd argue that we should actually cache \"subtree\n> volatility\" in Expr nodes, similar to the way we use OpExpr.opfuncid\n> etc. That'd allow us to make contain_volatile_functions() very cheap\n\n... and completely break changing volatility with ALTER FUNCTION.\nThe case of OpExpr.opfuncid is okay only because we don't provide\na way to switch an operator's underlying function. 
(See also\n9f1255ac8.)\n\nIt'd certainly be desirable to reduce the number of duplicated\nfunction property lookups in the planner, but I'm not convinced\nthat that is a good way to go about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Feb 2021 20:51:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 20:51:17 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Except for the annoying issue that that we pervasively use Lists as\n> > expressions, I'd argue that we should actually cache \"subtree\n> > volatility\" in Expr nodes, similar to the way we use OpExpr.opfuncid\n> > etc. That'd allow us to make contain_volatile_functions() very cheap\n> \n> ... and completely break changing volatility with ALTER FUNCTION.\n> The case of OpExpr.opfuncid is okay only because we don't provide\n> a way to switch an operator's underlying function. (See also\n> 9f1255ac8.)\n\nHm. I was imagining we'd only set it within the planner. If so, I don't\nthink it'd change anything around ALTER FUNCTION.\n\nBut anyway, due to the List* issue, I don't think it's a viable approach\nas-is anyway.\n\nWe could add a wrapper node around \"planner expressions\" that stores\nmetadata about them during planning, without those properties leaking\nover expressions used at other times. E.g. having\npreprocess_expression() return a PlannerExpr that that points to the\nexpression as preprocess_expression returns it today. That'd make it\neasy to cache information like volatility. 
But it also seems\nprohibitively invasive :(.\n\n\n> It'd certainly be desirable to reduce the number of duplicated\n> function property lookups in the planner, but I'm not convinced\n> that that is a good way to go about it.\n\nDo you have suggestions?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 18:01:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> We could add a wrapper node around \"planner expressions\" that stores\n> metadata about them during planning, without those properties leaking\n> over expressions used at other times. E.g. having\n> preprocess_expression() return a PlannerExpr that that points to the\n> expression as preprocess_expression returns it today. That'd make it\n> easy to cache information like volatility. But it also seems\n> prohibitively invasive :(.\n\nI doubt it's that bad. We could cache such info in RestrictInfo\nfor quals, or PathTarget for tlists, without much new notational\noverhead. That doesn't cover everything the planner deals with\nof course, but it would cover enough that you'd be chasing pretty\nsmall returns to worry about more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 00:43:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, Feb 23, 2021 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > We could add a wrapper node around \"planner expressions\" that stores\n> > metadata about them during planning, without those properties leaking\n> > over expressions used at other times. E.g. 
having\n> > preprocess_expression() return a PlannerExpr that that points to the\n> > expression as preprocess_expression returns it today. That'd make it\n> > easy to cache information like volatility. But it also seems\n> > prohibitively invasive :(.\n>\n> I doubt it's that bad.  We could cache such info in RestrictInfo\n> for quals, or PathTarget for tlists, without much new notational\n> overhead.  That doesn't cover everything the planner deals with\n> of course, but it would cover enough that you'd be chasing pretty\n> small returns to worry about more.\n>\n> regards, tom lane\n>\n>\n>\nThis patch set no longer applies\nhttp://cfbot.cputube.org/patch_32_2569.log\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 4 Mar 2021 16:15:51 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 5 Mar 2021 at 00:16, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> This patch set no longer applies\n> http://cfbot.cputube.org/patch_32_2569.log\n>\n> Can we get a rebase?\n\nv14 should still apply. I think the problem is that the CFbot at best\ncan only try and apply the latest .patch files that are on the thread\nin alphabetical order of the filename. The bot is likely just trying\nto apply the unrelated patch that was posted since I posted v14.\n\nI've attached the v14 version again. Hopefully, that'll make the CFbot happy.\n\nI'm also working on another version of the patch with slightly\ndifferent planner code. I hope to reduce the additional planner\noverheads a bit with it. It should arrive here in the next day or two.\n\nDavid", "msg_date": "Thu, 11 Mar 2021 12:17:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Thanks for these suggestions.\n\nOn Mon, 22 Feb 2021 at 14:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Feb 16, 2021 at 11:15:51PM +1300, David Rowley wrote:\n> > To summarise here, the planner performance gets a fair bit worse with\n> > the patched code. With master, summing the average planning time over\n> > each of the queries resulted in a total planning time of 765.7 ms.\n> > After patching, that went up to 1097.5 ms. I was pretty disappointed\n> > about that.\n>\n> I have a couple ideas;\n>\n> - default enable_resultcache=off seems okay. In plenty of cases, planning\n> time is unimportant. 
This is the \"low bar\" - if we can do better and enable\n> it by default, that's great.\n\nI think that's reasonable. Teaching the planner to do new tricks is\nnever going to make the planner produce plans more quickly. When the\nnew planner trick gives us a more optimal plan, then great. When it\ndoes not then it's wasted effort. Giving users the ability to switch\noff the planner's new ability seems like a good way for people who\ncontinually find it the additional effort costs more than it saves\nseems like a good way to keep them happy.\n\n> - Maybe this should be integrated into nestloop rather than being a separate\n> plan node. That means that it could be dynamically enabled during\n> execution, maybe after a few loops or after checking that there's at least\n> some minimal number of repeated keys and cache hits. cost_nestloop would\n> consider whether to use a result cache or not, and explain would show the\n> cache stats as a part of nested loop. In this case, I propose there'd still\n> be a GUC to disable it.\n\nThere was quite a bit of discussion on that topic already on this\nthread. I don't really want to revisit that.\n\nThe main problem with that is that we'd be forced into costing a\nNested loop with a result cache exactly the same as we do for a plain\nnested loop. If we were to lower the cost to account for the cache\nhits then the planner is more likely to choose a nested loop over a\nmerge/hash join. If we then switched the caching off during execution\ndue to low cache hits then that does not magically fix the bad choice\nof join method. The planner may have gone with a Hash Join if it had\nknown the cache hit ratio would be that bad. We'd still be left to\ndeal with the poor performing nested loop. What you'd really want\ninstead of turning the cache off would be to have nested loop ditch\nthe parameter scan and just morph itself into a Hash Join node. 
(I'm\nnot proposing we do that)\n\n> - Maybe cost_resultcache() can be split into initial_cost and final_cost\n> parts, same as for nestloop ? I'm not sure how it'd work, since\n> initial_cost is supposed to return a lower bound, and resultcache tries to\n> make things cheaper. initial_cost would just add some operator/tuple costs\n> to make sure that resultcache of a unique scan is more expensive than\n> nestloop alone. estimate_num_groups is at least O(n) WRT\n> rcpath->param_exprs, so maybe you charge 100*list_length(param_exprs) *\n> cpu_operator_cost in initial_cost and then call estimate_num_groups in\n> final_cost. We'd be estimating the cost of estimating the cost...\n\nThe cost of the Result Cache is pretty dependant on the n_distinct\nestimate. Low numbers of distinct values tend to estimate a high\nnumber of cache hits, whereas large n_distinct values (relative to the\nnumber of outer rows) is not going to estimate a large number of cache\nhits.\n\nI don't think feeding in a fake value would help us here. We'd\nprobably do better if we had a fast way to determine if a given Expr\nis unique. (e.g UniqueKeys patch). Result Cache is never going to be\na win for a parameter that the value is never the same as some\npreviously seen value. This would likely allow us to skip considering\na Result Cache for the majority of OLTP type joins.\n\n> - Maybe an initial implementation of this would only add a result cache if the\n> best plan was already going to use a nested loop, even though a cached\n> nested loop might be cheaper than other plans. 
This would avoid most\n> planner costs, and give improved performance at execution time, but leaves\n> something \"on the table\" for the future.\n>\n> > +cost_resultcache_rescan(PlannerInfo *root, ResultCachePath *rcpath,\n> > + Cost *rescan_startup_cost, Cost *rescan_total_cost)\n> > +{\n> > + double tuples = rcpath->subpath->rows;\n> > + double calls = rcpath->calls;\n> ...\n> > + /* estimate on the distinct number of parameter values */\n> > + ndistinct = estimate_num_groups(root, rcpath->param_exprs, calls, NULL,\n> > + &estinfo);\n>\n> Shouldn't this pass \"tuples\" and not \"calls\" ?\n\nhmm. I don't think so. \"calls\" is the estimated number of outer side\nrows. Here you're asking if the n_distinct estimate is relevant to\nthe inner side rows. It's not. If we expect to be called 1000 times by\nthe outer side of the nested loop, then we need to know our n_distinct\nestimate for those 1000 rows. If the estimate comes back as 10\ndistinct values and we see that we're likely to be able to fit all the\ntuples for those 10 distinct values in the cache, then the hit ratio\nis going to come out at 99%. 10 misses, for the first time each value\nis looked up and the remainder of the 990 calls will be hits. The\nnumber of tuples (and the width of tuples) on the inside of the nested\nloop is only relevant to calculating how many cache entries is likely\nto fit into hash_mem. When we think cache entries will be evicted\nthen that makes the cache hit calculation more complex.\n\nI've tried to explain what's going on in cost_resultcache_rescan() the\nbest I can with comments. I understand it's still pretty hard to\nfollow what's going on. 
I'm open to making it easier to understand if\nyou have suggestions.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Mar 2021 13:31:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 23 Feb 2021 at 14:22, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Mon, Feb 22, 2021 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> - Maybe this should be integrated into nestloop rather than being a separate\n>> plan node. That means that it could be dynamically enabled during\n>> execution, maybe after a few loops or after checking that there's at least\n>> some minimal number of repeated keys and cache hits. cost_nestloop would\n>> consider whether to use a result cache or not, and explain would show the\n>> cache stats as a part of nested loop.\n>\n>\n> +1 for this idea now.. I am always confused why there is no such node in Oracle\n> even if it is so aggressive to do performance improvement and this function\n> looks very promising. After realizing the costs in planner, I think planning time\n> might be an answer (BTW, I am still not sure Oracle did this).\n\nIf you're voting for merging Result Cache with Nested Loop and making\nit a single node, then that was already suggested on this thread. I\ndidn't really like the idea and I wasn't alone on that. Tom didn't\nmuch like it either. Never-the-less, I went and coded it and found\nthat it made the whole thing slower.\n\nThere's nothing stopping Result Cache from switching itself off if it\nsees poor cache hit ratios. It can then just become a proxy node,\neffectively doing nothing apart from fetching from its own outer node\nwhen asked for a tuple. 
It does not need to be part of Nested Loop to\nhave that ability.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Mar 2021 13:38:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Tue, 23 Feb 2021 at 18:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I doubt it's that bad. We could cache such info in RestrictInfo\n> for quals, or PathTarget for tlists, without much new notational\n> overhead. That doesn't cover everything the planner deals with\n> of course, but it would cover enough that you'd be chasing pretty\n> small returns to worry about more.\n\nThis seems like a pretty good idea. So I coded it up.\n\nThe 0001 patch adds a has_volatile bool field to RestrictInfo and sets\nit when building the RestrictInfo. I've also added has_volatile_expr\nto PathTarget which is maintained when first building, then adding new\nExprs to the PathTarget. I've modified a series of existing calls to\ncontain_volatile_functions() to check these new fields first. This\nseems pretty good even without the Result Cache patch as it saves a\nfew duplicate checks for volatile functions. For example, both\ncheck_hashjoinable() and check_mergejoinable() call\ncontain_volatile_functions(). Now they just check the has_volatile\nflag after just calling contain_volatile_functions() once per\nRestrictInfo when the RestrictInfo is built.\n\nI tested the performance of just 0001 against master and I did see the\noverall planning and execution time of the join order benchmark query\n29b go from taking 104.8 ms down to 103.7 ms.\n\nFor the Result Cache patch, I've coded it to make use of these new\nfields instead of calling contain_volatile_functions().\n\nI also noticed that I can use the pre-cached\nRestrictInfo->hashjoinoperator field when it's set. 
This will be the\nsame operator as we'd be looking up using lookup_type_cache() anyway.\n\nWith Result Cache we can also cache the tuples from non-equality\njoins, e.g ON t1.x > t2.y, but we still need to look for the hash\nequality operator in that case. I had thoughts that it might be worth\nadding an additional field to RestrictInfo for resultcacheoperator to\nsave having to look it up each time for when hashjoinoperator is not\nset.\n\nWe must still call estimate_num_groups() once each time we create a\nResultCachePath. That's required in order to estimate the cache hits.\nAll other join operators only care about clauselist_selectivity(). The\nselectivity estimates for those are likely to be cached in the\nRestictInfo to save having to do it again next time. There's no\ncaching for estimate_num_groups(). I don't quite see any way to add\ncaching for this, however.\n\nI've attached the updated patches.\n\nIt took v14 144.6 ms to plan and execute query 29b. It takes v15 128.5\nms. Master takes 104.8 ms (see attached graph). The caching has\nimproved the planning performance quite a bit. Thank you for the\nsuggestion.\n\nDavid", "msg_date": "Fri, 12 Mar 2021 14:49:29 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 23 Feb 2021 at 18:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I doubt it's that bad. We could cache such info in RestrictInfo\n>> for quals, or PathTarget for tlists, without much new notational\n>> overhead. That doesn't cover everything the planner deals with\n>> of course, but it would cover enough that you'd be chasing pretty\n>> small returns to worry about more.\n\n> This seems like a pretty good idea. 
So I coded it up.\n\n> The 0001 patch adds a has_volatile bool field to RestrictInfo and sets\n> it when building the RestrictInfo.\n\nI'm -1 on doing it exactly that way, because you're expending\nthe cost of those lookups without certainty that you need the answer.\nI had in mind something more like the way that we cache selectivity\nestimates in RestrictInfo, in which the value is cached when first\ndemanded and then re-used on subsequent checks --- see in\nclause_selectivity_ext, around line 750. You do need a way for the\nfield to have a \"not known yet\" value, but that's not hard. Moreover,\nthis sort of approach can be less invasive than what you did here,\nbecause the caching behavior can be hidden inside\ncontain_volatile_functions, rather than having all the call sites\nknow about it explicitly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Mar 2021 20:59:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 12 Mar 2021 at 14:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > The 0001 patch adds a has_volatile bool field to RestrictInfo and sets\n> > it when building the RestrictInfo.\n>\n> I'm -1 on doing it exactly that way, because you're expending\n> the cost of those lookups without certainty that you need the answer.\n> I had in mind something more like the way that we cache selectivity\n> estimates in RestrictInfo, in which the value is cached when first\n> demanded and then re-used on subsequent checks --- see in\n> clause_selectivity_ext, around line 750. You do need a way for the\n> field to have a \"not known yet\" value, but that's not hard. 
Moreover,\n> this sort of approach can be less invasive than what you did here,\n> because the caching behavior can be hidden inside\n> contain_volatile_functions, rather than having all the call sites\n> know about it explicitly.\n\nI was aware that the selectivity code did things that way. However, I\ndidn't copy it as we have functions like match_opclause_to_indexcol()\nand match_saopclause_to_indexcol() which calls\ncontain_volatile_functions() on just a single operand of an OpExpr.\nWe'd have no chance to cache the volatility property on the first\nlookup since we'd not have the RestrictInfo to set it in. I didn't\nthink that was great, so it led me down the path of setting it always\nrather than on the first volatility lookup.\n\nI had in mind that most RestrictInfos would get tested between\nchecking for hash and merge joinability and index compatibility.\nHowever, I think baserestrictinfos that reference non-indexed columns\nwon't get checked, so the way I've done it will be a bit wasteful like\nyou mention.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Mar 2021 16:01:33 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, 12 Mar 2021 at 14:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 23 Feb 2021 at 18:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I doubt it's that bad. We could cache such info in RestrictInfo\n> >> for quals, or PathTarget for tlists, without much new notational\n> >> overhead. That doesn't cover everything the planner deals with\n> >> of course, but it would cover enough that you'd be chasing pretty\n> >> small returns to worry about more.\n>\n> > This seems like a pretty good idea. 
So I coded it up.\n>\n> > The 0001 patch adds a has_volatile bool field to RestrictInfo and sets\n> > it when building the RestrictInfo.\n>\n> I'm -1 on doing it exactly that way, because you're expending\n> the cost of those lookups without certainty that you need the answer.\n> I had in mind something more like the way that we cache selectivity\n> estimates in RestrictInfo, in which the value is cached when first\n> demanded and then re-used on subsequent checks --- see in\n> clause_selectivity_ext, around line 750. You do need a way for the\n> field to have a \"not known yet\" value, but that's not hard. Moreover,\n> this sort of approach can be less invasive than what you did here,\n> because the caching behavior can be hidden inside\n> contain_volatile_functions, rather than having all the call sites\n> know about it explicitly.\n\nI coded up something more along the lines of what I think you had in\nmind for the 0001 patch.\n\nUpdated patches attached.\n\nDavid", "msg_date": "Mon, 15 Mar 2021 23:57:45 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 15 Mar 2021 at 23:57, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 12 Mar 2021 at 14:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I'm -1 on doing it exactly that way, because you're expending\n> > the cost of those lookups without certainty that you need the answer.\n> > I had in mind something more like the way that we cache selectivity\n> > estimates in RestrictInfo, in which the value is cached when first\n> > demanded and then re-used on subsequent checks --- see in\n> > clause_selectivity_ext, around line 750. You do need a way for the\n> > field to have a \"not known yet\" value, but that's not hard. 
Moreover,\n> > this sort of approach can be less invasive than what you did here,\n> > because the caching behavior can be hidden inside\n> > contain_volatile_functions, rather than having all the call sites\n> > know about it explicitly.\n>\n> I coded up something more along the lines of what I think you had in\n> mind for the 0001 patch.\n\nI've now cleaned up the 0001 patch. I ended up changing a few places\nwhere we pass the RestrictInfo->clause to contain_volatile_functions()\nto instead pass the RestrictInfo itself so that there's a possibility\nof caching the volatility property for a subsequent call.\n\nI also made a pass over the remaining patches and for the 0004 patch,\naside from the name, \"Result Cache\", I think that it's ready to go. We\nshould consider before RC1 if we should have enable_resultcache switch\non or off by default.\n\nDoes anyone care to have a final look at these patches? I'd like to\nstart pushing them fairly soon.\n\nDavid", "msg_date": "Wed, 24 Mar 2021 00:42:18 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 24 Mar 2021 at 00:42, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've now cleaned up the 0001 patch. I ended up changing a few places\n> where we pass the RestrictInfo->clause to contain_volatile_functions()\n> to instead pass the RestrictInfo itself so that there's a possibility\n> of caching the volatility property for a subsequent call.\n>\n> I also made a pass over the remaining patches and for the 0004 patch,\n> aside from the name, \"Result Cache\", I think that it's ready to go. We\n> should consider before RC1 if we should have enable_resultcache switch\n> on or off by default.\n>\n> Does anyone care to have a final look at these patches? 
I'd like to\n> start pushing them fairly soon.\n\nI've now pushed the 0001 patch to cache the volatility of PathTarget\nand RestrictInfo.\n\nI'll be looking at the remaining patches over the next few days.\n\nAttached are a rebased set of patches on top of current master. The\nonly change is to the 0003 patch (was 0004) which had an unstable\nregression test for parallel plan with a Result Cache. I've swapped\nthe unstable test for something that shouldn't fail randomly depending\non if a parallel worker did any work or not.\n\nDavid", "msg_date": "Mon, 29 Mar 2021 15:20:36 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\nFor show_resultcache_info()\n\n+ if (rcstate->shared_info != NULL)\n+ {\n\nThe negated condition can be used with a return. This way, the loop can be\nunindented.\n\n+ * ResultCache nodes are intended to sit above a parameterized node in the\n+ * plan tree in order to cache results from them.\n\nSince the parameterized node is singular, it would be nice if 'them' can be\nexpanded to refer to the source of result cache.\n\n+ rcstate->mem_used -= freed_mem;\n\nShould there be assertion that after the subtraction, mem_used stays\nnon-negative ?\n\n+ if (found && entry->complete)\n+ {\n+ node->stats.cache_hits += 1; /* stats update */\n\nOnce inside the if block, we would return.\n+ else\n+ {\nThe else block can be unindented (dropping else keyword).\n\n+ * return 1 row. XXX is this worth the check?\n+ */\n+ if (unlikely(entry->complete))\n\nSince the check is on a flag (with minimal overhead), it seems the check\ncan be kept, with the question removed.\n\nCheers\n\nOn Sun, Mar 28, 2021 at 7:21 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 24 Mar 2021 at 00:42, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I've now cleaned up the 0001 patch. 
I ended up changing a few places\n> > where we pass the RestrictInfo->clause to contain_volatile_functions()\n> > to instead pass the RestrictInfo itself so that there's a possibility\n> > of caching the volatility property for a subsequent call.\n> >\n> > I also made a pass over the remaining patches and for the 0004 patch,\n> > aside from the name, \"Result Cache\", I think that it's ready to go. We\n> > should consider before RC1 if we should have enable_resultcache switch\n> > on or off by default.\n> >\n> > Does anyone care to have a final look at these patches? I'd like to\n> > start pushing them fairly soon.\n>\n> I've now pushed the 0001 patch to cache the volatility of PathTarget\n> and RestrictInfo.\n>\n> I'll be looking at the remaining patches over the next few days.\n>\n> Attached are a rebased set of patches on top of current master. The\n> only change is to the 0003 patch (was 0004) which had an unstable\n> regression test for parallel plan with a Result Cache. I've swapped\n> the unstable test for something that shouldn't fail randomly depending\n> on if a parallel worker did any work or not.\n>\n> David\n>
XXX is this worth the check?+                */+               if (unlikely(entry->complete))Since the check is on a flag (with minimal overhead), it seems the check can be kept, with the question removed.CheersOn Sun, Mar 28, 2021 at 7:21 PM David Rowley <dgrowleyml@gmail.com> wrote:On Wed, 24 Mar 2021 at 00:42, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've now cleaned up the 0001 patch. I ended up changing a few places\n> where we pass the RestrictInfo->clause to contain_volatile_functions()\n> to instead pass the RestrictInfo itself so that there's a possibility\n> of caching the volatility property for a subsequent call.\n>\n> I also made a pass over the remaining patches and for the 0004 patch,\n> aside from the name, \"Result Cache\", I think that it's ready to go. We\n> should consider before RC1 if we should have enable_resultcache switch\n> on or off by default.\n>\n> Does anyone care to have a final look at these patches? I'd like to\n> start pushing them fairly soon.\n\nI've now pushed the 0001 patch to cache the volatility of PathTarget\nand RestrictInfo.\n\nI'll be looking at the remaining patches over the next few days.\n\nAttached are a rebased set of patches on top of current master. The\nonly change is to the 0003 patch (was 0004) which had an unstable\nregression test for parallel plan with a Result Cache.  I've swapped\nthe unstable test for something that shouldn't fail randomly depending\non if a parallel worker did any work or not.\n\nDavid", "msg_date": "Sun, 28 Mar 2021 19:59:43 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Mon, 29 Mar 2021 at 15:56, Zhihong Yu <zyu@yugabyte.com> wrote:\n> For show_resultcache_info()\n>\n> + if (rcstate->shared_info != NULL)\n> + {\n>\n> The negated condition can be used with a return. This way, the loop can be unindented.\n\nOK. 
I change that.\n\n> + * ResultCache nodes are intended to sit above a parameterized node in the\n> + * plan tree in order to cache results from them.\n>\n> Since the parameterized node is singular, it would be nice if 'them' can be expanded to refer to the source of result cache.\n\nI've done a bit of rewording in that paragraph.\n\n> + rcstate->mem_used -= freed_mem;\n>\n> Should there be assertion that after the subtraction, mem_used stays non-negative ?\n\nI'm not sure. I ended up adding one and also adjusting the #ifdef in\nremove_cache_entry() which had some code to validate the memory\naccounting so that it compiles when USE_ASSERT_CHECKING is defined.\nI'm unsure if that's a bit too expensive to enable during debugs but I\ndidn't really want to leave the code in there unless it's going to get\nsome exercise on the buildfarm.\n\n> + if (found && entry->complete)\n> + {\n> + node->stats.cache_hits += 1; /* stats update */\n>\n> Once inside the if block, we would return.\n\nOK change.\n\n> + else\n> + {\n> The else block can be unindented (dropping else keyword).\n\nchanged.\n\n> + * return 1 row. XXX is this worth the check?\n> + */\n> + if (unlikely(entry->complete))\n>\n> Since the check is on a flag (with minimal overhead), it seems the check can be kept, with the question removed.\n\nI changed the comment, but I did leave a mention that I'm still not\nsure if it should be an Assert() or an elog.\n\nThe attached patch is an updated version of the Result Cache patch\ncontaining the changes for the things you highlighted plus a few other\nthings.\n\nI pushed the change to simplehash.h and the estimate_num_groups()\nchange earlier, so only 1 patch remaining.\n\nAlso, I noticed the CFBof found another unstable parallel regression\ntest. This was due to some code in show_resultcache_info() which\nskipped parallel workers that appeared to not help out. 
It looks like\non my machine the worker never got a chance to do anything, but on one\nof the CFbot's machines, it did. I ended up changing the EXPLAIN\noutput so that it shows the cache statistics regardless of if the\nworker helped or not.\n\nDavid", "msg_date": "Wed, 31 Mar 2021 00:42:14 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "Hi,\nIn paraminfo_get_equal_hashops(),\n\n+ /* Reject if there are any volatile functions */\n+ if (contain_volatile_functions(expr))\n+ {\n\nYou can move the above code to just ahead of:\n\n+ if (IsA(expr, Var))\n+ var_relids = bms_make_singleton(((Var *) expr)->varno);\n\nThis way, when we return early, var_relids doesn't need to be populated.\n\nCheers\n\nOn Tue, Mar 30, 2021 at 4:42 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 29 Mar 2021 at 15:56, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > For show_resultcache_info()\n> >\n> > + if (rcstate->shared_info != NULL)\n> > + {\n> >\n> > The negated condition can be used with a return. This way, the loop can\n> be unindented.\n>\n> OK. I change that.\n>\n> > + * ResultCache nodes are intended to sit above a parameterized node in\n> the\n> > + * plan tree in order to cache results from them.\n> >\n> > Since the parameterized node is singular, it would be nice if 'them' can\n> be expanded to refer to the source of result cache.\n>\n> I've done a bit of rewording in that paragraph.\n>\n> > + rcstate->mem_used -= freed_mem;\n> >\n> > Should there be assertion that after the subtraction, mem_used stays\n> non-negative ?\n>\n> I'm not sure. 
I ended up adding one and also adjusting the #ifdef in\n> remove_cache_entry() which had some code to validate the memory\n> accounting so that it compiles when USE_ASSERT_CHECKING is defined.\n> I'm unsure if that's a bit too expensive to enable during debugs but I\n> didn't really want to leave the code in there unless it's going to get\n> some exercise on the buildfarm.\n>\n> > + if (found && entry->complete)\n> > + {\n> > + node->stats.cache_hits += 1; /* stats update */\n> >\n> > Once inside the if block, we would return.\n>\n> OK change.\n>\n> > + else\n> > + {\n> > The else block can be unindented (dropping else keyword).\n>\n> changed.\n>\n> > + * return 1 row. XXX is this worth the check?\n> > + */\n> > + if (unlikely(entry->complete))\n> >\n> > Since the check is on a flag (with minimal overhead), it seems the check\n> can be kept, with the question removed.\n>\n> I changed the comment, but I did leave a mention that I'm still not\n> sure if it should be an Assert() or an elog.\n>\n> The attached patch is an updated version of the Result Cache patch\n> containing the changes for the things you highlighted plus a few other\n> things.\n>\n> I pushed the change to simplehash.h and the estimate_num_groups()\n> change earlier, so only 1 patch remaining.\n>\n> Also, I noticed the CFBof found another unstable parallel regression\n> test. This was due to some code in show_resultcache_info() which\n> skipped parallel workers that appeared to not help out. It looks like\n> on my machine the worker never got a chance to do anything, but on one\n> of the CFbot's machines, it did. 
I ended up changing the EXPLAIN\n> output so that it shows the cache statistics regardless of if the\n> worker helped or not.\n>\n> David\n>\n\nHi,In paraminfo_get_equal_hashops(),+       /* Reject if there are any volatile functions */+       if (contain_volatile_functions(expr))+       {You can move the above code to just ahead of:+       if (IsA(expr, Var))+           var_relids = bms_make_singleton(((Var *) expr)->varno);This way, when we return early, var_relids doesn't need to be populated.CheersOn Tue, Mar 30, 2021 at 4:42 AM David Rowley <dgrowleyml@gmail.com> wrote:On Mon, 29 Mar 2021 at 15:56, Zhihong Yu <zyu@yugabyte.com> wrote:\n> For show_resultcache_info()\n>\n> +   if (rcstate->shared_info != NULL)\n> +   {\n>\n> The negated condition can be used with a return. This way, the loop can be unindented.\n\nOK. I change that.\n\n> + * ResultCache nodes are intended to sit above a parameterized node in the\n> + * plan tree in order to cache results from them.\n>\n> Since the parameterized node is singular, it would be nice if 'them' can be expanded to refer to the source of result cache.\n\nI've done a bit of rewording in that paragraph.\n\n> +   rcstate->mem_used -= freed_mem;\n>\n> Should there be assertion that after the subtraction, mem_used stays non-negative ?\n\nI'm not sure.  
I ended up adding one and also adjusting the #ifdef in\nremove_cache_entry() which had some code to validate the memory\naccounting so that it compiles when USE_ASSERT_CHECKING is defined.\nI'm unsure if that's a bit too expensive to enable during debugs but I\ndidn't really want to leave the code in there unless it's going to get\nsome exercise on the buildfarm.\n\n> +               if (found && entry->complete)\n> +               {\n> +                   node->stats.cache_hits += 1;    /* stats update */\n>\n> Once inside the if block, we would return.\n\nOK change.\n\n> +               else\n> +               {\n> The else block can be unindented (dropping else keyword).\n\nchanged.\n\n> +                * return 1 row.  XXX is this worth the check?\n> +                */\n> +               if (unlikely(entry->complete))\n>\n> Since the check is on a flag (with minimal overhead), it seems the check can be kept, with the question removed.\n\nI changed the comment, but I did leave a mention that I'm still not\nsure if it should be an Assert() or an elog.\n\nThe attached patch is an updated version of the Result Cache patch\ncontaining the changes for the things you highlighted plus a few other\nthings.\n\nI pushed the change to simplehash.h and the estimate_num_groups()\nchange earlier, so only 1 patch remaining.\n\nAlso, I noticed the CFBof found another unstable parallel regression\ntest. This was due to some code in show_resultcache_info() which\nskipped parallel workers that appeared to not help out. It looks like\non my machine the worker never got a chance to do anything, but on one\nof the CFbot's machines, it did.  
I ended up changing the EXPLAIN\noutput so that it shows the cache statistics regardless of if the\nworker helped or not.\n\nDavid", "msg_date": "Tue, 30 Mar 2021 09:37:43 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 31 Mar 2021 at 05:34, Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> In paraminfo_get_equal_hashops(),\n>\n> + /* Reject if there are any volatile functions */\n> + if (contain_volatile_functions(expr))\n> + {\n>\n> You can move the above code to just ahead of:\n>\n> + if (IsA(expr, Var))\n> + var_relids = bms_make_singleton(((Var *) expr)->varno);\n>\n> This way, when we return early, var_relids doesn't need to be populated.\n\nThanks for having another look. I did a bit more work in that area\nand removed that code. I dug a little deeper and I can't see any way\nthat a lateral_var on a rel can refer to anything inside the rel. It\nlooks like that code was just a bit over paranoid about that.\n\nI also added some additional caching in RestrictInfo to cache the hash\nequality operator to use for the result cache. This saves checking\nthis each time we consider a join during the join search. In many\ncases we would have used the value cached in\nRestrictInfo.hashjoinoperator, however, for non-equaliy joins, that\nwould have be set to InvalidOid. We can still use Result Cache for\nnon-equality joins.\n\nI've now pushed the main patch.\n\nThere's a couple of things I'm not perfectly happy with:\n\n1. The name. There's a discussion on [1] if anyone wants to talk about that.\n2. For lateral joins, there's no place to cache the hash equality\noperator. Maybe there's some rework to do to add the ability to check\nthings for those like we use RestrictInfo for regular joins.\n3. No ability to cache n_distinct estimates. This must be repeated\neach time we consider a join. 
RestrictInfo allows caching for this to\nspeed up clauselist_selectivity() for other join types.\n\nThere was no consensus reached on the name of the node. \"Tuple Cache\"\nseems like the favourite so far, but there's not been a great deal of\ninput. At least not enough that I was motivated to rename everything.\nPeople will perhaps have more time to consider names during beta.\n\nThank you to everyone who gave input and reviewed this patch. It would\nbe great to get feedback on the performance with real workloads. As\nmentioned in the commit message, there is a danger that it causes\nperformance regressions when n_distinct estimates are significantly\nunderestimated.\n\nI'm off to look at the buildfarm now.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com\n\n\n", "msg_date": "Thu, 1 Apr 2021 12:49:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 1 Apr 2021 at 12:49, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm off to look at the buildfarm now.\n\nWell, it looks like the buildfarm didn't like the patch much. I had to\nrevert the patch.\n\nIt appears I overlooked some details in the EXPLAIN ANALYZE output\nwhen force_parallel_mode = regress is on. To make this work I had to\nchange the EXPLAIN output so that it does not show the main process's\ncache Hit/Miss/Eviction details when there are zero misses. In the\nanimals running force_parallel_mode = regress there was an additional\nline for the parallel worker containing the expected cache\nhits/misses/evictions as well as the one for the main process. The\nmain process was not doing any work. 
I took inspiration from\nshow_sort_info() which does not show the details for the main process\nwhen it did not help with the Sort.\n\nThere was also an issue on florican [1] which appears to be due to\nthat machine being 32-bit. I should have considered that when\nthinking of the cache eviction test. I originally tried to make the\ntest as small as possible by lowering work_mem down to 64kB and only\nusing enough rows to overflow that by a small amount. I think what's\nhappening on florican is that due to all the pointer fields in the\ncache being 32-bits instead of 64-bits that more records fit into the\ncache and there are no evictions. I've scaled that test up a bit now\nto use 1200 rows instead of 800.\n\nThe 32-bit machines also were reporting a different number of exact\nblocks in the bitmap heap scan. I've now just disabled bitmap scans\nfor those tests.\n\nI've attached the updated patch. I'll let the CFbot grab this to\nensure it's happy with it before I go looking to push it again.\n\nDavid\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2021-04-01%2000%3A28%3A12", "msg_date": "Thu, 1 Apr 2021 23:23:05 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "> I've attached the updated patch. 
I'll let the CFbot grab this to ensure it's\r\n> happy with it before I go looking to push it again.\r\n\r\nHi,\r\n\r\nI took a look into the patch and noticed some minor things.\r\n\r\n1.\r\n+\t\tcase T_ResultCache:\r\n+\t\t\tptype = \"ResultCache\";\r\n+\t\t\tsubpath = ((ResultCachePath *) path)->subpath;\r\n+\t\t\tbreak;\r\n \t\tcase T_UniquePath:\r\n \t\t\tptype = \"Unique\";\r\n \t\t\tsubpath = ((UniquePath *) path)->subpath;\r\nshould we use \"case T_ResultCachePath\" here?\r\n\r\n2.\r\nIs it better to add ResultCache's info to \" src/backend/optimizer/README \" ?\r\nSomething like:\r\n NestPath - nested-loop joins\r\n MergePath - merge joins\r\n HashPath - hash joins\r\n+ ResultCachePath - Result cache\r\n\r\nBest regards,\r\nHou zhijie\r\n", "msg_date": "Thu, 1 Apr 2021 10:41:30 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Thu, 1 Apr 2021 at 23:41, houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > I've attached the updated patch. I'll let the CFbot grab this to ensure it's\n> > happy with it before I go looking to push it again.\n>\n> Hi,\n>\n> I took a look into the patch and noticed some minor things.\n>\n> 1.\n> + case T_ResultCache:\n> + ptype = \"ResultCache\";\n> + subpath = ((ResultCachePath *) path)->subpath;\n> + break;\n> case T_UniquePath:\n> ptype = \"Unique\";\n> subpath = ((UniquePath *) path)->subpath;\n> should we use \"case T_ResultCachePath\" here?\n>\n> 2.\n> Is it better to add ResultCache's info to \" src/backend/optimizer/README \" ?\n> Something like:\n> NestPath - nested-loop joins\n> MergePath - merge joins\n> HashPath - hash joins\n> + ResultCachePath - Result cache\n\nThanks for pointing those two things out.\n\nI've pushed the patch again with some updates to EXPLAIN to fix the\nissue from yesterday. 
I also disabled result cache in the\npartition_prune tests as I suspect that the parallel tests there might\njust be a bit too unstable in the buildfarm. The cache\nhit/miss/eviction line might disappear if the main process does not\nget a chance to do any work.\n\nWell, it's now time to watch the buildfarm again...\n\nDavid\n\n\n", "msg_date": "Fri, 2 Apr 2021 14:15:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Fri, Mar 12, 2021 at 8:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thanks for these suggestions.\n>\n> On Mon, 22 Feb 2021 at 14:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Tue, Feb 16, 2021 at 11:15:51PM +1300, David Rowley wrote:\n> > > To summarise here, the planner performance gets a fair bit worse with\n> > > the patched code. With master, summing the average planning time over\n> > > each of the queries resulted in a total planning time of 765.7 ms.\n> > > After patching, that went up to 1097.5 ms. I was pretty disappointed\n> > > about that.\n> >\n> > I have a couple ideas;\n>\n\nI just checked the latest code, looks like we didn't improve this\nsituation except\nthat we introduced a GUC to control it. Am I missing something? I don't\nhave a\nsuggestion though.\n\n\n> > - default enable_resultcache=off seems okay. In plenty of cases,\n> planning\n> > time is unimportant. This is the \"low bar\" - if we can do better and\n> enable\n> > it by default, that's great.\n>\n> I think that's reasonable. Teaching the planner to do new tricks is\n> never going to make the planner produce plans more quickly. When the\n> new planner trick gives us a more optimal plan, then great. When it\n> does not then it's wasted effort. 
Giving users the ability to switch\n> off the planner's new ability seems like a good way for people who\n> continually find it the additional effort costs more than it saves\n> seems like a good way to keep them happy.\n>\n> > - Maybe this should be integrated into nestloop rather than being a\n> separate\n> > plan node. That means that it could be dynamically enabled during\n> > execution, maybe after a few loops or after checking that there's at\n> least\n> > some minimal number of repeated keys and cache hits. cost_nestloop\n> would\n> > consider whether to use a result cache or not, and explain would show\n> the\n> > cache stats as a part of nested loop. In this case, I propose\n> there'd still\n> > be a GUC to disable it.\n>\n> There was quite a bit of discussion on that topic already on this\n> thread. I don't really want to revisit that.\n>\n> The main problem with that is that we'd be forced into costing a\n> Nested loop with a result cache exactly the same as we do for a plain\n> nested loop. If we were to lower the cost to account for the cache\n> hits then the planner is more likely to choose a nested loop over a\n> merge/hash join. If we then switched the caching off during execution\n> due to low cache hits then that does not magically fix the bad choice\n> of join method. The planner may have gone with a Hash Join if it had\n> known the cache hit ratio would be that bad. We'd still be left to\n> deal with the poor performing nested loop. What you'd really want\n> instead of turning the cache off would be to have nested loop ditch\n> the parameter scan and just morph itself into a Hash Join node. (I'm\n> not proposing we do that)\n>\n> > - Maybe cost_resultcache() can be split into initial_cost and final_cost\n> > parts, same as for nestloop ? I'm not sure how it'd work, since\n> > initial_cost is supposed to return a lower bound, and resultcache\n> tries to\n> > make things cheaper. 
initial_cost would just add some operator/tuple\n> costs\n> > to make sure that resultcache of a unique scan is more expensive than\n> > nestloop alone. estimate_num_groups is at least O(n) WRT\n> > rcpath->param_exprs, so maybe you charge 100*list_length(param_exprs)\n> *\n> > cpu_operator_cost in initial_cost and then call estimate_num_groups in\n> > final_cost. We'd be estimating the cost of estimating the cost...\n>\n> The cost of the Result Cache is pretty dependant on the n_distinct\n> estimate. Low numbers of distinct values tend to estimate a high\n> number of cache hits, whereas large n_distinct values (relative to the\n> number of outer rows) is not going to estimate a large number of cache\n> hits.\n>\n> I don't think feeding in a fake value would help us here. We'd\n> probably do better if we had a fast way to determine if a given Expr\n> is unique. (e.g UniqueKeys patch). Result Cache is never going to be\n> a win for a parameter that the value is never the same as some\n> previously seen value. This would likely allow us to skip considering\n> a Result Cache for the majority of OLTP type joins.\n>\n> > - Maybe an initial implementation of this would only add a result cache\n> if the\n> > best plan was already going to use a nested loop, even though a cached\n> > nested loop might be cheaper than other plans. This would avoid most\n> > planner costs, and give improved performance at execution time, but\n> leaves\n> > something \"on the table\" for the future.\n> >\n> > > +cost_resultcache_rescan(PlannerInfo *root, ResultCachePath *rcpath,\n> > > + Cost *rescan_startup_cost, Cost\n> *rescan_total_cost)\n> > > +{\n> > > + double tuples = rcpath->subpath->rows;\n> > > + double calls = rcpath->calls;\n> > ...\n> > > + /* estimate on the distinct number of parameter values */\n> > > + ndistinct = estimate_num_groups(root, rcpath->param_exprs,\n> calls, NULL,\n> > > + &estinfo);\n> >\n> > Shouldn't this pass \"tuples\" and not \"calls\" ?\n>\n> hmm. 
I don't think so. \"calls\" is the estimated number of outer side\n> rows. Here you're asking if the n_distinct estimate is relevant to\n> the inner side rows. It's not. If we expect to be called 1000 times by\n> the outer side of the nested loop, then we need to know our n_distinct\n> estimate for those 1000 rows. If the estimate comes back as 10\n> distinct values and we see that we're likely to be able to fit all the\n> tuples for those 10 distinct values in the cache, then the hit ratio\n> is going to come out at 99%. 10 misses, for the first time each value\n> is looked up and the remainder of the 990 calls will be hits. The\n> number of tuples (and the width of tuples) on the inside of the nested\n> loop is only relevant to calculating how many cache entries is likely\n> to fit into hash_mem. When we think cache entries will be evicted\n> then that makes the cache hit calculation more complex.\n>\n> I've tried to explain what's going on in cost_resultcache_rescan() the\n> best I can with comments. I understand it's still pretty hard to\n> follow what's going on. I'm open to making it easier to understand if\n> you have suggestions.\n>\n> David\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n", "msg_date": "Wed, 26 May 2021 10:19:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" }, { "msg_contents": "On Wed, 26 May 2021 at 14:19, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I just checked the latest code, looks like we didn't improve this situation except\n> that we introduced a GUC to control it. Am I missing something? I don't have a\n> suggestion though.\n\nVarious extra caching was done to help speed it up. We now cache the\nvolatility of RestrictInfo and PathTarget.\n\nI also added caching for the hash function in RestrictInfo so that we\ncould more quickly determine if we can Result Cache or not.\n\nThere's still a bit of caching left that I didn't do. This is around\nlateral_vars. I've nowhere to cache the hash function since that's\njust a list of vars. At the moment we need to check that each time we\nconsider a result cache path. LATERAL joins are a bit less common so\nI didn't think that would be a huge issue. There's always\nenable_resultcache = off for people who cannot tolerate the overhead.\n\nAlso, it's never going to be 100% as fast as it was. We're considering\nanother path that we didn't consider before.\n\nDid you do some performance testing that caused you to bring this topic up?\n\nDavid\n\n\n", "msg_date": "Wed, 26 May 2021 15:43:54 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Hybrid Hash/Nested Loop joins and caching results from subplans" } ]
[ { "msg_contents": "I have implemented the SEARCH and CYCLE clauses.\n\nThis is standard SQL syntax attached to a recursive CTE to compute a \ndepth- or breadth-first ordering and cycle detection, respectively. \nThis is just convenience syntax for what you can already do manually. \nThe original discussion about recursive CTEs briefly mentioned these as \nsomething to do later but then it was never mentioned again.\n\nSQL specifies these in terms of syntactic transformations, and so that's \nhow I have implemented them also, mainly in the rewriter.\n\nI have successfully tested this against examples I found online that \nwere aimed at DB2.\n\nThe contained documentation and the code comment in rewriteHandler.c \nexplain the details.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 20 May 2020 13:46:18 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "SEARCH and CYCLE clauses" }, { "msg_contents": "On 5/20/20 1:46 PM, Peter Eisentraut wrote:\n> I have implemented the SEARCH and CYCLE clauses.\n\nYES!\n\n> This is standard SQL syntax attached to a recursive CTE to compute a\n> depth- or breadth-first ordering and cycle detection, respectively. This\n> is just convenience syntax for what you can already do manually. 
The\n> original discussion about recursive CTEs briefly mentioned these as\n> something to do later but then it was never mentioned again.\n> \n> SQL specifies these in terms of syntactic transformations, and so that's\n> how I have implemented them also, mainly in the rewriter.\n\nI've attempted to do this several times but didn't get anywhere with it.\n I'm looking forward to reviewing this.\n\n(And maybe it will put me on the right path for implementing <unique\npredicate>.)\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 20 May 2020 15:04:20 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 5/20/20 3:04 PM, Vik Fearing wrote:\n\n> I'm looking forward to reviewing this.\nA few quick things I've noticed so far:\n\n1)\nThere are some smart quotes in the comments that should be converted to\nsingle quotes.\n\n\n2)\nThis query is an infinite loop, as expected:\n\n with recursive a as (select 1 as b union all select b from a)\n table a;\n\nBut it becomes an error when you add a cycle clause to it:\n\n with recursive a as (select 1 as b union all table a)\n cycle b set c to true default false using p\n table a;\n\n ERROR: each UNION query must have the same number of columns\n\nThe same error occurs with a search clause.\n\n\n3)\nIf I take the same infinite loop query but replace the TABLE syntax with\na SELECT and add a cycle clause, it's not an infinite loop anymore.\n\n with recursive a as (select 1 as b union all select b from a)\n cycle b set c to true default false using p\n table a;\n\n b | c | p\n---+---+-----------\n 1 | f | {(1)}\n 1 | t | {(1),(1)}\n(2 rows)\n\nWhy does it stop? 
It should still be an infinite loop.\n\n\n4)\nIf I use NULL instead of false, I only get one row back.\n\n with recursive a as (select 1 as b union all select b from a)\n cycle b set c to true default null using p\n table a;\n\n b | c | p\n---+---+-------\n 1 | | {(1)}\n(1 row)\n\n\n5)\nI can set both states to the same value.\n\n with recursive a as (select 1 as b union all select b from a)\n cycle b set c to true default true using p\n table a;\n\n b | c | p\n---+---+-------\n 1 | t | {(1)}\n(1 row)\n\nThis is a direct violation of 7.18 SR 2.b.ii.3 as well as common sense.\n BTW, I applaud your decision to violate the other part of that rule and\nallowing any data type here.\n\n\n6)\nThe same rule as above says that the value and the default value must be\nliterals but not everything that a human might consider a literal is\naccepted. In particular:\n\n with recursive a as (select 1 as b union all select b from a)\n cycle b set c to 1 default -1 using p\n table a;\n\n ERROR: syntax error at or near \"-\"\n\nCan we just accept a full a_expr here instead of AexprConst? 
Both\nDEFAULT and USING are fully reserved keywords.\n\n\n\nThat's all for now; will test more later.\nThanks for working on this!\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 20 May 2020 21:28:04 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2020-05-20 21:28, Vik Fearing wrote:\n> 1)\n> There are some smart quotes in the comments that should be converted to\n> single quotes.\n\nok, fixing that\n\n> 2)\n> This query is an infinite loop, as expected:\n> \n> with recursive a as (select 1 as b union all select b from a)\n> table a;\n> \n> But it becomes an error when you add a cycle clause to it:\n> \n> with recursive a as (select 1 as b union all table a)\n> cycle b set c to true default false using p\n> table a;\n> \n> ERROR: each UNION query must have the same number of columns\n\ntable a expands to select * from a, and if you have a cycle clause, then \na has three columns, but the other branch of the union only has one, so \nthat won't work anymore, will it?\n\n> 3)\n> If I take the same infinite loop query but replace the TABLE syntax with\n> a SELECT and add a cycle clause, it's not an infinite loop anymore.\n> \n> with recursive a as (select 1 as b union all select b from a)\n> cycle b set c to true default false using p\n> table a;\n> \n> b | c | p\n> ---+---+-----------\n> 1 | f | {(1)}\n> 1 | t | {(1),(1)}\n> (2 rows)\n> \n> Why does it stop? 
It should still be an infinite loop.\n\nIf you specify the cycle clause, then the processing will stop if it \nsees the same row more than once, which it did here.\n\n> 4)\n> If I use NULL instead of false, I only get one row back.\n>\n> with recursive a as (select 1 as b union all select b from a)\n> cycle b set c to true default null using p\n> table a;\n>\n> b | c | p\n> ---+---+-------\n> 1 | | {(1)}\n> (1 row)\n\nIf you specify null, then the cycle check expression will always fail, \nso it will abort after the first row. (We should perhaps prohibit \nspecifying null, but see below.)\n\n> 5)\n> I can set both states to the same value.\n>\n> with recursive a as (select 1 as b union all select b from a)\n> cycle b set c to true default true using p\n> table a;\n\n> This is a direct violation of 7.18 SR 2.b.ii.3 as well as common sense.\n> BTW, I applaud your decision to violate the other part of that rule and\n> allowing any data type here.\n>\n>\n> 6)\n> The same rule as above says that the value and the default value must be\n> literals but not everything that a human might consider a literal is\n> accepted. In particular:\n>\n> with recursive a as (select 1 as b union all select b from a)\n> cycle b set c to 1 default -1 using p\n> table a;\n>\n> ERROR: syntax error at or near \"-\"\n>\n> Can we just accept a full a_expr here instead of AexprConst? Both\n> DEFAULT and USING are fully reserved keywords.\n\nThis is something we need to think about. If we want to check at parse \ntime whether the two values are not the same (and perhaps not null), \nthen we either need to restrict the generality of what we can specify, \nor we need to be prepared to do full expression evaluation in the \nparser. A simple and practical way might be to only allow string and \nboolean literal. 
I don't have a strong opinion here.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 11:24:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On Fri, 22 May 2020 at 11:25, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-05-20 21:28, Vik Fearing wrote:\n> > 1)\n> > There are some smart quotes in the comments that should be converted to\n> > single quotes.\n>\n> ok, fixing that\n>\n> > 2)\n> > This query is an infinite loop, as expected:\n> >\n> >    with recursive a as (select 1 as b union all select b from a)\n> >    table a;\n> >\n> > But it becomes an error when you add a cycle clause to it:\n> >\n> >    with recursive a as (select 1 as b union all table a)\n> >      cycle b set c to true default false using p\n> >    table a;\n> >\n> >    ERROR:  each UNION query must have the same number of columns\n>\n> table a expands to select * from a, and if you have a cycle clause, then\n> a has three columns, but the other branch of the union only has one, so\n> that won't work anymore, will it?\n>\n> > 3)\n> > If I take the same infinite loop query but replace the TABLE syntax with\n> > a SELECT and add a cycle clause, it's not an infinite loop anymore.\n> >\n> >    with recursive a as (select 1 as b union all select b from a)\n> >      cycle b set c to true default false using p\n> >    table a;\n> >\n> >   b | c |     p\n> > ---+---+-----------\n> >   1 | f | {(1)}\n> >   1 | t | {(1),(1)}\n> > (2 rows)\n> >\n> > Why does it stop?  
It should still be an infinite loop.\n>\n> If you specify the cycle clause, then the processing will stop if it\n> sees the same row more than once, which it did here.\n>\n> > 4)\n> > If I use NULL instead of false, I only get one row back.\n> >\n> > with recursive a as (select 1 as b union all select b from a)\n> > cycle b set c to true default false using p\n> > table a;\n> >\n> > b | c | p\n> > ---+---+-------\n> > 1 | | {(1)}\n> > (1 row)\n>\n> If you specify null, then the cycle check expression will always fail,\n> so it will abort after the first row. (We should perhaps prohibit\n> specifying null, but see below.)\n>\n> > 5)\n> > I can set both states to the same value.\n> >\n> > with recursive a as (select 1 as b union all select b from a)\n> > cycle b set c to true default true using p\n> > table a;\n>\n> > This is a direct violation of 7.18 SR 2.b.ii.3 as well as common sense.\n> > BTW, I applaud your decision to violate the other part of that rule and\n> > allowing any data type here.\n> >\n> >\n> > 5)\n> > The same rule as above says that the value and the default value must be\n> > literals but not everything that a human might consider a literal is\n> > accepted. In particular:\n> >\n> > with recursive a as (select 1 as b union all select b from a)\n> > cycle b set c to 1 default -1 using p\n> > table a;\n> >\n> > ERROR: syntax error at or near \"-\"\n> >\n> > Can we just accept a full a_expr here instead of AexprConst? Both\n> > DEFAULT and USING are fully reserved keywords.\n>\n> This is something we need to think about. If we want to check at parse\n> time whether the two values are not the same (and perhaps not null),\n> then we either need to restrict the generality of what we can specify,\n> or we need to be prepared to do full expression evaluation in the\n> parser. A simple and practical way might be to only allow string and\n> boolean literal. 
I don't have a strong opinion here.\n>\n\nif you check it in parse time, then you disallow parametrization there.\n\nIs any reason to do this check in parse time?\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Fri, 22 May 2020 11:33:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 5/22/20 11:24 AM, Peter Eisentraut wrote:\n> On 2020-05-20 21:28, Vik Fearing wrote:\n>> 1)\n>> There are some smart quotes in the comments that should be converted to\n>> single quotes.\n> \n> ok, fixing that\n> \n>> 2)\n>> This query is an infinite loop, as expected:\n>>\n>>    with recursive a as (select 1 as b union all select b from a)\n>>    table a;\n>>\n>> But it becomes an error when you add a cycle clause to it:\n>>\n>>    with recursive a as (select 1 as b union all table a)\n>>      cycle b set c to true default false using p\n>>    table a;\n>>\n>>    ERROR:  each UNION query must have the same number of columns\n> \n> table a expands to select * from a, and if you have a cycle clause, then\n> a has three columns, but the other branch of the union only has one, so\n> that won't work anymore, will it?\n\nIt seems there was a copy/paste error here. The first query should have\nbeen the same as the second but without the cycle clause.\n\nIt seems strange to me that adding a <search or cycle clause> would\nbreak a previously working query. I would rather see the * expanded\nbefore adding the new columns. 
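Spelled out, the failure mode under discussion looks like this (a sketch, assuming the clause's columns take part in star expansion as Peter describes):

```sql
WITH RECURSIVE a AS (
    SELECT 1 AS b        -- one column
  UNION ALL
    TABLE a              -- i.e. SELECT * FROM a: now b, c, p — three columns
)
  CYCLE b SET c TO true DEFAULT false USING p
TABLE a;
-- ERROR:  each UNION query must have the same number of columns
```
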
This is a user's opinion, I don't know\nhow hard that would be to implement.\n\n>> 3)\n>> If I take the same infinite loop query but replace the TABLE syntax with\n>> a SELECT and add a cycle clause, it's not an infinite loop anymore.\n>>\n>>    with recursive a as (select 1 as b union all select b from a)\n>>      cycle b set c to true default false using p\n>>    table a;\n>>\n>>   b | c |     p\n>> ---+---+-----------\n>>   1 | f | {(1)}\n>>   1 | t | {(1),(1)}\n>> (2 rows)\n>>\n>> Why does it stop?  It should still be an infinite loop.\n> \n> If you specify the cycle clause, then the processing will stop if it\n> sees the same row more than once, which it did here.\n\nYes, this was a misplaced expectation on my part.\n\n>> 4)\n>> If I use NULL instead of false, I only get one row back.\n>>\n>>    with recursive a as (select 1 as b union all select b from a)\n>>      cycle b set c to true default false using p\n>>    table a;\n>>\n>>   b | c |   p\n>> ---+---+-------\n>>   1 |   | {(1)}\n>> (1 row)\n> \n> If you specify null, then the cycle check expression will always fail,\n> so it will abort after the first row.  (We should perhaps prohibit\n> specifying null, but see below.)\n\nI would rather make the cycle check expression work with null.\n\n>> 5)\n>> I can set both states to the same value.\n>>\n>>    with recursive a as (select 1 as b union all select b from a)\n>>      cycle b set c to true default true using p\n>>    table a;\n> \n>> This is a direct violation of 7.18 SR 2.b.ii.3 as well as common sense.\n>>   BTW, I applaud your decision to violate the other part of that rule and\n>> allowing any data type here.\n>>\n>>\n>> 5)\n>> The same rule as above says that the value and the default value must be\n>> literals but not everything that a human might consider a literal is\n>> accepted.  
In particular:\n>>\n>>    with recursive a as (select 1 as b union all select b from a)\n>>      cycle b set c to 1 default -1 using p\n>>    table a;\n>>\n>>    ERROR:  syntax error at or near \"-\"\n>>\n>> Can we just accept a full a_expr here instead of AexprConst?  Both\n>> DEFAULT and USING are fully reserved keywords.\n> \n> This is something we need to think about.  If we want to check at parse\n> time whether the two values are not the same (and perhaps not null),\n> then we either need to restrict the generality of what we can specify,\n> or we need to be prepared to do full expression evaluation in the\n> parser.  A simple and practical way might be to only allow string and\n> boolean literal.  I don't have a strong opinion here.\n\n\nI'm with Pavel on this one. Why does it have to be a parse-time error?\n Also, I regularly see people write functions as sort of pauper's\nvariables, but a function call isn't allowed here.\n\n----\n\nAnother bug I found. If I try to do your regression test as an\nautonomous query, I get this:\n\n with recursive\n\n graph (f, t, label) as (\n values (1, 2, 'arc 1 -> 2'),\n (1, 3, 'arc 1 -> 3'),\n (2, 3, 'arc 2 -> 3'),\n (1, 4, 'arc 1 -> 4'),\n (4, 5, 'arc 4 -> 5'),\n (5, 1, 'arc 5 -> 1')\n ),\n\n search_graph (f, t, label) as (\n select * from graph g\n union all\n select g.*\n from graph g, search_graph sg\n where g.f = sg.t\n )\n cycle f, t set is_cycle to true default false using path\n\n select * from search_graph;\n\n ERROR: could not find CTE \"graph\"\n\n----\n\nAs an improvement over the spec, I think the vast majority of people\nwill be using simple true/false values. 
Can we make that optional?\n\n CYCLE f, t SET is_cycle USING path\n\nwould be the same as\n\n CYCLE f, t SET is_cycle TO true DEFAULT false USING path\n-- \nVik Fearing\n\n\n", "msg_date": "Fri, 22 May 2020 12:40:12 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2020-05-22 12:40, Vik Fearing wrote:\n>> If you specify null, then the cycle check expression will always fail,\n>> so it will abort after the first row.  (We should perhaps prohibit\n>> specifying null, but see below.)\n> \n> I would rather make the cycle check expression work with null.\n\nIt works correctly AFAICT. What is your expectation?\n\n>> This is something we need to think about.  If we want to check at parse\n>> time whether the two values are not the same (and perhaps not null),\n>> then we either need to restrict the generality of what we can specify,\n>> or we need to be prepared to do full expression evaluation in the\n>> parser.  A simple and practical way might be to only allow string and\n>> boolean literal.  I don't have a strong opinion here.\n> \n> \n> I'm with Pavel on this one. Why does it have to be a parse-time error?\n> Also, I regularly see people write functions as sort of pauper's\n> variables, but a function call isn't allowed here.\n\nIf not parse-time error, at what time do you want to check it?\n\n> As an improvement over the spec, I think the vast majority of people\n> will be using simple true/false values. Can we make that optional?\n> \n> CYCLE f, t SET is_cycle USING path\n> \n> would be the same as\n> \n> CYCLE f, t SET is_cycle TO true DEFAULT false USING path\n\nI was also considering that. It would be an easy change to make.\n\n(Apparently, in DB2 you can omit the USING path clause. 
Not sure how to \nmake that work, however.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 14:32:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "Here is an updated patch that I think fixes all the cases that you \nidentified. (The issue of what kinds of constants or expressions to \naccept for cycle marks has not been touched.) To fix the star expansion \nI had to add a little bit of infrastructure that could also be used as a \nmore general facility \"don't include this column in star expansion\", so \nthis could perhaps use some consideration from a more general angle as well.\n\nMore tests and breakage reports welcome.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 15 Jun 2020 11:49:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "While the larger patch is being considered, I think some simpler and \nseparable pieces could be addressed.\n\nHere is a patch that adjusts the existing cycle detection example and \ntest queries to put the cycle column before the path column. 
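For reference, the reordered version of the documentation's manual cycle-detection query would read roughly as follows (a sketch of the docs example with is_cycle moved before path):

```sql
WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
    SELECT g.id, g.link, g.data, 1,
           false, ARRAY[g.id]
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1,
           g.id = ANY(path),      -- cycle mark, computed against the previous path
           path || g.id           -- then the path is extended
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT is_cycle
)
SELECT * FROM search_graph;
```
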
The CYCLE \nclause puts them in that order, and so if we added that feature that \nwould make the sequence of examples more consistent and easier to follow.\n\n(And while the order of columns has no semantic meaning, for a human \nleft-to-right reader it also makes a bit more sense because the cycle \nflag is computed against the previous path value, so it happens \"before\" \nthe path column.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 3 Jul 2020 09:08:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "Hi\n\nút 22. 9. 2020 v 20:01 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> I have implemented the SEARCH and CYCLE clauses.\n>\n> This is standard SQL syntax attached to a recursive CTE to compute a\n> depth- or breadth-first ordering and cycle detection, respectively.\n> This is just convenience syntax for what you can already do manually.\n> The original discussion about recursive CTEs briefly mentioned these as\n> something to do later but then it was never mentioned again.\n>\n> SQL specifies these in terms of syntactic transformations, and so that's\n> how I have implemented them also, mainly in the rewriter.\n>\n> I have successfully tested this against examples I found online that\n> were aimed at DB2.\n>\n> The contained documentation and the code comment in rewriteHandler.c\n> explain the details.\n>\n\nI am playing with this patch. 
It looks well, but I found some issues\n(example is from attached data.sql)\n\nWITH recursive destinations (departure, arrival, connections, cost) AS\n (SELECT f.departure, f.arrival, 0, price\n FROM flights f\n WHERE f.departure = 'New York'\n UNION ALL\n SELECT r.departure, b.arrival, r.connections + 1,\n r.cost + b.price\n FROM destinations r, flights b\n WHERE r.arrival = b.departure) cycle departure, arrival set\nis_cycle to true default false using path\n\nSELECT *\n FROM destinations ;\n;\n\nThe result is correct. When I tried to use UNION instead UNION ALL, the pg\ncrash\n\nProgram received signal SIGABRT, Aborted.\n0x00007f761338ebc5 in raise () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f761338ebc5 in raise () from /lib64/libc.so.6\n#1 0x00007f76133778a4 in abort () from /lib64/libc.so.6\n#2 0x000000000090e7eb in ExceptionalCondition (conditionName=<optimized\nout>, errorType=<optimized out>, fileName=<optimized out>,\n lineNumber=<optimized out>) at assert.c:67\n#3 0x00000000007205e7 in generate_setop_grouplist\n(targetlist=targetlist@entry=0x7f75fce5d018, op=<optimized out>,\nop=<optimized out>)\n at prepunion.c:1412\n#4 0x00000000007219d0 in generate_recursion_path\n(pTargetList=0x7fff073ee728, refnames_tlist=<optimized out>, root=0xf90bd8,\nsetOp=0xf90840)\n at prepunion.c:502\n#5 plan_set_operations (root=0xf90bd8) at prepunion.c:156\n#6 0x000000000070f79b in grouping_planner (root=0xf90bd8,\ninheritance_update=false, tuple_fraction=<optimized out>) at planner.c:1886\n#7 0x0000000000712ea7 in subquery_planner (glob=<optimized out>,\nparse=<optimized out>, parent_root=<optimized out>, hasRecursion=<optimized\nout>,\n tuple_fraction=0) at planner.c:1015\n#8 0x000000000071a614 in SS_process_ctes (root=0xf7abd8) at subselect.c:952\n#9 0x00000000007125d4 in subquery_planner (glob=glob@entry=0xf8a010,\nparse=parse@entry=0xf6cf20, parent_root=parent_root@entry=0x0,\n hasRecursion=hasRecursion@entry=false,\ntuple_fraction=tuple_fraction@entry=0) at 
planner.c:645\n#10 0x000000000071425b in standard_planner (parse=0xf6cf20,\nquery_string=<optimized out>, cursorOptions=256, boundParams=<optimized\nout>)\n at planner.c:405\n#11 0x00000000007e5f68 in pg_plan_query (querytree=0xf6cf20,\n query_string=query_string@entry=0xea6370 \"WITH recursive destinations\n(departure, arrival, connections, cost) AS \\n (SELECT f.departure,\nf.arrival, 0, price\\n\", ' ' <repeats 12 times>, \"FROM flights f \\n\", ' '\n<repeats 12 times>, \"WHERE f.departure = 'New York' \\n UNION \"...,\n cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0)\nat postgres.c:875\n#12 0x00000000007e6061 in pg_plan_queries (querytrees=0xf8b690,\n query_string=query_string@entry=0xea6370 \"WITH recursive destinations\n(departure, arrival, connections, cost) AS \\n (SELECT f.departure,\nf.arrival, 0, price\\n\", ' ' <repeats 12 times>, \"FROM flights f \\n\", ' '\n<repeats 12 times>, \"WHERE f.departure = 'New York' \\n UNION \"...,\n cursorOptions=cursorOptions@entry=256, boundParams=boundParams@entry=0x0)\nat postgres.c:966\n#13 0x00000000007e63b8 in exec_simple_query (\n query_string=0xea6370 \"WITH recursive destinations (departure, arrival,\nconnections, cost) AS \\n (SELECT f.departure, f.arrival, 0, price\\n\", '\n' <repeats 12 times>, \"FROM flights f \\n\", ' ' <repeats 12 times>, \"WHERE\nf.departure = 'New York' \\n UNION \"...) 
at postgres.c:1158\n#14 0x00000000007e81e4 in PostgresMain (argc=<optimized out>,\nargv=<optimized out>, dbname=<optimized out>, username=<optimized out>) at\npostgres.c:4309\n#15 0x00000000007592b9 in BackendRun (port=0xecaf20) at postmaster.c:4541\n#16 BackendStartup (port=0xecaf20) at postmaster.c:4225\n#17 ServerLoop () at postmaster.c:1742\n#18 0x000000000075a0ed in PostmasterMain (argc=<optimized out>,\nargv=0xea0c90) at postmaster.c:1415\n#19 0x00000000004832ec in main (argc=3, argv=0xea0c90) at main.c:209\n\n\n\nlooks so clause USING in cycle detection is unsupported for DB2 and Oracle\n- the examples from these databases doesn't work on PG without modifications\n\nRegards\n\nPavel\n\n\n\n\n\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Tue, 22 Sep 2020 20:29:33 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "Hi\n\nI found another bug\n\ncreate view xx as WITH recursive destinations (departure, arrival,\nconnections, cost, itinerary) AS\n    (SELECT f.departure, f.arrival, 1, price,\n                 CAST(f.departure || f.arrival AS VARCHAR(2000))\n            FROM flights f\n              WHERE f.departure = 'New York'\n     UNION ALL\n      SELECT r.departure, b.arrival, r.connections + 1 ,\n                r.cost + b.price, CAST(r.itinerary || b.arrival AS\nVARCHAR(2000))\n            FROM destinations r, flights b\n             WHERE r.arrival = b.departure)\n     CYCLE arrival SET cyclic_data TO '1' DEFAULT '0' using path\nSELECT departure, arrival, itinerary, cyclic_data\n         FROM destinations ;\n\npostgres=# select * from xx;\nERROR:  attribute number 6 exceeds number of columns 5\n\nRegards\n\nPavel", "msg_date": "Tue, 22 Sep 2020 20:43:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2020-09-22 20:29, Pavel Stehule wrote:\n> The result is correct. When I tried to use UNION instead UNION ALL, the \n> pg crash\n\nI fixed the crash, but UNION [DISTINCT] won't actually work here because \nrow/record types are not hashable. I'm leaving the partial support in, \nbut I'm documenting it as currently not supported.\n\n> looks so clause USING in cycle detection is unsupported for DB2 and \n> Oracle - the examples from these databases doesn't work on PG without \n> modifications\n\nYeah, the path clause is actually not necessary from a user's \nperspective, but it's required for internal bookkeeping. We could \nperhaps come up with a mechanism to make it invisible coming out of the \nCTE (maybe give the CTE a target list internally), but that seems like a \nseparate project.\n\nThe attached patch fixes the issues you have reported (also the view \nissue from the other email). 
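For context on that internal bookkeeping: the clause is defined as a syntactic transformation, and applied to the flights example from earlier in the thread it comes out roughly as follows (a sketch, not the exact rewriter output — the ROW() wrapping and the NOT is_cycle guard are the machinery the path column exists to support):

```sql
WITH RECURSIVE destinations (departure, arrival, cost, is_cycle, path) AS (
    SELECT f.departure, f.arrival, f.price,
           false, ARRAY[ROW(f.arrival)]
    FROM flights f
    WHERE f.departure = 'New York'
  UNION ALL
    SELECT r.departure, b.arrival, r.cost + b.price,
           ROW(b.arrival) = ANY(r.path),   -- has this arrival been visited already?
           r.path || ROW(b.arrival)        -- extend the path for the next level
    FROM destinations r, flights b
    WHERE r.arrival = b.departure AND NOT r.is_cycle
)
SELECT * FROM destinations;
```
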
I have also moved the whole rewrite \nsupport to a new file to not blow up rewriteHandler.c so much.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 9 Oct 2020 11:40:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "pá 9. 10. 2020 v 11:40 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-09-22 20:29, Pavel Stehule wrote:\n> > The result is correct. When I tried to use UNION instead UNION ALL, the\n> > pg crash\n>\n> I fixed the crash, but UNION [DISTINCT] won't actually work here because\n> row/record types are not hashable. I'm leaving the partial support in,\n> but I'm documenting it as currently not supported.\n>\n\n I think so UNION is a common solution against the cycles. So missing\nsupport for this specific case is not a nice thing. How much work is needed\nfor hashing rows. It should not be too much code.\n\n\n> > looks so clause USING in cycle detection is unsupported for DB2 and\n> > Oracle - the examples from these databases doesn't work on PG without\n> > modifications\n>\n> Yeah, the path clause is actually not necessary from a user's\n> perspective, but it's required for internal bookkeeping. We could\n> perhaps come up with a mechanism to make it invisible coming out of the\n> CTE (maybe give the CTE a target list internally), but that seems like a\n> separate project.\n>\n> The attached patch fixes the issues you have reported (also the view\n> issue from the other email). I have also moved the whole rewrite\n> support to a new file to not blow up rewriteHandler.c so much.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>", "msg_date": "Fri, 9 Oct 2020 12:17:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "Hi\n\npá 9. 10. 2020 v 12:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 9. 10. 2020 v 11:40 odesílatel Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> napsal:\n>\n>> On 2020-09-22 20:29, Pavel Stehule wrote:\n>> > The result is correct. 
When I tried to use UNION instead UNION ALL, the\n>> > pg crash\n>>\n>> I fixed the crash, but UNION [DISTINCT] won't actually work here because\n>> row/record types are not hashable. I'm leaving the partial support in,\n>> but I'm documenting it as currently not supported.\n>>\n>\n> I think so UNION is a common solution against the cycles. So missing\n> support for this specific case is not a nice thing. How much work is needed\n> for hashing rows. It should not be too much code.\n>\n>\n>> > looks so clause USING in cycle detection is unsupported for DB2 and\n>> > Oracle - the examples from these databases doesn't work on PG without\n>> > modifications\n>>\n>> Yeah, the path clause is actually not necessary from a user's\n>> perspective, but it's required for internal bookkeeping. We could\n>> perhaps come up with a mechanism to make it invisible coming out of the\n>> CTE (maybe give the CTE a target list internally), but that seems like a\n>> separate project.\n>>\n>> The attached patch fixes the issues you have reported (also the view\n>> issue from the other email). I have also moved the whole rewrite\n>> support to a new file to not blow up rewriteHandler.c so much.\n>>\n>> --\n>> Peter Eisentraut http://www.2ndQuadrant.com/\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\nThis patch is based on transformation CYCLE and SEARCH clauses to specific\nexpressions - it is in agreement with ANSI SQL\n\nThere is not a problem with compilation\nNobody had objections in discussion\nThere are enough regress tests and documentation\ncheck-world passed\ndoc build passed\n\nI'll mark this patch as ready for committer\n\nPossible enhancing for this feature (can be done in next steps)\n\n1. support UNION DISTINCT\n2. better compatibility with Oracle and DB2 (USING clause can be optional)\n\nRegards\n\nPavel", "msg_date": "Sat, 10 Oct 2020 07:25:25 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On Fri, May 22, 2020 at 5:25 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This is something we need to think about. If we want to check at parse\n> time whether the two values are not the same (and perhaps not null),\n> then we either need to restrict the generality of what we can specify,\n> or we need to be prepared to do full expression evaluation in the\n> parser. A simple and practical way might be to only allow string and\n> boolean literal. I don't have a strong opinion here.\n\nI don't have an opinion on this feature, but I think doing expression\nevaluation in the raw parser would be a pretty bad idea. I think we're\nnot supposed to do anything in the parser that involves catalog access\nor even references to GUC values. It might be OK if it happens in\nparse analysis rather than gram.y, but even that sounds a bit sketchy\nto me. We'd need to think carefully about what effects such things\nwould have on the plan cache, and whether they introduce any security\nholes, and maybe some other things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 12 Oct 2020 16:24:00 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 10.10.2020 08:25, Pavel Stehule wrote:\n> Hi\n>\n> pá 9. 10. 
2020 v 11:40 odesílatel Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com\n> <mailto:peter.eisentraut@2ndquadrant.com>> napsal:\n>\n> On 2020-09-22 20:29, Pavel Stehule wrote:\n> > The result is correct. When I tried to use UNION instead\n> UNION ALL, the\n> > pg crash\n>\n> I fixed the crash, but UNION [DISTINCT] won't actually work\n> here because\n> row/record types are not hashable.  I'm leaving the partial\n> support in,\n> but I'm documenting it as currently not supported.\n>\n>  I think so UNION is a common solution against the cycles. So\n> missing support for this specific case is not a nice thing. How\n> much work is needed for hashing rows. It should not be too much code.\n>\n>\n> > looks so clause USING in cycle detection is unsupported for\n> DB2 and\n> > Oracle - the examples from these databases doesn't work on\n> PG without\n> > modifications\n>\n> Yeah, the path clause is actually not necessary from a user's\n> perspective, but it's required for internal bookkeeping.  We\n> could\n> perhaps come up with a mechanism to make it invisible coming\n> out of the\n> CTE (maybe give the CTE a target list internally), but that\n> seems like a\n> separate project.\n>\n> The attached patch fixes the issues you have reported (also\n> the view\n> issue from the other email).  I have also moved the whole rewrite\n> support to a new file to not blow up rewriteHandler.c so much.\n>\n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training &\n> Services\n>\n>\n> This patch is based on transformation CYCLE and SEARCH clauses to \n> specific expressions - it is in agreement with ANSI SQL\n>\n> There is not a problem with compilation\n> Nobody had objections in discussion\n> There are enough regress tests and documentation\n> check-world passed\n> doc build passed\n>\n> I'll mark this patch as ready for committer\n>\n> Possible enhancing for this feature (can be done in next steps)\n>\n> 1. 
support UNION DISTINCT\n> 2. better compatibility with Oracle and DB2 (USING clause can be optional)\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\nStatus update for a commitfest entry.\nAccording to cfbot patch no longer applies. So I moved it to waiting on \nauthor.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 27 Oct 2020 22:31:19 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2020-10-10 07:25, Pavel Stehule wrote:\n> This patch is based on transformation CYCLE and SEARCH clauses to \n> specific expressions - it is in agreement with ANSI SQL\n> \n> There is not a problem with compilation\n> Nobody had objections in discussion\n> There are enough regress tests and documentation\n> check-world passed\n> doc build passed\n> \n> I'll mark this patch as ready for committer\n> \n> Possible enhancing for this feature (can be done in next steps)\n> \n> 1. support UNION DISTINCT\n> 2. better compatibility with Oracle and DB2 (USING clause can be optional)\n\nHere is an updated patch. 
New since last time:\n\n- UNION DISTINCT is now supported (since hash_record() was added)\n\n- Some code has been cleaned up.\n\n- Some code has been moved from the rewriter to the parser so that \ncertain errors are properly detected at parse time.\n\n- Added more syntax checks and more tests.\n\n- Support for dependency tracking was added (the type and operator for \nthe cycle mark need to be added as dependencies).\n\nI found a bug that nested UNIONs (foo UNION bar UNION baz) were not \nhandled (would crash) in the rewriter code. For now, I have just \nchanged that to error out. This could be fixed, it would be a localized \nchange in the rewriter code in any case. Doesn't seem important for the \nfirst pass, though.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/", "msg_date": "Wed, 25 Nov 2020 14:06:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "st 25. 11. 2020 v 14:06 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-10-10 07:25, Pavel Stehule wrote:\n> > This patch is based on transformation CYCLE and SEARCH clauses to\n> > specific expressions - it is in agreement with ANSI SQL\n> >\n> > There is not a problem with compilation\n> > Nobody had objections in discussion\n> > There are enough regress tests and documentation\n> > check-world passed\n> > doc build passed\n> >\n> > I'll mark this patch as ready for committer\n> >\n> > Possible enhancing for this feature (can be done in next steps)\n> >\n> > 1. support UNION DISTINCT\n> > 2. better compatibility with Oracle and DB2 (USING clause can be\n> optional)\n>\n> Here is an updated patch. 
New since last time:\n>\n> - UNION DISTINCT is now supported (since hash_record() was added)\n>\n> - Some code has been cleaned up.\n>\n> - Some code has been moved from the rewriter to the parser so that\n> certain errors are properly detected at parse time.\n>\n> - Added more syntax checks and more tests.\n>\n> - Support for dependency tracking was added (the type and operator for\n> the cycle mark need to be added as dependencies).\n>\n> I found a bug that nested UNIONs (foo UNION bar UNION baz) were not\n> handled (would crash) in the rewriter code. For now, I have just\n> changed that to error out. This could be fixed, it would be a localized\n> change in the rewriter code in any case. Doesn't seem important for the\n> first pass, though.\n>\n\nI checked this patch, and I didn't find any issue.\n\nmake check-world passed\nmake doc passed\n\nI'll mark it as ready for committer\n\nRegards\n\nPavel\n\n\n\n> --\n> Peter Eisentraut\n> 2ndQuadrant, an EDB company\n> https://www.2ndquadrant.com/\n>\n", "msg_date": "Wed, 25 Nov 2020 20:35:30 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 5/22/20 12:40 PM, Vik Fearing wrote:\n>>> 2)\n>>> This query is an infinite loop, as expected:\n>>>\n>>>    with recursive a as (select 1 as b union all select b from a)\n>>>    table a;\n>>>\n>>> But it becomes an error when you add a cycle clause to it:\n>>>\n>>>    with recursive a as (select 1 as b union all table a)\n>>>      cycle b set c to true default false using p\n>>>    table a;\n>>>\n>>>    ERROR:  each UNION query must have the same number of columns\n>> table a expands to select * from a, and if you have a cycle clause, then\n>> a has three columns, but the other branch of the union only has one, so\n>> that won't work anymore, will it?\n> It seems there was a copy/paste error here. 
The first query should have\n> been the same as the second but without the cycle clause.\n> \n> It seems strange to me that adding a <search or cycle clause> would\n> break a previously working query. I would rather see the * expanded\n> before adding the new columns. This is a user's opinion, I don't know\n> how hard that would be to implement.\n\n\nAfter thinking about it quite a bit more, I have changed my mind on\nthis. The transformation does add columns to the <with list element>\nand so TABLE or SELECT * should see them. Especially since they see\nthem from outside of the wle.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 8 Dec 2020 17:31:09 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 6/15/20 11:49 AM, Peter Eisentraut wrote:\n> To fix the star expansion I had to add a little bit of infrastructure\n> that could also be used as a more general facility \"don't include this\n> column in star expansion\", so this could perhaps use some consideration\n> from a more general angle as well.\n\nCould this work be salvaged to add the ability to ALTER a column to hide\nit from star expansion? That's a feature I've often seen requested,\nespecially from people working with PostGIS's geometry.\n\nTotally off-topic for this thread, though.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 8 Dec 2020 17:33:55 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2020-11-25 20:35, Pavel Stehule wrote:\n> I checked this patch, and I didn't find any issue.\n> \n> make check-world passed\n> make doc passed\n> \n> I'll mark it as ready for committer\n\nThis has been committed. 
Thanks.\n\n-- \nPeter Eisentraut\n2ndQuadrant, an EDB company\nhttps://www.2ndquadrant.com/\n\n\n", "msg_date": "Mon, 1 Feb 2021 19:02:16 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "po 1. 2. 2021 v 19:02 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2020-11-25 20:35, Pavel Stehule wrote:\n> > I checked this patch, and I didn't find any issue.\n> >\n> > make check-world passed\n> > make doc passed\n> >\n> > I'll mark it as ready for committer\n>\n> This has been committed. Thanks.\n>\n\ngreat!\n\nPavel\n\n\n> --\n> Peter Eisentraut\n> 2ndQuadrant, an EDB company\n> https://www.2ndquadrant.com/\n>\n", "msg_date": "Mon, 1 Feb 2021 19:16:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 22.05.20 14:32, Peter Eisentraut wrote:\n>>> As an improvement over the spec, I think the vast majority of people\n>>> will be using simple true/false values.  Can we make that optional?\n>>>\n>>>      CYCLE f, t SET is_cycle USING path\n>>>\n>>> would be the same as\n>>>\n>>>      CYCLE f, t SET is_cycle TO true DEFAULT false USING path\n> \n> I was also considering that.  It would be an easy change to make.\n\nThis change has been accepted into the SQL:202x draft. 
Here is a patch \nfor it.", "msg_date": "Mon, 22 Feb 2021 09:44:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2/22/21 9:44 AM, Peter Eisentraut wrote:\n> On 22.05.20 14:32, Peter Eisentraut wrote:\n>>> As an improvement over the spec, I think the vast majority of people\n>>> will be using simple true/false values.  Can we make that optional?\n>>>\n>>>      CYCLE f, t SET is_cycle USING path\n>>>\n>>> would be the same as\n>>>\n>>>      CYCLE f, t SET is_cycle TO true DEFAULT false USING path\n>>\n>> I was also considering that.  It would be an easy change to make.\n> \n> This change has been accepted into the SQL:202x draft.\n\nYay!\n\n> Here is a patch for it.\n\nThis looks good to me, except that you forgot to add the feature stamp.\n Attached is a small diff to apply on top of your patch to fix that.\n-- \nVik Fearing", "msg_date": "Mon, 22 Feb 2021 11:05:09 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 22.02.21 11:05, Vik Fearing wrote:\n> This looks good to me, except that you forgot to add the feature stamp.\n> Attached is a small diff to apply on top of your patch to fix that.\n\nThe feature code is from SQL:202x, whereas the table is relative to \nSQL:2016. We could add it, but probably with a comment.\n\n\n", "msg_date": "Mon, 22 Feb 2021 13:28:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 2/22/21 1:28 PM, Peter Eisentraut wrote:\n> On 22.02.21 11:05, Vik Fearing wrote:\n>> This looks good to me, except that you forgot to add the feature stamp.\n>>   Attached is a small diff to apply on top of your patch to fix that.\n> \n> The feature code is from SQL:202x, whereas the table is relative to\n> SQL:2016.  
We could add it, but probably with a comment.\n\nOK.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 22 Feb 2021 14:45:45 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: SEARCH and CYCLE clauses" }, { "msg_contents": "On 22.02.21 14:45, Vik Fearing wrote:\n> On 2/22/21 1:28 PM, Peter Eisentraut wrote:\n>> On 22.02.21 11:05, Vik Fearing wrote:\n>>> This looks good to me, except that you forgot to add the feature stamp.\n>>>   Attached is a small diff to apply on top of your patch to fix that.\n>>\n>> The feature code is from SQL:202x, whereas the table is relative to\n>> SQL:2016.  We could add it, but probably with a comment.\n> \n> OK.\n> \n\ndone\n\n\n", "msg_date": "Sat, 27 Feb 2021 08:16:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: SEARCH and CYCLE clauses" } ]
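The CYCLE shorthand the thread above converges on (accepted into the SQL:202x draft and committed for PostgreSQL 14) can be exercised with a short, self-contained example. The `graph` table, its data, and the column names here are illustrative assumptions, not taken from the patch or the thread:

```sql
-- A tiny directed graph with a deliberate cycle: 1 -> 2 -> 3 -> 1.
CREATE TABLE graph (f int, t int);
INSERT INTO graph VALUES (1, 2), (2, 3), (3, 1);

-- Short form adopted into SQL:202x: "SET is_cycle" with no TO/DEFAULT
-- pair is equivalent to "SET is_cycle TO true DEFAULT false".
WITH RECURSIVE search_graph (f, t) AS (
    SELECT g.f, g.t FROM graph g WHERE g.f = 1
  UNION ALL
    SELECT g.f, g.t
    FROM graph g JOIN search_graph sg ON g.f = sg.t
) CYCLE f, t SET is_cycle USING path
SELECT * FROM search_graph;
-- The recursion terminates despite the cycle: once a (f, t) pair
-- repeats along a path, the row is marked is_cycle = true and is not
-- expanded further.
```

This requires PostgreSQL 14 or later; on earlier releases the CYCLE clause is a syntax error, and the cycle check has to be written by hand with an explicit path array.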
[ { "msg_contents": "Commit 911e7020770 added a variety of new support routines to index\nAMs. For example, it added a support function 5 to btree (see\nBTOPTIONS_PROC), but didn't document this alongside the other support\nfunctions in btree.sgml.\n\nIt looks like the new support functions are fundamentally different to\nthe existing ones in that they exist only as a way of supplying\nparameters to other support functions. The idea was to preserve\ncompatibility with the old support function signatures. Even still, I\nthink that the new support functions should get some mention alongside\nthe older support functions.\n\nI also wonder whether or not xindex.sgml needs to be updated to\naccount for opclass parameters.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 20 May 2020 14:36:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Operator class parameters and sgml docs" }, { "msg_contents": "Hi, Peter!\n\nOn Thu, May 21, 2020 at 12:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Commit 911e7020770 added a variety of new support routines to index\n> AMs. For example, it added a support function 5 to btree (see\n> BTOPTIONS_PROC), but didn't document this alongside the other support\n> functions in btree.sgml.\n>\n> It looks like the new support functions are fundamentally different to\n> the existing ones in that they exist only as a way of supplying\n> parameters to other support functions. The idea was to preserve\n> compatibility with the old support function signatures. Even still, I\n> think that the new support functions should get some mention alongside\n> the older support functions.\n>\n> I also wonder whether or not xindex.sgml needs to be updated to\n> account for opclass parameters.\n\nThank you for pointing. 
I'm going to take a look on this in next few days.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 21 May 2020 03:17:31 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Thu, May 21, 2020 at 3:17 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Thu, May 21, 2020 at 12:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Commit 911e7020770 added a variety of new support routines to index\n> > AMs. For example, it added a support function 5 to btree (see\n> > BTOPTIONS_PROC), but didn't document this alongside the other support\n> > functions in btree.sgml.\n> >\n> > It looks like the new support functions are fundamentally different to\n> > the existing ones in that they exist only as a way of supplying\n> > parameters to other support functions. The idea was to preserve\n> > compatibility with the old support function signatures. Even still, I\n> > think that the new support functions should get some mention alongside\n> > the older support functions.\n> >\n> > I also wonder whether or not xindex.sgml needs to be updated to\n> > account for opclass parameters.\n>\n> Thank you for pointing. I'm going to take a look on this in next few days.\n\n\nI'm sorry for the delay. I was very busy with various stuff. 
I'm\ngoing to post docs patch next week.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 28 May 2020 23:02:46 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Thu, May 28, 2020 at 11:02 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Thu, May 21, 2020 at 3:17 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Thu, May 21, 2020 at 12:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > Commit 911e7020770 added a variety of new support routines to index\n> > > AMs. For example, it added a support function 5 to btree (see\n> > > BTOPTIONS_PROC), but didn't document this alongside the other support\n> > > functions in btree.sgml.\n> > >\n> > > It looks like the new support functions are fundamentally different to\n> > > the existing ones in that they exist only as a way of supplying\n> > > parameters to other support functions. The idea was to preserve\n> > > compatibility with the old support function signatures. Even still, I\n> > > think that the new support functions should get some mention alongside\n> > > the older support functions.\n> > >\n> > > I also wonder whether or not xindex.sgml needs to be updated to\n> > > account for opclass parameters.\n> >\n> > Thank you for pointing. I'm going to take a look on this in next few days.\n>\n> I'm sorry for the delay. I was very busy with various stuff. I'm\n> going to post docs patch next week.\n\n\nThank you for patience. The documentation patch is attached. 
I think\nit requires review by native english speaker.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 16 Jun 2020 14:24:05 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Tue, Jun 16, 2020 at 4:24 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you for patience. The documentation patch is attached. I think\n> it requires review by native english speaker.\n\n* \"...paramaters that controls\" should be \"...paramaters that control\".\n\n* \"with set of operator class specific option\" should be \"with a set\nof operator class specific options\".\n\n* \"The options could be accessible from each support function\" should\nbe \"The options can be accessed from other support functions\"\n\n(At least I think that that's what you meant)\n\nIt's very hard to write documentation like this, even for native\nEnglish speakers. I think that it's important to have something in\nplace, though. The GiST example helps a lot.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Jun 2020 16:50:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Wed, Jun 17, 2020 at 2:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Jun 16, 2020 at 4:24 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Thank you for patience. The documentation patch is attached. 
I think\n> > it requires review by native english speaker.\n>\n> * \"...paramaters that controls\" should be \"...paramaters that control\".\n>\n> * \"with set of operator class specific option\" should be \"with a set\n> of operator class specific options\".\n>\n> * \"The options could be accessible from each support function\" should\n> be \"The options can be accessed from other support functions\"\n\nFixed, thanks!\n\n> It's very hard to write documentation like this, even for native\n> English speakers. I think that it's important to have something in\n> place, though. The GiST example helps a lot.\n\nI've added a complete example for defining a set of parameters and\naccessing them from another support function to the GiST\ndocumentation.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 17 Jun 2020 14:00:15 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Wed, Jun 17, 2020 at 2:00 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Jun 17, 2020 at 2:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Tue, Jun 16, 2020 at 4:24 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Thank you for patience. The documentation patch is attached. I think\n> > > it requires review by native english speaker.\n> >\n> > * \"...paramaters that controls\" should be \"...paramaters that control\".\n> >\n> > * \"with set of operator class specific option\" should be \"with a set\n> > of operator class specific options\".\n> >\n> > * \"The options could be accessible from each support function\" should\n> > be \"The options can be accessed from other support functions\"\n>\n> Fixed, thanks!\n>\n> > It's very hard to write documentation like this, even for native\n> > English speakers. 
I think that it's important to have something in\n> > place, though. The GiST example helps a lot.\n>\n> I've added a complete example for defining a set of parameters and\n> accessing them from another support function to the GiST\n> documentation.\n\nI'm going to push this patch if there are no objections. I'm almost\nsure that documentation of opclass options will require further\nadjustments. However, I think the current patch makes it better, not\nworse.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 18 Jun 2020 20:06:41 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Thu, Jun 18, 2020 at 8:06 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Wed, Jun 17, 2020 at 2:00 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Wed, Jun 17, 2020 at 2:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > On Tue, Jun 16, 2020 at 4:24 AM Alexander Korotkov\n> > > <a.korotkov@postgrespro.ru> wrote:\n> > > > Thank you for patience. The documentation patch is attached. I think\n> > > > it requires review by native english speaker.\n> > >\n> > > * \"...paramaters that controls\" should be \"...paramaters that control\".\n> > >\n> > > * \"with set of operator class specific option\" should be \"with a set\n> > > of operator class specific options\".\n> > >\n> > > * \"The options could be accessible from each support function\" should\n> > > be \"The options can be accessed from other support functions\"\n> >\n> > Fixed, thanks!\n> >\n> > > It's very hard to write documentation like this, even for native\n> > > English speakers. I think that it's important to have something in\n> > > place, though. 
The GiST example helps a lot.\n> >\n> > I've added a complete example for defining a set of parameters and\n> > accessing them from another support function to the GiST\n> > documentation.\n>\n> I'm going to push this patch if there are no objections. I'm almost\n> sure that documentation of opclass options will require further\n> adjustments. However, I think the current patch makes it better, not\n> worse.\n\nSo, pushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 20 Jun 2020 13:55:33 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Sat, Jun 20, 2020 at 3:55 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> So, pushed!\n\nNoticed one small thing. You forgot to update this part from the B-Tree docs:\n\n\"As shown in Table 37.9, btree defines one required and three optional\nsupport functions. The four user-defined methods are:\"\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 20 Jun 2020 12:15:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Sat, Jun 20, 2020 at 10:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Jun 20, 2020 at 3:55 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > So, pushed!\n>\n> Noticed one small thing. You forgot to update this part from the B-Tree docs:\n>\n> \"As shown in Table 37.9, btree defines one required and three optional\n> support functions. The four user-defined methods are:\"\n\nThanks! I've also spotted a similar issue in SP-GiST. 
Fix for both is pushed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 21 Jun 2020 00:39:54 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Sat, Jun 20, 2020 at 01:55:33PM +0300, Alexander Korotkov wrote:\n> On Thu, Jun 18, 2020 at 8:06 PM Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> > On Wed, Jun 17, 2020 at 2:00 PM Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> > > On Wed, Jun 17, 2020 at 2:50 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > > On Tue, Jun 16, 2020 at 4:24 AM Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:\n> > > > > Thank you for patience. The documentation patch is attached. I think\n> > > > > it requires review by native english speaker.\n...\n> > > Fixed, thanks!\n> > >\n> > > > It's very hard to write documentation like this, even for native\n> > > > English speakers. I think that it's important to have something in\n> > > > place, though. The GiST example helps a lot.\n...\n> > I'm going to push this patch if there are no objections. I'm almost\n> > sure that documentation of opclass options will require further\n> > adjustments. 
However, I think the current patch makes it better, not\n> > worse.\n> \n> So, pushed!\n\nFind attached some language review of user-facing docs.\n\ndiff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml\nindex d7f1af7819..4c5eeb875f 100644\n--- a/doc/src/sgml/brin.sgml\n+++ b/doc/src/sgml/brin.sgml\n@@ -562,7 +562,7 @@ typedef struct BrinOpcInfo\n </varlistentry>\n </variablelist>\n\n [-Optionally, an-]{+An+} operator class for <acronym>BRIN</acronym> can [-supply-]{+optionally specify+} the\n following method:\n\n <variablelist>\n@@ -570,22 +570,22 @@ typedef struct BrinOpcInfo\n <term><function>void options(local_relopts *relopts)</function></term>\n <listitem>\n <para>\n Defines {+a+} set of user-visible parameters that control operator class\n behavior.\n </para>\n\n <para>\n The <function>options</function> function [-has given-]{+is passed a+} pointer to {+a+}\n <replaceable>local_relopts</replaceable> struct, which needs to be\n filled with a set of operator class specific options. The options\n can be accessed from other support functions using {+the+}\n <literal>PG_HAS_OPCLASS_OPTIONS()</literal> and\n <literal>PG_GET_OPCLASS_OPTIONS()</literal> macros.\n </para>\n\n <para>\n Since both key extraction [-for-]{+of+} indexed [-value-]{+values+} and representation of the\n key in <acronym>GIN</acronym> are flexible, [-it-]{+they+} may [-depends-]{+depend+} on\n user-specified parameters.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/btree.sgml b/doc/src/sgml/btree.sgml\nindex 2c4dd48ea3..b17b166e84 100644\n--- a/doc/src/sgml/btree.sgml\n+++ b/doc/src/sgml/btree.sgml\n@@ -557,7 +557,7 @@ equalimage(<replaceable>opcintype</replaceable> <type>oid</type>) returns bool\n Optionally, a B-tree operator family may provide\n <function>options</function> (<quote>operator class specific\n options</quote>) support functions, registered under support\n function number 5. 
These functions define {+a+} set of user-visible\n parameters that control operator class behavior.\n </para>\n <para>\n@@ -566,19 +566,19 @@ equalimage(<replaceable>opcintype</replaceable> <type>oid</type>) returns bool\n<synopsis>\noptions(<replaceable>relopts</replaceable> <type>local_relopts *</type>) returns void\n</synopsis>\n The function [-has given-]{+is passed a+} pointer to {+a+} <replaceable>local_relopts</replaceable>\n struct, which needs to be filled with a set of operator class\n specific options. The options can be accessed from other support\n functions using {+the+} <literal>PG_HAS_OPCLASS_OPTIONS()</literal> and\n <literal>PG_GET_OPCLASS_OPTIONS()</literal> macros.\n </para>\n <para>\n Currently, no B-Tree operator class has {+an+} <function>options</function>\n support function. B-tree doesn't allow flexible representation of keys\n like GiST, SP-GiST, GIN and BRIN do. So, <function>options</function>\n probably doesn't have much [-usage-]{+application+} in {+the+} current[-shape of-] B-tree index\n access method. Nevertheless, this support function was added to B-tree\n for uniformity, and[-probably it-] will [-found its usage-]{+probably find uses+} during further\n evolution of B-tree in <productname>PostgreSQL</productname>.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/gin.sgml b/doc/src/sgml/gin.sgml\nindex d85e7c8796..7a8c18a449 100644\n--- a/doc/src/sgml/gin.sgml\n+++ b/doc/src/sgml/gin.sgml\n@@ -411,17 +411,17 @@\n </para>\n\n <para>\n The <function>options</function> function [-has given-]{+is passed a+} pointer to {+a+}\n <replaceable>local_relopts</replaceable> struct, which needs to be\n filled with [-s-]{+a+} set of operator class specific options. 
The options\n can be accessed from other support functions using {+the+}\n <literal>PG_HAS_OPCLASS_OPTIONS()</literal> and\n <literal>PG_GET_OPCLASS_OPTIONS()</literal> macros.\n </para>\n\n <para>\n Since both key extraction [-for-]{+of+} indexed [-value-]{+values+} and representation of the\n key in <acronym>GIN</acronym> are flexible, [-it-]{+they+} may [-depends-]{+depend+} on\n user-specified parameters.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/gist.sgml b/doc/src/sgml/gist.sgml\nindex 31c28fdb61..5d970ee9f2 100644\n--- a/doc/src/sgml/gist.sgml\n+++ b/doc/src/sgml/gist.sgml\n@@ -946,7 +946,7 @@ my_fetch(PG_FUNCTION_ARGS)\n <term><function>options</function></term>\n <listitem>\n <para>\n Allows [-defintion-]{+definition+} of user-visible parameters that control operator\n class behavior.\n </para>\n\n@@ -962,16 +962,16 @@ LANGUAGE C STRICT;\n </para>\n\n <para>\n The function [-has given-]{+is passed a+} pointer to {+a+} <replaceable>local_relopts</replaceable>\n struct, which needs to be filled with a set of operator class\n specific options. 
The options can be accessed from other support\n functions using {+the+} <literal>PG_HAS_OPCLASS_OPTIONS()</literal> and\n <literal>PG_GET_OPCLASS_OPTIONS()</literal> macros.\n </para>\n\n <para>\n [-The sample-]{+An example+} implementation of [-my_option()-]{+my_options()+} and parameters [-usage-]\n[- in the another-]{+use+}\n{+ from other+} support [-function-]{+functions+} are given below:\n\n<programlisting>\ntypedef enum MyEnumType\n@@ -990,7 +990,7 @@ typedef struct\n int str_param; /* string parameter */\n} MyOptionsStruct;\n\n/* String [-representations for-]{+representation of+} enum values */\nstatic relopt_enum_elt_def myEnumValues[] =\n{\n {\"on\", MY_ENUM_ON},\n@@ -1002,7 +1002,7 @@ static relopt_enum_elt_def myEnumValues[] =\nstatic char *str_param_default = \"default\";\n\n/*\n * Sample [-validatior:-]{+validator:+} checks that string is not longer than 8 bytes.\n */\nstatic void \nvalidate_my_string_relopt(const char *value)\n@@ -1090,8 +1090,8 @@ my_compress(PG_FUNCTION_ARGS)\n\n <para>\n Since the representation of the key in <acronym>GiST</acronym> is\n flexible, it may [-depends-]{+depend+} on user-specified parameters. For [-instace,-]{+instance,+}\n the length of key signature may be [-such parameter.-]{+specified.+} See\n <literal>gtsvector_options()</literal> for example.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/spgist.sgml b/doc/src/sgml/spgist.sgml\nindex 03f914735b..1395dbaf88 100644\n--- a/doc/src/sgml/spgist.sgml\n+++ b/doc/src/sgml/spgist.sgml\n@@ -895,16 +895,16 @@ LANGUAGE C STRICT;\n </para>\n\n <para>\n The function [-has given-]{+is passed a+} pointer to {+a+} <replaceable>local_relopts</replaceable>\n struct, which needs to be filled with a set of operator class\n specific options. 
The options can be accessed from other support\n functions using {+the+} <literal>PG_HAS_OPCLASS_OPTIONS()</literal> and\n <literal>PG_GET_OPCLASS_OPTIONS()</literal> macros.\n </para>\n\n <para>\n Since the representation of the key in <acronym>SP-GiST</acronym> is\n flexible, it may [-depends-]{+depend+} on user-specified parameters.\n </para>\n </listitem>\n </varlistentry>\ndiff --git a/doc/src/sgml/xindex.sgml b/doc/src/sgml/xindex.sgml\nindex 0e4587a81b..2cfd71b5b7 100644\n--- a/doc/src/sgml/xindex.sgml\n+++ b/doc/src/sgml/xindex.sgml\n@@ -410,9 +410,9 @@\n </para>\n\n <para>\n Additionally, some opclasses allow [-user-]{+users+} to [-set specific parameters,-]{+specify parameters+} which\n [-controls its-]{+control their+} behavior. Each builtin index access method [-have-]{+has an+} optional\n <function>options</function> support function, which defines {+a+} set of\n opclass-specific parameters.\n </para>\n\n@@ -459,7 +459,7 @@\n </row>\n <row>\n <entry>\n Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>5</entry>\n@@ -501,7 +501,7 @@\n </row>\n <row>\n <entry>\n Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>3</entry>\n@@ -584,7 +584,7 @@\n <row>\n <entry><function>options</function></entry>\n <entry>\n Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>10</entry>\n@@ -643,7 +643,7 @@\n <row>\n <entry><function>options</function></entry>\n <entry>\n Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>6</entry>\n@@ -720,7 +720,7 @@\n <row>\n <entry><function>options</function></entry>\n <entry>\n Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>7</entry>\n@@ -778,7 +778,7 @@\n <row>\n <entry><function>options</function></entry>\n <entry>\n 
Defines {+a+} set of options that are specific [-for-]{+to+} this operator class\n (optional)\n </entry>\n <entry>5</entry>", "msg_date": "Sat, 20 Jun 2020 18:21:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "And a couple more in spgist.sgml (some of which were not added by this patch).", "msg_date": "Sat, 20 Jun 2020 18:28:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" }, { "msg_contents": "On Sun, Jun 21, 2020 at 2:28 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> And a couple more in spgist.sgml (some of which were not added by this patch).\n\nPushed, thanks!\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sun, 21 Jun 2020 04:52:59 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Operator class parameters and sgml docs" } ]
[ { "msg_contents": "Hi Forum,\n If I have a cluster with Synchronous replication enabled with three nodes,\nfor eg:\n\n[primary] [hot stand by 1] [host stand by 2]\n\nAnd for some unforeseen reasons, if primary fails, the failover will kick\nin and hot stand by 1 will become new primary and cluster setup will look\nlike this\n\n[new primary (hot stand by1)] [host stand by 2]\n\nMy question here is, what will happen if the original primary which has\nfailed comes back. Will it become part of this high available replica\ncluster automatically or it will be stale and disconnected from the\ncluster?\n\nHow can we automatically make the failed primary to be part of the\ncluster with hot standby role? It would be of great help, if you can direct\nme to any references details. Thank you, upfront.\n\nRegards,\nSanthosh", "msg_date": "Thu, 21 May 2020 10:33:14 +0530", "msg_from": "Santhosh Kumar <krssanthosh@gmail.com>", "msg_from_op": true, "msg_subject": "Behaviour of failed Primary" }, { "msg_contents": "On Thu, May 21, 2020 at 5:38 PM Santhosh Kumar <krssanthosh@gmail.com> wrote:\n>\n> Hi Forum,\n> If I have a cluster with Synchronous replication enabled with three nodes, for eg:\n>\n> [primary] [hot stand by 1] [host stand by 2]\n>\n> And for some unforeseen reasons, if primary fails, the failover will kick in and hot stand by 1 will become new primary and cluster setup will look like this\n>\n> [new primary (hot stand by1)] [host stand by 2]\n>\n> My question here is, what will happen if the original primary which has failed comes back. Will it become part of this high available replica cluster automatically or it will be stale and disconnected from the cluster?\n>\n\nIt won't become standby automatically as it would have diverged from\nthe new master.\n\n> How can we automatically make the failed primary to be part of the cluster with hot standby role? It would be of great help, if you can direct me to any references details. Thank you, upfront.\n>\n\nI think pg_rewind can help in such situations. See the docs of pg_rewind [1].\n\n\n[1] - https://www.postgresql.org/docs/devel/app-pgrewind.html\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 May 2020 20:00:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Behaviour of failed Primary" } ]
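A minimal operational sketch of the pg_rewind approach suggested above. The hostname, data directory path, and connection strings are illustrative assumptions, not taken from the thread; see the pg_rewind documentation for the authoritative procedure:

```
# The failed (old) primary must first be shut down cleanly (assumed data directory).
pg_ctl -D /var/lib/postgresql/data stop -m fast

# Rewind the old primary's data directory to match the new primary's timeline.
pg_rewind --target-pgdata=/var/lib/postgresql/data \
          --source-server='host=new-primary port=5432 user=postgres dbname=postgres'

# Reconfigure the rewound node as a standby (PostgreSQL 12+), then start it.
touch /var/lib/postgresql/data/standby.signal
echo "primary_conninfo = 'host=new-primary port=5432 user=replicator'" >> /var/lib/postgresql/data/postgresql.auto.conf
pg_ctl -D /var/lib/postgresql/data start
```

Note that pg_rewind only works if the cluster was initialized with data checksums or was running with wal_log_hints = on before the failover.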
[ { "msg_contents": "Hi all,\n\nNormally $subject would have been discussed at the developer meeting\nin Ottawa, but that's not going to happen per the current situation.\n\nFor the last couple of years, we have been using the same timeline for\nfor commit fests in a development cycle, so why not going with the\nsame flow this year? This would mean 5 CFs:\n- 2020-07-01~2020-07-31\n- 2020-09-01~2020-09-30\n- 2020-11-01~2020-11-30\n- 2021-01-01~2021-01-31\n- 2021-03-01~2021-03-31\n\nAny thoughts or opinions?\n\nThanks,\n--\nMichael", "msg_date": "Thu, 21 May 2020 15:35:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Schedule of commit fests for PG14" }, { "msg_contents": "On Thu, May 21, 2020 at 8:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> Normally $subject would have been discussed at the developer meeting\n> in Ottawa, but that's not going to happen per the current situation.\n>\n> For the last couple of years, we have been using the same timeline for\n> for commit fests in a development cycle, so why not going with the\n> same flow this year? This would mean 5 CFs:\n> - 2020-07-01~2020-07-31\n> - 2020-09-01~2020-09-30\n> - 2020-11-01~2020-11-30\n> - 2021-01-01~2021-01-31\n> - 2021-03-01~2021-03-31\n>\n> Any thoughts or opinions?\n\n+1, I don't see for now any reason not going with the same planning for pg14.\n\n\n", "msg_date": "Thu, 21 May 2020 08:44:56 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Schedule of commit fests for PG14" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Normally $subject would have been discussed at the developer meeting\n> in Ottawa, but that's not going to happen per the current situation.\n\n> For the last couple of years, we have been using the same timeline for\n> for commit fests in a development cycle, so why not going with the\n> same flow this year? 
This would mean 5 CFs:\n> - 2020-07-01~2020-07-31\n> - 2020-09-01~2020-09-30\n> - 2020-11-01~2020-11-30\n> - 2021-01-01~2021-01-31\n> - 2021-03-01~2021-03-31\n\nYeah, nobody's expressed any great unhappiness with the schedule\nrecently, so let's just do the same thing again this year.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 21 May 2020 10:02:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schedule of commit fests for PG14" }, { "msg_contents": "On 5/21/20 2:35 AM, Michael Paquier wrote:\n> Hi all,\n> \n> Normally $subject would have been discussed at the developer meeting\n> in Ottawa, but that's not going to happen per the current situation.\n> \n> For the last couple of years, we have been using the same timeline for\n> for commit fests in a development cycle, so why not going with the\n> same flow this year? This would mean 5 CFs:\n> - 2020-07-01~2020-07-31\n> - 2020-09-01~2020-09-30\n> - 2020-11-01~2020-11-30\n> - 2021-01-01~2021-01-31\n> - 2021-03-01~2021-03-31\n> \n> Any thoughts or opinions?\n\n+1. This schedule seems to have worked fine the last two years.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 21 May 2020 10:13:41 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: Schedule of commit fests for PG14" }, { "msg_contents": "On Thu, May 21, 2020 at 10:13:41AM -0400, David Steele wrote:\n> +1. This schedule seems to have worked fine the last two years.\n\nSounds like a conclusion to me. I have created four new CFs for the\nnext development cycle then in the CF app.\n--\nMichael", "msg_date": "Tue, 26 May 2020 11:38:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Schedule of commit fests for PG14" } ]
[ { "msg_contents": "Hello hackers!\n\nThis email is about the proposal to implement OLTP-oriented compression on PostgreSQL.\n\nThere are currently some alternative ways to achieve compression, such as using a file system that supports compression.\nHowever, this depends on a specific file system and is not suitable for all deployment environments, and it also increases the complexity of deployment and maintenance.\n\nI hope this compression work can meet the following goals:\n1. In most scenarios, the compressed size of the table can be lower than 50% of the original table\n2. Mainly oriented to the OLTP scenario, the performance impact on the load of frequent reads and writes is relatively small.\n3. Does not rely on special external software or hardware that is difficult to obtain\n4. Friendly to application developers and database managers\n5. The changes required to PostgreSQL are small\n\n\nI have noticed that there has been some discussion or work related to compression before, but they do not meet the above goals.\nFor example:\n1. Use the sparse file[1]\n This is also the implementation method of MySQL 5.7's transparent page compression. However, sparse files may generate a lot of fragmentation inside the file system, and the \"compressed\" data file in sparse files will be restored to their original size after physical backup and restoration, unless our backup and recovery tools also support sparse files.\n2. Use TAM (table access method interface) (pg_cryogen, zedstore) [2] [3]\n Neither storage engine is geared towards OLTP scenarios. It is best to make relatively small modifications to the existing code of the heap engine (mainly by modifying md.c and fd.c).\n\n\nThe methods proposed by Postgres Pro Enterprise CFS [4] and Nikolay P [5] are close to my needs.\nHowever, I would like to mention a slightly different implementation plan, which does not require space reclamation. I hope to get some suggestions.\n\n# Premise assumption\n1. 
Most of the pages in the compressed table can be compressed to less than 50% of the original size\n As long as you use an algorithm with a relatively high compression ratio (such as zlib, zstd), this first point should be easy to meet. Unless the table stores compressed data, such as pictures.\n2. The compression ratio of most pages in the same table is relatively close\n\n\n# Page storage\n\nConfigure 3 files for storing compressed data for each segment of each main fork. The main fork segment file(for example: 123456.2) still exists, but its size is 0.\n\n-Compressed data file (for example: 123456.2.cd)\n Used to store the compressed page. The block size of this file is table level configurable. But it can only be 1/2, 1/4 or 1/8 of BLOCKSZ\n-Compress overflow address file (for example: 123456.2.coa)\n When a page cannot be compressed to less than the size of the compressed block, this file is used to store the address of the overflow block.\n-Compress overflow data file (for example: 123456.2.cod)\n When a page cannot be compressed to less than the size of the compressed block, this file is used to store the overflow block.\n\nThe following is an example when the compressed block size is 4K, which is 1/2 of BLOCKSZ.\n\n## Scenario 1: The compressed size of the original page (including the header of the compressed page) is less than or equal to the compressed block size (4KB)\n\nCompressed data files(123456.2.cd):\n 0 1 2\n +=======+=======+=======+\n | data0 | data1 | data2 | \n +=======+=======+=======+\n->| 4K |<-\n\n\n## Scenario 2: The compressed size of the original page (including the header of the compressed page) is larger than the compressed block size (4KB)\n\nIf the compressed size of the original page (page 3 below) is greater than 4KB, it will not be compressed. 
The first 4KB of the original page is stored in the compressed data file, and the last 4KB is stored in the compress overflow data file.\n\nCompressed data files(123456.2.cd):\n\n 0 1 2 3\n +=======+=======+=======+=========+\n | data0 | data1 | data2 | data3_1 |\n +=======+=======+=======+=========+\n ->| 1st 4K |<-\n\nCompress overflow address file(123456.2.coa):\nThe compress overflow address file stores the block number of the compress overflow block assigned to each block + 1\nThe size of the compressed block and the number of expanded blocks of the compress overflow data file are stored in the head of the compress overflow address file\n\n 0 1 2 3\n +=======+=======+=======+=======+=======+\n | head | | | | 1 |\n +=======+=======+=======+=======+===|===+\n |\n |\nCompress overflow data file: |\n _______________________________|\n |\n 0 | 1 2 3\n +===|=====+=========+==========+=========+\n | data3_2 | | | |\n +=========+=========+==========+=========+\n->| 2nd 4K |<-\n\n\nIf the compressed block size is 1/4 or 1/8 of BLOCKSZ, each block that fails to compress may require multiple compress overflow block storage.\nThe following is an example when the compressed block size is 2K, which is 1/4 of BLOCKSZ.\n\n## Scenario 3: The compressed size of the original page (including the header of the compressed page) is larger than 2KB(compressed page block size) but less than 6KB (BLOCKSZ - compressed page block size )\n\nIn this case, data files will store compressed data, and at least 2KB storage space can be saved.\n\nCompressed data files(123456.2.cd):\n\n 0 1 2 3\n +=======+=======+=======+=========+\n | data0 | data1 | data2 | data3_1 |\n +=======+=======+=======+=========+\n ->| 1st 2K |<-\n\nCompress overflow address file(123456.2.coa):\n\n 0 1 2 3\n +=======+=======+=======+=======+=======+\n | head | | | | 1,2 |\n +=======+=======+=======+=======+===|===+\n |\n |\nCompress overflow data file : |\n _______________________________|\n |\n 0 | 1 2 3\n 
+===|=====+=========+==========+=========+\n | data3_2 | data3_3 | | |\n +=========+=========+==========+=========+\n | 2nd 2K | 3rd 2K | | |\n\n\n## Scenario 4: The compressed size of the original page (including the header of the compressed page) is larger than 6KB (BLOCKSZ - compressed page block size )\n\nIn this case, data files will store the original data (8KB), the same as Scenario 2.\n\nCompressed data files(123456.2.cd):\n\n 0 1 2 3\n +=======+=======+=======+=========+\n | data0 | data1 | data2 | data3_1 |\n +=======+=======+=======+=========+\n ->| 1st 2K |<-\n\nCompress overflow address file(123456.2.coa):\n\n 0 1 2 3\n +=======+=======+=======+=======+=======+\n | head | | | | 1,2,3 |\n +=======+=======+=======+=======+===|===+\n |\n |\nCompress overflow data file : |\n _______________________________|\n |\n 0 | 1 2 3\n +===|=====+=========+==========+=========+\n | data3_2 | data3_3 | data3_4 | |\n +=========+=========+==========+=========+\n | 2nd 2K | 3rd 2K | 4th 2K | |\n\n\n# How to distinguish between compressed or uncompressed blocks in compressed data files?\n\nThe PostgreSQL heap file has a uniform header. At first, I considered adding compression-related flags to the header.\nHowever, there will be a problem. 
When the size of the data in the page after compression changes, from compressed format to uncompressed format or from uncompressed format to compressed format, the header of the original page needs to be modified, which requires not only recalculating the checksum but also updating the buffer.\n\nHowever, I noticed that the first 4 bytes of each page are the high part of pg_lsn.\nTherefore, an oversized `lsn number` that cannot appear in a real environment can be used as a magic number marking compressed pages.\nThe definition of the envisaged compressed header is as follows:\n\ntypedef struct\n{\n uint32 magic; /* compress page magic number, must be 0xfffffffe */\n uint8 algorithm; /* 1=pglz, 2=zstd ...*/\n uint8 flag; /* reserved */\n uint16 size; /* size after compressed */\n} PageCompressHead;\n\n\n# How to manage block space in compress overflow data files?\n\nOnce the overflow block x in the compress overflow data file is allocated to the block a, it will always belong to the block a, even if the size of the block a after compression becomes smaller and the overflow block x is no longer used.\n\nThis approach simplifies the space management of compress overflow blocks, but fragmentation may occur, especially when the compressed block size is 1/4 or 1/8 of BLOCKSZ.\nHowever, fragmentation will only appear in scenarios where the compressed size of the same block changes greatly and frequently.\n\nConsider the following situation. If only one record is inserted into a page and written to the disk, the compression ratio must be very high, and only one compressed block is required.\nAfter writing new rows in the future, the required compressed blocks will become 2, 3, 4 ... 
These overflow blocks are not allocated at one time, so it is likely that they are not sequential in the compress overflow data file, resulting in more fragmentation.\n\nWe can avoid this problem by setting a table-level number of pre-allocated compressed blocks.\nWhen the number of compressed blocks required after the original page is compressed is less than this value, space is allocated according to the number of pre-allocated compressed blocks.\n\nAnd no matter how severe the fragmentation, the total space occupied by the compressed table cannot be larger than the original table before compression.\n\n# Impact on other parts of PostgreSQL?\n1. pg_basebackup / pg_checksum needs to handle checksum verification according to the new compression format\n2. pg_rewind needs to handle the copying of data blocks according to the new compression format\n\n# Problems\nThis solution simplifies storage space management, but it also has the following defects:\n1. The space saved by compression is limited by the size of the compressed block.\n For example, when the compressed block size is set to 4KB, up to 50% of space can be saved.\n For insert-only tables that are never modified, you can set the compressed block size to BLOCKSZ / 8 to alleviate this problem; but for scenarios with frequent writes, it easily generates fragmentation and increases the number of IOs.\n2. When accessing a page that can be compressed to a compressed block, only one IO is required; but when accessing a page that cannot be compressed to a compressed block, multiple IOs are required.\n Generally it is 3 IOs; the address file is very small and should almost always be in memory, so not counting the address file it is 2 IOs.\n\n\nI think the above issues are a necessary trade-off. 
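To make the layout arithmetic and the compressed-page magic check above concrete, here is a small Python sketch. All names and helpers are illustrative only (my own sketch of the scheme described in this mail, not code from an actual patch), and little-endian packing of PageCompressHead is an assumption:

```python
# Illustrative sketch only: models the .cd/.coa/.cod address arithmetic
# and the compressed-page magic check described above.
import struct

BLCKSZ = 8192
PC_MAGIC = 0xFFFFFFFE  # impossible high part of pg_lsn, marks a compressed page

def compressed_block_size(ratio):
    # The compressed block size may only be BLCKSZ/2, BLCKSZ/4 or BLCKSZ/8.
    assert ratio in (2, 4, 8)
    return BLCKSZ // ratio

def cd_offset(blkno, ratio):
    # Byte offset of a page's first compressed block in the .cd file.
    return blkno * compressed_block_size(ratio)

def cod_offsets(coa_entry, ratio):
    # Byte offsets in the .cod file for one page's overflow blocks.
    # The .coa entry stores overflow block numbers + 1 (0 = unused slot).
    size = compressed_block_size(ratio)
    return [(n - 1) * size for n in coa_entry if n != 0]

def parse_compress_head(buf):
    # Parse the 8-byte PageCompressHead sketched above (assumed
    # little-endian); return None for a block stored uncompressed.
    magic, algorithm, flag, size = struct.unpack_from('<IBBH', buf, 0)
    if magic != PC_MAGIC:
        return None
    return (algorithm, flag, size)

# Example: 2KB compressed blocks (BLCKSZ/4); page 3 of Scenario 4 overflows
# into the first three blocks of the .cod file.
print(cd_offset(3, 4))            # 6144
print(cod_offsets([1, 2, 3], 4))  # [0, 2048, 4096]
```

With a 4KB compressed block size (ratio 2) the same page would need only one overflow block, matching Scenario 2 above.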
Any suggestions are welcome.\n\n# references\n[1] https://www.postgresql.org/message-id/flat/op.ux8if71gcigqcu%40soyouz\n[2] https://www.postgresql.org/message-id/CAONYFtNDNghfatnrhOJMOT=BXNbEiobHFABt2sx_cn2=5t=1_Q@mail.gmail.com\n[3] https://www.postgresql.org/message-id/CALfoeiuF-m5jg51mJUPm5GN8u396o5sA2AF5N97vTRAEDYac7w%40mail.gmail.com\n[4] https://postgrespro.com/docs/enterprise/9.6/cfs\n[5] https://www.postgresql.org/message-id/flat/11996861554042351%40iva4-dd95b404a60b.qloud-c.yandex.net\n\n\nBest Regards\nChen Hujaun\n\n\n\n\n\n\nHello hackers!This email is about the proposal to implement OLTP-oriented compression on PostgreSQL.Although there are currently some alternative ways to achieve compression, such as using a file system that supports compression.However, this depends on a specific file system and is not suitable for all deployment environments, but also increases the complexity of deployment and maintenance.I hope this compression work can meet the following goals1. In most scenarios, the compressed size of the table can be lower than 50% of the original table2. Mainly oriented to the OLTP scenario, the performance impact on the load of frequent reads and writes is relatively small.3. Does not rely on special external software or hardware that is difficult to obtain4. Friendly to application developers and database managers5. The transformation of PostgreSQL is smallI have noticed that there has been some discussion or work related to compression before, but they do not meet the above goals.such as,1. Use the sparse file[1]   This is also the implemention method of MySQL 5.7's transparent page compression. However, sparse files may generate a lot of fragmentation inside the file system, and the \"compressed\" data file in sparse files will be restored to their original size after physical backup and restoration, unless our backup and recovery tools also support sparse files.2. 
Use TAM (table access method interface) (pg_cryogen, zedstore) [2] [3]   Neither storage engine is geared towards OLTP scenarios. It is best to make relatively small modifications to the existing code of the heap engine (by modify md.c and fd.c mainly).The methods proposed by Postgres Pro Enterprise CFS [4] and Nikolay P [5] are close to my needs.However, I would like to mention a slightly different implementation plan, which does not require space reclamation. Hope to get any suggestions.# Premise assumption1. Most of the pages in the compressed table can be compressed to less than 50% of the original size   As long as you use an algorithm with a relatively high compression ratio (such as zlib, zstd), this first point should be easy to meet. Unless the table stores compressed data, such as pictures.2. The compression ratio of most pages in the same table is relatively close# Page storageConfigure 3 files for storing compressed data for each segment of each main fork. The main fork segment file(for example: 123456.2) still exists, but its size is 0.-Compressed data file (for example: 123456.2.cd)   Used to store the compressed page. The block size of this file is table level configurable. 
But it can only be 1/2, 1/4 or 1/8 of BLOCKSZ-Compress overflow address file (for example: 123456.2.coa)   When a page cannot be compressed to less than the size of the compressed block, this file is used to store the address of the overflow block.-Compress overflow data file (for example: 123456.2.cod)   When a page cannot be compressed to less than the size of the compressed block, this file is used to store the overflow block.The following is an example when the compressed block size is 4K, which is 1/2 of BLOCKSZ.## Scenario 1: The compressed size of the original page (including the header of the compressed page) is less than or equal to the compressed block size (4KB)Compressed data files(123456.2.cd):  0       1       2  +=======+=======+=======+  | data0 | data1 | data2 |    +=======+=======+=======+->|   4K  |<-## Scenario 2: The compressed size of the original page (including the header of the compressed page) is larger than the compressed block size (4KB)If the compressed size of the original page (page 3 below) is greater than 4KB, it will not be compressed. 
The first 4KB of the original page is stored in the compressed data file, and the last 4KB is stored in the compress overflow data file.Compressed data files(123456.2.cd):  0       1       2       3  +=======+=======+=======+=========+  | data0 | data1 | data2 | data3_1 |  +=======+=======+=======+=========+                        ->| 1st 4K  |<-Compress overflow address file(123456.2.coa):The compress overflow address file stores the block number of the compress overflow block assigned to each block + 1The size of the compressed block and the number of expanded blocks of the compress overflow data file are stored in the head of the compress overflow address file          0       1       2       3  +=======+=======+=======+=======+=======+  | head  |       |       |       |   1   |  +=======+=======+=======+=======+===|===+                                      |                                      |Compress overflow data file:          |       _______________________________|      |  0   |    1       2       3  +===|=====+=========+==========+=========+  | data3_2 |         |          |         |  +=========+=========+==========+=========+->| 2nd 4K  |<-If the compressed block size is 1/4 or 1/8 of BLOCKSZ, each block that fails to compress may require multiple compress overflow block storage.The following is an example when the compressed block size is 2K, which is 1/4 of BLOCKSZ.## Scenario 3: The compressed size of the original page (including the header of the compressed page) is larger than 2KB(compressed page block size) but less than 6KB (BLOCKSZ - compressed page block size )In this case, data files will store compressed data, and at least 2KB storage space can be saved.Compressed data files(123456.2.cd):  0       1       2       3  +=======+=======+=======+=========+  | data0 | data1 | data2 | data3_1 |  +=======+=======+=======+=========+                        ->| 1st 2K  |<-Compress overflow address file(123456.2.coa):          0       1       2       
3  +=======+=======+=======+=======+=======+  | head  |       |       |       | 1,2   |  +=======+=======+=======+=======+===|===+                                      |                                      |Compress overflow data file :         |       _______________________________|      |  0   |     1         2          3  +===|=====+=========+==========+=========+  | data3_2 | data3_3 |          |         |  +=========+=========+==========+=========+  | 2nd 2K  | 3rh 2K  |          |         |## Scenario 4: The compressed size of the original page (including the header of the compressed page) is larger than 6KB (BLOCKSZ - compressed page block size )In this case, data files will store original data(8KB). same as Scenario 2Compressed data files(123456.2.cd):  0       1       2       3  +=======+=======+=======+=========+  | data0 | data1 | data2 | data3_1 |  +=======+=======+=======+=========+                        ->| 1st 2K  |<-Compress overflow address file(123456.2.coa):          0       1       2       3  +=======+=======+=======+=======+=======+  | head  |       |       |       | 1,2,3 |  +=======+=======+=======+=======+===|===+                                      |                                      |Compress overflow data file :         |       _______________________________|      |  0   |     1         2          3  +===|=====+=========+==========+=========+  | data3_2 | data3_3 | data3_4  |         |  +=========+=========+==========+=========+  | 2nd 2K  | 3rd 2K  | 4th 2K   |         |# How to distinguish between compressed or uncompressed blocks in compressed data files?The PostgreSQL heap file has a uniform header. At first, I considered adding compression-related flags to the header.However, there will be a problem. 
When the size of the data in the page after compression changes, from compressed format to uncompressed format, or from uncompressed format to compressed format,Need to modify the head of the original page, not only to recalculate the checksum, but also update the buffer.However, I noticed that the first 4 bytes of each page are the high part of pg_lsn.Therefore, use an oversized `lsn number` that cannot appear in the real environment as a sign of whether it is a magic of compressed pages.The definition of the envisaged compressed header is as followstypedef struct{    uint32    magic;       /* compress page magic number,must be 0xfffffffe */    uint8    algorithm;     /* 1=pglz, 2=zstd ...*/    uint8    flag;             /* reserved */    uint16    size;           /* size after compressed */} PageCompressHead;# How to manage block space in compress overflow data files?Once the overflow block x in the compress overflow data file is allocated to the block a, it will always belong to the block a, even if the size of the block a after compression becomes smaller and the overflow block x is no longer be used.This approach simplifies the space management of compress overflow blocks, but fragmentation may occur, especially when the compressed block size is 1/4 or 1/8 BLOCKSZ.However, the fragment will only appear in the scene where the size of the same block is frequently changed greatly after compression.Consider the following situation. If only one record is inserted into a page and written to the disk, the compression rate must be very high, and only one compressed block is required.After writing new rows in the future, the required compressed blocks will become 2, 3, 4 ... 
These overflow blocks are not allocated at a time, so it is likely that they are not sequential in the compress overflow data file, resulting in more fragmentation. We can avoid this problem by setting a table-level number of pre-allocated compressed blocks. When the number of compressed blocks required after the original page is compressed is less than this value, space is allocated according to the number of pre-allocated compressed blocks. And no matter how severe the fragmentation, the total space occupied by the compressed table cannot be larger than the original table before compression. # Impact on other parts of PostgreSQL? 1. pg_basebackup / pg_checksum needs to handle checksum verification according to the new compression format. 2. pg_rewind needs to handle the copying of data blocks according to the new compression format. # Problems This solution simplifies storage space management, but also has the following defects: 1. The space saved by compression is limited by the size of the compressed block. For example, when the compressed block size is set to 4KB, up to 50% of space can be saved. For insert-only tables that are never modified, you can set the compressed block size to BLOCKSZ / 8 to alleviate this problem; but for scenarios with frequent writes, it is easy to generate fragments and increase the number of IOs. 2. When accessing a page that can be compressed into one compressed block, only one IO is required; but when accessing a page that cannot, multiple IOs are required: generally 3, and effectively 2, since the very small address file should almost always stay in memory. I think the above issues are a necessary trade-off. 
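The magic-number test described earlier in this mail can be sketched in C as follows. This is a minimal sketch, not the actual patch: standard stdint types stand in for PostgreSQL's uint32/uint8/uint16 typedefs, and the helper name `page_is_compressed` is invented for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Header layout proposed above; stdint types replace PostgreSQL typedefs. */
#define PAGE_COMPRESS_MAGIC 0xfffffffeU

typedef struct
{
    uint32_t magic;      /* compress page magic number, must be 0xfffffffe */
    uint8_t  algorithm;  /* 1=pglz, 2=zstd ... */
    uint8_t  flag;       /* reserved */
    uint16_t size;       /* size after compression */
} PageCompressHead;

/*
 * A normal page starts with the high 32 bits of pd_lsn, which can never be
 * 0xfffffffe in practice, so reading the first 4 bytes is enough to tell a
 * compressed page from an uncompressed one.
 */
static int
page_is_compressed(const char *block)
{
    uint32_t head32;

    memcpy(&head32, block, sizeof(head32)); /* avoid unaligned access */
    return head32 == PAGE_COMPRESS_MAGIC;
}
```

Because the check touches only the first 4 bytes, a page can switch between compressed and uncompressed form without rewriting the buffered page image itself.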
Any suggestions are welcome. # references [1] https://www.postgresql.org/message-id/flat/op.ux8if71gcigqcu%40soyouz [2] https://www.postgresql.org/message-id/CAONYFtNDNghfatnrhOJMOT=BXNbEiobHFABt2sx_cn2=5t=1_Q@mail.gmail.com [3] https://www.postgresql.org/message-id/CALfoeiuF-m5jg51mJUPm5GN8u396o5sA2AF5N97vTRAEDYac7w%40mail.gmail.com [4] https://postgrespro.com/docs/enterprise/9.6/cfs [5] https://www.postgresql.org/message-id/flat/11996861554042351%40iva4-dd95b404a60b.qloud-c.yandex.net Best Regards Chen Huajun", "msg_date": "Thu, 21 May 2020 14:36:40 +0800 (GMT+08:00)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": true, "msg_subject": "[Proposal] Page Compression for OLTP" }, { "msg_contents": "Hello,\n\nMy 0.02€, some of which may just show some misunderstanding on my part:\n\n - you have clearly given quite a few thoughts about the what and how…\n   which makes your message an interesting read.\n\n - Could this be proposed as some kind of extension, provided that enough\n   hooks are available? ISTM that foreign tables and/or alternative\n   storage engine (aka ACCESS METHOD) provide convenient APIs which could\n   fit the need for these? Or are they not appropriate? You seem to\n   suggest that there are not.\n\n   If not, what could be done to improve API to allow what you are seeking\n   to do? Maybe you need a somehow lower-level programmable API which does\n   not exist already, or at least is not exported already, but could be\n   specified and implemented with limited effort? Basically you would like\n   to read/write pg pages to somewhere, and then there is the syncing\n   issue to consider. Maybe such a \"page storage\" API could provide\n   benefit for some specialized hardware, eg persistent memory stores,\n   so there would be more reason to define it anyway? 
I think it might\n   be valuable to give it some thoughts.\n\n - Could you maybe elaborate on how your plan differs from [4] and [5]?\n\n - Have you considered keeping page headers and compressing tuple data\n   only?\n\n - I'm not sure there is a point in going below the underlying file\n   system blocksize, quite often 4 KiB? Or maybe yes? Or is there\n   a benefit to aim at 1/4 even if most pages overflow?\n\n - ISTM that your approach entails 3 \"files\". Could it be done with 2?\n   I'd suggest that the possible overflow pointers (coa) could be part of\n   the headers so that when reading the 3.1 page, then the header would\n   tell where to find the overflow 3.2, without requiring an additional\n   independent structure with very small data in it, most of it zeros.\n   Possibly this is not possible, because it would require some available\n   space in standard headers when the page is not compressible, and\n   there is not enough. Maybe creating a little room for that in\n   existing headers (4 bytes could be enough?) would be a good compromise.\n   Hmmm. Maybe the approach I suggest would only work for 1/2 compression,\n   but not for other target ratios, but I think it could be made to work\n   if the pointer can entail several blocks in the overflow table.\n\n - If one page is split in 3 parts, could it create problems on syncing,\n   if 1/3 or 2/3 pages get written, but maybe that is manageable with WAL\n   as it would note that the page was not synced and that is enough for\n   replay.\n\n - I'm unclear how you would manage the 2 representations of a page in\n   memory. I'm afraid that an 8 KiB page compressed to 4 KiB would\n   basically take 12 KiB, i.e. reduce the available memory for caching\n   purposes. Hmmm. 
The current status is that a written page probably\n   takes 16 KiB, once in shared buffers and once in the system caches,\n   so it would be an improvement anyway.\n\n - Maybe the compressed and overflow table could become bloated somehow,\n   which would require the vaccuuming implementation and add to the\n   complexity of the implementation?\n\n - External tools should be available to allow page inspection, eg for\n   debugging purposes.\n\n-- \nFabien.", "msg_date": "Thu, 21 May 2020 10:04:55 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "At 2020-05-21 15:04:55, \"Fabien COELHO\" <coelho@cri.ensmp.fr> wrote:\n\n>\n>Hello,\n>\n>My 0.02€, some of which may just show some misunderstanding on my part:\n>\n> - Could this be proposed as some kind of extension, provided that enough\n>   hooks are available? ISTM that foreign tables and/or alternative\n>   storage engine (aka ACCESS METHOD) provide convenient APIs which could\n>   fit the need for these? Or are they not appropriate? You seem to\n>   suggest that there are not.\n>\n>   If not, what could be done to improve API to allow what you are seeking\n>   to do? Maybe you need a somehow lower-level programmable API which does\n>   not exist already, or at least is not exported already, but could be\n>   specified and implemented with limited effort? Basically you would like\n>   to read/write pg pages to somewhere, and then there is the syncing\n>   issue to consider. Maybe such a \"page storage\" API could provide\n>   benefit for some specialized hardware, eg persistent memory stores,\n>   so there would be more reason to define it anyway? 
I think it might\n> be valuable to give it some thoughts.\n\nThank you for giving so many comments.\nIn my opinion, developing a foreign table or a new storage engine, in addition to compression, also needs to do a lot of extra things.\nA similar explanation was mentioned in Nikolay P's email.\n\nThe \"page storage\" API may be a good choice, and I will consider it, but I have not yet figured out how to implement it.\n\n> - Could you maybe elaborate on how your plan differs from [4] and [5]?\n\nMy solution is similar to CFS, and it is also embedded in the file access layer (fd.c, md.c) to realize the mapping from block number to the corresponding file and location where compressed data is stored.\n\nHowever, the most important difference is that I hope to avoid the need for GC through the design of the page layout. \n\nhttps://www.postgresql.org/message-id/flat/11996861554042351%40iva4-dd95b404a60b.qloud-c.yandex.net\n\n>> The most difficult thing in CFS development is certainly \n>> defragmentation. In CFS it is done using background garbage collection, \n>> by one or one\n>> GC worker processes. The main challenges were to minimize its \n>> interaction with normal work of the system, make it fault tolerant and \n>> prevent unlimited growth of data segments.\n\n>> CFS is not introducing its own storage manager, it is mostly embedded in \n>> existed Postgres file access layer (fd.c, md.c). It allows to reused \n>> code responsible for mapping relations and file descriptors cache. As it \n>> was recently discussed in hackers, it may be good idea to separate the \n>> questions \"how to map blocks to filenames and offsets\" and \"how to \n>> actually perform IO\". 
In this it will be easier to implement compressed \n>> storage manager.\n\n\n> - Have you consider keeping page headers and compressing tuple data\n> only?\n\nIn that case, we must add some additional information in the page header to identify whether this is a compressed page or an uncompressed page.\nWhen a compressed page becomes an uncompressed page, or vice versa, an uncompressed page becomes a compressed page, the original page header must be modified.\nThis is unacceptable because it requires modifying the shared buffer and recalculating the checksum.\n\nHowever, it should be feasible to put this flag in the compressed address file.\nThe problem with this is that even if a page only occupies the size of one compressed block, the address file needs to be read, that is, from 1 IO to 2 IO.\nSince the address file is very small, it is basically a memory access, this cost may not be as large as I had imagined.\n\n> - I'm not sure there is a point in going below the underlying file\n> system blocksize, quite often 4 KiB? Or maybe yes? Or is there\n> a benefit to aim at 1/4 even if most pages overflow?\n\nMy solution is mainly optimized for scenarios where the original page can be compressed to only require one compressed block of storage.\nThe scene where the original page is stored in multiple compressed blocks is suitable for scenarios that are not particularly sensitive to performance, but are more concerned about the compression rate, such as cold data.\n\nIn addition, users can also choose to compile PostgreSQL with 16KB or 32KB BLOCKSZ.\n\n> - ISTM that your approach entails 3 \"files\". 
Could it be done with 2?\n> I'd suggest that the possible overflow pointers (coa) could be part of\n> the headers so that when reading the 3.1 page, then the header would\n> tell where to find the overflow 3.2, without requiring an additional\n> independent structure with very small data in it, most of it zeros.\n> Possibly this is not possible, because it would require some available\n> space in standard headers when the is page is not compressible, and\n> there is not enough. Maybe creating a little room for that in\n> existing headers (4 bytes could be enough?) would be a good compromise.\n> Hmmm. Maybe the approach I suggest would only work for 1/2 compression,\n> but not for other target ratios, but I think it could be made to work\n> if the pointer can entail several blocks in the overflow table.\n\nMy solution is optimized for scenarios where the original page can be compressed to only need one compressed block to store,\nIn this scenario, only 1 IO is required for reading and writing, and there is no need to access additional overflow address file and overflow data file.\n\nYour suggestion reminded me. The performance difference may not be as big as I thought (testing and comparison is required). 
If I give up the pursuit of \"only one IO\", the file layout can be simplified.\n\nFor example, it is simplified to the following form, only two files (the following example uses a compressed block size of 4KB)\n\n# Page storage(Plan B) \n\nUse the compress address file to store the compressed block pointer, and the Compress data file stores the compressed block data.\n\ncompress address file:\n \n 0 1 2 3\n +=======+=======+=======+=======+=======+\n | head | 1 | 2 | 3,4 | 5 |\n +=======+=======+=======+=======+=======+\n\ncompress address file saves the following information for each page\n\n-Compressed size (when size is 0, it means uncompressed format)\n-Block number occupied in Compress data file\n\nBy the way, I want to access the compress address file through mmap, just like snapfs\nhttps://github.com/postgrespro/snapfs/blob/pg_snap/src/backend/storage/file/snapfs.c\n\nCompress data file:\n\n 0 1 2 3 4\n +=========+=========+==========+=========+=========+\n | data1 | data2 | data3_1 | data3_2 | data4 | \n +=========+=========+==========+=========+=========+\n | 4K |\n\n\n# Page storage(Plan C)\n\nFurther, since the size of the compress address file is fixed, the above address file and data file can also be combined into one file\n\n 0 1 2 123071 0 1 2\n +=======+=======+=======+ +=======+=========+=========+\n | head | 1 | 2 | ... | | data1 | data2 | ... \n +=======+=======+=======+ +=======+=========+=========+\n head | address | data |\n\nIf the difference in performance is so negligible, maybe Plan C is a better solution. 
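To make the Plan C layout concrete, here is a small sketch of the offset arithmetic for the combined head | address | data file. Only the 4KB compressed block size and the 123072 address slots come from the example above; the header size, per-entry size, and the function names are assumptions invented for this sketch.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the single-file "Plan C" layout: head | address | data.
 * PC_HEAD_SIZE and PC_ADDR_ENTRY_SIZE are assumed values for the example;
 * PC_SEG_SLOTS and PC_BLOCK_SIZE follow the 4KB example above.
 */
#define PC_HEAD_SIZE       512
#define PC_ADDR_ENTRY_SIZE 8
#define PC_SEG_SLOTS       123072
#define PC_BLOCK_SIZE      4096

/* File offset of the address entry for original page `pageno`. */
static int64_t
pc_addr_offset(int64_t pageno)
{
    return PC_HEAD_SIZE + pageno * PC_ADDR_ENTRY_SIZE;
}

/*
 * File offset of compressed block `blockno`: the data region starts
 * right after the fixed-size address region, so both lookups are pure
 * arithmetic and need no extra index structure.
 */
static int64_t
pc_data_offset(int64_t blockno)
{
    return PC_HEAD_SIZE + (int64_t) PC_SEG_SLOTS * PC_ADDR_ENTRY_SIZE
           + blockno * PC_BLOCK_SIZE;
}
```

Because the address region has a fixed size, reading a page costs at most one small read in the address region plus one read per compressed block, with no garbage collection needed to keep the mapping valid.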
(Are there any other problems?)\n\n>\n> - Maybe the compressed and overflow table could become bloated somehow,\n>   which would require the vaccuuming implementation and add to the\n>   complexity of the implementation?\n>\n\nVacuuming is what I try to avoid.\n\nAs I explained in the first email, even without vaccuum, bloating should not become a serious problem.\n\n>>However, the fragment will only appear in the scene where the size of the same block is frequently changed greatly after compression.\n>>...\n>>And no matter how severe the fragmentation, the total space occupied by the compressed table cannot be larger than the original table before compression.\n\nBest Regards\nChen Huajun", "msg_date": "Fri, 22 May 2020 08:07:34 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "Sorry, There may be a problem with the display format of the previous mail. So resend it\n----------------------------------------------------------------------------------------------------\n\nAt 2020-05-21 15:04:55, \"Fabien COELHO\" <coelho@cri.ensmp.fr> wrote:\n\n>\n>Hello,\n>\n>My 0.02€, some of which may just show some misunderstanding on my part:\n>\n> - Could this be proposed as some kind of extension, provided that enough\n>   hooks are available? ISTM that foreign tables and/or alternative\n>   storage engine (aka ACCESS METHOD) provide convenient APIs which could\n>   fit the need for these? Or are they not appropriate? You seem to\n>   suggest that there are not.\n>\n>   If not, what could be done to improve API to allow what you are seeking\n>   to do? Maybe you need a somehow lower-level programmable API which does\n>   not exist already, or at least is not exported already, but could be\n>   specified and implemented with limited effort? Basically you would like\n>   to read/write pg pages to somewhere, and then there is the syncing\n>   issue to consider. 
Maybe such a \"page storage\" API could provide\n> benefit for some specialized hardware, eg persistent memory stores,\n> so there would be more reason to define it anyway? I think it might\n> be valuable to give it some thoughts.\n\nThank you for giving so many comments.\nIn my opinion, developing a foreign table or a new storage engine, in addition to compression, also needs to do a lot of extra things.\nA similar explanation was mentioned in Nikolay P's email.\n\nThe \"page storage\" API may be a good choice, and I will consider it, but I have not yet figured out how to implement it.\n\n> - Could you maybe elaborate on how your plan differs from [4] and [5]?\n\nMy solution is similar to CFS, and it is also embedded in the file access layer (fd.c, md.c) to realize the mapping from block number to the corresponding file and location where compressed data is stored.\n\nHowever, the most important difference is that I hope to avoid the need for GC through the design of the page layout.\n\nhttps://www.postgresql.org/message-id/flat/11996861554042351%40iva4-dd95b404a60b.qloud-c.yandex.net\n\n>> The most difficult thing in CFS development is certainly\n>> defragmentation. In CFS it is done using background garbage collection,\n>> by one or one\n>> GC worker processes. The main challenges were to minimize its\n>> interaction with normal work of the system, make it fault tolerant and\n>> prevent unlimited growth of data segments.\n\n>> CFS is not introducing its own storage manager, it is mostly embedded in\n>> existed Postgres file access layer (fd.c, md.c). It allows to reused\n>> code responsible for mapping relations and file descriptors cache. As it\n>> was recently discussed in hackers, it may be good idea to separate the\n>> questions \"how to map blocks to filenames and offsets\" and \"how to\n>> actually perform IO\". 
In this it will be easier to implement compressed\n>> storage manager.\n\n\n> - Have you consider keeping page headers and compressing tuple data\n> only?\n\nIn that case, we must add some additional information in the page header to identify whether this is a compressed page or an uncompressed page.\nWhen a compressed page becomes an uncompressed page, or vice versa, an uncompressed page becomes a compressed page, the original page header must be modified.\nThis is unacceptable because it requires modifying the shared buffer and recalculating the checksum.\n\nHowever, it should be feasible to put this flag in the compressed address file.\nThe problem with this is that even if a page only occupies the size of one compressed block, the address file needs to be read, that is, from 1 IO to 2 IO.\nSince the address file is very small, it is basically a memory access, this cost may not be as large as I had imagined.\n\n> - I'm not sure there is a point in going below the underlying file\n> system blocksize, quite often 4 KiB? Or maybe yes? Or is there\n> a benefit to aim at 1/4 even if most pages overflow?\n\nMy solution is mainly optimized for scenarios where the original page can be compressed to only require one compressed block of storage.\nThe scene where the original page is stored in multiple compressed blocks is suitable for scenarios that are not particularly sensitive to performance, but are more concerned about the compression rate, such as cold data.\n\nIn addition, users can also choose to compile PostgreSQL with 16KB or 32KB BLOCKSZ.\n\n> - ISTM that your approach entails 3 \"files\". 
Could it be done with 2?\n> I'd suggest that the possible overflow pointers (coa) could be part of\n> the headers so that when reading the 3.1 page, then the header would\n> tell where to find the overflow 3.2, without requiring an additional\n> independent structure with very small data in it, most of it zeros.\n> Possibly this is not possible, because it would require some available\n> space in standard headers when the is page is not compressible, and\n> there is not enough. Maybe creating a little room for that in\n> existing headers (4 bytes could be enough?) would be a good compromise.\n> Hmmm. Maybe the approach I suggest would only work for 1/2 compression,\n> but not for other target ratios, but I think it could be made to work\n> if the pointer can entail several blocks in the overflow table.\n\nMy solution is optimized for scenarios where the original page can be compressed to only need one compressed block to store,\nIn this scenario, only 1 IO is required for reading and writing, and there is no need to access additional overflow address file and overflow data file.\n\nYour suggestion reminded me. The performance difference may not be as big as I thought (testing and comparison is required). 
If I give up the pursuit of \"only one IO\", the file layout can be simplified.\n\nFor example, it is simplified to the following form, only two files (the following example uses a compressed block size of 4KB)\n\n# Page storage(Plan B)\n\nUse the compress address file to store the compressed block pointer, and the Compress data file stores the compressed block data.\n\ncompress address file:\n \n 0 1 2 3\n+=======+=======+=======+=======+=======+\n| head | 1 | 2 | 3,4 | 5 |\n+=======+=======+=======+=======+=======+\n\ncompress address file saves the following information for each page\n\n-Compressed size (when size is 0, it means uncompressed format)\n-Block number occupied in Compress data file\n\nBy the way, I want to access the compress address file through mmap, just like snapfs\nhttps://github.com/postgrespro/snapfs/blob/pg_snap/src/backend/storage/file/snapfs.c\n\nCompress data file:\n\n0 1 2 3 4\n+=========+=========+==========+=========+=========+\n| data1 | data2 | data3_1 | data3_2 | data4 |\n+=========+=========+==========+=========+=========+\n| 4K |\n\n\n# Page storage(Plan C)\n\nFurther, since the size of the compress address file is fixed, the above address file and data file can also be combined into one file\n\n 0 1 2 123071 0 1 2\n+=======+=======+=======+ +=======+=========+=========+\n| head | 1 | 2 | ... | | data1 | data2 | ... \n+=======+=======+=======+ +=======+=========+=========+\n head | address | data |\n\nIf the difference in performance is so negligible, maybe Plan C is a better solution. 
(Are there any other problems?)\n\n>\n> - Maybe the compressed and overflow table could become bloated somehow,\n>   which would require the vaccuuming implementation and add to the\n>   complexity of the implementation?\n>\n\nVacuuming is what I try to avoid.\n\nAs I explained in the first email, even without vaccuum, bloating should not become a serious problem.\n\n>>However, the fragment will only appear in the scene where the size of the same block is frequently changed greatly after compression.\n>>...\n>>And no matter how severe the fragmentation, the total space occupied by the compressed table cannot be larger than the original table before compression.\n\nBest Regards\nChen Huajun
If I give up the pursuit of \"only one IO\", the file layout can be simplified.For example, it is simplified to the following form, only two files (the following example uses a compressed block size of 4KB)# Page storage(Plan B) Use the compress address file to store the compressed block pointer, and the Compress data file stores the compressed block data.compress address file:         0       1       2       3+=======+=======+=======+=======+=======+| head  |  1    |    2  | 3,4   |   5   |+=======+=======+=======+=======+=======+compress address file saves the following information for each page-Compressed size (when size is 0, it means uncompressed format)-Block number occupied in Compress data fileBy the way, I want to access the compress address file through mmap, just like snapfshttps://github.com/postgrespro/snapfs/blob/pg_snap/src/backend/storage/file/snapfs.cCompress data file:0         1         2          3         4+=========+=========+==========+=========+=========+| data1   | data2   | data3_1  | data3_2 | data4   | +=========+=========+==========+=========+=========+|    4K   |# Page storage(Plan C)Further, since the size of the compress address file is fixed, the above address file and data file can also be combined into one file        0       1       2     123071    0         1         2+=======+=======+=======+     +=======+=========+=========+| head  |  1    |    2  | ... |       | data1   | data2   | ...  +=======+=======+=======+     +=======+=========+=========+  head  |              address        |          data          |If the difference in performance is so negligible, maybe Plan C is a better solution. 
(Are there any other problems?)>>  - Maybe the compressed and overflow table could become bloated somehow,>    which would require the vaccuuming implementation and add to the>    complexity of the implementation?>Vacuuming is what I try to avoid.As I explained in the first email, even without vaccuum, bloating should not become a serious problem.>>However, the fragment will only appear in the scene where the size of the same block is frequently changed greatly after compression.>>...>>And no matter how severe the fragmentation, the total space occupied by the compressed table cannot be larger than the original table before compression.Best RegardsChen Huajun", "msg_date": "Fri, 22 May 2020 14:15:57 +0800 (GMT+08:00)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "Hi hackers,\n\n\n> # Page storage(Plan C)\n>\n> Further, since the size of the compress address file is fixed, the above address file and data file can also be combined into one file\n>\n> 0 1 2 123071 0 1 2\n> +=======+=======+=======+ +=======+=========+=========+\n> | head | 1 | 2 | ... | | data1 | data2 | ... \n> +=======+=======+=======+ +=======+=========+=========+\n> head | address | data |\n\nI made a prototype according to the above storage method. 
Any suggestions are welcome.\n\n# Page compress file storage related definitions\n\n/*\n* layout of Page Compress file:\n*\n* - PageCompressHeader\n* - PageCompressAddr[]\n* - chunks of PageCompressData\n*\n*/\ntypedef struct PageCompressHeader\n{\n pg_atomic_uint32 nblocks; /* number of total blocks in this segment */\n pg_atomic_uint32 allocated_chunks; /* number of total allocated chunks in data area */\n uint16 chunk_size; /* size of each chunk, must be 1/2 1/4 or 1/8 of BLCKSZ */\n uint8 algorithm; /* compress algorithm, 1=pglz, 2=lz4 */\n} PageCompressHeader;\n\ntypedef struct PageCompressAddr\n{\n uint8 nchunks; /* number of chunks for this block */\n uint8 allocated_chunks; /* number of allocated chunks for this block */\n\n /* variable-length fields, 1 based chunk no array for this block, size of the array must be 2, 4 or 8 */\n pc_chunk_number_t chunknos[FLEXIBLE_ARRAY_MEMBER];\n} PageCompressAddr;\n\n\n# Usage\n\nSet whether to use compression through storage parameters of tables and indexes\n\n- compress_type\n Set whether to compress and the compression algorithm used, supported values: none, pglz, zstd\n\n- compress_chunk_size\n\n Chunk is the smallest unit of storage space allocated for compressed pages.\n The size of the chunk can only be 1/2, 1/4 or 1/8 of BLCKSZ\n\n- compress_prealloc_chunks\n\n The number of chunks pre-allocated for each page.
The maximum value allowed is: BLCKSZ/compress_chunk_size -1.\n If the number of chunks required for a compressed page is less than `compress_prealloc_chunks`,\n it allocates `compress_prealloc_chunks` chunks to avoid future storage fragmentation when the page needs more storage space.\n\n\n# Sample\n\n## requirement\n\n- zstd\n\n## build\n\n./configure --with-zstd\nmake\nmake install\n\n## create compressed table and index\n\ncreate table tb1(id int,c1 text);\ncreate table tb1_zstd(id int,c1 text) with(compress_type=zstd,compress_chunk_size=1024);\ncreate table tb1_zstd_4(id int,c1 text) with(compress_type=zstd,compress_chunk_size=1024,compress_prealloc_chunks=4);\n\ncreate index tb1_idx_id on tb1(id);\ncreate index tb1_idx_id_zstd on tb1(id) with(compress_type=zstd,compress_chunk_size=1024);\ncreate index tb1_idx_id_zstd_4 on tb1(id) with(compress_type=zstd,compress_chunk_size=1024,compress_prealloc_chunks=4);\n\ncreate index tb1_idx_c1 on tb1(c1);\ncreate index tb1_idx_c1_zstd on tb1(c1) with(compress_type=zstd,compress_chunk_size=1024);\ncreate index tb1_idx_c1_zstd_4 on tb1(c1) with(compress_type=zstd,compress_chunk_size=1024,compress_prealloc_chunks=4);\n\ninsert into tb1 select generate_series(1,1000000),md5(random()::text);\ninsert into tb1_zstd select generate_series(1,1000000),md5(random()::text);\ninsert into tb1_zstd_4 select generate_series(1,1000000),md5(random()::text);\n\n## show size of table and index\n\npostgres=# \\d+\n List of relations\nSchema | Name | Type | Owner | Persistence | Size | Description\n--------+------------+-------+----------+-------------+-------+-------------\npublic | tb1 | table | postgres | permanent | 65 MB |\npublic | tb1_zstd | table | postgres | permanent | 37 MB |\npublic | tb1_zstd_4 | table | postgres | permanent | 37 MB |\n(3 rows)\n\npostgres=# \\di+\n List of relations\nSchema | Name | Type | Owner | Table | Persistence | Size | 
Description\n--------+-------------------+-------+----------+-------+-------------+-------+-------------\npublic | tb1_idx_c1 | index | postgres | tb1 | permanent | 73 MB |\npublic | tb1_idx_c1_zstd | index | postgres | tb1 | permanent | 36 MB |\npublic | tb1_idx_c1_zstd_4 | index | postgres | tb1 | permanent | 41 MB |\npublic | tb1_idx_id | index | postgres | tb1 | permanent | 21 MB |\npublic | tb1_idx_id_zstd | index | postgres | tb1 | permanent | 13 MB |\npublic | tb1_idx_id_zstd_4 | index | postgres | tb1 | permanent | 15 MB |\n(6 rows)\n\n\n# pgbench performance testing (TPC-B)\n\nCompress the pgbench_accounts table and its primary key index.\nThe compression parameters are (compress_type=zstd, compress_chunk_size=1024).\nThen compare the performance difference between the original table and the compressed table.\n\ntest command:\n\n pgbench -i -s 1000\n pgbench -n -T 300 -c 16 -j 16 db1\n\ntps comparison:\n\n original table :20081\n compressed table:19984\n\n\nComparison of storage space:\n\n original compressed(before benchmark) compressed(after benchmark*)\npgbench_accounts 13 GB 1660 MB 1711 MB\npgbench_accounts_pkey 2142 MB 738 MB 816 MB\n\n*note: After the benchmark, there are some compressed pages that need 2 chunks to store data\n\n\n# TODO list\n\n1. support setting of compress level\n2. support ALTER TABLE/INDEX xx set(...)\n3. support checksum in pg_basebackup, pg_checksum and replication\n4. support pg_rewind\n5.
information output for compressed page's meta data\n\n\n# Problem\n\nWhen compress_chunk_size=1024, about 4MB of space is needed to store the address,\nwhich will cause the space of the small file to become larger after compression.\n\nThe solutions considered are as follows:\nThe address and data of the compressed page are divided into two files, and the address file is also divided into disk space as needed, and at least one BLCKSZ is allocated for each expansion.\n\n\nBest Regards\nChen Huajun", "msg_date": "Fri, 5 Jun 2020 20:39:46 +0800 (GMT+08:00)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "Hi hackers,\n\n\nI further improved this Patch, adjusted some of the design, and added related modifications\n(pg_rewind,replication,checksum,backup) and basic tests. Any suggestions are welcome.\n\n\nThis patch can also be obtained from here\nhttps://github.com/ChenHuajun/postgres/tree/page_compress_14\n\n\n# 1. Page storage\n\n\nThe compressed data block is stored in one or more chunks of the compressed data file, \nand the size of each chunk is 1/8, 1/4, or 1/2 block size.\nThe storage location of each compressed data block is represented by an array of chunkno \nand stored in the compressed address file.\n\n\n## 1.1 page compressed address file(_pca)\n\n\n blk0 1 2 3\n+=======+=======+=======+=======+=======+\n| head | 1 | 2 | 3,4 | 5 |\n+=======+=======+=======+=======+=======+\n\n\n## 1.2 page compressed data file(_pcd)\n\n\nchunk1 2 3 4 5\n+=========+=========+==========+=========+=========+\n| blk0 | blk2 | blk2_1 | blk2_2 | blk3 |\n+=========+=========+==========+=========+=========+\n| 4K |\n\n\n\n\n# 2.
Usage\n\n\n## 2.1 Set whether to use compression through storage parameters of tables and indexes\n\n\n- compresstype\n Set whether to compress and the compression algorithm used, supported values: none, pglz, zstd\n\n\n- compresslevel\n Set compress level(only zstd support)\n \n- compress_chunk_size\n\n\n Chunk is the smallest unit of storage space allocated for compressed pages.\n The size of the chunk can only be 1/2, 1/4 or 1/8 of BLCKSZ\n\n\n- compress_prealloc_chunks\n\n\n The number of chunks pre-allocated for each page. The maximum value allowed is: BLCKSZ/compress_chunk_size -1.\n If the number of chunks required for a compressed page is less than `compress_prealloc_chunks`,\n It allocates `compress_prealloc_chunks` chunks to avoid future storage fragmentation when the page needs more storage space.\n\n\nexample:\nCREATE TABLE tbl_pc(id int, c1 text) WITH(compresstype=zstd, compresslevel=0, compress_chunk_size=1024, compress_prealloc_chunks=2);\nCREATE INDEX tbl_pc_idx1 on tbl_pc(c1) WITH(compresstype=zstd, compresslevel=1, compress_chunk_size=4096, compress_prealloc_chunks=0);\n\n\n\n\n## 2.2 Set default compression option when create table in specified tablespace\n\n\n- default_compresstype\n- default_compresslevel\n- default_compress_chunk_size\n- default_compress_prealloc_chunks\n\n\nnote:temp table and unlogged table will not be affected by the above 4 parameters\n\n\nexample:\nALTER TABLESPACE pg_default SET(default_compresstype=zstd, default_compresslevel=2, default_compress_chunk_size=1024, default_compress_prealloc_chunks=2);\n\n\n\n\n## 2.3 View the storage location of each block of the compressed table\n\n\nadd some functions in pageinspect to inspect compressed relation\n\n\n- get_compress_address_header(relname text, segno integer)\n- get_compress_address_items(relname text, segno integer)\n\n\nexample:\nSELECT nblocks, allocated_chunks, chunk_size, algorithm FROM get_compress_address_header('test_compressed',0);\n nblocks | allocated_chunks | 
chunk_size | algorithm \n---------+------------------+------------+-----------\n 1 | 20 | 1024 | 1\n(1 row)\n\n\nSELECT * FROM get_compress_address_items('test_compressed',0);\n blkno | nchunks | allocated_chunks | chunknos \n-------+---------+------------------+---------------\n 0 | 0 | 4 | {1,2,3,4}\n 1 | 0 | 4 | {5,6,7,8}\n 2 | 0 | 4 | {9,10,11,12}\n 3 | 0 | 4 | {13,14,15,16}\n 4 | 0 | 4 | {17,18,19,20}\n(5 rows)\n\n\n## 2.4 Compare the compression ratio of different compression algorithms and compression levels\n\n\nUse a new function in pageinspect can compare the compression ratio of different compression algorithms and compression levels.\nThis helps determine what compression parameters to use.\n\n\n- page_compress(page bytea, algorithm text, level integer)\n\n\nexample:\npostgres=# SELECT blk,octet_length(page_compress(get_raw_page('test_compressed', 'main', blk), 'pglz', 0)) compressed_size from generate_series(0,4) blk;\n blk | compressed_size\n-----+-----------------\n 0 | 3234\n 1 | 3516\n 2 | 3515\n 3 | 3515\n 4 | 1571\n(5 rows)\n\n\npostgres=# SELECT blk,octet_length(page_compress(get_raw_page('test_compressed', 'main', blk), 'zstd', 0)) compressed_size from generate_series(0,4) blk;\n blk | compressed_size\n-----+-----------------\n 0 | 1640\n 1 | 1771\n 2 | 1801\n 3 | 1813\n 4 | 806\n(5 rows)\n\n\n\n\n# 3. How to ensure crash safe\nFor the convenience of implementation, when the chunk space is allocated in the compressed address file, \nWAL is not written. Therefore, if postgres crashes during the space allocation process, \nincomplete data may remain in the compressed address file.\n\n\nIn order to ensure the data consistency of the compressed address file, the following measures have been taken\n\n\n1. Divide the compressed address file into several 512-byte areas. 
The address data of each data block is stored in only one area, \n and does not cross the area boundary to prevent half of the addresses from being persistent and the other half of the addresses not being persistent.\n2. When allocating chunk space, write address information in a fixed order in the address file to avoid inconsistent data midway. details as follows\n\n\n -Accumulate the total number of allocated chunks in the Header (PageCompressHeader.allocated_chunks)\n -Write the chunkno array in the address corresponding to the data block (PageCompressAddr.chunknos)\n -Write the number of allocated chunks in the address corresponding to the written data block (PageCompressAddr.nchunks)\n -Update the global number of blocks in the Header (PageCompressHeader.nblocks)\n\n\ntypedef struct PageCompressHeader\n{\n pg_atomic_uint32 nblocks; /* number of total blocks in this segment */\n pg_atomic_uint32 allocated_chunks; /* number of total allocated chunks in data area */\n uint16 chunk_size; /* size of each chunk, must be 1/2 1/4 or 1/8 of BLCKSZ */\n uint8 algorithm; /* compress algorithm, 1=pglz, 2=lz4 */\n pg_atomic_uint32 last_synced_nblocks; /* last synced nblocks */\n pg_atomic_uint32 last_synced_allocated_chunks; /* last synced allocated_chunks */\n TimestampTz last_recovery_start_time; /* postmaster start time of last recovery */\n} PageCompressHeader;\n\n\ntypedef struct PageCompressAddr\n{\n volatile uint8 nchunks; /* number of chunks for this block */\n volatile uint8 allocated_chunks; /* number of allocated chunks for this block */\n\n\n /* variable-length fields, 1 based chunk no array for this block, size of the array must be 2, 4 or 8 */\n pc_chunk_number_t chunknos[FLEXIBLE_ARRAY_MEMBER];\n} PageCompressAddr;\n\n\n3. Once a chunk is allocated, it will always belong to a specific data block until the relation is truncated (or vacuum tail block), \n avoiding frequent changes of address information.\n4.
When replaying WAL in the recovery phase after a postgres crash, check the address file of all compressed relations opened for the first time,\n and repair if inconsistent data (refer to the check_and_repair_compress_address function).\n\n\n\n\n# 4. Problem\n\n\n- When compress_chunk_size=1024, about 4MB of space is needed to store the address,\n which will cause the space of the small file to become larger after compression.\n Therefore, we should avoid enabling compression for small tables.\n- The zstd library needs to be installed separately. Could copy the source code of zstd to postgres?\n\n\n\n\n# 5. TODO list\n\n\n1. docs\n2. optimize code style, error message and so on \n3. more test\n\n\nBTW:\nIf anyone thinks this Patch is valuable, hope to improve it together.\n\n\n\n\nBest Regards\nChen Huajun", "msg_date": "Thu, 10 Dec 2020 05:44:18 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "Hi, hackers\n\n\nI want to know whether this patch can be accepted by the community, that is, whether it is necessary for me to continue working for this Patch. \nIf you have any suggestions, please feedback to me.\n\n\nBest Regards\nChen Huajun", "msg_date": "Tue, 16 Feb 2021 22:45:59 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "> On 16 Feb 2021, at 15:45, chenhj <chjischj@163.com> wrote:\n
\n> If you have any suggestions, please feedback to me.\n\nIt doesn't seem like the patch has been registered in the commitfest app so it\nmay have been forgotten about, the number of proposed patches often outnumber\nthe code review bandwidth. Please register it at:\n\n\thttps://commitfest.postgresql.org/32/\n\n..to make sure it doesn't get lost.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 16 Feb 2021 15:51:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "At 2021-02-16 21:51:14, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n\n>> On 16 Feb 2021, at 15:45, chenhj <chjischj@163.com> wrote:\n>\n>> I want to know whether this patch can be accepted by the community, that is, whether it is necessary for me to continue working for this Patch. \n>> If you have any suggestions, please feedback to me.\n>\n>It doesn't seem like the patch has been registered in the commitfest app so it\n>may have been forgotten about, the number of proposed patches often outnumber\n>the code review bandwidth. Please register it at:\n>\n>\thttps://commitfest.postgresql.org/32/\n>\n>..to make sure it doesn't get lost.\n>\n>--\n\n>Daniel Gustafsson\t\thttps://vmware.com/\n\n\nThanks, I will complete this patch and registered it later.\nChen Huajun\nAt 2021-02-16 21:51:14, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:>> On 16 Feb 2021, at 15:45, chenhj <chjischj@163.com> wrote:\n>\n>> I want to know whether this patch can be accepted by the community, that is, whether it is necessary for me to continue working for this Patch. \n>> If you have any suggestions, please feedback to me.\n>\n>It doesn't seem like the patch has been registered in the commitfest app so it\n>may have been forgotten about, the number of proposed patches often outnumber\n>the code review bandwidth. 
Please register it at:\n>\n>\thttps://commitfest.postgresql.org/32/\n>\n>..to make sure it doesn't get lost.\n>\n>--\n>Daniel Gustafsson\t\thttps://vmware.com/Thanks, I will complete this patch and registered it later. Chen Huajun", "msg_date": "Tue, 16 Feb 2021 23:15:36 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:15:36PM +0800, chenhj wrote:\n> At 2021-02-16 21:51:14, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n> \n> >> On 16 Feb 2021, at 15:45, chenhj <chjischj@163.com> wrote:\n> >\n> >> I want to know whether this patch can be accepted by the community, that is, whether it is necessary for me to continue working for this Patch. \n> >> If you have any suggestions, please feedback to me.\n> >\n> >It doesn't seem like the patch has been registered in the commitfest app so it\n> >may have been forgotten about, the number of proposed patches often outnumber\n> >the code review bandwidth. Please register it at:\n> >\n> >\thttps://commitfest.postgresql.org/32/\n> >\n> >..to make sure it doesn't get lost.\n> >\n> >--\n> \n> >Daniel Gustafsson\t\thttps://vmware.com/\n> \n> \n> Thanks, I will complete this patch and registered it later.\n> Chen Huajun\n\nThe simplest way forward is to register it now so it doesn't miss the\nwindow for the upcoming commitfest (CF), which closes at the end of\nthis month. 
That way, everybody has the entire time between now and\nthe end of the CF to review the patch, work on it, etc, and the CF bot\nwill be testing it against the changing code base to ensure people\nknow if such a change causes it to need a rebase.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Thu, 18 Feb 2021 17:12:57 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "Hi hackers,\n\nI have rebased this patch and made some improvements.\n\n\n1. A header is added to each chunk in the pcd file, which records the chunk of which block the chunk belongs to, and the checksum of the chunk.\n\n Accordingly, all pages in a compressed relation are stored in compressed format, even if the compressed page is larger than BLCKSZ.\n\n The maximum space occupied by a compressed page is BLCKSZ + chunk_size (exceeding this range will report an error when writing the page).\n\n2. Repair the pca file through the information recorded in the pcd when recovering from a crash\n\n3. For compressed relation, do not release the free blocks at the end of the relation (just like what old_snapshot_threshold does), reducing the risk of data inconsistency between pcd and pca file.\n\n4. During backup, only check the checksum in the chunk header for the pcd file, and avoid assembling and decompressing chunks into the original page.\n\n5. bugfix, doc, code style and so on\n\n\nAnd see src/backend/storage/smgr/README.compression for detail\n\n\nOther\n\n1. remove support of default compression option in tablespace, I'm not sure about the necessity of this feature, so don't support it for now.\n\n2. pg_rewind currently does not support copying only changed blocks from pcd file.
This feature is relatively independent and could be implemented later.\n\n\n\n\nBest Regards\n\nChen Huajun\n\n\nAt 2021-02-18 23:12:57, \"David Fetter\" <david@fetter.org> wrote:\n>On Tue, Feb 16, 2021 at 11:15:36PM +0800, chenhj wrote:\n>> At 2021-02-16 21:51:14, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n>> \n>> >> On 16 Feb 2021, at 15:45, chenhj <chjischj@163.com> wrote:\n>> >\n>> >> I want to know whether this patch can be accepted by the community, that is, whether it is necessary for me to continue working for this Patch. \n>> >> If you have any suggestions, please feedback to me.\n>> >\n>> >It doesn't seem like the patch has been registered in the commitfest app so it\n>> >may have been forgotten about, the number of proposed patches often outnumber\n>> >the code review bandwidth. Please register it at:\n>> >\n>> >\thttps://commitfest.postgresql.org/32/\n>> >\n>> >..to make sure it doesn't get lost.\n>> >\n>> >--\n>> \n>> >Daniel Gustafsson\t\thttps://vmware.com/\n>> \n>> \n>> Thanks, I will complete this patch and registered it later.\n>> Chen Huajun\n>\n>The simplest way forward is to register it now so it doesn't miss the\n>window for the upcoming commitfest (CF), which closes at the end of\n>this month.
That way, everybody has the entire time between now and\n>the end of the CF to review the patch, work on it, etc, and the CF bot\n>will be testing it against the changing code base to ensure people\n>know if such a change causes it to need a rebase.\n>\n>Best,\n>David.\n>-- \n>David Fetter <david(at)fetter(dot)org> http://fetter.org/\n>Phone: +1 415 235 3778\n>\n>Remember to vote!\n>Consider donating to Postgres: http://www.postgresql.org/about/donate", "msg_date": "Wed, 27 Jul 2022 01:47:04 +0800 (CST)", "msg_from": "chenhj <chjischj@163.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "On Wed, 27 Jul 2022 at 02:47, chenhj <chjischj@163.com> wrote:\n>\n> Hi hackers,\n>\n> I have rebase this patch and made some improvements.\n>\n>\n> 1. A header is added to each chunk in the pcd file, which records the chunk of which block the chunk belongs to, and the checksum of the chunk.\n>\n> Accordingly, all pages in a compressed relation are stored in compressed format, even if the compressed page is larger than BLCKSZ.\n>\n> The maximum space occupied by a compressed page is BLCKSZ + chunk_size (exceeding this range will report an error when writing the page).\n>\n> 2. Repair the pca file through the information recorded in the pcd when recovering from a crash\n>\n> 3. For compressed relation, do not release the free blocks at the end of the relation (just like what old_snapshot_threshold does), reducing the risk of data inconsistency between pcd and pca file.\n>\n> 4. During backup, only check the checksum in the chunk header for the pcd file, and avoid assembling and decompressing chunks into the original page.\n>\n> 5. bugfix, doc, code style and so on\n>\n>\n> And see src/backend/storage/smgr/README.compression for detail\n>\n>\n> Other\n>\n> 1. remove support of default compression option in tablespace, I'm not sure about the necessity of this feature, so don't support it for now.\n>\n> 2.
pg_rewind currently does not support copying only changed blocks from pcd file. This feature is relatively independent and could be implemented later.\n\nHi\n\ncfbot reports the patch no longer applies. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\nThanks\n\nIan Barwick", "msg_date": "Fri, 4 Nov 2022 10:32:14 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" }, { "msg_contents": "On Fri, 4 Nov 2022 at 07:02, Ian Lawrence Barwick <barwick@gmail.com> wrote:\n>\n> On Wed, 27 Jul 2022 at 02:47, chenhj <chjischj@163.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > I have rebase this patch and made some improvements.\n> >\n> >\n> > 1. A header is added to each chunk in the pcd file, which records the chunk of which block the chunk belongs to, and the checksum of the chunk.\n> >\n> > Accordingly, all pages in a compressed relation are stored in compressed format, even if the compressed page is larger than BLCKSZ.\n> >\n> > The maximum space occupied by a compressed page is BLCKSZ + chunk_size (exceeding this range will report an error when writing the page).\n> >\n> > 2. Repair the pca file through the information recorded in the pcd when recovering from a crash\n> >\n> > 3. For compressed relation, do not release the free blocks at the end of the relation (just like what old_snapshot_threshold does), reducing the risk of data inconsistency between pcd and pca file.\n> >\n> > 4. During backup, only check the checksum in the chunk header for the pcd file, and avoid assembling and decompressing chunks into the original page.\n> >\n> > 5. bugfix, doc, code style and so on\n> >\n> >\n> > And see src/backend/storage/smgr/README.compression for detail\n> >\n> >\n> > Other\n> >\n> > 1. remove support of default compression option in tablespace, I'm not sure about the necessity of this feature, so don't support it for now.\n> >\n> > 2.
pg_rewind currently does not support copying only changed blocks from pcd file. This feature is relatively independent and could be implemented later.\n>\n> Hi\n>\n> cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n\nThere have been no updates on this thread for some time, so this has\nbeen switched to Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:19:30 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Page Compression for OLTP" } ]
[ { "msg_contents": "Hi,\n\nAttached is a patch to use perfect hashing to speed up Unicode\nnormalization quick check.\n\n0001 changes the set of multipliers attempted when generating the hash\nfunction. The set in HEAD works for the current set of NFC codepoints,\nbut not for the other types. Also, the updated multipliers now all\ncompile to shift-and-add on most platform/compiler combinations\navailable on godbolt.org (earlier experiments found in [1]). The\nexisting keyword lists are fine with the new set, and don't seem to be\nvery picky in general. As a test, it also successfully finds a\nfunction for the OS \"words\" file, the \"D\" sets of codepoints, and for\nsets of the first n built-in OIDs, where n > 5.\n\n0002 builds on top of the existing normprops infrastructure to use a\nhash function for NFC quick check. Below are typical numbers in a\nnon-assert build:\n\nselect count(*) from (select md5(i::text) as t from\ngenerate_series(1,100000) as i) s where t is nfc normalized;\n\nHEAD 411ms 413ms 409ms\npatch 296ms 297ms 299ms\n\nThe addition of \"const\" was to silence a compiler warning. Also, I\nchanged the formatting of the output file slightly to match pgindent.\n\n0003 uses hashing for NFKC and removes binary search. This is split\nout for readability. I gather NFKC is a less common use case, so this\ncould technically be left out. 
Since this set is larger, the\nperformance gains are a bit larger as well, at the cost of 19kB of\nbinary space:\n\nHEAD 439ms 440ms 442ms\npatch 299ms 301ms 301ms\n\nI'll add this to the July commitfest.\n\n[1] https://www.postgresql.org/message-id/CACPNZCuVTiLhxAzXp9uCeHGUyHMa59h6_pmP+_W-SzXG0UyY9w@mail.gmail.com\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 21 May 2020 15:12:06 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "speed up unicode normalization quick check" }, { "msg_contents": "\n\n> On May 21, 2020, at 12:12 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> Hi,\n> \n> Attached is a patch to use perfect hashing to speed up Unicode\n> normalization quick check.\n> \n> 0001 changes the set of multipliers attempted when generating the hash\n> function. The set in HEAD works for the current set of NFC codepoints,\n> but not for the other types. Also, the updated multipliers now all\n> compile to shift-and-add on most platform/compiler combinations\n> available on godbolt.org (earlier experiments found in [1]). The\n> existing keyword lists are fine with the new set, and don't seem to be\n> very picky in general. As a test, it also successfully finds a\n> function for the OS \"words\" file, the \"D\" sets of codepoints, and for\n> sets of the first n built-in OIDs, where n > 5.\n\nPrior to this patch, src/tools/gen_keywordlist.pl is the only script that uses PerfectHash. Your patch adds a second. I'm not convinced that modifying the PerfectHash code directly each time a new caller needs different multipliers is the right way to go. Could you instead make them arguments such that gen_keywordlist.pl, generate-unicode_combining_table.pl, and future callers can pass in the numbers they want? 
Or is there some advantage to having it this way?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 28 May 2020 14:59:52 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Fri, May 29, 2020 at 5:59 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On May 21, 2020, at 12:12 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n\n> > very picky in general. As a test, it also successfully finds a\n> > function for the OS \"words\" file, the \"D\" sets of codepoints, and for\n> > sets of the first n built-in OIDs, where n > 5.\n>\n> Prior to this patch, src/tools/gen_keywordlist.pl is the only script that uses PerfectHash. Your patch adds a second. I'm not convinced that modifying the PerfectHash code directly each time a new caller needs different multipliers is the right way to go.\n\nCalling it \"each time\" with a sample size of two is a bit of a\nstretch. The first implementation made a reasonable attempt to suit\nfuture uses and I simply made it a bit more robust. In the text quoted\nabove you can see I tested some scenarios beyond the current use\ncases, with key set sizes as low as 6 and as high as 250k.\n\n> Could you instead make them arguments such that gen_keywordlist.pl, generate-unicode_combining_table.pl, and future callers can pass in the numbers they want? 
Or is there some advantage to having it this way?\n\nThat is an implementation detail that callers have no business knowing about.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 11:54:39 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "\n\n> On May 28, 2020, at 8:54 PM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Fri, May 29, 2020 at 5:59 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>>> On May 21, 2020, at 12:12 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n>>> very picky in general. As a test, it also successfully finds a\n>>> function for the OS \"words\" file, the \"D\" sets of codepoints, and for\n>>> sets of the first n built-in OIDs, where n > 5.\n>> \n>> Prior to this patch, src/tools/gen_keywordlist.pl is the only script that uses PerfectHash. Your patch adds a second. I'm not convinced that modifying the PerfectHash code directly each time a new caller needs different multipliers is the right way to go.\n\nI forgot in my first round of code review to mention, \"thanks for the patch\". I generally like what you are doing here, and am trying to review it so it gets committed.\n\n> Calling it \"each time\" with a sample size of two is a bit of a\n> stretch. The first implementation made a reasonable attempt to suit\n> future uses and I simply made it a bit more robust. In the text quoted\n> above you can see I tested some scenarios beyond the current use\n> cases, with key set sizes as low as 6 and as high as 250k.\n\nI don't really have an objection to what you did in the patch. 
I'm not going to lose any sleep if it gets committed this way.\n\nThe reason I gave this feedback is that I saved the *kwlist_d.h files generated before applying the patch, and compared them with the same files generated after applying the patch, and noticed a very slight degradation. Most of the files changed without any expansion, but the largest of them, src/common/kwlist_d.h, changed from\n\n static const int16 h[901]\n\nto\n\n static const int16 h[902]\n\nsuggesting that even with your reworking of the parameters for PerfectHash, you weren't able to find a single set of numbers that worked for the two datasets quite as well as different sets of numbers each tailored for a particular data set. I started to imagine that if we wanted to use PerfectHash for yet more stuff, the problem of finding numbers that worked across all N data sets (even if N is only 3 or 4) might be harder still. That's all I was referring to. 901 -> 902 is such a small expansion that it might not be worth worrying about.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 29 May 2020 09:13:12 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Sat, May 30, 2020 at 12:13 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n>\n> I forgot in my first round of code review to mention, \"thanks for the patch\". I generally like what you are doing here, and am trying to review it so it gets committed.\n\nAnd I forgot to say thanks for taking a look!\n\n> The reason I gave this feedback is that I saved the *kwlist_d.h files generated before applying the patch, and compared them with the same files generated after applying the patch, and noticed a very slight degradation. 
Most of the files changed without any expansion, but the largest of them, src/common/kwlist_d.h, changed from\n>\n> static const int16 h[901]\n>\n> to\n>\n> static const int16 h[902]\n\nInteresting, I hadn't noticed. With 450 keywords, we need at least 901\nelements in the table. Since 901 is divisible by the new hash\nmultiplier 17, this gets triggered:\n\n# However, it would be very bad if $nverts were exactly equal to either\n# $hash_mult1 or $hash_mult2: effectively, that hash function would be\n# sensitive to only the last byte of each key. Cases where $nverts is a\n# multiple of either multiplier likewise lose information. (But $nverts\n# can't actually divide them, if they've been intelligently chosen as\n# primes.) We can avoid such problems by adjusting the table size.\nwhile ($nverts % $hash_mult1 == 0\n || $nverts % $hash_mult2 == 0)\n{\n $nverts++;\n}\n\nThis is harmless, and will go away next time we add a keyword.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 30 May 2020 14:52:24 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "Attached is version 4, which excludes the output file from pgindent,\nto match recent commit 74d4608f5. Since it won't be indented again, I\nalso tweaked the generator script to match pgindent for the typedef,\nsince we don't want to lose what pgindent has fixed already. 
This last\npart isn't new to v4, but I thought I'd highlight it anyway.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 18 Sep 2020 12:41:02 -0400", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "> On Sep 18, 2020, at 9:41 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> Attached is version 4, which excludes the output file from pgindent,\n> to match recent commit 74d4608f5. Since it won't be indented again, I\n> also tweaked the generator script to match pgindent for the typedef,\n> since we don't want to lose what pgindent has fixed already. This last\n> part isn't new to v4, but I thought I'd highlight it anyway.\n\n0001 looks ok to me. The change is quite minor. I reviewed it by comparing the assembly generated for perfect hash functions before and after applying the patch.\n\nFor 0001, the assembly code generated from the perfect hash functions in src/common/keywords.s and src/pl/plpgsql/src/pl_scanner.s do not appear to differ in any performance significant way. 
The assembly code generated in src/interfaces/ecpg/preproc/ecpg_keywords.s and src/interfaces/ecpg/preproc/c_keywords.s change enough that I wouldn't try to compare them just by visual inspection.\n\nCompiled using -g -O2\n\nApple clang version 11.0.0 (clang-1100.0.33.17)\nTarget: x86_64-apple-darwin19.6.0\nThread model: posix\nInstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin\n\nI'm attaching the diffs of the old and new assembly files, if anyone cares to look.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 18 Sep 2020 12:41:54 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "\n\n> On Sep 18, 2020, at 9:41 AM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> Attached is version 4, which excludes the output file from pgindent,\n> to match recent commit 74d4608f5. Since it won't be indented again, I\n> also tweaked the generator script to match pgindent for the typedef,\n> since we don't want to lose what pgindent has fixed already. This last\n> part isn't new to v4, but I thought I'd highlight it anyway.\n> \n> --\n> John Naylor https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> <v4-0001-Tweak-the-set-of-candidate-multipliers-for-genera.patch><v4-0002-Use-perfect-hashing-for-NFC-Unicode-normalization.patch><v4-0003-Use-perfect-hashing-for-NKFC-Unicode-normalizatio.patch>\n\n0002 and 0003 look good to me. I like the way you cleaned up a bit with the unicode_norm_props struct, which makes the code a bit more tidy, and on my compiler under -O2 it does not generate any extra runtime dereferences, as the compiler can see through the struct just fine. 
My only concern would be if some other compilers might not see through the struct, resulting in a runtime performance cost? I wouldn't even ask, except that qc_hash_lookup is called in a fairly tight loop.\n\nTo clarify, the following changes to the generated code which remove the struct and corresponding dereferences (not intended as a patch submission) cause zero bytes of change in the compiled output for me on mac/clang, which is good, and generate inconsequential changes on linux/gcc, which is also good, but I wonder if that is true for all compilers. In your commit message for 0001 you mentioned testing on a multiplicity of compilers. Did you do that for 0002 and 0003 as well?\n\ndiff --git a/src/common/unicode_norm.c b/src/common/unicode_norm.c\nindex 1714837e64..976b96e332 100644\n--- a/src/common/unicode_norm.c\n+++ b/src/common/unicode_norm.c\n@@ -476,8 +476,11 @@ qc_compare(const void *p1, const void *p2)\n return (v1 - v2);\n }\n \n-static const pg_unicode_normprops *\n-qc_hash_lookup(pg_wchar ch, const unicode_norm_info * norminfo)\n+static inline const pg_unicode_normprops *\n+qc_hash_lookup(pg_wchar ch,\n+ const pg_unicode_normprops *normprops,\n+ qc_hash_func hash,\n+ int num_normprops)\n {\n int h;\n uint32 hashkey;\n@@ -487,21 +490,21 @@ qc_hash_lookup(pg_wchar ch, const unicode_norm_info * norminfo)\n * in network order.\n */\n hashkey = htonl(ch);\n- h = norminfo->hash(&hashkey);\n+ h = hash(&hashkey);\n \n /* An out-of-range result implies no match */\n- if (h < 0 || h >= norminfo->num_normprops)\n+ if (h < 0 || h >= num_normprops)\n return NULL;\n \n /*\n * Since it's a perfect hash, we need only match to the specific codepoint\n * it identifies.\n */\n- if (ch != norminfo->normprops[h].codepoint)\n+ if (ch != normprops[h].codepoint)\n return NULL;\n \n /* Success! 
*/\n- return &norminfo->normprops[h];\n+ return &normprops[h];\n }\n \n /*\n@@ -518,7 +521,10 @@ qc_is_allowed(UnicodeNormalizationForm form, pg_wchar ch)\n switch (form)\n {\n case UNICODE_NFC:\n- found = qc_hash_lookup(ch, &UnicodeNormInfo_NFC_QC);\n+ found = qc_hash_lookup(ch,\n+ UnicodeNormProps_NFC_QC,\n+ NFC_QC_hash_func,\n+ NFC_QC_num_normprops);\n break;\n case UNICODE_NFKC:\n found = bsearch(&key,\ndiff --git a/src/include/common/unicode_normprops_table.h b/src/include/common/unicode_normprops_table.h\nindex 5e1e382af5..38300cfa12 100644\n--- a/src/include/common/unicode_normprops_table.h\n+++ b/src/include/common/unicode_normprops_table.h\n@@ -13,13 +13,6 @@ typedef struct\n signed int quickcheck:4; /* really UnicodeNormalizationQC */\n } pg_unicode_normprops;\n \n-typedef struct\n-{\n- const pg_unicode_normprops *normprops;\n- qc_hash_func hash;\n- int num_normprops;\n-} unicode_norm_info;\n-\n static const pg_unicode_normprops UnicodeNormProps_NFC_QC[] = {\n {0x0300, UNICODE_NORM_QC_MAYBE},\n {0x0301, UNICODE_NORM_QC_MAYBE},\n@@ -1583,12 +1576,6 @@ NFC_QC_hash_func(const void *key)\n return h[a % 2463] + h[b % 2463];\n }\n \n-static const unicode_norm_info UnicodeNormInfo_NFC_QC = {\n- UnicodeNormProps_NFC_QC,\n- NFC_QC_hash_func,\n- 1231\n-};\n-\n static const pg_unicode_normprops UnicodeNormProps_NFKC_QC[] = {\n {0x00A0, UNICODE_NORM_QC_NO},\n {0x00A8, UNICODE_NORM_QC_NO},\n\n\n--\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 19 Sep 2020 10:46:08 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Sat, Sep 19, 2020 at 1:46 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n\n> 0002 and 0003 look good to me. 
I like the way you cleaned up a bit with the unicode_norm_props struct, which makes the code a bit more tidy, and on my compiler under -O2 it does not generate any extra runtime dereferences, as the compiler can see through the struct just fine. My only concern would be if some other compilers might not see through the struct, resulting in a runtime performance cost? I wouldn't even ask, except that qc_hash_lookup is called in a fairly tight loop.\n\n(I assume you mean unicode_norm_info) Yeah, that usage was copied from\nthe keyword list code. I believe it was done for the convenience of\nthe callers. That is worth something, and so is consistency. That\nsaid, I'd be curious if there is a measurable impact for some\nplatforms.\n\n> In your commit message for 0001 you mentioned testing on a multiplicity of compilers. Did you do that for 0002 and 0003 as well?\n\nFor that, I was simply using godbolt.org to test compiling the\nmultiplications down to shift-and-adds. Very widespread, I only\nremember MSVC as not doing it. I'm not sure a few extra cycles would\nhave been noticeable here, but it can't hurt to have that guarantee.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 19 Sep 2020 18:58:11 -0400", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "\n\n> On Sep 19, 2020, at 3:58 PM, John Naylor <john.naylor@2ndquadrant.com> wrote:\n> \n> On Sat, Sep 19, 2020 at 1:46 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> \n>> 0002 and 0003 look good to me. I like the way you cleaned up a bit with the unicode_norm_props struct, which makes the code a bit more tidy, and on my compiler under -O2 it does not generate any extra runtime dereferences, as the compiler can see through the struct just fine. 
My only concern would be if some other compilers might not see through the struct, resulting in a runtime performance cost? I wouldn't even ask, except that qc_hash_lookup is called in a fairly tight loop.\n> \n> (I assume you mean unicode_norm_info) Yeah, that usage was copied from\n> the keyword list code. I believe it was done for the convenience of\n> the callers. That is worth something, and so is consistency. That\n> said, I'd be curious if there is a measurable impact for some\n> platforms.\n\nRight, unicode_norm_info. I'm not sure the convenience of the callers matters here, since the usage is restricted to just one file, but I also don't have a problem with the code as you have it.\n\n>> In your commit message for 0001 you mentioned testing on a multiplicity of compilers. Did you do that for 0002 and 0003 as well?\n> \n> For that, I was simply using godbolt.org to test compiling the\n> multiplications down to shift-and-adds. Very widespread, I only\n> remember MSVC as not doing it. I'm not sure a few extra cycles would\n> have been noticeable here, but it can't hurt to have that guarantee.\n\nI am marking this ready for committer. I didn't object to the whitespace weirdness in your patch (about which `git apply` grumbles) since you seem to have done that intentionally. I have no further comments on the performance issue, since I don't have any other platforms at hand to test it on. Whichever committer picks this up can decide if the issue matters to them enough to punt it back for further performance testing.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 19 Sep 2020 16:09:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Sat, Sep 19, 2020 at 04:09:27PM -0700, Mark Dilger wrote:\n> I am marking this ready for committer. 
I didn't object to the\n> whitespace weirdness in your patch (about which `git apply`\n> grumbles) since you seem to have done that intentionally. I have no\n> further comments on the performance issue, since I don't have any\n> other platforms at hand to test it on. Whichever committer picks\n> this up can decide if the issue matters to them enough to punt it\n> back for further performance testing.\n\nAbout 0001, the new set of multipliers looks fine to me. Even if this\nadds an extra item from 901 to 902 because 901 can be divided by 17\nin kwlist_d.h, I don't think that this is really much of a bother. As\nmentioned, this impacts none of the other tables, which are much\nsmaller in size, and it comes back to normal once a new keyword is\nadded. Being able to generate perfect hash functions for much\nlarger sets is a nice property to have. While on it, I also looked at\nthe assembly code with gcc -O2 for keywords.c & co and have not\nspotted any huge difference. So I'd like to apply this first if there\nare no objections.\n\nI have tested 0002 and 0003, which had better be merged together in\nthe end, and I can see performance improvements with MSVC and gcc\nsimilar to what is being reported upthread, with 20~30% gains for a\nsimple data sample using IS NFC/NFKC. 
That's cool.\n\nIncluding unicode_normprops_table.h in what gets ignored with pgindent\nis also fine at the end, even with the changes to make the output of\nthe structures generated more in-line with what pgindent generates.\nOne tiny comment I have is that I would have added an extra comment in\nthe unicode header generated to document the set of structures\ngenerated for the perfect hash, but that's easy enough to add.\n--\nMichael", "msg_date": "Wed, 7 Oct 2020 15:18:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Wed, Oct 07, 2020 at 03:18:44PM +0900, Michael Paquier wrote:\n> About 0001, the new set of multipliers looks fine to me. Even if this\n> adds an extra item from 901 to 902 because this can be divided by 17\n> in kwlist_d.h, I also don't think that this is really much bothering\n> and. As mentioned, this impacts none of the other tables that are much\n> smaller in size, on top of coming back to normal once a new keyword\n> will be added. Being able to generate perfect hash functions for much\n> larger sets is a nice property to have. While on it, I also looked at\n> the assembly code with gcc -O2 for keywords.c & co and I have not\n> spotted any huge difference. So I'd like to apply this first if there\n> are no objections.\n\nI looked at this one again today, and applied it. I looked at what\nMSVC compiler was able to do in terms of optimizations with\nshift-and-add for multipliers, and it is by far not as good as gcc or\nclang, applying imul for basically all the primes we could use for the\nperfect hash generation.\n\n> I have tested 0002 and 0003, that had better be merged together at the\n> end, and I can see performance improvements with MSVC and gcc similar\n> to what is being reported upthread, with 20~30% gains for simple\n> data sample using IS NFC/NFKC. 
That's cool.\n\nFor these two, I have merged both together and did some adjustments as\nper the attached. Not many tweaks, mainly some more comments for the\nunicode header files as the number of structures generated gets\nhigher. FWIW, with the addition of the two hash tables,\nlibpgcommon_srv.a grows from 1032600B to 1089240B, which looks like a\nsmall price to pay for the ~30% performance gains with the quick\nchecks.\n--\nMichael", "msg_date": "Thu, 8 Oct 2020 15:48:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Thu, Oct 8, 2020 at 2:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Oct 07, 2020 at 03:18:44PM +0900, Michael Paquier wrote:\n> I looked at this one again today, and applied it. I looked at what\n> MSVC compiler was able to do in terms of optimizationswith\n> shift-and-add for multipliers, and it is by far not as good as gcc or\n> clang, applying imul for basically all the primes we could use for the\n> perfect hash generation.\n>\n\nThanks for picking this up! As I recall, godbolt.org also showed MSVC\nunable to do this optimization.\n\n\n> > I have tested 0002 and 0003, that had better be merged together at the\n> > end, and I can see performance improvements with MSVC and gcc similar\n> > to what is being reported upthread, with 20~30% gains for simple\n> > data sample using IS NFC/NFKC. That's cool.\n>\n> For these two, I have merged both together and did some adjustments as\n> per the attached. Not many tweaks, mainly some more comments for the\n> unicode header files as the number of structures generated gets\n> higher.\n\n\nLooks fine overall, but one minor nit: I'm curious why you made a separate\nsection in the pgindent exclusions. 
The style in that file seems to be one\ncomment per category.\n\n--\nJohn Naylor\n\nOn Thu, Oct 8, 2020 at 2:48 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Oct 07, 2020 at 03:18:44PM +0900, Michael Paquier wrote:I looked at this one again today, and applied it.  I looked at what\nMSVC compiler was able to do in terms of optimizationswithshift-and-add for multipliers, and it is by far not as good as gcc or\nclang, applying imul for basically all the primes we could use for the\nperfect hash generation.Thanks for picking this up! As I recall, godbolt.org also showed MSVC unable to do this optimization. \n> I have tested 0002 and 0003, that had better be merged together at the\n> end, and I can see performance improvements with MSVC and gcc similar\n> to what is being reported upthread, with 20~30% gains for simple\n> data sample using IS NFC/NFKC.  That's cool.\n\nFor these two, I have merged both together and did some adjustments as\nper the attached.  Not many tweaks, mainly some more comments for the\nunicode header files as the number of structures generated gets\nhigher.  Looks fine overall, but one minor nit: I'm curious why you made a separate section in the pgindent exclusions. The style in that file seems to be one comment per category. --John Naylor", "msg_date": "Thu, 8 Oct 2020 04:52:18 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Thu, Oct 08, 2020 at 04:52:18AM -0400, John Naylor wrote:\n> Looks fine overall, but one minor nit: I'm curious why you made a separate\n> section in the pgindent exclusions. 
The style in that file seems to be one\n> comment per category.\n\nBoth parts indeed use PerfectHash.pm, but are generated by different\nscripts so that looked better as separate items.\n--\nMichael", "msg_date": "Thu, 8 Oct 2020 21:29:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Thu, Oct 8, 2020 at 8:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Oct 08, 2020 at 04:52:18AM -0400, John Naylor wrote:\n> > Looks fine overall, but one minor nit: I'm curious why you made a\n> separate\n> > section in the pgindent exclusions. The style in that file seems to be\n> one\n> > comment per category.\n>\n> Both parts indeed use PerfectHash.pm, but are generated by different\n> scripts so that looked better as separate items.\n\n\nOkay, thanks.\n\n--\nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Thu, Oct 8, 2020 at 8:29 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Oct 08, 2020 at 04:52:18AM -0400, John Naylor wrote:\n> Looks fine overall, but one minor nit: I'm curious why you made a separate\n> section in the pgindent exclusions. The style in that file seems to be one\n> comment per category.\n\nBoth parts indeed use PerfectHash.pm, but are generated by different\nscripts so that looked better as separate items.Okay, thanks.--John NaylorEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Thu, 8 Oct 2020 18:22:39 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Thu, Oct 08, 2020 at 06:22:39PM -0400, John Naylor wrote:\n> Okay, thanks.\n\nAnd applied. 
I did some more micro benchmarking with the quick\nchecks, and the numbers are cool, close to what you mentioned for the\nquick checks of both NFC and NFKC.\n\nJust wondering about something in the same area, did you look at the\ndecomposition table?\n--\nMichael", "msg_date": "Sun, 11 Oct 2020 19:27:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Sun, 11 Oct 2020 at 19:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 08, 2020 at 06:22:39PM -0400, John Naylor wrote:\n> > Okay, thanks.\n>\n> And applied.\n\nThe following warning recently started to be shown in my\nenvironment(FreeBSD clang 8.0.1). Maybe it is relevant with this\ncommit:\n\nunicode_norm.c:478:12: warning: implicit declaration of function\n'htonl' is invalid in C99 [-Wimplicit-function-declaration]\n hashkey = htonl(ch);\n ^\n1 warning generated.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\n\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Oct 2020 14:43:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 12, 2020 at 02:43:06PM +0900, Masahiko Sawada wrote:\n> The following warning recently started to be shown in my\n> environment(FreeBSD clang 8.0.1). Maybe it is relevant with this\n> commit:\n> \n> unicode_norm.c:478:12: warning: implicit declaration of function\n> 'htonl' is invalid in C99 [-Wimplicit-function-declaration]\n> hashkey = htonl(ch);\n> ^\n\nThanks, it is of course relevant to this commit. None of the\nBSD animals complain here. So, while it would be tempting to have an\nextra include with arpa/inet.h, I think that it would be better to\njust use pg_hton32() in pg_bswap.h, as per the attached. 
Does that\ntake care of your problem?\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 15:27:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, 12 Oct 2020 at 15:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 12, 2020 at 02:43:06PM +0900, Masahiko Sawada wrote:\n> > The following warning recently started to be shown in my\n> > environment(FreeBSD clang 8.0.1). Maybe it is relevant with this\n> > commit:\n> >\n> > unicode_norm.c:478:12: warning: implicit declaration of function\n> > 'htonl' is invalid in C99 [-Wimplicit-function-declaration]\n> > hashkey = htonl(ch);\n> > ^\n>\n> Thanks, it is of course relevant to this commit. None of the\n> BSD animals complain here. So, while it would be tempting to have an\n> extra include with arpa/inet.h, I think that it would be better to\n> just use pg_hton32() in pg_bswap.h, as per the attached. Does that\n> take care of your problem?\n\nThank you for the patch!\n\nYes, this patch resolves the problem.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Oct 2020 15:39:51 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Sun, Oct 11, 2020 at 6:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> And applied. 
I did some more micro benchmarking with the quick\n> checks, and the numbers are cool, close to what you mentioned for the\n> quick checks of both NFC and NFKC.\n>\n\nThanks for confirming.\n\n\n> Just wondering about something in the same area, did you look at the\n> decomposition table?\n\n\nHmm, I hadn't actually, but now that you mention it, that looks worth\noptimizing as well, since there are multiple callers that search that\ntable -- thanks for the reminder. The attached patch was easy to whip up,\nbeing similar to the quick check (doesn't include the pg_hton32 fix). I'll\ndo some performance testing soon. Note that a 25kB increase in size could\nbe present in frontend binaries as well in this case. 
While looking at\n> decomposition, I noticed that recomposition does a linear search through\n> all 6600+ entries, although it seems only about 800 are valid for that.\n> That could be optimized as well now, since with hashing we have more\n> flexibility in the ordering and can put the recomp-valid entries in front.\n> I'm not yet sure if it's worth the additional complexity. I'll take a look\n> and start a new thread.\n\nStarting a new thread for this topic sounds like a good idea to me,\nwith a separate performance analysis. Thanks!\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 20:29:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 12, 2020 at 03:39:51PM +0900, Masahiko Sawada wrote:\n> Yes, this patch resolves the problem.\n\nOkay, applied then.\n--\nMichael", "msg_date": "Mon, 12 Oct 2020 20:36:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On 2020-10-12 13:36, Michael Paquier wrote:\n> On Mon, Oct 12, 2020 at 03:39:51PM +0900, Masahiko Sawada wrote:\n>> Yes, this patch resolves the problem.\n> \n> Okay, applied then.\n\nCould you adjust the generation script so that the resulting header file \npasses the git whitespace check? 
Check the output of\n\ngit show --check 80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 19 Oct 2020 08:15:56 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 19, 2020 at 08:15:56AM +0200, Peter Eisentraut wrote:\n> On 2020-10-12 13:36, Michael Paquier wrote:\n> > On Mon, Oct 12, 2020 at 03:39:51PM +0900, Masahiko Sawada wrote:\n> > > Yes, this patch resolves the problem.\n> > \n> > Okay, applied then.\n> \n> Could you adjust the generation script so that the resulting header file\n> passes the git whitespace check? Check the output of\n> \n> git show --check 80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f\n\nHmm. Giving up on the left space padding would make the table harder\nto read because the elements would not be aligned anymore across\nmultiple lines, and I'd rather keep 8 elements per lines as we do now.\nThis is generated by this part in PerfectHash.pm, where we apply a\nat most 7 spaces of padding to all the members, except the first one\nof a line that uses 6 spaces at most with two tabs:\n\tfor (my $i = 0; $i < $nhash; $i++)\n\t{\n\t\t$f .= sprintf \"%s%6d,%s\",\n\t\t ($i % 8 == 0 ? \"\\t\\t\" : \" \"),\n\t\t $hashtab[$i],\n\t\t ($i % 8 == 7 ? 
\"\\n\" : \"\");\n\t}\nCould we consider this stuff as a special case in .gitattributes\ninstead?\n--\nMichael", "msg_date": "Mon, 19 Oct 2020 15:57:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 19, 2020 at 2:16 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-10-12 13:36, Michael Paquier wrote:\n> > On Mon, Oct 12, 2020 at 03:39:51PM +0900, Masahiko Sawada wrote:\n> >> Yes, this patch resolves the problem.\n> >\n> > Okay, applied then.\n>\n> Could you adjust the generation script so that the resulting header file\n> passes the git whitespace check? Check the output of\n>\n> git show --check 80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f\n>\n\nMy git manual says:\n\n\"By default, trailing\nwhitespaces (including lines that consist solely of\nwhitespaces) and a space character that is immediately\nfollowed by a tab character inside the initial indent of the\nline are considered whitespace errors.\"\n\nThe above would mean we should have errors for every function whose\nparameters are lined with the opening paren, so I don't see why it would\nfire in this case. Is the manual backwards?\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 19 Oct 2020 10:15:48 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Mon, Oct 19, 2020 at 2:16 AM Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>> Could you adjust the generation script so that the resulting header file\n>> passes the git whitespace check? Check the output of\n>> git show --check 80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f\n\n> My git manual says:\n> ...\n> The above would mean we should have errors for every function whose\n> parameters are lined with the opening paren, so I don't see why it would\n> fire in this case. 
Is the manual backwards?\n\nProbably not, but our whitespace rules are not git's default.\nSee .gitattributes at the top level of a git checkout.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Oct 2020 10:38:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 19, 2020 at 10:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Mon, Oct 19, 2020 at 2:16 AM Peter Eisentraut <\n> > peter.eisentraut@2ndquadrant.com> wrote:\n> >> Could you adjust the generation script so that the resulting header file\n> >> passes the git whitespace check? Check the output of\n> >> git show --check 80f8eb79e24d9b7963eaf17ce846667e2c6b6e6f\n>\n> > My git manual says:\n> > ...\n> > The above would mean we should have errors for every function whose\n> > parameters are lined with the opening paren, so I don't see why it would\n> > fire in this case. Is the manual backwards?\n>\n> Probably not, but our whitespace rules are not git's default.\n> See .gitattributes at the top level of a git checkout\n>\n\nI see, I should have looked for that when Michael mentioned it. We could\nleft-justify instead, as in the attached. If it were up to me, though, I'd\njust format it like pgindent expects, even if not nice looking. It's just a\nbunch of numbers.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 19 Oct 2020 12:12:00 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 19, 2020 at 12:12:00PM -0400, John Naylor wrote:\n> I see, I should have looked for that when Michael mentioned it. We could\n> left-justify instead, as in the attached. 
If it were up to me, though, I'd\n> just format it like pgindent expects, even if not nice looking. It's just a\n> bunch of numbers.\n\nThe aligned numbers have the advantage to make the checks of the code\ngenerated easier, for the contents and the format produced.  So using\na right padding as you are suggesting here rather than a new exception\nin .gitattributes sounds fine to me.  I simplified things a bit as the\nattached, getting rid of the last comma while on it.  Does that look\nfine to you?\n--\nMichael", "msg_date": "Tue, 20 Oct 2020 10:49:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Mon, Oct 19, 2020 at 9:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> The aligned numbers have the advantage to make the checks of the code\n> generated easier, for the contents and the format produced. So using\n> a right padding as you are suggesting here rather than a new exception\n> in .gitattributes sounds fine to me. I simplified things a bit as the\n> attached, getting rid of the last comma while on it. Does that look\n> fine to you?\n>\n\nThis is cleaner, so I'm good with this.\n\n-- \nJohn Naylor\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 20 Oct 2020 05:33:43 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" }, { "msg_contents": "On Tue, Oct 20, 2020 at 05:33:43AM -0400, John Naylor wrote:\n> This is cleaner, so I'm good with this.\n\nThanks.  Applied this way, then.\n--\nMichael", "msg_date": "Wed, 21 Oct 2020 09:24:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: speed up unicode normalization quick check" } ]
[ { "msg_contents": "Hi all,\n\nWhile working on some other logical decoding patch recently, I bumped\ninto the fact that we have special handling for the case of REPLICA\nIDENTITY USING INDEX when the dependent index is dropped, where the\ncode handles that case as an equivalent of NOTHING.\n\nAttached is a patch to add more coverage for that. Any thoughts?\n--\nMichael", "msg_date": "Fri, 22 May 2020 12:50:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Fri, 22 May 2020 at 12:50, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> While working on some other logical decoding patch recently, I bumped\n> into the fact that we have special handling for the case of REPLICA\n> IDENTITY USING INDEX when the dependent index is dropped, where the\n> code handles that case as an equivalent of NOTHING.\n>\n> Attached is a patch to add more coverage for that. Any thoughts?\n\nHow about avoiding such an inconsistent situation? In that case,\nreplica identity works as NOTHING, but pg_class.relreplident is still\n‘i’, confusing users. It seems to me that dropping an index specified\nby REPLICA IDENTITY USING INDEX is not a valid operation.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 16:46:55 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Tue, Jun 02, 2020 at 04:46:55PM +0900, Masahiko Sawada wrote:\n> How about avoiding such an inconsistent situation? In that case,\n> replica identity works as NOTHING, but pg_class.relreplident is still\n> ‘i’, confusing users. 
It seems to me that dropping an index specified\n> by REPLICA IDENTITY USING INDEX is not a valid operation.\n\nThis looks first like complicating RemoveRelations() or the internal\nobject removal APIs with a dedicated lookup at this index's pg_index\ntuple, but you could just put that in index_drop when REINDEX\nCONCURRENTLY is not used. Still, I am not sure if it is worth\ncomplicating those code paths. It would be better to get more\nopinions about that first.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 15:13:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, 3 Jun 2020 at 03:14, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 02, 2020 at 04:46:55PM +0900, Masahiko Sawada wrote:\n> > How about avoiding such an inconsistent situation? In that case,\n> > replica identity works as NOTHING, but pg_class.relreplident is still\n> > ‘i’, confusing users. It seems to me that dropping an index specified\n> > by REPLICA IDENTITY USING INDEX is not a valid operation.\n>\n> This looks first like complicating RemoveRelations() or the internal\n> object removal APIs with a dedicated lookup at this index's pg_index\n> tuple, but you could just put that in index_drop when REINDEX\n> CONCURRENTLY is not used. Still, I am not sure if it is worth\n> complicating those code paths. It would be better to get more\n> opinions about that first.\n>\n>\nConsistency is a good goal. Why don't we clear the relreplident from the\nrelation while dropping the index? relation_mark_replica_identity() already\ndoes that but do other things too. 
Let's move the first code block from\nrelation_mark_replica_identity to another function and call this new\nfunction\nwhile dropping the index.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 3 Jun 2020 12:08:56 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, Jun 03, 2020 at 12:08:56PM -0300, Euler Taveira wrote:\n> Consistency is a good goal. Why don't we clear the relreplident from the\n> relation while dropping the index? relation_mark_replica_identity() already\n> does that but do other things too. 
Let's move the first code block from\n> relation_mark_replica_identity to another function and call this new\n> function\n> while dropping the index.\n\nI have looked at your suggestion, and finished with the attached.\nThere are a couple of things to be aware of:\n- For DROP INDEX CONCURRENTLY, pg_class.relreplident of the parent\ntable is set in the last transaction doing the drop.  It would be\ntempting to reset the flag in the same transaction as the one marking\nthe index as invalid, but there could be a point in reindexing the\ninvalid index whose drop has failed, and this adds more complexity to\nthe patch.\n- relreplident is switched to REPLICA_IDENTITY_NOTHING, which is the\nbehavior we have now after an index is dropped, even if there is a\nprimary key.\n- The CCI done even if ri_type is not updated in index_drop() may look\nuseless, but the CCI will happen in any case as switching the replica\nidentity of a table to NOTHING resets pg_index.indisreplident for an\nindex previously used. \n\nThoughts?\n--\nMichael", "msg_date": "Mon, 17 Aug 2020 15:59:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "Hi,\n\nI couldn't test the patch as it does not apply cleanly on master.\n\nPlease find below some review comments:\n\n1. Would it better to throw a warning at the time of dropping the \nREPLICA IDENTITY\n\nindex that it would also  drop the REPLICA IDENTITY of the parent table?\n\n\n2. 
CCI is used after calling SetRelationReplIdent from index_drop() but not\n\nafter following call\n\nrelation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,\n                                                          bool is_internal)\n\n+       /* update relreplident if necessary */\n+       SetRelationReplIdent(RelationGetRelid(rel), ri_type);\n\n3.   I think the former variable names `pg_class_tuple` and \n`pg_class_form`provided better clarity\n  +       HeapTuple tuple;\n\n  +       Form_pg_class classtuple;\n\n> - relreplident is switched to REPLICA_IDENTITY_NOTHING, which is the\n> behavior we have now after an index is dropped, even if there is a\n> primary key.\n\n4. I am not aware of the reason of the current behavior, however it \nseems better\n\nto switch to REPLICA_IDENTITY_DEFAULT. Can't think of a reason why user \nwould not be\n\nfine with using primary key as replica identity when replica identity \nindex is dropped\n\n\nThank you,", "msg_date": "Wed, 19 Aug 2020 17:33:36 +0530", "msg_from": "Rahila Syed <rahila.syed@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, Aug 19, 2020 at 05:33:36PM +0530, Rahila Syed wrote:\n> I couldn't test the patch as it does not apply cleanly on master.\n\nThe CF bot is green, and I can apply v2 cleanly FWIW:\nhttp://commitfest.cputube.org/michael-paquier.html\n\n> Please find below some review comments:\n> \n> 1. Would it better to throw a warning at the time of dropping the REPLICA\n> IDENTITY index that it would also  drop the REPLICA IDENTITY of the\n> parent table?\n\nNot sure that's worth it.  Updating relreplident is a matter of\nconsistency.\n\n> 2. CCI is used after calling SetRelationReplIdent from index_drop() but not\n> after following call\n> \n> relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,\n>                                                           bool is_internal)\n> \n> +       /* update relreplident if necessary */\n> +       SetRelationReplIdent(RelationGetRelid(rel), ri_type);\n\nYes, not having a CCI here is the intention, because it is not\nnecessary.  That's not the case in index_drop() where the pg_class\nentry of the parent table gets updated afterwards.\n\n> 3.   I think the former variable names `pg_class_tuple` and\n> `pg_class_form`provided better clarity\n>  +       HeapTuple tuple;\n> \n>  +       Form_pg_class classtuple;\n\nThis is chosen to be consistent with SetRelationHasSubclass().\n\n> 4. I am not aware of the reason of the current behavior, however it seems\n> better\n> \n> to switch to REPLICA_IDENTITY_DEFAULT. 
So, for now, I have applied the new decoding\ntests with a first commit, and attached is an updated patch which\nincludes tests in the main regression test suite for\nreplica_identity.sql, which is more consistent with the rest as that's\nthe main place where we look at the state of pg_class.relreplident.\nI'll go through this part again tomorrow, it is late here.\n--\nMichael", "msg_date": "Wed, 26 Aug 2020 21:08:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, Aug 26, 2020 at 09:08:51PM +0900, Michael Paquier wrote:\n> I'll go through this part again tomorrow, it is late here.\n\nThere is actually a huge gotcha here that exists since replica\nidentities exist: relation_mark_replica_identity() only considers\nvalid indexes! So, on HEAD, if DROP INDEX CONCURRENTLY fails in the\nsecond transaction doing the physical drop, then we would finish with\na catalog entry that has indisreplident and !indisinvalid. If you\nreindex it afterwards and switch the index back to be valid, there\ncan be more than one valid index marked with indisreplident, which\nmesses up with the logic in tablecmds.c because we use\nRelationGetIndexList(), that discards invalid indexes. This case is\nactually rather similar to the handling of indisclustered.\n\nSo we have a set of two issues here:\n1) indisreplident should be switched to false when we clear the valid\nflag of an index for INDEX_DROP_CLEAR_VALID. This issue exists since\n9.4.\n2) To never finish in a state where we have REPLICA IDENTITY INDEX\nwithout an index marked as indisreplident, we need to reset the\nreplident of the parent table in the same transaction as the one\nclearing indisvalid for the concurrent case. That's a problem of the\npatch proposed upthread.\n\nAttached is a patch for 1) and 2) grouped together, to ease review for\nnow. 
I think that we had better fix 1) separately though, so I am\ngoing to start a new thread about that with a separate patch as the\ncurrent thread is misleading.\n--\nMichael", "msg_date": "Thu, 27 Aug 2020 11:28:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Thu, Aug 27, 2020 at 11:28:35AM +0900, Michael Paquier wrote:\n> Attached is a patch for 1) and 2) grouped together, to ease review for\n> now. I think that we had better fix 1) separately though, so I am\n> going to start a new thread about that with a separate patch as the\n> current thread is misleading.\n\nA fix for consistency problem with indisreplident and invalid indexes\nhas been committed as of 9511fb37. Attached is a rebased patch, where\nI noticed two incorrect things:\n- The comment of REPLICA_IDENTITY_INDEX is originally incorrect. If\nthe index is dropped, the replica index switches back to nothing.\n- relreplident was getting reset one transaction too late, when the\nold index is marked as dead.\n--\nMichael", "msg_date": "Sun, 30 Aug 2020 16:05:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "Hi Michael,\n\nThanks for updating the patch.\n\nPlease see following comments,\n\n1.\n\n ������� */\n �������� if (resetreplident)\n �������� {\n ��������������� SetRelationReplIdent(heapId, REPLICA_IDENTITY_NOTHING);\n\n ��������������� /* make the update visible, only for the non-concurrent \ncase */\n ��������������� if (!concurrent)\n ��������������������� CommandCounterIncrement();\n ��� � }\n\n �������� /* continue the concurrent process */\n �������� if (concurrent)\n �������� {\n ���������������� PopActiveSnapshot();\n ���������������� CommitTransactionCommand();\n ���������������� StartTransactionCommand();\n\nNow, the 
relreplident is being set in the transaction previous to\nthe one marking index as invalid for concurrent drop. Won't\nit cause issues with relreplident cleared but index not dropped,\nif drop index concurrently fails in the small window after\ncommit transaction in above snippet and before the start transaction above?\n\nAlthough, it seems like a really small window.\n\n2.� I have following suggestion for the test.\n\nTo the existing partitioned table test, can we add a test to demonstrate\nthat relreplident is set to 'n' for even the individual partitions.\nI mean explicitly setting replica identity index for individual partitions\n\nALTER TABLE part1 REPLICA IDENTITY\nUSING INDEX test_replica_part_index_part_1;\n\nand checking for relreplident for individual partition after parent \nindex is dropped.\n\nSELECT relreplident FROM pg_class WHERE oid = 'part1'::regclass;\n\n\n3.\n\n � +* Again, commit the transaction to make the pg_index and pg_class\n � +��������������� * (in the case where indisreplident is set) updates \nare visible to\n � +��������������� * other sessions.\n\nTypo, s/updates are visible/updates visible\n\n\n4. I am wondering with the recent change to move the SetRelationRepldent\ntogether with invalidating index, whether your initial concern stated as \nfollows\nbecomes valid.\n\n- For DROP INDEX CONCURRENTLY, pg_class.relreplident of the parent\ntable is set in the last transaction doing the drop.� It would be\ntempting to reset the flag in the same transaction as the one marking\nthe index as invalid, but there could be a point in reindexing the\ninvalid index whose drop has failed, and this addsmorecomplexity to\nthe patch.\n\nI will try to test that.\n\n\nThank you,\n\nRahila Syed\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Michael, \n\nThanks for updating the patch. 
\n\nPlease see following comments,\n1.\n\n������� */\n �������� if (resetreplident)\n �������� {\n ��������������� SetRelationReplIdent(heapId,\n REPLICA_IDENTITY_NOTHING);\n\n ��������������� /* make the update visible, only for the\n non-concurrent case */\n ��������������� if (!concurrent)\n ��������������������� CommandCounterIncrement();\n ��� � }\n �\n �������� /* continue the concurrent process */\n �������� if (concurrent)\n �������� {\n ���������������� PopActiveSnapshot();\n ���������������� CommitTransactionCommand();\n ���������������� StartTransactionCommand();\n\n Now, the relreplident is being set in the transaction previous to\n the one marking index as invalid for concurrent drop. Won't\n it cause issues with relreplident cleared but index not dropped, \n if drop index concurrently fails in the small window after� \n commit transaction in above snippet and before the start transaction\n above?\n Although, it seems like a really small window. \n\n2.� I have following suggestion for the test.\n To the existing partitioned table test, can we add a test to\n demonstrate \n that relreplident is set to 'n' for even the individual partitions.\n \n I mean explicitly setting replica identity index for individual\n partitions\nALTER TABLE part1 REPLICA IDENTITY\n USING INDEX test_replica_part_index_part_1;\n\nand checking for relreplident for individual partition after\n parent index is dropped.\n\nSELECT relreplident FROM pg_class WHERE oid =\n 'part1'::regclass;\n\n\n3.��\n\n� +* Again, commit the transaction to make the\n pg_index and pg_class\n � +��������������� * (in the case where indisreplident is set)\n updates are visible to\n � +��������������� * other sessions.\nTypo, s/updates are visible/updates visible\n\n\n\n 4. I am wondering with the recent change to move the\n SetRelationRepldent \n together with invalidating index, whether your initial concern\n stated as follows\n becomes valid. 
\n- For DROP\n INDEX CONCURRENTLY, pg_class.relreplident of the parent\ntable is set\n in the last transaction doing the drop.� It would be\ntempting to\n reset the flag in the same transaction as the one marking\nthe index as\n invalid, but there could be a point in reindexing the\ninvalid index\n whose drop has failed, and this adds�more�complexity to\nthe patch.\nI will try to test that.\n\n\nThank you,\nRahila Syed", "msg_date": "Mon, 31 Aug 2020 10:36:13 +0530", "msg_from": "Rahila <rahila.syed@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Mon, Aug 31, 2020 at 10:36:13AM +0530, Rahila wrote:\n> Now, the relreplident is being set in the transaction previous to\n> the one marking index as invalid for concurrent drop. Won't\n> it cause issues with relreplident cleared but index not dropped,\n> if drop index concurrently fails in the small window after\n> commit transaction in above snippet and before the start transaction above?\n\nArgh. I missed your point that this stuff uses heap_inplace_update(),\nso the update of indisvalid flag is not transactional. The thing is\nthat we won't be able to update the flag consistently with\nrelreplident except if we switch index_set_state_flags() to use a\ntransactional operation here. So, with the current state of affairs,\nit looks like we could just call SetRelationReplIdent() in the\nlast transaction, after the transaction marking the index as dead has\ncommitted (see the top of index_set_state_flags() saying that this\nshould be the last step of a transaction), but that leaves a window\nopen as you say.\n\nOn the other hand, it looks appealing to make index_set_state_flags()\ntransactional. This would solve the consistency problem, and looking\nat the code scanning pg_index, I don't see a reason why we could not\ndo that. 
What do you think?\n\n> To the existing partitioned table test, can we add a test to demonstrate\n> that relreplident is set to 'n' for even the individual partitions.\n\nMakes sense.  It is still important to test the case where a\npartitioned table without leaves is correctly reset IMO.\n\n> Typo, s/updates are visible/updates visible\n\nThanks.\n\n> - For DROP INDEX CONCURRENTLY, pg_class.relreplident of the parent\n> table is set in the last transaction doing the drop.  It would be\n> tempting to reset the flag in the same transaction as the one marking\n> the index as invalid, but there could be a point in reindexing the\n> invalid index whose drop has failed, and this adds more complexity to\n> the patch.\n\nThat's a point I studied, but it makes actually more sense to just\nreset the flag once the index is marked as invalid, similarly to\nindisclustered.  Reindexing an invalid index can have value in some\ncases, but we would have much more problems with the relcache if we\nfinish with two indexes marked as indreplident :(\n--\nMichael", "msg_date": "Mon, 31 Aug 2020 15:23:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": ">\n>\n>\n> On the other hand, it looks appealing to make index_set_state_flags()\n> transactional.  This would solve the consistency problem, and looking\n> at the code scanning pg_index, I don't see a reason why we could not\n> do that.  What do you think?\n>\n\nTBH, I am not sure.  I think it is a reasonable change. It is even\nindicated in the\ncomment above index_set_state_flags() that it can be made transactional.\nAt the same time, probably we can just go ahead with current\ninconsistent update of relisreplident and indisvalid flags. 
Can't see what\nwill break with that.\n\nThank you,\n-- \nRahila Syed\nPerformance Engineer\n2ndQuadrant\nhttp://www.2ndQuadrant.com <http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\nOn the other hand, it looks appealing to make index_set_state_flags()\ntransactional.  This would solve the consistency problem, and looking\nat the code scanning pg_index, I don't see a reason why we could not\ndo that.  What do you think?TBH, I am not sure.  I think it is a reasonable change. It is even indicated in thecomment above index_set_state_flags() that it can be made transactional.At the same time, probably we can just go ahead with currentinconsistent update of relisreplident and indisvalid flags. Can't see what will break with that.  Thank you,-- Rahila SyedPerformance Engineer2ndQuadrant http://www.2ndQuadrant.comPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 2 Sep 2020 09:27:33 +0530", "msg_from": "Rahila Syed <rahila.syed@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, Sep 02, 2020 at 09:27:33AM +0530, Rahila Syed wrote:\n> TBH, I am not sure. I think it is a reasonable change. It is even\n> indicated in the\n> comment above index_set_state_flags() that it can be made transactional.\n> At the same time, probably we can just go ahead with current\n> inconsistent update of relisreplident and indisvalid flags. Can't see what\n> will break with that.\n\nYeah, that's true as well. Still, I would like to see first if people\nare fine with changing this code path to be transactional. 
This way,\nwe will have zero history in the tree where there was a risk for an\ninconsistent window.\n--\nMichael", "msg_date": "Wed, 2 Sep 2020 15:00:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" }, { "msg_contents": "On Wed, Sep 02, 2020 at 03:00:44PM +0900, Michael Paquier wrote:\n> Yeah, that's true as well. Still, I would like to see first if people\n> are fine with changing this code path to be transactional. This way,\n> we will have zero history in the tree where there was a risk for an\n> inconsistent window.\n\nSo, I have begun a thread about that part with a dedicated CF entry:\nhttps://commitfest.postgresql.org/30/2725/\n\nWhile considering more this point, I think that making this code path\ntransactional is the correct way to go, as a first step. Also, as\nthis thread is about adding more tests and that this has been done\nwith fe7fd4e9, I am marking the CF entry as committed. Let's discuss\nthe reset of relreplident for the parent relation when its replica\nidentity index is dropped once the transactional part is fully\ndiscussed, on a new thread.\n--\nMichael", "msg_date": "Sat, 5 Sep 2020 11:36:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: More tests with USING INDEX replident and dropped indexes" } ]
[ { "msg_contents": "Hi hackers,\n\nFor EXISTS SubLink, in some cases the subquery can be reduced to\nconstant TRUE or FALSE, based on the knowledge that it's being used in\nEXISTS(). One such case is when the subquery has aggregates without\nGROUP BY or HAVING, and we know its result is exactly one row, unless\nthat row is discarded by LIMIT/OFFSET. (Greenplum does this.)\n\nFor example:\n\n# explain (costs off) select * from a where exists\n (select avg(i) from b where a.i = b.i);\n QUERY PLAN\n-----------------------------------\n Seq Scan on a\n Filter: (SubPlan 1)\n SubPlan 1\n -> Aggregate\n -> Seq Scan on b\n Filter: (a.i = i)\n(6 rows)\n\nThis query can be reduced to:\n\n# explain (costs off) select * from a where exists\n (select avg(i) from b where a.i = b.i);\n QUERY PLAN\n---------------\n Seq Scan on a\n(1 row)\n\nAnd likewise, for this query below:\n\n# explain (costs off) select * from a where exists\n (select avg(i) from b where a.i = b.i offset 1);\n QUERY PLAN\n-----------------------------------------\n Seq Scan on a\n Filter: (SubPlan 1)\n SubPlan 1\n -> Limit\n -> Aggregate\n -> Seq Scan on b\n Filter: (a.i = i)\n(7 rows)\n\nIt can be reduced to:\n\n# explain (costs off) select * from a where exists\n (select avg(i) from b where a.i = b.i offset 1);\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n(2 rows)\n\nIs it worthwhile to add some codes for such optimization? If so, I can\ntry to propose a patch.\n\nThanks\nRichard\n\nHi hackers,For EXISTS SubLink, in some cases the subquery can be reduced toconstant TRUE or FALSE, based on the knowledge that it's being used inEXISTS(). One such case is when the subquery has aggregates withoutGROUP BY or HAVING, and we know its result is exactly one row, unlessthat row is discarded by LIMIT/OFFSET. 
(Greenplum does this.)For example:# explain (costs off) select * from a where exists                        (select avg(i) from b where a.i = b.i);            QUERY PLAN----------------------------------- Seq Scan on a   Filter: (SubPlan 1)   SubPlan 1     ->  Aggregate           ->  Seq Scan on b                 Filter: (a.i = i)(6 rows)This query can be reduced to:# explain (costs off) select * from a where exists                        (select avg(i) from b where a.i = b.i);  QUERY PLAN--------------- Seq Scan on a(1 row)And likewise, for this query below:# explain (costs off) select * from a where exists                        (select avg(i) from b where a.i = b.i offset 1);               QUERY PLAN----------------------------------------- Seq Scan on a   Filter: (SubPlan 1)   SubPlan 1     ->  Limit           ->  Aggregate                 ->  Seq Scan on b                       Filter: (a.i = i)(7 rows)It can be reduced to:# explain (costs off) select * from a where exists                        (select avg(i) from b where a.i = b.i offset 1);        QUERY PLAN-------------------------- Result   One-Time Filter: false(2 rows)Is it worthwhile to add some codes for such optimization? If so, I cantry to propose a patch.ThanksRichard", "msg_date": "Fri, 22 May 2020 16:26:20 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "About reducing EXISTS sublink" }, { "msg_contents": "On Friday, May 22, 2020, Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Hi hackers,\n>\n> For EXISTS SubLink, in some cases the subquery can be reduced to\n> constant TRUE or FALSE, based on the knowledge that it's being used in\n> EXISTS(). One such case is when the subquery has aggregates without\n> GROUP BY or HAVING, and we know its result is exactly one row, unless\n> that row is discarded by LIMIT/OFFSET. (Greenplum does this.)\n>\n> Is it worthwhile to add some codes for such optimization? 
If so, I can\n> try to propose a patch\n>\n\nWhile the examples clearly demonstrate what you are saying they don’t\nprovide any motivation to do anything about it - adding aggregates and\noffset to an exists subquery seems like poor query design that should be\nfixed by the query writer not by spending hacker cycles optimizing it.\n\nDavid J.\n\nOn Friday, May 22, 2020, Richard Guo <guofenglinux@gmail.com> wrote:Hi hackers,For EXISTS SubLink, in some cases the subquery can be reduced toconstant TRUE or FALSE, based on the knowledge that it's being used inEXISTS(). One such case is when the subquery has aggregates withoutGROUP BY or HAVING, and we know its result is exactly one row, unlessthat row is discarded by LIMIT/OFFSET. (Greenplum does this.)Is it worthwhile to add some codes for such optimization? If so, I cantry to propose a patchWhile the examples clearly demonstrate what you are saying they don’t provide any motivation to do anything about it - adding aggregates and offset to an exists subquery seems like poor query design that should be fixed by the query writer not by spending hacker cycles optimizing it.David J.", "msg_date": "Fri, 22 May 2020 07:59:51 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: About reducing EXISTS sublink" }, { "msg_contents": "On Fri, May 22, 2020 at 10:59 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Friday, May 22, 2020, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>> Hi hackers,\n>>\n>> For EXISTS SubLink, in some cases the subquery can be reduced to\n>> constant TRUE or FALSE, based on the knowledge that it's being used in\n>> EXISTS(). One such case is when the subquery has aggregates without\n>> GROUP BY or HAVING, and we know its result is exactly one row, unless\n>> that row is discarded by LIMIT/OFFSET. (Greenplum does this.)\n>>\n>> Is it worthwhile to add some codes for such optimization? 
If so, I can\n>> try to propose a patch\n>>\n>\n> While the examples clearly demonstrate what you are saying they don’t\n> provide any motivation to do anything about it - adding aggregates and\n> offset to an exists subquery seems like poor query design that should be\n> fixed by the query writer not by spending hacker cycles optimizing it.\n>\n\nThanks David. I have the same concern that this case is not common\nenough to worth hacker cycles.\n\nThanks\nRichard\n\nOn Fri, May 22, 2020 at 10:59 PM David G. Johnston <david.g.johnston@gmail.com> wrote:On Friday, May 22, 2020, Richard Guo <guofenglinux@gmail.com> wrote:Hi hackers,For EXISTS SubLink, in some cases the subquery can be reduced toconstant TRUE or FALSE, based on the knowledge that it's being used inEXISTS(). One such case is when the subquery has aggregates withoutGROUP BY or HAVING, and we know its result is exactly one row, unlessthat row is discarded by LIMIT/OFFSET. (Greenplum does this.)Is it worthwhile to add some codes for such optimization? If so, I cantry to propose a patchWhile the examples clearly demonstrate what you are saying they don’t provide any motivation to do anything about it - adding aggregates and offset to an exists subquery seems like poor query design that should be fixed by the query writer not by spending hacker cycles optimizing it.Thanks David. I have the same concern that this case is not commonenough to worth hacker cycles.ThanksRichard", "msg_date": "Tue, 26 May 2020 10:23:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: About reducing EXISTS sublink" } ]
[ { "msg_contents": "Hi All,\n\nI am getting ERROR when using the \"FOR UPDATE\" clause for the partitioned\ntable. below is a reproducible test case for the same.\n\nCREATE TABLE tbl (c1 INT,c2 TEXT) PARTITION BY LIST (c1);\nCREATE TABLE tbl_null PARTITION OF tbl FOR VALUES IN (NULL);\nCREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES IN (1,2,3);\n\nINSERT INTO tbl SELECT i,i FROM generate_series(1,3) i;\n\nCREATE OR REPLACE FUNCTION func(i int) RETURNS int\nAS $$\nDECLARE\n v_var tbl%ROWTYPE;\n cur CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\nBEGIN\n OPEN cur;\n LOOP\n FETCH cur INTO v_var;\n EXIT WHEN NOT FOUND;\n UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n END LOOP;\n CLOSE cur;\n RETURN 10;\nEND;\n$$ LANGUAGE PLPGSQL;\n\nSELECT func(10);\n\npostgres=# SELECT func(10);\nERROR: cursor \"cur\" does not have a FOR UPDATE/SHARE reference to table\n\"tbl_null\"\nCONTEXT: SQL statement \"UPDATE tbl SET c2='aa' WHERE CURRENT OF cur\"\nPL/pgSQL function func(integer) line 10 at SQL statement\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\nHi All,I am getting ERROR when using the \"FOR UPDATE\" clause for the partitioned table. 
below is a reproducible test case for the same.CREATE TABLE tbl (c1 INT,c2 TEXT) PARTITION BY LIST (c1);CREATE TABLE tbl_null PARTITION OF tbl FOR VALUES IN (NULL);CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES IN (1,2,3);INSERT INTO tbl SELECT i,i FROM generate_series(1,3) i;CREATE OR REPLACE FUNCTION func(i int) RETURNS intAS $$DECLARE  v_var tbl%ROWTYPE; cur CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;BEGIN OPEN cur; LOOP \tFETCH cur INTO v_var; \tEXIT WHEN NOT FOUND; \tUPDATE tbl SET c2='aa' WHERE CURRENT OF cur; END LOOP; CLOSE cur; RETURN 10;END; $$ LANGUAGE PLPGSQL;SELECT func(10);postgres=# SELECT func(10);ERROR:  cursor \"cur\" does not have a FOR UPDATE/SHARE reference to table \"tbl_null\"CONTEXT:  SQL statement \"UPDATE tbl SET c2='aa' WHERE CURRENT OF cur\"PL/pgSQL function func(integer) line 10 at SQL statementThanks & Regards,Rajkumar Raghuwanshi", "msg_date": "Fri, 22 May 2020 17:00:02 +0530", "msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Fri, May 22, 2020 at 5:00 PM Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> wrote:\n\n> Hi All,\n>\n> I am getting ERROR when using the \"FOR UPDATE\" clause for the partitioned\n> table. 
below is a reproducible test case for the same.\n>\n> CREATE TABLE tbl (c1 INT,c2 TEXT) PARTITION BY LIST (c1);\n> CREATE TABLE tbl_null PARTITION OF tbl FOR VALUES IN (NULL);\n> CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES IN (1,2,3);\n>\n> INSERT INTO tbl SELECT i,i FROM generate_series(1,3) i;\n>\n> CREATE OR REPLACE FUNCTION func(i int) RETURNS int\n> AS $$\n> DECLARE\n> v_var tbl%ROWTYPE;\n> cur CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\n> BEGIN\n> OPEN cur;\n> LOOP\n> FETCH cur INTO v_var;\n> EXIT WHEN NOT FOUND;\n> UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n> END LOOP;\n> CLOSE cur;\n> RETURN 10;\n> END;\n> $$ LANGUAGE PLPGSQL;\n>\n> SELECT func(10);\n>\n\nI tried similar things on inherit partitioning as follow and that looks\nfine:\n\nDROP TABLE tbl;\nCREATE TABLE tbl (c1 INT,c2 TEXT);\nCREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);\nCREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);\nINSERT INTO tbl_1 VALUES(generate_series(1,3));\n\npostgres=# SELECT func(10);\n func\n------\n 10\n(1 row)\n\nOn looking further for declarative partition, I found that issue happens\nonly if\nthe partitioning pruning enabled, see this:\n\n-- Execute on original set of test case.\npostgres=# ALTER FUNCTION func SET enable_partition_pruning to off;\nALTER FUNCTION\n\npostgres=# SELECT func(10);\n func\n------\n 10\n(1 row)\n\nI think we need some indication in execCurrentOf() to skip error if the\nrelation\nis pruned. Something like that we already doing for inheriting\npartitioning,\nsee following comment execCurrentOf():\n\n /*\n * This table didn't produce the cursor's current row; some other\n * inheritance child of the same parent must have. Signal caller to\n * do nothing on this table.\n */\n\nRegards,\nAmul\n\nOn Fri, May 22, 2020 at 5:00 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:Hi All,I am getting ERROR when using the \"FOR UPDATE\" clause for the partitioned table. 
below is a reproducible test case for the same.CREATE TABLE tbl (c1 INT,c2 TEXT) PARTITION BY LIST (c1);CREATE TABLE tbl_null PARTITION OF tbl FOR VALUES IN (NULL);CREATE TABLE tbl_1 PARTITION OF tbl FOR VALUES IN (1,2,3);INSERT INTO tbl SELECT i,i FROM generate_series(1,3) i;CREATE OR REPLACE FUNCTION func(i int) RETURNS intAS $$DECLARE  v_var tbl%ROWTYPE; cur CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;BEGIN OPEN cur; LOOP \tFETCH cur INTO v_var; \tEXIT WHEN NOT FOUND; \tUPDATE tbl SET c2='aa' WHERE CURRENT OF cur; END LOOP; CLOSE cur; RETURN 10;END; $$ LANGUAGE PLPGSQL;SELECT func(10); I tried similar things on inherit partitioning as follow and that looks fine:DROP TABLE tbl;CREATE TABLE tbl (c1 INT,c2 TEXT);CREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);CREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);INSERT INTO tbl_1 VALUES(generate_series(1,3));postgres=# SELECT func(10); func ------   10(1 row)On looking further for declarative partition, I found that issue happens only ifthe partitioning pruning enabled, see this:-- Execute on original set of test case.postgres=# ALTER FUNCTION func SET enable_partition_pruning to off;ALTER FUNCTIONpostgres=# SELECT func(10); func ------   10(1 row)I think we need some indication in execCurrentOf() to skip error if the relationis pruned.  Something like that we already doing for inheriting partitioning,see following comment execCurrentOf():        /*           * This table didn't produce the cursor's current row; some other         * inheritance child of the same parent must have.  Signal caller to         * do nothing on this table.         */Regards,Amul", "msg_date": "Fri, 22 May 2020 17:38:41 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." 
}, { "msg_contents": "On Fri, May 22, 2020 at 9:09 PM amul sul <sulamul@gmail.com> wrote:\n> I tried similar things on inherit partitioning as follow and that looks fine:\n>\n> DROP TABLE tbl;\n> CREATE TABLE tbl (c1 INT,c2 TEXT);\n> CREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);\n> CREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);\n> INSERT INTO tbl_1 VALUES(generate_series(1,3));\n>\n> postgres=# SELECT func(10);\n> func\n> ------\n> 10\n> (1 row)\n>\n> On looking further for declarative partition, I found that issue happens only if\n> the partitioning pruning enabled, see this:\n>\n> -- Execute on original set of test case.\n> postgres=# ALTER FUNCTION func SET enable_partition_pruning to off;\n> ALTER FUNCTION\n>\n> postgres=# SELECT func(10);\n> func\n> ------\n> 10\n> (1 row)\n>\n> I think we need some indication in execCurrentOf() to skip error if the relation\n> is pruned. Something like that we already doing for inheriting partitioning,\n> see following comment execCurrentOf():\n>\n> /*\n> * This table didn't produce the cursor's current row; some other\n> * inheritance child of the same parent must have. 
Signal caller to\n> * do nothing on this table.\n> */\n\nActually, if you declare the cursor without FOR SHARE/UPDATE, the case\nwould fail even with traditional inheritance:\n\ndrop table if exists p cascade;\ncreate table p (a int);\ncreate table c (check (a = 2)) inherits (p);\ninsert into p values (1);\ninsert into c values (2);\nbegin;\ndeclare c cursor for select * from p where a = 1;\nfetch c;\nupdate p set a = a where current of c;\nERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\nROLLBACK\n\nWhen there are no RowMarks to use because no FOR SHARE/UPDATE clause\nwas specified when declaring the cursor, execCurrentOf() tries to find\nthe cursor's current table by looking up its Scan node in the plan\ntree but will not find it if it was excluded in the cursor's query.\n\nWith FOR SHARE/UPDATE, it seems to work because the planner delivers\nthe RowMarks of all the children irrespective of whether or not they\nare present in the plan tree itself (something I had complained about\nin past [1]). execCurrentOf() doesn't complain as long as there is a\nRowMark present even if it's never used. For partitioning, the\nplanner doesn't make RowMarks for pruned partitions, so\nexecCurrentOf() can't find one if it's passed a pruned partition's\noid.\n\nI don't see a way to avoid these errors. How does execCurrentOf()\ndistinguish a table that could *never* be present in a cursor from a\ntable that could be had it not been pruned/excluded? 
If it can do\nthat, then give an error for the former and return false for the\nlatter.\n\nI guess the workaround is to declare the cursor such that no\npartitions/children are pruned/excluded.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/468c85d9-540e-66a2-1dde-fec2b741e688%40lab.ntt.co.jp\n\n\n", "msg_date": "Wed, 27 May 2020 16:23:31 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, May 22, 2020 at 9:09 PM amul sul <sulamul@gmail.com> wrote:\n> > I tried similar things on inherit partitioning as follow and that looks fine:\n> >\n> > DROP TABLE tbl;\n> > CREATE TABLE tbl (c1 INT,c2 TEXT);\n> > CREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);\n> > CREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);\n> > INSERT INTO tbl_1 VALUES(generate_series(1,3));\n> >\n> > postgres=# SELECT func(10);\n> > func\n> > ------\n> > 10\n> > (1 row)\n> >\n> > On looking further for declarative partition, I found that issue happens only if\n> > the partitioning pruning enabled, see this:\n> >\n> > -- Execute on original set of test case.\n> > postgres=# ALTER FUNCTION func SET enable_partition_pruning to off;\n> > ALTER FUNCTION\n> >\n> > postgres=# SELECT func(10);\n> > func\n> > ------\n> > 10\n> > (1 row)\n> >\n> > I think we need some indication in execCurrentOf() to skip error if the relation\n> > is pruned. Something like that we already doing for inheriting partitioning,\n> > see following comment execCurrentOf():\n> >\n> > /*\n> > * This table didn't produce the cursor's current row; some other\n> > * inheritance child of the same parent must have. 
Signal caller to\n> > * do nothing on this table.\n> > */\n>\n> Actually, if you declare the cursor without FOR SHARE/UPDATE, the case\n> would fail even with traditional inheritance:\n>\n> drop table if exists p cascade;\n> create table p (a int);\n> create table c (check (a = 2)) inherits (p);\n> insert into p values (1);\n> insert into c values (2);\n> begin;\n> declare c cursor for select * from p where a = 1;\n> fetch c;\n> update p set a = a where current of c;\n> ERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\n> ROLLBACK\n>\n> When there are no RowMarks to use because no FOR SHARE/UPDATE clause\n> was specified when declaring the cursor, execCurrentOf() tries to find\n> the cursor's current table by looking up its Scan node in the plan\n> tree but will not find it if it was excluded in the cursor's query.\n>\n> With FOR SHARE/UPDATE, it seems to work because the planner delivers\n> the RowMarks of all the children irrespective of whether or not they\n> are present in the plan tree itself (something I had complained about\n> in past [1]). execCurrentOf() doesn't complain as long as there is a\n> RowMark present even if it's never used. For partitioning, the\n> planner doesn't make RowMarks for pruned partitions, so\n> execCurrentOf() can't find one if it's passed a pruned partition's\n> oid.\n\nI am missing something in this explanation. WHERE CURRENT OF works on\nthe row that was last fetched from a cursor. How could a pruned\npartition's row be fetched and thus cause this error.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 27 May 2020 17:41:16 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." 
}, { "msg_contents": "On Wed, May 27, 2020 at 9:11 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, May 22, 2020 at 9:09 PM amul sul <sulamul@gmail.com> wrote:\n> > > I tried similar things on inherit partitioning as follow and that looks fine:\n> > >\n> > > DROP TABLE tbl;\n> > > CREATE TABLE tbl (c1 INT,c2 TEXT);\n> > > CREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);\n> > > CREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);\n> > > INSERT INTO tbl_1 VALUES(generate_series(1,3));\n> > >\n> > > postgres=# SELECT func(10);\n> > > func\n> > > ------\n> > > 10\n> > > (1 row)\n> > >\n> > > On looking further for declarative partition, I found that issue happens only if\n> > > the partitioning pruning enabled, see this:\n> > >\n> > > -- Execute on original set of test case.\n> > > postgres=# ALTER FUNCTION func SET enable_partition_pruning to off;\n> > > ALTER FUNCTION\n> > >\n> > > postgres=# SELECT func(10);\n> > > func\n> > > ------\n> > > 10\n> > > (1 row)\n> > >\n> > > I think we need some indication in execCurrentOf() to skip error if the relation\n> > > is pruned. Something like that we already doing for inheriting partitioning,\n> > > see following comment execCurrentOf():\n> > >\n> > > /*\n> > > * This table didn't produce the cursor's current row; some other\n> > > * inheritance child of the same parent must have. 
Signal caller to\n> > > * do nothing on this table.\n> > > */\n> >\n> > Actually, if you declare the cursor without FOR SHARE/UPDATE, the case\n> > would fail even with traditional inheritance:\n> >\n> > drop table if exists p cascade;\n> > create table p (a int);\n> > create table c (check (a = 2)) inherits (p);\n> > insert into p values (1);\n> > insert into c values (2);\n> > begin;\n> > declare c cursor for select * from p where a = 1;\n> > fetch c;\n> > update p set a = a where current of c;\n> > ERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\n> > ROLLBACK\n> >\n> > When there are no RowMarks to use because no FOR SHARE/UPDATE clause\n> > was specified when declaring the cursor, execCurrentOf() tries to find\n> > the cursor's current table by looking up its Scan node in the plan\n> > tree but will not find it if it was excluded in the cursor's query.\n> >\n> > With FOR SHARE/UPDATE, it seems to work because the planner delivers\n> > the RowMarks of all the children irrespective of whether or not they\n> > are present in the plan tree itself (something I had complained about\n> > in past [1]). execCurrentOf() doesn't complain as long as there is a\n> > RowMark present even if it's never used. For partitioning, the\n> > planner doesn't make RowMarks for pruned partitions, so\n> > execCurrentOf() can't find one if it's passed a pruned partition's\n> > oid.\n>\n> I am missing something in this explanation. WHERE CURRENT OF works on\n> the row that was last fetched from a cursor. 
How could a pruned\n> partition's row be fetched and thus cause this error.\n\nSo in Rajkumar's example, the cursor is declared as:\n\nCURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\n\nand the WHERE CURRENT OF query is this:\n\n UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n\nYou can see that the UPDATE will scan all partitions, whereas the\ncursor's query does not.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 27 May 2020 22:21:24 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Fri, May 22, 2020 at 9:09 PM amul sul <sulamul@gmail.com> wrote:\n> > I tried similar things on inherit partitioning as follow and that looks\n> fine:\n> >\n> > DROP TABLE tbl;\n> > CREATE TABLE tbl (c1 INT,c2 TEXT);\n> > CREATE TABLE tbl_null(check (c1 is NULL)) INHERITS (tbl);\n> > CREATE TABLE tbl_1 (check (c1 > 0 and c1 < 4)) INHERITS (tbl);\n> > INSERT INTO tbl_1 VALUES(generate_series(1,3));\n> >\n> > postgres=# SELECT func(10);\n> > func\n> > ------\n> > 10\n> > (1 row)\n> >\n> > On looking further for declarative partition, I found that issue happens\n> only if\n> > the partitioning pruning enabled, see this:\n> >\n> > -- Execute on original set of test case.\n> > postgres=# ALTER FUNCTION func SET enable_partition_pruning to off;\n> > ALTER FUNCTION\n> >\n> > postgres=# SELECT func(10);\n> > func\n> > ------\n> > 10\n> > (1 row)\n> >\n> > I think we need some indication in execCurrentOf() to skip error if the\n> relation\n> > is pruned. Something like that we already doing for inheriting\n> partitioning,\n> > see following comment execCurrentOf():\n> >\n> > /*\n> > * This table didn't produce the cursor's current row; some other\n> > * inheritance child of the same parent must have. 
Signal\n> caller to\n> > * do nothing on this table.\n> > */\n>\n> Actually, if you declare the cursor without FOR SHARE/UPDATE, the case\n> would fail even with traditional inheritance:\n>\n> drop table if exists p cascade;\n> create table p (a int);\n> create table c (check (a = 2)) inherits (p);\n> insert into p values (1);\n> insert into c values (2);\n> begin;\n> declare c cursor for select * from p where a = 1;\n> fetch c;\n> update p set a = a where current of c;\n> ERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\n> ROLLBACK\n>\n>\nI am not sure I understood the point, you'll see the same error with\ndeclarative\npartitioning as well.\n\n\n> When there are no RowMarks to use because no FOR SHARE/UPDATE clause\n> was specified when declaring the cursor, execCurrentOf() tries to find\n> the cursor's current table by looking up its Scan node in the plan\n> tree but will not find it if it was excluded in the cursor's query.\n>\n> With FOR SHARE/UPDATE, it seems to work because the planner delivers\n> the RowMarks of all the children irrespective of whether or not they\n> are present in the plan tree itself (something I had complained about\n> in past [1]). execCurrentOf() doesn't complain as long as there is a\n> RowMark present even if it's never used. For partitioning, the\n> planner doesn't make RowMarks for pruned partitions, so\n> execCurrentOf() can't find one if it's passed a pruned partition's\n> oid.\n>\n>\nRight.\n\n\n> I don't see a way to avoid these errors. How does execCurrentOf()\n> distinguish a table that could *never* be present in a cursor from a\n> table that could be had it not been pruned/excluded? If it can do\n> that, then give an error for the former and return false for the\n> latter.\n>\n\nYeah. 
I haven't thought much about this; I was thinking initially just to\nskip\nerror by assuming that the table that we are looking might have pruned, but\nI am\nnot sure how bad or good approach it is.\n\n\n> I guess the workaround is to declare the cursor such that no\n> partitions/children are pruned/excluded.\n>\n>\nDisabling pruning as well -- at-least for the statement or function.\n\nRegards,\nAmul\n\n\n-- \n> Amit Langote\n> EnterpriseDB: http://www.enterprisedb.com\n>\n> [1]\n> https://www.postgresql.org/message-id/468c85d9-540e-66a2-1dde-fec2b741e688%40lab.ntt.co.jp\n>\n", "msg_date": "Thu, 28 May 2020 10:05:39 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Thu, May 28, 2020 at 1:36 PM amul sul <sulamul@gmail.com> wrote:\n> On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Actually, if you declare the cursor without FOR SHARE/UPDATE, the case\n>> would fail even with traditional inheritance:\n>>\n>> drop table if exists p cascade;\n>> create table p (a int);\n>> create table c (check (a = 2)) inherits (p);\n>> insert into p values (1);\n>> insert into c values (2);\n>> begin;\n>> declare c cursor for select * from p where a = 1;\n>> fetch c;\n>> update p set a = a where current of c;\n>> ERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\n>> ROLLBACK\n>>\n>\n> I am not sure I understood the point, you'll see the same error with\n> declarative\n> partitioning as well.\n\nMy point is that if a table is not present in the cursor's plan, there\nis no way for CURRENT OF to access it. Giving an error in that case\nseems justified.\n\nOTOH, when the CURRENT OF implementation has RowMarks to look at, it\navoids the error for traditional inheritance children due their\ninactive RowMarks being present in the cursor's PlannedStmt. 
I think\nthat's only by accident though.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 May 2020 18:36:00 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Thu, May 28, 2020 at 1:36 PM amul sul <sulamul@gmail.com> wrote:\n> On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I guess the workaround is to declare the cursor such that no\n>> partitions/children are pruned/excluded.\n>\n> Disabling pruning as well -- at-least for the statement or function.\n\nNow *that* is actually a workaround to tell a customer. :)\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 28 May 2020 19:31:37 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." 
}, { "msg_contents": "On Thu, May 28, 2020 at 3:06 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Thu, May 28, 2020 at 1:36 PM amul sul <sulamul@gmail.com> wrote:\n> > On Wed, May 27, 2020 at 12:53 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> Actually, if you declare the cursor without FOR SHARE/UPDATE, the case\n> >> would fail even with traditional inheritance:\n> >>\n> >> drop table if exists p cascade;\n> >> create table p (a int);\n> >> create table c (check (a = 2)) inherits (p);\n> >> insert into p values (1);\n> >> insert into c values (2);\n> >> begin;\n> >> declare c cursor for select * from p where a = 1;\n> >> fetch c;\n> >> update p set a = a where current of c;\n> >> ERROR: cursor \"c\" is not a simply updatable scan of table \"c\"\n> >> ROLLBACK\n> >>\n> >\n> > I am not sure I understood the point, you'll see the same error with\n> declarative\n> > partitioning as well.\n>\n> My point is that if a table is not present in the cursor's plan, there\n> is no way for CURRENT OF to access it. Giving an error in that case\n> seems justified.\n>\n> OTOH, when the CURRENT OF implementation has RowMarks to look at, it\n> avoids the error for traditional inheritance children due their\n> inactive RowMarks being present in the cursor's PlannedStmt. 
I think\n> that's only by accident though.\n>\n\nYeah, make sense, thank you.\n\nRegards,\nAmul", "msg_date": "Thu, 28 May 2020 17:00:40 +0530", "msg_from": "amul sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Wed, May 27, 2020 at 6:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n>\n> So in Rajkumar's example, the cursor is declared as:\n>\n> CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\n>\n> and the WHERE CURRENT OF query is this:\n>\n> UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n\nThanks for the clarification. 
So it looks like we expand UPDATE on\npartitioned table to UPDATE on each partition (inheritance_planner for\nDML) and then execute each of those. If CURRENT OF were to save the\ntable oid or something we could run the UPDATE only on that partition.\nI am possibly shooting in dark, but this puzzles me. And it looks like\nwe can cause wrong rows to be updated in non-partition inheritance\nwhere the ctids match?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 28 May 2020 19:37:55 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Thu, May 28, 2020 at 11:08 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, May 27, 2020 at 6:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > So in Rajkumar's example, the cursor is declared as:\n> >\n> > CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\n> >\n> > and the WHERE CURRENT OF query is this:\n> >\n> > UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n>\n> Thanks for the clarification. So it looks like we expand UPDATE on\n> partitioned table to UPDATE on each partition (inheritance_planner for\n> DML) and then execute each of those. If CURRENT OF were to save the\n> table oid or something we could run the UPDATE only on that partition.\n\nAre you saying that the planner should take into account the state of\nthe cursor specified in WHERE CURRENT OF to determine which of the\ntables to scan for the UPDATE? Note that neither partition pruning\nnor constraint exclusion know that CurrentOfExpr can possibly allow to\nexclude children of the UPDATE target.\n\n> I am possibly shooting in dark, but this puzzles me. And it looks like\n> we can cause wrong rows to be updated in non-partition inheritance\n> where the ctids match?\n\nI don't think that hazard exists, because the table OID is matched\nbefore the TID. 
Consider this example:\n\ndrop table if exists p cascade;\ncreate table p (a int);\ncreate table c (check (a = 2)) inherits (p);\ninsert into p values (1);\ninsert into c values (2);\nbegin;\ndeclare c cursor for select * from p;\nfetch c;\nupdate p set a = a where current of c;\n QUERY PLAN\n------------------------------------------------------------\n Update on p (cost=0.00..8.02 rows=2 width=10)\n Update on p\n Update on c p_1\n -> Tid Scan on p (cost=0.00..4.01 rows=1 width=10)\n TID Cond: CURRENT OF c\n -> Tid Scan on c p_1 (cost=0.00..4.01 rows=1 width=10)\n TID Cond: CURRENT OF c\n(7 rows)\n\nWhenever the TID scan evaluates the CURRENT OF qual, it passes the\ntable being scanned to execCurrentOf(). execCurrentOf() then fetches\nthe ExecRowMark or the ScanState for *that* table from the cursor's\n(\"c\") PlanState via its portal. Only if it confirms that such a\nExecRowMark or a ScanState exists and is valid/active that it returns\nthe TID found therein as the cursor's current TID.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jun 2020 16:14:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On 2020-Jun-03, Amit Langote wrote:\n\n> Are you saying that the planner should take into account the state of\n> the cursor specified in WHERE CURRENT OF to determine which of the\n> tables to scan for the UPDATE? Note that neither partition pruning\n> nor constraint exclusion know that CurrentOfExpr can possibly allow to\n> exclude children of the UPDATE target.\n\nI think from a user POV this is pretty obvious. The user doesn't really\ncare that there are partitions that were pruned, because obviously\nUPDATE WHERE CURRENT OF cannot refer to a tuple in those partitions.\n\n> > I am possibly shooting in dark, but this puzzles me. 
And it looks like\n> > we can cause wrong rows to be updated in non-partition inheritance\n> > where the ctids match?\n> \n> I don't think that hazard exists, because the table OID is matched\n> before the TID.\n\nIt sounds like CURRENT OF should somehow inform pruning that the\npartition OID is to be matched as well. I don't know offhand if this is\neasily implementable, though.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jun 2020 12:39:32 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Wed, Jun 3, 2020 at 12:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, May 28, 2020 at 11:08 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > On Wed, May 27, 2020 at 6:51 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > So in Rajkumar's example, the cursor is declared as:\n> > >\n> > > CURSOR IS SELECT * FROM tbl WHERE c1< 5 FOR UPDATE;\n> > >\n> > > and the WHERE CURRENT OF query is this:\n> > >\n> > > UPDATE tbl SET c2='aa' WHERE CURRENT OF cur;\n> >\n> > Thanks for the clarification. So it looks like we expand UPDATE on\n> > partitioned table to UPDATE on each partition (inheritance_planner for\n> > DML) and then execute each of those. If CURRENT OF were to save the\n> > table oid or something we could run the UPDATE only on that partition.\n>\n> Are you saying that the planner should take into account the state of\n> the cursor specified in WHERE CURRENT OF to determine which of the\n> tables to scan for the UPDATE? Note that neither partition pruning\n> nor constraint exclusion know that CurrentOfExpr can possibly allow to\n> exclude children of the UPDATE target.\n>\n> Yes. 
But it may not be possible to know the value of current of at the\ntime of planning since that need not be a plan time constant. This\npruning has to happen at run time. But as Alvaro has mentioned in his\nreply for a user this is a surprising behaviour and should be fixed.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 12 Jun 2020 17:52:28 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Fri, Jun 12, 2020 at 9:22 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> On Wed, Jun 3, 2020 at 12:44 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Are you saying that the planner should take into account the state of\n> > the cursor specified in WHERE CURRENT OF to determine which of the\n> > tables to scan for the UPDATE? Note that neither partition pruning\n> > nor constraint exclusion know that CurrentOfExpr can possibly allow to\n> > exclude children of the UPDATE target.\n>\n> Yes. But it may not be possible to know the value of current of at the\n> time of planning since that need not be a plan time constant. This\n> pruning has to happen at run time.\n\nGood point about not doing anything at planning time.\n\nI wonder if it wouldn't be okay to simply make execCurrentOf() return\nfalse if it can't find either a row mark or a Scan node in the cursor\nmatching the table being updated/deleted from, instead of giving an\nerror message? I mean what do we gain by erroring out here instead of\nsimply not doing anything? 
Now, it would be nicer if we could do so\nonly if the table being updated/deleted from is a child table, but it\nseems pretty inconvenient to tell that from the bottom of a plan tree\nfrom where execCurrentOf() is called.\n\nThe other option would be to have some bespoke \"pruning\" logic in,\nsay, ExecInitModifyTable() that fetches the current active table from\nthe cursor and processes only the matching child result relation. Or\nmaybe wait until we have run-time pruning for ModifyTable, because the\nresult relation code restructuring required for that will also be\nsomething we'd need for this.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jun 2020 15:14:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." }, { "msg_contents": "On Tue, 16 Jun 2020 at 11:45, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Fri, Jun 12, 2020 at 9:22 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > On Wed, Jun 3, 2020 at 12:44 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > > Are you saying that the planner should take into account the state of\n> > > the cursor specified in WHERE CURRENT OF to determine which of the\n> > > tables to scan for the UPDATE? Note that neither partition pruning\n> > > nor constraint exclusion know that CurrentOfExpr can possibly allow to\n> > > exclude children of the UPDATE target.\n> >\n> > Yes. But it may not be possible to know the value of current of at the\n> > time of planning since that need not be a plan time constant. This\n> > pruning has to happen at run time.\n>\n> Good point about not doing anything at planning time.\n>\n> I wonder if it wouldn't be okay to simply make execCurrentOf() return\n> false if it can't find either a row mark or a Scan node in the cursor\n> matching the table being updated/deleted from, instead of giving an\n> error message? 
I mean what do we gain by erroring out here instead of\n> simply not doing anything? Now, it would be nicer if we could do so\n> only if the table being updated/deleted from is a child table, but it\n> seems pretty inconvenient to tell that from the bottom of a plan tree\n> from where execCurrentOf() is called.\n>\n\nA safe guard from a bug where current of is set to wrong table or\nsomething. Quite rare bug but if we can fix the problem itself removing a\nsafe guard doesn't seem wise.\n\n\n> The other option would be to have some bespoke \"pruning\" logic in,\n> say, ExecInitModifyTable() that fetches the current active table from\n> the cursor and processes only the matching child result relation.\n\n\nlooks better if that works and I don't see a reason why it won't work.\n\n\n> Or\n> maybe wait until we have run-time pruning for ModifyTable, because the\n> result relation code restructuring required for that will also be\n> something we'd need for this.\n>\n>\nI don't see much difference in the final plan with either options.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Tue, 16 Jun 2020 18:17:33 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Getting ERROR with FOR UPDATE/SHARE for partitioned table." } ]
[ { "msg_contents": "Snowball has made a release! With a tag!\n\nI have prepared a patch to update PostgreSQL's copy. (not attached \nhere, 566230 bytes, but see \nhttps://github.com/petere/postgresql/commit/52a6133b58c77ada4210a96e5155cbe4da5e5583)\n\nSince we last updated our copy from their commit date 2019-06-24 and the \nrelease is from 2019-10-02, the changes are pretty small and mostly \nreformatting. But there are three new stemmers: Basque, Catalan, Hindi.\n\nI think some consideration could be given for including this into PG13. \nOtherwise, I'll park it for PG14.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 14:40:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "snowball release" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Snowball has made a release! With a tag!\n> I have prepared a patch to update PostgreSQL's copy.\n\nYeah, this was on my to-do list as well. Thanks for doing it.\n\n> I think some consideration could be given for including this into PG13. \n> Otherwise, I'll park it for PG14.\n\nMeh, I think v14 at this point. It looks more like new features\nthan bug fixes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 10:11:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snowball release" }, { "msg_contents": "On Fri, May 22, 2020 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > Snowball has made a release! With a tag!\n> > I have prepared a patch to update PostgreSQL's copy.\n>\n> Yeah, this was on my to-do list as well. 
Thanks for doing it.\n>\n\n+1\n\n\n>\n> > I think some consideration could be given for including this into PG13.\n> > Otherwise, I'll park it for PG14.\n>\n> Meh, I think v14 at this point. It looks more like new features\n> than bug fixes.\n>\n>\nI would vote for including these new languages. There is no risk to let\npeople try and test them.\n\n\n> regards, tom lane\n>\n>\n>\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nOn Fri, May 22, 2020 at 5:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Snowball has made a release!  With a tag!\n> I have prepared a patch to update PostgreSQL's copy.\n\nYeah, this was on my to-do list as well.  Thanks for doing it.+1 \n\n> I think some consideration could be given for including this into PG13. \n> Otherwise, I'll park it for PG14.\n\nMeh, I think v14 at this point.  It looks more like new features\nthan bug fixes.\nI would vote for including these new languages. There is no risk to let people try and test them. \n                        regards, tom lane\n\n\n-- Postgres Professional: http://www.postgrespro.comThe Russian Postgres Company", "msg_date": "Sat, 23 May 2020 11:21:46 +0300", "msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: snowball release" }, { "msg_contents": "On 2020-05-22 14:40, Peter Eisentraut wrote:\n> Snowball has made a release! With a tag!\n> \n> I have prepared a patch to update PostgreSQL's copy. (not attached\n> here, 566230 bytes, but see\n> https://github.com/petere/postgresql/commit/52a6133b58c77ada4210a96e5155cbe4da5e5583)\n> \n> Since we last updated our copy from their commit date 2019-06-24 and the\n> release is from 2019-10-02, the changes are pretty small and mostly\n> reformatting. 
But there are three new stemmers: Basque, Catalan, Hindi.\n> \n> I think some consideration could be given for including this into PG13.\n> Otherwise, I'll park it for PG14.\n\ncommitted to master\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jun 2020 08:21:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: snowball release" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-05-22 14:40, Peter Eisentraut wrote:\n>> I think some consideration could be given for including this into PG13.\n>> Otherwise, I'll park it for PG14.\n\n> committed to master\n\nHm, I don't see any documentation change in that commit --- don't\nwe have (at least) a list of the stemmers somewhere in the SGML docs?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jun 2020 10:33:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snowball release" }, { "msg_contents": "> On 8 Jun 2020, at 16:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-05-22 14:40, Peter Eisentraut wrote:\n>>> I think some consideration could be given for including this into PG13.\n>>> Otherwise, I'll park it for PG14.\n> \n>> committed to master\n> \n> Hm, I don't see any documentation change in that commit --- don't\n> we have (at least) a list of the stemmers somewhere in the SGML docs?\n\nIIRC we refer to the Snowball site, and only have a list in the \\dFd output,\nbut that can be argued to be an example and not expected to be updated to\nmatch. 
Perhaps we should though?\n\ncheers ./daniel\n\n\n", "msg_date": "Mon, 8 Jun 2020 16:47:01 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: snowball release" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 8 Jun 2020, at 16:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm, I don't see any documentation change in that commit --- don't\n>> we have (at least) a list of the stemmers somewhere in the SGML docs?\n\n> IIRC we refer to the Snowball site, and only have a list in the \\dFd output,\n> but that can be argued to be an example and not expected to be updated to\n> match. Perhaps we should though?\n\nLooking in the commit logs, our past updates 7b925e127 and\nfd582317e just updated that \\dFd sample. So I guess that's the\nminimum expectation. Maybe we should think about having a more\nformal list in the actual Snowball section (12.6.6)?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jun 2020 11:19:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snowball release" }, { "msg_contents": "On 2020-06-08 17:19, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 8 Jun 2020, at 16:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Hm, I don't see any documentation change in that commit --- don't\n>>> we have (at least) a list of the stemmers somewhere in the SGML docs?\n> \n>> IIRC we refer to the Snowball site, and only have a list in the \\dFd output,\n>> but that can be argued to be an example and not expected to be updated to\n>> match. Perhaps we should though?\n> \n> Looking in the commit logs, our past updates 7b925e127 and\n> fd582317e just updated that \\dFd sample. 
So I guess that's the\n> minimum expectation.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jun 2020 22:48:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: snowball release" } ]
[ { "msg_contents": "We didn't get anywhere with making the default authentication method in \na source build anything other than trust. But perhaps we should change \nthe default for password_encryption to nudge people to adopt SCRAM? \nRight now, passwords are still hashed using MD5 by default, unless you \nspecify scram-sha-256 using initdb -A or similar.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 22 May 2020 14:45:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "password_encryption default" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> We didn't get anywhere with making the default authentication method in \n> a source build anything other than trust. But perhaps we should change \n> the default for password_encryption to nudge people to adopt SCRAM? \n> Right now, passwords are still hashed using MD5 by default, unless you \n> specify scram-sha-256 using initdb -A or similar.\n\nI think what that was waiting on was for client libraries to become\nSCRAM-ready. Do we have an idea of the state of play on that side?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 10:13:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Fri, May 22, 2020 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > We didn't get anywhere with making the default authentication method in\n> > a source build anything other than trust. 
But perhaps we should change\n> > the default for password_encryption to nudge people to adopt SCRAM?\n> > Right now, passwords are still hashed using MD5 by default, unless you\n> > specify scram-sha-256 using initdb -A or similar.\n>\n> I think what that was waiting on was for client libraries to become\n> SCRAM-ready. Do we have an idea of the state of play on that side?\n>\n\nIf the summary table on the wiki at\nhttps://wiki.postgresql.org/wiki/List_of_drivers is to be trusted, every\nlisted driver except Swift does.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 22 May 2020 16:31:19 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Fri, May 22, 2020 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > > We didn't get anywhere with making the default authentication method in\n> > > a source build anything other than trust. But perhaps we should change\n> > > the default for password_encryption to nudge people to adopt SCRAM?\n> > > Right now, passwords are still hashed using MD5 by default, unless you\n> > > specify scram-sha-256 using initdb -A or similar.\n> >\n> > I think what that was waiting on was for client libraries to become\n> > SCRAM-ready. Do we have an idea of the state of play on that side?\n> >\n> \n> If the summary table on the wiki at\n> https://wiki.postgresql.org/wiki/List_of_drivers is to be trusted, every\n> listed driver except Swift does.\n\nYes, Katz actually went through and worked with folks to make that\nhappen. I'm +1 on moving the default for password_encryption to be\nscram. 
Even better would be changing the pg_hba.conf default, but I\nthink we still have concerns about that having problems with the\nregression tests and the buildfarm.\n\nThanks,\n\nStephen", "msg_date": "Fri, 22 May 2020 10:46:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Magnus Hagander (magnus@hagander.net) wrote:\n>> On Fri, May 22, 2020 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>>> We didn't get anywhere with making the default authentication method in\n>>>> a source build anything other than trust.\n\n> I'm +1 on moving the default for password_encryption to be\n> scram. Even better would be changing the pg_hba.conf default, but I\n> think we still have concerns about that having problems with the\n> regression tests and the buildfarm.\n\nAs far as that last goes, we *did* get the buildfarm fixed to be all\nv11 scripts, so I thought we were ready to move forward on trying\n09f08930f again. It's too late to consider that for v13, but\nperhaps it'd be reasonable to change the SCRAM default now? 
Not sure.\nPost-beta1 isn't the best time for such things.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 10:59:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Magnus Hagander (magnus@hagander.net) wrote:\n> >> On Fri, May 22, 2020 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> >>>> We didn't get anywhere with making the default authentication method in\n> >>>> a source build anything other than trust.\n> \n> > I'm +1 on moving the default for password_encryption to be\n> > scram. Even better would be changing the pg_hba.conf default, but I\n> > think we still have concerns about that having problems with the\n> > regression tests and the buildfarm.\n> \n> As far as that last goes, we *did* get the buildfarm fixed to be all\n> v11 scripts, so I thought we were ready to move forward on trying\n> 09f08930f again. It's too late to consider that for v13, but\n> perhaps it'd be reasonable to change the SCRAM default now? Not sure.\n\nI feel like it is. I'm not even sure that I agree that it's really too\nlate to consider 09f08930f considering that's it's a pretty minor code\nchange and the up-side to that is having reasonable defaults out of the\nbox, as it were, something we have *long* been derided for.\n\n> Post-beta1 isn't the best time for such things.\n\nIt'd be good to be consistent about this between the packagers and the\nsource builds, imv, and we don't tend to think about that until we have\npackages being built and distributed and used and that ends up being\npost-beta1. If we want that changed then we should go back to having\nalphas..\n\nIn general though, I'm reasonably comfortable with changing of default\nvalues post beta1. 
I do appreciate that not everyone would agree with\nthat, but with all the effort that's put into getting everything working\nwith SCRAM, it'd be a real shame to keep md5 as the default for yet\nanother year and a half..\n\nThanks,\n\nStephen", "msg_date": "Fri, 22 May 2020 11:14:38 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> As far as that last goes, we *did* get the buildfarm fixed to be all\n>> v11 scripts, so I thought we were ready to move forward on trying\n>> 09f08930f again. It's too late to consider that for v13, but\n>> perhaps it'd be reasonable to change the SCRAM default now? Not sure.\n\n> I feel like it is. I'm not even sure that I agree that it's really too\n> late to consider 09f08930f considering that's it's a pretty minor code\n> change and the up-side to that is having reasonable defaults out of the\n> box, as it were, something we have *long* been derided for.\n\nWell, the argument against changing right now is that it would invalidate\nportability testing done against beta1, which users would be justifiably\nupset about.\n\nI'm +1 for changing both of these things as soon as we branch for v14,\nbut I feel like it's a bit late for v13. If we aren't feature-frozen\nnow, when will we be?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 11:34:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> As far as that last goes, we *did* get the buildfarm fixed to be all\n> >> v11 scripts, so I thought we were ready to move forward on trying\n> >> 09f08930f again. 
It's too late to consider that for v13, but\n> >> perhaps it'd be reasonable to change the SCRAM default now? Not sure.\n> \n> > I feel like it is. I'm not even sure that I agree that it's really too\n> > late to consider 09f08930f considering that's it's a pretty minor code\n> > change and the up-side to that is having reasonable defaults out of the\n> > box, as it were, something we have *long* been derided for.\n> \n> Well, the argument against changing right now is that it would invalidate\n> portability testing done against beta1, which users would be justifiably\n> upset about.\n\nI don't think we're in complete agreement about the amount of\nportability testing that's done with our beta source builds. To that\npoint, however, the lack of such testing happening, if there is a lack,\nis on us just as much as anyone else- we should be testing, to the\nextent possible, as many variations of our configuration options as we\ncan across as many platforms as we can in the buildfarm. If a\nnon-default setting doesn't work on one platform or another, that's a\nbug to fix regardless and doesn't really impact the question of \"what\nshould be the default\".\n\n> I'm +1 for changing both of these things as soon as we branch for v14,\n> but I feel like it's a bit late for v13. If we aren't feature-frozen\n> now, when will we be?\n\nI really don't consider changing of defaults to be on the same level as\nimplementation of whole features, even if changing those defaults\nrequires a few lines of code to go with the change.\n\nThanks,\n\nStephen", "msg_date": "Fri, 22 May 2020 11:44:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I'm +1 for changing both of these things as soon as we branch for v14,\n>> but I feel like it's a bit late for v13. 
If we aren't feature-frozen\n>> now, when will we be?\n\n> I really don't consider changing of defaults to be on the same level as\n> implementation of whole features, even if changing those defaults\n> requires a few lines of code to go with the change.\n\nThe buildfarm fiasco with 09f08930f should remind us that changing\ndefaults *does* break things, even if theoretically it shouldn't.\nAt this phase of the v13 cycle, we should be looking to fix bugs,\nnot to break more stuff.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 11:50:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> I'm +1 for changing both of these things as soon as we branch for v14,\n> >> but I feel like it's a bit late for v13. If we aren't feature-frozen\n> >> now, when will we be?\n> \n> > I really don't consider changing of defaults to be on the same level as\n> > implementation of whole features, even if changing those defaults\n> > requires a few lines of code to go with the change.\n> \n> The buildfarm fiasco with 09f08930f should remind us that changing\n> defaults *does* break things, even if theoretically it shouldn't.\n> At this phase of the v13 cycle, we should be looking to fix bugs,\n> not to break more stuff.\n\nSure it does- for the special case of the buildfarm, and that takes\nbuildfarm code to fix. Having users make changes to whatever scripts\nthey're using with PG between major versions is certainly not\nunreasonable, or even between beta and final. 
These things are not set\nin stone at this point, they're the defaults, and it's beta time now,\nnot post release or RC.\n\nIf it breaks for regular users who are using the system properly then we\nwant to know about that and we'd ideally like to get that fixed before\nthe release.\n\nThanks,\n\nStephen", "msg_date": "Fri, 22 May 2020 11:54:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/22/20 11:34 AM, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>> As far as that last goes, we *did* get the buildfarm fixed to be all\n>>> v11 scripts, so I thought we were ready to move forward on trying\n>>> 09f08930f again. It's too late to consider that for v13, but\n>>> perhaps it'd be reasonable to change the SCRAM default now? Not sure.\n> \n>> I feel like it is. I'm not even sure that I agree that it's really too\n>> late to consider 09f08930f considering that's it's a pretty minor code\n>> change and the up-side to that is having reasonable defaults out of the\n>> box, as it were, something we have *long* been derided for.\n> \n> Well, the argument against changing right now is that it would invalidate\n> portability testing done against beta1, which users would be justifiably\n> upset about.\n> \n> I'm +1 for changing both of these things as soon as we branch for v14,\n> but I feel like it's a bit late for v13. If we aren't feature-frozen\n> now, when will we be?\n\nAs someone who is an unabashed SCRAM fan and was hoping the default\nwould be up'd for v13, I would actually +1 making it the default in v14,\ni.e. because 9.5 will be EOL at that point, and as such we both have\nevery* driver supporting SCRAM AND every version of PostgreSQL\nsupporting SCRAM.\n\n(Would I personally love to do it sooner? 
Yes...but I think the stars\nalign for doing it in v14).\n\nJonathan", "msg_date": "Fri, 22 May 2020 15:09:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/22/20 9:09 PM, Jonathan S. Katz wrote:\n> As someone who is an unabashed SCRAM fan and was hoping the default\n> would be up'd for v13, I would actually +1 making it the default in v14,\n> i.e. because 9.5 will be EOL at that point, and as such we both have\n> every* driver supporting SCRAM AND every version of PostgreSQL\n> supporting SCRAM.\n\nWasn't SCRAM introduced in 10?\n-- \nVik Fearing\n\n\n", "msg_date": "Fri, 22 May 2020 22:12:22 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 5/22/20 9:09 PM, Jonathan S. Katz wrote:\n>> As someone who is an unabashed SCRAM fan and was hoping the default\n>> would be up'd for v13, I would actually +1 making it the default in v14,\n>> i.e. because 9.5 will be EOL at that point, and as such we both have\n>> every* driver supporting SCRAM AND every version of PostgreSQL\n>> supporting SCRAM.\n\n> Wasn't SCRAM introduced in 10?\n\nYeah. 
But there's still something to Jonathan's argument, because 9.6\nwill go EOL in November 2021, which is pretty close to when v14 will\nreach public release (assuming we can hold to the typical schedule).\nIf we do it in v13, there'll be a full year where still-supported\nversions of PG can't do SCRAM, implying that clients would likely\nfail to connect to an up-to-date server.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 May 2020 17:21:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/22/20 5:21 PM, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 5/22/20 9:09 PM, Jonathan S. Katz wrote:\n>>> As someone who is an unabashed SCRAM fan and was hoping the default\n>>> would be up'd for v13, I would actually +1 making it the default in v14,\n>>> i.e. because 9.5 will be EOL at that point, and as such we both have\n>>> every* driver supporting SCRAM AND every version of PostgreSQL\n>>> supporting SCRAM.\n> \n>> Wasn't SCRAM introduced in 10?\n> \n> Yeah. But there's still something to Jonathan's argument, because 9.6\n> will go EOL in November 2021, which is pretty close to when v14 will\n> reach public release (assuming we can hold to the typical schedule).\n> If we do it in v13, there'll be a full year where still-supported\n> versions of PG can't do SCRAM, implying that clients would likely\n> fail to connect to an up-to-date server.\n\n^ that's what I meant.\n\nJonathan", "msg_date": "Fri, 22 May 2020 17:23:00 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-22 23:23, Jonathan S. Katz wrote:\n>> Yeah. 
But there's still something to Jonathan's argument, because 9.6\n>> will go EOL in November 2021, which is pretty close to when v14 will\n>> reach public release (assuming we can hold to the typical schedule).\n>> If we do it in v13, there'll be a full year where still-supported\n>> versions of PG can't do SCRAM, implying that clients would likely\n>> fail to connect to an up-to-date server.\n> \n> ^ that's what I meant.\n\nHere is a proposed patch for PG14 then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 25 May 2020 11:45:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/25/20 5:45 AM, Peter Eisentraut wrote:\n> On 2020-05-22 23:23, Jonathan S. Katz wrote:\n>>> Yeah.  But there's still something to Jonathan's argument, because 9.6\n>>> will go EOL in November 2021, which is pretty close to when v14 will\n>>> reach public release (assuming we can hold to the typical schedule).\n>>> If we do it in v13, there'll be a full year where still-supported\n>>> versions of PG can't do SCRAM, implying that clients would likely\n>>> fail to connect to an up-to-date server.\n>>\n>> ^ that's what I meant.\n> \n> Here is a proposed patch for PG14 then.\n\nThis makes me happy :D\n\nI took a look over, it looks good. One question on the initdb.c diff:\n\n-\tif (strcmp(authmethodlocal, \"scram-sha-256\") == 0 ||\n-\t\tstrcmp(authmethodhost, \"scram-sha-256\") == 0)\n-\t{\n-\t\tconflines = replace_token(conflines,\n-\t\t\t\t\t\t\t\t \"#password_encryption = md5\",\n-\t\t\t\t\t\t\t\t \"password_encryption = scram-sha-256\");\n-\t}\n-\n\nWould we reverse this, i.e. if someone chooses authmethodlocal to be\n\"md5\", we would then set \"password_encryption = md5\"?\n\nThanks,\n\nJonathan", "msg_date": "Mon, 25 May 2020 11:57:19 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-25 17:57, Jonathan S. Katz wrote:\n> I took a look over, it looks good. One question on the initdb.c diff:\n> \n> -\tif (strcmp(authmethodlocal, \"scram-sha-256\") == 0 ||\n> -\t\tstrcmp(authmethodhost, \"scram-sha-256\") == 0)\n> -\t{\n> -\t\tconflines = replace_token(conflines,\n> -\t\t\t\t\t\t\t\t \"#password_encryption = md5\",\n> -\t\t\t\t\t\t\t\t \"password_encryption = scram-sha-256\");\n> -\t}\n> -\n> \n> Would we reverse this, i.e. if someone chooses authmethodlocal to be\n> \"md5\", we would then set \"password_encryption = md5\"?\n\nYeah, I was too enthusiastic about removing that. Here is a better patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 May 2020 10:25:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Tue, May 26, 2020 at 10:25:25AM +0200, Peter Eisentraut wrote:\n> Yeah, I was too enthusiastic about removing that. Here is a better patch.\n\n+ as an MD5 hash. (<literal>on</literal> is also accepted, as an alias\n+ for <literal>md5</literal>.) The default is\n+ <literal>scram-sha-256</literal>.\nShouldn't password_encryption = on/true/1/yes be an equivalent of\nscram-sha-256 as the default gets changed?\n--\nMichael", "msg_date": "Wed, 27 May 2020 15:00:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-27 08:00, Michael Paquier wrote:\n> On Tue, May 26, 2020 at 10:25:25AM +0200, Peter Eisentraut wrote:\n>> Yeah, I was too enthusiastic about removing that. Here is a better patch.\n> \n> + as an MD5 hash. 
(<literal>on</literal> is also accepted, as an alias\n> + for <literal>md5</literal>.) The default is\n> + <literal>scram-sha-256</literal>.\n> Shouldn't password_encryption = on/true/1/yes be an equivalent of\n> scram-sha-256 as the default gets changed?\n\nI think these are mostly legacy options anyway, so if we wanted to make \na change, we should remove them.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 May 2020 08:29:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Wed, May 27, 2020 at 8:29 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-05-27 08:00, Michael Paquier wrote:\n> > On Tue, May 26, 2020 at 10:25:25AM +0200, Peter Eisentraut wrote:\n> >> Yeah, I was too enthusiastic about removing that. Here is a better\n> patch.\n> >\n> > + as an MD5 hash. (<literal>on</literal> is also accepted, as an\n> alias\n> > + for <literal>md5</literal>.) The default is\n> > + <literal>scram-sha-256</literal>.\n> > Shouldn't password_encryption = on/true/1/yes be an equivalent of\n> > scram-sha-256 as the default gets changed?\n>\n> I think these are mostly legacy options anyway, so if we wanted to make\n> a change, we should remove them.\n>\n\nSeems like the better choice yeah. Since we're changing the default anyway,\nmaybe now is the time to do that? 
Or if not, maybe have it log an explicit\ndeprecation warning when it loads a config with it?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 27 May 2020 14:56:34 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Wed, May 27, 2020 at 02:56:34PM +0200, Magnus Hagander wrote:\n> Seems like the better choice yeah. Since we're changing the default anyway,\n> maybe now is the time to do that? Or if not, maybe have it log an explicit\n> deprecation warning when it loads a config with it?\n\nNot sure that's worth it here, so I would just remove the whole.
It\nwould be confusing to keep the past values and have them map to\nsomething we think is not an appropriate default.\n--\nMichael", "msg_date": "Wed, 27 May 2020 22:13:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/26/20 4:25 AM, Peter Eisentraut wrote:\n> On 2020-05-25 17:57, Jonathan S. Katz wrote:\n>> I took a look over, it looks good. One question on the initdb.c diff:\n>>\n>> -    if (strcmp(authmethodlocal, \"scram-sha-256\") == 0 ||\n>> -        strcmp(authmethodhost, \"scram-sha-256\") == 0)\n>> -    {\n>> -        conflines = replace_token(conflines,\n>> -                                  \"#password_encryption = md5\",\n>> -                                  \"password_encryption =\n>> scram-sha-256\");\n>> -    }\n>> -\n>>\n>> Would we reverse this, i.e. if someone chooses authmethodlocal to be\n>> \"md5\", we would then set \"password_encryption = md5\"?\n> \n> Yeah, I was too enthusiastic about removing that.  Here is a better patch.\n\nDid some testing. Overall it looks good. Here are my test cases and what\nhappened:\n\n$ initdb -D data\n\nDeferred password_encryption to the default, confirmed it was indeed scram\n\n$ initdb -D data --auth-local=md5\n\n\nSet password_encryption to md5\n\n$ initdb -D data --auth-host=md5\n\nSet password_encryption to md5\n\n$ initdb -D data --auth-local=md5 --auth-host=scram-sha-256\n\nGot an error message:\n\ninitdb: error: must specify a password for the superuser to enable\nscram-sha-256 authentication\n\n$ initdb -D data --auth-local=scram-sha-256 --auth-host=md5\n\nGot an error message:\n\n\"initdb: error: must specify a password for the superuser to enable md5\nauthentication\"\n\nFor the last two, that behavior is to be expected (after all, you've set\nthe two login vectors to require passwords), but the error message seems\nodd now. 
Perhaps we tweak it to be:\n\n\n\"initdb: error: must specify a password for the superuser when requiring\npasswords for both local and host authentication.\"\n\nAnother option could be to set the message based on whether both\nlocal/host have the same setting, and then default to something like the\nabove when they differ.\n\nOther than that, looks great!\n\nJonathan", "msg_date": "Wed, 27 May 2020 09:25:35 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/27/20 9:13 AM, Michael Paquier wrote:\n> On Wed, May 27, 2020 at 02:56:34PM +0200, Magnus Hagander wrote:\n>> Seems like the better choice yeah. Since we're changing the default anyway,\n>> maybe now is the time to do that? Or if not, maybe have it log an explicit\n>> deprecation warning when it loads a config with it?\n> \n> Not sure that's worth it here, so I would just remove the whole. It\n> would be confusing to keep the past values and have them map to\n> something we think is not an appropriate default.\n\n+1 to removing the legacy options. It could break some people on legacy\nupgrades, but my guess would be that said situations are very small, and\nwe would document the removal of these as \"breaking changes\" in the\nrelease notes.\n\nJonathan", "msg_date": "Wed, 27 May 2020 09:54:47 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Jonathan S. Katz (jkatz@postgresql.org) wrote:\n> On 5/27/20 9:13 AM, Michael Paquier wrote:\n> > On Wed, May 27, 2020 at 02:56:34PM +0200, Magnus Hagander wrote:\n> >> Seems like the better choice yeah. Since we're changing the default anyway,\n> >> maybe now is the time to do that? 
Or if not, maybe have it log an explicit\n> >> deprecation warning when it loads a config with it?\n> > \n> > Not sure that's worth it here, so I would just remove the whole. It\n> > would be confusing to keep the past values and have them map to\n> > something we think is not an appropriate default.\n> \n> +1 to removing the legacy options. It could break some people on legacy\n> upgrades, but my guess would be that said situations are very small, and\n> we would document the removal of these as \"breaking changes\" in the\n> release notes.\n\nAgreed- let's remove the legacy options. As I've mentioned elsewhere,\ndistros may manage the issue for us, and if we want to get into it, we\ncould consider adding support to pg_upgrade to complain if it comes\nacross a legacy setting that isn't valid. I'm not sure that's\nworthwhile though.\n\nThanks,\n\nStephen", "msg_date": "Wed, 27 May 2020 09:59:17 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-27 15:25, Jonathan S. Katz wrote:\n> $ initdb -D data --auth-local=scram-sha-256 --auth-host=md5\n> \n> Got an error message:\n> \n> \"initdb: error: must specify a password for the superuser to enable md5\n> authentication\"\n> \n> For the last two, that behavior is to be expected (after all, you've set\n> the two login vectors to require passwords), but the error message seems\n> odd now. Perhaps we tweak it to be:\n> \n> \n> \"initdb: error: must specify a password for the superuser when requiring\n> passwords for both local and host authentication.\"\n\nThat message is a bit long. Maybe just\n\nmust specify a password for the superuser to enable password authentication\n\nwithout reference to the specific method. 
I think the idea is clear there.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 May 2020 14:10:33 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-27 15:59, Stephen Frost wrote:\n> Agreed- let's remove the legacy options. As I've mentioned elsewhere,\n> distros may manage the issue for us, and if we want to get into it, we\n> could consider adding support to pg_upgrade to complain if it comes\n> across a legacy setting that isn't valid. I'm not sure that's\n> worthwhile though.\n\nMore along these lines: We could also remove the ENCRYPTED and \nUNENCRYPTED keywords from CREATE and ALTER ROLE. AFAICT, these have \nnever been emitted by pg_dump or psql, so there are no concerns from \nthat end. Thoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 May 2020 14:53:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/28/20 8:10 AM, Peter Eisentraut wrote:\n> On 2020-05-27 15:25, Jonathan S. Katz wrote:\n>> $ initdb -D data --auth-local=scram-sha-256 --auth-host=md5\n>>\n>> Got an error message:\n>>\n>> \"initdb: error: must specify a password for the superuser to enable md5\n>> authentication\"\n>>\n>> For the last two, that behavior is to be expected (after all, you've set\n>> the two login vectors to require passwords), but the error message seems\n>> odd now. Perhaps we tweak it to be:\n>>\n>>\n>> \"initdb: error: must specify a password for the superuser when requiring\n>> passwords for both local and host authentication.\"\n> \n> That message is a bit long.  
Maybe just\n> \n> must specify a password for the superuser to enable password authentication\n> \n> without reference to the specific method.  I think the idea is clear there.\n\n+1\n\nJonathan", "msg_date": "Thu, 28 May 2020 09:28:26 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Thu, May 28, 2020 at 8:53 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> More along these lines: We could also remove the ENCRYPTED and\n> UNENCRYPTED keywords from CREATE and ALTER ROLE. AFAICT, these have\n> never been emitted by pg_dump or psql, so there are no concerns from\n> that end. Thoughts?\n\nI have a question about this. My understanding of this area isn't\ngreat. As I understand it, you can specify a password unencrypted and\nlet the system compute the validator from it, or you can compute the\nvalidator yourself and then send that as the 'encrypted' password.\nBut, apparently, CREATE ROLE and ALTER ROLE don't really know which\nthing you did. They just examine the string that you passed and decide\nwhether it looks like a validator. If so, they assume it is; if not,\nthey assume it's just a password.\n\nBut that seems really odd. What if you choose a password that just\nhappens to look like a validator? 
Perhaps that's not real likely, but\nwhy do we not permit -- or even require -- the user to specify intent?\nIt seems out of character for us to, essentially, guess the meaning of\nsomething ambiguous rather than requiring the user to be clear about\nit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 May 2020 09:54:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, May 28, 2020 at 8:53 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > More along these lines: We could also remove the ENCRYPTED and\n> > UNENCRYPTED keywords from CREATE and ALTER ROLE. AFAICT, these have\n> > never been emitted by pg_dump or psql, so there are no concerns from\n> > that end. Thoughts?\n> \n> I have a question about this. My understanding of this area isn't\n> great. As I understand it, you can specify a password unencrypted and\n> let the system compute the validator from it, or you can compute the\n> validator yourself and then send that as the 'encrypted' password.\n> But, apparently, CREATE ROLE and ALTER ROLE don't really know which\n> thing you did. They just examine the string that you passed and decide\n> whether it looks like a validator. If so, they assume it is; if not,\n> they assume it's just a password.\n> \n> But that seems really odd. What if you choose a password that just\n> happens to look like a validator? Perhaps that's not real likely, but\n> why do we not permit -- or even require -- the user to specify intent?\n> It seems out of character for us to, essentially, guess the meaning of\n> something ambiguous rather than requiring the user to be clear about\n> it.\n\nIndeed, and it's also been a source of bugs... 
Watching pgcon atm but\nI do recall some history around exactly this.\n\nI'd certainly be in favor of having these things be more explicit-\nincluding doing things like actually splitting out the actual password\nvalidator from the algorithm instead of having them smashed together as\none string as if we don't know what columns are (also recall complaining\nabout that when scram was first being developed too, though that might\njust be in my head).\n\nThanks,\n\nStephen", "msg_date": "Thu, 28 May 2020 10:01:23 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Thu, May 28, 2020 at 10:01 AM Stephen Frost <sfrost@snowman.net> wrote:\n> as if we don't know what columns are\n\nAmen to that!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 May 2020 11:45:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Thu, May 28, 2020 at 02:53:17PM +0200, Peter Eisentraut wrote:\n> More along these lines: We could also remove the ENCRYPTED and UNENCRYPTED\n> keywords from CREATE and ALTER ROLE. AFAICT, these have never been emitted\n> by pg_dump or psql, so there are no concerns from that end. Thoughts?\n\n+0.5. I think that you have a good point about the removal of\nUNENCRYPTED (one keyword gone!) as we don't support it since 10. 
For\nENCRYPTED, I'd rather keep it around for compatibility reasons for a\nlonger time, just to be on the safe side.\n--\nMichael", "msg_date": "Fri, 29 May 2020 16:33:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/29/20 3:33 AM, Michael Paquier wrote:\n> On Thu, May 28, 2020 at 02:53:17PM +0200, Peter Eisentraut wrote:\n>> More along these lines: We could also remove the ENCRYPTED and UNENCRYPTED\n>> keywords from CREATE and ALTER ROLE. AFAICT, these have never been emitted\n>> by pg_dump or psql, so there are no concerns from that end. Thoughts?\n> \n> +0.5. I think that you have a good point about the removal of\n> UNENCRYPTED (one keyword gone!) as we don't support it since 10. For\n> ENCRYPTED, I'd rather keep it around for compatibility reasons for a\n> longer time, just to be on the safe side.\n\nBy that logic, I would +1 removing ENCRYPTED & UNENCRYPTED, given\nENCRYPTED effectively has no meaning either after all this time too. If\nit's not emitted by any of our scripts, and it's been effectively moot\nfor 4 years (by the time of PG14), and we've been saying in the docs \"he\nENCRYPTED keyword has no effect, but is accepted for backwards\ncompatibility\" I think we'd be safe with removing it.\n\nPerhaps a stepping stone is to emit a deprecation warning on PG14 and\nremove in PG15, but I think it's safe to remove.\n\nPerhaps stating the obvious here, but I also think it's a separate patch\nfrom the $SUBJECT, but glad to see the clean up :)\n\nJonathan", "msg_date": "Fri, 29 May 2020 09:13:26 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Thu, May 28, 2020 at 02:53:17PM +0200, Peter Eisentraut wrote:\n> > More along these lines: We could also remove the ENCRYPTED and UNENCRYPTED\n> > keywords from CREATE and ALTER ROLE. AFAICT, these have never been emitted\n> > by pg_dump or psql, so there are no concerns from that end. Thoughts?\n> \n> +0.5. I think that you have a good point about the removal of\n> UNENCRYPTED (one keyword gone!) as we don't support it since 10. For\n> ENCRYPTED, I'd rather keep it around for compatibility reasons for a\n> longer time, just to be on the safe side.\n\nIt's both inaccurate and would be completely legacy at that point.\n\nI disagree entirely about keeping it around 'for compatibility'.\n\nThanks,\n\nStephen", "msg_date": "Fri, 29 May 2020 09:18:27 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Greetings,\n\n* Jonathan S. Katz (jkatz@postgresql.org) wrote:\n> On 5/29/20 3:33 AM, Michael Paquier wrote:\n> > On Thu, May 28, 2020 at 02:53:17PM +0200, Peter Eisentraut wrote:\n> >> More along these lines: We could also remove the ENCRYPTED and UNENCRYPTED\n> >> keywords from CREATE and ALTER ROLE. AFAICT, these have never been emitted\n> >> by pg_dump or psql, so there are no concerns from that end. Thoughts?\n> > \n> > +0.5. I think that you have a good point about the removal of\n> > UNENCRYPTED (one keyword gone!) as we don't support it since 10. For\n> > ENCRYPTED, I'd rather keep it around for compatibility reasons for a\n> > longer time, just to be on the safe side.\n> \n> By that logic, I would +1 removing ENCRYPTED & UNENCRYPTED, given\n> ENCRYPTED effectively has no meaning either after all this time too. 
If\n> it's not emitted by any of our scripts, and it's been effectively moot\n> for 4 years (by the time of PG14), and we've been saying in the docs \"he\n> ENCRYPTED keyword has no effect, but is accepted for backwards\n> compatibility\" I think we'd be safe with removing it.\n> \n> Perhaps a stepping stone is to emit a deprecation warning on PG14 and\n> remove in PG15, but I think it's safe to remove.\n\nWe're terrible about that, and people reasonably complain about such\nthings because we don't *know* we're gonna remove it in 15.\n\nI'll argue again for the approach I mentioned before somewhere: when we\ncommit the patch for 14, we go back and update the older docs to note\nthat it's gone as of v14. Deprecation notices and other such don't work\nand we instead end up carrying legacy things on forever.\n\nThanks,\n\nStephen", "msg_date": "Fri, 29 May 2020 09:22:14 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 5/29/20 9:22 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Jonathan S. Katz (jkatz@postgresql.org) wrote:\n>> On 5/29/20 3:33 AM, Michael Paquier wrote:\n>>> On Thu, May 28, 2020 at 02:53:17PM +0200, Peter Eisentraut wrote:\n>>>> More along these lines: We could also remove the ENCRYPTED and UNENCRYPTED\n>>>> keywords from CREATE and ALTER ROLE. AFAICT, these have never been emitted\n>>>> by pg_dump or psql, so there are no concerns from that end. Thoughts?\n>>>\n>>> +0.5. I think that you have a good point about the removal of\n>>> UNENCRYPTED (one keyword gone!) as we don't support it since 10. For\n>>> ENCRYPTED, I'd rather keep it around for compatibility reasons for a\n>>> longer time, just to be on the safe side.\n>>\n>> By that logic, I would +1 removing ENCRYPTED & UNENCRYPTED, given\n>> ENCRYPTED effectively has no meaning either after all this time too. 
If\n>> it's not emitted by any of our scripts, and it's been effectively moot\n>> for 4 years (by the time of PG14), and we've been saying in the docs \"he\n>> ENCRYPTED keyword has no effect, but is accepted for backwards\n>> compatibility\" I think we'd be safe with removing it.\n>>\n>> Perhaps a stepping stone is to emit a deprecation warning on PG14 and\n>> remove in PG15, but I think it's safe to remove.\n> \n> We're terrible about that, and people reasonably complain about such\n> things because we don't *know* we're gonna remove it in 15.\n> \n> I'll argue again for the approach I mentioned before somewhere: when we\n> commit the patch for 14, we go back and update the older docs to note\n> that it's gone as of v14. Deprecation notices and other such don't work\n> and we instead end up carrying legacy things on forever.\n\nYeah, my first preference is to just remove it. I'm ambivalent towards\nupdating the older docs, but I do think it would be helpful.\n\nJonathan", "msg_date": "Fri, 29 May 2020 09:23:40 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Jonathan S. Katz (jkatz@postgresql.org) wrote:\n>> By that logic, I would +1 removing ENCRYPTED & UNENCRYPTED, given\n>> ENCRYPTED effectively has no meaning either after all this time too.\n>> Perhaps a stepping stone is to emit a deprecation warning on PG14 and\n>> remove in PG15, but I think it's safe to remove.\n\n> We're terrible about that, and people reasonably complain about such\n> things because we don't *know* we're gonna remove it in 15.\n\nIf we're changing associated defaults, there's already some risk of\nbreaking badly-written applications. 
+1 for just removing these\nkeywords in v14.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 May 2020 09:34:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 2020-05-28 15:28, Jonathan S. Katz wrote:\n> On 5/28/20 8:10 AM, Peter Eisentraut wrote:\n>> On 2020-05-27 15:25, Jonathan S. Katz wrote:\n>>> $ initdb -D data --auth-local=scram-sha-256 --auth-host=md5\n>>>\n>>> Got an error message:\n>>>\n>>> \"initdb: error: must specify a password for the superuser to enable md5\n>>> authentication\"\n>>>\n>>> For the last two, that behavior is to be expected (after all, you've set\n>>> the two login vectors to require passwords), but the error message seems\n>>> odd now. Perhaps we tweak it to be:\n>>>\n>>>\n>>> \"initdb: error: must specify a password for the superuser when requiring\n>>> passwords for both local and host authentication.\"\n>>\n>> That message is a bit long.  Maybe just\n>>\n>> must specify a password for the superuser to enable password authentication\n>>\n>> without reference to the specific method.  I think the idea is clear there.\n> \n> +1\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Jun 2020 16:47:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On 6/10/20 10:47 AM, Peter Eisentraut wrote:\n> On 2020-05-28 15:28, Jonathan S. Katz wrote:\n>> On 5/28/20 8:10 AM, Peter Eisentraut wrote:\n>>> On 2020-05-27 15:25, Jonathan S. 
Katz wrote:\n>>>> $ initdb -D data --auth-local=scram-sha-256 --auth-host=md5\n>>>>\n>>>> Got an error message:\n>>>>\n>>>> \"initdb: error: must specify a password for the superuser to enable md5\n>>>> authentication\"\n>>>>\n>>>> For the last two, that behavior is to be expected (after all, you've\n>>>> set\n>>>> the two login vectors to require passwords), but the error message\n>>>> seems\n>>>> odd now. Perhaps we tweak it to be:\n>>>>\n>>>>\n>>>> \"initdb: error: must specify a password for the superuser when\n>>>> requiring\n>>>> passwords for both local and host authentication.\"\n>>>\n>>> That message is a bit long.  Maybe just\n>>>\n>>> must specify a password for the superuser to enable password\n>>> authentication\n>>>\n>>> without reference to the specific method.  I think the idea is clear\n>>> there.\n>>\n>> +1\n> \n> committed\n\nYay!!! Thank you!\n\nJonathan", "msg_date": "Wed, 10 Jun 2020 10:51:22 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" }, { "msg_contents": "On Wed, Jun 10, 2020 at 10:51:22AM -0400, Jonathan S. Katz wrote:\n> On 6/10/20 10:47 AM, Peter Eisentraut wrote:\n>> committed\n> \n> Yay!!! Thank you!\n\nThanks, all.\n--\nMichael", "msg_date": "Thu, 11 Jun 2020 15:57:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: password_encryption default" } ]
[ { "msg_contents": "Here is a patch to provide default gucs for EXPLAIN options.\n\nI have two goals with this patch. The first is that I personally\n*always* want BUFFERS turned on, so this would allow me to do it without\ntyping it every time.\n\nThe second is it would make it easier to change the actual default for\nsettings if we choose to do so because users would be able to switch it\nback if they prefer.\n\nThe patch is based off of a995b371ae.\n-- \nVik Fearing", "msg_date": "Sat, 23 May 2020 11:14:05 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Default gucs for EXPLAIN" }, { "msg_contents": "On Sat, May 23, 2020 at 11:14:05AM +0200, Vik Fearing wrote:\n> Here is a patch to provide default gucs for EXPLAIN options.\n> \n> I have two goals with this patch. The first is that I personally\n> *always* want BUFFERS turned on, so this would allow me to do it without\n> typing it every time.\n> \n> The second is it would make it easier to change the actual default for\n> settings if we choose to do so because users would be able to switch it\n> back if they prefer.\n> \n> The patch is based off of a995b371ae.\n\nThe patch adds new GUCs for each explain() option.\n\nWould it be better to make a GUC called default_explain_options which might say\n\"COSTS ON, ANALYZE ON, VERBOSE OFF, BUFFERS TBD, FORMAT TEXT, ...\"\n..and parsed using the same thing that parses the existing options (which would\nneed to be factored out of ExplainQuery()).\n\nDo we really want default_explain_analyze ?\nIt sounds like bad news that EXPLAIN DELETE might or might not remove rows\ndepending on the state of a variable.\n\nI think this should be split into two patches:\nOne to make the default explain options configurable, and a separate patch to\nchange the defaults.\n\n+\t/* Set defaults. 
*/\n+\tes->analyze = default_explain_analyze;\n+\tes->buffers = default_explain_buffers;\n+\tes->costs = default_explain_costs;\n...\n\nI think you could avoid eight booleans and nine DefElems by making\ndefault_explain_* a struct, maybe ExplainState. Maybe all the defaults should\njust be handled in NewExplainState() ?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 23 May 2020 11:12:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "so 23. 5. 2020 v 11:14 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> Here is a patch to provide default gucs for EXPLAIN options.\n>\n> I have two goals with this patch. The first is that I personally\n> *always* want BUFFERS turned on, so this would allow me to do it without\n> typing it every time.\n>\n> The second is it would make it easier to change the actual default for\n> settings if we choose to do so because users would be able to switch it\n> back if they prefer.\n>\n> The patch is based off of a995b371ae.\n>\n\nIt's lot of new GUC variables. Isn't better only one that allows list of\nvalues?\n\nRegards\n\nPavel\n\n\n> --\n> Vik Fearing\n>\n", "msg_date": "Sat, 23 May 2020 18:23:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/23/20 6:12 PM, Justin Pryzby wrote:\n\n> The patch adds new GUCs for each explain() option.\n\nThank you for looking at it!\n\n> Would it be better to make a GUC called default_explain_options which might say\n> \"COSTS ON, ANALYZE ON, VERBOSE OFF, BUFFERS TBD, FORMAT TEXT, ...\"\n> ..and parsed using the same thing that parses the existing options (which would\n> need to be factored out of ExplainQuery()).\nI do not think that would be better, no.\n\n> Do we really want default_explain_analyze ?\n> It sounds like bad news that EXPLAIN DELETE might or might not remove rows\n> depending on the state of a variable.\n\nI have had sessions where not using ANALYZE was the exception, not the\nrule. I would much prefer to type EXPLAIN (ANALYZE OFF) in those cases.\n\n> I think this should be split into two patches:\n> One to make the default explain options configurable, and a separate patch to\n> change the defaults.\n\nThis patch does not change the defaults, so I'm not sure what you mean here?\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 23 May 2020 18:33:48 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/23/20 6:23 PM, Pavel Stehule wrote:\n\n> It's lot of new GUC variables. Isn't better only one that allows list of\n> values?\nI like this way better.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 23 May 2020 18:34:32 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "This is a very good improvement! 
Using information about buffers is my favorite way to optimize queries.\n\nNot having BUFFERS enabled by default means that in most cases, when asking for help, people send execution plans without buffers info.\n\nAnd it's simply in on event to type \"(ANALYZE, BUFFERS)\" all the time.\n\nSo I strongly support this change, thank you, Vik.\n\nOn Sat, May 23 2020 at 02:14, Vik Fearing < vik@postgresfriends.org > wrote:\n\n> \n> \n> \n> Here is a patch to provide default gucs for EXPLAIN options.\n> \n> \n> \n> I have two goals with this patch. The first is that I personally\n> *always* want BUFFERS turned on, so this would allow me to do it without\n> typing it every time.\n> \n> \n> \n> The second is it would make it easier to change the actual default for\n> settings if we choose to do so because users would be able to switch it\n> back if they prefer.\n> \n> \n> \n> The patch is based off of a995b371ae.\n> --\n> Vik Fearing\n> \n> \n>\n", "msg_date": "Sat, 23 May 2020 18:16:25 +0000", "msg_from": "\"Nikolay Samokhvalov\" <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Bonjour Vik,\n\n>> Do we really want default_explain_analyze ?\n>> It sounds like bad news that EXPLAIN DELETE might or might not remove rows\n>> depending on the state of a variable.\n>\n> I have had sessions where not using ANALYZE was the exception, not the\n> rule. I would much prefer to type EXPLAIN (ANALYZE OFF) in those cases.\n\nI concur with Justin that having EXPLAIN DELETE/UPDATE actually executing \nthe query can be too much a bit of a surprise for a user attempting it.\n\nA typical scenario would be \"this DELETE/UPDATE query is too slow\", admin \nconnects to production and try safe EXPLAIN on some random sample, and get \nbitten because the default was changed.\n\nA way out could be having 3 states for analyse (off, read-only, on) which \nwould block updates/deletes by making the transaction/operation read-only \nto prevent side effects, unless explicitely asked for? I'm not sure if \nthis can be easily implemented, though. Or maybe run the query in a \nseparate transaction which is then coldly rollbacked? Hmmm, I'm not really \nconvincing myself on this one… The safe option seems not allowing to \nchange ANALYZE option default.\n\nWhile testing the issue, I'm surprised at the syntax:\n\n EXPLAIN [ ( option [, ...] ) ] statement\n EXPLAIN [ ANALYZE ] [ VERBOSE ] statement\n\nWhy not allowing the following:\n\n EXPLAIN [ ANALYZE ] [ VERBOSE ] [ ( option [, ...] 
) ] statement\n\n-- \nFabien.", "msg_date": "Sun, 24 May 2020 09:31:47 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "\n> The safe option seems not allowing to change ANALYZE option default.\n\n> EXPLAIN [ ANALYZE ] [ VERBOSE ] statement\n\nSome more thoughts:\n\nAn argument for keeping it that way is that there is already a special \nsyntax to enable ANALYSE explicitely, which as far as I am concerned I \nonly ever attempt after having tried a \"EXPLAIN query\" first.\n\nMoreover, having to just add the ANALYSE keyword is kind of cheap, while \nhaving to type \"(some list of options)\" is pretty cumbersome.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 24 May 2020 10:57:49 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/24/20 9:31 AM, Fabien COELHO wrote:\n> While testing the issue, I'm surprised at the syntax:\n> \n>  EXPLAIN [ ( option [, ...] ) ] statement\n>  EXPLAIN [ ANALYZE ] [ VERBOSE ] statement\n> \n> Why not allowing the following:\n> \n>  EXPLAIN [ ANALYZE ] [ VERBOSE ] [ ( option [, ...] ) ] statement\n\nThat has nothing to do with this patch.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 24 May 2020 11:25:28 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": ">> Why not allowing the following:\n>>\n>>  EXPLAIN [ ANALYZE ] [ VERBOSE ] [ ( option [, ...] 
) ] statement\n>\n> That has nothing to do with this patch.\n\nSure, it was just in passing, I was surprised by this restriction.\n\n-- \nFabien.", "msg_date": "Sun, 24 May 2020 13:36:59 +0200 (CEST)", "msg_from": "Fabien COELHO <fabien.coelho@mines-paristech.fr>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Sat, May 23, 2020 at 06:16:25PM +0000, Nikolay Samokhvalov wrote:\n> This is a very good improvement! Using information about buffers is my favorite\n> way to optimize queries.\n> \n> Not having BUFFERS enabled by default means that in most cases, when asking for\n> help, people send execution plans without buffers info.\n> \n> And it's simply in on event to type \"(ANALYZE, BUFFERS)\" all the time.\n> \n> So I strongly support this change, thank you, Vik.\n\nI am not excited about this new feature. Why do it only for EXPLAIN? \nThat is a log of GUCs. I can see this becoming a feature creep\ndisaster.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 21:36:50 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> I am not excited about this new feature. Why do it only for EXPLAIN? \n> That is a log of GUCs. I can see this becoming a feature creep\n> disaster.\n\nFWIW, Neither am I. This would create an extra maintenance cost, and\nI would not want such stuff to spread to other commands either, say\nCLUSTER, VACUUM, REINDEX, etc. 
And note that it is always possible to\ndo that with a small extension using the utility hook and some\npre-loaded user-settable GUCs.\n--\nMichael", "msg_date": "Tue, 26 May 2020 10:51:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > I am not excited about this new feature. Why do it only for EXPLAIN? \n\nWould probably help to understand what your thinking is here regarding\nhow it could be done for everything...? In particular, what else are\nyou thinking it'd be sensible for?\n\n> > That is a log of GUCs. I can see this becoming a feature creep\n> > disaster.\n\nI'd only view it as a feature creep disaster if we end up extending it\nto things that don't make any sense.. I don't see any particular reason\nwhy we'd have to do that though. On the other hand, if there's a clean\nway to do it for everything, that'd be pretty neat.\n\n> FWIW, Neither am I. This would create an extra maintenance cost, and\n> I would not want such stuff to spread to other commands either, say\n> CLUSTER, VACUUM, REINDEX, etc. And note that it is always possible to\n> do that with a small extension using the utility hook and some\n> pre-loaded user-settable GUCs.\n\nThe suggestion to \"go write C code that will be loaded via a utility\nhook\" is really entirely inappropriate here.\n\nThis strikes me as a pretty reasonable 'creature comfort' kind of idea.\nInventing GUCs to handle it is maybe not the best approach, but we\nhaven't really got anything better right at hand- psql can't parse\ngeneral SQL, today, and hasn't got it's own idea of \"how to run\nexplain\". 
On the other hand, I could easily see a similar feature\nbeing included in pgAdmin4 where running explain is clicking on a button\ninstead of typing 'explain'.\n\nTo that end- what if this was done client-side with '\\explain' or\nsimilar? Basically, it'd work like \\watch or \\g but we'd have options\nunder pset like \"explain_analyze t/f\" and such. I feel like that'd also\nlargely address the concerns about how this might 'feature creep' to\nother commands- because those other commands don't work with a query\nbuffer, so it wouldn't really make sense for them.\n\nAs for the concerns wrt explain UPDATE or explain DETELE actually\nrunning the query, that's what transactions are for, and if you don't\nfeel comfortable using transactions or using these options- then don't.\n\nThanks,\n\nStephen", "msg_date": "Mon, 25 May 2020 22:27:34 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Mon, May 25, 2020 at 6:36 PM, Bruce Momjian < bruce@momjian.us > wrote:\n\n> \n> \n> \n> I am not excited about this new feature. Why do it only for EXPLAIN? That\n> is a log of GUCs. I can see this becoming a feature creep disaster.\n> \n> \n> \n> \n\nHow about changing the default behavior, making BUFFERS enabled by default? Those who don't need it, always can say BUFFERS OFF — the say as for TIMING.\n", "msg_date": "Tue, 26 May 2020 02:49:46 +0000", "msg_from": "\"Nikolay Samokhvalov\" <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, 2020-05-26 at 02:49 +0000, Nikolay Samokhvalov wrote:\n> > I am not excited about this new feature. Why do it only for EXPLAIN? That is a log of GUCs. I can see this becoming a feature creep disaster. \n> > \n> \n> How about changing the default behavior, making BUFFERS enabled by default? Those who don't need it, always can say BUFFERS OFF — the say as for TIMING.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 26 May 2020 05:17:22 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Le mar. 26 mai 2020 à 04:27, Stephen Frost <sfrost@snowman.net> a écrit :\n\n> Greetings,\n>\n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > > I am not excited about this new feature. Why do it only for EXPLAIN?\n>\n> Would probably help to understand what your thinking is here regarding\n> how it could be done for everything...? In particular, what else are\n> you thinking it'd be sensible for?\n>\n> > > That is a log of GUCs. I can see this becoming a feature creep\n> > > disaster.\n>\n> I'd only view it as a feature creep disaster if we end up extending it\n> to things that don't make any sense.. I don't see any particular reason\n> why we'd have to do that though. On the other hand, if there's a clean\n> way to do it for everything, that'd be pretty neat.\n>\n> > FWIW, Neither am I. This would create an extra maintenance cost, and\n> > I would not want such stuff to spread to other commands either, say\n> > CLUSTER, VACUUM, REINDEX, etc. 
And note that it is always possible to\n> > do that with a small extension using the utility hook and some\n> > pre-loaded user-settable GUCs.\n>\n> The suggestion to \"go write C code that will be loaded via a utility\n> hook\" is really entirely inappropriate here.\n>\n> This strikes me as a pretty reasonable 'creature comfort' kind of idea.\n> Inventing GUCs to handle it is maybe not the best approach, but we\n> haven't really got anything better right at hand- psql can't parse\n> general SQL, today, and hasn't got it's own idea of \"how to run\n> explain\". On the other hand, I could easily see a similar feature\n> being included in pgAdmin4 where running explain is clicking on a button\n> instead of typing 'explain'.\n>\n> To that end- what if this was done client-side with '\\explain' or\n> similar? Basically, it'd work like \\watch or \\g but we'd have options\n> under pset like \"explain_analyze t/f\" and such. I feel like that'd also\n> largely address the concerns about how this might 'feature creep' to\n> other commands- because those other commands don't work with a query\n> buffer, so it wouldn't really make sense for them.\n>\n> As for the concerns wrt explain UPDATE or explain DETELE actually\n> running the query, that's what transactions are for, and if you don't\n> feel comfortable using transactions or using these options- then don't.\n>\n>\nThis means you'll always have to check if the new GUCs are set up in a way\nit will actually execute the query or to open a transaction for the same\nreason. This is a huge behaviour change where people might lose data.\n\nI really don't like this proposal (the new GUCs).\n\n\n-- \nGuillaume.\n", "msg_date": "Tue, 26 May 2020 08:15:00 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Saturday, May 23, 2020, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n>\n> > Do we really want default_explain_analyze ?\n> > It sounds like bad news that EXPLAIN DELETE might or might not remove\n> rows\n> > depending on the state of a variable.\n>\n> I have had sessions where not using ANALYZE was the exception, not the\n> rule. I would much prefer to type EXPLAIN (ANALYZE OFF) in those cases.\n>\n\nNot sure about the feature as a whole but i’m strongly against having a GUC\nexist that conditions whether a query is actually executed.\n\nDavid J.\n", "msg_date": "Mon, 25 May 2020 23:38:35 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Monday, May 25, 2020, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > > I am not excited about this new feature. Why do it only for EXPLAIN?\n>\n> Would probably help to understand what your thinking is here regarding\n> how it could be done for everything...? In particular, what else are\n> you thinking it'd be sensible for?\n>\n\nCOPY comes to mind immediately.\n\nDavid J.\n", "msg_date": "Mon, 25 May 2020 23:51:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, 26 May 2020 at 13:36, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Sat, May 23, 2020 at 06:16:25PM +0000, Nikolay Samokhvalov wrote:\n> > This is a very good improvement! 
Using information about buffers is my favorite\n> > way to optimize queries.\n> >\n> > Not having BUFFERS enabled by default means that in most cases, when asking for\n> > help, people send execution plans without buffers info.\n> >\n> > And it's simply inconvenient to type \"(ANALYZE, BUFFERS)\" all the time.\n> >\n> > So I strongly support this change, thank you, Vik.\n>\n> I am not excited about this new feature.\n\nI'm against adding GUCs to control what EXPLAIN does by default.\n\nA few current GUCs come to mind which give external control to a\ncommand's behaviour:\n\nstandard_conforming_strings\nbackslash_quote\nbytea_output\n\nIt's pretty difficult for application authors to write code that will\njust work due to these GUCs. We end up with GUCs like\nescape_string_warning to try and help application authors find areas\nwhich may be problematic.\n\nIt's not an easy thing to search for in the archives, but we've\nrejected GUCs that have proposed new ways which can break applications\nin this way. For example [1]. You can see some arguments against that\nin [2].\n\nNow, there are certainly far fewer applications out there that will\nexecute an EXPLAIN, but the number is still above zero. I imagine the\nauthors of those applications might get upset if we create something\noutside of the command that controls what the command does. Perhaps\nthe idea here is not quite as bad as that as applications could still\noverride the options by mentioning each EXPLAIN option in the command\nthey send to the server. However, we're not done adding new options\nyet, so by doing this we'd be pretty much insisting that applications\nthat use EXPLAIN know about all EXPLAIN options for the server version\nthey're connected to. 
That seems like a big demand given that we've\nbeen careful to still support the old\nEXPLAIN syntax after we added the new way to specify the options in parenthesis.\n\n[1] https://www.postgresql.org/message-id/flat/ACF85C502E55A143AB9F4ECFE960660A17282D@mailserver2.local.mstarlabs.com\n[2] https://www.postgresql.org/message-id/17880.1482648516%40sss.pgh.pa.us\n\nDavid\n\n\n", "msg_date": "Tue, 26 May 2020 23:30:04 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/26/20 1:30 PM, David Rowley wrote:\n> On Tue, 26 May 2020 at 13:36, Bruce Momjian <bruce@momjian.us> wrote:\n>>\n>> On Sat, May 23, 2020 at 06:16:25PM +0000, Nikolay Samokhvalov wrote:\n>>> This is a very good improvement! Using information about buffers is my favorite\n>>> way to optimize queries.\n>>>\n>>> Not having BUFFERS enabled by default means that in most cases, when asking for\n>>> help, people send execution plans without buffers info.\n>>>\n>>> And it's simply in on event to type \"(ANALYZE, BUFFERS)\" all the time.\n>>>\n>>> So I strongly support this change, thank you, Vik.\n>>\n>> I am not excited about this new feature.\n> \n> I'm against adding GUCs to control what EXPLAIN does by default.\n> \n> A few current GUCs come to mind which gives external control to a\n> command's behaviour are:\n> \n> standard_conforming_strings\n> backslash_quote\n> bytea_output\n> \n> It's pretty difficult for application authors to write code that will\n> just work due to these GUCs. We end up with GUCs like\n> escape_string_warning to try and help application authors find areas\n> which may be problematic.\n> \n> It's not an easy thing to search for in the archives, but we've\n> rejected GUCs that have proposed new ways which can break applications\n> in this way. For example [1]. 
You can see some arguments against that\n> in [2].\n> \n> Now, there are certainly far fewer applications out there that will\n> execute an EXPLAIN, but the number is still above zero. I imagine the\n> authors of those applications might get upset if we create something\n> outside of the command that controls what the command does. Perhaps\n> the idea here is not quite as bad as that as applications could still\n> override the options by mentioning each EXPLAIN option in the command\n> they send to the server. However, we're not done adding new options\n> yet, so by doing this we'd be pretty much insisting that applications\n> that use EXPLAIN know about all EXPLAIN options for the server version\n> they're connected to. That seems like a big demand given that we've\n> been careful to still support the old\n> EXPLAIN syntax after we added the new way to specify the options in parenthesis.\n\n\nNah, this argument doesn't hold. If an app wants something on or off,\nit should say so. If it doesn't care, then it doesn't care.\n\nAre you saying we should have all new EXPLAIN options off forever into\nthe future because apps won't know about the new data? I guess we\nshould also not ever introduce new plan nodes because those won't be\nknown either.\n\nI'm not buying that.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 May 2020 13:59:26 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "út 26. 5. 2020 v 4:27 odesílatel Stephen Frost <sfrost@snowman.net> napsal:\n\n> Greetings,\n>\n> * Michael Paquier (michael@paquier.xyz) wrote:\n> > On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > > I am not excited about this new feature. Why do it only for EXPLAIN?\n>\n> Would probably help to understand what your thinking is here regarding\n> how it could be done for everything...? 
In particular, what else are\n> you thinking it'd be sensible for?\n>\n> > > That is a log of GUCs. I can see this becoming a feature creep\n> > > disaster.\n>\n> I'd only view it as a feature creep disaster if we end up extending it\n> to things that don't make any sense.. I don't see any particular reason\n> why we'd have to do that though. On the other hand, if there's a clean\n> way to do it for everything, that'd be pretty neat.\n>\n> > FWIW, Neither am I. This would create an extra maintenance cost, and\n> > I would not want such stuff to spread to other commands either, say\n> > CLUSTER, VACUUM, REINDEX, etc. And note that it is always possible to\n> > do that with a small extension using the utility hook and some\n> > pre-loaded user-settable GUCs.\n>\n> The suggestion to \"go write C code that will be loaded via a utility\n> hook\" is really entirely inappropriate here.\n>\n> This strikes me as a pretty reasonable 'creature comfort' kind of idea.\n> Inventing GUCs to handle it is maybe not the best approach, but we\n> haven't really got anything better right at hand- psql can't parse\n> general SQL, today, and hasn't got it's own idea of \"how to run\n> explain\". On the other hand, I could easily see a similar feature\n> being included in pgAdmin4 where running explain is clicking on a button\n> instead of typing 'explain'.\n>\n> To that end- what if this was done client-side with '\\explain' or\n> similar? Basically, it'd work like \\watch or \\g but we'd have options\n> under pset like \"explain_analyze t/f\" and such. 
I feel like that'd also\n> largely address the concerns about how this might 'feature creep' to\n> other commands- because those other commands don't work with a query\n> buffer, so it wouldn't really make sense for them.\n>\n> As for the concerns wrt explain UPDATE or explain DETELE actually\n> running the query, that's what transactions are for, and if you don't\n> feel comfortable using transactions or using these options- then don't.\n>\n\nthe partial solution can be custom psql statements. Now, it can be just\nworkaround\n\n\\set explain 'explain (analyze, buffers)'\n:explain select * from pg_class ;\n\nand anybody can prepare customized statements how he likes\n\nRegards\n\nPavel\n\n\n\n\n> Thanks,\n>\n> Stephen\n>\n\nút 26. 5. 2020 v 4:27 odesílatel Stephen Frost <sfrost@snowman.net> napsal:Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > I am not excited about this new feature.  Why do it only for EXPLAIN? \n\nWould probably help to understand what your thinking is here regarding\nhow it could be done for everything...?  In particular, what else are\nyou thinking it'd be sensible for?\n\n> > That is a log of GUCs.  I can see this becoming a feature creep\n> > disaster.\n\nI'd only view it as a feature creep disaster if we end up extending it\nto things that don't make any sense..  I don't see any particular reason\nwhy we'd have to do that though.  On the other hand, if there's a clean\nway to do it for everything, that'd be pretty neat.\n\n> FWIW, Neither am I.  This would create an extra maintenance cost, and\n> I would not want such stuff to spread to other commands either, say\n> CLUSTER, VACUUM, REINDEX, etc.  
And note that it is always possible to\n> do that with a small extension using the utility hook and some\n> pre-loaded user-settable GUCs.\n\nThe suggestion to \"go write C code that will be loaded via a utility\nhook\" is really entirely inappropriate here.\n\nThis strikes me as a pretty reasonable 'creature comfort' kind of idea.\nInventing GUCs to handle it is maybe not the best approach, but we\nhaven't really got anything better right at hand- psql can't parse\ngeneral SQL, today, and hasn't got it's own idea of \"how to run\nexplain\".  On the other hand, I could easily see a similar feature\nbeing included in pgAdmin4 where running explain is clicking on a button\ninstead of typing 'explain'.\n\nTo that end- what if this was done client-side with '\\explain' or\nsimilar?  Basically, it'd work like \\watch or \\g but we'd have options\nunder pset like \"explain_analyze t/f\" and such.  I feel like that'd also\nlargely address the concerns about how this might 'feature creep' to\nother commands- because those other commands don't work with a query\nbuffer, so it wouldn't really make sense for them.\n\nAs for the concerns wrt explain UPDATE or explain DETELE actually\nrunning the query, that's what transactions are for, and if you don't\nfeel comfortable using transactions or using these options- then don't.the partial solution can be custom psql statements. Now, it can be just workaround \\set explain 'explain (analyze, buffers)':explain select * from pg_class ;and anybody can prepare customized statements how he likesRegardsPavel\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 May 2020 15:10:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Greetings,\n\n* Guillaume Lelarge (guillaume@lelarge.info) wrote:\n> Le mar. 
26 mai 2020 à 04:27, Stephen Frost <sfrost@snowman.net> a écrit :\n> > To that end- what if this was done client-side with '\\explain' or\n> > similar? Basically, it'd work like \\watch or \\g but we'd have options\n> > under pset like \"explain_analyze t/f\" and such. I feel like that'd also\n> > largely address the concerns about how this might 'feature creep' to\n> > other commands- because those other commands don't work with a query\n> > buffer, so it wouldn't really make sense for them.\n> >\n> > As for the concerns wrt explain UPDATE or explain DETELE actually\n> > running the query, that's what transactions are for, and if you don't\n> > feel comfortable using transactions or using these options- then don't.\n>\n> This means you'll always have to check if the new GUCs are set up in a way\n> it will actually execute the query or to open a transaction for the same\n> reason. This is a huge behaviour change where people might lose data.\n\nIt's only a behaviour change if you enable it.. and the suggestion I\nmade specifically wouldn't even be a regular 'explain', you'd be using\n'\\explain' in psql, a new command.\n\n> I really don't like this proposal (the new GUCs).\n\nThe proposal you're commenting on (seemingly mine, anyway) didn't\ninclude adding any new GUCs.\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 May 2020 10:25:15 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Greetings,\n\n* David G. Johnston (david.g.johnston@gmail.com) wrote:\n> On Monday, May 25, 2020, Stephen Frost <sfrost@snowman.net> wrote:\n> > * Michael Paquier (michael@paquier.xyz) wrote:\n> > > On Mon, May 25, 2020 at 09:36:50PM -0400, Bruce Momjian wrote:\n> > > > I am not excited about this new feature. Why do it only for EXPLAIN?\n> >\n> > Would probably help to understand what your thinking is here regarding\n> > how it could be done for everything...? 
In particular, what else are\n> > you thinking it'd be sensible for?\n> \n> COPY comes to mind immediately.\n\nIndeed... and we have a \\copy already, so following my proposal, at\nleast, it seems like we could naturally add in options to have defaults\nto be used with \\copy is used in psql. That might end up being a bit\nmore interesting since we didn't contempalte that idea when \\copy was\nfirst written and therefore we might need to change the syntax that the\nbackend COPY commands to make this work (maybe adopting a similar syntax\nto explain, in addition to the existing WITH options after the COPY\ncommand, and then deciding which to prefer when both exist, or thorw an\nerror in such a case).\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 May 2020 10:32:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> the partial solution can be custom psql statements. Now, it can be just\n> workaround\n> \n> \\set explain 'explain (analyze, buffers)'\n> :explain select * from pg_class ;\n> \n> and anybody can prepare customized statements how he likes\n\nYeah, it's really very rudimentary though, unfortunately. A proper\nlanguage in psql would be *really* nice with good ways to reference\nvariables and such..\n\nI don't view this as really being a good justification to not have a\n\\explain type of command.\n\nThanks,\n\nStephen", "msg_date": "Tue, 26 May 2020 10:35:01 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Le mar. 26 mai 2020 à 16:25, Stephen Frost <sfrost@snowman.net> a écrit :\n\n> Greetings,\n>\n> * Guillaume Lelarge (guillaume@lelarge.info) wrote:\n> > Le mar. 
26 mai 2020 à 04:27, Stephen Frost <sfrost@snowman.net> a écrit\n> :\n> > > To that end- what if this was done client-side with '\\explain' or\n> > > similar? Basically, it'd work like \\watch or \\g but we'd have options\n> > > under pset like \"explain_analyze t/f\" and such. I feel like that'd\n> also\n> > > largely address the concerns about how this might 'feature creep' to\n> > > other commands- because those other commands don't work with a query\n> > > buffer, so it wouldn't really make sense for them.\n> > >\n> > > As for the concerns wrt explain UPDATE or explain DETELE actually\n> > > running the query, that's what transactions are for, and if you don't\n> > > feel comfortable using transactions or using these options- then don't.\n> >\n> > This means you'll always have to check if the new GUCs are set up in a\n> way\n> > it will actually execute the query or to open a transaction for the same\n> > reason. This is a huge behaviour change where people might lose data.\n>\n> It's only a behaviour change if you enable it.. and the suggestion I\n> made specifically wouldn't even be a regular 'explain', you'd be using\n> '\\explain' in psql, a new command.\n>\n> > I really don't like this proposal (the new GUCs).\n>\n> The proposal you're commenting on (seemingly mine, anyway) didn't\n> include adding any new GUCs.\n>\n>\nMy bad. I didn't read your email properly, sorry.\n\nI wouldn't complain about a \\explain metacommand. The proposal I (still)\ndislike is Vik's.\n\n\n-- \nGuillaume.\n\nLe mar. 26 mai 2020 à 16:25, Stephen Frost <sfrost@snowman.net> a écrit :Greetings,\n\n* Guillaume Lelarge (guillaume@lelarge.info) wrote:\n> Le mar. 26 mai 2020 à 04:27, Stephen Frost <sfrost@snowman.net> a écrit :\n> > To that end- what if this was done client-side with '\\explain' or\n> > similar?  Basically, it'd work like \\watch or \\g but we'd have options\n> > under pset like \"explain_analyze t/f\" and such.  
I feel like that'd also\n> > largely address the concerns about how this might 'feature creep' to\n> > other commands- because those other commands don't work with a query\n> > buffer, so it wouldn't really make sense for them.\n> >\n> > As for the concerns wrt explain UPDATE or explain DETELE actually\n> > running the query, that's what transactions are for, and if you don't\n> > feel comfortable using transactions or using these options- then don't.\n>\n> This means you'll always have to check if the new GUCs are set up in a way\n> it will actually execute the query or to open a transaction for the same\n> reason. This is a huge behaviour change where people might lose data.\n\nIt's only a behaviour change if you enable it.. and the suggestion I\nmade specifically wouldn't even be a regular 'explain', you'd be using\n'\\explain' in psql, a new command.\n\n> I really don't like this proposal (the new GUCs).\n\nThe proposal you're commenting on (seemingly mine, anyway) didn't\ninclude adding any new GUCs.\nMy bad. I didn't read your email properly, sorry.I wouldn't complain about a \\explain metacommand. The proposal I (still) dislike is Vik's.-- Guillaume.", "msg_date": "Tue, 26 May 2020 16:44:59 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Sat, May 23, 2020 at 06:33:48PM +0200, Vik Fearing wrote:\n> > Do we really want default_explain_analyze ?\n> > It sounds like bad news that EXPLAIN DELETE might or might not remove rows\n> > depending on the state of a variable.\n> \n> I have had sessions where not using ANALYZE was the exception, not the\n> rule. 
I would much prefer to type EXPLAIN (ANALYZE OFF) in those cases.\n\nI suggest that such sessions are themselves exceptional.\n\n> > I think this should be split into two patches:\n> > One to make the default explain options configurable, and a separate patch to\n> > change the defaults.\n> \n> This patch does not change the defaults, so I'm not sure what you mean here?\n\nSorry, ignore that; I wrote it before digesting the patch. \n\nOn Sat, May 23, 2020 at 06:16:25PM +0000, Nikolay Samokhvalov wrote:\n> Not having BUFFERS enabled by default means that in most cases, when asking\n> for help, people send execution plans without buffers info.\n\nI also presumed that's where this patch was going to lead, but it doesn't\nactually change the default. So doesn't address that, except that if someone\nreports a performance problem, we can tell them to run:\n\n|alter system set explain_buffers=on; SELECT pg_reload_conf()\n\n..which is no better, except that it would also affect any *additional* problem\nreports which might be made from that cluster.\n\nIf you want to change the default, I think that should be a separate patch/thread.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 26 May 2020 15:08:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/26/20 10:08 PM, Justin Pryzby wrote:\n> If you want to change the default, I think that should be a separate patch/thread.\n\nYes, it will be.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 26 May 2020 23:50:19 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, 26 May 2020 at 23:59, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 5/26/20 1:30 PM, David Rowley wrote:\n> > On Tue, 26 May 2020 at 13:36, Bruce Momjian <bruce@momjian.us> wrote:\n> >>\n> >> On Sat, May 23, 2020 at 06:16:25PM +0000, Nikolay Samokhvalov wrote:\n> 
>>> This is a very good improvement! Using information about buffers is my favorite\n> >>> way to optimize queries.\n> >>>\n> >>> Not having BUFFERS enabled by default means that in most cases, when asking for\n> >>> help, people send execution plans without buffers info.\n> >>>\n> >>> And it's simply in on event to type \"(ANALYZE, BUFFERS)\" all the time.\n> >>>\n> >>> So I strongly support this change, thank you, Vik.\n> >>\n> >> I am not excited about this new feature.\n> >\n> > I'm against adding GUCs to control what EXPLAIN does by default.\n> >\n> > A few current GUCs come to mind which gives external control to a\n> > command's behaviour are:\n> >\n> > standard_conforming_strings\n> > backslash_quote\n> > bytea_output\n> >\n> > It's pretty difficult for application authors to write code that will\n> > just work due to these GUCs. We end up with GUCs like\n> > escape_string_warning to try and help application authors find areas\n> > which may be problematic.\n> >\n> > It's not an easy thing to search for in the archives, but we've\n> > rejected GUCs that have proposed new ways which can break applications\n> > in this way. For example [1]. You can see some arguments against that\n> > in [2].\n> >\n> > Now, there are certainly far fewer applications out there that will\n> > execute an EXPLAIN, but the number is still above zero. I imagine the\n> > authors of those applications might get upset if we create something\n> > outside of the command that controls what the command does. Perhaps\n> > the idea here is not quite as bad as that as applications could still\n> > override the options by mentioning each EXPLAIN option in the command\n> > they send to the server. However, we're not done adding new options\n> > yet, so by doing this we'd be pretty much insisting that applications\n> > that use EXPLAIN know about all EXPLAIN options for the server version\n> > they're connected to. 
That seems like a big demand given that we've\n> > been careful to still support the old\n> > EXPLAIN syntax after we added the new way to specify the options in parenthesis.\n>\n>\n> Nah, this argument doesn't hold. If an app wants something on or off,\n> it should say so. If it doesn't care, then it doesn't care.\n>\n> Are you saying we should have all new EXPLAIN options off forever into\n> the future because apps won't know about the new data?  I guess we\n> should also not ever introduce new plan nodes because those won't be\n> known either.\n\nI don't think this is a particularly good counter argument. If we add\na new executor node then that's something that the server will send to\nthe client. The client does not need knowledge about which version of\nPostgreSQL it is connected to. If it receives details about some new\nnode type in an EXPLAIN then it can be fairly certain that the server\nsupports that node type.\n\nWhat we're talking about here is the opposite direction. The client is\nsending the command to the server, and the command it'll need to send\nis going to have to be specific to the server version. Now perhaps\nall such tools already have good infrastructure to change behaviour\nbased on version, after all, these tools do also tend to query\ncatalogue tables from time to time and those change between versions.\nPerhaps it would be good to hear from authors of such tools and get\ntheir input. 
If they all agree that it's not a problem then that\ncertainly weakens my argument, but if they don't then perhaps you\nshould reconsider.\n\nDavid\n\n\n", "msg_date": "Wed, 27 May 2020 11:52:56 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, May 26, 2020 at 4:30 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n>\n> I imagine the\n> authors of those applications might get upset if we create something\n> outside of the command that controls what the command does. Perhaps\n> the idea here is not quite as bad as that as applications could still\n> override the options by mentioning each EXPLAIN option in the command\n> they send to the server.\n>\n\nI admittedly haven't tried to write an explain output parser but I'm\ndoubting the conclusion that it is necessary to know the values of the\nvarious options in order to properly parse the output.\n\nThe output format type is knowable by observing the actual structure (first\nfew characters probably) of the output and for everything else (all of the\nbooleans) any parser worth its salt is going to be able to parse output\nwhere every possible setting is set to on.\n\nI'm inclined to go with having everything except ANALYZE be something that\nhas a GUC default override.\n\nDavid J.\n\nOn Tue, May 26, 2020 at 4:30 AM David Rowley <dgrowleyml@gmail.com> wrote:I imagine the\nauthors of those applications might get upset if we create something\noutside of the command that controls what the command does. 
Perhaps\nthe idea here is not quite as bad as that as applications could still\noverride the options by mentioning each EXPLAIN option in the command\nthey send to the server.I admittedly haven't tried to write an explain output parser but I'm doubting the conclusion that it is necessary to know the values of the various options in order to properly parse the output.The output format type is knowable by observing the actual structure (first few characters probably) of the output and for everything else (all of the booleans) any parser worth its salt is going to be able to parse output where every possible setting is set to on.I'm inclined to go with having everything except ANALYZE be something that has a GUC default override.David J.", "msg_date": "Tue, 26 May 2020 17:13:30 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, May 26, 2020 at 4:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> If we add\n> a new executor node then that's something that the server will send to\n> the client. The client does not need knowledge about which version of\n> PostreSQL it is connected to. If it receives details about some new\n> node type in an EXPLAIN then it can be fairly certain that the server\n> supports that node type.\n>\n\nThe above is basically how I imagine explain handling software works today\n- if it sees a specific structure in the output it processes it. It has\nzero expectations about whether a feature with a option knob is set to true\nor false. And its deals with the one non-boolean option by examining the\noutput text.\n\n\n> What we're talking about here is the opposite direction. The client is\n> sending the command to the server, and the command it'll need to send\n> is going to have to be specific to the server version. 
Now perhaps\n> all such tools already have good infrastructure to change behaviour\n> based on version, after all, these tools do also tend to query\n> catalogue tables from time to time and those change between versions.\n>\n\nI don't see how adding these optional GUCs impacts that materially. If the\nclient provides a custom UI to the user and then writes an explain command\nitself it will need to possibly understand version differences whether\nthese GUCs exist or not.\n\nTo hammer the point home if that client software is memorizing the choices\nmade for the various options and then conditions its output based upon\nthose choices then it should be specifying every one of them explicitly, in\nwhich case the GUCs wouldn't matter. If it is somehow depending upon the\nexisting defaults and user choices to figure out the option values then,\nyes, the GUCs would be hidden information that may possibly confuse it if,\nsay, a user has a GUC BUFFERS on but didn't make a choice in the client UI\nwhich defaulted to FALSE mimicking our default and because the default was\nchosen didn't output BUFFER off but left the option unspecified and now the\nbuffers appear, which it for some reason isn't expecting and thus blows\nup. I could care less about that client and certainly wouldn't let its\npossible existence hold me back from adding a feature that bare-bones\nclient users who send their own explain queries would find useful.\n\nDavid J.\n\nOn Tue, May 26, 2020 at 4:53 PM David Rowley <dgrowleyml@gmail.com> wrote:If we add\na new executor node then that's something that the server will send to\nthe client.  The client does not need knowledge about which version of\nPostreSQL it is connected to. If it receives details about some new\nnode type in an EXPLAIN then it can be fairly certain that the server\nsupports that node type.The above is basically how I imagine explain handling software works today - if it sees a specific structure in the output it processes it.  
It has zero expectations about whether a feature with a option knob is set to true or false.  And its deals with the one non-boolean option by examining the output text. \nWhat we're talking about here is the opposite direction. The client is\nsending the command to the server, and the command it'll need to send\nis going to have to be specific to the server version.   Now perhaps\nall such tools already have good infrastructure to change behaviour\nbased on version, after all, these tools do also tend to query\ncatalogue tables from time to time and those change between versions.I don't see how adding these optional GUCs impacts that materially.  If the client provides a custom UI to the user and then writes an explain command itself it will need to possibly understand version differences whether these GUCs exist or not.To hammer the point home if that client software is memorizing the choices made for the various options and then conditions its output based upon those choices then it should be specifying every one of them explicitly, in which case the GUCs wouldn't matter.  If it is somehow depending upon the existing defaults and user choices to figure out the option values then, yes, the GUCs would be hidden information that may possibly confuse it if, say, a user has a GUC BUFFERS on but didn't make a choice in the client UI which defaulted to FALSE mimicking our default and because the default was chosen didn't output BUFFER off but left the option unspecified and now the buffers appear, which it for some reason isn't expecting and thus blows up.  I could care less about that client and certainly wouldn't let its possible existence hold me back from adding a feature that bare-bones client users who send their own explain queries would find useful.David J.", "msg_date": "Tue, 26 May 2020 17:29:24 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, 26 May 2020 at 23:59, Vik Fearing <vik@postgresfriends.org> wrote:\n> Are you saying we should have all new EXPLAIN options off forever into\n> the future because apps won't know about the new data? I guess we\n> should also not ever introduce new plan nodes because those won't be\n> known either.\n\nAnother argument against this is that it creates dependency among the\nnew GUCs. Many of the options are not compatible with each other. e.g.\n\npostgres=# explain (timing on) select 1;\nERROR: EXPLAIN option TIMING requires ANALYZE\n\nWould you propose we just error out in that case, or should we\nsilently enable the required option, or disable the conflicting\noption?\n\nDavid\n\n\n", "msg_date": "Wed, 27 May 2020 17:10:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tuesday, May 26, 2020, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 26 May 2020 at 23:59, Vik Fearing <vik@postgresfriends.org> wrote:\n> > Are you saying we should have all new EXPLAIN options off forever into\n> > the future because apps won't know about the new data? I guess we\n> > should also not ever introduce new plan nodes because those won't be\n> > known either.\n>\n> Another argument against this is that it creates dependency among the\n> new GUCs. Many of the options are not compatible with each other. e.g.\n>\n> postgres=# explain (timing on) select 1;\n> ERROR: EXPLAIN option TIMING requires ANALYZE\n>\n> Would you propose we just error out in that case, or should we\n> silently enable the required option, or disable the conflicting\n> option?\n>\n>\nThe same thing we do today...ignore options that require analyze if analyze\nis not specified. There are no other options documented that are dependent\nwith options besides than analyze. 
The docs say timing defaults to on, its\nonly when explicitly specified instead of being treated as a default that\nthe user message appears. All the GUCs are doing is changing the default.\n\nDavid J.\n", "msg_date": "Tue, 26 May 2020 22:27:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 5/27/20 7:27 AM, David G. Johnston wrote:\n> On Tuesday, May 26, 2020, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n>> On Tue, 26 May 2020 at 23:59, Vik Fearing <vik@postgresfriends.org> wrote:\n>>> Are you saying we should have all new EXPLAIN options off forever into\n>>> the future because apps won't know about the new data? 
I guess we\n>>> should also not ever introduce new plan nodes because those won't be\n>>> known either.\n>>\n>> Another argument against this is that it creates dependency among the\n>> new GUCs. Many of the options are not compatible with each other. e.g.\n>>\n>> postgres=# explain (timing on) select 1;\n>> ERROR: EXPLAIN option TIMING requires ANALYZE\n>>\n>> Would you propose we just error out in that case, or should we\n>> silently enable the required option, or disable the conflicting\n>> option?\n>>\n>>\n> The same thing we do today...ignore options that require analyze if analyze\n> is not specified. There are no other options documented that are dependent\n> with options besides than analyze. The docs say timing defaults to on, its\n> only when explicitly specified instead of being treated as a default that\n> the user message appears. All the GUCs are doing is changing the default.\n\n\nYes, the patch handles this case the way you describe. In fact, the\npatch doesn't (or shouldn't) change any behavior at all.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 27 May 2020 11:10:35 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, May 26, 2020 at 7:30 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm against adding GUCs to control what EXPLAIN does by default.\n>\n> A few current GUCs come to mind which gives external control to a\n> command's behaviour are:\n>\n> standard_conforming_strings\n> backslash_quote\n> bytea_output\n>\n> It's pretty difficult for application authors to write code that will\n> just work due to these GUCs. 
We end up with GUCs like\n> escape_string_warning to try and help application authors find areas\n> which may be problematic.\n\nI agree with this concern, as well as with what David says later,\nnamely that the concern is less here than in some other cases but\nstill not zero.\n\nI do think the idea of changing the default for BUFFERS from OFF to ON\nis a pretty good one, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 29 May 2020 15:44:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, May 26, 2020 at 02:49:46AM +0000, Nikolay Samokhvalov wrote:\n> On Mon, May 25, 2020 at 6:36 PM, Bruce Momjian < bruce@momjian.us > wrote:\n> > I am not excited about this new feature. Why do it only for\n> > EXPLAIN? That is a log of GUCs. I can see this becoming a feature\n> > creep disaster.\n> \n> How about changing the default behavior, making BUFFERS enabled by\n> default? Those who don't need it, always can say BUFFERS OFF — the\n> say as for TIMING.\n\n+1 for changing the default of BUFFERS to ON.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 1 Jun 2020 03:00:25 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Wed, May 27, 2020 at 11:10:35AM +0200, Vik Fearing wrote:\n> On 5/27/20 7:27 AM, David G. Johnston wrote:\n> >> Would you propose we just error out in that case, or should we\n> >> silently enable the required option, or disable the conflicting\n> >> option?\n> >>\n> > The same thing we do today...ignore options that require analyze if analyze\n> > is not specified. 
There are no other options documented that are dependent\n> > with options besides than analyze. The docs say timing defaults to on, its\n> > only when explicitly specified instead of being treated as a default that\n> > the user message appears. All the GUCs are doing is changing the default.\n> \n> \n> Yes, the patch handles this case the way you describe. In fact, the\n> patch doesn't (or shouldn't) change any behavior at all.\n\nI think it would have been helpful if an email explaining this idea for\ndiscussion would have been posted before a patch was generated and\nposted.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 13:25:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, Jun 2, 2020 at 10:25 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, May 27, 2020 at 11:10:35AM +0200, Vik Fearing wrote:\n> > On 5/27/20 7:27 AM, David G. Johnston wrote:\n> > >> Would you propose we just error out in that case, or should we\n> > >> silently enable the required option, or disable the conflicting\n> > >> option?\n> > >>\n> > > The same thing we do today...ignore options that require analyze if\n> analyze\n> > > is not specified. There are no other options documented that are\n> dependent\n> > > with options besides than analyze. The docs say timing defaults to\n> on, its\n> > > only when explicitly specified instead of being treated as a default\n> that\n> > > the user message appears. All the GUCs are doing is changing the\n> default.\n> >\n> >\n> > Yes, the patch handles this case the way you describe. 
In fact, the\n> > patch doesn't (or shouldn't) change any behavior at all.\n>\n> I think it would have been helpful if an email explaining this idea for\n> discussion would have been posted before a patch was generated and\n> posted.\n>\n>\nI can see where it would have saved Vik some effort but I'm not seeing how\nan email without a patch is better for the rest of us than having a\nconcrete change to discuss.\n\nAt this point, given the original goal of the patch was to try and grease a\nsmoother path to changing the default for BUFFERS, and that people seem OK\nwith doing just that without having this patch, I'd say we should just\nchange the default and forget this patch. There hasn't been any other\ndemand from our users for this capability and I also doubt that having\nBUFFERS on by default is going to bother people.\n\nHowever, the one default on option, TIMING, also has a nice blurb about why\nhaving it enabled can be problematic and to consider turning it off. Is\nthere a similar \"oh by the way\" with BUFFERS that I just haven't come\nacross that would making having it on cause more problems than it solves?\n\nDavid J.\n", "msg_date": "Tue, 2 Jun 2020 10:54:16 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 6/2/20 7:54 PM, David G. Johnston wrote:\n> At this point, given the original goal of the patch was to try and grease a\n> smoother path to changing the default for BUFFERS, and that people seem OK\n> with doing just that without having this patch, I'd say we should just\n> change the default and forget this patch. There hasn't been any other\n> demand from our users for this capability and I also doubt that having\n> BUFFERS on by default is going to bother people.\n\nWhat about WAL? Can we turn that one one by default, too?\n\nI often find having VERBOSE on helps when people don't qualify their\ncolumns and I don't know the schema. 
We should turn that on by default,\nas well.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 2 Jun 2020 21:28:48 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 6/2/20 7:25 PM, Bruce Momjian wrote:\n> On Wed, May 27, 2020 at 11:10:35AM +0200, Vik Fearing wrote:\n>> On 5/27/20 7:27 AM, David G. Johnston wrote:\n>>>> Would you propose we just error out in that case, or should we\n>>>> silently enable the required option, or disable the conflicting\n>>>> option?\n>>>>\n>>> The same thing we do today...ignore options that require analyze if analyze\n>>> is not specified. There are no other options documented that are dependent\n>>> with options besides than analyze. The docs say timing defaults to on, its\n>>> only when explicitly specified instead of being treated as a default that\n>>> the user message appears. All the GUCs are doing is changing the default.\n>>\n>>\n>> Yes, the patch handles this case the way you describe. In fact, the\n>> patch doesn't (or shouldn't) change any behavior at all.\n> \n> I think it would have been helpful if an email explaining this idea for\n> discussion would have been posted before a patch was generated and\n> posted.\n\nWhy?\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 2 Jun 2020 21:29:09 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, Jun 2, 2020 at 09:29:09PM +0200, Vik Fearing wrote:\n> On 6/2/20 7:25 PM, Bruce Momjian wrote:\n> > On Wed, May 27, 2020 at 11:10:35AM +0200, Vik Fearing wrote:\n> >> On 5/27/20 7:27 AM, David G. Johnston wrote:\n> >>>> Would you propose we just error out in that case, or should we\n> >>>> silently enable the required option, or disable the conflicting\n> >>>> option?\n> >>>>\n> >>> The same thing we do today...ignore options that require analyze if analyze\n> >>> is not specified. 
There are no other options documented that are dependent\n> >>> with options besides than analyze. The docs say timing defaults to on, its\n> >>> only when explicitly specified instead of being treated as a default that\n> >>> the user message appears. All the GUCs are doing is changing the default.\n> >>\n> >>\n> >> Yes, the patch handles this case the way you describe. In fact, the\n> >> patch doesn't (or shouldn't) change any behavior at all.\n> > \n> > I think it would have been helpful if an email explaining this idea for\n> > discussion would have been posted before a patch was generated and\n> > posted.\n> \n> Why?\n\nBecause you often have to go backwards to religitate things in the\npatch, rather than opening with the design issues. Our TODO list is\nvery clear about this:\n\n\thttps://wiki.postgresql.org/wiki/Todo\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 16:51:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 6/2/20 10:51 PM, Bruce Momjian wrote:\n> On Tue, Jun 2, 2020 at 09:29:09PM +0200, Vik Fearing wrote:\n>> On 6/2/20 7:25 PM, Bruce Momjian wrote:\n>>> I think it would have been helpful if an email explaining this idea for\n>>> discussion would have been posted before a patch was generated and\n>>> posted.\n>>\n>> Why?\n> \n> Because you often have to go backwards to religitate things in the\n> patch, rather than opening with the design issues.\n\n\nSurely that's my problem; and it looks like the only thing I need to\nchange in this patch is to remove the guc for ANALYZE.\n\n\n> Our TODO list is\n> very clear about this:\n> \n> \thttps://wiki.postgresql.org/wiki/Todo\n> \tDesirability -> Design -> Implement -> Test -> Review -> 
Commit\n\n\nI can't read everything on this list (far from it), but I don't recall\nany other spontaneous patch being chastised for not having the\nbikeshedders-at-large do the first two steps before the implementer.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 2 Jun 2020 23:58:51 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, Jun 2, 2020 at 11:58:51PM +0200, Vik Fearing wrote:\n> On 6/2/20 10:51 PM, Bruce Momjian wrote:\n> > On Tue, Jun 2, 2020 at 09:29:09PM +0200, Vik Fearing wrote:\n> >> On 6/2/20 7:25 PM, Bruce Momjian wrote:\n> >>> I think it would have been helpful if an email explaining this idea for\n> >>> discussion would have been posted before a patch was generated and\n> >>> posted.\n> >>\n> >> Why?\n> > \n> > Because you often have to go backwards to religitate things in the\n> > patch, rather than opening with the design issues.\n> \n> \n> Surely that's my problem; and it looks like the only thing I need to\n> change in this patch is to remove the guc for ANALYZE.\n> \n> \n> > Our TODO list is\n> > very clear about this:\n> > \n> > \thttps://wiki.postgresql.org/wiki/Todo\n> > \tDesirability -> Design -> Implement -> Test -> Review -> Commit\n> \n> \n> I can't read everything on this list (far from it), but I don't recall\n> any other spontaneous patch being chastised for not having the\n> bikeshedders-at-large do the first two steps before the implementer.\n\nWell, you have been around a long time, so I assumed you would know\nthis, and have seen this in practice.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 20:35:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Tue, Jun 02, 2020 at 
09:28:48PM +0200, Vik Fearing wrote:\n> On 6/2/20 7:54 PM, David G. Johnston wrote:\n> > At this point, given the original goal of the patch was to try and\n> > grease a smoother path to changing the default for BUFFERS, and\n> > that people seem OK with doing just that without having this\n> > patch, I'd say we should just change the default and forget this\n> > patch. There hasn't been any other demand from our users for this\n> > capability and I also doubt that having BUFFERS on by default is\n> > going to bother people.\n> \n> What about WAL? Can we turn that one one by default, too?\n> \n> I often find having VERBOSE on helps when people don't qualify their\n> columns and I don't know the schema. We should turn that on by\n> default, as well.\n\n+1 for all on (except ANALYZE because it would be a foot cannon) by\ndefault. For those few to whom it really matters, there'd be OFF\nswitches.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Wed, 3 Jun 2020 04:16:21 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "Le mer. 3 juin 2020 à 04:16, David Fetter <david@fetter.org> a écrit :\n\n> On Tue, Jun 02, 2020 at 09:28:48PM +0200, Vik Fearing wrote:\n> > On 6/2/20 7:54 PM, David G. Johnston wrote:\n> > > At this point, given the original goal of the patch was to try and\n> > > grease a smoother path to changing the default for BUFFERS, and\n> > > that people seem OK with doing just that without having this\n> > > patch, I'd say we should just change the default and forget this\n> > > patch. There hasn't been any other demand from our users for this\n> > > capability and I also doubt that having BUFFERS on by default is\n> > > going to bother people.\n> >\n> > What about WAL? 
Can we turn that one one by default, too?\n> >\n> I often find having VERBOSE on helps when people don't qualify their\n> columns and I don't know the schema. We should turn that on by\n> default, as well.\n>\n> +1 for all on (except ANALYZE because it would be a foot cannon) by\n> default. For those few to whom it really matters, there'd be OFF\n> switches.\n>\n>\n+1 for all on, except ANALYZE (foot cannon as David says) and VERBOSE\n(verbose is something you ask for when the usual display isn't enough).\n\n-1 for GUCs, we already have too many of them.\n\n\n-- \nGuillaume.\n", "msg_date": "Wed, 3 Jun 2020 08:51:10 +0200", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "I think this got ample review, so I've set it to \"Waiting\".\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 5 Jul 2020 12:36:35 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On 2020-05-23 11:14, Vik Fearing wrote:\n> Here is a patch to provide default gucs for EXPLAIN options.\n> \n> I have two goals with this patch. The first is that I personally\n> *always* want BUFFERS turned on, so this would allow me to do it without\n> typing it every time.\n\nThere was a lot of opposition to the approach taken by this patch, but \nthere was a lot of support turning BUFFERS on by default. Would you \nlike to submit a patch for that?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 15:30:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "> On 10 Jul 2020, at 15:30, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-05-23 11:14, Vik Fearing wrote:\n>> Here is a patch to provide default gucs for EXPLAIN options.\n>> I have two goals with this patch. 
The first is that I personally\n>> *always* want BUFFERS turned on, so this would allow me to do it without\n>> typing it every time.\n> \n> There was a lot of opposition to the approach taken by this patch, but there was a lot of support turning BUFFERS on by default. Would you like to submit a patch for that?\n\nMy reading of this thread and the above that the patch, and CF entry, as it\nstands should be rejected - but that a separate patch for turning BUFFERS on by\ndefault would be highly appreciated. Unless objections I'll go do that in the\nCF app for 2020-07.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 2 Aug 2020 00:00:56 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "On Sun, Aug 02, 2020 at 12:00:56AM +0200, Daniel Gustafsson wrote:\n> > On 10 Jul 2020, at 15:30, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > \n> > On 2020-05-23 11:14, Vik Fearing wrote:\n> >> Here is a patch to provide default gucs for EXPLAIN options.\n> >> I have two goals with this patch. The first is that I personally\n> >> *always* want BUFFERS turned on, so this would allow me to do it without\n> >> typing it every time.\n> > \n> > There was a lot of opposition to the approach taken by this patch, but there was a lot of support turning BUFFERS on by default. Would you like to submit a patch for that?\n> \n> My reading of this thread and the above that the patch, and CF entry, as it\n> stands should be rejected - but that a separate patch for turning BUFFERS on by\n> default would be highly appreciated. Unless objections I'll go do that in the\n> CF app for 2020-07.\n\nSounds right. I have a patch to enable buffers by default, but that raises the\nissue about how to avoid machine-dependent output when we run explain ANALYZE.\nOne option is to add \"buffers off\" to the existing incantation of (COSTS OFF,\nTIMING OFF, SUMMARY OFF). 
But I think we should just add a new \"REGRESS\"\noption, which does that and any future similar thing (like WAL OFF).\n\nI have a patch to do that, too, which I included in an early version of this\nseries (which I since changed to run explain without analyze). That handles\nmost but not all machine-dependant output.\nhttps://www.postgresql.org/message-id/20200306213310.GM684%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Aug 2020 17:12:30 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" }, { "msg_contents": "> On 2 Aug 2020, at 00:12, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sun, Aug 02, 2020 at 12:00:56AM +0200, Daniel Gustafsson wrote:\n\n>> My reading of this thread and the above that the patch, and CF entry, as it\n>> stands should be rejected - but that a separate patch for turning BUFFERS on by\n>> default would be highly appreciated. Unless objections I'll go do that in the\n>> CF app for 2020-07.\n> \n> Sounds right.\n\nDone that way.\n\n> I have a patch to enable buffers by default\n\nPlease attach this thread as well to the CF entry for that patch once\nregistered.\n\ncheers ./daniel\n\n", "msg_date": "Sun, 2 Aug 2020 22:43:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Default gucs for EXPLAIN" } ]
[ { "msg_contents": "There are a couple of function call overheads I observed in pl/pgsql\ncode : exec_stmt() and exec_cast_value(). Removing these overheads\nresulted in some performance gains.\n\nexec_stmt() :\n\nplpgsql_exec_function() and other toplevel block executors currently\ncall exec_stmt(). But actually they don't need to do everything that\nexec_stmt() does. So they can call a new function instead of\nexec_stmt(), and all the exec_stmt() code can be moved to\nexec_stmts(). The things that exec_stmt() do, but are not necessary\nfor a top level block stmt, are :\n\n1. save_estmt = estate->err_stmt; estate->err_stmt = stmt;\nFor top level blocks, saving the estate->err_stmt is not necessary,\nbecause there is no statement after this block statement. Anyways,\nplpgsql_exec_function() assigns estate.err_stmt just before calling\nexec_stmt so there is really no point in exec_stmt() setting it again.\n\n2. CHECK_FOR_INTERRUPTS()\nThis is not necessary for toplevel block callers.\n\n3. exec_stmt_block() can be directly called rather than exec_stmt()\nbecause func->action is a block statement. So the switch statement is\nnot necessary.\n\nBut this one might be necessary for toplevel block statement:\n if (*plpgsql_plugin_ptr && (*plpgsql_plugin_ptr)->stmt_beg)\n ((*plpgsql_plugin_ptr)->stmt_beg) (estate, stmt);\n\nThere was already a repetitive code in plpgsql_exec_function() and\nother functions around the exec_stmt() call. So in a separate patch\n0001*.patch, I moved that code into a common function\nexec_toplevel_block(). In the main patch\n0002-Get-rid-of-exec_stmt-function-call.patch, I additionally called\nplpgsql_plugin_ptr->stmt_beg() inside exec_toplevel_block(). And moved\nexec_stmt() code into exec_stmts().\n\n\n\nexec_cast_value() :\n\nThis function does not do the casting if not required. So moved the\ncode that actually does the cast into a separate function, so as to\nreduce the exec_cast_value() code and make it inline. 
Attached is the\n0003-Inline-exec_cast_value.patch\n\n\nTesting\n----------\n\nI used two available VMs (one x86_64 and the other arm64), and the\nbenefit showed up on both of these machines. Attached patches 0001,\n0002, 0003 are to be applied in that order. 0001 is just a preparatory\npatch.\n\nFirst I tried with a simple for loop with a single assignment\n(attached forcounter.sql)\n\nBy inlining of the two functions, found noticeable reduction in\nexecution time as shown (figures are in milliseconds, averaged over\nmultiple runs; taken from 'explain analyze' execution times) :\nARM VM :\n HEAD : 100 ; Patched : 88 => 13.6% improvement\nx86 VM :\n HEAD : 71 ; Patched : 66 => 7.63% improvement.\n\nThen I included many assignment statements as shown in attachment\nassignmany.sql. This showed further benefit :\nARM VM :\n HEAD : 1820 ; Patched : 1549 => 17.5% improvement\nx86 VM :\n HEAD : 1020 ; Patched : 869 => 17.4% improvement\n\nInlining just exec_stmt() showed the improvement mainly on the arm64\nVM (7.4%). For x86, it was 2.7%\nBut inlining exec_stmt() and exec_cast_value() together showed\nbenefits on both machines, as can be seen above.\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies", "msg_date": "Sat, 23 May 2020 22:33:43 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "Hi\n\nso 23. 5. 2020 v 19:03 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> There are a couple of function call overheads I observed in pl/pgsql\n> code : exec_stmt() and exec_cast_value(). Removing these overheads\n> resulted in some performance gains.\n>\n> exec_stmt() :\n>\n> plpgsql_exec_function() and other toplevel block executors currently\n> call exec_stmt(). But actually they don't need to do everything that\n> exec_stmt() does. 
So they can call a new function instead of\n> exec_stmt(), and all the exec_stmt() code can be moved to\n> exec_stmts(). The things that exec_stmt() do, but are not necessary\n> for a top level block stmt, are :\n>\n> 1. save_estmt = estate->err_stmt; estate->err_stmt = stmt;\n> For top level blocks, saving the estate->err_stmt is not necessary,\n> because there is no statement after this block statement. Anyways,\n> plpgsql_exec_function() assigns estate.err_stmt just before calling\n> exec_stmt so there is really no point in exec_stmt() setting it again.\n>\n> 2. CHECK_FOR_INTERRUPTS()\n> This is not necessary for toplevel block callers.\n>\n> 3. exec_stmt_block() can be directly called rather than exec_stmt()\n> because func->action is a block statement. So the switch statement is\n> not necessary.\n>\n> But this one might be necessary for toplevel block statement:\n> if (*plpgsql_plugin_ptr && (*plpgsql_plugin_ptr)->stmt_beg)\n> ((*plpgsql_plugin_ptr)->stmt_beg) (estate, stmt);\n>\n> There was already a repetitive code in plpgsql_exec_function() and\n> other functions around the exec_stmt() call. So in a separate patch\n> 0001*.patch, I moved that code into a common function\n> exec_toplevel_block(). In the main patch\n> 0002-Get-rid-of-exec_stmt-function-call.patch, I additionally called\n> plpgsql_plugin_ptr->stmt_beg() inside exec_toplevel_block(). And moved\n> exec_stmt() code into exec_stmts().\n>\n>\n>\n> exec_cast_value() :\n>\n> This function does not do the casting if not required. So moved the\n> code that actually does the cast into a separate function, so as to\n> reduce the exec_cast_value() code and make it inline. Attached is the\n> 0003-Inline-exec_cast_value.patch\n>\n>\n> Testing\n> ----------\n>\n> I used two available VMs (one x86_64 and the other arm64), and the\n> benefit showed up on both of these machines. Attached patches 0001,\n> 0002, 0003 are to be applied in that order. 
0001 is just a preparatory\n> patch.\n>\n> First I tried with a simple for loop with a single assignment\n> (attached forcounter.sql)\n>\n> By inlining of the two functions, found noticeable reduction in\n> execution time as shown (figures are in milliseconds, averaged over\n> multiple runs; taken from 'explain analyze' execution times) :\n> ARM VM :\n> HEAD : 100 ; Patched : 88 => 13.6% improvement\n> x86 VM :\n> HEAD : 71 ; Patched : 66 => 7.63% improvement.\n>\n> Then I included many assignment statements as shown in attachment\n> assignmany.sql. This showed further benefit :\n> ARM VM :\n> HEAD : 1820 ; Patched : 1549 => 17.5% improvement\n> x86 VM :\n> HEAD : 1020 ; Patched : 869 => 17.4% improvement\n>\n> Inlining just exec_stmt() showed the improvement mainly on the arm64\n> VM (7.4%). For x86, it was 2.7%\n> But inlining exec_stmt() and exec_cast_value() together showed\n> benefits on both machines, as can be seen above.\n>\n\n\n FOR counter IN 1..1800000 LOOP\n id = 0; id = 0; id1 = 0;\n id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n id3 = 0; id = 0; id = 0; id1 = 0;\n id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n id3 = 0;\n END LOOP;\n\nThis is not too much typical PLpgSQL code. All expressions are not\nparametrized - so this test is little bit obscure.\n\nLast strange performance plpgsql benchmark did calculation of pi value. It\ndoes something real\n\nRegards\n\nPavel\n\n\n> --\n> Thanks,\n> -Amit Khandekar\n> Huawei Technologies\n>", "msg_date": "Sat, 23 May 2020 19:53:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Sat, 23 May 2020 at 23:24, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> FOR counter IN 1..1800000 LOOP\n> id = 0; id = 0; id1 = 0;\n> id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> id3 = 0; id = 0; id = 0; id1 = 0;\n> id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> id3 = 0;\n> END LOOP;\n>\n> This is not too much typical PLpgSQL code. 
All expressions are not parametrized - so this test is little bit obscure.\n>\n> Last strange performance plpgsql benchmark did calculation of pi value. It does something real\n\nYeah, basically I wanted to have many statements, and that too with\nmany assignments where casts are not required. Let me check if I can\ncome up with a real-enough testcase. Thanks.\n\n\n", "msg_date": "Tue, 26 May 2020 09:06:12 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Tue, 26 May 2020 at 09:06, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:\n>\n> On Sat, 23 May 2020 at 23:24, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > FOR counter IN 1..1800000 LOOP\n> > id = 0; id = 0; id1 = 0;\n> > id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> > id3 = 0; id = 0; id = 0; id1 = 0;\n> > id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> > id3 = 0;\n> > END LOOP;\n> >\n> > This is not too much typical PLpgSQL code. All expressions are not parametrized - so this test is little bit obscure.\n> >\n> > Last strange performance plpgsql benchmark did calculation of pi value. It does something real\n>\n> Yeah, basically I wanted to have many statements, and that too with\n> many assignments where casts are not required. Let me check if I can\n> come up with a real-enough testcase. 
Thanks.\n\ncreate table tab (id int[]);\ninsert into tab select array((select ((random() * 100000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 600000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 1000000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 100000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 600000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 1000000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 100000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 600000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\ninsert into tab select array((select ((random() * 1000000)::bigint) id\nfrom generate_series(1, 30000) order by 1));\n\n\n-- Return how much two consecutive array elements are apart from each\nother, on average; i.e. 
how much the numbers are spaced out.\n-- Input is an ordered array of integers.\nCREATE OR REPLACE FUNCTION avg_space(int[]) RETURNS bigint AS $$\nDECLARE\n diff int = 0;\n num int;\n prevnum int = 1;\nBEGIN\n FOREACH num IN ARRAY $1\n LOOP\n diff = diff + num - prevnum;\n prevnum = num;\n END LOOP;\n RETURN diff/array_length($1, 1);\nEND;\n$$ LANGUAGE plpgsql;\n\nexplain analyze select avg_space(id) from tab;\nLike earlier figures, these are execution times in milliseconds, taken\nfrom explain-analyze.\nARM VM:\n HEAD : 49.8\n patch 0001+0002 : 47.8 => 4.2%\n patch 0001+0002+0003 : 42.9 => 16.1%\nx86 VM:\n HEAD : 32.8\n patch 0001+0002 : 32.7 => 0%\n patch 0001+0002+0003 : 28.0 => 17.1%\n\n\n", "msg_date": "Wed, 27 May 2020 17:01:39 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "Hi\n\nst 27. 5. 2020 v 13:31 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Tue, 26 May 2020 at 09:06, Amit Khandekar <amitdkhan.pg@gmail.com>\n> wrote:\n> >\n> > On Sat, 23 May 2020 at 23:24, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >\n> > > FOR counter IN 1..1800000 LOOP\n> > > id = 0; id = 0; id1 = 0;\n> > > id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> > > id3 = 0; id = 0; id = 0; id1 = 0;\n> > > id2 = 0; id3 = 0; id1 = 0; id2 = 0;\n> > > id3 = 0;\n> > > END LOOP;\n> > >\n> > > This is not too much typical PLpgSQL code. All expressions are not\n> parametrized - so this test is little bit obscure.\n> > >\n> > > Last strange performance plpgsql benchmark did calculation of pi\n> value. It does something real\n> >\n> > Yeah, basically I wanted to have many statements, and that too with\n> > many assignments where casts are not required. Let me check if I can\n> > come up with a real-enough testcase. 
Thanks.\n>\n> create table tab (id int[]);\n> insert into tab select array((select ((random() * 100000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 600000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 1000000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 100000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 600000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 1000000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 100000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 600000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n> insert into tab select array((select ((random() * 1000000)::bigint) id\n> from generate_series(1, 30000) order by 1));\n>\n>\n> -- Return how much two consecutive array elements are apart from each\n> other, on average; i.e. 
how much the numbers are spaced out.\n> -- Input is an ordered array of integers.\n> CREATE OR REPLACE FUNCTION avg_space(int[]) RETURNS bigint AS $$\n> DECLARE\n> diff int = 0;\n> num int;\n> prevnum int = 1;\n> BEGIN\n> FOREACH num IN ARRAY $1\n> LOOP\n> diff = diff + num - prevnum;\n> prevnum = num;\n> END LOOP;\n> RETURN diff/array_length($1, 1);\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> explain analyze select avg_space(id) from tab;\n> Like earlier figures, these are execution times in milliseconds, taken\n> from explain-analyze.\n> ARM VM:\n> HEAD : 49.8\n> patch 0001+0002 : 47.8 => 4.2%\n> patch 0001+0002+0003 : 42.9 => 16.1%\n> x86 VM:\n> HEAD : 32.8\n> patch 0001+0002 : 32.7 => 0%\n> patch 0001+0002+0003 : 28.0 => 17.1%\n>\n\nI tested these patches on my notebook - Lenovo T520 (x64) - on pi\ncalculation\n\nCREATE OR REPLACE FUNCTION pi_est_1(n int)\nRETURNS numeric AS $$\nDECLARE\n accum double precision DEFAULT 1.0;\n c1 double precision DEFAULT 2.0;\n c2 double precision DEFAULT 1.0;\nBEGIN\n FOR i IN 1..n\n LOOP\n accum := accum * ((c1 * c1) / (c2 * (c2 + 2.0)));\n c1 := c1 + 2.0;\n c2 := c2 + 2.0;\n END LOOP;\n RETURN accum * 2.0;\nEND;\n$$ LANGUAGE plpgsql;\n\nand I see about 3-5% of speedup\n\nextra simply test shows\n\ndo $$ declare i int default 0; begin while i < 100000000 loop i := i + 1;\nend loop; raise notice 'i=%', i;end $$;\n\n2% speedup\n\nI don't see 17% anywhere, but 3-5% is not bad.\n\npatch 0001 has sense and can help with code structure\npatch 0002 it is little bit against simplicity, but for PLpgSQL with blocks\nstructure it is correct.\npatch 0003 has sense too\n\ntested on Fedora 32 with gcc 10.1.1 and -O2 option\n\nRegards\n\nPavel", "msg_date": "Thu, 28 May 2020 11:08:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Thu, 28 May 2020 at 14:39, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I don't see 17% anywhere, but 3-5% is not bad.\nDid you see 3-5% only for the pi function, or did you see the same\nimprovement also for the functions that I wrote ? I was getting a\nconsistent result of 14-18 % on both of the VMs. Also, is your test\nmachine running on Windows ? All the machines I tested were on Linux\nkernel (Ubuntu)\n\nBelow are my results for your pi_est_1() function. For this function,\nI am consistently getting 5-9 % improvement. I tested on 3 machines :\n\ngcc : 8.4.0. -O2 option\nOS : Ubuntu Bionic\n\nexplain analyze select pi_est_1(10000000)\n\n1. x86_64 laptop VM (Intel Core i7-8665U)\nHEAD : 2666 2617 2600 2630\nPatched : 2502 2409 2460 2444\n\n\n2. x86_64 VM (Xeon Gold 6151)\nHEAD : 1664 1662 1721 1660\nPatched : 1541 1548 1537 1526\n\n3. ARM64 VM (Kunpeng)\nHEAD : 2873 2864 2860 2861\nPatched : 2568 2513 2501 2538\n\n\n>\n> patch 0001 has sense and can help with code structure\n> patch 0002 it is little bit against simplicity, but for PLpgSQL with blocks structure it is correct.\n\nHere, I moved the exec_stmt code into exec_stmts() function because\nexec_stmts() was the only caller, and that function is not that big. I\nam assuming you were referring to this point when you said it is a bit\nagainst simplicity. 
But I didn't get what you implied by \"but for\nPLpgSQL with blocks structure it is correct\"\n\n-- \nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Sat, 30 May 2020 10:58:35 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "so 30. 5. 2020 v 7:28 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Thu, 28 May 2020 at 14:39, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I don't see 17% anywhere, but 3-5% is not bad.\n> Did you see 3-5% only for the pi function, or did you see the same\n> improvement also for the functions that I wrote ? I was getting a\n> consistent result of 14-18 % on both of the VMs. Also, is your test\n> machine running on Windows ? All the machines I tested were on Linux\n> kernel (Ubuntu)\n>\n\nIt was similar with your example too.\n\nI tested it on Linux Fedora Core 32 - laptop T520 - I7.\n\nI think so the effect of these patches strongly depends on CPU and compiler\n- but it is micro optimization, and when I look to profiler, the bottle\nneck is elsewhere.\n\n\n\n> Below are my results for your pi_est_1() function. For this function,\n> I am consistently getting 5-9 % improvement. I tested on 3 machines :\n>\n> gcc : 8.4.0. -O2 option\n> OS : Ubuntu Bionic\n>\n> explain analyze select pi_est_1(10000000)\n>\n> 1. x86_64 laptop VM (Intel Core i7-8665U)\n> HEAD : 2666 2617 2600 2630\n> Patched : 2502 2409 2460 2444\n>\n>\n> 2. x86_64 VM (Xeon Gold 6151)\n> HEAD : 1664 1662 1721 1660\n> Patched : 1541 1548 1537 1526\n>\n> 3. 
ARM64 VM (Kunpeng)\n> HEAD : 2873 2864 2860 2861\n> Patched : 2568 2513 2501 2538\n>\n>\n> >\n> > patch 0001 has sense and can help with code structure\n> > patch 0002 it is little bit against simplicity, but for PLpgSQL with\n> blocks structure it is correct.\n>\n> Here, I moved the exec_stmt code into exec_stmts() function because\n> exec_stmts() was the only caller, and that function is not that big. I\n> am assuming you were referring to this point when you said it is a bit\n> against simplicity. But I didn't get what you implied by \"but for\n> PLpgSQL with blocks structure it is correct\"\n>\n\nNested statement in PLpgSQL is always a list of statements. It is not\nsingle statement ever. So is not too strange don't have a function\nexecute_stmt.\n\nPavel\n\n\n> --\n> Thanks,\n> -Amit Khandekar\n> Huawei Technologies\n>", "msg_date": "Sat, 30 May 2020 07:40:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Sat, May 23, 2020 at 10:33:43PM +0530, Amit Khandekar wrote:\n> By inlining of the two functions, found noticeable reduction in\n> execution time as shown (figures are in milliseconds, averaged over\n> multiple runs; taken from 'explain analyze' execution times) :\n> ARM VM :\n> HEAD : 100 ; Patched : 88 => 13.6% improvement\n> x86 VM :\n> HEAD : 71 ; Patched : 66 => 7.63% improvement.\n> \n> Then I included many assignment statements as shown in attachment\n> assignmany.sql. This showed further benefit :\n> ARM VM :\n> HEAD : 1820 ; Patched : 1549 => 17.5% improvement\n> x86 VM :\n> HEAD : 1020 ; Patched : 869 => 17.4% improvement\n> \n> Inlining just exec_stmt() showed the improvement mainly on the arm64\n> VM (7.4%). 
For x86, it was 2.7%\n> But inlining exec_stmt() and exec_cast_value() together showed\n> benefits on both machines, as can be seen above.\n\nThis stuff is interesting. Do you have some perf profiles to share?\nI am wondering what's the effect of the inlining with your test\ncases.\n--\nMichael", "msg_date": "Sun, 31 May 2020 11:34:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Sun, 31 May 2020 at 08:04, Michael Paquier <michael@paquier.xyz> wrote:\n> This stuff is interesting. Do you have some perf profiles to share?\n> I am wondering what's the effect of the inlining with your test\n> cases.\n\nBelow are the perf numbers for asignmany.sql :\n\nHEAD :\n\n+ 16.88% postgres postgres [.] CachedPlanIsSimplyValid\n+ 16.64% postgres plpgsql.so [.] exec_stmt\n+ 15.56% postgres plpgsql.so [.] exec_eval_expr\n+ 13.58% postgres plpgsql.so [.] exec_assign_value\n+ 7.49% postgres plpgsql.so [.] exec_cast_value\n+ 7.17% postgres plpgsql.so [.] exec_assign_expr\n+ 5.39% postgres postgres [.] MemoryContextReset\n+ 3.91% postgres postgres [.] ExecJustConst\n+ 3.33% postgres postgres [.] recomputeNamespacePath\n+ 2.88% postgres postgres [.] OverrideSearchPathMatchesCurrent\n+ 2.18% postgres plpgsql.so [.] exec_eval_cleanup.isra.17\n+ 2.15% postgres plpgsql.so [.] exec_stmts\n+ 1.32% postgres plpgsql.so [.] MemoryContextReset@plt\n+ 0.57% postgres plpgsql.so [.] CachedPlanIsSimplyValid@plt\n+ 0.57% postgres postgres [.] GetUserId\n 0.30% postgres plpgsql.so [.] assign_simple_var.isra.13\n 0.05% postgres [kernel.kallsyms] [k] unmap_page_range\n\nPatched :\n\n+ 18.22% postgres postgres [.] CachedPlanIsSimplyValid\n+ 17.25% postgres plpgsql.so [.] exec_eval_expr\n+ 16.31% postgres plpgsql.so [.] exec_stmts\n+ 15.00% postgres plpgsql.so [.] exec_assign_value\n+ 7.56% postgres plpgsql.so [.] exec_assign_expr\n+ 5.64% postgres postgres [.] 
MemoryContextReset\n+ 5.16% postgres postgres [.] ExecJustConst\n+ 4.86% postgres postgres [.] recomputeNamespacePath\n+ 4.54% postgres postgres [.] OverrideSearchPathMatchesCurrent\n+ 2.33% postgres plpgsql.so [.] exec_eval_cleanup.isra.17\n+ 1.26% postgres plpgsql.so [.] MemoryContextReset@plt\n+ 0.81% postgres postgres [.] GetUserId\n+ 0.71% postgres plpgsql.so [.] CachedPlanIsSimplyValid@plt\n 0.26% postgres plpgsql.so [.] assign_simple_var.isra.13\n 0.03% postgres [kernel.kallsyms] [k] unmap_page_range\n 0.02% postgres [kernel.kallsyms] [k] mark_page_accessed\n\nNotice the reduction in percentages :\nHEAD : exec_stmts + exec_stmt = 18.79\nPatched : exec_stmts = 16.31\n\nHEAD : exec_assign_value + exec_cast_value : 21.07\nPatched : exec_assign_value = 15.00\n\nAs expected, reduction of percentage in these two functions caused\nother functions like CachedPlanIsSimplyValid() and exec_eval_expr() to\nshow rise in their percentages.\n\n\n", "msg_date": "Mon, 1 Jun 2020 11:23:04 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Sat, 30 May 2020 at 11:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I think so the effect of these patches strongly depends on CPU and compile\n\nI quickly tried pi() with gcc 10 as well, and saw more or less the\nsame benefit. I think, we are bound to see some differences in the\nbenefits across architectures, kernels and compilers; but looks like\nsome benefit is always there.\n\n> but it is micro optimization, and when I look to profiler, the bottle neck is elsewhere.\n\nPlease check the perf numbers in my reply to Michael. I suppose you\nmeant CachedPlanIsSimplyValid() when you say the bottle neck is\nelsewhere ? Yeah, this function is always the hottest spot, which I\nrecall is being discussed elsewhere. 
But I always see exec_stmt(),\nexec_assign_value as the next functions.\n\n>> > patch 0002 it is little bit against simplicity, but for PLpgSQL with blocks structure it is correct.\n>>\n>> Here, I moved the exec_stmt code into exec_stmts() function because\n>> exec_stmts() was the only caller, and that function is not that big. I\n>> am assuming you were referring to this point when you said it is a bit\n>> against simplicity. But I didn't get what you implied by \"but for\n>> PLpgSQL with blocks structure it is correct\"\n>\n>\n> Nested statement in PLpgSQL is always a list of statements. It is not single statement ever. So is not too strange don't have a function execute_stmt.\n\nRight.\n\n\n", "msg_date": "Mon, 1 Jun 2020 11:32:48 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "po 1. 6. 2020 v 8:15 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Sat, 30 May 2020 at 11:11, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I think so the effect of these patches strongly depends on CPU and\n> compile\n>\n> I quickly tried pi() with gcc 10 as well, and saw more or less the\n> same benefit. I think, we are bound to see some differences in the\n> benefits across architectures, kernels and compilers; but looks like\n> some benefit is always there.\n>\n> > but it is micro optimization, and when I look to profiler, the bottle\n> neck is elsewhere.\n>\n> Please check the perf numbers in my reply to Michael. I suppose you\n> meant CachedPlanIsSimplyValid() when you say the bottle neck is\n> elsewhere ? Yeah, this function is always the hottest spot, which I\n> recall is being discussed elsewhere. But I always see exec_stmt(),\n> exec_assign_value as the next functions.\n>\n\nIt is hard to read the profile result, because these functions are nested\ntogether. For your example\n\n18.22% postgres postgres [.] 
CachedPlanIsSimplyValid\n\nIs little bit strange, and probably this is real bottleneck in your simple\nexample, and maybe some work can be done there, because you assign just\nconstant.\n\nOn second hand, your example is pretty unrealistic - and against any\ndeveloper's best practices for writing cycles.\n\nI think so we can look on PostGIS, where is some computing heavy routines\nin PLpgSQL, and we can look on real profiles.\n\nProbably the most people will have benefit from these optimization.\n\n\n\n\n> >> > patch 0002 it is little bit against simplicity, but for PLpgSQL with\n> blocks structure it is correct.\n> >>\n> >> Here, I moved the exec_stmt code into exec_stmts() function because\n> >> exec_stmts() was the only caller, and that function is not that big. I\n> >> am assuming you were referring to this point when you said it is a bit\n> >> against simplicity. But I didn't get what you implied by \"but for\n> >> PLpgSQL with blocks structure it is correct\"\n> >\n> >\n> > Nested statement in PLpgSQL is always a list of statements. It is not\n> single statement ever. So is not too strange don't have a function\n> execute_stmt.\n>\n> Right.\n>", "msg_date": "Mon, 1 Jun 2020 08:56:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Mon, 1 Jun 2020 at 12:27, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 1. 6. 2020 v 8:15 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com> napsal:\n>>\n>> On Sat, 30 May 2020 at 11:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> > I think so the effect of these patches strongly depends on CPU and compile\n>>\n>> I quickly tried pi() with gcc 10 as well, and saw more or less the\n>> same benefit. 
I think, we are bound to see some differences in the\n>> benefits across architectures, kernels and compilers; but looks like\n>> some benefit is always there.\n>>\n>> > but it is micro optimization, and when I look to profiler, the bottle neck is elsewhere.\n>>\n>> Please check the perf numbers in my reply to Michael. I suppose you\n>> meant CachedPlanIsSimplyValid() when you say the bottle neck is\n>> elsewhere ? Yeah, this function is always the hottest spot, which I\n>> recall is being discussed elsewhere. But I always see exec_stmt(),\n>> exec_assign_value as the next functions.\n>\n>\n> It is hard to read the profile result, because these functions are nested together. For your example\n>\n> 18.22% postgres postgres [.] CachedPlanIsSimplyValid\n>\n> Is little bit strange, and probably this is real bottleneck in your simple example, and maybe some work can be done there, because you assign just constant.\n\nI had earlier had a quick look on this one. CachedPlanIsSimplyValid()\nwas, I recall, hitting a hotspot when it tries to access\nplansource->search_path (possibly cacheline miss). But didn't get a\nchance to further dig on that. For now, i am focusing on these other\nfunctions for which the patches were submitted.\n\n\n>\n> On second hand, your example is pretty unrealistic - and against any developer's best practices for writing cycles.\n>\n> I think so we can look on PostGIS, where is some computing heavy routines in PLpgSQL, and we can look on real profiles.\n>\n> Probably the most people will have benefit from these optimization.\n\nI understand it's not a real world example. For generating perf\nfigures, I had to use an example which amplifies the benefits, so that\nthe effect of the patches on the perf figures also becomes visible.\nHence, used that example. I had shown the benefits up-thread using a\npractical function avg_space(). 
But the perf figures for that example\nwere varying a lot.\n\nSo below, what I did was : Run the avg_space() ~150 times, and took\nperf report. This stabilized the results a bit :\n\nHEAD :\n+ 16.10% 17.29% 16.82% postgres postgres [.]\nExecInterpExpr\n+ 13.80% 13.56% 14.49% postgres plpgsql.so [.]\nexec_assign_value\n+ 12.64% 12.10% 12.09% postgres plpgsql.so [.]\nplpgsql_param_eval_var\n+ 12.15% 11.28% 11.05% postgres plpgsql.so [.]\nexec_stmt\n+ 10.81% 10.24% 10.55% postgres plpgsql.so [.]\nexec_eval_expr\n+ 9.50% 9.35% 9.37% postgres plpgsql.so [.]\nexec_cast_value\n.....\n+ 1.19% 1.06% 1.21% postgres plpgsql.so [.]\nexec_stmts\n\n\n0001+0002 patches applied (i.e. inline exec_stmt) :\n+ 16.90% 17.20% 16.54% postgres postgres [.]\nExecInterpExpr\n+ 16.42% 15.37% 15.28% postgres plpgsql.so [.]\nexec_assign_value\n+ 11.34% 11.92% 11.93% postgres plpgsql.so [.]\nplpgsql_param_eval_var\n+ 11.18% 11.86% 10.99% postgres plpgsql.so [.] exec_stmts.part.0\n+ 10.51% 9.52% 10.61% postgres plpgsql.so [.]\nexec_eval_expr\n+ 9.39% 9.48% 9.30% postgres plpgsql.so [.]\nexec_cast_value\n\nHEAD : exec_stmts + exec_stmt = ~12.7 %\nPatched (0001+0002): exec_stmts = 11.3 %\n\nJust 0003 patch applied (i.e. inline exec_cast_value) :\n+ 17.00% 16.77% 17.09% postgres postgres [.] ExecInterpExpr\n+ 15.21% 15.64% 15.09% postgres plpgsql.so [.] exec_assign_value\n+ 14.48% 14.06% 13.94% postgres plpgsql.so [.] exec_stmt\n+ 13.26% 13.30% 13.14% postgres plpgsql.so [.]\nplpgsql_param_eval_var\n+ 11.48% 11.64% 12.66% postgres plpgsql.so [.] exec_eval_expr\n....\n+ 1.03% 0.85% 0.87% postgres plpgsql.so [.] 
exec_stmts\n\nHEAD : exec_assign_value + exec_cast_value = ~23.4 %\nPatched (0001+0002): exec_assign_value = 15.3%\n\n\nTime in milliseconds after calling avg_space() 150 times :\nHEAD : 7210\nPatch 0001+0002 : 6925\nPatch 0003 : 6670\nPatch 0001+0002+0003 : 6346\n\n\n", "msg_date": "Mon, 1 Jun 2020 19:29:39 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "po 1. 6. 2020 v 15:59 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\nnapsal:\n\n> On Mon, 1 Jun 2020 at 12:27, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > po 1. 6. 2020 v 8:15 odesílatel Amit Khandekar <amitdkhan.pg@gmail.com>\n> napsal:\n> >>\n> >> On Sat, 30 May 2020 at 11:11, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> > I think so the effect of these patches strongly depends on CPU and\n> compile\n> >>\n> >> I quickly tried pi() with gcc 10 as well, and saw more or less the\n> >> same benefit. I think, we are bound to see some differences in the\n> >> benefits across architectures, kernels and compilers; but looks like\n> >> some benefit is always there.\n> >>\n> >> > but it is micro optimization, and when I look to profiler, the bottle\n> neck is elsewhere.\n> >>\n> >> Please check the perf numbers in my reply to Michael. I suppose you\n> >> meant CachedPlanIsSimplyValid() when you say the bottle neck is\n> >> elsewhere ? Yeah, this function is always the hottest spot, which I\n> >> recall is being discussed elsewhere. But I always see exec_stmt(),\n> >> exec_assign_value as the next functions.\n> >\n> >\n> > It is hard to read the profile result, because these functions are\n> nested together. For your example\n> >\n> > 18.22% postgres postgres [.] 
CachedPlanIsSimplyValid\n> >\n> > Is little bit strange, and probably this is real bottleneck in your\n> simple example, and maybe some work can be done there, because you assign\n> just constant.\n>\n> I had earlier had a quick look on this one. CachedPlanIsSimplyValid()\n> was, I recall, hitting a hotspot when it tries to access\n> plansource->search_path (possibly cacheline miss). But didn't get a\n> chance to further dig on that. For now, i am focusing on these other\n> functions for which the patches were submitted.\n>\n>\n> >\n> > On second hand, your example is pretty unrealistic - and against any\n> developer's best practices for writing cycles.\n> >\n> > I think so we can look on PostGIS, where is some computing heavy\n> routines in PLpgSQL, and we can look on real profiles.\n> >\n> > Probably the most people will have benefit from these optimization.\n>\n> I understand it's not a real world example. For generating perf\n> figures, I had to use an example which amplifies the benefits, so that\n> the effect of the patches on the perf figures also becomes visible.\n> Hence, used that example. I had shown the benefits up-thread using a\n> practical function avg_space(). But the perf figures for that example\n> were varying a lot.\n>\n> So below, what I did was : Run the avg_space() ~150 times, and took\n> perf report. This stabilized the results a bit :\n>\n> HEAD :\n> + 16.10% 17.29% 16.82% postgres postgres [.]\n> ExecInterpExpr\n> + 13.80% 13.56% 14.49% postgres plpgsql.so [.]\n> exec_assign_value\n> + 12.64% 12.10% 12.09% postgres plpgsql.so [.]\n> plpgsql_param_eval_var\n> + 12.15% 11.28% 11.05% postgres plpgsql.so [.]\n> exec_stmt\n> + 10.81% 10.24% 10.55% postgres plpgsql.so [.]\n> exec_eval_expr\n> + 9.50% 9.35% 9.37% postgres plpgsql.so [.]\n> exec_cast_value\n> .....\n> + 1.19% 1.06% 1.21% postgres plpgsql.so [.]\n> exec_stmts\n>\n>\n> 0001+0002 patches applied (i.e. 
inline exec_stmt) :\n> + 16.90% 17.20% 16.54% postgres postgres [.]\n> ExecInterpExpr\n> + 16.42% 15.37% 15.28% postgres plpgsql.so [.]\n> exec_assign_value\n> + 11.34% 11.92% 11.93% postgres plpgsql.so [.]\n> plpgsql_param_eval_var\n> + 11.18% 11.86% 10.99% postgres plpgsql.so [.]\n> exec_stmts.part.0\n> + 10.51% 9.52% 10.61% postgres plpgsql.so [.]\n> exec_eval_expr\n> + 9.39% 9.48% 9.30% postgres plpgsql.so [.]\n> exec_cast_value\n>\n> HEAD : exec_stmts + exec_stmt = ~12.7 %\n> Patched (0001+0002): exec_stmts = 11.3 %\n>\n> Just 0003 patch applied (i.e. inline exec_cast_value) :\n> + 17.00% 16.77% 17.09% postgres postgres [.] ExecInterpExpr\n> + 15.21% 15.64% 15.09% postgres plpgsql.so [.]\n> exec_assign_value\n> + 14.48% 14.06% 13.94% postgres plpgsql.so [.] exec_stmt\n> + 13.26% 13.30% 13.14% postgres plpgsql.so [.]\n> plpgsql_param_eval_var\n> + 11.48% 11.64% 12.66% postgres plpgsql.so [.] exec_eval_expr\n> ....\n> + 1.03% 0.85% 0.87% postgres plpgsql.so [.] exec_stmts\n>\n> HEAD : exec_assign_value + exec_cast_value = ~23.4 %\n> Patched (0001+0002): exec_assign_value = 15.3%\n>\n>\n> Time in milliseconds after calling avg_space() 150 times :\n> HEAD : 7210\n> Patch 0001+0002 : 6925\n> Patch 0003 : 6670\n> Patch 0001+0002+0003 : 6346\n>\n\nIs your patch in commitfest in commitfest application?\n\nRegards\n\nPavel\n", "msg_date": "Tue, 9 Jun 2020 18:19:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Tue, 9 Jun 2020 at 21:49, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Is your patch in commitfest in commitfest application?\n\nThanks for reminding me. Just added.\nhttps://commitfest.postgresql.org/28/2590/\n\n\n", "msg_date": "Wed, 10 Jun 2020 16:09:58 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> There are a couple of function call overheads I observed in pl/pgsql\n> code : exec_stmt() and exec_cast_value(). Removing these overheads\n> resulted in some performance gains.\n\nI took a look at the 0001/0002 patches (not 0003 as yet). I do not\nlike 0001 too much. The most concrete problem with it is that\nyou broke translatability of the error messages, cf the first\ntranslatability guideline at [1]. While that could be fixed by passing\nthe entire message not just part of it, I don't see anything that we're\ngaining by moving that stuff into exec_toplevel_block in the first place.\nCertainly, passing a string that describes what will happen *after*\nexec_toplevel_block is just weird. I think what you've got here is\na very arbitrary chopping-up of the existing code based on chance\nsimilarities of the existing callers. 
I think we're better off to make\nexec_toplevel_block be as nearly as possible a match for exec_stmts'\nsemantics.\n\nHence, I propose the attached 0001 to replace 0001/0002. This should\nbe basically indistinguishable performance-wise, though I have not\ntried to benchmark. Note that for reviewability's sake, I did not\nreindent the former body of exec_stmt, though we'd want to do that\nbefore committing.\n\nAlso, 0002 is a small patch on top of that to avoid redundant saves\nand restores of estate->err_stmt within the loop in exec_stmts. This\nmay well not be a measurable improvement, but it's a pretty obvious\ninefficiency in exec_stmts now that it's refactored this way.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/nls-programmer.html#NLS-GUIDELINES", "msg_date": "Wed, 01 Jul 2020 18:17:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Thu, 2 Jul 2020 at 03:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Khandekar <amitdkhan.pg@gmail.com> writes:\n> > There are a couple of function call overheads I observed in pl/pgsql\n> > code : exec_stmt() and exec_cast_value(). Removing these overheads\n> > resulted in some performance gains.\n>\n> I took a look at the 0001/0002 patches (not 0003 as yet). I do not\n> like 0001 too much. The most concrete problem with it is that\n> you broke translatability of the error messages, cf the first\n> translatability guideline at [1].\n\nYeah, I thought we can safely use %s for proper nouns such as \"trigger\nprocedure\" or \"function\" as those would not be translated. 
But looks\nlike even if they won't be translated, the difference in word order\namong languages might create problems with this.\n\n> While that could be fixed by passing\n> the entire message not just part of it, I don't see anything that we're\n> gaining by moving that stuff into exec_toplevel_block in the first place.\n> Certainly, passing a string that describes what will happen *after*\n> exec_toplevel_block is just weird. I think what you've got here is\n> a very arbitrary chopping-up of the existing code based on chance\n> similarities of the existing callers. I think we're better off to make\n> exec_toplevel_block be as nearly as possible a match for exec_stmts'\n> semantics.\n\nI thought some of those things that I kept in exec_toplevel_block() do\nlook like they belong to a top level function. But what you are saying\nalso makes sense : better to keep it similar to exec_stmts.\n\n>\n> Hence, I propose the attached 0001 to replace 0001/0002. This should\n> be basically indistinguishable performance-wise, though I have not\n> tried to benchmark.\n\nThanks for the patches. Yeah, performance-wise it does look similar;\nbut anyways I tried running, and got similar performance numbers.\n\n> Note that for reviewability's sake, I did not\n> reindent the former body of exec_stmt, though we'd want to do that\n> before committing.\n\nRight.\n\n>\n> Also, 0002 is a small patch on top of that to avoid redundant saves\n> and restores of estate->err_stmt within the loop in exec_stmts. 
This\n> may well not be a measurable improvement, but it's a pretty obvious\n> inefficiency in exec_stmts now that it's refactored this way.\n\n0002 also makes sense.\n\n\n", "msg_date": "Fri, 3 Jul 2020 12:11:23 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "I did some performance testing on 0001+0002 here, and found that\nfor me, there's basically no change on x86_64 but a win of 2 to 3\npercent on aarch64, using Pavel's pi_est_1() as a representative\ncase for simple plpgsql statements. That squares with your original\nresults I believe. It's not clear to me whether any of the later\ntests in this thread measured these changes in isolation, or only\nwith 0003 added.\n\nAnyway, that's good enough for me, so I pushed 0001+0002 after a\nlittle bit of additional cosmetic tweaking.\n\nI attach your original 0003 here (it still applies, with some line\noffsets). That's just so the cfbot doesn't get confused about what\nit's supposed to test now.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 03 Jul 2020 15:49:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "I wrote:\n> I attach your original 0003 here (it still applies, with some line\n> offsets). That's just so the cfbot doesn't get confused about what\n> it's supposed to test now.\n\nPushed that part now, too.\n\nBTW, the first test run I did on this (on x86_64) was actually several\npercent *slower* than HEAD. 
I couldn't reproduce that after restarting\nthe postmaster; all later tests concurred that there was a speedup.\nSo I suppose that was just some phase-of-the-moon effect, perhaps caused\nby an ASLR-dependent collision of bits of code in processor cache.\nStill, that illustrates the difficulty of getting useful, reproducible\nimprovements when doing this kind of hacking. I tend to think that\nmost of the time we're better off leaving this to the compiler.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Jul 2020 13:21:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" }, { "msg_contents": "On Sat, 4 Jul 2020 at 01:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I did some performance testing on 0001+0002 here, and found that\n> for me, there's basically no change on x86_64 but a win of 2 to 3\n> percent on aarch64, using Pavel's pi_est_1() as a representative\n> case for simple plpgsql statements. That squares with your original\n> results I believe. It's not clear to me whether any of the later\n> tests in this thread measured these changes in isolation, or only\n> with 0003 added.\n\nYeah I had the same observation. 0001+0002 seems to benefit mostly on\naarch64. And 0003 (exec_case_value) benefited both on amd64 and\naarch64.\n\n>\n> Anyway, that's good enough for me, so I pushed 0001+0002 after a\n> little bit of additional cosmetic tweaking.\n>\n> I attach your original 0003 here (it still applies, with some line\n> offsets). That's just so the cfbot doesn't get confused about what\n> it's supposed to test now.\n\nThanks for pushing all the three !\n\nThanks,\n-Amit Khandekar\nHuawei Technologies\n\n\n", "msg_date": "Mon, 6 Jul 2020 17:59:34 +0530", "msg_from": "Amit Khandekar <amitdkhan.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inlining of couple of functions in pl_exec.c improves performance" } ]
[ { "msg_contents": "Hi,\nThere was news in Phoronix about the Beta 1 Release of Postgres (1).\nUnfortunately for Postgres advocacy it does not bring good news,\nit is showing regressions in the benchmarks compared to version 12.\nWithout going into the technical merits of how the test was done,\nthey have no way of knowing whether such regressions actually exist or if\nit is a failure of how the tests were done.\nBut it would be nice to have arguments to counter, for the sake of\nPostgres' promotion.\nI'm using the development version (latest), and for now, it seems to be\nfaster than version 12.\n\nregards,\nRanier Vilela\n\n1. https://www.phoronix.com/scan.php?page=news_item&px=PostgreSQL-13-Beta\n", "msg_date": "Sun, 24 May 2020 06:51:08 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "PostgresSQL 13.0 Beta 1 on Phoronix" }, { "msg_contents": "On Sun, May 24, 2020 at 2:52 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> There was news in Phoronix about the Beta 1 Release of Postgres (1).\n> Unfortunately for Postgres advocacy it does not bring good news,\n> it is showing regressions in the benchmarks compared to version 12.\n> Without going into the technical merits of how the test was done,\n> they have no way of knowing whether such regressions actually exist or if it is a failure of how the tests were done.\n\nThis shellscript appears to be used by Phoronix to run pgbench:\n\nhttps://github.com/phoronix-test-suite/phoronix-test-suite/blob/f0f8c726f2700faea363f176a4b28dab026d45d0/ob-cache/test-profiles/pts/pgbench-1.8.4/install.sh\n\nIt looks like they're only running pgbench for 60 second runs in all\nconfigurations -- notice that \"-T 60\" is passed to pgbench. I'm not\nentirely sure that that's all that there is to it. Still, there isn't\nany real attempt to make it clear what's going on here. I have my\ndoubts about how representative these numbers are for that reason.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 24 May 2020 10:33:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PostgresSQL 13.0 Beta 1 on Phoronix" }, { "msg_contents": "Em dom., 24 de mai. 
de 2020 às 14:34, Peter Geoghegan <pg@bowt.ie> escreveu:\n\n> On Sun, May 24, 2020 at 2:52 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > There was news in Phoronix about the Beta 1 Release of Postgres (1).\n> > Unfortunately for Postgres advocacy it does not bring good news,\n> > it is showing regressions in the benchmarks compared to version 12.\n> > Without going into the technical merits of how the test was done,\n> > they have no way of knowing whether such regressions actually exist or\n> if it is a failure of how the tests were done.\n>\n> This shellscript appears to be used by Phoronix to run pgbench:\n>\n>\n> https://github.com/phoronix-test-suite/phoronix-test-suite/blob/f0f8c726f2700faea363f176a4b28dab026d45d0/ob-cache/test-profiles/pts/pgbench-1.8.4/install.sh\n>\n> It looks like they're only running pgbench for 60 second runs in all\n> configurations -- notice that \"-T 60\" is passed to pgbench. I'm not\n> entirely sure that that's all that there is to it. Still, there isn't\n> any real attempt to make it clear what's going on here. I have my\n> doubts about how representative these numbers are for that reason.\n>\nI also find it very suspicious.V12 seems to be better at read-only\nworkloads (at least it shows the graphics).\nI'm using V13, in normal mode (read / write), medium load.\n\nregards,\nRanier VIlela\n", "msg_date": "Sun, 24 May 2020 14:50:08 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgresSQL 13.0 Beta 1 on Phoronix" }, { "msg_contents": "On Sun, May 24, 2020 at 02:50:08PM -0300, Ranier Vilela wrote:\n> Em dom., 24 de mai. de 2020 às 14:34, Peter Geoghegan <pg@bowt.ie> escreveu:\n>> It looks like they're only running pgbench for 60 second runs in all\n>> configurations -- notice that \"-T 60\" is passed to pgbench. I'm not\n>> entirely sure that that's all that there is to it. Still, there isn't\n>> any real attempt to make it clear what's going on here. 
I have my\n>> doubts about how representative these numbers are for that reason.\n>\n> I also find it very suspicious.\n\nI don't know, but what seems pretty clear to me is this benchmark does\nzero customization of postgresql.conf (it disables autovacuum!?), and\nthat the number of connections is calculated based on the number of\ncores while the scaling factor is visibly calculated from the amount\nof memory available in the environment. Perhaps the first part is\nwanted, but we are very conservative to allow PG to work on small-ish\nmachines with the default configuration, and a 56-core machine with\n378GB of memory is not something I would define as small-ish.\n--\nMichael", "msg_date": "Mon, 25 May 2020 15:57:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PostgresSQL 13.0 Beta 1 on Phoronix" }, { "msg_contents": "Em seg., 25 de mai. de 2020 às 03:57, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sun, May 24, 2020 at 02:50:08PM -0300, Ranier Vilela wrote:\n> > Em dom., 24 de mai. de 2020 às 14:34, Peter Geoghegan <pg@bowt.ie>\n> escreveu:\n> >> It looks like they're only running pgbench for 60 second runs in all\n> >> configurations -- notice that \"-T 60\" is passed to pgbench. I'm not\n> >> entirely sure that that's all that there is to it. Still, there isn't\n> >> any real attempt to make it clear what's going on here. I have my\n> >> doubts about how representative these numbers are for that reason.\n> >\n> > I also find it very suspicious.\n>\n> I don't know, but what seems pretty clear to me is this benchmark does\n> zero customization of postgresql.conf (it disables autovacuum!?), and\n> that the number of connections is calculated based on the number of\n> cores while the scaling factor is visibly calculated from the amount\n> of memory available in the environment. 
Perhaps the first part is\n> wanted, but we are very conservative to allow PG to work on small-ish\n> machines with the default configuration, and a 56-core machine with\n> 378GB of memory is not something I would define as small-ish.\n>\nDoes this mean that V13 would need additional settings in postgresql.conf,\nto perform better than V12, out of the box?\nIf there is any new feature in V13 that needs some configuration in\npostgresql.conf,\nWould be bettert it should be documented or configured in the installation\nitself,\nto avoid this type of misunderstanding, which harms the perception of\nPostgres.\n\nregards,\nRanier Vilela\n\nEm seg., 25 de mai. de 2020 às 03:57, Michael Paquier <michael@paquier.xyz> escreveu:On Sun, May 24, 2020 at 02:50:08PM -0300, Ranier Vilela wrote:\n> Em dom., 24 de mai. de 2020 às 14:34, Peter Geoghegan <pg@bowt.ie> escreveu:\n>> It looks like they're only running pgbench for 60 second runs in all\n>> configurations -- notice that \"-T 60\" is passed to pgbench. I'm not\n>> entirely sure that that's all that there is to it. Still, there isn't\n>> any real attempt to make it clear what's going on here. I have my\n>> doubts about how representative these numbers are for that reason.\n>\n> I also find it very suspicious.\n\nI don't know, but what seems pretty clear to me is this benchmark does\nzero customization of postgresql.conf (it disables autovacuum!?), and\nthat the number of connections is calculated based on the number of\ncores while the scaling factor is visibly calculated from the amount\nof memory available in the environment.  
Perhaps the first part is\nwanted, but we are very conservative to allow PG to work on small-ish\nmachines with the default configuration, and a 56-core machine with\n378GB of memory is not something I would define as small-ish.Does this mean that V13 would need additional settings in postgresql.conf, to perform better than V12, out of the box?If there is any new feature in V13 that needs some configuration in postgresql.conf, Would be bettert it should be documented or configured in the installation itself, to avoid this type of misunderstanding, which harms the perception of Postgres. regards,Ranier Vilela", "msg_date": "Mon, 25 May 2020 09:16:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgresSQL 13.0 Beta 1 on Phoronix" } ]
[ { "msg_contents": "Greetings.\n\nI am getting random failures in `CREATE INDEX USING gist` over ltree column\nwhile performing pg_restore.\nI get either\n ERROR: stack depth limit exceeded\nor\n ERROR: failed to add item to index page\n\nThing is — if I retry index creation manually, I get it successfully built\nin ~50% of the cases.\n\nI would like to find out what's the real cause here, but I am not sure how\nto do it.\nIf anybody could provide some guidance, I am open to investigate this case.\n\nI'm on PostgreSQL 11.8 (Debian 11.8-1.pgdg90+1), debugging symbols\ninstalled.\n\n-- \nVictor Yegorov", "msg_date": "Sun, 24 May 2020 21:30:15 +0300", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": true, "msg_subject": "Failure to create GiST on ltree column" }, { "msg_contents": "On Sun, May 24, 2020 at 09:30:15PM +0300, Victor Yegorov wrote:\n> Greetings.\n> \n> I am getting random failures in `CREATE INDEX USING gist` over ltree column\n> while performing pg_restore.\n> I get either\n> ERROR: stack depth limit exceeded\n> or\n> ERROR: failed to add item to index page\n> \n> Thing is — if I retry index creation manually, I get it successfully built\n> in ~50% of the cases.\n> \n> I would like to find out what's the real cause here, but I am not sure how\n> to do it.\n> If anybody could provide some guidance, I am open to investigate this case.\n> \n> I'm on PostgreSQL 11.8 (Debian 
11.8-1.pgdg90+1), debugging symbols\n> installed.\n\nI think you'd want to attach a debugger to the backend's PID and set breakpoint\non errfinish() or pg_re_throw() and reproduce the problem to get a stacktrace\n(separate stacks for both errors).\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\nWhat is value of maintenance_work_mem ?\n\nWhat's the definition of the index and relevant table columns ?\n\nDo you know if that was that an issue under 11.7 as well ?\n\nAre you running on any interesting hardware ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 24 May 2020 17:52:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Failure to create GiST on ltree column" }, { "msg_contents": "пн, 25 мая 2020 г. в 01:52, Justin Pryzby <pryzby@telsasoft.com>:\n\n> I think you'd want to attach a debugger to the backend's PID and set\n> breakpoint\n> on errfinish() or pg_re_throw() and reproduce the problem to get a\n> stacktrace\n> (separate stacks for both errors).\n>\n> https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD\n\n\nThanks. 
I'm attaching 2 backtraces.\nBig one is for the “stack depth limit exceeded” case.\n\nAfter I set max_stack_depth to the maximum possible value,\nI am getting “failed to add item to index page” errors.\n\n\nWhat is value of maintenance_work_mem ?\n>\n\n1GB\n\nWhat's the definition of the index and relevant table columns ?\n>\n\n# \\d comments.mp_comments\n Table\n\"comments.mp_comments\"\n Column | Type | Collation |\nNullable | Default\n---------------------------+--------------------------+-----------+----------+--------------------------------------------------------------\n obj_id | bigint | | not\nnull | (nextval('comments.comments_obj_id_seq'::regclass))::numeric\n obj_created | timestamp with time zone | | not\nnull | now()\n obj_modified | timestamp with time zone | | not\nnull | now()\n obj_status_did | smallint | | not\nnull | 1\n c_comment | character varying | | not\nnull |\n mpath | ltree | | not\nnull |\n c_person_obj_id | bigint | | not\nnull |\n c_lcid | character varying | |\n | 32\n c_rating | double precision | | not\nnull | 0\n c_mpath_level | bigint | | not\nnull | 1\n c_root_obj_id | bigint | | not\nnull |\n c_root_obj_type | smallint | |\n |\n c_parent_obj_id | bigint | |\n |\n c_root_obj_vislvl_content | smallint | | not\nnull | 9\n c_forecast_bias | smallint | |\n |\n mpath_array | bigint[] | |\n |\n anonymous | boolean | | not\nnull | false\n c_edited | smallint | |\n |\n c_edited_at | timestamp with time zone | |\n |\n c_image | character varying(255) | |\n |\n c_from_mobile | boolean | |\n |\nIndexes:\n \"mpath_pkey\" PRIMARY KEY, btree (obj_id)\n \"i_mp_comments_mpath_btree\" UNIQUE, btree (mpath)\n \"i_comment_c_comment_ts_vector\" gin (make_tsvector(c_comment::text,\n'russian'::text))\n \"i_comment_mp_comments_person_created\" btree (c_person_obj_id,\nobj_status_did, obj_created)\nInherits: obj_base\n\nNew index to be created:\nCREATE INDEX i_mp_comments_mpath_gist ON comments.mp_comments USING gist\n(mpath);\n\nDo you know if that 
was that an issue under 11.7 as well ?\n>\n\nIt was an issue on the 11.2, I've updated to the latest minor release, no\nchanges.\n\nAre you running on any interesting hardware ?\n>\n\nNothing fancy, no VM.\n\n\n-- \nVictor Yegorov", "msg_date": "Mon, 25 May 2020 16:41:49 +0300", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Failure to create GiST on ltree column" }, { "msg_contents": "On Mon, May 25, 2020 at 04:41:49PM +0300, Victor Yegorov wrote:\n> New index to be created:\n> CREATE INDEX i_mp_comments_mpath_gist ON comments.mp_comments USING gist (mpath);\n\nI wonder if/how that fails if you create the index before adding data:\n\nCREATE TABLE test_path(path ltree);\nCREATE INDEX ON test_path USING GIST(path);\nINSERT INTO test_path SELECT * FROM comments.mp_comments;\n\nDoes that fail on a particular row ?\n\nHow many paths do you have and how long? How big is the table?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 25 May 2020 10:25:45 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Failure to create GiST on ltree column" }, { "msg_contents": "пн, 25 мая 2020 г. в 18:25, Justin Pryzby <pryzby@telsasoft.com>:\n\n> I wonder if/how that fails if you create the index before adding data:\n>\n> CREATE TABLE test_path(path ltree);\n> CREATE INDEX ON test_path USING GIST(path);\n> INSERT INTO test_path SELECT * FROM comments.mp_comments;\n>\n> Does that fail on a particular row ?\n>\n> How many paths do you have and how long? How big is the table?\n>\n\nYes, it fails.\n\nI got permission and created a partial dump of the data with:\nCREATE TABLE lc AS SELECT id, path FROM comments.mp_comments WHERE\nlength(path::text)>=500;\n\nAttached. It is reproduces the error I get. 
One needs to create ltree\nextension first.\n\nI understand, that issue most likely comes from the length of the ltree\ndata stored in the columns.\nBut error is a bit misleading…\n\n\n-- \nVictor Yegorov", "msg_date": "Wed, 27 May 2020 22:17:18 +0300", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Failure to create GiST on ltree column" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile browsing the system catalog docs earlier I noticed that a lot of\nthem mention other catalogs or views in the introductory paragraph\nwithout hyperlinking them. Now, most of these are linked in the\n\"references\" column in the table, but some, like pg_proc's mention of\npg_aggregate have no direct links at all.\n\nThe attached patch makes the first mention of another system catalog or\nview (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier\nnavigation.\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl", "msg_date": "Sun, 24 May 2020 22:05:30 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Missing links between system catalog documentation pages" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> The attached patch makes the first mention of another system catalog or\n> view (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier\n> navigation.\n\nbump...\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Mon, 15 Jun 2020 19:47:43 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "[PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>\n>> The attached patch makes the first mention of another system catalog or\n>> view (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier\n>> navigation.\n>\n> bump...\n\nAdded to the current 
commitfest:\n\nhttps://commitfest.postgresql.org/28/2599/\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Mon, 15 Jun 2020 19:52:31 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "\nHello Dagfinn,\n\n>> The attached patch makes the first mention of another system catalog or\n>> view (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier\n>> navigation.\n\nWhy only the first mention? It seems unlikely that I would ever read such \nchapter linearly, and even so that I would want to jump especially on the \nfirst occurrence but not on others, so ISTM that it should be done all \nmentions?\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 16 Jun 2020 06:44:45 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation\n pages" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n\n> Hello Dagfinn,\n>\n>>> The attached patch makes the first mention of another system catalog or\n>>> view (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier\n>>> navigation.\n>\n> Why only the first mention? It seems unlikely that I would ever read\n> such chapter linearly, and even so that I would want to jump especially\n> on the first occurrence but not on others, so ISTM that it should be\n> done all mentions?\n\nIt's the first mention in the introductory paragraph of _each_ catalog\ntable/view page, not the first mention in the entire catalogs.sgml file.\nE.g. 
https://www.postgresql.org/docs/current/catalog-pg-aggregate.html\nhas two mentions of pg_proc one word apart:\n\n Each entry in pg_aggregate is an extension of an entry in pg_proc. The\n pg_proc entry carries the aggregate's name, …\n\nI didn't think there was much point in linkifying both in that case, and\nother similar situations.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n", "msg_date": "Wed, 17 Jun 2020 14:55:18 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "Hello Dagfinn,\n\n>>>> The attached patch\n\napplies cleanly, doc generation is ok. I'm ok with adding such links \nsystematically.\n\n>>>> makes the first mention of another system catalog or view (as well as \n>>>> pg_hba.conf in pg_hba_file_lines) a link, for easier navigation.\n>>\n>> Why only the first mention? It seems unlikely that I would ever read\n>> such chapter linearly, and even so that I would want to jump especially\n>> on the first occurrence but not on others, so ISTM that it should be\n>> done all mentions?\n>\n> It's the first mention in the introductory paragraph of _each_ catalog\n> table/view page, not the first mention in the entire catalogs.sgml file.\n> E.g. https://www.postgresql.org/docs/current/catalog-pg-aggregate.html\n> has two mentions of pg_proc one word apart:\n>\n> Each entry in pg_aggregate is an extension of an entry in pg_proc. The\n> pg_proc entry carries the aggregate's name, …\n>\n> I didn't think there was much point in linkifying both in that case, and\n> other similar situations.\n\nThe point is that the user reads a sentence, attempts to jump but \nsometimes can't, because the is not the first occurrence. 
I'd go for all \nmentions of another relation should be link.\n\nAlse, ISTM you missed some, maybe you could consider adding them? eg \npg_database in the very first paragraph of the file, pg_attrdef in \npg_attribute description, quite a few in pg_class…\n\n-- \nFabien.", "msg_date": "Sun, 21 Jun 2020 09:01:12 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation\n pages" }, { "msg_contents": "Hi Fabien,\n\nFabien COELHO <coelho@cri.ensmp.fr> writes:\n\n>> It's the first mention in the introductory paragraph of _each_ catalog\n>> table/view page, not the first mention in the entire catalogs.sgml file.\n>> E.g. https://www.postgresql.org/docs/current/catalog-pg-aggregate.html\n>> has two mentions of pg_proc one word apart:\n>>\n>> Each entry in pg_aggregate is an extension of an entry in pg_proc. The\n>> pg_proc entry carries the aggregate's name, …\n>>\n>> I didn't think there was much point in linkifying both in that case, and\n>> other similar situations.\n>\n> The point is that the user reads a sentence, attempts to jump but\n> sometimes can't, because the is not the first occurrence. I'd go for all\n> mentions of another relation should be link.\n\nOkay, I'll make them all links, except the pg_aggregate aggfnoid column,\nwhich I've changed from \"pg_proc OID of the aggregate function\" to just\n\"OID of the agregate function\", since pg_proc is linked immediately\nprior in the \"references\" section, and we generally don't mention the\ncatalog table again in similar cases elsehwere.\n\n> Alse, ISTM you missed some, maybe you could consider adding them? eg\n> pg_database in the very first paragraph of the file, pg_attrdef in\n> pg_attribute description, quite a few in pg_class…\n\nYes, I only looked at the intro paragraphs of the per-catalog pages, not\nthe overview section nor the text after the column tables. 
I've gone\nthrough them all now and linked them. Updated patch attached.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Sun, 21 Jun 2020 15:02:07 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> I didn't think there was much point in linkifying both in that case, and\n>> other similar situations.\n\n> The point is that the user reads a sentence, attempts to jump but \n> sometimes can't, because the is not the first occurrence. I'd go for all \n> mentions of another relation should be link.\n\nThat has not been our practice up to now, eg in comparable cases in\ndiscussions of GUC variables, only the first reference is xref-ified.\nI think it could be kind of annoying to make every reference a link,\nboth for regular readers (the link decoration is too bold in most\nbrowsers) and for users of screen-reader software.\n\nThere is a fair question as to how far apart two references should\nbe before we <xref> both of them. 
But I think that distance\ndoes need to be more than zero, and probably more than one para.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Jun 2020 10:03:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "On 2020-Jun-21, Tom Lane wrote:\n\n> That has not been our practice up to now, eg in comparable cases in\n> discussions of GUC variables, only the first reference is xref-ified.\n> I think it could be kind of annoying to make every reference a link,\n> both for regular readers (the link decoration is too bold in most\n> browsers) and for users of screen-reader software.\n\nIn the glossary I also changed things so that only the first reference\nof a term in another term's definition is linked; my experience reading\nthe originals as submitted (which did link them all at some point) is\nthat the extra links are very distracting, bad for readability. So +1\nfor not adding links to every single mention.\n\n> There is a fair question as to how far apart two references should\n> be before we <xref> both of them. 
But I think that distance\n> does need to be more than zero, and probably more than one para.\n\nNod.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 21 Jun 2020 11:08:02 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> On 2020-Jun-21, Tom Lane wrote:\n>\n>> That has not been our practice up to now, eg in comparable cases in\n>> discussions of GUC variables, only the first reference is xref-ified.\n>> I think it could be kind of annoying to make every reference a link,\n>> both for regular readers (the link decoration is too bold in most\n>> browsers) and for users of screen-reader software.\n>\n> In the glossary I also changed things so that only the first reference\n> of a term in another term's definition is linked; my experience reading\n> the originals as submitted (which did link them all at some point) is\n> that the extra links are very distracting, bad for readability. So +1\n> for not adding links to every single mention.\n\nThere were only three cases of multiple mentions of the same table in a\nsingle paragraph, I've removed them in the attached patch.\n\nI've also added a second patch that makes the SQL commands links. There\nwere some cases of the same commands being mentioned in the descriptions\nof multiple columns in the same table, but I've left those in place,\nsince that feels less disruptive than in prose.\n\n>> There is a fair question as to how far apart two references should\n>> be before we <xref> both of them. 
But I think that distance\n>> does need to be more than zero, and probably more than one para.\n>\n> Nod.\n\nI tend to agree.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen", "msg_date": "Sun, 21 Jun 2020 18:49:46 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "\nHello Tom,\n\n>>> I didn't think there was much point in linkifying both in that case, \n>>> and other similar situations.\n>\n>> The point is that the user reads a sentence, attempts to jump but \n>> sometimes can't, because the is not the first occurrence. I'd go for \n>> all mentions of another relation should be link.\n>\n> That has not been our practice up to now, eg in comparable cases in\n> discussions of GUC variables, only the first reference is xref-ified.\n> I think it could be kind of annoying to make every reference a link,\n> both for regular readers (the link decoration is too bold in most\n> browsers)\n\nHmmm. That looks like an underlying CSS issue, not that links are \nintrinsically bad.\n\nI find it annoying that the same thing appears differently from one line \nto the next. 
It seems I'm the only one who likes things to be uniform, \nthough.\n\n> and for users of screen-reader software.\n\nI do not know about those, and what constraints it puts on markup.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 21 Jun 2020 23:52:37 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation\n pages" }, { "msg_contents": "On 2020-06-21 19:49, Dagfinn Ilmari Mannsåker wrote:\n> There were only three cases of multiple mentions of the same table in a\n> single paragraph, I've removed them in the attached patch.\n> \n> I've also added a second patch that makes the SQL commands links. There\n> were some cases of the same commands being mentioned in the descriptions\n> of multiple columns in the same table, but I've left those in place,\n> since that feels less disruptive than in prose.\n\nCommitted after some rebasing and tweaking.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 13:38:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "Hi Peter,\n\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n\n> On 2020-06-21 19:49, Dagfinn Ilmari Mannsåker wrote:\n>> There were only three cases of multiple mentions of the same table in a\n>> single paragraph, I've removed them in the attached patch.\n>>\n>> I've also added a second patch that makes the SQL commands links. 
There\n>> were some cases of the same commands being mentioned in the descriptions\n>> of multiple columns in the same table, but I've left those in place,\n>> since that feels less disruptive than in prose.\n>\n> Committed after some rebasing and tweaking.\n\nThanks!\n\nI just noticed that both this and the command link patch modified the\nsame sentence about CREATE DATABASE and pg_database, so those changes\nseem to have been lost in the merge. Attached is a follow-up patch that\nadds them both.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law", "msg_date": "Thu, 03 Sep 2020 16:40:28 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": true, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" }, { "msg_contents": "On 2020-09-03 17:40, Dagfinn Ilmari Mannsåker wrote:\n> I just noticed that both this and the command link patch modified the\n> same sentence about CREATE DATABASE and pg_database, so those changes\n> seem to have been lost in the merge. Attached is a follow-up patch that\n> adds them both.\n\nI think in those cases I would leave off the link. The mentions there \nare just examples of the relationship between a catalog and a command. \nIt doesn't mean you are meant to look up the specific catalog and command.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 10 Sep 2020 16:04:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Missing links between system catalog documentation pages" } ]
[ { "msg_contents": "Hi all,\n\nI have been playing with the new APIs of xlogreader.h, and while\nmerging some of my stuff with 13, I found the handling around\n->seg.ws_file overcomplicated and confusing as it is necessary for a\nplugin to manipulate directly the fd of an opened segment in the WAL\nsegment open/close callbacks.\n\nWouldn't it be cleaner to limit the exposition of ->seg.ws_file to the\nuser if possible? There are cases like a WAL sender where you cannot\ndo that, but something that came to my mind is to make\nWALSegmentOpenCB return the fd of the opened segment, and pass down the\nfd to close to WALSegmentCloseCB. Then xlogreader.c is in charge of\nresetting the field when a segment is closed.\n\nAny thoughts?\n--\nMichael", "msg_date": "Mon, 25 May 2020 07:44:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "At Mon, 25 May 2020 07:44:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> I have been playing with the new APIs of xlogreader.h, and while\n> merging some of my stuff with 13, I found the handling around\n> ->seg.ws_file overcomplicated and confusing as it is necessary for a\n> plugin to manipulate directly the fd of an opened segment in the WAL\n> segment open/close callbacks.\n\nThat depends on where we draw responsibility border, or who is\nresponsible to the value of ws_file. I think that this API change was\nassuming the callbacks having full-knowledge of the xlogreader struct\nand are responsible to maintain related struct members, and I agree to\nthat direction.\n\n> Wouldn't it be cleaner to limit the exposition of ->seg.ws_file to the\n> user if possible? There are cases like a WAL sender where you cannot\n> do that, but something that came to my mind is to make\n> WALSegmentOpenCB return the fd of the opened segment, and pass down the\n> fd to close to WALSegmentCloseCB. 
Then xlogreader.c is in charge of\n> resetting the field when a segment is closed.\n> \n> Any thoughts?\n\nIf we are going to hide the struct from the callbacks, we shouldn't\npass to the callbacks a pointer to the complete XLogReaderState\nstruct.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 May 2020 11:17:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "On Mon, May 25, 2020 at 11:17:06AM +0900, Kyotaro Horiguchi wrote:\n> That depends on where we draw responsibility border, or who is\n> responsible to the value of ws_file. I think that this API change was\n> assuming the callbacks having full-knowledge of the xlogreader struct\n> and are responsible to maintain related struct members, and I agree to\n> that direction.\n\nSure. Still I am skeptical that the interface of HEAD is the most\ninstinctive choice as designed. We assume that plugins using\nxlogreader.c have to set the flag all the way down for something which\nis mostly an internal state. WAL senders need to be able to use the\nfd directly to close the segment in some code paths, but the only\nthing where we give, and IMO, should give access to the information of\nWALOpenSegment is for the error path after a failed WALRead(). And it\nseems more natural to me to return the opened fd to xlogreader.c, that\nis then the part in charge of assigning the fd to the correct part of\nXLogReaderState. 
This reminds me a bit of what we did for \nlibpqrcv_receive() a few years ago where we manipulate directly a fd\nto wait on instead of setting it directly in some internal structure.\n\n> If we are going to hide the struct from the callbacks, we shouldn't\n> pass to the callbacks a pointer to the complete XLogReaderState\n> struct.\n\nIt still seems to me that it is helpful to pass down the whole thing\nto the close and open callbacks, for at least debugging purposes. I\nfound that helpful when debugging my tool through my rebase with\nv13.\n\nAs a side note, it was actually tricky to find out that you have to\ncall WALRead() to force the opening of a new segment when calling\nXLogFindNextRecord() in the block read callback after WAL reader\nallocation. Perhaps we should call segment_open() at the beginning of\nXLogFindNextRecord() if no segment is opened yet? I would bet that\nnot everything is aimed at using WALRead() even if that's a useful\nwrapper, and as shaped the block-read callbacks are mostly useful to\ngive the callers the ability to adjust to a maximum record length that\ncan be read, which looks limited (?).\n--\nMichael", "msg_date": "Mon, 25 May 2020 14:19:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "At Mon, 25 May 2020 14:19:34 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, May 25, 2020 at 11:17:06AM +0900, Kyotaro Horiguchi wrote:\n> > That depends on where we draw responsibility border, or who is\n> > responsible to the value of ws_file. I think that this API change was\n> > assuming the callbacks having full-knowledge of the xlogreader struct\n> > and are responsible to maintain related struct members, and I agree to\n> > that direction.\n> \n> Sure. Still I am skeptical that the interface of HEAD is the most\n> instinctive choice as designed. 
We assume that plugins using\n> xlogreader.c have to set the flag all the way down for something which\n> is mostly an internal state. WAL senders need to be able to use the\n> fd directly to close the segment in some code paths, but the only\n> thing where we give, and IMO, should give access to the information of\n> WALOpenSegment is for the error path after a failed WALRead(). And it\n> seems more natural to me to return the opened fd to xlogreader.c, that\n> is then the part in charge of assigning the fd to the correct part of\n> XLogReaderState. This reminds me a bit of what we did for \n> libpqrcv_receive() a few years ago where we manipulate directly a fd\n> to wait on instead of setting it directly in some internal structure.\n\nI agree that it's generally natural that open callback returns an fd\nand close takes an fd. However, for the xlogreader case, the\nxlogreader itself doesn't know much about files. WALRead is the\nexception as a convenient routine for reading WAL files in a generic\nway, which can be thought as belonging to the caller side. The\ncallbacks (that is, the caller side of xlogreader) are in charge of\nopening, reading and closing segment files. That is the same for the\ncase of XLogPageRead, which is the caller of xlogreader and is\ndirectly manipulating ws_file and ws_tli.\n\nFurther, I think that xlogreader shouldn't know about file handling at\nall. The reason that xlogreaderstate has fd and tli is that it is\nneeded by file-handling callbacks, which belongs to the caller of\nxlogreader.\n\nIf the call structure were upside-down, that is, callers handled files\nprivately then call xlogreader to retrieve records from the page data,\nthings would be simpler in the caller's view. 
That is the patch I'm\nproposing as a xlog reader refactoring [1].\n\n> > If we are going to hide the struct from the callbacks, we shouldn't\n> > pass to the callbacks a pointer to the complete XLogReaderState\n> > struct.\n> \n> It still seems to me that it is helpful to pass down the whole thing\n> to the close and open callbacks, for at least debugging purposes. I\n> found that helpful when debugging my tool through my rebase with\n> v13.\n\nWhy do you not looking into upper stack-frames?\n\n> As a side note, it was actually tricky to find out that you have to\n> call WALRead() to force the opening of a new segment when calling\n> XLogFindNextRecord() in the block read callback after WAL reader\n> allocation. Perhaps we should call segment_open() at the beginning of\n> XLogFindNextRecord() if no segment is opened yet? I would bet that\n\nThe API change was mere a refactoring and didn't change the whole\nlogic largely.\n\nThe segment open callback is not a part of xlogreader but only a part\nof the WALRead function. As I mentioned above, file handling is the\nmatter of the caller side (and WALRead, which is a part of\ncaller-side). ReadPageInternal doesn't know about underlying files at\nall.\n\n> not everything is aimed at using WALRead() even if that's a useful\n> wrapper, and as shaped the block-read callbacks are mostly useful to\n> give the callers the ability to adjust to a maximum record length that\n> can be read, which looks limited (?).\n\nI'm not sure. 
The reason, I think, that the page-read callbacks\nthat don't use WALRead don't need an open callback is that file\nhandling doesn't actually belong to xlogreader.\n\n[1] https://www.postgresql.org/message-id/20200422.101246.331162888498679491.horikyota.ntt%40gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 25 May 2020 15:50:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "On 2020-May-25, Michael Paquier wrote:\n\n> I have been playing with the new APIs of xlogreader.h, and while\n> merging some of my stuff with 13, I found the handling around\n> ->seg.ws_file overcomplicated and confusing as it is necessary for a\n> plugin to manipulate directly the fd of an opened segment in the WAL\n> segment open/close callbacks.\n> \n> Wouldn't it be cleaner to limit the exposition of ->seg.ws_file to the\n> user if possible? There are cases like a WAL sender where you cannot\n> do that, but something that came to my mind is to make\n> WALSegmentOpenCB return the fd of the opened segment, and pass down the\n> fd to close to WALSegmentCloseCB. Then xlogreader.c is in charge of\n> resetting the field when a segment is closed.\n\nThe original code did things as you suggest: the open_segment callback\nreturned the FD, and the caller installed it in the struct. We then\nchanged it in commit 850196b610d2 to have the CB install the FD in the\nstruct directly. I didn't like this idea when I first saw it -- my\nreaction was pretty much the same as yours -- but eventually I settled\non it because if we want xlogreader to be in charge of installing the\nFD, then we should also make it responsible for reacting properly when a\nbad FD is returned, and report errors correctly.\n\n(In the previous coding, xlogreader didn't tolerate bad FDs; it just\nblindly installed a bad FD if one was returned. 
Luckily, existing CBs\nnever returned any bad FDs so there's no bug, but it seems hazardous API\ndesign.)\n\nIn my ideal world, the open_segment CB would just open and return a\nvalid FD, or return an error message if unable to; if WALRead sees that\nthe returned FD is not valid, it installs the error message in *errinfo\nso its caller can report it. I'm not opposed to doing things that way,\nbut it seemed more complexity to me than what we have now.\n\nNow maybe you wish for a middle ground: the CB returns the FD, or fails\ntrying. Is that what you want? I didn't like that, as it seems\nunprincipled. I'd rather keep things as they are now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 25 May 2020 16:30:34 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "On Mon, May 25, 2020 at 04:30:34PM -0400, Alvaro Herrera wrote:\n> The original code did things as you suggest: the open_segment callback\n> returned the FD, and the caller installed it in the struct. We then\n> changed it in commit 850196b610d2 to have the CB install the FD in the\n> struct directly. I didn't like this idea when I first saw it -- my\n> reaction was pretty much the same as yours -- but eventually I settled\n> on it because if we want xlogreader to be in charge of installing the\n> FD, then we should also make it responsible for reacting properly when a\n> bad FD is returned, and report errors correctly.\n\nInstalling the fd in WALOpenSegment and reporting an error are not\nrelated concepts though, no?  segment_open could still report errors\nand return the fd, where then xlogreader.c saves the returned fd in\nws_file.\n\n> (In the previous coding, xlogreader didn't tolerate bad FDs; it just\n> blindly installed a bad FD if one was returned. 
Luckily, existing CBs\n> never returned any bad FDs so there's no bug, but it seems hazardous API\n> design.)\n\nI think that the assertions making sure that bad fds are not passed\ndown by segment_open are fine to live with.\n\n> In my ideal world, the open_segment CB would just open and return a\n> valid FD, or return an error message if unable to; if WALRead sees that\n> the returned FD is not valid, it installs the error message in *errinfo\n> so its caller can report it. I'm not opposed to doing things that way,\n> but it seemed more complexity to me than what we have now.\n\nHm. We require now that segment_open fails immediately if it cannot\nhave a correct fd, so it does not return an error message, it just\ngives up. I am indeed not sure that moving around more WALReadError\nis that interesting for that purpose. It could be interesting to\nallow plugins to have a way to retry opening a new segment though\ninstead of giving up? But we don't really need that much now in\ncore.\n\n> Now maybe you wish for a middle ground: the CB returns the FD, or fails\n> trying. Is that what you want? I didn't like that, as it seems\n> unprincipled. I'd rather keep things as they're now.\n\nYeah, I think that the patch I sent previously is attempting at doing\nthings in a middle ground, which felt more natural to me while merging\nmy own stuff: do not fill directly ws_file within segment_open, and\nlet xlogreader.c save the returned fd, with segment_open giving up\nimmediately if we cannot get one. 
If you wish to keep things as they\nare now that's fine by me :)\n\nNB: I found some incorrect comments as per the attached:\ns/open_segment/segment_open/\ns/close_segment/segment_close/\n--\nMichael", "msg_date": "Tue, 26 May 2020 08:49:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" }, { "msg_contents": "On Tue, May 26, 2020 at 08:49:44AM +0900, Michael Paquier wrote:\n> NB: I found some incorrect comments as per the attached:\n> s/open_segment/segment_open/\n> s/close_segment/segment_close/\n\nAnd fixed this one with f93bb0c.\n--\nMichael", "msg_date": "Thu, 28 May 2020 16:44:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: WAL reader APIs and WAL segment open/close callbacks" } ]
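The API shape debated in the thread above — whether the segment-open callback should install the file handle into the reader state itself, or return it so the reader can validate and install it in one central place — can be sketched with a toy reader. This is purely an illustrative model of the "callback returns the handle" style; none of the names below are PostgreSQL's actual C API, and StringIO objects stand in for WAL segment files.

```python
import io

class ToyReader:
    """Toy stand-in for an xlogreader-like object."""

    def __init__(self, segment_open):
        self.segment_open = segment_open  # callback: segno -> file-like handle
        self.ws_file = None               # handle of the currently open segment

    def read_segment(self, segno):
        if self.ws_file is None:
            handle = self.segment_open(segno)  # the callback RETURNS the handle,
            if handle is None:                 # so the reader can reject a bad one...
                raise IOError("segment_open returned a bad handle")
            self.ws_file = handle              # ...and the reader installs it itself
        return self.ws_file.read()

segments = {0: io.StringIO("record-A"), 1: io.StringIO("record-B")}
reader = ToyReader(lambda segno: segments.get(segno))
print(reader.read_segment(0))
```

In the alternative style (the one the thread says PostgreSQL 13 settled on), the callback receives the reader object and assigns its `ws_file` field directly, which leaves the reader no central spot to validate the handle — the trade-off described above.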
[ { "msg_contents": "Hi\n\nIn this list:\n\n https://www.postgresql.org/docs/devel/views-overview.html\n\n\"pg_shmem_allocations\" is not quite in alphabetical order and\nneeds to be swapped with the preceding entry, per attached patch.\n\n\nRegards\n\nIan Barwick\n\n--\nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Mon, 25 May 2020 11:03:01 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "pg13 docs: minor fix for \"System views\" list" }, { "msg_contents": "\n\nOn 2020/05/25 11:03, Ian Barwick wrote:\n> Hi\n> \n> In this list:\n> \n>   https://www.postgresql.org/docs/devel/views-overview.html\n> \n> \"pg_shmem_allocations\" is not quite in alphabetical order and\n> needs to be swapped with the preceding entry, per attached patch.\n\nThanks! LGTM. Will commit this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 May 2020 15:12:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg13 docs: minor fix for \"System views\" list" }, { "msg_contents": "On Mon, May 25, 2020 at 11:03:01AM +0900, Ian Barwick wrote:\n> \"pg_shmem_allocations\" is not quite in alphabetical order and\n> needs to be swapped with the preceding entry, per attached patch.\n\nThanks, fixed!\n--\nMichael", "msg_date": "Mon, 25 May 2020 15:23:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg13 docs: minor fix for \"System views\" list" }, { "msg_contents": "On Mon, May 25, 2020 at 03:12:57PM +0900, Fujii Masao wrote:\n> Thanks! LGTM. 
Will commit this.\n\nOops :)\n--\nMichael", "msg_date": "Mon, 25 May 2020 15:24:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg13 docs: minor fix for \"System views\" list" }, { "msg_contents": "\n\nOn 2020/05/25 15:24, Michael Paquier wrote:\n> On Mon, May 25, 2020 at 03:12:57PM +0900, Fujii Masao wrote:\n>> Thanks! LGTM. Will commit this.\n> \n> Oops :)\n\nNo problem :) Thanks for the commit!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 25 May 2020 16:03:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg13 docs: minor fix for \"System views\" list" }, { "msg_contents": "On 2020/05/25 16:03, Fujii Masao wrote:\n> \n> \n> On 2020/05/25 15:24, Michael Paquier wrote:\n>> On Mon, May 25, 2020 at 03:12:57PM +0900, Fujii Masao wrote:\n>>> Thanks! LGTM. Will commit this.\n>>\n>> Oops :)\n> \n> No problem :) Thanks for the commit!\n\nThanks both!\n\n\nRegards\n\nIan Barwick\n\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 25 May 2020 16:13:02 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: pg13 docs: minor fix for \"System views\" list" } ]
[ { "msg_contents": "Certificates I get at $work come four layers deep:\n\n\nSelf-signed CA cert from \"WE ISSUE TO EVERYBODY.COM\"\n\n Intermediate from \"WE ISSUE TO LOTS OF FOLKS.COM\"\n\n Intermediate from \"WE ISSUE TO ORGS LIKE YOURS.COM\"\n\n End-entity cert for my server.\n\n\nUntil today, we had the topmost, self-signed cert in root.crt\nand stuff worked. But I needed to renew, and it seems that\nrecently WE ISSUE TO ORGS LIKE YOURS has chosen somebody else\nto sign their certs, so I have new certs for the issuers above them,\nso I have to go deal with root.crt.\n\nAnd that got me thinking: do I really want WE ISSUE TO EVERYBODY\nto be what I'm calling trusted in root.crt?\n\nI considered just putting the end-entity cert for my server in there,\nbut it's only good for a couple years, and I'd rather not have to\nfuss with editing and distributing root.crt that often.\n\nAs a compromise, I tried putting the WE ISSUE TO ORGS LIKE YOURS cert there.\nI think I'm willing to accept that much risk. But psql says:\n\npsql: SSL error: certificate verify failed\n\nI would be happy if it gave a little more detail. Is it failing\nverification because the cert I put in root.crt is *not* self-signed,\nand I didn't include the two issuers above it?\n\nDoes that mean it also would fail if I directly put the server's\nend-entity cert there?\n\nWould I have to put all three of WE ISSUE TO ORGS LIKE YOURS,\nWE ISSUE TO LOTS, and WE ISSUE TO EVERYBODY in the root.crt file\nin order for verification to succeed?\n\nIf I did that, would the effect be any different from simply putting\nWE ISSUE TO EVERYBODY there, as before? Would it then happily accept\na cert with a chain that ended at WE ISSUE TO EVERYBODY via some other\npath? Is there a way I can accomplish trusting only certs issued by\nWE ISSUE TO ORGS LIKE YOURS?\n\nI never noticed how thin the docs or verify-failure messages were\non this topic until just now. 
Are there any options, openssl\nenvironment variables, or the like, to get it to be a little more\nforthcoming about what it expects?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 25 May 2020 15:15:48 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "what can go in root.crt ?" }, { "msg_contents": "On 05/25/20 15:15, Chapman Flack wrote:\n> Does that mean it also would fail if I directly put the server's\n> end-entity cert there?\n> \n> Would I have to put all three of WE ISSUE TO ORGS LIKE YOURS,\n> WE ISSUE TO LOTS, and WE ISSUE TO EVERYBODY in the root.crt file\n> in order for verification to succeed?\n> \n> If I did that, would the effect be any different from simply putting\n> WE ISSUE TO EVERYBODY there, as before? Would it then happily accept\n> a cert with a chain that ended at WE ISSUE TO EVERYBODY via some other\n> path? Is there a way I can accomplish trusting only certs issued by\n> WE ISSUE TO ORGS LIKE YOURS?\n\nThe client library is the PG 10 one that comes with Ubuntu 18.04\nin case it matters.\n\nI think I have just verified that I can't make it work by putting\nthe end entity cert there either. It is back working again with only\nthe WE ISSUE TO EVERYBODY cert there, but if there is a workable way\nto narrow that grant of trust a teensy little bit, I would be happy\nto do that.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 25 May 2020 15:32:52 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On Mon, May 25, 2020 at 03:32:52PM -0400, Chapman Flack wrote:\n> On 05/25/20 15:15, Chapman Flack wrote:\n> > Does that mean it also would fail if I directly put the server's\n> > end-entity cert there?\n> > \n> > Would I have to put all three of WE ISSUE TO ORGS LIKE YOURS,\n> > WE ISSUE TO LOTS, and WE ISSUE TO EVERYBODY in the root.crt file\n> > in order for verification to succeed?\n> > \n> > If I did that, would the effect be any different from simply putting\n> > WE ISSUE TO EVERYBODY there, as before? Would it then happily accept\n> > a cert with a chain that ended at WE ISSUE TO EVERYBODY via some other\n> > path? Is there a way I can accomplish trusting only certs issued by\n> > WE ISSUE TO ORGS LIKE YOURS?\n> \n> The client library is the PG 10 one that comes with Ubuntu 18.04\n> in case it matters.\n> \n> I think I have just verified that I can't make it work by putting\n> the end entity cert there either. It is back working again with only\n> the WE ISSUE TO EVERYBODY cert there, but if there is a workable way\n> to narrow that grant of trust a teensy little bit, I would be happy\n> to do that.\n\nDid you review the PG documentation about intermediate certificates?\n\n\thttps://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CERTIFICATE-CREATION\n\nIs there a specific question you have? I don't know how to improve the\nerror reporting.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 22:03:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On 05/25/20 22:03, Bruce Momjian wrote:\n> Did you review the PG documentation about intermediate certificates?\n> \n> \thttps://www.postgresql.org/docs/13/ssl-tcp.html#SSL-CERTIFICATE-CREATION\n\nAFAICT, there isn't much in that section to apply to my question.\n\n> Is there a specific question you have?\n\nI'm pretty sure there is. :) Let me try to clarify it.\n\nThe doc section you linked has an example showing how to be my own CA,\nand generate a chain of certs including a self-signed one to serve as\nthe root. It suggests putting that self-signed one in root.crt on the\nclient.\n\nThat means the client will happily connect to any server wearing a\ncertificate signed by that root (or by intermediates that can be followed\nup to that root). For the example that's fine, because that root signer\nis me, and there aren't a lot of other certs around that chain back to it.\n\nAt $work we have Ways Of Doing Things. Generating our own self-signed certs\ngenerally isn't among those. If I want a certificate so I can stand up\na server, I generate a key and a CSR, I send the CSR to our Bureau of\nMaking Certificates Happen, and they send me back a signed cert with\na chain of external authorities, ending in the self-signed certificate\nof a prominent commercial root CA.\n\nSure, I can put that self-signed root cert into root.crt on the client,\nand my client will happily connect to my server.\n\nBut in this case the world is teeming with other certificates and even\nother whole sub-CAs that chain back to that prominent root issuer.\nGranted, you might have to be a bit enterprising to find a sub-CA out\nthere that will sign a cert for you with the name of my server in it,\nbut if you can, my client will follow the chain back to that same root,\nand therefore trust it.\n\nSo I would like to be able to do one of two things:\n\n1. 
I would like to put my server's end-entity (leaf) certificate\n in the root.crt file, and have my client only accept a server\n with that exact cert.\n\nOr,\n\n2. I would like to put one of the lower intermediates from the chain\n into the root.crt file, to at least limit my client to trusting\n only certs signed by that particular sub-CA.\n\n\nWhat seems to be happening (for the libpq and libssl versions in 18.04\nanyway) is that the certificate that I put in root.crt is found, but\nbecause it isn't a literal \"root\", as in signer-of-itself, the library\ndeclares a verification failure because it hasn't been able to continue\nclimbing the chain to find a \"root\" cert.\n\nWhereas I would like it to say \"but I don't have to do that, because\nI have already verified as far as this certificate that the administrator\ndeliberately placed in this file here to tell me to trust it.\"\n\nIn Java, for example, the analogous file is called trustStore, which\nmay be a better name. You populate the trustStore with certificates you\nconsider trusted. They can be root certs, intermediate certs, or flat-out\nleaf certs of individual servers. Whenever Java is verifying a connection,\nas soon as its chain-following brings it to a cert that you placed in\nthe trustStore, it stops and says \"yes, I trust this, because you have\ntold me to.\"\n\nI have also encountered web browsers that work in both of these ways.\nThe last time I was standing up a temporary web service to test something,\nI did make a self-signed cert and then use it to sign a leaf cert for\nthe service. 
I was testing with Chrome and Firefox and they both have\nspiffy UIs for managing a list of trusted certs, but one of them (I have\nforgotten which) allowed me to simply load the leaf cert that I wanted\nto trust, while the other insisted I give it the self-signed root that\nI had signed the leaf cert with.\n\nI think the former behavior, which is like Java's, is strictly more useful.\n\nWhat puzzled me today, and why I began this thread, is that I hadn't\n(and still haven't) found a clear discussion in the doc of these two\napproaches and which one libpq is intended to supply. I know that my\nattempts to use root.crt like a trustStore have so far been met with\nfailure, but between the terse error message and the sparse doc, it is\nhard to know whether that's a \"you can't do that, dummy!\" or a \"you\njust haven't guessed the right way yet.\"\n\nIf there is a way to get a trustStore-like behavior and have the client\ntrust an intermediate or leaf cert that I explicitly tell it to, but I\njust haven't pronounced the magic words right, this email may be read\nas \"oh good, how do I do it?\"\n\nIf the current implementation really is stuck accepting only self-signed\ncerts in that file and therefore can't offer trustStore-like behavior,\nthis email may be read as \"it could be made more useful by changing that.\"\n\nAnd in either case, there seems to be room in the docs for some\ndiscussion of the difference between those two models and which one\nlibpq is meant to offer.\n\nI would not be unwilling to try my hand at such a doc patch one day,\nbut for now I'm still hoping to learn the answers myself.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 25 May 2020 23:17:46 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" 
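The "trustStore" semantics described above — verification succeeds as soon as the presented chain reaches any certificate the administrator explicitly placed in the store, be it a root, an intermediate, or a leaf — can be illustrated with a toy chain-walking function. This is plain Python with no real cryptography: signature checking is elided, and all names are invented to mirror the example in this thread.

```python
def chain_trusted(leaf, issuer_of, trust_store):
    """issuer_of maps a cert name to its issuer's name; trust_store is a
    set of names the administrator explicitly trusts.  Verification stops
    with success at the FIRST trusted cert, which need not be self-signed."""
    current, seen = leaf, set()
    while current not in seen:            # guard against issuer loops
        if current in trust_store:
            return True                   # trustStore model: stop climbing here
        seen.add(current)
        issuer = issuer_of.get(current)
        if issuer is None or issuer == current:
            return False                  # hit an untrusted (self-signed) top
        current = issuer
    return False

chain = {
    "my-server": "WE ISSUE TO ORGS LIKE YOURS",
    "WE ISSUE TO ORGS LIKE YOURS": "WE ISSUE TO LOTS OF FOLKS",
    "WE ISSUE TO LOTS OF FOLKS": "WE ISSUE TO EVERYBODY",
    "WE ISSUE TO EVERYBODY": "WE ISSUE TO EVERYBODY",  # self-signed root
}

# Trusting only the sub-CA is enough to verify my server...
print(chain_trusted("my-server", chain, {"WE ISSUE TO ORGS LIKE YOURS"}))
# ...while a cert chained through some other sub-CA of the same root is rejected.
rogue = {**chain,
         "bad-server": "WE SIGN ANYTHING FOR A PRICE",
         "WE SIGN ANYTHING FOR A PRICE": "WE ISSUE TO EVERYBODY"}
print(chain_trusted("bad-server", rogue, {"WE ISSUE TO ORGS LIKE YOURS"}))
```

For what it's worth, OpenSSL itself has a verification flag with roughly this behavior (X509_V_FLAG_PARTIAL_CHAIN, exposed as the `-partial_chain` option of `openssl verify` since 1.0.2), but as far as I can tell libpq offers no way to set it.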
}, { "msg_contents": "On Mon, 2020-05-25 at 15:15 -0400, Chapman Flack wrote:\n> Certificates I get at $work come four layers deep:\n> \n> \n> Self-signed CA cert from \"WE ISSUE TO EVERYBODY.COM\"\n> \n> Intermediate from \"WE ISSUE TO LOTS OF FOLKS.COM\"\n> \n> Intermediate from \"WE ISSUE TO ORGS LIKE YOURS.COM\"\n> \n> End-entity cert for my server.\n> \n> \n> And that got me thinking: do I really want WE ISSUE TO EVERYBODY\n> to be what I'm calling trusted in root.crt?\n\nI don't know if there is a way to get this to work, but the\nfundamental problem seems that you have got the system wrong.\n\nIf you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\nit as a certification authority.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 26 May 2020 05:22:13 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Tue, May 26, 2020 at 05:22:13AM +0200, Laurenz Albe wrote:\n> On Mon, 2020-05-25 at 15:15 -0400, Chapman Flack wrote:\n> > Certificates I get at $work come four layers deep:\n> > \n> > \n> > Self-signed CA cert from \"WE ISSUE TO EVERYBODY.COM\"\n> > \n> > Intermediate from \"WE ISSUE TO LOTS OF FOLKS.COM\"\n> > \n> > Intermediate from \"WE ISSUE TO ORGS LIKE YOURS.COM\"\n> > \n> > End-entity cert for my server.\n> > \n> > \n> > And that got me thinking: do I really want WE ISSUE TO EVERYBODY\n> > to be what I'm calling trusted in root.crt?\n> \n> I don't know if there is a way to get this to work, but the\n> fundamental problem seems that you have got the system wrong.\n> \n> If you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\n> it as a certification authority.\n\nIt is true that WE ISSUE TO EVERYBODY can create a new intermediate with\nthe same intemediate name anytime they want.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. 
As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 23:36:32 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 05/25/20 23:22, Laurenz Albe wrote:\n> I don't know if there is a way to get this to work, but the\n> fundamental problem seems that you have got the system wrong.\n> \n> If you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\n> it as a certification authority.\n\nThat's a reasonable viewpoint.\n\nI've worked in organizations from smallish to largish, and in the\nlargish ones, sometimes there are Ways Of Doing Things, that were\nlaid down by Other People.\n\nThere the challenge becomes how to piece together practices that\nmaximize my comfort level, within the ways of doing things that\ncome down from others.\n\nIf the libpq root.crt file can be made to work similarly to a\nJava trustStore, that expands the possible solution space.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 25 May 2020 23:43:01 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "What about the SSH model? In the Postgres context, this would basically be\na table containing authorized certificates for each user. Upon receiving a\nconnection attempt, look up the user and the presented certificate and see\nif it is one of the authorized ones. 
If so, do the usual verification that\nthe client really does have the corresponding private key and if so,\nauthenticate the connection.\n\nThis is way simpler than messing around with certificate authorities.\nPlease, if anybody can give a coherent explanation why this isn't the first\ncertificate authentication model supported, I would love to understand.\n\nOn Mon, 25 May 2020 at 23:43, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 05/25/20 23:22, Laurenz Albe wrote:\n> > I don't know if there is a way to get this to work, but the\n> > fundamental problem seems that you have got the system wrong.\n> >\n> > If you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\n> > it as a certification authority.\n>\n> That's a reasonable viewpoint.\n>\n> I've worked in organizations from smallish to largish, and in the\n> largish ones, sometimes there are Ways Of Doing Things, that were\n> laid down by Other People.\n>\n> There the challenge becomes how to piece together practices that\n> maximize my comfort level, within the ways of doing things that\n> come down from others.\n>\n> If the libpq root.crt file can be made to work similarly to a\n> Java trustStore, that expands the possible solution space.\n>\n> Regards,\n> -Chap\n>\n>\n>\n", "msg_date": "Tue, 26 May 2020 00:07:23 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 2020-May-25, Chapman Flack wrote:\n\n> If the libpq root.crt file can be made to work similarly to a\n> Java trustStore, that expands the possible solution space.\n\nIf I understand you correctly, you want a file in which you drop any of\nthese intermediate CA's cert in, causing the server to trust a cert\nemitted by that CA -- regardless of that CA being actually root.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 00:07:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On Tue, 26 May 2020 at 00:08, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-May-25, Chapman Flack wrote:\n>\n> > If the libpq root.crt file can be made to work similarly to a\n> > Java trustStore, that expands the possible solution space.\n>\n> If I understand you correctly, you want a file in which you drop any of\n> these intermediate CA's cert in, causing the server to trust a cert\n> emitted by that CA -- regardless of that CA being actually root.\n>\n\nI think he wants only certificates signed by the specific intermediate\ncertificate to be trusted.\n\nI just had an idea: would it work to create a self-signed root certificate,\nput it in root.crt, and then use it to sign the intermediate certificate?\n\nYou can't use other people's certificates to sign your certificates, and\nit's not usual to sign other people's intermediate certificates, but as far\nas I can tell there is no reason you can't.\n\nOn Tue, 26 May 2020 at 00:08, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2020-May-25, Chapman Flack wrote:\n\n> If the libpq root.crt file can be made to work similarly to a\n> Java trustStore, that expands the possible solution space.\n\nIf I understand you correctly, you want a file in which you drop any of\nthese intermediate CA's cert in, causing the server to trust a cert\nemitted by that CA -- regardless of that CA being actually root.I think he wants only certificates signed by the specific intermediate certificate to be trusted.I just had an idea: would it work to create a self-signed root certificate, put it in root.crt, and then use it to sign the intermediate certificate?You can't use other people's certificates to sign your certificates, and it's not usual to sign other people's intermediate certificates, but as far as I can tell there is no reason you can't.", "msg_date": "Tue, 26 May 2020 00:12:18 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: what 
can go in root.crt ?" }, { "msg_contents": "On 05/26/20 00:07, Alvaro Herrera wrote:\n>> If the libpq root.crt file can be made to work similarly to a\n>> Java trustStore, that expands the possible solution space.\n> \n> If I understand you correctly, you want a file in which you drop any of\n> these intermediate CA's cert in, causing the server to trust a cert\n> emitted by that CA -- regardless of that CA being actually root.\n\nRight: an intermediate cert, or a self-signed root cert, or even the\nend-entity (leaf) cert for a specific machine. You name it, if I put\nin in the trust store, and a connection verification starts with or leads\nto a cert that I put there, success.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 26 May 2020 00:31:34 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 05/26/20 00:07, Isaac Morland wrote:\n> What about the SSH model? In the Postgres context, this would basically be\n> a table containing authorized certificates for each user. Upon receiving a\n> connection attempt, look up the user and the presented certificate and see\n> if it is one of the authorized ones. If so, do the usual verification that\n> the client really does have the corresponding private key and if so,\n> authenticate the connection.\n\nI like the SSH model, but just in case it wasn't clear, I wasn't thinking\nabout client-cert authentication here, just about conventional verification\nby the client of a certificate for the server.\n\nBy the same token, there's no reason not to ask the same questions about\nthe other direction.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 26 May 2020 00:35:06 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On Tue, 26 May 2020 at 11:43, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 05/25/20 23:22, Laurenz Albe wrote:\n> > I don't know if there is a way to get this to work, but the\n> > fundamental problem seems that you have got the system wrong.\n> >\n> > If you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\n> > it as a certification authority.\n>\n>\nRight. In fact you must not, because WE ISSUE TO EVERYBODY can issue a new\ncertificate in the name of WE ISSUE TO ORGS LIKE YOURS.COM - right down to\nmatching backdated signing date and fingerprint.\n\nThen give it to WE ARE THE BAD GUYS.COM.\n\nIf you don't trust the root, you don't trust any of the intermediate\nbranches.\n\nThe main reason to put intermediate certificates in the root.crt is that it\nallows PostgreSQL to supply the whole certificate chain to a client during\nthe TLS handshake. That frees the clients from needing to have local copies\nof the intermediate certificates; they only have to know about WE ISSUE TO\nEVERYBODY.\n\nIf you wanted to require that your certs are signed by WE ISSUE TO ORGS\nLIKE YOURS.COM, you must configure your CLIENTS with a restricted root of\ntrust that accepts only the intermediate certificate of WE ISSUE TO ORGS\nLIKE YOURS.COM . Assuming the client will accept it; not all clients allow\nyou to configure \"certificates I trust to sign peers\" separately to\n\"certificates that sign my trusted roots\". Because really, in security\nterms that's nonsensical.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Tue, 26 May 2020 at 11:43, Chapman Flack <chap@anastigmatix.net> wrote:On 05/25/20 23:22, Laurenz Albe wrote:\n> I don't know if there is a way to get this to work, but the\n> fundamental problem seems that you have got the system wrong.\n> \n> If you don't trust WE ISSUE TO EVERYBODY, then you shouldn't use\n> it as a certification authority.\nRight. 
In fact you must not, because WE ISSUE TO EVERYBODY can issue a new certificate in the name of WE ISSUE TO ORGS LIKE YOURS.COM - right down to matching backdated signing date and fingerprint.Then give it to WE ARE THE BAD GUYS.COM.If you don't trust the root, you don't trust any of the intermediate branches.The main reason to put intermediate certificates in the root.crt is that it allows PostgreSQL to supply the whole certificate chain to a client during the TLS handshake. That frees the clients from needing to have local copies of the intermediate certificates; they only have to know about WE ISSUE TO EVERYBODY.If you wanted to require that your certs are signed by WE ISSUE TO ORGS LIKE YOURS.COM, you must configure your CLIENTS with a restricted root of trust that accepts only the intermediate certificate of WE ISSUE TO ORGS LIKE YOURS.COM . Assuming the client will accept it; not all clients allow you to configure \"certificates I trust to sign peers\" separately to \"certificates that sign my trusted roots\". Because really, in security terms that's nonsensical.--  Craig Ringer                   http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Tue, 26 May 2020 14:05:17 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 05/26/20 02:05, Craig Ringer wrote:\n> The main reason to put intermediate certificates in the root.crt is that it\n> allows PostgreSQL to supply the whole certificate chain to a client during\n\nHold on a sec; you're not talking about what I'm talking about, yet.\n\nYes, you have make the chain available to the server to serve out with\nits own cert so clients can verify. root.crt isn't where you put that,\nthough. 
You put that in server.crt (or wherever the ssl_cert_file GUC\npoints).\n\n> That frees the clients from needing to have local copies\n> of the intermediate certificates; they only have to know about WE ISSUE TO\n> EVERYBODY.\n\nBingo. Put WE ISSUE TO EVERYBODY in the root.crt (client-side, libpq) file,\nand the clients happily connect to the server. It is easy and convenient.\n\nBut if WE STEAL YOUR STUFF gets their certs signed by WE SIGN ANYTHING\nFOR A PRICE and their CA is WE'RE SOMETIMES LESS ATTENTIVE THAN YOU HOPE\nand /their/ CA is WE ISSUE TO EVERYBODY, then the clients would just as\nhappily connect to a server of the same name run by WE STEAL YOUR STUFF.\n\nWhich brings us around to what I was talking about.\n\n> If you wanted to require that your certs are signed by WE ISSUE TO ORGS\n> LIKE YOURS.COM, you must configure your CLIENTS with a restricted root of\n> trust that accepts only the intermediate certificate of WE ISSUE TO ORGS\n> LIKE YOURS.COM .\n\nPrecisely. And the place to configure that restricted root of trust would\nhave to be ~/.postgresql/root.crt on the client, and the question is,\ndoes that work?\n\n> Assuming the client will accept it; not all clients allow\n> you to configure \"certificates I trust to sign peers\" separately to\n> \"certificates that sign my trusted roots\". 
Because really, in security\n> terms that's nonsensical.\n\nAnd that's the key question: there are clients that grok that and clients\nthat don't; so now, libpq is which kind of client?\n\nCould you expand on your \"sign _peers_\" notion, and on what exactly\nyou are calling nonsensical?\n\nEach of those intermediate CAs really is a CA; the WE ISSUE TO ORGS\nLIKE yours cert does contain these extensions:\n\n X509v3 extensions:\n ...\n X509v3 Key Usage: critical\n Digital Signature, Certificate Sign, CRL Sign\n X509v3 Basic Constraints: critical\n CA:TRUE, pathlen:0\n ...\n\nMy server's end-entity certificate does not have the Certificate Sign,\nCRL Sign or CA:TRUE bits, or a URL to a revocation-checking service.\nAn end entity and a CA are not 'peers' in those respects.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 26 May 2020 09:21:47 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "\nOn 5/25/20 3:32 PM, Chapman Flack wrote:\n> On 05/25/20 15:15, Chapman Flack wrote:\n>> Does that mean it also would fail if I directly put the server's\n>> end-entity cert there?\n>>\n>> Would I have to put all three of WE ISSUE TO ORGS LIKE YOURS,\n>> WE ISSUE TO LOTS, and WE ISSUE TO EVERYBODY in the root.crt file\n>> in order for verification to succeed?\n>>\n>> If I did that, would the effect be any different from simply putting\n>> WE ISSUE TO EVERYBODY there, as before? Would it then happily accept\n>> a cert with a chain that ended at WE ISSUE TO EVERYBODY via some other\n>> path? Is there a way I can accomplish trusting only certs issued by\n>> WE ISSUE TO ORGS LIKE YOURS?\n> The client library is the PG 10 one that comes with Ubuntu 18.04\n> in case it matters.\n>\n> I think I have just verified that I can't make it work by putting\n> the end entity cert there either. 
It is back working again with only\n> the WE ISSUE TO EVERYBODY cert there, but if there is a workable way\n> to narrow that grant of trust a teensy little bit, I would be happy\n> to do that.\n>\n\n\nThe trouble is I think you have it the wrong way round. It makes sense\nto give less trust to a non-root CA than to one of its up-chain\nauthorities, e.g. only trust it for certain domains, or for a lesser\nperiod of time. But it doesn't seem to make much sense to trust the\nup-chain CA less, since it is what you should base your trust of the\nlower CA on.\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 26 May 2020 09:35:01 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 05/26/20 09:35, Andrew Dunstan wrote:\n\n> The trouble is I think you have it the wrong way round. It makes sense\n> to give less trust to a non-root CA than to one of its up-chain\n> authorities, e.g. only trust it for certain domains, or for a lesser\n> period of time. 
But it doesn't seem to make much sense to trust the\n> up-chain CA less, since it is what you should base your trust of the\n> lower CA on.\n\nI wonder if there might be different meanings of 'trust' in play here\ncomplicating the conversation.\n\nAt $work, when I make a certificate request and send it off to our\nown in-house bureau of making certificates happen, what you might\nexpect is that they would be running the first level of CA right\nin house (and IIRC that was the case in my early years here).\nSo I would get back some chain like this:\n\n WE ARE A PROMINENT GLOBAL ISSUER FOUND IN WEB BROWSER TRUST STORES\n WE ISSUE TO LOTS OF FOLKS\n WE ISSUE TO ORGS LIKE YOURS\n WE ARE YOUR ORG\n my server cert\n\nIn that picture, the question of whether I give more or less trust to\nPROMINENT GLOBAL ISSUER because they have larger market cap and their\nname in the news, or to WE ARE YOUR ORG because they are my org, seems\nto turn on different understandings of trust. There might be a lot of\nreasons in general to trust PROMINENT GLOBAL in the sense of putting\ntheir cert in widely distributed web browser trust stores. But there\nare excellent reasons to trust WE ARE YOUR ORG as authoritative on\nwhat's a server for my org.\n\nNow in these later days when there is no longer an in-house CA at the\nbottom of this chain, the situation's not as clear-cut. WE ISSUE TO ORGS\nLIKE YOURS isn't quite authoritative on what's a server for my org.\nBut there are inked signatures on paper between their honcho and my org's\nhoncho that don't exist between my org and PROMINENT GLOBAL. And you would\nhave to work harder to get a spoof cert for one of my servers signed by\nthem. You would have to talk /them/ into it.\n\nIf I have PROMINENT GLOBAL in there, you just have to make offers to\ntheir umpty sub-CAs and their umpty-squared sub-sub-CAs and find just\none that will make a deal.\n\n> to give less trust to a non-root CA than to one of its up-chain\n> authorities, e.g. 
only trust it for certain domains, or for a lesser\n\nThat's certainly appropriate, and I'd be delighted if the root.crt file\nsupported syntax like this:\n\n *.myorg.org: WE ARE YOUR ORG.crt\n *: PROMINENT GLOBAL ISSUER.crt { show exfiltration/HIPAA/FERPA banner }\n\n\nDoing the same thing (or some of it) in certificate style, you would\nwant WE ARE YOUR ORG.crt to be signed with a Name Constraints extension\nlimiting it to be a signer for .myorg.org certificates. That is indeed\na thing. The history in [1] shows it was at first of limited value\nbecause client libraries didn't all grok it, or would accept certificates\nwithout Subject Alt Name extensions and verify by CN instead, without the\nconstraint. But I have noticed more recently that mainstream web browsers,\nanyway, are no longer tolerant of certs without SAN, and that seems to be\npart of a road map to giving the Name Constraints more teeth.\n\nRegards,\n-Chap\n\n[1]\nhttps://security.stackexchange.com/questions/31376/can-i-restrict-a-certification-authority-to-signing-certain-domains-only\n\n\n", "msg_date": "Tue, 26 May 2020 10:13:56 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On Tue, May 26, 2020 at 10:13:56AM -0400, Chapman Flack wrote:\n> At $work, when I make a certificate request and send it off to our\n> own in-house bureau of making certificates happen, what you might\n> expect is that they would be running the first level of CA right\n> in house (and IIRC that was the case in my early years here).\n> So I would get back some chain like this:\n> \n> WE ARE A PROMINENT GLOBAL ISSUER FOUND IN WEB BROWSER TRUST STORES\n> WE ISSUE TO LOTS OF FOLKS\n> WE ISSUE TO ORGS LIKE YOURS\n> WE ARE YOUR ORG\n> my server cert\n> \n> In that picture, the question of whether I give more or less trust to\n> PROMINENT GLOBAL ISSUER because they have larger market cap and their\n> name in the news, or to WE ARE YOUR ORG because they are my org, seems\n> to turn on different understandings of trust. There might be a lot of\n> reasons in general to trust PROMINENT GLOBAL in the sense of putting\n> their cert in widely distributed web browser trust stores. But there\n> are excellent reasons to trust WE ARE YOUR ORG as authoritative on\n> what's a server for my org.\n\nI think it gets down to an issue I blogged about in 2017:\n\n\thttps://momjian.us/main/blogs/pgblog/2017.html#January_9_2017\n\n\tThe use of public certificate authorities doesn't make sense for most\n\tdatabases because it allows third parties to create trusted\n\tcertificates. Their only reasonable use is if you wish to allow public\n\tcertificate authorities to independently issue certificates that you\n\twish to trust. This is necessary for browsers because they often connect\n\tto unaffiliated websites where trust must be established by a third\n\tparty. (Browsers include a list of public certificate authorities who\n\tcan issue website certificates it trusts.) 
\n\nThe server certificate should be issued by a certificate authority root\noutside of your organization only if you want people outside of your\norganization to trust your server certificate, but you are then asking\nfor the client to only trust an intermediate inside your organization. \nThe big question is why bother having the server certificate chain to a\nroot certificat you don't trust when you have no intention of having\nclients outside of your organization trust the server certificate. \nPostgres could be made to handle such cases, but is is really a valid\nconfiguration we should support?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 13:14:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Tue, 2 Jun 2020 at 20:14, Bruce Momjian <bruce@momjian.us> wrote:\n\n> The server certificate should be issued by a certificate authority root\n> outside of your organization only if you want people outside of your\n> organization to trust your server certificate, but you are then asking\n> for the client to only trust an intermediate inside your organization.\n> The big question is why bother having the server certificate chain to a\n> root certificat you don't trust when you have no intention of having\n> clients outside of your organization trust the server certificate.\n> Postgres could be made to handle such cases, but is is really a valid\n> configuration we should support?\n>\n\nI think the \"why\" the org cert is not root was already made clear, that is\nthe copmany policy. 
I don't think postgres should take a stance whether the\ncertificate designated as the root of trust is self-signed or claims to get\nits power from somewhere else.\n\nIt's pretty easy to conceive of certificate management procedures that make\nuse of this chain to implement certificate replacement securely. For\nexample one might trust the global issuer to verify that a CSR is coming\nfrom the O= value that it's claiming to come from to automate replacement\nof intermediate certificates, but not trust that every other sub-CA signed\nby root and their sub-sub-CA-s are completely honest and secure.\n\nRegards,\nAnts Aasma\n\nOn Tue, 2 Jun 2020 at 20:14, Bruce Momjian <bruce@momjian.us> wrote:The server certificate should be issued by a certificate authority root\noutside of your organization only if you want people outside of your\norganization to trust your server certificate, but you are then asking\nfor the client to only trust an intermediate inside your organization. \nThe big question is why bother having the server certificate chain to a\nroot certificat you don't trust when you have no intention of having\nclients outside of your organization trust the server certificate. \nPostgres could be made to handle such cases, but is is really a valid\nconfiguration we should support?I think the \"why\" the org cert is not root was already made clear, that is the copmany policy. I don't think postgres should take a stance whether the certificate designated as the root of trust is self-signed or claims to get its power from somewhere else.It's pretty easy to conceive of certificate management procedures that make use of this chain to implement certificate replacement securely. 
For example one might trust the global issuer to verify that a CSR is coming from the O= value that it's claiming to come from to automate replacement of intermediate certificates, but not trust that every other sub-CA signed by root and their sub-sub-CA-s are completely honest and secure.Regards,Ants Aasma", "msg_date": "Wed, 3 Jun 2020 15:07:30 +0300", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Wed, Jun 3, 2020 at 03:07:30PM +0300, Ants Aasma wrote:\n> On Tue, 2 Jun 2020 at 20:14, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> The server certificate should be issued by a certificate authority root\n> outside of your organization only if you want people outside of your\n> organization to trust your server certificate, but you are then asking\n> for the client to only trust an intermediate inside your organization.\n> The big question is why bother having the server certificate chain to a\n> root certificat you don't trust when you have no intention of having\n> clients outside of your organization trust the server certificate.\n> Postgres could be made to handle such cases, but is is really a valid\n> configuration we should support?\n> \n> \n> I think the \"why\" the org cert is not root was already made clear, that is the\n> copmany policy. I don't think postgres should take a stance whether the\n> certificate designated as the root of trust is self-signed or claims to get its\n> power from�somewhere else.\n\nUh, we sure can. We disallow many configurations that we consider\nunsafe. 
openssl allowed a lot of things, and their flexibility makes\nthem less secure.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 3 Jun 2020 16:34:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 06/03/20 08:07, Ants Aasma wrote:\n> I think the \"why\" the org cert is not root was already made clear, that is\n> the company policy.\n\nThank you, yes, that was what I had intended to convey, and you have saved\nme finishing a weedsier follow-up message hoping to convey it better.\n\n> I don't think postgres should take a stance ...\n\n> whether the\n> certificate designated as the root of trust is self-signed or claims to get\n> its power from somewhere else.\n\nOn 06/03/20 16:34, Bruce Momjian wrote:\n> Uh, we sure can. We disallow many configurations that we consider\n> unsafe.\n\nOk, so a person in the situation described here, who is not in a position\nto demand changes in an organizational policy (whether or not it seems\nill-conceived to you or even to him/her), is facing this question:\n\nWhat are the \"safest\" things I /can/ do, under the existing constraints,\nand /which of those will work in PostgreSQL/?\n\nFor example, we might agree that it is safe to trust nothing but the\nend-entity cert of my server itself. I made a server, here is its cert,\nhere is a root.crt file for libpq containing only this exact cert, I\nwant libpq to connect only ever to this server with this cert and nothing\nelse. It's a pain because I have to roll out new root.crt files to everybody\nwhenever the cert changes, but it would be hard to call it unsafe.\n\nGreat! Can I do that? I think the answer is no. 
I haven't found it\ndocumented, but I think libpq will fail such a connection, because\nthe cert it has found in root.crt is not self-signed.\n\nOr, vary the scenario just enough that my organization, or even my\ndepartment in my organization, now has its own CA, as the first\nintermediate, the issuer of the end-entity cert.\n\nIt might be entirely reasonable to put that CA cert into root.crt,\nso libpq would only connect to things whose certs were issued in\nmy department, or at least my org. I trust the person who would be\nissuing my department's certs (in all likelihood, me). I would\nmore-or-less trust my org to issue certs for my org.\n\nGreat! Can I do that? I think that answer is also no, for the same\nreason.\n\nMy department's or org's CA cert isn't going to be self-signed, it's\ngoing to be vouched for by a chain of more certs leading to a globally\nrecognized one. Why? Short answer, our org also has web sites. We like\nfor people's browsers to be able to see those. We have one group that\nmakes server certs and they follow one procedure, and the certs they make\ncome out the same way. That shouldn't be a problem for PostgreSQL, so it's\nhard to argue they should have to use a different procedure just for my\ncert.\n\n> I don't think postgres should take a stance whether the\n> certificate designated as the root of trust is self-signed or claims\n> to get its power from somewhere else.\n\nI'm inclined to agree but I would change the wording a bit for clarity.\n*Any* certificate we are going to trust gets its power from somewhere\nelse. Being self-signed is not an exception to that rule (if it were,\nevery Snake Oil, Ltd. self-signed cert generated by every student in\nevery network security class ever would be a root of trust).\n\nFor us to trust a cert, it must be vouched for in some way. 
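(For comparison, the openssl command-line verifier can already be told to\naccept a non-self-signed certificate as a trust anchor, via -partial_chain,\navailable in OpenSSL 1.0.2 and later. A throwaway demonstration follows;\nevery CN and file name in it is made up:)\n\n

```shell
# Build a throwaway three-level chain: root -> intermediate -> server.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -keyout root.key -out root.pem -subj "/CN=WE ISSUE TO EVERYBODY"
printf 'basicConstraints=critical,CA:TRUE,pathlen:0\nkeyUsage=critical,keyCertSign\n' > int.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=WE ISSUE TO ORGS LIKE YOURS"
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -days 2 -extfile int.ext -out int.pem
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=db.example.org"
openssl x509 -req -in srv.csr -CA int.pem -CAkey int.key -CAcreateserial \
  -days 2 -out srv.pem

# Default chain building insists on reaching a self-signed root,
# so trusting only the intermediate fails ...
openssl verify -trusted int.pem srv.pem || echo "rejected without the root"

# ... but with -partial_chain the intermediate is accepted as an anchor.
openssl verify -partial_chain -trusted int.pem srv.pem
```

\n(Whether libpq could or should set the corresponding library flag,\nX509_V_FLAG_PARTIAL_CHAIN, is of course exactly the open question.)\n\n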
The most\nimportant vouching that happens, as far as libpq is concerned, is\nvouching by the administrator who controls the file system where root.crt\nis found and the contents of that file.\n\nIf libpq is looking at a cert and finds it in that file, I have vouched\nfor it. That's why it's there.\n\nIf it is self-signed, then I'm the only person vouching for it, and that's\nok.\n\nIf it is not self-signed, that just means somebody else also has vouched\nfor it. Maybe for the same use, maybe for some other use. In any event,\nthe fact that somebody else has also vouched for it does not in any way\nnegate that I vouched for it, by putting it there in that file I control.\n\n> It's pretty easy to conceive of certificate management procedures that make\n> use of this chain to implement certificate replacement securely. For\n> example one might trust the global issuer to verify that a CSR is coming\n> from the O= value that it's claiming to come from to automate replacement\n> of intermediate certificates, but not trust that every other sub-CA signed\n> by root and their sub-sub-CA-s are completely honest and secure.\n\nThat's an example of the kind of policy design I think ought to be possible,\nbut a first step to getting there would be to just better document what\ndoes and doesn't work in libpq now. There seem to be some possible\nconfigurations that aren't available, not because of principled arguments\nfor disallowing them, but because they fail unstated assumptions.\n\nIn an ideal world, I think libpq would be using this algorithm:\n\n I'm looking at the server's certificate, s.\n Is s unexpired and in the trust file? If so, SUCCEED.\n\n otherwise, loop:\n get issuer certificate i from s (if s is self-signed, FAIL).\n does i have CA:TRUE and Certificate Sign bits? If not, FAIL.\n does i's Domain Constraint allow it to sign s? If not, FAIL.\n is i unexpired, or has s a Signed Certificate Timestamp made\n while i was unexpired? 
If not, FAIL.\n is i in the trust file? If so, SUCCEED.\n s := i, continue.\n\n(I left out steps like verify signature, check revocation, etc.)\n\nWhat it seems to be doing, though, is just:\n\n I'm looking at s\n Follow chain all the way to a self-signed cert\n is that in the file?\n\nwhich seems too simplistic.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 3 Jun 2020 19:57:16 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Wed, 2020-06-03 at 19:57 -0400, Chapman Flack wrote:\n> Ok, so a person in the situation described here, who is not in a position\n> to demand changes in an organizational policy (whether or not it seems\n> ill-conceived to you or even to him/her), is facing this question:\n> \n> What are the \"safest\" things I /can/ do, under the existing constraints,\n> and /which of those will work in PostgreSQL/?\n\nI feel bad about bending the basic idea of certificates and trust to suit\nsome misbegotten bureaucratic constraints on good security.\n\nIf you are working for a company that has a bad idea of security\nand cannot be dissuaded from it, you point that out loudly and then\nkeep going. Trying to subvert the principles of an architecture\nvery often leads to pain in my experience.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 04 Jun 2020 08:07:24 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On 06/04/20 02:07, Laurenz Albe wrote:\n> I feel bad about bending the basic idea of certificates and trust to suit\n> some misbegotten bureaucratic constraints on good security.\n\nCan you elaborate on what, in the email message you replied to here,\nrepresented a bending of the basic idea of certificates and trust?\n\nI didn't notice any.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 4 Jun 2020 08:25:10 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Thu, 2020-06-04 at 08:25 -0400, Chapman Flack wrote:\n> > I feel bad about bending the basic idea of certificates and trust to suit\n> > some misbegotten bureaucratic constraints on good security.\n> \n> Can you elaborate on what, in the email message you replied to here,\n> represented a bending of the basic idea of certificates and trust?\n> \n> I didn't notice any.\n\nI was referring to the wish to *not* use a self-signed CA certificate,\nbut an intermediate certificate as the ultimate authority, based on\na distrust of the certification authority that your organization says\nyou should trust.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 04 Jun 2020 17:04:37 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" 
}, { "msg_contents": "On 06/04/20 11:04, Laurenz Albe wrote:\n> I was referring to the wish to *not* use a self-signed CA certificate,\n> but an intermediate certificate as the ultimate authority, based on\n> a distrust of the certification authority that your organization says\n> you should trust.\n\nAre you aware of any principled reason it should be impossible to\ninclude an end-entity certificate in the trust store used by a client?\n\nAre you aware of any principled reason it should be impossible to\ninclude a certificate that has the CA:TRUE and Certificate Sign bits\nin the trust store used by a client, whether it is its own signer\nor has been signed by another CA?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 4 Jun 2020 11:21:08 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "\nOn 6/3/20 7:57 PM, Chapman Flack wrote:\n>\n> In an ideal world, I think libpq would be using this algorithm:\n>\n> I'm looking at the server's certificate, s.\n> Is s unexpired and in the trust file? If so, SUCCEED.\n>\n> otherwise, loop:\n> get issuer certificate i from s (if s is self-signed, FAIL).\n> does i have CA:TRUE and Certificate Sign bits? If not, FAIL.\n> does i's Domain Constraint allow it to sign s? If not, FAIL.\n> is i unexpired, or has s a Signed Certificate Timestamp made\n> while i was unexpired? If not, FAIL.\n> is i in the trust file? If so, SUCCEED.\n> s := i, continue.\n>\n> (I left out steps like verify signature, check revocation, etc.)\n>\n> What it seems to be doing, though, is just:\n>\n> I'm looking at s\n> Follow chain all the way to a self-signed cert\n> is that in the file?\n>\n> which seems too simplistic.\n>\n\n\nDo we actually do any of this sort of thing? I confess my impression was\nthis is all handled by the openssl libraries, we just hand over the\ncerts and let openssl do its thing. 
Am I misinformed about that?\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 4 Jun 2020 17:31:41 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 06/04/20 17:31, Andrew Dunstan wrote:\n> Do we actually do any of this sort of thing? I confess my impression was\n> this is all handled by the openssl libraries, we just hand over the\n> certs and let openssl do its thing. Am I misinformed about that?\n\nI haven't delved very far into the code yet (my initial aim with this\nthread was not to pose a rhetorical question, but an ordinary one, and\nsomebody would know the answer).\n\nBy analogy to other SSL libraries I have worked with, my guess would\nbe that there are certain settings and callbacks available that would\ndetermine some of what it is doing.\n\nIn the javax.net.ssl package [1], for example, there are HostnameVerifier\nand TrustManager interfaces; client code can supply implementations of these\nthat embody its desired policies.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 4 Jun 2020 17:39:47 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 06/04/20 17:31, Andrew Dunstan wrote:\n>> Do we actually do any of this sort of thing? I confess my impression was\n>> this is all handled by the openssl libraries, we just hand over the\n>> certs and let openssl do its thing. Am I misinformed about that?\n\n> By analogy to other SSL libraries I have worked with, my guess would\n> be that there are certain settings and callbacks available that would\n> determine some of what it is doing.\n\nIt's possible that we could force openssl to validate cases it doesn't\naccept now. 
Whether we *should* deviate from its standard behavior is\na fairly debatable question though. I would not be inclined to do so\nunless we find that many other consumers of the library also do that.\nOverriding a library in its specific area of expertise seems like a\ngood way to get your fingers burnt.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jun 2020 18:03:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 06/04/20 18:03, Tom Lane wrote:\n> It's possible that we could force openssl to validate cases it doesn't\n> accept now. Whether we *should* deviate from its standard behavior is\n> a fairly debatable question though. I would not be inclined to do so\n> unless we find that many other consumers of the library also do that.\n> Overriding a library in its specific area of expertise seems like a\n> good way to get your fingers burnt.\n\nSure. It seems sensible to me to start by documenting /what/ it is doing\nnow, and to what extent that should be called \"its standard behavior\"\nversus \"the way libpq is calling it\", because even if nothing is to be\nchanged, there will be people who need to be able to find that information\nto understand what will and won't work.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 4 Jun 2020 18:09:31 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Sure. It seems sensible to me to start by documenting /what/ it is doing\n> now, and to what extent that should be called \"its standard behavior\"\n> versus \"the way libpq is calling it\", because even if nothing is to be\n> changed, there will be people who need to be able to find that information\n> to understand what will and won't work.\n\nFair enough. 
I'm certainly prepared to believe that there might be things\nwe're doing with that API that are not (anymore?) considered best\npractice. But I'd want to approach any changes as \"what is considered\nbest practice\", not \"how can we get this predetermined behavior\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jun 2020 18:14:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Wed, Jun 3, 2020 at 07:57:16PM -0400, Chapman Flack wrote:\n> For example, we might agree that it is safe to trust nothing but the\n> end-entity cert of my server itself. I made a server, here is its cert,\n> here is a root.crt file for libpq containing only this exact cert, I\n> want libpq to connect only ever to this server with this cert and nothing\n> else. It's a pain because I have to roll out new root.crt files to everybody\n> whenever the cert changes, but it would be hard to call it unsafe.\n\nI think you have hit on the reason CAs are used. By putting a valid\nroot certificate on the client, the server certificate can be changed\nwithout modifying the certificate on the client.\n\nWithout that ability, every client would need to be changed as soon as the\nserver certificate was changed. Allowing intermediate certificates to\nfunction as root certificates would fix that problem. When the\nnon-trusted CA changes your certificate, you are going to have the same\nproblem updating everything at once. This is why a root certificate,\nwhich never changes, is helpful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 12 Jun 2020 15:13:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?"
}, { "msg_contents": "On 06/12/20 15:13, Bruce Momjian wrote:\n> On Wed, Jun 3, 2020 at 07:57:16PM -0400, Chapman Flack wrote:\n>> here is a root.crt file for libpq containing only this exact cert, I\n>> want libpq to connect only ever to this server with this cert and nothing\n>> else. It's a pain because I have to roll out new root.crt files to everybody\n>> whenever the cert changes, but it would be hard to call it unsafe.\n> \n> I think you have hit on the reason CAs are used. By putting a valid\n> root certificate on the client, the server certificate can be changed\n> without modifying the certificate on the client.\n> \n> Without that ability, every client would need be changed as soon as the\n> server certificate was changed. Allowing intermediate certificates to\n> function as root certificates would fix that problem. When the\n> non-trusted CA changes your certificate, you are going to have the same\n> problem updating everything at once.\n\nThere seems to be a use of language here that works to make the picture\nmuddier rather than clearer.\n\nI mean the use of \"trusted\"/\"non-trusted\" as if they somehow mapped onto\n\"self-signed\"/\"not self-signed\" (unless you had some other mapping in mind\nthere).\n\nThat's downright ironic, as a certificate that is self-signed is one that\ncarries with it the absolute minimum grounds for trusting it: precisely\nzero. There can't be any certificate you have less reason to trust than\na self-signed one.\n\n(Ok, I take it back: a certificate you find on a revocation list /might/\nbe one you have less reason to trust.)\n\nIf a certificate, signed only by itself, ends up being relied on by\na TLS validator, that can only be because it is trusted for some other\nreason. Typically that reason is that it has been placed in a file that\ncan only be edited by the admin who decides what certs to trust. 
By\nediting it into that file, that responsible person has vouched for it,\nand /that/ is why the TLS client should trust it. The fact that it is\nself-signed, meaning only that nobody else ever vouched for it anywhere,\nhas nothing to do with why the TLS client should trust it.\n\nNow, suppose that same responsible person edits that same file, but this\ntime places in it a cert that has been signed by some other authority.\nThat is a cert that has been vouched for in two ways: by the admin\nplacing it in this file, and by some other PKI authority.\n\nAs far as the TLS client is concerned, the endorsement that counts is\nstill the local one, that it has been placed in the local file by the\nadmin responsible for deciding what this client should trust. The fact\nthat somebody else vouched for it too is no reason for this client\nto trust it, but is also no reason for this client not to trust it.\nIt is certainly in no way less to be trusted than a cert signed only\nby itself.\n\nThe key point is that as soon as you find the cert you are looking at\nin the local file curated by your admin, you know you've been cleared\nto trust what you're looking at.\n\nIf the cert you're looking at is not in that file, and it has no signer\nbut itself, you must at that point fail. Dead end. There can be no reason\nto trust it.\n\nOn the other hand, if you are looking a cert that has a signer, you have\nnot hit a dead end yet; you can climb that link and hope to find the signer\nin your curated file, and so on.\n\nYou need to climb until you find something that's in that curated file.\nEvery step that you climbed needs to have had a valid signature made\nwhile the signer was valid and not revoked, the signer needed to be allowed\nto sign certs, and to sign certs for the subject's domain. 
Those things\nneeded to be checked at every step.\n\nBut once you have followed those steps and arrived at a cert that\nwas placed in your trust store by the admin, it's unnecessary and\nlimiting to insist arbitrarily on other properties of the cert you\nfound there.\n\n> This is why a root certificate, which never changes, is helpful.\n\nBut who says it never changes?\n\nAs I mentioned earlier, my org has not always had its current procedures\non the issuance of certs, and not so many years ago did have its own\nin-house CA. I ran across a copy of that CA cert recently. It was generated\nin July of 2010 and is still good through next month. (I have not checked\nto see whether the former in-house CA made a revocation entry somewhere\nfor it before turning the lights out.)\n\nIf we were still using that CA cert, I would still have to roll out\nnew root.crt files next month. I'm sure at the time it was generated,\nten years seemed like 'almost never', and like a reasonable time in which\nto hope no adversary would crack a 2048 bit RSA key.\n\nOne certainly wouldn't plan on giving an important cert a lifetime\nmuch longer than that.\n\nSo the benefit of putting a signing cert in root.crt is not so much\nthat it will never expire and need updating, but that you can keep\nusing it to sign other certs for new services you stand up or update,\nand so you don't have to distribute new root.crt files every time you\ndo those things.\n\nFor that purpose, it matters not whether the signing cert you put there\nis self-signed or not.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 12 Jun 2020 16:17:56 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On 06/12/20 16:17, Chapman Flack wrote:\n> reason. Typically that reason is that it has been placed in a file that\n> can only be edited by the admin who decides what certs to trust. 
By\n> editing it into that file, that responsible person has vouched for it,\n> and /that/ is why the TLS client should trust it.\n\nIn order to wave my hands less, and map more easily onto the RFCs,\nI ought to start saying these things:\n\nRelying Party when I mean libpq in its role of validating a server\n\nTrust Anchor Store when I mean libpq's root.crt file\n\nTrust Anchor Manager when I mean me, putting a thing into root.crt\n\nTrust Anchor when I mean a thing I put into root.crt\n\nTarget Certificate the one the Relying Party wants to validate; in this\n case, the end-entity cert assigned to the pgsql server\n\nCertification Path Validation the algorithm in RFC 5280 sec. 6\n\nCertification Path Building the task described in RFC 4158\n\n\n\nRFC 5280 expresses the Path Validation algorithm as starting from\na Trust Anchor and proceeding toward the Target Certificate. In this\nthread so far I've been waving my hands in the other direction, but\nthat properly falls under Path Building.\n\nIf your Trust Anchor Store contains only one Trust Anchor, then the\nPath Validation algorithm is all you need. If there may be multiple\nTrust Anchors there, Path Building is the process of enumerating\npossible paths with a Trust Anchor at one end and the Target Certificate\nat the other, in the hope that Path Validation will succeed for at least\none of them.\n\nRFC 4158 isn't prescriptive: it doesn't give one way to build paths, but\na smörgåsbord of approaches. Amusingly, what it calls \"forward path\nbuilding\" is when you start from the Target Certificate and search toward\na Trust Anchor (same way I've been waving my hands, but reversed w.r.t.\nPath Validation), and what it calls \"reverse path building\" is when you\nstart with your Trust Anchors and search toward the Target Certificate\n(the same direction as Path Validation).\n\nRFC 4158 has an extensive gallery of ASCII art showing what the PKI\ncan end up looking like in some large enterprises. 
:O\n\n\n\nInitial inputs to Path Validation include a distinguished name and\na public key, optionally with some constraints, and those things come\nfrom a Trust Anchor. There is no requirement that a Trust Anchor be\na cert, signed, self-signed, or otherwise. The certificate format has\noften been used as a Trust Anchor container format because, hey, it\nholds a distinguished name and a public key and constraints, and if\nyou're writing a path validator, you already have a parser for it.\n\nOther things that happen to be present in a certificate-as-Trust-Anchor,\nsuch as an issuer name, key, and signature, are non-inputs to the path\nvalidation algorithm and have no effect on it.\n\nDisappointingly, common implementations have tended also to ignore\nconstraints held in a certificate-as-Trust-Anchor, even though they\ncorrectly apply constraints in other certs encountered along the path,\nand initial constraints are supposed to be inputs to the algorithm.\nIt is the point of RFC 5937 to fix that.\n\n'Constraints' in this context are limits such as \"this cert is for\nidentifying servers\" or \"this is a CA cert but only allowed to sign\ncerts for *.example.com\". The RFCs plainly anticipate that I might\nwant to put new constraints on a Trust Anchor, say to use the cert\nof a CA that has other customers, without implying a trust relationship\nwith their other customers.\n\nFor that purpose, the historical 'convenience' of using certificates\nas Trust Anchor containers is a genuine hindrance, because of course\ncertificates are cryptographically signed objects so editing their\nconstraints can't be easily done.[1]\n\nThe Trust Anchor Format (cited in [1] as \"in progress\" but since published\nas RFC 5914) therefore proposes a couple alternatives to the use of a\ncertificate as an ersatz Trust Anchor container.\n\nRFC 6024, Trust Anchor Management Requirements, sets out the considerations\nRFC 5914 and RFC 5934 were to address. 
(In classic see-I-did-it-perfectly\nfashion, it was published after they were.)\n\n\nSo, if libpq had a Trust Anchor Store that worked as described in these\nRFCs, my use case would be trivial to set up.\n\nI guess the next question is to what extent recent OpenSSL groks those,\nor how far back was the first version that did (these RFCs are from 2010),\nand what would be entailed in taking advantage of that support if it's\npresent.\n\nRegards,\n-Chap\n\n\n[1] https://dl.acm.org/doi/10.1145/1750389.1750403\n\n\n", "msg_date": "Fri, 12 Jun 2020 22:05:16 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: what can go in root.crt ?" }, { "msg_contents": "On Fri, Jun 12, 2020 at 04:17:56PM -0400, Chapman Flack wrote:\n> On 06/12/20 15:13, Bruce Momjian wrote:\n> > Without that ability, every client would need be changed as soon as the\n> > server certificate was changed. Allowing intermediate certificates to\n> > function as root certificates would fix that problem. When the\n> > non-trusted CA changes your certificate, you are going to have the same\n> > problem updating everything at once.\n> \n> There seems to be a use of language here that works to make the picture\n> muddier rather than clearer.\n> \n> I mean the use of \"trusted\"/\"non-trusted\" as if they somehow mapped onto\n> \"self-signed\"/\"not self-signed\" (unless you had some other mapping in mind\n> there).\n\nI meant you trust your local/intermediate CA, but not the root one. \n\n> That's downright ironic, as a certificate that is self-signed is one that\n> carries with it the absolute minimum grounds for trusting it: precisely\n> zero. There can't be any certificate you have less reason to trust than\n> a self-signed one.\n\nSelf-signed certs can certainly be trusted by the creator. 
Organizations\noften create self-signed certs that are trusted inside the organization.\n\n> As far as the TLS client is concerned, the endorsement that counts is\n> still the local one, that it has been placed in the local file by the\n> admin responsible for deciding what this client should trust. The fact\n> that somebody else vouched for it too is no reason for this client\n> to trust it, but is also no reason for this client not to trust it.\n> It is certainly in no way less to be trusted than a cert signed only\n> by itself.\n\nYes, I see your point in that the intermediate has more validity than a\nself-signed certificate, though that extra validity is useless in the\nuse-case we are describing.\n\n> But once you have followed those steps and arrived at a cert that\n> was placed in your trust store by the admin, it's unnecessary and\n> limiting to insist arbitrarily on other properties of the cert you\n> found there.\n\nWell, I can see the use-case for what you are saying, but I also think\nit could lead to misconfiguration. Right now, Postgres uses the client\nroot.cert, which can contain intermediates certs, and the\nserver-provided cert, which can also contain intermediates shipped to\nthe client, to try to check for a common root:\n\n\thttps://www.postgresql.org/docs/13/ssl-tcp.html\n\nWhat you are suggesting is that we take the server chain and client\nchain and claim success when _any_ cert matches between the two, not\njust the root one.\n\nI can see that working but I can also imagine people putting only\nintermediate certs in their root.cert and not realizing that they are\nmisconfigured since they might want to expire the intermediate someday\nor might want to trust a different intermediate from the same root. 
\nFrankly, we really didn't even document how to handle intermediate\ncertificates until 2018, which shows how obscure this security stuff can\nbe:\n\n\thttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=815f84aa16\n\nDo we want to allow such cases, or is the risk of misconfiguration too\nhigh? I am thinking it is the latter. I think we could have a libpq\nparameter that allowed it, but is there enough demand to add it since it\nwould be a user-visible API?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Sat, 13 Jun 2020 13:47:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: what can go in root.crt ?" } ]
[ { "msg_contents": "I recently noticed this in a customer log file:\n ERROR: could not read from hash-join temporary file: Success\n\nThe problem is we're reporting with %m when the problem is a partial\nread or write.\n\nI propose the attached patch to solve it: report \"wrote only X of X\nbytes\". This caused a lot of other trouble, the cause of which I've\nbeen unable to pinpoint as yet. But in the meantime, this is already a\nsmall improvement.\n\n-- \nÁlvaro Herrera", "msg_date": "Mon, 25 May 2020 19:02:20 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "hash join error improvement (old)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I recently noticed this in a customer log file:\n> ERROR: could not read from hash-join temporary file: Success\n> The problem is we're reporting with %m when the problem is a partial\n> read or write.\n> I propose the attached patch to solve it: report \"wrote only X of X\n> bytes\". This caused a lot of other trouble, the cause of which I've\n> been unable to pinpoint as yet. 
But in the meantime, this is already a\n> small improvement.\n\n+1 for the idea, but there's still one small problem with what you did\nhere: errcode_for_file_access() looks at errno, which we can presume\nwill not have a relevant value in the \"wrote only %d bytes\" paths.\n\nMost places that are taking pains to deal with this scenario\ndo something like\n\n errno = 0;\n if (write(fd, data, len, xlrec->offset) != len)\n {\n /* if write didn't set errno, assume problem is no disk space */\n if (errno == 0)\n errno = ENOSPC;\n ereport(ERROR,\n (errcode_for_file_access(),\n errmsg(\"could not write to file \\\"%s\\\": %m\", path)));\n }\n\nI don't mind if you want to extend that paradigm to also use \"wrote only\n%d bytes\" wording, but the important point is to get the SQLSTATE set on\nthe basis of ENOSPC rather than whatever random value errno will have\notherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 19:33:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "Hi Tom, thanks for looking.\n\nOn 2020-May-25, Tom Lane wrote:\n\n> I don't mind if you want to extend that paradigm to also use \"wrote only\n> %d bytes\" wording, but the important point is to get the SQLSTATE set on\n> the basis of ENOSPC rather than whatever random value errno will have\n> otherwise.\n\nHmm, right -- I was extending the partial read case to apply to a\npartial write, and we deal with those very differently. 
I changed the\nwrite case to use our standard approach.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 May 2020 09:55:50 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Hmm, right -- I was extending the partial read case to apply to a\n> partial write, and we deal with those very differently. I changed the\n> write case to use our standard approach.\n\nActually ... looking more closely, this proposed change in\nExecHashJoinSaveTuple flat out doesn't work, because it assumes that\nBufFileWrite reports errors the same way as write(), which is not the\ncase. In particular, written < 0 can't happen; moreover, you've\nremoved detection of a short write as opposed to a completely failed\nwrite.\n\nDigging further down, it looks like BufFileWrite calls BufFileDumpBuffer\nwhich calls FileWrite which takes pains to set errno correctly after a\nshort write --- so other than the lack of commentary about these\nfunctions' error-reporting API, I don't think there's any actual bug here.\nAre you sure you correctly identified the source of the bogus error\nreport?\n\nSimilarly, I'm afraid you introduced rather than removed problems\nin ExecHashJoinGetSavedTuple. 
BufFileRead doesn't use negative\nreturn values either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 May 2020 10:43:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "On 2020-May-26, Tom Lane wrote:\n\n> Digging further down, it looks like BufFileWrite calls BufFileDumpBuffer\n> which calls FileWrite which takes pains to set errno correctly after a\n> short write --- so other than the lack of commentary about these\n> functions' error-reporting API, I don't think there's any actual bug here.\n\nDoh, you're right, this patch is completely broken ... aside from\ncarelessly writing the wrong \"if\" test, my unfamiliarity with the stdio\nfread/fwrite interface is showing. I'll look more carefully.\n\n> Are you sure you correctly identified the source of the bogus error\n> report?\n\nNope. And I wish the bogus error report was all there was to it. The\nactual problem is a server crash.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 15:17:10 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "On 2020-May-26, Tom Lane wrote:\n\n> Are you sure you correctly identified the source of the bogus error\n> report?\n\nThis version's better. It doesn't touch the write side at all.\nOn the read side, only report a short read as such if errno's not set.\n\nThis error isn't frequently seen. 
This page\nhttps://blog.csdn.net/pg_hgdb/article/details/106279303\n(A Postgres fork; blames the error on the temp hash files being encrypted,\nsuggests to increase temp_buffers) is the only one I found.\n\nThere are more uses of BufFileRead that don't bother to distinguish\nthese two cases apart, though -- logtape.c, tuplestore.c,\ngistbuildbuffers.c all do the same.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 26 May 2020 18:27:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> There are more uses of BufFileRead that don't bother to distinguish\n> these two cases apart, though -- logtape.c, tuplestore.c,\n> gistbuildbuffers.c all do the same.\n\nYeah. I rather suspect that callers of BufFileRead/Write are mostly\nexpecting that those functions will throw an ereport() for any interesting\nerror condition. Maybe we should make it so, instead of piecemeal fixing\nthe callers?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 26 May 2020 21:30:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hash join error improvement (old)" }, { "msg_contents": "On Wed, May 27, 2020 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > There are more uses of BufFileRead that don't bother to distinguish\n> > these two cases apart, though -- logtape.c, tuplestore.c,\n> > gistbuildbuffers.c all do the same.\n>\n> Yeah. I rather suspect that callers of BufFileRead/Write are mostly\n> expecting that those functions will throw an ereport() for any interesting\n> error condition. Maybe we should make it so, instead of piecemeal fixing\n> the callers?\n\nYeah. 
I proposed that over here:\n\nhttps://www.postgresql.org/message-id/CA+hUKGK0w+GTs8aDvsKDVu7cFzSE5q+0NP_9kPSxg2NA1NeZew@mail.gmail.com\n\nBut I got stuck trying to figure out whether to back-patch (arguably\nyes: there are bugs here, but arguably no: the interfaces change).\n\n\n", "msg_date": "Wed, 27 May 2020 13:46:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: hash join error improvement (old)" } ]
[ { "msg_contents": "I am reposting this from a few months back (see below). I am not trying to\nbe a pest, just very motivated. I really think this feature has merit, and\nif not generally worthwhile, I'd be willing to pay someone to code it for\nme as I don't have strong enough C skills to modify the PostgreSQL code\nmyself. So anyone who might have such skills that would be interested,\nplease contact me: bertscalzo2@gmail.com.\n\nMySQL has a really useful feature they call the query rewrite cache. The\noptimizer checks incoming queries to see if a known better rewrite has been\nplaced within the query rewrite cache table. If one is found, the rewrite\nreplaces the incoming query before sending it to the execution engine. This\ncapability allows for one to fix poorly performing queries in 3rd party\napplication code that cannot be modified. For example, suppose a 3rd party\napplication contains the following inefficient query: SELECT COUNT(*) FROM\ntable WHERE SUBSTRING(column,1,3) = 'ABC'. One can place the following\nrewrite in the query rewrite cache: SELECT COUNT(*) FROM table WHERE column\nLIKE 'ABC%'. The original query cannot use an index while the rewrite can.\nSince it's a 3rd party application there is really no other way to make\nsuch an improvement. The existing rewrite rules in PostgreSQL are too\nnarrowly defined to permit such a substitution as the incoming query could\ninvolve many tables, so what's needed is a general \"if input SQL string\nmatches X then replace it with Y\". This check could be placed at the\nbeginning of the parser.c code. Suggest that the matching code should first\ncheck the string lengths and hash values before checking entire string\nmatch for efficiency.", "msg_date": "Mon, 25 May 2020 19:53:40 -0500", "msg_from": "Bert Scalzo <bertscalzo2@gmail.com>", "msg_from_op": true, "msg_subject": "New Feature Request" }, { "msg_contents": "On Mon, May 25, 2020 at 07:53:40PM -0500, Bert Scalzo wrote:\n> I am reposting this from a few months back (see below). I am not trying to be a\n> pest, just very motivated. 
I really think this feature has merit, and if not\n> generally worthwhile, I'd be willing to pay someone to code it for me as I\n> don't have strong enough C skills to modify the PostgreSQL code myself. So\n> anyone who might have such skills that would be interested, please contact me:\n> bertscalzo2@gmail.com.\n\nI think your best bet is to try getting someone to write a hook\nthat will do the replacement so that you don't need to modify too much\nof the Postgres core code. You will need to have the hook updated for\nnew versions of Postgres, which adds to the complexity.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Mon, 25 May 2020 21:21:26 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "On Mon, May 25, 2020 at 09:21:26PM -0400, Bruce Momjian wrote:\n> I think your best bet is to try getting someone to write a hook\n> that will do the replacement so that you don't need to modify too much\n> of the Postgres core code. You will need to have the hook updated for\n> new versions of Postgres, which adds to the complexity.\n\nCouldn't one just use the existing planner hook for that? The\npost-parse analysis hook is run before a query rewrite, but the\nplanner hook could manipulate a rewrite before planning the query.\n--\nMichael", "msg_date": "Tue, 26 May 2020 10:37:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "On Mon, May 25, 2020 at 09:21:26PM -0400, Bruce Momjian wrote:\n>On Mon, May 25, 2020 at 07:53:40PM -0500, Bert Scalzo wrote:\n>> I am reposting this from a few months back (see below). I am not trying to be a\n>> pest, just very motivated. 
I really think this feature has merit, and if not\n>> generally worthwhile, I'd be willing to pay someone to code it for me as I\n>> don't have strong enough C skills to modify the PostgreSQL code myself. So\n>> anyone who might have such skills that would be interested, please contact me:\n>> bertscalzo2@gmail.com.\n>\n>I think your best bet is to try getting someone to write a hook\n>that will do the replacement so that you don't need to modify too much\n>of the Postgres core code. You will need to have the hook updated for\n>new versions of Postgres, which adds to the complexity.\n>\n\nI don't think we have a hook to tweak the incoming SQL, though. We only\nhave post_parse_analyze_hook, i.e. post-parse, at which point we can't\njust rewrite the SQL directly. So I guess we'd need new hook.\n\nI do however wonder if an earlier hook is a good idea at all - matching\nthe SQL directly seems like a rather naive approach that'll break easily\ndue to formatting, upper/lower-case, subqueries, and many other things.\n From this standpoint it seems actually better to inspect and tweak the\nparse-analyze result. Not sure how to define the rules easily, though.\n\nAs for the complexity, I think hooks are fairly low-maintenance in\npractice, we tend not to modify them very often, and when we do it's\nusually just adding an argument etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 03:47:58 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 25, 2020 at 09:21:26PM -0400, Bruce Momjian wrote:\n>> I think your best bet is to try getting someone to write a hook\n>> that will do the replacement so that you don't need to modify too much\n>> of the Postgres core code. 
You will need to have the hook updated for\n>> new versions of Postgres, which adds to the complexity.\n\n> Couldn't one just use the existing planner hook for that?\n\nYeah, probably. One could also make a case for creating a similar\nhook for the rewriter so that a new parsetree could be substituted\nbefore the rewrite step ... but it's really not clear whether it's\nbetter to try to match the parsetree before or after rewriting.\nI'd be inclined to just make use of the existing hook until there's\na pretty solid argument why we need another one.\n\nNote to OP: the lack of response to your previous post seems to me\nto indicate that there's little enthusiasm for having such a feature\nin core Postgres. Thus, everybody is focusing on what sort of hooks\nwould be needed in-core to let an extension implement the feature.\nGetting someone to write such an extension is left as an exercise\nfor the reader.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 22:02:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "\n\nOn 26.05.2020 04:47, Tomas Vondra wrote:\n> On Mon, May 25, 2020 at 09:21:26PM -0400, Bruce Momjian wrote:\n>> On Mon, May 25, 2020 at 07:53:40PM -0500, Bert Scalzo wrote:\n>>> I am reposting this from a few months back (see below). I am not \n>>> trying to be a\n>>> pest, just very motivated. I really think this feature has merit, \n>>> and if not\n>>> generally worthwhile, I'd be willing to pay someone to code it for \n>>> me as I\n>>> don't have strong enough C skills to modify the PostgreSQL code \n>>> myself. So\n>>> anyone who might have such skills that would be interested, please \n>>> contact me:\n>>> bertscalzo2@gmail.com.\n>>\n>> I think your best bet is to try getting someone to write a hook\n>> that will do the replacement so that you don't need to modify too much\n>> of the Postgres core code.  
You will need to have the hook updated for\n>> new versions of Postgres, which adds to the complexity.\n>>\n>\n> I don't think we have a hook to tweak the incoming SQL, though. We only\n> have post_parse_analyze_hook, i.e. post-parse, at which point we can't\n> just rewrite the SQL directly. So I guess we'd need new hook.\n\nVOPS extension performs query substitution (replace query to the \noriginal table with query to projection) using post_parse_analyze_hook\nand SPI. So I do not understand why some extra hook is needed.\n\n> I do however wonder if an earlier hook is a good idea at all - matching\n> the SQL directly seems like a rather naive approach that'll break easily\n> due to formatting, upper/lower-case, subqueries, and many other things.\n> From this standpoint it seems actually better to inspect and tweak the\n> parse-analyze result. Not sure how to define the rules easily, though.\n>\n\nIn some cases we need to know the exact parameter value (as in the case \nSUBSTRING(column,1,3) = 'ABC').\nSometimes the concrete value of a parameter is not important...\nAlso it is not clear whether such a pattern-matching transformation should \nbe used only for the whole query or for any of its subtrees?\n\n> As for the complexity, I think hooks are fairly low-maintenance in\n> practice, we tend not to modify them very often, and when we do it's\n> usually just adding an argument etc.\n\nI am not sure if the proposed approach can really be useful in many cases.\nBad queries tend to be generated by various ORM tools.\nBut they rarely generate exactly the same query. 
So defining matching \nrules for the whole query tree will rarely work.\n\nIt seems to be more useful to have extensible SQL optimizer, which \nallows to add user defined rules (may as transformation patterns).\nThis is how it is done in GCC code optimizer.\nDefinitely writing such rules is very non-trivial task.\nVery few developers will be able to add their own meaningful rules.\nBut in any case it significantly simplify improvement of optimizer, \nalthough most of problems with choosing optimal plan are\ncaused by wrong statistic and rule-based optimization can not help here.\n\n\n\n", "msg_date": "Tue, 26 May 2020 10:17:24 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "I greatly appreciate all the replies. Thanks. I also fully understand and\nappreciate all the points made - especially that this idea may not have\ngeneral value or acceptance as worthwhile. No argument from me. Let me\nexplain why I am looking to do this to see if that changes any opinions. I\nhave written a product called QIKR for MySQL that leverages the MySQL query\nrewrite feature and places a knowledge expert of SQL rewrite rules as a\npreprocessor to the MySQL optimizer. I have defined an extensive set of\nrules based on my 30 years of doing code reviews for app developers who\nwrite terrible SQL. Right now QIKR does 100% syntactic analysis (hoping to\ndo semantic analysis in a later version). For MySQL (which has a less\nmature and less robust optimizer) the performance gains are huge - in\nexcess of 10X. So far QIKR shows about a 2.5X improvement over the\nPostgreSQL optimizer when fed bad SQL. I am not saying the\nPostgreSQL optimizer does a poor job, but rather that QIKR was designed for\n\"garbage in, not garbage out\" - so QIKR fixes all the stupid mistakes that\npeople make which can confuse or even cripple an optimizer. 
Hence why I am\nlooking for this hook - and have come to the experts for help. I have two\nvery large PostgreSQL partner organizations who have asked me to make\nQIKR work for PostgreSQL as it does for MySQL. Again, I am willing to pay\nfor this hook since it's a special request for a special purpose and not\ngenerally worthwhile in many people's opinions - which I cannot argue with.\n\nOn Tue, May 26, 2020 at 2:17 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n>\n>\n> On 26.05.2020 04:47, Tomas Vondra wrote:\n> > On Mon, May 25, 2020 at 09:21:26PM -0400, Bruce Momjian wrote:\n> >> On Mon, May 25, 2020 at 07:53:40PM -0500, Bert Scalzo wrote:\n> >>> I am reposting this from a few months back (see below). I am not\n> >>> trying to be a\n> >>> pest, just very motivated. I really think this feature has merit,\n> >>> and if not\n> >>> generally worthwhile, I'd be willing to pay someone to code it for\n> >>> me as I\n> >>> don't have strong enough C skills to modify the PostgreSQL code\n> >>> myself. So\n> >>> anyone who might have such skills that would be interested, please\n> >>> contact me:\n> >>> bertscalzo2@gmail.com.\n> >>\n> >> I think your best bet is to try getting someone to write a hook\n> >> that will do the replacement so that you don't need to modify too much\n> >> of the Postgres core code. You will need to have the hook updated for\n> >> new versions of Postgres, which adds to the complexity.\n> >>\n> >\n> > I don't think we have a hook to tweak the incoming SQL, though. We only\n> > have post_parse_analyze_hook, i.e. post-parse, at which point we can't\n> > just rewrite the SQL directly. So I guess we'd need new hook.\n>\n> VOPS extension performs query substitution (replace query to the\n> original table with query to projection) using post_parse_analysis_hook\n> and SPI. 
So I do not understand why  some extra hook is needed.\n>\n> >\n> > I do however wonder if an earlier hook is a good idea at all - matching\n> > the SQL directly seems like a rather naive approach that'll break easily\n> > due to formatting, upper/lower-case, subqueries, and many other things.\n> > From this standpoint it seems actually better to inspect and tweak the\n> > parse-analyze result. Not sure how to define the rules easily, though.\n> >\n>\n> In some cases we need to know exact parameter value (as in case\n> SUBSTRING(column,1,3) = 'ABC').\n> Sometime concrete value of parameter is not important...\n> Also it is not clear where such pattern-matching transformation should\n> be used only for the whole query or for any its subtrees?\n>\n> > As for the complexity, I think hooks are fairly low-maintenance in\n> > practice, we tend not to modify them very often, and when we do it's\n> > usually just adding an argument etc.\n>\n> I am not sure if the proposed approach can really be useful in many cases.\n> Bad queries are used to be generated by various ORM tools.\n> But them rarely generate exactly the same query. So defining matching\n> rules for the whole query tree will rarely work.\n>\n> It seems to be more useful to have extensible SQL optimizer, which\n> allows to add user defined rules (may as transformation patterns).\n> This is how it is done in GCC code optimizer.\n> Definitely writing such rules is very non-trivial task.\n> Very few developers will be able to add their own meaningful rules.\n> But in any case it significantly simplify improvement of optimizer,\n> although most of problems with choosing optimal plan are\n> caused by wrong statistic and rue-based optimization can not help here.\n>\n>\n>\n>\n", "msg_date": "Tue, 26 May 2020 05:10:44 -0500", "msg_from": "Bert Scalzo <bertscalzo2@gmail.com>", "msg_from_op": true, "msg_subject": "Re: New Feature Request" }, { "msg_contents": "On 2020-05-26 12:10, Bert Scalzo wrote:\n> So far QIKR shows about a \n> 2.5X improvement over the PostgreSQL optimizer when fed bad SQL. I am \n> not saying the PotsgrSQL optimizer does a poor job, but rather that \n> QIKR was designed for \"garbage in, not garbage out\" - so QIKR fixes all \n> the stupid mistakes that people make which can confuse or even cripple \n> an optimizer. Hence why I am looking for this hook - and have come to \n> the experts for help. I have two very large PostgreSQL partner \n> organizations who have asked me to make QIKR work for PostgreSQL as it \n> does for MySQL. Again, I am willing to pay for this hook since it's a \n> special request for a special purpose and not generally worthwhile in \n> many people's opinions - which I cannot argue with.\n\nYour project seems entirely legitimate as a third-party optional plugin.\n\nI think the post_parse_analyze_hook would work for this. 
I suggest you \nstart with it and see how far you can take it.\n\nIt may turn out that you need a hook after the rewriter, but that should \nbe a small change and shouldn't affect your own code very much, since \nyou'd get handed the same data structure in each case.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 26 May 2020 12:54:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: New Feature Request" } ]
[ { "msg_contents": "Hello,\n\nI think it is more useful if the name list of the\npg_shmem_allocations view is listed in one page.\n\nFor example,\n* Wal Sender Ctl: walsender-related shared memory\n* AutoVacuum Data: autovacuum-related shared memory\n* PROCLOCK hash: shared memory for hash table for PROCLOCK structs\n\nWhy don't you document pg_shmem_allocations view's name list?\n\nRegards,\n\n-- \nMasahiro Ikeda\n\n\n", "msg_date": "Tue, 26 May 2020 10:16:19 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Why don't you to document pg_shmem_allocations view's name list?" }, { "msg_contents": "On Tue, May 26, 2020 at 10:16:19AM +0900, Masahiro Ikeda wrote:\n> I think it is more useful if the name list of the\n> pg_shmem_allocations view is listed in one page.\n> \n> Why don't you document pg_shmem_allocations view's name list?\n\nDocumenting that would create a dependency between the docs and the\nbackend code, with very high chances of missing doc updates each time\nnew code that does an extra shared memory allocation is added. I\nthink that there would be a point in bringing more sanity and\nconsistency to the names of the shared memory sections passed down to\nShmemInitStruct() and SimpleLruInit() though.\n--\nMichael", "msg_date": "Tue, 26 May 2020 11:08:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Why don't you to document pg_shmem_allocations view's name list?" 
}, { "msg_contents": "On 2020-05-26 11:08, Michael Paquier wrote:\n> On Tue, May 26, 2020 at 10:16:19AM +0900, Masahiro Ikeda wrote:\n>> I think it is more useful if the name list of the\n>> pg_shmem_allocations view is listed in one page.\n>> \n>> Why don't you document pg_shmem_allocations view's name list?\n> \n> Documenting that would create a dependency between the docs and the\n> backend code, with very high chances of missing doc updates each time\n> new code that does an extra shared memory allocation is added. I\n> think that there would be a point in bringing more sanity and\n> consistency to the names of the shared memory sections passed down to\n> ShmemInitStruct() and SimpleLruInit() though.\n> --\n> Michael\n\nThanks for replying.\n\nI understood the reason why the name list is not documented.\nI agree with your opinion.\n\nRegards,\n\n-- \nMasahiro Ikeda\n\n\n", "msg_date": "Wed, 27 May 2020 13:14:51 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Why don't you to document pg_shmem_allocations view's name list?" } ]
[ { "msg_contents": "Consider the below example:\n\ncreate table j1(i int, im5 int, im100 int, im1000 int);\ninsert into j1 select i, i%5, i%100, i%1000 from generate_series(1,\n10000000)i;\ncreate index j1_i_im5 on j1(i, im5);\ncreate index j1_i_im100 on j1(i, im100);\nanalyze j1;\nexplain select * from j1 where i = 100 and im5 = 5;\n\nWe may get the plan like this:\n\ndemo=# explain select * from j1 where i = 100 and im5 = 1;\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using j1_i_im100 on j1 (cost=0.43..8.46 rows=1 width=16)\n Index Cond: (i = 100)\n Filter: (im5 = 1)\n(3 rows)\n\nAt this case, optimizer can estimate there are only 1 row to return, so both\nindexes have same cost, which one will be choose is un-controlable. This is\nfine for above query based on the estimation is accurate. However estimation\ncan't be always accurate in real life. Some inaccurate estimation can cause\nan\nwrong index choose. As an experience, j1_i_im5 index should always be choose\nfor above query.\n\nThis one line change is the best method I can think.\n\n- cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;\n+ cpu_per_tuple = cpu_tuple_cost + (qpqual_cost.per_tuple * 1.001);\n\nWe make the qual cost on index filter is slightly higher than qual cost in\nIndex\nCond. This will also good for QUAL (i=x AND m=y AND n=z). Index are (i, m,\nother_col1) and (i, other_col1, other_col2). But this change also\nchanged the relation between the qual cost on index scan and qual cost on\nseq\nscan. However I think that impact is so tiny that I think we can ignore\nthat (we\ncan choose a better factor between 1 and 1.001).\n\nEven the root cause of this issue comes from an inaccurate estimation. 
but I\ndon't think that is an issue easy/possible to fix, however I'm open for\nsuggestion on that as well.\n\nAny suggestions?\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 26 May 2020 16:22:01 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Make the qual cost on index Filter slightly higher than qual cost on\n index Cond." }, { "msg_contents": "On Tue, May 26, 2020 at 1:52 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> Consider the below example:\n>\n> create table j1(i int, im5 int,  im100 int, im1000 int);\n> insert into j1 select i, i%5, i%100, i%1000 from generate_series(1, 10000000)i;\n> create index j1_i_im5 on j1(i, im5);\n> create index j1_i_im100 on j1(i, im100);\n> analyze j1;\n> explain select * from j1 where i = 100 and im5 = 5;\n>\n> We may get the plan like this:\n>\n> demo=# explain select  * from  j1 where i = 100 and im5 = 1;\n>                               QUERY PLAN\n> ----------------------------------------------------------------------\n>  Index Scan using j1_i_im100 on j1  (cost=0.43..8.46 rows=1 width=16)\n>    Index Cond: (i = 100)\n>    Filter: (im5 = 1)\n> (3 rows)\n>\n> At this case, optimizer can estimate there are only 1 row to return, so both\n> indexes have same cost, which one will be choose is un-controlable. This is\n> fine for above query based on the estimation is accurate. However estimation\n> can't be always accurate in real life. Some inaccurate estimation can cause an\n> wrong index choose. As an experience, j1_i_im5 index should always be choose\n> for above query.\n\nI think we need a better example where choosing an index makes a difference.\n\nAn index can be chosen just because it's path was created before some\nother more appropriate index but the cost difference was within fuzzy\nlimit. 
Purely based on the order in which index paths are created.\n\n>\n> This one line change is the best method I can think.\n>\n> - cpu_per_tuple = cpu_tuple_cost + qpqual_cost.per_tuple;\n> + cpu_per_tuple = cpu_tuple_cost + (qpqual_cost.per_tuple * 1.001);\n>\n> We make the qual cost on index filter is slightly higher than qual cost in Index\n> Cond. This will also good for QUAL (i=x AND m=y AND n=z). Index are (i, m,\n> other_col1) and (i, other_col1, other_col2). But this change also\n> changed the relation between the qual cost on index scan and qual cost on seq\n> scan. However I think that impact is so tiny that I think we can ignore that (we\n> can choose a better factor between 1 and 1.001).\n>\n> Even the root cause of this issue comes from an inaccurate estimation. but I\n> don't think that is an issue easy/possible to fix, however I'm open for\n> suggestion on that as well.\n>\n> Any suggestions?\n>\n> --\n> Best Regards\n> Andy Fan\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 26 May 2020 19:29:39 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." 
}, { "msg_contents": "On Tue, May 26, 2020 at 9:59 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Tue, May 26, 2020 at 1:52 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >\n> > Consider the below example:\n> >\n> > create table j1(i int, im5 int, im100 int, im1000 int);\n> > insert into j1 select i, i%5, i%100, i%1000 from generate_series(1,\n> 10000000)i;\n> > create index j1_i_im5 on j1(i, im5);\n> > create index j1_i_im100 on j1(i, im100);\n> > analyze j1;\n> > explain select * from j1 where i = 100 and im5 = 5;\n> >\n> > We may get the plan like this:\n> >\n> > demo=# explain select * from j1 where i = 100 and im5 = 1;\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > Index Scan using j1_i_im100 on j1 (cost=0.43..8.46 rows=1 width=16)\n> > Index Cond: (i = 100)\n> > Filter: (im5 = 1)\n> > (3 rows)\n> >\n> > At this case, optimizer can estimate there are only 1 row to return, so\n> both\n> > indexes have same cost, which one will be choose is un-controlable. This\n> is\n> > fine for above query based on the estimation is accurate. However\n> estimation\n> > can't be always accurate in real life. Some inaccurate estimation can\n> cause an\n> > wrong index choose. As an experience, j1_i_im5 index should always be\n> choose\n> > for above query.\n>\n> I think we need a better example where choosing an index makes a\n> difference.\n>\n> An index can be chosen just because it's path was created before some\n> other more appropriate index but the cost difference was within fuzzy\n> limit. 
Purely based on the order in which index paths are created.\n>\n\nHere is an further example with the above case:\n\ndemo=# insert into j1 select 1, 1, 1, 1 from generate_series(1, 100000)i;\nINSERT 0 100000\n\nWith the current implementation, it is\n\ndemo=# explain analyze select * from j1 where i = 1 and im5 = 2;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Index Scan using j1_i_im100 on j1 (cost=0.43..8.44 rows=1 width=16)\n(actual time=63.431..63.431 rows=0 loops=1)\n Index Cond: (i = 1)\n Filter: (im5 = 2)\n Rows Removed by Filter: 100001\n Planning Time: 0.183 ms\n Execution Time: 63.484 ms\n(6 rows)\n\nWith the patch above, it can always choose a correct index even the\nstatistics is inaccurate:\n\ndemo=# explain analyze select * from j1 where i = 1 and im5 = 2;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Index Scan using j1_i_im5 on j1 (cost=0.43..8.46 rows=1 width=16) (actual\ntime=0.030..0.030 rows=0 loops=1)\n Index Cond: ((i = 1) AND (im5 = 2))\n Planning Time: 1.087 ms\n Execution Time: 0.077 ms\n(4 rows)\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Wed, 27 May 2020 06:49:40 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "You can use the attached sql to reproduce this issue, but I'm not sure you\ncan\nget the above result at the first time that is because when optimizer think\nthe\n2 index scan have the same cost, it will choose the first one it found, the\norder\ndepends on RelationGetIndexList. If so, you may try drop and create\nj1_i_im5 index.\n\nThe sense behind this patch is we still use the cost based optimizer, just\nwhen we\nwe find out the 2 index scans have the same cost, we prefer to use the\nindex which\nhave more qual filter on Index Cond. This is implemented by adjust the\nqual cost\non index filter slightly higher.\n\nThe issue here is not so uncommon in real life. consider a log based\napplication, which\nhas serval indexes on with create_date as a leading column, when the\ncreate_date\nfirst load the for the given day but before the new statistics is gathered,\nthat probably run\ninto this issue.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 27 May 2020 07:12:52 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." 
If so, you may try drop and create\n> j1_i_im5 index.\n>\n> The sense behind this patch is we still use the cost based optimizer, just\n> when we\n> we find out the 2 index scans have the same cost, we prefer to use the\n> index which\n> have more qual filter on Index Cond. This is implemented by adjust the\n> qual cost\n> on index filter slightly higher.\n>\n\nThanks for the example and the explanation.\n\nThe execution time difference in your example is pretty high to account for\nexecuting the filter on so many rows. My guess is this has to do with the\nheap access. For applying the filter the entire row needs to be fetched\nfrom the heap. So we should investigate this case from that angle. Another\nguess I have is the statistics is not correct and hence the cost is wrong.\n\n-- \nBest Wishes,\nAshutosh\n\nOn Wed, 27 May 2020 at 04:43, Andy Fan <zhihui.fan1213@gmail.com> wrote:You can use the attached sql to reproduce this issue, but I'm not sure you canget the above result at the first time that is because when optimizer think the 2 index scan have the same cost, it will choose the first one it found, the orderdepends on RelationGetIndexList.  If so,  you may try drop and create j1_i_im5 index.The sense behind this patch is we still use the cost based optimizer, just when we we find out the 2 index scans have the same cost,  we prefer to use the index whichhave more qual filter on Index Cond.  This is implemented by adjust the qual cost on index filter slightly higher. Thanks for the example and the explanation.The execution time difference in your example is pretty high to account for executing the filter on so many rows. My guess is this has to do with the heap access. For applying the filter the entire row needs to be fetched from the heap. So we should investigate this case from that angle. 
Another guess I have is the statistics is not correct and hence the cost is wrong.-- Best Wishes,Ashutosh", "msg_date": "Wed, 27 May 2020 17:30:55 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "On Wed, May 27, 2020 at 8:01 PM Ashutosh Bapat <\nashutosh.bapat@2ndquadrant.com> wrote:\n\n>\n>\n> On Wed, 27 May 2020 at 04:43, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> You can use the attached sql to reproduce this issue, but I'm not sure\n>> you can\n>> get the above result at the first time that is because when optimizer\n>> think the\n>> 2 index scan have the same cost, it will choose the first one it found,\n>> the order\n>> depends on RelationGetIndexList. If so, you may try drop and create\n>> j1_i_im5 index.\n>>\n>> The sense behind this patch is we still use the cost based optimizer,\n>> just when we\n>> we find out the 2 index scans have the same cost, we prefer to use the\n>> index which\n>> have more qual filter on Index Cond. This is implemented by adjust the\n>> qual cost\n>> on index filter slightly higher.\n>>\n>\n> Thanks for the example and the explanation.\n>\n> The execution time difference in your example is pretty high to account\n> for executing the filter on so many rows. My guess is this has to do with\n> the heap access. For applying the filter the entire row needs to be fetched\n> from the heap. So we should investigate this case from that angle. Another\n> guess I have is the statistics is not correct and hence the cost is wrong.\n>\n>\nI believe this is a statistics issue and then the cost is wrong. More\ncharacters of this\nissue are: 1). If a data is out of range in the old statistics,\noptimizer will given an 1 row\nassumption. 2). 
based on the 1 row assumption, for query\n\"col1=out_of_range_val AND\ncol2 = any_value\" Index (col1, col2) and (col1, col3) will have exactly\nsame cost for current\ncost model. 3). If the statistics was wrong, (col1, col3) maybe a very bad\nplan as shown\nabove, but index (col1, col2) should always better/no worse than (col1,\ncol3) in any case.\n4). To expand the rule, for query \"col1 = out_of_range_val AND col2 =\nany_value AND col3 = any_val\",\nindex are (col1, col2, col_m) and (col1, col_m, col_n), the former index\nwill aways has better/no worse\nthan the later one. 5). an statistics issue like this is not uncommon,\nfor example\nan log based application, creation_date is very easy to out of range in\nstatistics.\n\nso we need to optimize the cost model for such case, the method is the\npatch I mentioned above.\nI can't have a solid data to prove oracle did something similar, but based\non the talk with my\ncustomer, oracle is likely did something like this.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Wed, 27 May 2020 21:58:04 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond."
}, { "msg_contents": "On Wed, May 27, 2020 at 09:58:04PM +0800, Andy Fan wrote:\n>On Wed, May 27, 2020 at 8:01 PM Ashutosh Bapat <\n>ashutosh.bapat@2ndquadrant.com> wrote:\n>\n>>\n>>\n>> On Wed, 27 May 2020 at 04:43, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>>> You can use the attached sql to reproduce this issue, but I'm not sure\n>>> you can\n>>> get the above result at the first time that is because when optimizer\n>>> think the\n>>> 2 index scan have the same cost, it will choose the first one it found,\n>>> the order\n>>> depends on RelationGetIndexList. If so, you may try drop and create\n>>> j1_i_im5 index.\n>>>\n>>> The sense behind this patch is we still use the cost based optimizer,\n>>> just when we\n>>> we find out the 2 index scans have the same cost, we prefer to use the\n>>> index which\n>>> have more qual filter on Index Cond. This is implemented by adjust the\n>>> qual cost\n>>> on index filter slightly higher.\n>>>\n>>\n>> Thanks for the example and the explanation.\n>>\n>> The execution time difference in your example is pretty high to account\n>> for executing the filter on so many rows. My guess is this has to do with\n>> the heap access. For applying the filter the entire row needs to be fetched\n>> from the heap. So we should investigate this case from that angle. Another\n>> guess I have is the statistics is not correct and hence the cost is wrong.\n>>\n>>\n>I believe this is a statistics issue and then the cost is wrong.\n\nI think you're both right. Most of the time probably comes from the\nheap accesses, but the dabatabase has no chance to account for that\nas there was no analyze after inseting the data causing that. So it's\nvery difficult to account for this when computing the cost.\n\n>More characters of this issue are: 1). If a data is out of range in\n>the old statistics, optimizer will given an 1 row assumption. 
2).\n>based on the 1 row assumption, for query \"col1=out_of_range_val AND\n>col2 = any_value\" Index (col1, col2) and (col1, col3) will have\n>exactly same cost for current cost model. 3). If the statistics was\n>wrong, (col1, col3) maybe a very bad plan as shown above, but index\n>(col1, col2) should always better/no worse than (col1, col3) in any\n>case. 4). To expand the rule, for query \"col1 = out_of_range_val AND\n>col2 = any_value AND col3 = any_val\", index are (col1, col2, col_m) and\n>(col1, col_m, col_n), the former index will aways has better/no worse\n>than the later one. 5). an statistics issue like this is not\n>uncommon, for example an log based application, creation_date is very\n>easy to out of range in statistics.\n>\n\nRight. There are many ways to cause issues like this.\n\n>so we need to optimize the cost model for such case, the method is the\n>patch I mentioned above.\n\nMaking the planner more robust w.r.t. to estimation errors is nice, but\nI wouldn't go as far saying we should optimize for such cases. The stats\ncan be arbitrarily off, so should we expect the error to be 10%, 100% or\n1000000%? We'd probably end up with plans that handle worst cases well,\nbut the average performance would end up being way worse :-(\n\nAnyway, I kinda doubt making the conditions 1.001 more expensive is a\nway to make the planning more robust. I'm pretty sure we could construct\nexamples in the opposite direction, in which case this change make it\nmore likely we use the wrong index.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 02:16:02 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual\n cost on index Cond." 
}, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Wed, May 27, 2020 at 09:58:04PM +0800, Andy Fan wrote:\n>> so we need to optimize the cost model for such case, the method is the\n>> patch I mentioned above.\n\n> Making the planner more robust w.r.t. to estimation errors is nice, but\n> I wouldn't go as far saying we should optimize for such cases.\n\nYeah, it's a serious mistake to try to \"optimize\" for cases where we have\nno data or wrong data. By definition, we don't know what we're doing,\nso who's to say whether we've made it better or worse? And the possible\nside effects on cases where we do have good data are not to be ignored.\n\n> Anyway, I kinda doubt making the conditions 1.001 more expensive is a\n> way to make the planning more robust. I'm pretty sure we could construct\n> examples in the opposite direction, in which case this change make it\n> more likely we use the wrong index.\n\nThe other serious error we could be making here is to change things on\nthe basis of just a few examples. You really need a pretty wide range\nof test cases to be sure that you're not making things worse, any time\nyou're twiddling basic parameters like these.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 21:04:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "> >so we need to optimize the cost model for such case, the method is the\n> >patch I mentioned above.\n>\n> Making the planner more robust w.r.t. to estimation errors is nice, but\n> I wouldn't go as far saying we should optimize for such cases. The stats\n> can be arbitrarily off, so should we expect the error to be 10%, 100% or\n> 1000000%?\n\n\nI don't think my patch relay on anything like that. 
My patch doesn't fix\nthe\nstatistics issue, just adding the extra cost on qual cost on Index Filter\npart.\nAssume the query pattern are where col1= X and col2 = Y. The impacts are :\n1). Make the cost of (col1, other_column) is higher than (col1, col2)\n2). The relationship between seqscan and index scan on index (col1,\nother_column)\nis changed, (this is something I don't want). However my cost difference\nbetween\nindex scan & seq scan usually very huge, so the change above should has\nnearly no impact on that choice. 3). Make the cost higher index scan for\nIndex (col1) only. Overall I think nothing will make thing worse.\n\n\n> We'd probably end up with plans that handle worst cases well,\n> but the average performance would end up being way worse :-(\n>\n>\nThat's possible, that's why I hope to get some feedback on that. Actually\nI\ncan't think out such case. can you have anything like that in mind?\n\n----\nI'm feeling that (qpqual_cost.per_tuple * 1.001) is not good enough since\nuser\nmay have some where expensive_func(col1) = X. we may change it\ncpu_tuple_cost + qpqual_cost.per_tuple + (0.0001) * list_lenght(qpquals).\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Fri, 29 May 2020 09:10:40 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "Thanks all of you for your feedback.\n\nOn Fri, May 29, 2020 at 9:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Wed, May 27, 2020 at 09:58:04PM +0800, Andy Fan wrote:\n> >> so we need to optimize the cost model for such case, the method is the\n> >> patch I mentioned above.\n>\n> > Making the planner more robust w.r.t. to estimation errors is nice, but\n> > I wouldn't go as far saying we should optimize for such cases.\n>\n> Yeah, it's a serious mistake to try to \"optimize\" for cases where we have\n> no data or wrong data. By definition, we don't know what we're doing,\n> so who's to say whether we've made it better or worse?\n\n\nActually I think it is a more robust way.. the patch can't fix think all\nthe impact\nof bad statistics(That is impossible I think), but it will make some\nsimple things\nbetter and make others no worse. 
By definition I think I know what we are\ndoing\nhere, like what I replied to Tomas above. But it is possible my think is\nwrong.\n\n\n> The other serious error we could be making here is to change things on\n> the basis of just a few examples. You really need a pretty wide range\n> of test cases to be sure that you're not making things worse, any time\n> you're twiddling basic parameters like these.\n>\n>\nI will try more thing with this direction, thanks for suggestion.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Fri, 29 May 2020 09:20:37 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "On Fri, May 29, 2020 at 6:40 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>>\n>> >so we need to optimize the cost model for such case, the method is the\n>> >patch I mentioned above.\n>>\n>> Making the planner more robust w.r.t. to estimation errors is nice, but\n>> I wouldn't go as far saying we should optimize for such cases. The stats\n>> can be arbitrarily off, so should we expect the error to be 10%, 100% or\n>> 1000000%?\n>\n>\n> I don't think my patch relay on anything like that. My patch doesn't fix the\n> statistics issue, just adding the extra cost on qual cost on Index Filter part.\n> Assume the query pattern are where col1= X and col2 = Y. The impacts are :\n> 1). Make the cost of (col1, other_column) is higher than (col1, col2)\n> 2). The relationship between seqscan and index scan on index (col1, other_column)\n> is changed, (this is something I don't want). However my cost difference between\n> index scan & seq scan usually very huge, so the change above should has\n> nearly no impact on that choice. 3). Make the cost higher index scan for\n> Index (col1) only. Overall I think nothing will make thing worse.\n\nWhen the statistics is almost correct (or better than what you have in\nyour example), the index which does not cover all the columns in all\nthe conditions will be expensive anyways because of extra cost to\naccess heap for the extra rows not filtered by that index. An index\ncovering all the conditions would have its scan cost cheaper since\nthere will be fewer rows and hence fewer heap page accesses because of\nmore filtering. 
So I don't think we need any change in the current\ncosting model.\n\n>\n>>\n>> We'd probably end up with plans that handle worst cases well,\n>> but the average performance would end up being way worse :-(\n>>\n>\n> That's possible, that's why I hope to get some feedback on that. Actually I\n> can't think out such case. can you have anything like that in mind?\n>\n> ----\n> I'm feeling that (qpqual_cost.per_tuple * 1.001) is not good enough since user\n> may have some where expensive_func(col1) = X. we may change it\n> cpu_tuple_cost + qpqual_cost.per_tuple + (0.0001) * list_lenght(qpquals).\n>\n> --\n> Best Regards\n> Andy Fan\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 29 May 2020 19:07:03 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." }, { "msg_contents": "On Fri, May 29, 2020 at 9:37 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Fri, May 29, 2020 at 6:40 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >\n> >>\n> >> >so we need to optimize the cost model for such case, the method is the\n> >> >patch I mentioned above.\n> >>\n> >> Making the planner more robust w.r.t. to estimation errors is nice, but\n> >> I wouldn't go as far saying we should optimize for such cases. The stats\n> >> can be arbitrarily off, so should we expect the error to be 10%, 100% or\n> >> 1000000%?\n> >\n> >\n> > I don't think my patch relay on anything like that. My patch doesn't\n> fix the\n> > statistics issue, just adding the extra cost on qual cost on Index\n> Filter part.\n> > Assume the query pattern are where col1= X and col2 = Y. The impacts are\n> :\n> > 1). Make the cost of (col1, other_column) is higher than (col1, col2)\n> > 2). The relationship between seqscan and index scan on index (col1,\n> other_column)\n> > is changed, (this is something I don't want). 
However my cost\n> difference between\n> > index scan & seq scan usually very huge, so the change above should has\n> > nearly no impact on that choice. 3). Make the cost higher index scan\n> for\n> > Index (col1) only. Overall I think nothing will make thing worse.\n>\n> When the statistics is almost correct (or better than what you have in\n> your example), the index which does not cover all the columns in all\n> the conditions will be expensive anyways because of extra cost to\n> access heap for the extra rows not filtered by that index. An index\n> covering all the conditions would have its scan cost cheaper since\n> there will be fewer rows and hence fewer heap page accesses because of\n> more filtering. So I don't think we need any change in the current\n\ncosting model.\n>\n\nThank you for your reply. Looks you comments is based on the statistics\nis almost correct (or better than what I have in my example), That is\ntrue.\nHowever my goal is to figure out a way which can generate better plan even\nthe statistics is not correct (the statistics with such issue is not very\nuncommon,\nI just run into one such case and spend 1 week to handle some\nnon-technology\nstuff after that). I think the current issue is even my patch can make\nthe worst case\nbetter, we need to make sure the average performance not worse.\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, May 29, 2020 at 9:37 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Fri, May 29, 2020 at 6:40 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>>\n>> >so we need to optimize the cost model for such case, the method is the\n>> >patch I mentioned above.\n>>\n>> Making the planner more robust w.r.t. to estimation errors is nice, but\n>> I wouldn't go as far saying we should optimize for such cases. The stats\n>> can be arbitrarily off, so should we expect the error to be 10%, 100% or\n>> 1000000%?\n>\n>\n> I don't think my patch relay on anything like that.   
My patch doesn't fix the\n> statistics issue,  just adding the extra cost on qual cost on Index Filter part.\n> Assume the query pattern are where col1= X and col2 = Y. The impacts are :\n> 1).  Make the cost of (col1, other_column) is higher than (col1, col2)\n> 2). The relationship between seqscan and index scan on index (col1, other_column)\n> is changed, (this is something I don't want).  However my cost difference between\n> index scan & seq scan usually very huge, so the change above should has\n> nearly no impact on that choice.   3). Make the cost higher index scan for\n> Index (col1) only.  Overall I think nothing will make thing worse.\n\nWhen the statistics is almost correct (or better than what you have in\nyour example), the index which does not cover all the columns in all\nthe conditions will be expensive anyways because of extra cost to\naccess heap for the extra rows not filtered by that index. An index\ncovering all the conditions would have its scan cost cheaper since\nthere will be fewer rows and hence fewer heap page accesses because of\nmore filtering. So I don't think we need any change in the current\ncosting model. Thank you for your reply.  Looks you comments is based on the statisticsis almost correct (or better than what I have in my example),  That is true. However my goal is to figure out a way which can generate better plan eventhe statistics is not correct (the statistics with such issue is not very uncommon,I just run into one such case and spend 1 week to handle some non-technology stuff after that).   I think the current issue is even my patch can make the worst casebetter, we need to make sure the average performance not worse. -- Best RegardsAndy Fan", "msg_date": "Fri, 29 May 2020 21:58:49 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." 
}, { "msg_contents": "> The other serious error we could be making here is to change things on\n>> the basis of just a few examples. You really need a pretty wide range\n>> of test cases to be sure that you're not making things worse, any time\n>> you're twiddling basic parameters like these.\n>>\n>>\n> I will try more thing with this direction, thanks for suggestion.\n>\n\nI choose TPC-H for this purpose and the data and index setup based on [1],\nthe attached normal.log is the plan without this patch, and patched.log is\nthe\nplan with the patch. In general, the best path doesn't change due to this\npatch,\nAll the plans whose cost changed has the following patten, which is\nexpected.\n\nIndex Scan ...\n Index Cond: ...\n Filter: ...\n\nIf you diff the two file, you may find the cost of \"Index Scan\" doesn't\nchange,\nthat is mainly because it only show 2 digits in cost, which is not accurate\nenough\nto show the difference. However with a nest loop, the overall plan shows\nthe cost\ndifference.\n\n[1] https://ankane.org/tpc-h\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 29 May 2020 22:11:52 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Make the qual cost on index Filter slightly higher than qual cost\n on index Cond." } ]
[ { "msg_contents": "In postgresql.conf, it says:\n\n#max_slot_wal_keep_size = -1 # measured in bytes; -1 disables\n\nI don't know if that is describing the dimension of this parameter or the\nunits of it, but the default units for it are megabytes, not individual\nbytes, so I think it is pretty confusing.\n\nCheers,\n\nJeff", "msg_date": "Tue, 26 May 2020 09:10:40 -0400", "msg_from": "Jeff Janes <jeff.janes@gmail.com>", "msg_from_op": true, "msg_subject": "max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "At Tue, 26 May 2020 09:10:40 -0400, Jeff Janes <jeff.janes@gmail.com> wrote in \n> In postgresql.conf, it says:\n> \n> #max_slot_wal_keep_size = -1 # measured in bytes; -1 disables\n> \n> I don't know if that is describing the dimension of this parameter or the\n> units of it, but the default units for it are megabytes, not individual\n> bytes, so I think it is pretty confusing.\n\nAgreed. It should be a leftover at the time the unit was changed\n(before committed) to MB from bytes. 
The default value makes the\nconfusion worse.\n\nIs the following works?\n\n#max_slot_wal_keep_size = -1 # in MB; -1 disables\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 27 May 2020 10:46:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "On Tue, 26 May 2020 at 21:46, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 26 May 2020 09:10:40 -0400, Jeff Janes <jeff.janes@gmail.com>\n> wrote in\n> > In postgresql.conf, it says:\n> >\n> > #max_slot_wal_keep_size = -1 # measured in bytes; -1 disables\n> >\n> > I don't know if that is describing the dimension of this parameter or the\n> > units of it, but the default units for it are megabytes, not individual\n> > bytes, so I think it is pretty confusing.\n>\n> Agreed. It should be a leftover at the time the unit was changed\n> (before committed) to MB from bytes. The default value makes the\n> confusion worse.\n>\n> Is the following works?\n>\n> #max_slot_wal_keep_size = -1 # in MB; -1 disables\n\n\nExtreme pedant question: Is it MB (10^6 bytes) or MiB (2^20 bytes)?
", "msg_date": "Tue, 26 May 2020 22:56:39 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "On Wed, May 27, 2020 at 10:46:27AM +0900, Kyotaro Horiguchi wrote:\n> Agreed. It should be a leftover at the time the unit was changed\n> (before committed) to MB from bytes. The default value makes the\n> confusion worse.\n> \n> Is the following works?\n> \n> #max_slot_wal_keep_size = -1 # in MB; -1 disables\n\nIndeed, better to fix that. The few GUCs using memory units that have\nsuch a mention in their comments use the actual name of the memory\nunit, and not its abbreviation (see log_temp_files). 
The default value makes the\n> > confusion worse.\n> >\n> > Is the following works?\n> >\n> > #max_slot_wal_keep_size = -1 # in MB; -1 disables\n> \n> \n> Extreme pedant question: Is it MB (10^6 bytes) or MiB (2^20 bytes)?\n\n\nGUC variables for file/memory sizes are in a traditional\nrepresentation, that is, a power of two represented by\nSI-prefixes. AFAICS PostgreSQL doesn't use binary-prefixed units.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 27 May 2020 15:48:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "\n\nOn 2020/05/27 15:11, Michael Paquier wrote:\n> On Wed, May 27, 2020 at 10:46:27AM +0900, Kyotaro Horiguchi wrote:\n>> Agreed. It should be a leftover at the time the unit was changed\n>> (before committed) to MB from bytes. The default value makes the\n>> confusion worse.\n>>\n>> Is the following works?\n>>\n>> #max_slot_wal_keep_size = -1 # in MB; -1 disables\n> \n> Indeed, better to fix that. The few GUCs using memory units that have\n> such a mention in their comments use the actual name of the memory\n> unit, and not its abbreviation (see log_temp_files). 
So it seems more\n> logic to me to just use \"in megabytes; -1 disables\", that would be\n> also more consistent with the time-unit-based ones.\n\n+1\n\n#temp_file_limit = -1\t\t\t# limits per-process temp file space\n\t\t\t\t\t# in kB, or -1 for no limit\n\nBTW, the abbreviation \"in kB\" is used in temp_file_limit.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 27 May 2020 16:00:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "At Wed, 27 May 2020 15:11:00 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, May 27, 2020 at 10:46:27AM +0900, Kyotaro Horiguchi wrote:\n> > Agreed. It should be a leftover at the time the unit was changed\n> > (before committed) to MB from bytes. The default value makes the\n> > confusion worse.\n> > \n> > Is the following works?\n> > \n> > #max_slot_wal_keep_size = -1 # in MB; -1 disables\n> \n> Indeed, better to fix that. The few GUCs using memory units that have\n> such a mention in their comments use the actual name of the memory\n> unit, and not its abbreviation (see log_temp_files). So it seems more\n\nI was not sure which is preferable. Does that mean we will fix the\nfollowing, too?\n\n> #temp_file_limit = -1 # limits per-process temp file space\n> # in kB, or -1 for no limit\n\n> logic to me to just use \"in megabytes; -1 disables\", that would be\n> also more consistent with the time-unit-based ones.\n\nI don't oppose to full-spelling. 
How about the attached?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 27 May 2020 16:21:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "On Wed, May 27, 2020 at 04:21:59PM +0900, Kyotaro Horiguchi wrote:\n> I don't oppose to full-spelling. How about the attached?\n\nNo problem from me.\n--\nMichael", "msg_date": "Wed, 27 May 2020 16:35:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "On Wed, May 27, 2020 at 04:35:51PM +0900, Michael Paquier wrote:\n> On Wed, May 27, 2020 at 04:21:59PM +0900, Kyotaro Horiguchi wrote:\n> > I don't oppose to full-spelling. How about the attached?\n> \n> No problem from me.\n\nAnd applied this one as of 55ca50d.\n--\nMichael", "msg_date": "Thu, 28 May 2020 15:44:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" }, { "msg_contents": "At Thu, 28 May 2020 15:44:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, May 27, 2020 at 04:35:51PM +0900, Michael Paquier wrote:\n> > On Wed, May 27, 2020 at 04:21:59PM +0900, Kyotaro Horiguchi wrote:\n> > > I don't oppose to full-spelling. How about the attached?\n> > \n> > No problem from me.\n> \n> And applied this one as of 55ca50d.\n\nThanks!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 28 May 2020 17:09:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: max_slot_wal_keep_size comment in postgresql.conf" } ]
[ { "msg_contents": "Hi,\n\nIs there a way to insert the cron job execution status to a table?\nOr any other method to identify the job status without checking the log\nfile?\n\n*Regards,*\n*Rajin *", "msg_date": "Wed, 27 May 2020 00:08:53 +0530", "msg_from": "Rajin Raj <rajin.raj@opsveda.com>", "msg_from_op": true, "msg_subject": "PG_CRON logging" }, { "msg_contents": "On Tuesday, May 26, 2020, Rajin Raj <rajin.raj@opsveda.com> wrote:\n\n> Hi,\n>\n> Is there a way to insert the cron job execution status to a table?\n> Or any other method to identify the job status without checking the log\n> file?\n>\n\nPlease just pick one list to send emails to. And this isn’t a topic\nrelevant to -hackers.\n\nSimplest solution is to call out to psql in whatever process is being run.\nI don’t know how, if at all, to get the cron daemon to send output to a custom\ntarget that could do the psql work more generically. Though I do not think\ndoing so would be a good idea even if it can be done.\n\nDavid J.", "msg_date": "Tue, 26 May 2020 11:50:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG_CRON logging" } ]
[ { "msg_contents": "Hackers,\n\nAttached is a patch for a `pg' command that consolidates various PostgreSQL functionality into a single command, along the lines of how `git' commands are run from a single 'git' executable. In other words,\n\n `pg upgrade` # instead of `pg_upgrade`\n `pg resetwal` # instead of `pg_resetwal`\n\nThis has been discussed before on the -hackers list, but I don't recall seeing a patch. I'm submitting this patch mostly as a way of moving the conversation along, fully expecting the community to want some (or all) of what I wrote to be changed.\n\nI'd also appreciate +1 and -1 votes on the overall idea, in case this entire feature, regardless of implementation, is simply something the community does not want.\n\nOnce again, this is mostly intended as a starting point for discussion.\n\n\nThe patch moves some commands from BINDIR to LIBEXECDIR where `pg' expects to find them. For commands named pg_foo, the executable is still named pg_foo and the sources are still located in src/bin/pg_foo/, but the command can now be run as `pg foo`, `pg foo --version`, `pg foo FOO SPECIFIC ARGS`, etc.\n\nThe command pgbench (no underscore) maps to 'pg bench'.\n\nCommands without a \"pg\" prefix stay the same, so \"createdb\" => \"pg createdb\", etc.\n\nThe 'psql' and 'postgres' executables (and the 'postmaster' link) have been left in BINDIR, as has 'ecpg'. The 'pg' executable has been added to BINDIR.\n\nAll other executables have been moved to LIBEXECDIR where they retain their old names and can still be run directly from the command line. If we committed this patch for v14, I think it makes sense that packagers could put the LIBEXECDIR in the PATH so that 3rd-party scripts which call pg_ctl, initdb, etc. continue to work. For that reason, I did not change the names of the executables, merely their location. 
During conversations with Robert off-list, we discussed renaming the executables to things like 'pg-ctl' (hyphen rather than underscore), mostly because that's the more modern way of doing it and follows what 'git' does. To avoid breaking scripts that execute these commands by the old name, this patch doesn't go that far. It also leaves the usage() functions alone such that when they report their own progname in the usage text, they do so under the old name. This would need to change at some point, but I'm unclear on whether that would be for v14 or if it would be delayed.\n\nThe binaries 'createuser' and 'dropuser' might be better named 'createrole' and 'droprole'. I don't currently have aliases in this patch, but it might make sense to allow 'pg createrole' as a synonym for 'pg createuser' and 'pg droprole' as a synonym for 'pg dropuser'. I have not pursued that yet, largely because as soon as you go that route, it starts making sense to have things like 'pg drop user', 'pg cluster db' and so forth, with the extra spaces. How far would people want me to go in this direction?\n\nPrior to this patch, postgres binaries that need to execute other postgres binaries determine the BINDIR using find_my_exec and trimming off their own executable name. They can then assume the other binary is in that same directory. After this patch, binaries need to find the common prefix ROOTDIR = commonprefix(BINDIR,LIBEXECDIR) and then assume the other binary is either in ROOTDIR/binsuffix or ROOTDIR/libexecsuffix. This may cause problems on windows if BINDIR and LIBEXECDIR are configured on different drives, as there won't be a common prefix of C:\\my\\pg\\bin and D:\\my\\pg\\libexec. I'm hoping somebody with more Windows savvy expresses an opinion about how to handle this.\n\nThe handling of the old libexec directory in pg_upgrade is not as robust as it could be. 
I'll look to improve that for a subsequent version of the patch, assuming the overall idea of the patch seems acceptable.\n\nI've updated some of the doc/sgml/* files, but don't want to spend too much time changing documentation until we have some consensus that the patch is moving in the right direction.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 26 May 2020 16:19:37 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Tue, May 26, 2020 at 4:19 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> I'd also appreciate +1 and -1 votes on the overall idea, in case this\n> entire feature, regardless of implementation, is simply something the\n> community does not want.\n>\n\n-1, at least as part of core. My question would be how much of this is\nwould be needed if someone were to create an external project that\ninstalled a \"pg\" command on top of an existing PostgreSQL installation. Or\nput differently, how many of the changes to the existing binaries are\nrequired versus nice-to-have?\n\nDavid J.", "msg_date": "Tue, 26 May 2020 16:59:10 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "Hi\n\nOn Wed, May 27, 2020 at 12:19 AM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n> I think it makes sense that packagers could put the LIBEXECDIR in the PATH\n> so that 3rd-party scripts which call pg_ctl, initdb, etc. continue to\n> work. \n\n\nHaving packages that futz with the PATH is generally a bad idea, especially\nthose that support side-by-side installations of different versions. None\nof ours (EDBs) will be doing so.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 27 May 2020 09:13:19 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, May 27, 2020 at 1:19 AM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> Hackers,\n>\n> Attached is a patch for a `pg' command that consolidates various\n> PostgreSQL functionality into a single command, along the lines of how\n> `git' commands are run from a single 'git' executable. 
In other words,\n>\n> `pg upgrade` # instead of `pg_upgrade`\n> `pg resetwal` # instead of `pg_resetwal`\n>\n> This has been discussed before on the -hackers list, but I don't recall\n> seeing a patch. I'm submitting this patch mostly as a way of moving the\n> conversation along, fully expecting the community to want some (or all) of\n> what I wrote to be changed.\n>\n\nAs mentioned at least once before, the \"pg\" name is already taken in posix.\nGranted it has been removed now, but it was removed from posix in 2018,\nwhich I think is nowhere near soon enough to \"steal. See for example\nhttps://en.wikipedia.org/wiki/Pg_(Unix)\n\n\n\nAll other executables have been moved to LIBEXECDIR where they retain their\n> old names and can still be run directly from the command line. If we\n> committed this patch for v14, I think it makes sense that packagers could\n> put the LIBEXECDIR in the PATH so that 3rd-party scripts which call pg_ctl,\n> initdb, etc. continue to work.\n\n\nI would definitely not expect a packager to change the PATH, as also\nmentioned by others. More likely options would be to symlink the binaries\ninto the actual bindir, or just set both those directories to the same one\n(in the path) for a number of releases as a transition.\n\nBut you should definitely poll the packagers separately to make sure\nsomething is done that works well for them -- especially when it comes to\nintegrating with for example the debian/ubuntu wrapper system that already\nsupports multiple parallel installs. And mind that they don't typically\nfollow hackers actively (I think), so it would be worthwhile to bring their\nattention specifically to the thread. In many ways I'd find them more\nimportant to get input from than most \"other hackers\" :)\n\n\n\n> For that reason, I did not change the names of the executables, merely\n> their location. 
During conversations with Robert off-list, we discussed\n> renaming the executables to things like 'pg-ctl' (hyphen rather than\n> underscore), mostly because that's the more modern way of doing it and\n> follows what 'git' does. To avoid breaking scripts that execute these\n> commands by the old name, this patch doesn't go that far. It also leaves\n> the usage() functions alone such that when they report their own progname\n> in the usage text, they do so under the old name. This would need to\n> change at some point, but I'm unclear on whether that would be for v14 or\n> if it would be delayed.\n>\n\nUgh, yeah, please don't do that. Renaming them just to make it \"look more\nmodern\" helps nobody, really. Especially if the suggestion is people should\nbe using the shared-launcher binary anyway.\n\nusage() seems more reasonable to change as part of a patch like this.\n\n\nThe binaries 'createuser' and 'dropuser' might be better named 'createrole'\n> and 'droprole'. I don't currently have aliases in this patch, but it might\n> make sense to allow 'pg createrole' as a synonym for 'pg createuser' and\n> 'pg droprole' as a synonym for 'pg dropuser'. I have not pursued that yet,\n> largely because as soon as you go that route, it starts making sense to\n> have things like 'pg drop user', 'pg cluster db' and so forth, with the\n> extra spaces. How far would people want me to go in this direction?\n>\n\nI'd say a createrole would make sense, but certainly not a \"create role\".\nYou'd end up with unlimited number of commands. But in either of them, I'd\nsay keep aliases completely out of it for a first iteration.\n\n\nPrior to this patch, postgres binaries that need to execute other postgres\n> binaries determine the BINDIR using find_my_exec and trimming off their own\n> executable name. They can then assume the other binary is in that same\n> directory. 
After this patch, binaries need to find the common prefix\n> ROOTDIR = commonprefix(BINDIR,LIBEXECDIR) and then assume the other binary\n> is either in ROOTDIR/binsuffix or ROOTDIR/libexecsuffix. This may cause\n> problems on windows if BINDIR and LIBEXECDIR are configured on different\n> drives, as there won't be a common prefix of C:\my\pg\bin and\n> D:\my\pg\libexec. I'm hoping somebody with more Windows savvy expresses an\n> opinion about how to handle this.\n>\n\nMaybe the \"pg\" binary could just pass down it's own location as a parameter\nto the binary it calls, thereby making sure that binary has direct access\nto both?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 27 May 2020 10:50:48 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "\n\n> On May 26, 2020, at 4:59 PM, David G. 
Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Tue, May 26, 2020 at 4:19 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I'd also appreciate +1 and -1 votes on the overall idea, in case this entire feature, regardless of implementation, is simply something the community does not want.\n> \n> -1, at least as part of core. My question would be how much of this is would be needed if someone were to create an external project that installed a \"pg\" command on top of an existing PostgreSQL installation. Or put differently, how many of the changes to the existing binaries are required versus nice-to-have?\n\nIf the only goal of something like this were to have a frontend that could execute the various postgres binaries, then I'd say no changes to those binaries would be needed, and the frontend would not be worth very much. The value in having the frontend is that it makes it less difficult to introduce new commands to the postgres suite of commands, as you don't need to worry about whether another executable by the same name might happen to already exist somewhere. Even introducing a command named \"pg\" has already gotten such a response on this thread. By having the commands installed in postgres's libexec rather than bin, you can put whatever commands you want in libexec without worrying about conflicts. That still leaves open the question of whether existing commands get moved into libexec, and if so, if they keep the same name. An external project for this would be worthless in this regard, as the community wouldn't get any benefit when debating the merits of introducing a new command vs. 
the potential for conflicts.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 May 2020 07:00:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "\n\n> On May 27, 2020, at 1:13 AM, Dave Page <dpage@pgadmin.org> wrote:\n> \n> Hi\n> \n> On Wed, May 27, 2020 at 12:19 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I think it makes sense that packagers could put the LIBEXECDIR in the PATH so that 3rd-party scripts which call pg_ctl, initdb, etc. continue to work. \n> \n> Having packages that futz with the PATH is generally a bad idea, especially those that support side-by-side installations of different versions. None of ours (EDBs) will be doing so.\n\nI probably phrased that badly. The operative word in that sentence was \"could\". If we rename the binaries, people can still make links to them from the old name, but if we don't rename them, then either links or PATH changes *could* be used. I'm not trying to recommend any particular approach. 
Mentioning \"packagers\" probably wasn't helpful, as \"people\" works just as well in that sentence.\n\nThere is also the option of not moving the binaries at all, and only putting new commands into libexec, while grandfathering existing ones in bin.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 May 2020 07:06:15 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "\n\n> On May 27, 2020, at 1:50 AM, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Wed, May 27, 2020 at 1:19 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Hackers,\n> \n> Attached is a patch for a `pg' command that consolidates various PostgreSQL functionality into a single command, along the lines of how `git' commands are run from a single 'git' executable. In other words,\n> \n> `pg upgrade` # instead of `pg_upgrade`\n> `pg resetwal` # instead of `pg_resetwal`\n> \n> This has been discussed before on the -hackers list, but I don't recall seeing a patch. I'm submitting this patch mostly as a way of moving the conversation along, fully expecting the community to want some (or all) of what I wrote to be changed.\n> \n> As mentioned at least once before, the \"pg\" name is already taken in posix. Granted it has been removed now, but it was removed from posix in 2018, which I think is nowhere near soon enough to \"steal. See for example https://en.wikipedia.org/wiki/Pg_(Unix)\n\nCare to recommend a different name?\n\n> All other executables have been moved to LIBEXECDIR where they retain their old names and can still be run directly from the command line. If we committed this patch for v14, I think it makes sense that packagers could put the LIBEXECDIR in the PATH so that 3rd-party scripts which call pg_ctl, initdb, etc. continue to work. 
\n> \n> I would definitely not expect a packager to change the PATH, as also mentioned by others. More likely options would be to symlink the binaries into the actual bindir, or just set both those directories to the same one (in the path) for a number of releases as a transition.\n\nThere is nothing in the patch that expects packagers to muck with the PATH. The idea, badly phrased, was that by keeping the names of the executables and only changing locations, people would have more options for how to deal with the change.\n\n> But you should definitely poll the packagers separately to make sure something is done that works well for them -- especially when it comes to integrating with for example the debian/ubuntu wrapper system that already supports multiple parallel installs. And mind that they don't typically follow hackers actively (I think), so it would be worthwhile to bring their attention specifically to the thread. In many ways I'd find them more important to get input from than most \"other hackers\" :)\n\nYeah, good advice. Since I've already floated this on -hackers, I might wait a few days for comment, then if it looks encouraging, ask on other lists.\n\n> For that reason, I did not change the names of the executables, merely their location. During conversations with Robert off-list, we discussed renaming the executables to things like 'pg-ctl' (hyphen rather than underscore), mostly because that's the more modern way of doing it and follows what 'git' does. To avoid breaking scripts that execute these commands by the old name, this patch doesn't go that far. It also leaves the usage() functions alone such that when they report their own progname in the usage text, they do so under the old name. This would need to change at some point, but I'm unclear on whether that would be for v14 or if it would be delayed.\n> \n> Ugh, yeah, please don't do that. Renaming them just to make it \"look more modern\" helps nobody, really. 
Especially if the suggestion is people should be using the shared-launcher binary anyway. \n> \n> usage() seems more reasonable to change as part of a patch like this.\n> \n> \n> The binaries 'createuser' and 'dropuser' might be better named 'createrole' and 'droprole'. I don't currently have aliases in this patch, but it might make sense to allow 'pg createrole' as a synonym for 'pg createuser' and 'pg droprole' as a synonym for 'pg dropuser'. I have not pursued that yet, largely because as soon as you go that route, it starts making sense to have things like 'pg drop user', 'pg cluster db' and so forth, with the extra spaces. How far would people want me to go in this direction?\n> \n> I'd say a createrole would make sense, but certainly not a \"create role\". You'd end up with unlimited number of commands. But in either of them, I'd say keep aliases completely out of it for a first iteration.\n\nOk.\n\n> Prior to this patch, postgres binaries that need to execute other postgres binaries determine the BINDIR using find_my_exec and trimming off their own executable name. They can then assume the other binary is in that same directory. After this patch, binaries need to find the common prefix ROOTDIR = commonprefix(BINDIR,LIBEXECDIR) and then assume the other binary is either in ROOTDIR/binsuffix or ROOTDIR/libexecsuffix. This may cause problems on windows if BINDIR and LIBEXECDIR are configured on different drives, as there won't be a common prefix of C:\\my\\pg\\bin and D:\\my\\pg\\libexec. I'm hoping somebody with more Windows savvy expresses an opinion about how to handle this.\n> \n> Maybe the \"pg\" binary could just pass down it's own location as a parameter to the binary it calls, thereby making sure that binary has direct access to both? \n\nPerhaps. Thus far, I've avoided making the binaries dependent on being called from 'pg'. 
Having them depend on a parameter that 'pg' passes would be a big change.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 May 2020 07:20:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, May 27, 2020 at 3:00 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On May 26, 2020, at 4:59 PM, David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n> >\n> > On Tue, May 26, 2020 at 4:19 PM Mark Dilger <\n> mark.dilger@enterprisedb.com> wrote:\n> > I'd also appreciate +1 and -1 votes on the overall idea, in case this\n> entire feature, regardless of implementation, is simply something the\n> community does not want.\n> >\n> > -1, at least as part of core. My question would be how much of this is\n> would be needed if someone were to create an external project that\n> installed a \"pg\" command on top of an existing PostgreSQL installation. Or\n> put differently, how many of the changes to the existing binaries are\n> required versus nice-to-have?\n>\n> If the only goal of something like this were to have a frontend that could\n> execute the various postgres binaries, then I'd say no changes to those\n> binaries would be needed, and the frontend would not be worth very much.\n> The value in having the frontend is that it makes it less difficult to\n> introduce new commands to the postgres suite of commands, as you don't need\n> to worry about whether another executable by the same name might happen to\n> already exist somewhere. Even introducing a command named \"pg\" has already\n> gotten such a response on this thread. By having the commands installed in\n> postgres's libexec rather than bin, you can put whatever commands you want\n> in libexec without worrying about conflicts. 
That still leaves open the\n> question of whether existing commands get moved into libexec, and if so, if\n> they keep the same name. An external project for this would be worthless\n> in this regard, as the community wouldn't get any benefit when debating the\n> merits of introducing a new command vs. the potential for conflicts.\n>\n\nThe issue you raise can almost certainly be resolved simply by prefixing\npg- or something similar on all the existing binary names.\n\nI think the beauty of having a single CLI executable is that we can\nredesign the user interface to make it nice and consistent for all the\ndifferent functions it offers, and to cleanup old cruft such as createuser\nvs. createrole and pgbench vs. pg_* and so on.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, May 27, 2020 at 3:00 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> On May 26, 2020, at 4:59 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Tue, May 26, 2020 at 4:19 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I'd also appreciate +1 and -1 votes on the overall idea, in case this entire feature, regardless of implementation, is simply something the community does not want.\n> \n> -1, at least as part of core.  My question would be how much of this is would be needed if someone were to create an external project that installed a \"pg\" command on top of an existing PostgreSQL installation.  Or put differently, how many of the changes to the existing binaries are required versus nice-to-have?\n\nIf the only goal of something like this were to have a frontend that could execute the various postgres binaries, then I'd say no changes to those binaries would be needed, and the frontend would not be worth very much.  
The value in having the frontend is that it makes it less difficult to introduce new commands to the postgres suite of commands, as you don't need to worry about whether another executable by the same name might happen to already exist somewhere.  Even introducing a command named \"pg\" has already gotten such a response on this thread.  By having the commands installed in postgres's libexec rather than bin, you can put whatever commands you want in libexec without worrying about conflicts.  That still leaves open the question of whether existing commands get moved into libexec, and if so, if they keep the same name.  An external project for this would be worthless in this regard, as the community wouldn't get any benefit when debating the merits of introducing a new command vs. the potential for conflicts.The issue you raise can almost certainly be resolved simply by prefixing pg- or something similar on all the existing binary names. I think the beauty of having a single CLI executable is that we can redesign the user interface to make it nice and consistent for all the different functions it offers, and to cleanup old cruft such as createuser vs. createrole and pgbench vs. pg_* and so on.-- Dave PageBlog: http://pgsnake.blogspot.comTwitter: @pgsnakeEnterpriseDB UK: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Wed, 27 May 2020 15:20:41 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "## Magnus Hagander (magnus@hagander.net):\n\n> Ugh, yeah, please don't do that. Renaming them just to make it \"look more\n> modern\" helps nobody, really. Especially if the suggestion is people should\n> be using the shared-launcher binary anyway.\n\nQuick, let's invent a fancy name like \"microcommand\" for doing this\nlike we're used to; then we tell people that's the new \"modern\" (anybody\ncares to write a Medium article for that? 
(why Medium? it's neither\nRare nor Well Done)). What might make sense for (some) version control\nsystems and is tempting in languages which have forgotten howto shared\nlibrary might not be the best architecture for everything. What has\nbecome of the collection of small dedicated tools?\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n", "msg_date": "Wed, 27 May 2020 17:32:45 +0200", "msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, May 27, 2020 at 4:51 AM Magnus Hagander <magnus@hagander.net> wrote:\n> As mentioned at least once before, the \"pg\" name is already taken in posix. Granted it has been removed now, but it was removed from posix in 2018, which I think is nowhere near soon enough to \"steal. See for example https://en.wikipedia.org/wiki/Pg_(Unix)\n\nThe previous discussion of this general topic starts at\nhttp://postgr.es/m/CA+TgmoZQmDY7nLrQ96nLm-wrnmNPY90qdMvZ6LtJO941GwgLMg@mail.gmail.com\nand the discussion of this particular issue starts at\nhttps://www.postgresql.org/message-id/15135.1586703479%40sss.pgh.pa.us\n\nI think I agree with what Andres said on that thread: rather than\nwaiting a long time to see what happens, we should grab the name\nbefore somebody else does. As also discussed on that thread, perhaps\nwe should have the official name of the binary be 'pgsql' with 'pg' as\na symlink that some packagers might choose to omit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 27 May 2020 11:56:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, May 27, 2020 at 4:51 AM Magnus Hagander <magnus@hagander.net> wrote:\n>> For that reason, I did not change the names of the executables, merely their location. 
During conversations with Robert off-list, we discussed renaming the executables to things like 'pg-ctl' (hyphen rather than underscore), mostly because that's the more modern way of doing it and follows what 'git' does. To avoid breaking scripts that execute these commands by the old name, this patch doesn't go that far. It also leaves the usage() functions alone such that when they report their own progname in the usage text, they do so under the old name. This would need to change at some point, but I'm unclear on whether that would be for v14 or if it would be delayed.\n>\n> Ugh, yeah, please don't do that. Renaming them just to make it \"look more modern\" helps nobody, really. Especially if the suggestion is people should be using the shared-launcher binary anyway.\n\nThe way things like 'git' work is that 'git thunk' just looks in a\ndesignated directory for an executable called git-thunk, and invokes\nit if it's found. If you want to invent your own git subcommand, you\ncan. I guess 'git help' wouldn't know to list it, but you can still\nget the metacommand to execute it. That only works if you use a\nstandard naming, though. If the meta-executable has to hard-code the\nnames of all the individual executables that it calls, then you can't\nreally make that work.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 27 May 2020 12:35:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, 27 May 2020 at 12:35, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Ugh, yeah, please don't do that. Renaming them just to make it \"look more\n> modern\" helps nobody, really. 
Especially if the suggestion is people should\n> be using the shared-launcher binary anyway.\n>\n> The way things like 'git' work is that 'git thunk' just looks in a\n> designated directory for an executable called git-thunk, and invokes\n> it if it's found. If you want to invent your own git subcommand, you\n> can. I guess 'git help' wouldn't know to list it, but you can still\n> get the metacommand to execute it. That only works if you use a\n> standard naming, though. If the meta-executable has to hard-code the\n> names of all the individual executables that it calls, then you can't\n> really make that work.\n>\n\nYou could make the legacy names symlinks to the new systematic names.\n\n", "msg_date": "Wed, 27 May 2020 13:19:57 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On 2020-05-27 01:19, Mark Dilger wrote:\n> Attached is a patch for a `pg' command that consolidates various PostgreSQL functionality into a single command, along the lines of how `git' commands are run from a single 'git' executable. 
In other words,\n> \n> `pg upgrade` # instead of `pg_upgrade`\n> `pg resetwal` # instead of `pg_resetwal`\n> \n> This has been discussed before on the -hackers list, but I don't recall seeing a patch. I'm submitting this patch mostly as a way of moving the conversation along, fully expecting the community to want some (or all) of what I wrote to be changed.\n> \n> I'd also appreciate +1 and -1 votes on the overall idea, in case this entire feature, regardless of implementation, is simply something the community does not want.\n\nI'm not excited about this.\n\nFirst, consider that git has over 170 subcommands. PostgreSQL currently \nhas 36, and we're probably not going to add dozens more any time soon. \nSo the issue is not of the same scope. It also seems to me that the way \ngit is organized this becomes a self-perpetuating system: They are \nadding subcommands all the time without much care where you might in \nother situations think harder about combining them and keeping the \nsurface area smaller. For example, we wouldn't really need separate \ncommands clusterdb, reindexdb, vacuumdb if we had better support in psql \nfor \"run this command in each database [in parallel]\".\n\ngit (and svn etc. before it) also has a much more consistent operating \nmodel that is sensible to reflect in the command structure. They all \nmore or less operate on a git repository, in apparently 170 different \nways. The 36 PostgreSQL commands don't all work in the same way. Now \nif someone were to propose a way to combine server tools, perhaps like \npginstancetool {init|controldata|resetwal|checksum}, and perhaps also in \na way that actually saves code duplication and inconsistency, that would \nbe something to consider. Or maybe a client-side tool that does \npgclienttool {create user|drop user|create database|...} -- but that \npretty much already exists by the name of psql. 
But just renaming \neverything that's shipped with PostgreSQL to one common bucket without \nregard to how it actually works and what role it plays would be \nunnecessarily confusing.\n\nAlso consider some practical concerns with the command structure you \ndescribe: Tab completion of commands wouldn't work anymore, unless you \nsupply custom tab completion setups. The direct association between a \ncommand and its man page would be broken. Shell scripting becomes more \nchallenging: Instead of writing common things like \"if which \npg_waldump; then\" you'd need some custom code, to be determined. These \nare all solvable, but just a sum of slight annoyances, for no real benefit.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 27 May 2020 21:59:59 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, 27 May 2020 at 16:00, Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n\n> Also consider some practical concerns with the command structure you\n> describe: Tab completion of commands wouldn't work anymore, unless you\n> supply custom tab completion setups. The direct association between a\n> command and its man page would be broken. Shell scripting becomes more\n> challenging: Instead of writing common things like \"if which\n> pg_waldump; then\" you'd need some custom code, to be determined. These\n> are all solvable, but just a sum of slight annoyances, for no real benefit.\n>\n\nI don’t think the man page concern is justified. 
We could have a “help”\nsubcommand, just like git; “git help add” is (to a casual observer;\nprobably not precisely) the same as “man git-add”.\n\n", "msg_date": "Wed, 27 May 2020 16:49:12 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, 27 May 2020 at 16:49, Isaac Morland <isaac.morland@gmail.com> wrote:\n\n> On Wed, 27 May 2020 at 16:00, Peter Eisentraut <\n> peter.eisentraut@2ndquadrant.com> wrote:\n>\n>\n>> Also consider some practical concerns with the command structure you\n>> describe: Tab completion of commands wouldn't work anymore, unless you\n>> supply custom tab completion setups.  The direct association between a\n>> command and its man page would be broken.  Shell scripting becomes more\n>> challenging:  Instead of writing common things like \"if which\n>> pg_waldump; then\" you'd need some custom code, to be determined.  These\n>> are all solvable, but just a sum of slight annoyances, for no real\n>> benefit.\n>>\n>\n> I don’t think the man page concern is justified. 
We could have a “help”\n> subcommand, just like git; “git help add” is (to a casual observer;\n> probably not precisely) the same as “man git-add”.\n>\n\nThere's some very small gulf in between the concerns...\n\n- On the one hand, git (and systems with similar \"keyword\" subsystems) have\narrived at reasonable solutions to cope with various of the systematic\nissues, so that these shouldn't be considered to be gigantic insurmountable\nbarriers.\n\nIndeed, some of these tools present systematic solutions to additional\nmatters. I was very pleased when I found that some of the Kubernetes tools\nI was working with included subcommands to configure my shell to know how\nto do command completion. Seems like a fine thing to me to have that\nbecome systematically *easier*, and I think that would be a good new\nsubcommand... \"pg completion bash\" and \"pg completion zsh\" would be mighty\nfine things.\n\n- On the other hand, mapping old commands that are names of programs onto\n\"pg subcommands\" is some additional effort, and we haven't yet started\nbikeshedding on the favoured names :-)\n\nI have lately noticed some interesting looking apps wandering about that\nbriefly attracted my attention, but, which, due to being painfully\ndifferent from the existing commands and tools that I have already learned,\nand have \"muscle memory\" for, am loath to leave. I'll throw out 4\nexamples, 3 of them personal:\na) nnn is a terminal-based file manager. It has some nifty features\nsurrounding the concept that you can set up custom file filters to look for\nsorts of files that you find interesting, and then offers customizable UI\nfor running favorite actions against them. I'm 25 years into using Emacs\nDired mode; as neat as nnn seems, it's not enough of an improvement to be\nworth the pain in the neck of relearning stuff.\nb) 3mux is a redo of tmux (which was a redo of GNU Screen), and has key\nmappings that make it way easier for a new user to learn. 
I'm 20-ish years\ninto Screen/Tmux; I wasn't looking for it to be easier to learn, because I\ndid that quite a while ago.\nc) Kakoune is a vi-like editor which rotates from vi's \"verb/object\"\napproach to commands to a \"object/verb\" approach, for apparent more\nefficiency. I think I already mentioned that my \"muscle memory\" is biased\nby Emacs features... I'm not adding a \"rotated-vi-like\" thing into my mix\n:-(\nd) systemd is a Controversial System; the folk that seem particularly irate\nabout it seem to be \"Old Bearded Sysadmins\" that hate the idea of redoing\ntheir understandings of how Unix systems initialize. Personally, my\nfeelings are ambivalent; I'm using it where I find some use, and have not\nbeen displeased with my results. And since modern systems now have USB and\nnetwork devices added and dropped on a whim, there's a critical need for\nsomething newer with more dynamic responses than old SysV Init. But I\ncertainly \"get\" that some aren't so happy with it, and I'm not thrilled at\nthe ongoing scope creep that never seems to end.\n\nThere is merit to having a new, harmonious set of \"pg commands.\" But it's\neminently easy to get into trouble (and get people mad) by changing things\nthat have been working fine for many years. Half the battle (against the\n\"getting people mad\" part) is making sure that it's clear that people were\nlistened to. Listening to the community is one of the important things to\ndo :-).\n-- \nWhen confronted by a difficult problem, solve it by reducing it to the\nquestion, \"How would the Lone Ranger handle this?\"\n\nOn Wed, 27 May 2020 at 16:49, Isaac Morland <isaac.morland@gmail.com> wrote:On Wed, 27 May 2020 at 16:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote: \nAlso consider some practical concerns with the command structure you \ndescribe: Tab completion of commands wouldn't work anymore, unless you \nsupply custom tab completion setups.  
The direct association between a \ncommand and its man page would be broken.  Shell scripting becomes more \nchallenging:  Instead of writing common things like \"if which \npg_waldump; then\" you'd need some custom code, to be determined.  These \nare all solvable, but just a sum of slight annoyances, for no real benefit.I don’t think the man page concern is justified. We could have a “help” subcommand, just like git; “git help add” is (to a casual observer; probably not precisely) the same as “man git-add”.There's some very small gulf in between the concerns...- On the one hand, git (and systems with similar \"keyword\" subsystems) have arrived at reasonable solutions to cope with various of the systematic issues, so that these shouldn't be considered to be gigantic insurmountable barriers.Indeed, some of these tools present systematic solutions to additional matters.  I was very pleased when I found that some of the Kubernetes tools I was working with included subcommands to configure my shell to know how to do command completion.  Seems like a fine thing to me to have that become systematically *easier*, and I think that would be a good new subcommand...  \"pg completion bash\" and \"pg completion zsh\" would be mighty fine things.- On the other hand, mapping old commands that are names of programs onto \"pg subcommands\" is some additional effort, and we haven't yet started bikeshedding on the favoured names :-)I have lately noticed some interesting looking apps wandering about that briefly attracted my attention, but, which, due to being painfully different from the existing commands and tools that I have already learned, and have \"muscle memory\" for, am loath to leave.   I'll throw out 4 examples, 3 of them personal:a) nnn is a terminal-based file manager.  
It has some nifty features surrounding the concept that you can set up custom file filters to look for sorts of files that you find interesting, and then offers customizable UI for running favorite actions against them.  I'm 25 years into using Emacs Dired mode; as neat as nnn seems, it's not enough of an improvement to be worth the pain in the neck of relearning stuff.b) 3mux is a redo of tmux (which was a redo of GNU Screen), and has key mappings that make it way easier for a new user to learn.  I'm 20-ish years into Screen/Tmux; I wasn't looking for it to be easier to learn, because I did that quite a while ago.c) Kakoune is a vi-like editor which rotates from vi's \"verb/object\" approach to commands to a \"object/verb\" approach, for apparent more efficiency.  I think I already mentioned that my \"muscle memory\" is biased by Emacs features...  I'm not adding a \"rotated-vi-like\" thing into my mix :-(d) systemd is a Controversial System; the folk that seem particularly irate about it seem to be \"Old Bearded Sysadmins\" that hate the idea of redoing their understandings of how Unix systems initialize.  Personally, my feelings are ambivalent; I'm using it where I find some use, and have not been displeased with my results.  And since modern systems now have USB and network devices added and dropped on a whim, there's a critical need for something newer with more dynamic responses than old SysV Init.  But I certainly \"get\" that some aren't so happy with it, and I'm not thrilled at the ongoing scope creep that never seems to end.There is merit to having a new, harmonious set of \"pg commands.\"  But it's eminently easy to get into trouble (and get people mad) by changing things that have been working fine for many years.  Half the battle (against the \"getting people mad\" part) is making sure that it's clear that people were listened to.  
Listening to the community is one of the important things to do :-).-- When confronted by a difficult problem, solve it by reducing it to thequestion, \"How would the Lone Ranger handle this?\"", "msg_date": "Wed, 27 May 2020 17:42:18 -0400", "msg_from": "Christopher Browne <cbbrowne@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "\n\n> On May 27, 2020, at 2:42 PM, Christopher Browne <cbbrowne@gmail.com> wrote:\n> \n> There is merit to having a new, harmonious set of \"pg commands.\" But it's eminently easy to get into trouble (and get people mad) by changing things that have been working fine for many years. Half the battle (against the \"getting people mad\" part) is making sure that it's clear that people were listened to. Listening to the community is one of the important things to do :-).\n\nI totally agree.\n\nThere are options for keeping the existing tools and not modifying them. If the \"pg\" command (or \"pgsql\" command, if we use that naming) knows, for example, how to execute pg_ctl, that's no harm to people who prefer to just run pg_ctl directly. It only becomes a problem when this patch, or one like it, decides that \"pg_ctl\" needs to work differently, have a different set of command line options, etc. The only thing I changed about pg_ctl and friends in the v1 patch is that they moved from BINDIR to LIBEXECDIR, and internally they were updated to be able to still work despite the move. That change was partly designed to spark conversation. If people prefer they get moved back into BINDIR, fine by me. If people instead prefer that the patch include links between the old BINDIR location and the new LIBEXECDIR location, that's also fine by me. The \"pg\" command doesn't really care either. I'm intentionally not calling the shots here. I'm asking the community members, many of whom expressed an interest in something along the lines of this patch. 
I'm happy to do the grunt work of the patch to meet the community needs.\n\nDave Page expressed an interest upthread in standardizing the interfaces of the various commands.  He didn't say this, but I assume he is thinking about things like -d meaning --debug in initdb but meaning --dbname=CONNSTR in pg_basebackup.  We could break backwards compatibility by changing one or both of those commands to interpret those options in some new standardized way.  Or, we could preserve backwards compatibility by having \"pg\" take --dbname and --debug options and pass them to the subcommand according to the grandfathered rules of the subcommand.  I tend towards preserving compatibility, but maybe somebody on this list wants to argue for the other side?  For new commands introduced after this patch gets committed (assuming it does), options could be passed from \"pg\" through to the subcommand unmolested.  That supports Robert's idea that people could install new subcommands from contrib modules without the \"pg\" command needing to know anything about them.  This, too, is still open to conversation and debate.\n\nI'd like to hear from more community members on this.  I'm listening.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 27 May 2020 16:00:03 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On 2020-05-27 23:42, Christopher Browne wrote:\n> d) systemd is a Controversial System; the folk that seem particularly \n> irate about it seem to be \"Old Bearded Sysadmins\" that hate the idea of \n> redoing their understandings of how Unix systems initialize. Personally, \n> my feelings are ambivalent; I'm using it where I find some use, and have \n> not been displeased with my results.  
And since modern systems now have \n> USB and network devices added and dropped on a whim, there's a critical \n> need for something newer with more dynamic responses than old SysV \n> Init.  But I certainly \"get\" that some aren't so happy with it, and I'm \n> not thrilled at the ongoing scope creep that never seems to end.\n\nIt is worth noting that systemd did not go for a one-binary-for-all \napproach. It has different binaries for different parts of the \nfunctionality. systemctl for controlling services, journalctl for \ncontrolling the journal, etc. Just as a data point to show that there \nis no single \"new\" way to do things.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 28 May 2020 08:39:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" }, { "msg_contents": "On Wed, May 27, 2020 at 4:00 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> First, consider that git has over 170 subcommands. PostgreSQL currently\n> has 36, and we're probably not going to add dozens more any time soon.\n> So the issue is not of the same scope. It also seems to me that the way\n> git is organized this becomes a self-perpetuating system: They are\n> adding subcommands all the time without much care where you might in\n> other situations think harder about combining them and keeping the\n> surface area smaller. For example, we wouldn't really need separate\n> commands clusterdb, reindexdb, vacuumdb if we had better support in psql\n> for \"run this command in each database [in parallel]\".\n\nI see your point here, but I think it's clear that some people are\nalready concerned about the proliferation of binaries on namespace\ngrounds. 
For example, it was proposed that pg_verifybackup should be\npart of pg_basebackup, and then later and by someone else that it\nshould be part of pg_checksums. It's quite different from either of\nthose things and I'm pretty confident it shouldn't be merged with\neither one, but there was certainly pressure in that direction. So\napparently for some people 36 is already too many. It's not clear to\nme why it should be a problem to have a lot of commands, as long as\nthey all start with \"pg_\", but if it is then we should do something\nabout it rather than waiting until we have more of them.\n\n> git (and svn etc. before it) also has a much more consistent operating\n> model that is sensible to reflect in the command structure. They all\n> more or less operate on a git repository, in apparently 170 different\n> ways. The 36 PostgreSQL commands don't all work in the same way. Now\n> if someone were to propose a way to combine server tools, perhaps like\n> pginstancetool {init|controldata|resetwal|checksum}, and perhaps also in\n> a way that actually saves code duplication and inconsistency, that would\n> be something to consider. Or maybe a client-side tool that does\n> pgclienttool {create user|drop user|create database|...} -- but that\n> pretty much already exists by the name of psql. But just renaming\n> everything that's shipped with PostgreSQL to one common bucket without\n> regard to how it actually works and what role it plays would be\n> unnecessarily confusing.\n\nThis doesn't strike me as a very practical proposal because\n\"pginstancetool checksums\" is really stinking long compared to\n\"pg_checksums\", where as \"pg checksums\" is no different, or one\nkeystroke better if you assume that the underscore requires pressing\nshift.\n\n> Also consider some practical concerns with the command structure you\n> describe: Tab completion of commands wouldn't work anymore, unless you\n> supply custom tab completion setups. 
The direct association between a\n> command and its man page would be broken. Shell scripting becomes more\n> challenging: Instead of writing common things like \"if which\n> pg_waldump; then\" you'd need some custom code, to be determined. These\n> are all solvable, but just a sum of slight annoyances, for no real benefit.\n\nThere are some potential benefits, I think, such as:\n\n1. It seems to be the emerging standard for command line interfaces.\nThere's not only the 'git' example but also things like 'aws', which\nis perhaps more similar to the case proposed here in that there are a\nbunch of subcommands that do quite different sorts of things. I think\na lot of developers are now quite familiar with the idea of a main\ncommand with a bunch of subcommands, and they expect to be able to\ntype 'git help' or 'aws help' or 'pg help' to get a list of commands,\nand then 'pg help createdb' for help with that. If you don't know what\npg commands exist today, how do you discover them? You're right that\nnot everyone is going this way but it seems to be pretty common\n(kubectl, yum, brew, npm, heroku, ...).\n\n2. It lowers the barrier to adding more commands. For example, as\nChris Browne says, we could have a 'pg completion' command to emit\ncompletion information for various shells. True, that would be more\nnecessary with this proposal. But I bet there's stuff that could be\ndone even today -- I think most modern shells have pretty powerful\ncompletion facilities. Someone could propose it, but what are the\nchances that a patch adding a pg_completion binary would be accepted?\nI feel like the argument that this is too narrow to justify the\nnamespace pollution is almost inevitable.\n\n3. It might help us achieve some better consistency between commands.\nRight now we have a lot of warts in the way things are named, like\npgbench vs. pg_ctl vs. createdb, and also pg_basebackup vs.\npg_test_fsync (why not pg_base_backup or pg_testfsync?). 
Standardizing\non something like this would probably help us be more consistent\nthere. Over time, we might be able to also clean other things up.\nMaybe we could get all of our client-side utilities to share a common\nconfig file, and have a 'pg config' utility to configure it. Maybe we\ncould have common options that are shared by all commands as 'git'\ndoes. These things aren't impossible without unifying the interface,\nbut unifying the interface does help to make it clearer why the other\nthings should also be unified.\n\nNow maybe it's just not worth it. I'm pretty sure that if we made this\nchange I would spend some time cursing this because my fingers would\ntype commands that don't work any more, and that could be annoying,\nand I suspect a lot of other people might feel similarly. I don't\nthink this is something we HAVE to do, but I am a little worried that\nwe're otherwise locked into a system that isn't particularly scalable\nto more commands and doesn't really have any sort of unifying design\neither.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 May 2020 09:25:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New 'pg' consolidated metacommand patch" } ]
[ { "msg_contents": "The recent discussion about EXPLAIN and the possible inclusion of\ndefault-specifying GUCs raised a behavior that I did not fully appreciate\nnor find to be self-evident.  Running EXPLAIN ANALYZE results in any\nside-effects of the explained and analyzed statement being permanently\nwritten to the current transaction - which in many cases is implicitly\nimmediately committed unless the user takes care otherwise.  This seems\nlike an implementation-expedient behavior but an unfriendly default.  It\ndoesn't seem unreasonable for a part-time dba to expect an explain outcome\nto always be non-persistent, even in ANALYZE mode since the execution of\nthat command could be done in a transaction (or savepoint...) and then\nimmediately undone before sending the explain output to the client.\n\nI'm against having a GUC that implicitly triggers an ANALYZE version of the\nEXPLAIN command.  I also think that it would be worth the effort to try and\nmake EXPLAIN ANALYZE default to using auto-rollback behavior.  Overriding\nthat default behavior could be done on a per command basis by specifying\nthe option \"ROLLBACK off\".  With the new GUCs users that find themselves in\nthe situation of needing a non-permanent outcome across multiple commands\ncould then get back to the less safe behavior by setting the corresponding\nGUC to off in their session.  I won't pretend to have any idea how often\nthat would be useful - especially as it would depend upon whether the\nauto-savepoint idea is workable or whether the client has to be outside of\na transaction in order for the rollback limited behavior to work.\n\nI cannot make this happen even if there is interest but it seems like a\ngood time to bring up the idea.\n\nDavid J.\n\n", "msg_date": "Wed, 27 May 2020 07:48:04 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "On Wed, May 27, 2020 at 10:48 AM David G. 
Johnston\n<david.g.johnston@gmail.com> wrote:\n> The recent discussion about EXPLAIN and the possible inclusion of default-specifying GUCs raised a behavior that I did not fully appreciate nor find to be self-evident. Running EXPLAIN ANALYZE results in any side-effects of the explained and analyzed statement being permanently written to the current transaction - which is in many cases is implicitly immediately committed unless the user takes care otherwise. This seems like an implementation expedient behavior but an unfriendly default. It doesn't seem unreasonable for a part-time dba to expect an explain outcome to always be non-persistent, even in ANALYZE mode since the execution of that command could be done in a transaction (or savepoint...) and then immediately undone before sending the explain output to the client.\n>\n> I'm against having a GUC that implicitly triggers an ANALYZE version of the EXPLAIN command. I also think that it would be worth the effort to try and make EXPLAIN ANALYZE default to using auto-rollback behavior. Overriding that default behavior could be done on a per command basis by specifying the option \"ROLLBACK off\". With the new GUCs users that find themselves in the situation of needing a non-permanent outcome across multiple commands could then get back to the less safe behavior by setting the corresponding GUC to off in their session. I won't pretend to have any idea how often that would be useful - especially as it would depend upon whether the auto-savepoint idea is workable or whether the client has to be outside of a transaction in order for the rollback limited behavior to work.\n\nI think the only way to make the effects of an EXPLAIN ANALYZE\nstatement be automatically rolled back would be to wrap the entire\noperation in a subtransaction. 
While we could certainly implement\nthat, it might have its own share of surprises; for example, it would\nconsume an XID, leading to faster wraparound vacuums if you do it\nfrequently.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 27 May 2020 12:03:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think the only way to make the effects of an EXPLAIN ANALYZE\n> statement be automatically rolled back would be to wrap the entire\n> operation in a subtransaction. While we could certainly implement\n> that, it might have its own share of surprises; for example, it would\n> consume an XID, leading to faster wraparound vacuums if you do it\n> frequently.\n\nRight, but it's just automating something that people now do by hand\n(ie manually wrap the EXPLAIN in BEGIN/ROLLBACK) when that's what they\nneed. I think the idea of having an option to do it for you isn't bad.\n\nI'm strongly against changing the very-longstanding default behavior of\nEXPLAIN ANALYZE, though; the villagers at your doorstep will not be\nbringing flowers. So this new option has to *not* default to on.\n\nAs far as the general topic of the thread goes, I like the idea of\ncontrolling EXPLAIN options on the client side way better than inventing\nstatement-behavior-altering GUCs. 
We learned our lesson about that a\ndecade or two back; only those who don't remember propose new ones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 May 2020 20:31:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "On Wed, May 27, 2020 at 5:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think the only way to make the effects of an EXPLAIN ANALYZE\n> > statement be automatically rolled back would be to wrap the entire\n> > operation in a subtransaction. While we could certainly implement\n> > that, it might have its own share of surprises; for example, it would\n> > consume an XID, leading to faster wraparound vacuums if you do it\n> > frequently.\n>\n> Right, but it's just automating something that people now do by hand\n> (ie manually wrap the EXPLAIN in BEGIN/ROLLBACK) when that's what they\n> need. I think the idea of having an option to do it for you isn't bad.\n>\n\nAgreed\n\n> I'm strongly against changing the very-longstanding default behavior of\n> EXPLAIN ANALYZE, though; the villagers at your doorstep will not be\n> bringing flowers. So this new option has to *not* default to on.\n>\n\nThe \"safety\" aspect of this is a motivator but at least having the option\nexist makes users both more aware and also simplifies usage, so ok.\n\n> As far as the general topic of the thread goes, I like the idea of\n> controlling EXPLAIN options on the client side way better than inventing\n> statement-behavior-altering GUCs. We learned our lesson about that a\n> decade or two back; only those who don't remember propose new ones.\n>\n\nI'm not seeing enough similarity with the reasons for, and specific\nbehaviors, of those previous GUCs to dismiss this proposal on that basis\nalone. 
These are \"crap we messed things up\" switches that alter a query\nbehind the scenes in ways that a user cannot do through SQL - they simply\nprovide for changing a default that we already allow the user to override\nper-query.  Its akin to \"DateStyle\" and its pure cosmetic influencing\nease-of-use option rather than some changing the fundamental structural\nmeaning of '\\n'\n\nIf that isn't enough then I would just drop the idea since I don't see\nenough benefit to introducing a wrapper layer in psql on top of explain.\n\nDavid J.", "msg_date": "Wed, 27 May 2020 18:33:16 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "On Wed, May 27, 2020 at 9:33 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I'm not seeing enough similarity with the reasons for, and specific behaviors, of those previous GUCs to dismiss this proposal on that basis alone.  These are \"crap we messed things up\" switches that alter a query behind the scenes in ways that a user cannot do through SQL - they simply provide for changing a default that we already allow the user to override per-query.  Its akin to \"DateStyle\" and its pure cosmetic influencing ease-of-use option rather than some changing the fundamental structural meaning of '\\n'\n\nWell, I think it's usually worse to have two possible behaviors rather\nthan one. Like, a lot of people have probably made the mistake of\nrunning EXPLAIN ANALYZE without realizing that it's actually running\nthe query, and then been surprised or dismayed afterwards. But each\nperson only has to learn that once. 
If we had a GUC controlling this\nbehavior, then you'd have to always be aware of the setting on any\nparticular system on which you might be thinking of running the\ncommand. Likewise, if you write an application or tool of some sort\nthat uses EXPLAIN ANALYZE, it has to be aware of the GUC value, or it\nwon't work as expected on some systems.\n\nThis is the general problem with behavior-changing GUCs. I kind of\nhave mixed feelings about this. On the one hand, it sucks for\noperators of individual systems not to be able to customize things so\nas to produce the behavior they want. On the other hand, each one you\nadd makes it harder to write code that will work the same way on every\nPostgreSQL system. I don't think the problem would be as bad in this\nparticular case as in some others that have been proposed, mostly\nbecause EXPLAIN ANALYZE isn't widely-used by applications, so maybe\nit's worth considering. But on the whole, I'm inclined to agree with\nTom that it's better not to create too many ways for fundamental\nbehavior of the system to vary from one installation to another.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 28 May 2020 09:42:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "On Thu, May 28, 2020 at 6:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, May 27, 2020 at 9:33 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > I'm not seeing enough similarity with the reasons for, and specific\n> behaviors, of those previous GUCs to dismiss this proposal on that basis\n> alone. These are \"crap we messed things up\" switches that alter a query\n> behind the scenes in ways that a user cannot do through SQL - they simply\n> provide for changing a default that we already allow the user to override\n> per-query. 
Its akin to \"DateStyle\" and its pure cosmetic influencing\n> ease-of-use option rather than some changing the fundamental structural\n> meaning of '\\n'\n>\n> Well, I think it's usually worse to have two possible behaviors rather\n> than one. Like, a lot of people have probably made the mistake of\n> running EXPLAIN ANALYZE without realizing that it's actually running\n> the query, and then been surprised or dismayed afterwards.\n\n\nThis really belongs on the other thread (though I basically said the same\nthing there two days ago):\n\nThe ANALYZE option should not be part of the GUC setup.  None of the other\nEXPLAIN default changing options have the same issues with being on by\ndefault - which is basically what we are talking about here: being able to\nhave an option be on without specifying that option in the command itself.\nTIMING already does this without difficulty and the others are no different.\n\nDavid J.", "msg_date": "Thu, 28 May 2020 07:19:11 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The ANALYZE option should not be part of the GUC setup.\n\nYeah.  While I'm generally not in favor of putting GUCs into the mix\nhere, the only one that seriously scares me is a GUC that would affect\nwhether the EXPLAIN'd query executes or not.  A GUC that causes buffer\ncounts to be reported/not-reported is not going to result in data\ndestruction when someone forgets that it's on.\n\n(BTW, adding an option for auto-rollback wouldn't change my opinion\nabout that.  Not all side-effects of a query can be rolled back.  Thus,\nif there is an auto-rollback option, it mustn't be GUC-adjustable\neither.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 10:52:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" }, { "msg_contents": "On Thu, May 28, 2020 at 7:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> (BTW, adding an option for auto-rollback wouldn't change my opinion\n> about that.  Not all side-effects of a query can be rolled back.  
Thus,\n> if there is an auto-rollback option, it mustn't be GUC-adjustable\n> either.)\n>\n\nYeah, I've worked myself around to that as well, this thread's proposal\nwould be to just make setting up rollback more obvious and easier for a\nuser of explain analyze - whose value at this point is wholly independent\nof the GUC discussion.\n\nDavid J.", "msg_date": "Thu, 28 May 2020 07:56:22 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Explain Analyze (Rollback off) Suggestion" } ]
[ { "msg_contents": "Hello! My name is Grigory Kryachko, I decided to join the efforts with\nAndrey Borodin in his working on amcheck.\n\nHere is the patch which I (with Andrey as my advisor) built on the top of\nthe last patch from this thread: https://commitfest.postgresql.org/25/1800/\n.\nIt adds an ability to verify validity of GIN index. It is not polished\nyet, but it works and we wanted to show it to you so you can give us some\nfeedback, and also let you know about this work if you have any plans of\nwriting something like that yourselves, so that you do not redo what is\nalready done.\n\nIn the mentioned above thread there was an issue with right type of lock,\nwe have not addressed it yet. Right now I am primarily interested in\nfeedback about GIN part.", "msg_date": "Wed, 27 May 2020 22:00:06 +0500", "msg_from": "Grigory Kryachko <gskryachko@gmail.com>", "msg_from_op": true, "msg_subject": "amcheck verification for GiST and GIN" }, { "msg_contents": "On Wed, May 27, 2020 at 10:11 AM Grigory Kryachko <gskryachko@gmail.com> wrote:\n> Here is the patch which I (with Andrey as my advisor) built on the top of the last patch from this thread: https://commitfest.postgresql.org/25/1800/ .\n> It adds an ability to verify validity of GIN index. 
It is not polished yet, but it works and we wanted to show it to you so you can give us some feedback, and also let you know about this work if you have any plans of writing something like that yourselves, so that you do not redo what is already done.\n\nCan you rebase this patch, please?\n\nAlso suggest breaking out the series into distinct patch files using\n\"git format-patch master\".\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 6 Aug 2020 14:33:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: amcheck verification for GiST and GIN" }, { "msg_contents": "On 07/08/2020 00:33, Peter Geoghegan wrote:\n> On Wed, May 27, 2020 at 10:11 AM Grigory Kryachko <gskryachko@gmail.com> wrote:\n>> Here is the patch which I (with Andrey as my advisor) built on the top of the last patch from this thread: https://commitfest.postgresql.org/25/1800/ .\n>> It adds an ability to verify validity of GIN index. It is not polished yet, but it works and we wanted to show it to you so you can give us some feedback, and also let you know about this work if you have any plans of writing something like that yourselves, so that you do not redo what is already done.\n> \n> Can you rebase this patch, please?\n> \n> Also suggest breaking out the series into distinct patch files using\n> \"git format-patch master\".\n\nI rebased the GIN parts of this patch, see attached. I also ran pgindent \nand made some other tiny cosmetic fixes, but I didn't review the patch, \nonly rebased it in the state it was.\n\nI was hoping that this would be useful to track down the bug we're \ndiscussing here: \nhttps://www.postgresql.org/message-id/CAJYBUS8aBQQL22oHsAwjHdwYfdB_NMzt7-sZxhxiOdEdn7cOkw%40mail.gmail.com. \nBut now that I look what checks this performs, I doubt this will catch \nthe kind of corruption that's happened there. I suspect it's more subtle \nthan an inconsistencies between parent and child pages, because only a \nfew rows are affected. 
But doesn't hurt to try.\n\n- Heikki", "msg_date": "Thu, 15 Jul 2021 10:03:47 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: amcheck verification for GiST and GIN" }, { "msg_contents": "Hello,\n\nFirst of all, thank you all -- Andrey, Peter, Heikki and others -- for this\nwork, GIN support in amcheck is *really* needed, especially for OS upgrades\nsuch as from Ubuntu 16.04 (which is EOL now) to 18.04 or 20.04\n\nI was trying to check a bunch of GINs on some production after switching\nfrom Ubuntu 16.04 to 18.04 and got many errors. So decided to check for\n16.04 first (that is still used on prod for that DB), without any OS/glibc\nchanges.\n\nOn 16.04, I still saw errors and it was not really expected because this\nshould mean that production is corrupted too. So, REINDEX should fix it.\nBut it didn't -- see output below. I cannot give data and thinking how to\ncreate a synthetic demo of this. Any suggestions?\n\nAnd is this a sign that the tool is wrong rather that we have a real\ncorruption cases? 
(I assume if we did, we would see no errors after\nREINDEXing -- of course, if GIN itself doesn't have bugs).\n\nEnv: Ubuntu 16.04 (so, glibc 2.27), Postgres 12.7, patch from Heikki\nslightly adjusted to work with PG12 (\nhttps://gitlab.com/postgres/postgres/-/merge_requests/5) snippet used to\nrun amcheck:\nhttps://gitlab.com/-/snippets/2001962 (see file #3)\n\nBefore reindex:\n\n\nINFO: [2021-07-29 17:44:42.525+00] Processing 4/29: index:\nindex_XXX_trigram (index relpages: 117935; heap tuples: ~379793)...\n\nERROR: index \"index_XXX_trigram\" has wrong tuple order, block 65754, offset\n232\n\n\ntest=# reindex index index_XXX_trigram;\n\nREINDEX\n\n\n\nAfter REINDEX:\n\n\nINFO: [2021-07-29 18:01:23.339+00] Processing 4/29: index:\nindex_XXX_trigram (index relpages: 70100; heap tuples: ~379793)...\n\nERROR: index \"index_XXX_trigram\" has wrong tuple order, block 70048, offset\n253\n\n\n\nOn Thu, Jul 15, 2021 at 00:03 Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 07/08/2020 00:33, Peter Geoghegan wrote:\n> > On Wed, May 27, 2020 at 10:11 AM Grigory Kryachko <gskryachko@gmail.com>\n> wrote:\n> >> Here is the patch which I (with Andrey as my advisor) built on the top\n> of the last patch from this thread:\n> https://commitfest.postgresql.org/25/1800/ .\n> >> It adds an ability to verify validity of GIN index. It is not polished\n> yet, but it works and we wanted to show it to you so you can give us some\n> feedback, and also let you know about this work if you have any plans of\n> writing something like that yourselves, so that you do not redo what is\n> already done.\n> >\n> > Can you rebase this patch, please?\n> >\n> > Also suggest breaking out the series into distinct patch files using\n> > \"git format-patch master\".\n>\n> I rebased the GIN parts of this patch, see attached. 
I also ran pgindent\n> and made some other tiny cosmetic fixes, but I didn't review the patch,\n> only rebased it in the state it was.\n>\n> I was hoping that this would be useful to track down the bug we're\n> discussing here:\n>\n> https://www.postgresql.org/message-id/CAJYBUS8aBQQL22oHsAwjHdwYfdB_NMzt7-sZxhxiOdEdn7cOkw%40mail.gmail.com.\n>\n> But now that I look what checks this performs, I doubt this will catch\n> the kind of corruption that's happened there. I suspect it's more subtle\n> than an inconsistencies between parent and child pages, because only a\n> few rows are affected. But doesn't hurt to try.\n>\n> - Heikki\n>\n", "msg_date": "Thu, 29 Jul 2021 11:34:47 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: amcheck verification for GiST and GIN" }, { "msg_contents": "On 29/07/2021 21:34, Nikolay Samokhvalov wrote:\n> I was trying to check a bunch of GINs on some production after switching \n> from Ubuntu 16.04 to 18.04 and got many errors. So decided to check for \n> 16.04 first (that is still used on prod for that DB), without any \n> OS/glibc changes.\n> \n> On 16.04, I still saw errors and it was not really expected because this \n> should mean that production is corrupted too. So, REINDEX should fix it. \n> But it didn't -- see output below. I cannot give data and thinking how \n> to create a synthetic demo of this. Any suggestions?\n> \n> And is this a sign that the tool is wrong rather that we have a real \n> corruption cases? 
(I assume if we did, we would see no errors after \n> REINDEXing -- of course, if GIN itself doesn't have bugs).\n> \n> Env: Ubuntu 16.04 (so, glibc 2.27), Postgres 12.7, patch from Heikki \n> slightly adjusted to work with PG12 (\n> https://gitlab.com/postgres/postgres/-/merge_requests/5 \n> <https://gitlab.com/postgres/postgres/-/merge_requests/5>) snippet used \n> to run amcheck:\n> https://gitlab.com/-/snippets/2001962 \n> <https://gitlab.com/-/snippets/2001962> (see file #3)\n\nAlmost certainly the tool is wrong. We went back and forth a few times \nwith Pawel, fixing various bugs in the amcheck patch at this thread: \nhttps://www.postgresql.org/message-id/9fdbb584-1e10-6a55-ecc2-9ba8b5dca1cf%40iki.fi. \nCan you try again with the latest patch version from that thread, \nplease? That's v5-0001-Amcheck-for-GIN-13stable.patch.\n\n- Heikki\n\n\n", "msg_date": "Fri, 30 Jul 2021 12:35:53 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: amcheck verification for GiST and GIN" }, { "msg_contents": "Thank you, v5 didn't find any issues at all. One thing: for my 29 indexes,\nthe tool generated output 3.5 GiB. I guess many INFO messages should be\ndowngraded to something like DEBUG1?\n\nOn Fri, Jul 30, 2021 at 2:35 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 29/07/2021 21:34, Nikolay Samokhvalov wrote:\n> > I was trying to check a bunch of GINs on some production after switching\n> > from Ubuntu 16.04 to 18.04 and got many errors. So decided to check for\n> > 16.04 first (that is still used on prod for that DB), without any\n> > OS/glibc changes.\n> >\n> > On 16.04, I still saw errors and it was not really expected because this\n> > should mean that production is corrupted too. So, REINDEX should fix it.\n> > But it didn't -- see output below. I cannot give data and thinking how\n> > to create a synthetic demo of this. 
Any suggestions?\n> >\n> > And is this a sign that the tool is wrong rather that we have a real\n> > corruption cases? (I assume if we did, we would see no errors after\n> > REINDEXing -- of course, if GIN itself doesn't have bugs).\n> >\n> > Env: Ubuntu 16.04 (so, glibc 2.27), Postgres 12.7, patch from Heikki\n> > slightly adjusted to work with PG12 (\n> > https://gitlab.com/postgres/postgres/-/merge_requests/5\n> > <https://gitlab.com/postgres/postgres/-/merge_requests/5>) snippet used\n> > to run amcheck:\n> > https://gitlab.com/-/snippets/2001962\n> > <https://gitlab.com/-/snippets/2001962> (see file #3)\n>\n> Almost certainly the tool is wrong. We went back and forth a few times\n> with Pawel, fixing various bugs in the amcheck patch at this thread:\n>\n> https://www.postgresql.org/message-id/9fdbb584-1e10-6a55-ecc2-9ba8b5dca1cf%40iki.fi.\n>\n> Can you try again with the latest patch version from that thread,\n> please? That's v5-0001-Amcheck-for-GIN-13stable.patch.\n>\n> - Heikki\n>\n", "msg_date": "Sun, 1 Aug 2021 17:59:30 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: amcheck verification for GiST and GIN" } ]
[ { "msg_contents": "\nHello devs,\n\n commit 2c24051bacd2d0eb7141fc4adb870281aec4e714\n Author: Craig Topper <craig.topper@gmail.com>\n Date: Fri Apr 24 22:12:21 2020 -0700\n\n [CallSite removal] Rename CallSite.h to AbstractCallSite.h. NFC\n\n The CallSite and ImmutableCallSite were removed in a previous\n commit. So rename the file to match the remaining class and\n the name of the cpp that implements it.\n\nHence :\n\n .. llvmjit_inline.cpp\n llvmjit_inline.cpp:59:10: fatal error: llvm/IR/CallSite.h: No such file or\n directory\n 59 | #include <llvm/IR/CallSite.h>\n | ^~~~~~~~~~~~~~~~~~~~\n\nWhich is why animal seawasp is in now the red.\n\nIt looks unlikely that it will vanish, so pg must probably start aiming at \nthe moving llvm target.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 27 May 2020 19:40:27 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "future pg+llvm compilation is broken" }, { "msg_contents": "On Wed, May 27, 2020 at 07:40:27PM +0200, Fabien COELHO wrote:\n> llvmjit_inline.cpp:59:10: fatal error: llvm/IR/CallSite.h: No such file or\n\nSeems to be the same as here:\nhttps://www.postgresql.org/message-id/flat/CAGf%2BfX4sDP5%2B43HBz_3fjchawO6boqwgbYVfuFc1D4gbA6qQxw%40mail.gmail.com#540c3746c79c0f13360b35c9c369a887\n\n> Which is why animal seawasp is in now the red.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 27 May 2020 15:30:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: future pg+llvm compilation is broken" }, { "msg_contents": "\nHello Justin,\n\n>> llvmjit_inline.cpp:59:10: fatal error: llvm/IR/CallSite.h: No such file or directory\n>\n> Seems to be the same as here:\n> https://www.postgresql.org/message-id/flat/CAGf%2BfX4sDP5%2B43HBz_3fjchawO6boqwgbYVfuFc1D4gbA6qQxw%40mail.gmail.com#540c3746c79c0f13360b35c9c369a887\n\nDefinitely. 
I did not notice this thread, should have.\n\n>> Which is why animal seawasp is in now the red.\n\nWhich I run, hence I noticed it. It should have turned red in April, but \nthe host had some issues hence the compiler was not updated, and I fixed \nit only a few days ago.\n\nSorry for the noise!\n\nOn the fixing philosophy, I'm for sooner rather than later (which does not \nmean immediately), and that it has to be backpatched to supported \nbranches.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 28 May 2020 07:06:14 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: future pg+llvm compilation is broken" } ]
[ { "msg_contents": "Hi,\n\nPgjdbc test suite identified a SIGSEGV in the recent HEAD builds of\nPostgreSQL, Ubuntu 14.04.5 LTS\n\nHere's a call stack:\nhttps://travis-ci.org/github/pgjdbc/pgjdbc/jobs/691794110#L7484\nThe crash is consistent, and it reproduces 100% of the cases so far.\n\nThe CI history shows that HEAD was good at 11 May 13:27 UTC, and it became\nbad by 19 May 14:00 UTC,\nso the regression was introduced somewhere in-between.\n\nDoes that ring any bells?\n\nIn case you wonder:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0  XLogSendPhysical () at walsender.c:2762\n2762\t\tif (!WALRead(xlogreader,\n(gdb) #0  XLogSendPhysical () at walsender.c:2762\n        SendRqstPtr = 133473640\n        startptr = 133473240\n        endptr = 133473640\n        nbytes = 400\n        segno = 1\n        errinfo = {wre_errno = 988942240, wre_off = 2, wre_req = -1,\n           wre_read = -1, wre_seg = {ws_file = 4714224,\n             ws_segno = 140729887364688, ws_tli = 0}}\n        __func__ = \"XLogSendPhysical\"\n#1  0x000000000087fa8a in WalSndLoop (send_data=0x88009d <XLogSendPhysical>)\n    at walsender.c:2300\n        __func__ = \"WalSndLoop\"\n#2  0x000000000087d65a in StartReplication (cmd=0x299bda8) at\nwalsender.c:750\n        buf = {data = 0x0, len = 3, maxlen = 1024, cursor = 87}\n        FlushPtr = 133473640\n        __func__ = \"StartReplication\"\n#3  0x000000000087eddc in exec_replication_command (\n    cmd_string=0x2916e68 \"START_REPLICATION 0/7F4A3D8\") at walsender.c:1643\n        cmd = 0x299bda8\n        parse_rc = 0\n        cmd_node = 0x299bda8\n        cmd_context = 0x299bc60\n        old_context = 0x2916d50\n        qc = {commandTag = 988942640, nprocessed = 140729887363520}\n        __func__ = \"exec_replication_command\"\n\n\nVladimir\n\n", "msg_date": "Thu, 28 May 2020 09:07:04 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": true, "msg_subject": "SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical () at\n walsender.c:2762" }, { "msg_contents": "On Thu, May 28, 2020 at 09:07:04AM +0300, Vladimir Sitnikov wrote:\n> The CI history shows that HEAD was good at 11 May 13:27 UTC, and it became\n> bad by 19 May 14:00 UTC,\n> so the regression was introduced somewhere in-between.\n> \n> Does that ring any bells?\n\nIt does, thanks! 
This would map with 1d374302 or 850196b6 that\nreworked this area of the code, so it seems like we are not quite done\nwith this work yet. Do you still see the problem as of 55ca50d\n(today's latest HEAD)?\n\nAlso, just wondering.. If I use more or less the same commands as\nyour travis job I should be able to reproduce the problem with a fresh\nJDBC repository, right? Or do you a sub-portion of your regression\ntests to run that easily?\n--\nMichael", "msg_date": "Thu, 28 May 2020 16:22:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Thu, 28 May 2020 16:22:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, May 28, 2020 at 09:07:04AM +0300, Vladimir Sitnikov wrote:\n> > The CI history shows that HEAD was good at 11 May 13:27 UTC, and it became\n> > bad by 19 May 14:00 UTC,\n> > so the regression was introduced somewhere in-between.\n> > \n> > Does that ring any bells?\n> \n> It does, thanks! This would map with 1d374302 or 850196b6 that\n> reworked this area of the code, so it seems like we are not quite done\n> with this work yet. Do you still see the problem as of 55ca50d\n> (today's latest HEAD)?\n\nI think that's not the case. I think I cause this crash with the\nHEAD. I'll post a fix soon.\n\n> Also, just wondering.. If I use more or less the same commands as\n> your travis job I should be able to reproduce the problem with a fresh\n> JDBC repository, right? 
Or do you a sub-portion of your regression\n> tests to run that easily?\n\nPgjdbc seems initiating physical replication on a logical replication\nsession.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 28 May 2020 16:32:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "On Thu, May 28, 2020 at 04:32:22PM +0900, Kyotaro Horiguchi wrote:\n> I think that's not the case. I think I cause this crash with the\n> HEAD. I'll post a fix soon.\n> \n> Pgjdbc seems initiating physical replication on a logical replication\n> session.\n\nLet me see... Indeed:\n$ psql \"replication=database dbname=postgres\"\n=# START_REPLICATION SLOT test_slot PHYSICAL 0/0;\n[one <boom> later]\n\n(gdb) bt\n#0 XLogSendPhysical () at walsender.c:2762\n#1 0x000055d5f7803318 in WalSndLoop (send_data=0x55d5f78039c7\n<XLogSendPhysical>) at walsender.c:2300\n#2 0x000055d5f7800d70 in StartReplication (cmd=0x55d5f919bc60) at\nwalsender.c:750\n#3 0x000055d5f78025ad in exec_replication_command\n(cmd_string=0x55d5f9119a80 \"START_REPLICATION SLOT test_slot PHYSICAL\n0/0;\") at walsender.c:1643\n#4 0x000055d5f786a7ea in PostgresMain (argc=1, argv=0x55d5f91472c8,\ndbname=0x55d5f9147210 \"ioltas\", username=0x55d5f91471f0 \"ioltas\") at\npostgres.c:4311\n#5 0x000055d5f77b4e9c in BackendRun (port=0x55d5f913dcd0) at\npostmaster.c:4523\n#6 0x000055d5f77b4606 in BackendStartup (port=0x55d5f913dcd0) at\npostmaster.c:4215\n#7 0x000055d5f77b08ad in ServerLoop () at postmaster.c:1727\n#8 0x000055d5f77b00fc in PostmasterMain (argc=3, argv=0x55d5f9113260)\nat postmaster.c:1400\n#9 0x000055d5f76b3736 in main (argc=3, argv=0x55d5f9113260) at\nmain.c:210\n\nNo need for the JDBC test suite then.\n--\nMichael", "msg_date": "Thu, 28 May 2020 16:58:22 +0900", "msg_from": "Michael Paquier 
<michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Thu, 28 May 2020 09:07:04 +0300, Vladimir Sitnikov <sitnikov.vladimir@gmail.com> wrote in \n> Pgjdbc test suite identified a SIGSEGV in the recent HEAD builds of\n> PostgreSQL, Ubuntu 14.04.5 LTS\n> \n> Here's a call stack:\n> https://travis-ci.org/github/pgjdbc/pgjdbc/jobs/691794110#L7484\n> The crash is consistent, and it reproduces 100% of the cases so far.\n> \n> The CI history shows that HEAD was good at 11 May 13:27 UTC, and it became\n> bad by 19 May 14:00 UTC,\n> so the regression was introduced somewhere in-between.\n> \n> Does that ring any bells?\n\nThanks for the report. It is surely a bug since the server crashes,\non the other hand Pgjdbc seems doing bad, too.\n\nIt seems to me that that crash means Pgjdbc is initiating a logical\nreplication connection to start physical replication.\n\n> In case you wonder:\n> \n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 XLogSendPhysical () at walsender.c:2762\n> 2762 if (!WALRead(xlogreader,\n> (gdb) #0 XLogSendPhysical () at walsender.c:2762\n> SendRqstPtr = 133473640\n> startptr = 133473240\n> endptr = 133473640\n> nbytes = 400\n> segno = 1\n> errinfo = {wre_errno = 988942240, wre_off = 2, wre_req = -1,\n> wre_read = -1, wre_seg = {ws_file = 4714224,\n> ws_segno = 140729887364688, ws_tli = 0}}\n> __func__ = \"XLogSendPhysical\"\n\nI see the probably the same symptom by the following steps with the\ncurrent HEAD.\n\npsql 'host=/tmp replication=database'\n=# START_REPLICATION 0/1;\n<serer crashes>\n\nPhysical replication is not assumed to be started on a logical\nreplication connection. The attached would fix that. The patch adds\ntwo tests. 
One for this case and another for the reverse.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 28 May 2020 17:04:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "Kyotaro>It seems to me that that crash means Pgjdbc is initiating a logical\nKyotaro>replication connection to start physical replication.\n\nWell, it used to work previously, so it might be a breaking change from the\nclient/application point of view.\n\nVladimir\n\n", "msg_date": "Thu, 28 May 2020 11:57:23 +0300", "msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>", "msg_from_op": true, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hello, Vladimir.\n\nAt Thu, 28 May 2020 11:57:23 +0300, Vladimir Sitnikov <sitnikov.vladimir@gmail.com> wrote in \n> Kyotaro>It seems to me that that crash means Pgjdbc is initiating a logical\n> Kyotaro>replication connection to start physical replication.\n> \n> Well, it used to work previously, so it might be a breaking change from the\n> client/application point of view.\n\nMmm. It is not the proper way to use physical replication and it's\ntotally accidental that that worked (or even it might be a bug). The\ndocumentation is saying as the follows, as more-or-less the same for\nall versions since 9.4.\n\nhttps://www.postgresql.org/docs/13/protocol-replication.html\n\n> To initiate streaming replication, the frontend sends the replication\n> parameter in the startup message. 
A Boolean value of true (or on, yes,\n> 1) tells the backend to go into physical replication walsender mode,\n> wherein a small set of replication commands, shown below, can be\n> issued instead of SQL statements.\n> \n> Passing database as the value for the replication parameter instructs\n> the backend to go into logical replication walsender mode, connecting\n> to the database specified in the dbname parameter. In logical\n> replication walsender mode, the replication commands shown below as\n> well as normal SQL commands can be issued.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 28 May 2020 18:11:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "On Thu, 28 May 2020 at 05:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Hello, Vladimir.\n>\n> At Thu, 28 May 2020 11:57:23 +0300, Vladimir Sitnikov <\n> sitnikov.vladimir@gmail.com> wrote in\n> > Kyotaro>It seems to me that that crash means Pgjdbc is initiating a\n> logical\n> > Kyotaro>replication connection to start physical replication.\n> >\n> > Well, it used to work previously, so it might be a breaking change from\n> the\n> > client/application point of view.\n>\n> Mmm. It is not the proper way to use physical replication and it's\n> totally accidental that that worked (or even it might be a bug). The\n> documentation is saying as the follows, as more-or-less the same for\n> all versions since 9.4.\n>\n> https://www.postgresql.org/docs/13/protocol-replication.html\n>\n> > To initiate streaming replication, the frontend sends the replication\n> > parameter in the startup message. 
A Boolean value of true (or on, yes,\n> > 1) tells the backend to go into physical replication walsender mode,\n> > wherein a small set of replication commands, shown below, can be\n> > issued instead of SQL statements.\n> >\n> > Passing database as the value for the replication parameter instructs\n> > the backend to go into logical replication walsender mode, connecting\n> > to the database specified in the dbname parameter. In logical\n> > replication walsender mode, the replication commands shown below as\n> > well as normal SQL commands can be issued.\n>\n> regards.\n>\n> While the documentation does indeed say that there is quite a bit of\nadditional confusion added by:\n\nand\nSTART_REPLICATION [ SLOT *slot_name* ] [ PHYSICAL ] *XXX/XXX* [ TIMELINE\n*tli* ]\n\nIf we already have a physical replication slot according to the startup\nmessage why do we need to specify it in the START REPLICATION message ?\n\nDave\n\n", "msg_date": "Thu, 28 May 2020 09:08:19 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Thu, 28 May 2020 09:08:19 -0400, Dave Cramer <davecramer@postgres.rocks> wrote in \n> On Thu, 28 May 2020 at 05:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > Mmm. It is not the proper way to use physical replication and it's\n> > totally accidental that that worked (or even it might be a bug). 
The\n> > documentation is saying as the follows, as more-or-less the same for\n> > all versions since 9.4.\n> >\n> > https://www.postgresql.org/docs/13/protocol-replication.html\n...\n> >\n> While the documentation does indeed say that there is quite a bit of\n> additional confusion added by:\n> \n> and\n> START_REPLICATION [ SLOT *slot_name* ] [ PHYSICAL ] *XXX/XXX* [ TIMELINE\n> *tli* ]\n> \n> If we already have a physical replication slot according to the startup\n> message why do we need to specify it in the START REPLICATION message ?\n\nI don't know, but physical replication has worked that way since\nbefore the replication slots was introduced so we haven't needed to do\nso. Physical replication slots are not assumed as more than\nmemorandum for the oldest required WAL segment (and oldest xmin).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 29 May 2020 12:10:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "On Thu, May 28, 2020 at 06:11:39PM +0900, Kyotaro Horiguchi wrote:\n> Mmm. It is not the proper way to use physical replication and it's\n> totally accidental that that worked (or even it might be a bug). The\n> documentation is saying as the follows, as more-or-less the same for\n> all versions since 9.4.\n> \n> https://www.postgresql.org/docs/13/protocol-replication.html\n\n+ if (am_db_walsender)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot initiate physical\nreplication on a logical replication connection\")));\n\nI don't agree with this change. The only restriction that we have in\nplace now in walsender.c regarding MyDatabaseId not being set is to\nprevent the execution of SQL commands. 
Note that it is possible to\nstart physical replication even if MyDatabaseId is set in a\nreplication connection, so you could break cases that have been valid\nuntil now.\n\nI think that we actually should be much more careful with the\ninitialization of the WAL reader used in the context of a WAL sender\nbefore calling WALRead() and attempting to read a new WAL page.\n--\nMichael", "msg_date": "Fri, 29 May 2020 16:21:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Fri, 29 May 2020 16:21:38 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, May 28, 2020 at 06:11:39PM +0900, Kyotaro Horiguchi wrote:\n> > Mmm. It is not the proper way to use physical replication and it's\n> > totally accidental that that worked (or even it might be a bug). The\n> > documentation is saying as the follows, as more-or-less the same for\n> > all versions since 9.4.\n> > \n> > https://www.postgresql.org/docs/13/protocol-replication.html\n> \n> + if (am_db_walsender)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot initiate physical\n> replication on a logical replication connection\")));\n> \n> I don't agree with this change. The only restriction that we have in\n> place now in walsender.c regarding MyDatabaseId not being set is to\n> prevent the execution of SQL commands. Note that it is possible to\n> start physical replication even if MyDatabaseId is set in a\n> replication connection, so you could break cases that have been valid\n> until now.\n\nIt donesn't check MyDatabase, but whether the connection parameter\n\"repliation\" is \"true\" or \"database\". 
The documentation is telling\nthat \"replication\" should be \"true\" for a connection that is to be\nused for physical replication, and \"replication\" should literally be\n\"database\" for a connection that is for logical replication. We need\nto revise the documentation if we are going to allow physical\nreplication on a conection with \"replication = database\".\n\n> I think that we actually should be much more careful with the\n> initialization of the WAL reader used in the context of a WAL sender\n> before calling WALRead() and attempting to read a new WAL page.\n\nI agree that the initialization can be improved, but the current code\nis no problem if we don't allow to run both logical and physical\nreplication on a single session.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 29 May 2020 17:56:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "On Fri, 29 May 2020 at 17:57, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 29 May 2020 16:21:38 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > On Thu, May 28, 2020 at 06:11:39PM +0900, Kyotaro Horiguchi wrote:\n> > > Mmm. It is not the proper way to use physical replication and it's\n> > > totally accidental that that worked (or even it might be a bug). The\n> > > documentation is saying as the follows, as more-or-less the same for\n> > > all versions since 9.4.\n> > >\n> > > https://www.postgresql.org/docs/13/protocol-replication.html\n> >\n> > + if (am_db_walsender)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"cannot initiate physical\n> > replication on a logical replication connection\")));\n> >\n> > I don't agree with this change. 
The only restriction that we have in\n> > place now in walsender.c regarding MyDatabaseId not being set is to\n> > prevent the execution of SQL commands. Note that it is possible to\n> > start physical replication even if MyDatabaseId is set in a\n> > replication connection, so you could break cases that have been valid\n> > until now.\n>\n> It donesn't check MyDatabase, but whether the connection parameter\n> \"repliation\" is \"true\" or \"database\". The documentation is telling\n> that \"replication\" should be \"true\" for a connection that is to be\n> used for physical replication, and \"replication\" should literally be\n> \"database\" for a connection that is for logical replication. We need\n> to revise the documentation if we are going to allow physical\n> replication on a conection with \"replication = database\".\n>\n\nYes. Conversely, if we start logical replication in a physical\nreplication connection (i.g. replication=true) we got an error before\nstaring replication:\n\nERROR: logical decoding requires a database connection\n\nI think we can prevent that SEGV in a similar way.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 18:09:06 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Fri, May 29, 2020 at 06:09:06PM +0900, Masahiko Sawada wrote:\n> Yes. Conversely, if we start logical replication in a physical\n> replication connection (i.g. 
replication=true) we got an error before\n> staring replication:\n> \n> ERROR: logical decoding requires a database connection\n> \n> I think we can prevent that SEGV in a similar way.\n\nStill unconvinced as this restriction stands for logical decoding\nrequiring a database connection but it is not necessarily true now as\nphysical replication has less restrictions than a logical one.\n\nLooking at the code, I think that there is some confusion with the\nfake WAL reader used as base reference in InitWalSender() where we\nassume that it could only be used in the context of a non-database WAL\nsender. However, this initialization happens when the WAL sender\nconnection is initialized, and what I think this misses is that we \nshould try to initialize a WAL reader when actually going through a\nSTART_REPLICATION command.\n\nI can note as well that StartLogicalReplication() moves in this sense\nby setting xlogreader to be the one from logical_decoding_ctx once the\ndecoding context has been created.\n\nThis results in the attached. The extra test from upthread to check\nthat logical decoding is not allowed in a non-database WAL sender is a\ngood idea, so I have kept it.\n--\nMichael", "msg_date": "Tue, 2 Jun 2020 13:24:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "\n\nOn 2020/06/02 13:24, Michael Paquier wrote:\n> On Fri, May 29, 2020 at 06:09:06PM +0900, Masahiko Sawada wrote:\n>> Yes. Conversely, if we start logical replication in a physical\n>> replication connection (i.g. 
replication=true) we got an error before\n>> staring replication:\n>>\n>> ERROR: logical decoding requires a database connection\n>>\n>> I think we can prevent that SEGV in a similar way.\n> \n> Still unconvinced as this restriction stands for logical decoding\n> requiring a database connection but it is not necessarily true now as\n> physical replication has less restrictions than a logical one.\n\nCould you tell me what the benefit for supporting physical replication on\nlogical rep connection is? If it's only for \"undocumented\"\nbackward-compatibility, IMO it's better to reject such \"tricky\" set up.\nBut if there are some use cases for that, I'm ok to support that.\n\n> Looking at the code, I think that there is some confusion with the\n> fake WAL reader used as base reference in InitWalSender() where we\n> assume that it could only be used in the context of a non-database WAL\n> sender. However, this initialization happens when the WAL sender\n> connection is initialized, and what I think this misses is that we\n> should try to initialize a WAL reader when actually going through a\n> START_REPLICATION command.\n> \n> I can note as well that StartLogicalReplication() moves in this sense\n> by setting xlogreader to be the one from logical_decoding_ctx once the\n> decoding context has been created.\n> \n> This results in the attached. The extra test from upthread to check\n> that logical decoding is not allowed in a non-database WAL sender is a\n> good idea, so I have kept it.\n\nYes. 
Also we should add the test to check if physical replication can work\nfine even on logical rep connection?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Jun 2020 14:23:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Tue, 2 Jun 2020 13:24:56 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, May 29, 2020 at 06:09:06PM +0900, Masahiko Sawada wrote:\n> > Yes. Conversely, if we start logical replication in a physical\n> > replication connection (i.g. replication=true) we got an error before\n> > staring replication:\n> > \n> > ERROR: logical decoding requires a database connection\n> > \n> > I think we can prevent that SEGV in a similar way.\n> \n> Still unconvinced as this restriction stands for logical decoding\n> requiring a database connection but it is not necessarily true now as\n> physical replication has less restrictions than a logical one.\n\nIf we deliberately allow physical replication on a\ndatabase-replication connection, we should revise the documentation\nthat way. On the other hand physical replication has wider access to a\ndatabase cluster than logical replication. Thus allowing to start\nphysical replication on a logical replication connection could\nintroduce a problem related to privileges. 
So I think it might be\nbetter that physical and logical replication have separate pg_hba\nlines.\n\nOnce we explicitly allow physical replication on a logical replication\nconnection in documentation, it would be far harder to change the\nbehavior than now.\n\nIf we are sure that that cannot be a problem, I don't object the\nchange in documented behavior.\n\n> Looking at the code, I think that there is some confusion with the\n> fake WAL reader used as base reference in InitWalSender() where we\n> assume that it could only be used in the context of a non-database WAL\n> sender. However, this initialization happens when the WAL sender\n> connection is initialized, and what I think this misses is that we \n> should try to initialize a WAL reader when actually going through a\n> START_REPLICATION command.\n\nAt first fake_xlogreader was really a fake one that only provides\ncallback routines, but it should have been changed to a real\nxlogreader at the time it began to store segment information. In that\nsense moving to real xlogreader makes sense to me separately from\nwhether we allow physicalrep on logicalrep connections.\n\n> I can note as well that StartLogicalReplication() moves in this sense\n> by setting xlogreader to be the one from logical_decoding_ctx once the\n> decoding context has been created.\n> \n> This results in the attached. The extra test from upthread to check\n> that logical decoding is not allowed in a non-database WAL sender is a\n> good idea, so I have kept it.\n\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n+\t\t\t\t errmsg(\"out of memory\")));\n\nThe same error message is accompanied by a DETAILS in some other\nplaces. 
Don't we need one for this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 02 Jun 2020 15:05:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "On Tue, Jun 02, 2020 at 02:23:50PM +0900, Fujii Masao wrote:\n> On 2020/06/02 13:24, Michael Paquier wrote:\n>> Still unconvinced as this restriction stands for logical decoding\n>> requiring a database connection but it is not necessarily true now as\n>> physical replication has less restrictions than a logical one.\n> \n> Could you tell me what the benefit for supporting physical replication on\n> logical rep connection is? If it's only for \"undocumented\"\n> backward-compatibility, IMO it's better to reject such \"tricky\" set up.\n> But if there are some use cases for that, I'm ok to support that.\n\nWell, I don't really think that we can just break a behavior that\nexists since 9.4 as you could break applications relying on the\nexisting behavior, and that's also the point of Vladimir upthread.\n\nOn top of it, the issue is actually unrelated to if we want to\nrestrict things more or not when starting replication in a WAL sender\nbecause the xlogreader creation just needs to happen when starting\nreplication. Now we have a static \"fake\" one created when a WAL\nsender process starts, something that it would not need in most cases\nlike answering to a BASE_BACKUP command for example.\n\n>> I can note as well that StartLogicalReplication() moves in this sense\n>> by setting xlogreader to be the one from logical_decoding_ctx once the\n>> decoding context has been created.\n>> \n>> This results in the attached. The extra test from upthread to check\n>> that logical decoding is not allowed in a non-database WAL sender is a\n>> good idea, so I have kept it.\n> \n> Yes. 
Also we should add the test to check if physical replication can work\n> fine even on logical rep connection?\n\nI found confusing the use of psql to confirm that it actually works,\nbecause we'd just return a protocol-level error in this case with psql\nbumping on COPY_BOTH and it is not reliable to do just an error\nmessage match. Note as well that GetConnection() discards\nautomatically the database name for pg_basebackup and pg_receivewal as\nwell as libpqrcv_connect() for standbys so we cannot use that.\nPerhaps using psql is better than nothing, but that makes me\nuncomfortable.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 14:19:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Wed, 3 Jun 2020 at 01:19, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 02, 2020 at 02:23:50PM +0900, Fujii Masao wrote:\n> > On 2020/06/02 13:24, Michael Paquier wrote:\n> >> Still unconvinced as this restriction stands for logical decoding\n> >> requiring a database connection but it is not necessarily true now as\n> >> physical replication has less restrictions than a logical one.\n> >\n> > Could you tell me what the benefit for supporting physical replication on\n> > logical rep connection is? If it's only for \"undocumented\"\n> > backward-compatibility, IMO it's better to reject such \"tricky\" set up.\n> > But if there are some use cases for that, I'm ok to support that.\n>\n> Well, I don't really think that we can just break a behavior that\n> exists since 9.4 as you could break applications relying on the\n> existing behavior, and that's also the point of Vladimir upthread.\n>\n\nI don't see this is a valid reason to keep doing something. 
If it is broken\nthen fix it.\nClients can deal with the change.\n\nDave Cramer\nhttps://www.postgres.rocks
", "msg_date": "Wed, 3 Jun 2020 07:33:14 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "\n\nOn 2020/06/03 20:33, Dave Cramer wrote:\n> \n> \n> \n> On Wed, 3 Jun 2020 at 01:19, Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> wrote:\n> \n> On Tue, Jun 02, 2020 at 02:23:50PM +0900, Fujii Masao wrote:\n> > On 2020/06/02 13:24, Michael Paquier wrote:\n> >> Still unconvinced as this restriction stands for logical decoding\n> >> requiring a database connection but it is not necessarily true now as\n> >> physical replication has less restrictions than a logical one.\n> >\n> > Could you tell me what the benefit for supporting physical replication on\n> > logical rep connection is? If it's only for \"undocumented\"\n> > backward-compatibility, IMO it's better to reject such \"tricky\" set up.\n> > But if there are some use cases for that, I'm ok to support that.\n> \n> Well, I don't really think that we can just break a behavior that\n> exists since 9.4 as you could break applications relying on the\n> existing behavior, and that's also the point of Vladimir upthread.\n\nFor the back branches, I agree with you. Even if it's undocumented behavior,\nbasically we should not get rid of it from the back branches unless there is\nvery special reason.\n\nFor v13, if it has no functional merit, I don't think it's so bad to get rid of\nthat undocumented (and maybe not-fully tested) behavior. If there are\napplications depending it, I think that they can be updated.\n\n> I don't see this is a valid reason to keep doing something. 
If it is broken then fix it.\n> Clients can deal with the change.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 3 Jun 2020 22:10:41 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-02, Michael Paquier wrote:\n\n> I can note as well that StartLogicalReplication() moves in this sense\n> by setting xlogreader to be the one from logical_decoding_ctx once the\n> decoding context has been created.\n> \n> This results in the attached. The extra test from upthread to check\n> that logical decoding is not allowed in a non-database WAL sender is a\n> good idea, so I have kept it.\n\nI don't particularly disagree with your proposed patch -- in fact, it\nseems to make things cleaner. It is a little wasteful, but I don't\nreally mind that. It's just some memory, and it's not a significant\namount.\n\nThat said, I would *also* apply Kyotaro's proposed patch to prohibit a\nphysical standby running with a logical slot, if only because that\nreduces the number of combinations that we need to test and keep our\ncollective heads straight about. Just reject the weird case and have\none type of slot for each type of replication. 
I didn't even think this\nwas at all possible.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jun 2020 17:44:48 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 2020-06-02 14:23:50 +0900, Fujii Masao wrote:\n> On 2020/06/02 13:24, Michael Paquier wrote:\n> > On Fri, May 29, 2020 at 06:09:06PM +0900, Masahiko Sawada wrote:\n> > > Yes. Conversely, if we start logical replication in a physical\n> > > replication connection (i.g. replication=true) we got an error before\n> > > staring replication:\n> > > \n> > > ERROR: logical decoding requires a database connection\n> > > \n> > > I think we can prevent that SEGV in a similar way.\n> > \n> > Still unconvinced as this restriction stands for logical decoding\n> > requiring a database connection but it is not necessarily true now as\n> > physical replication has less restrictions than a logical one.\n> \n> Could you tell me what the benefit for supporting physical replication on\n> logical rep connection is? If it's only for \"undocumented\"\n> backward-compatibility, IMO it's better to reject such \"tricky\" set up.\n> But if there are some use cases for that, I'm ok to support that.\n\nI don't think we should prohibit this. For one, it'd probably break some\nclients, without a meaningful need.\n\nBut I think it's also actually quite useful to be able to access\ncatalogs before streaming data. You e.g. can look up configuration of\nthe primary before streaming WAL. 
With a second connection that's\nactually harder to do reliably in some cases, because you need to be\nsure that you actually reached the right server (consider a pooler,\nautomatic failover etc).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Jun 2020 15:13:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-03, Andres Freund wrote:\n\n> I don't think we should prohibit this. For one, it'd probably break some\n> clients, without a meaningful need.\n\nThere *is* a need, namely to keep complexity down. This is quite\nconvoluted, it's got a lot of historical baggage because of the way it\nwas implemented, and it's very difficult to understand. The greatest\nmotive I see is to make this easier to understand, so that it is easier\nto modify and improve in the future.\n\n> But I think it's also actually quite useful to be able to access\n> catalogs before streaming data. You e.g. can look up configuration of\n> the primary before streaming WAL. With a second connection that's\n> actually harder to do reliably in some cases, because you need to be\n> sure that you actually reached the right server (consider a pooler,\n> automatic failover etc).\n\nI don't think having a physical replication connection access catalog\ndata directly is a great idea. We already have gadgets like\nIDENTIFY_SYSTEM for physical replication that can do that, and if you\nneed particular settings you can use SHOW (commit d1ecd539477). 
If\nthere was a strong need for even more than that, we can add something to\nthe grammar.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jun 2020 18:27:12 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 2020-06-03 18:27:12 -0400, Alvaro Herrera wrote:\n> On 2020-Jun-03, Andres Freund wrote:\n> > I don't think we should prohibit this. For one, it'd probably break some\n> > clients, without a meaningful need.\n> \n> There *is* a need, namely to keep complexity down. This is quite\n> convoluted, it's got a lot of historical baggage because of the way it\n> was implemented, and it's very difficult to understand. The greatest\n> motive I see is to make this easier to understand, so that it is easier\n> to modify and improve in the future.\n\nThat seems like a possibly convincing argument for not introducing the\ncapability, but doesn't seem strong enough to remove it. Especially not\nif it was just broken as part of effectively a refactoring, as far as I\nunderstand?\n\n\n> > But I think it's also actually quite useful to be able to access\n> > catalogs before streaming data. You e.g. can look up configuration of\n> > the primary before streaming WAL. With a second connection that's\n> > actually harder to do reliably in some cases, because you need to be\n> > sure that you actually reached the right server (consider a pooler,\n> > automatic failover etc).\n> \n> I don't think having a physical replication connection access catalog\n> data directly is a great idea. We already have gadgets like\n> IDENTIFY_SYSTEM for physical replication that can do that, and if you\n> need particular settings you can use SHOW (commit d1ecd539477). 
If\n> there was a strong need for even more than that, we can add something to\n> the grammar.\n\nThose special case things are a bad idea, and we shouldn't introduce\nmore. It's unrealistic that we can ever make that support everything,\nand since we already have to support the database connected thing, I\ndon't see the point.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Jun 2020 18:33:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Wed, Jun 03, 2020 at 06:33:11PM -0700, Andres Freund wrote:\n> On 2020-06-03 18:27:12 -0400, Alvaro Herrera wrote:\n>> There *is* a need, namely to keep complexity down. This is quite\n>> convoluted, it's got a lot of historical baggage because of the way it\n>> was implemented, and it's very difficult to understand. The greatest\n>> motive I see is to make this easier to understand, so that it is easier\n>> to modify and improve in the future.\n> \n> That seems like a possibly convincing argument for not introducing the\n> capability, but doesn't seem strong enough to remove it. Especially not\n> if it was just broken as part of effectively a refactoring, as far as I\n> understand?\n\nAre there any objections in fixing the issue first then? As far as I\ncan see there is no objection to this part, like here:\nhttps://www.postgresql.org/message-id/20200603214448.GA901@alvherre.pgsql\n\n>> I don't think having a physical replication connection access catalog\n>> data directly is a great idea. We already have gadgets like\n>> IDENTIFY_SYSTEM for physical replication that can do that, and if you\n>> need particular settings you can use SHOW (commit d1ecd539477). If\n>> there was a strong need for even more than that, we can add something to\n>> the grammar.\n> \n> Those special case things are a bad idea, and we shouldn't introduce\n> more. 
It's unrealistic that we can ever make that support everything,\n> and since we already have to support the database connected thing, I\n> don't see the point.\n\nLet's continue discussing this part as well.\n--\nMichael", "msg_date": "Thu, 4 Jun 2020 11:07:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-03, Andres Freund wrote:\n\n> On 2020-06-03 18:27:12 -0400, Alvaro Herrera wrote:\n> > On 2020-Jun-03, Andres Freund wrote:\n> > > I don't think we should prohibit this. For one, it'd probably break some\n> > > clients, without a meaningful need.\n> > \n> > There *is* a need, namely to keep complexity down. This is quite\n> > convoluted, it's got a lot of historical baggage because of the way it\n> > was implemented, and it's very difficult to understand. The greatest\n> > motive I see is to make this easier to understand, so that it is easier\n> > to modify and improve in the future.\n> \n> That seems like a possibly convincing argument for not introducing the\n> capability, but doesn't seem strong enough to remove it.\n\nThis \"capability\" has never been introduced. The fact that it's there\nis just an accident. In fact, it's not a capability, since the feature\n(physical replication) is invoked differently -- namely, using a\nphysical replication connection. JDBC uses a logical replication\nconnection for it only because they never realized that they were\nsupposed to do differently, because we failed to throw the correct\nerror message in the first place.\n\n> > I don't think having a physical replication connection access catalog\n> > data directly is a great idea. We already have gadgets like\n> > IDENTIFY_SYSTEM for physical replication that can do that, and if you\n> > need particular settings you can use SHOW (commit d1ecd539477). 
If\n> > there was a strong need for even more than that, we can add something to\n> > the grammar.\n> \n> Those special case things are a bad idea, and we shouldn't introduce\n> more.\n\nWhat special case things? The replication connection has never been\nsupposed to run SQL. That's why we have SHOW in the replication\ngrammar.\n\n> It's unrealistic that we can ever make that support everything,\n> and since we already have to support the database connected thing, I\n> don't see the point.\n\nA logical replication connection is not supposed to be used for physical\nreplication. That's just going to make more bugs appear.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 3 Jun 2020 22:25:00 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-04, Michael Paquier wrote:\n\n> On Wed, Jun 03, 2020 at 06:33:11PM -0700, Andres Freund wrote:\n\n> >> I don't think having a physical replication connection access catalog\n> >> data directly is a great idea. We already have gadgets like\n> >> IDENTIFY_SYSTEM for physical replication that can do that, and if you\n> >> need particular settings you can use SHOW (commit d1ecd539477). If\n> >> there was a strong need for even more than that, we can add something to\n> >> the grammar.\n> > \n> > Those special case things are a bad idea, and we shouldn't introduce\n> > more. It's unrealistic that we can ever make that support everything,\n> > and since we already have to support the database connected thing, I\n> > don't see the point.\n> \n> Let's continue discussing this part as well.\n\nA logical replication connection cannot run SQL anyway, can it? it's\nlimited to the replication grammar. So it's not like you can run\narbitrary queries to access catalog data. 
So even if we do need to\naccess the catalogs, we'd have to add stuff to the replication grammar\nin order to support that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 4 Jun 2020 16:44:53 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 2020-06-04 16:44:53 -0400, Alvaro Herrera wrote:\n> A logical replication connection cannot run SQL anyway, can it?\n\nYou can:\n\nandres@awork3:~/src/postgresql$ psql 'replication=database'\n\npostgres[52656][1]=# IDENTIFY_SYSTEM;\n┌─────────────────────┬──────────┬────────────┬──────────┐\n│ systemid │ timeline │ xlogpos │ dbname │\n├─────────────────────┼──────────┼────────────┼──────────┤\n│ 6821634567571961151 │ 1 │ 1/D256EC40 │ postgres │\n└─────────────────────┴──────────┴────────────┴──────────┘\n(1 row)\n\npostgres[52656][1]=# SELECT 1;\n┌──────────┐\n│ ?column? │\n├──────────┤\n│ 1 │\n└──────────┘\n(1 row)\n\n\nI am very much not in love with the way that was implemented, but it's\nthere, and it's used as far as I know (cf tablesync.c).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jun 2020 15:51:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-04, Andres Freund wrote:\n\n> postgres[52656][1]=# SELECT 1;\n> ┌──────────┐\n> │ ?column? │\n> ├──────────┤\n> │ 1 │\n> └──────────┘\n> (1 row)\n> \n> \n> I am very much not in love with the way that was implemented, but it's\n> there, and it's used as far as I know (cf tablesync.c).\n\nOuch ... so they made IDENT in the replication grammar be a trigger to\nenter the regular grammar. Crazy. 
No way to put those worms back in\nthe tin now, I guess.\n\nIt is still my opinion that we should prohibit a logical replication\nconnection from being used to do physical replication. Horiguchi-san,\nSawada-san and Masao-san are all of the same opinion. Dave Cramer (of\nthe JDBC team) is not opposed to the change -- he says they're just\nusing it because they didn't realize they should be doing differently.\n\nBoth Michael P. and you are saying we shouldn't break it because it\nworks today, but there isn't a real use-case for it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 4 Jun 2020 19:46:00 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Thu, 4 Jun 2020 at 19:46, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Jun-04, Andres Freund wrote:\n>\n> > postgres[52656][1]=# SELECT 1;\n> > ┌──────────┐\n> > │ ?column? │\n> > ├──────────┤\n> > │ 1 │\n> > └──────────┘\n> > (1 row)\n> >\n> >\n> > I am very much not in love with the way that was implemented, but it's\n> > there, and it's used as far as I know (cf tablesync.c).\n>\n> Ouch ... so they made IDENT in the replication grammar be a trigger to\n> enter the regular grammar. Crazy. No way to put those worms back in\n> the tin now, I guess.\n>\n\nIs that documented ?\n\n>\n> It is still my opinion that we should prohibit a logical replication\n> connection from being used to do physical replication. Horiguchi-san,\n> Sawada-san and Masao-san are all of the same opinion. Dave Cramer (of\n> the JDBC team) is not opposed to the change -- he says they're just\n> using it because they didn't realize they should be doing differently.\n\n\nI think my exact words were\n\n\"I don't see this is a valid reason to keep doing something. 
If it is\nbroken then fix it.\nClients can deal with the change.\"\n\nin response to:\n\nWell, I don't really think that we can just break a behavior that\n> exists since 9.4 as you could break applications relying on the\n> existing behavior, and that's also the point of Vladimir upthread.\n>\n\nWhich is different than not being opposed to the change. I don't see this\nas broken,\nand it's quite possible that some of our users are using it. It certainly\nneeds to be documented\n\nDave
", "msg_date": "Fri, 5 Jun 2020 09:04:31 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-05, Dave Cramer wrote:\n\n> On Thu, 4 Jun 2020 at 19:46, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n\n> > Ouch ... so they made IDENT in the replication grammar be a trigger to\n> > enter the regular grammar. Crazy. No way to put those worms back in\n> > the tin now, I guess.\n> \n> Is that documented ?\n\nI don't think it is.\n\n> > It is still my opinion that we should prohibit a logical replication\n> > connection from being used to do physical replication. Horiguchi-san,\n> > Sawada-san and Masao-san are all of the same opinion. Dave Cramer (of\n> > the JDBC team) is not opposed to the change -- he says they're just\n> > using it because they didn't realize they should be doing differently.\n> \n> I think my exact words were\n> \n> \"I don't see this is a valid reason to keep doing something. If it is\n> broken then fix it.\n> Clients can deal with the change.\"\n> \n> in response to:\n> \n> > Well, I don't really think that we can just break a behavior that\n> > exists since 9.4 as you could break applications relying on the\n> > existing behavior, and that's also the point of Vladimir upthread.\n> \n> Which is different than not being opposed to the change. 
I don't see this\n> as broken, and it's quite possible that some of our users are using\n> it.\n\nApologies for misinterpreting.\n\n> It certainly needs to be documented\n\nI'd rather not.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jun 2020 11:51:48 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Thu, Jun 04, 2020 at 11:07:29AM +0900, Michael Paquier wrote:\n> Are there any objections in fixing the issue first then? As far as I\n> can see there is no objection to this part, like here:\n> https://www.postgresql.org/message-id/20200603214448.GA901@alvherre.pgsql\n\nHearing nothing, I have applied this part and fixed the crash to take\ncare of the open item.\n--\nMichael", "msg_date": "Mon, 8 Jun 2020 10:17:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 6/5/20 11:51 AM, Alvaro Herrera wrote:\n> On 2020-Jun-05, Dave Cramer wrote:\n> \n>> On Thu, 4 Jun 2020 at 19:46, Alvaro Herrera <alvherre@2ndquadrant.com>\n>> wrote:\n> \n>>> Ouch ... so they made IDENT in the replication grammar be a trigger to\n>>> enter the regular grammar. Crazy. No way to put those worms back in\n>>> the tin now, I guess.\n>>\n>> Is that documented ?\n> \n> I don't think it is.\n> \n>>> It is still my opinion that we should prohibit a logical replication\n>>> connection from being used to do physical replication. Horiguchi-san,\n>>> Sawada-san and Masao-san are all of the same opinion. 
Dave Cramer (of\n>>> the JDBC team) is not opposed to the change -- he says they're just\n>>> using it because they didn't realize they should be doing differently.\n>>\n>> I think my exact words were\n>>\n>> \"I don't see this is a valid reason to keep doing something. If it is\n>> broken then fix it.\n>> Clients can deal with the change.\"\n>>\n>> in response to:\n>>\n>>> Well, I don't really think that we can just break a behavior that\n>>> exists since 9.4 as you could break applications relying on the\n>>> existing behavior, and that's also the point of Vladimir upthread.\n>>\n>> Which is different than not being opposed to the change. I don't see this\n>> as broken, and it's quite possible that some of our users are using\n>> it.\n> \n> Apologies for misinterpreting.\n> \n>> It certainly needs to be documented\n> \n> I'd rather not.\n\nThe PG13 RMT had a discussion about this thread, and while the initial\ncrash has been fixed, we decided to re-open the Open Item around whether\nwe should allow physical replication to be initiated in a logical\nreplication session.\n\nWe anticipate a resolution for PG13, whether it is explicitly\ndisallowing physical replication from occurring on a logical replication\nslot, maintaining the status quo, or something else such that there is\nconsensus on the approach.\n\nThanks,\n\nJonathan", "msg_date": "Sun, 21 Jun 2020 13:45:36 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 2020-06-21 13:45:36 -0400, Jonathan S. 
Katz wrote:\n> The PG13 RMT had a discussion about this thread, and while the initial\n> crash has been fixed, we decided to re-open the Open Item around whether\n> we should allow physical replication to be initiated in a logical\n> replication session.\n\nSince this is a long-time issue, this doesn't quite seem like an issue\nfor the RMT?\n\n\n> We anticipate a resolution for PG13, whether it is explicitly\n> disallowing physical replication from occurring on a logical replication\n> slot, maintaining the status quo, or something else such that there is\n> consensus on the approach.\n\ns/logical replication slot/logical replication connection/?\n\n\nI still maintain that adding restrictions here is a bad idea. Even\ndisregarding the discussion of running normal queries interspersed, it's\nuseful to be able to both request WAL and receive logical changes over\nthe same connection. E.g. for creating a logical replica by first doing\na physical base backup (vastly faster), or fetching WAL for decoding\nlarge transactions onto a standby.\n\nAnd I just don't see any reasons to disallow it. There's basically no\nreduction in complexity by doing so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Jun 2020 13:02:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Sun, Jun 21, 2020 at 01:02:34PM -0700, Andres Freund wrote:\n> I still maintain that adding restrictions here is a bad idea. Even\n> disregarding the discussion of running normal queries interspersed, it's\n> useful to be able to both request WAL and receive logical changes over\n> the same connection. E.g. for creating a logical replica by first doing\n> a physical base backup (vastly faster), or fetching WAL for decoding\n> large transactions onto a standby.\n> \n> And I just don't see any reasons to disallow it. 
There's basically no\n> reduction in complexity by doing so.\n\nYeah, I still stand by the same opinion here to do nothing. I suspect\nthat we have good chances to annoy people and some cases we are\noverlooking here, that used to work.\n--\nMichael", "msg_date": "Tue, 23 Jun 2020 10:51:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "At Tue, 23 Jun 2020 10:51:40 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Jun 21, 2020 at 01:02:34PM -0700, Andres Freund wrote:\n> > I still maintain that adding restrictions here is a bad idea. Even\n> > disregarding the discussion of running normal queries interspersed, it's\n> > useful to be able to both request WAL and receive logical changes over\n> > the same connection. E.g. for creating a logical replica by first doing\n> > a physical base backup (vastly faster), or fetching WAL for decoding\n> > large transactions onto a standby.\n> > \n> > And I just don't see any reasons to disallow it. There's basically no\n> > reduction in complexity by doing so.\n> \n> Yeah, I still stand by the same opinion here to do nothing. I suspect\n> that we have good chances to annoy people and some cases we are\n> overlooking here, that used to work.\n\nIn logical replication, a replication role is intended to be\naccessible only to the GRANTed databases. On the other hand the same\nrole can create a dead copy of the whole cluster, including\nnon-granted databases. It seems like a sieve missing a mesh screen.\n\nI agree that that doesn't harm as far as roles are strictly managed so\nI don't insist so strongly on inhibiting the behavior. 
However, the\ndocumentation at least needs amendment.\n\nhttps://www.postgresql.org/docs/13/protocol-replication.html\n\n====\nTo initiate streaming replication, the frontend sends the replication\nparameter in the startup message. A Boolean value of true (or on, yes,\n1) tells the backend to go into physical replication walsender mode,\nwherein a small set of replication commands, shown below, can be\nissued instead of SQL statements.\n\nPassing database as the value for the replication parameter instructs\nthe backend to go into logical replication walsender mode, connecting\nto the database specified in the dbname parameter. In logical\nreplication walsender mode, the replication commands shown below as\nwell as normal SQL commands can be issued.\n====\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 24 Jun 2020 11:56:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical\n () at walsender.c:2762" }, { "msg_contents": "\n\nOn 2020/06/24 11:56, Kyotaro Horiguchi wrote:\n> At Tue, 23 Jun 2020 10:51:40 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> On Sun, Jun 21, 2020 at 01:02:34PM -0700, Andres Freund wrote:\n>>> I still maintain that adding restrictions here is a bad idea. Even\n>>> disregarding the discussion of running normal queries interspersed, it's\n>>> useful to be able to both request WAL and receive logical changes over\n>>> the same connection. E.g. for creating a logical replica by first doing\n>>> a physical base backup (vastly faster), or fetching WAL for decoding\n>>> large transactions onto a standby.\n>>>\n>>> And I just don't see any reasons to disallow it. There's basically no\n>>> reduction in complexity by doing so.\n>>\n>> Yeah, I still stand by the same opinion here to do nothing. 
I suspect\n>> that we have good chances to annoy people and some cases we are\n>> overlooking here, that used to work.\n> \n> In logical replication, a replication role is intended to be\n> accessible only to the GRANTed databases. On the other hand the same\n> role can create a dead copy of the whole cluster, including\n> non-granted databases. It seems like a sieve missing a mesh screen.\n\nPersonally I'd like to disallow physical replication commands\nwhen I explicitly reject physical replication connection\n(i.e., set \"host replication user x.x.x.x/x reject\") in pg_hba.conf,\nwhether on physical or logical replication connection.\n\n\n> I agree that that doesn't harm as far as roles are strictly managed so\n> I don't insist so strongly on inhibiting the behavior. However, the\n> documentation at least needs amendment.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Jun 2020 18:45:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-24, Kyotaro Horiguchi wrote:\n\n> In logical replication, a replication role is intended to be\n> accessible only to the GRANTed databases. 
On the other hand the same\n> role can create a dead copy of the whole cluster, including\n> non-granted databases.\n\nIn other words -- essentially, if you grant replication access to a role\nonly to a specific database, they can steal the whole cluster.\n\nI don't see what's so great about that, but apparently people like it.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 10:58:08 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2020-Jun-24, Kyotaro Horiguchi wrote:\n> \n> > In logical replication, a replication role is intended to be\n> > accessible only to the GRANTed databases. On the other hand the same\n> > role can create a dead copy of the whole cluster, including\n> > non-granted databases.\n> \n> In other words -- essentially, if you grant replication access to a role\n> only to a specific database, they can steal the whole cluster.\n> \n> I don't see what's so great about that, but apparently people like it.\n\nSure, people who aren't in charge of security I'm sure like the ease of\nuse.\n\nDoesn't mean it makes sense or that we should be supporting that. What\nwe should have is a way to allow administrators to configure a system\nfor exactly what they want to allow, and it doesn't seem like we're\ndoing that today and therefore we should fix it. 
This isn't the only\narea we have that issue in.\n\nThanks,\n\nStephen", "msg_date": "Wed, 24 Jun 2020 12:50:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-24, Stephen Frost wrote:\n\n> Doesn't mean it makes sense or that we should be supporting that. What\n> we should have is a way to allow administrators to configure a system\n> for exactly what they want to allow, and it doesn't seem like we're\n> doing that today and therefore we should fix it. This isn't the only\n> area we have that issue in.\n\nThe way to do that, for the case under discussion, is to reject using a\nlogical replication connection for physical replication commands.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 13:05:49 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Wed, Jun 24, 2020 at 1:06 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Jun-24, Stephen Frost wrote:\n> > Doesn't mean it makes sense or that we should be supporting that. What\n> > we should have is a way to allow administrators to configure a system\n> > for exactly what they want to allow, and it doesn't seem like we're\n> > doing that today and therefore we should fix it. This isn't the only\n> > area we have that issue in.\n>\n> The way to do that, for the case under discussion, is to reject using a\n> logical replication connection for physical replication commands.\n\nReading over this discussion, I see basically three arguments:\n\n1. 
Andres argues that being able to execute physical replication\ncommands from the same connection as SQL queries is useful, and that\npeople may be relying on it, and that we shouldn't break it without\nneed.\n\n2. Fujii Masao argues that the current situation makes it impossible\nto write a pg_hba.conf rule that disallows all physical replication\nconnections, because people could get around it by using a logical\nreplication connection for physical replication.\n\n3. Various people argue that it's only accidental that physical\nreplication on a replication=database connection ever worked at all,\nand therefore we ought to block it.\n\nI find argument #1 most convincing, #2 less convincing, and #3 least\nconvincing. In my view, the problem with argument #3 is that just\nbecause some feature combination was unintentional doesn't mean it's\nunuseful or unused. As for #2, suppose someone were to propose a\ndesign for logical replication that allowed it to take place without a\ndatabase connection, so that it could be done with just a regular\nreplication connection. Such a feature would create the same problem\nFujii Masao mentions here, but it seems inconceivable that we would\nfor that reason reject it; we make decisions about features based on\ntheir usefulness, not their knock-on effects on pg_hba.conf rules. We\ncan always add new kinds of access control restrictions if they are\nneeded; that is a better approach than removing features so that the\nexisting pg_hba.conf facilities can be used to accomplish some\nparticular goal. So really I think this turns on #1: is it plausible\nthat people are using this feature, however inadvertent it may be, and\nis it potentially useful? I don't see that anybody's made an argument\nagainst either of those things. Unless someone can do so, I think we\nshouldn't disable this.\n\nThat having been said, I think that the fact that you can execute SQL\nqueries in replication=database connections is horrifying. 
I really\nhate that feature. I think it's a bad design, and a bad\nimplementation, and a recipe for tons of bugs. But, blocking physical\nreplication commands on such connections isn't going to solve any of\nthat.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:20:20 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-24, Robert Haas wrote:\n\n> So really I think this turns on #1: is it plausible\n> that people are using this feature, however inadvertent it may be, and\n> is it potentially useful? I don't see that anybody's made an argument\n> against either of those things. Unless someone can do so, I think we\n> shouldn't disable this.\n\nPeople (specifically the jdbc driver) *are* using this feature in this\nway, but they didn't realize they were doing it. It was an accident and\nthey didn't notice.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 24 Jun 2020 15:41:14 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Wed, 24 Jun 2020 at 15:41, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2020-Jun-24, Robert Haas wrote:\n>\n> > So really I think this turns on #1: is it plausible\n> > that people are using this feature, however inadvertent it may be, and\n> > is it potentially useful? I don't see that anybody's made an argument\n> > against either of those things. 
Unless someone can do so, I think we\n> > shouldn't disable this.\n>\n> People (specifically the jdbc driver) *are* using this feature in this\n> way, but they didn't realize they were doing it. It was an accident and\n> they didn't notice.\n>\n>\nNot sure we are using it as much as we accidentally did it that way. It would\nbe trivial to fix.\n\nThat said I think we should fix the security hole this opens and leave the\nfunctionality.\n\nDave\n\n", "msg_date": "Wed, 24 Jun 2020 15:50:37 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "Hi,\n\nOn 2020-06-24 15:41:14 -0400, Alvaro Herrera wrote:\n> On 2020-Jun-24, Robert Haas wrote:\n> \n> > So really I think this turns on #1: is it plausible\n> > that people are using this feature, however inadvertent it may be, and\n> > is it potentially useful? I don't see that anybody's made an argument\n> > against either of those things. 
Unless someone can do so, I think we\n> > shouldn't disable this.\n> \n> People (specifically the jdbc driver) *are* using this feature in this\n> way, but they didn't realize they were doing it. It was an accident and\n> they didn't notice.\n\nAs I said before, I've utilized being able to do both over a single\nconnection (among others to initialize a logical replica using a base\nbackup). And I've seen at least one other codebase (developed without my\ninput) doing so. I really don't understand how you just dismiss this\nwithout any sort of actual argument. Yes, those uses can be fixed to\nreconnect with a different replication parameter, but that's code that\nneeds to be adjusted and it requires adjustments to pg_hba.conf etc.\n\nAnd obviously you'd lock out older versions of jdbc, and possibly other\ndrivers.\n\nObviously we should allow more granular permissions here, I don't think\nanybody is arguing against that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Jun 2020 12:52:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On Wed, Jun 24, 2020 at 3:41 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> People (specifically the jdbc driver) *are* using this feature in this\n> way, but they didn't realize they were doing it. It was an accident and\n> they didn't notice.\n\nBut you don't know that that's true of everyone using this feature,\nand even if it were, so what? 
Breaking a feature that someone didn't\nknow they were using is just as much of a break as breaking a feature\nsomeone DID know they were using.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 25 Jun 2020 07:56:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" }, { "msg_contents": "On 2020-Jun-24, Andres Freund wrote:\n\n> As I said before, I've utilized being able to do both over a single\n> connection (among others to initialize a logical replica using a base\n> backup). And I've seen at least one other codebase (developed without my\n> input) doing so. I really don't understand how you just dismiss this\n> without any sort of actual argument. Yes, those uses can be fixed to\n> reconnect with a different replication parameter, but that's code that\n> needs to be adjusted and it requires adjustments to pg_hba.conf etc.\n> \n> And obviously you'd lock out older versions of jdbc, and possibly other\n> drivers.\n\nWell, I had understood that you were talking from a hypothetical\nposition, not that you were already using the thing that way. After\nthese arguments, I agree to leave things alone, and nobody else seems to\nbe arguing in that direction, so I'll mark the open item as closed.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Jul 2020 13:46:17 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: SIGSEGV from START_REPLICATION 0/XXXXXXX in XLogSendPhysical ()\n at walsender.c:2762" } ]
[ { "msg_contents": "Hi all,\n\nWhile working on some monitoring tasks I realised that the pg_monitor\nrole doesn't have access to the pg_replication_origin_status.\n\nAre there any strong thoughts on not giving pg_monitor access to this\nview, or is it just something that nobody asked for yet? I can't find\nany reason for pg_monitor not to have access to it.\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Thu, 28 May 2020 08:42:59 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi,\n\n> While working on some monitoring tasks I realised that the pg_monitor\n> role doesn't have access to the pg_replication_origin_status.\n>\n> Are there any strong thoughts on not giving pg_monitor access to this\n> view, or is it just something that nobody asked for yet? 
I can't find\n> any reason for pg_monitor not to have access to it.\n\nFurther looking into this, I can see that the requirement of a\nsuperuser to access/modify the replication origins is hardcoded in\nbackend/replication/logical/origin.c, so it's not a question of\nGRANTing access to the pg_monitor user.\n\n```\nstatic void\nreplorigin_check_prerequisites(bool check_slots, bool recoveryOK)\n{\n if (!superuser())\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"only superusers can query or manipulate\nreplication origins\")));\n\n if (check_slots && max_replication_slots == 0)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"cannot query or manipulate replication origin\nwhen max_replication_slots = 0\")));\n\n if (!recoveryOK && RecoveryInProgress())\n ereport(ERROR,\n (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n errmsg(\"cannot manipulate replication origins during\nrecovery\")));\n\n}\n```\n\nI believe we could skip the superuser() check for cases like\npg_replication_origin_session_progress() and\npg_replication_origin_progress().\n\nOne option could be to add a third bool argument check_superuser to\nreplorigin_check_prerequisites() and have it set to false for the\nfunctions which a non-superuser could execute.\n\nPatch attached\n\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Fri, 29 May 2020 17:39:31 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Fri, May 29, 2020 at 05:39:31PM -0300, Martín Marqués wrote:\n> I believe we could skip the superuser() check for cases like\n> pg_replication_origin_session_progress() and\n> pg_replication_origin_progress().\n> \n> One option could be to add a third bool argument check_superuser to\n> 
replorigin_check_prerequisites() and have it set to false for the\n> functions which a none superuser could execute.\n\nWouldn't it be just better to remove this hardcoded superuser check\nand replace it with equivalent ACLs by default? The trick is to make\nsure that any function calling replorigin_check_prerequisites() has\nits execution correctly revoked from public. See for example\ne79350fe.\n--\nMichael", "msg_date": "Sun, 31 May 2020 11:02:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi Michael,\n\n> Wouldn't it be just better to remove this hardcoded superuser check\n> and replace it with equivalent ACLs by default? The trick is to make\n> sure that any function calling replorigin_check_prerequisites() has\n> its execution correctly revoked from public. See for example\n> e79350fe.\n\nLooking at that, it seems a better solution. Let me wrap up a new\npatch, likely later today or early tomorrow as it's Sunday ;-)\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Sun, 31 May 2020 12:13:08 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi,\n\nTook me a bit longer than expected, but here is a new version, now\nwith the idea of just removing the superuser() check and REVOKEing\nexecution of the functions from public. At the end I grant permission\nto functions and the pg_replication_origin_status view.\n\nI wonder now if I needed to GRANT execution of the functions. A grant\non the view should be enough.\n\nI'll think about it.\n\nEl dom., 31 de may. 
de 2020 a la(s) 12:13, Martín Marqués\n(martin@2ndquadrant.com) escribió:\n>\n> Hi Michael,\n>\n> > Wouldn't it be just better to remove this hardcoded superuser check\n> > and replace it with equivalent ACLs by default? The trick is to make\n> > sure that any function calling replorigin_check_prerequisites() has\n> > its execution correctly revoked from public. See for example\n> > e79350fe.\n>\n> Looking at that, it seems a better solution. Let me wrap up a new\n> patch, likely later today or early tomorrow as it's Sunday ;-)\n>\n> --\n> Martín Marqués http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \n\nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Mon, 1 Jun 2020 15:38:07 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi,\n\n> Took me a bit longer than expected, but here is a new version, now\n> with the idea of just removing the superuser() check and REVOKEing\n> execution of the functions from public. At the end I grant permission\n> to functions and the pg_replication_origin_status view.\n>\n> I wonder now if I needed to GRANT execution of the functions. 
A grant\n> on the view should be enough.\n>\n> I'll think about it.\n\nYeah, those `GRANT EXECUTE` for the 2 functions should go, as the view\nwhich is what we want to `SELECT` from has the appropriate ACL set.\n\n$ git diff\ndiff --git a/src/backend/catalog/system_views.sql\nb/src/backend/catalog/system_views.sql\nindex c16061f8f00..97ee72a9cfc 100644\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -1494,9 +1494,6 @@ GRANT EXECUTE ON FUNCTION\npg_ls_archive_statusdir() TO pg_monitor;\n GRANT EXECUTE ON FUNCTION pg_ls_tmpdir() TO pg_monitor;\n GRANT EXECUTE ON FUNCTION pg_ls_tmpdir(oid) TO pg_monitor;\n\n-GRANT EXECUTE ON FUNCTION pg_replication_origin_progress(text,\nboolean) TO pg_monitor;\n-GRANT EXECUTE ON FUNCTION\npg_replication_origin_session_progress(boolean) TO pg_monitor;\n-\n GRANT pg_read_all_settings TO pg_monitor;\n GRANT pg_read_all_stats TO pg_monitor;\n GRANT pg_stat_scan_tables TO pg_monitor;\n\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Mon, 1 Jun 2020 21:41:13 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Mon, Jun 01, 2020 at 03:38:07PM -0300, Martín Marqués wrote:\n> Took me a bit longer than expected, but here is a new version, now\n> with the idea of just removing the superuser() check and REVOKEing\n> execution of the functions from public. At the end I grant permission\n> to functions and the pg_replication_origin_status view.\n> \n> I wonder now if I needed to GRANT execution of the functions. 
A grant\n> on the view should be enough.\n\n+GRANT EXECUTE ON FUNCTION pg_replication_origin_progress(text, boolean) TO pg_monitor;\n+GRANT EXECUTE ON FUNCTION pg_replication_origin_session_progress(boolean) TO pg_monitor;\n\nFWIW, I think that removing a hardcoded superuser() restriction and\nassigning new rights to system roles are two different things, so it\nwould be better to split the logic into two patches to ease the\nreview.\n\nI can also see the following in func.sgml:\n Use of functions for replication origin is restricted to\n superusers. \n\nBut that's not right as one can use GRANT to leverage the ACLs of\nthose functions with your patch. I have a suggestion for this part,\nas follows:\n\"Use of functions for replication origin is only allowed to the\nsuperuser by default, but may be allowed to other users by using the\nGRANT command.\"\n\nAlso, you may want to add this patch to the next commit fest:\nhttps://commitfest.postgresql.org/28/\n--\nMichael", "msg_date": "Tue, 2 Jun 2020 13:49:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi.\n\nAt Mon, 1 Jun 2020 21:41:13 -0300, Martín Marqués <martin@2ndquadrant.com> wrote in \n> Hi,\n> \n> > Took me a bit longer than expected, but here is a new version, now\n> > with the idea of just removing the superuser() check and REVOKEing\n> > execution of the functions from public. At the end I grant permission\n> > to functions and the pg_replication_origin_status view.\n> \n> > I wonder now if I needed to GRANT execution of the functions. 
A grant\n> > on the view should be enough.\n> \n> > I'll think about it.\n> \n> Yeah, those `GRANT EXECUTE` for the 2 functions should go, as the view\n> which is what we want to `SELECT` from has the appropriate ACL set.\n> \n> $ git diff\n> diff --git a/src/backend/catalog/system_views.sql\n> b/src/backend/catalog/system_views.sql\n> index c16061f8f00..97ee72a9cfc 100644\n> --- a/src/backend/catalog/system_views.sql\n> +++ b/src/backend/catalog/system_views.sql\n> @@ -1494,9 +1494,6 @@ GRANT EXECUTE ON FUNCTION\n> pg_ls_archive_statusdir() TO pg_monitor;\n> GRANT EXECUTE ON FUNCTION pg_ls_tmpdir() TO pg_monitor;\n> GRANT EXECUTE ON FUNCTION pg_ls_tmpdir(oid) TO pg_monitor;\n> \n> -GRANT EXECUTE ON FUNCTION pg_replication_origin_progress(text,\n> boolean) TO pg_monitor;\n> -GRANT EXECUTE ON FUNCTION\n> pg_replication_origin_session_progress(boolean) TO pg_monitor;\n> -\n> GRANT pg_read_all_settings TO pg_monitor;\n> GRANT pg_read_all_stats TO pg_monitor;\n> GRANT pg_stat_scan_tables TO pg_monitor;\n\n\nAgreed on this part. The two functions aren't needed to be granted.\n\nBut, pg_show_replication_origin_status() should be allowed\npg_read_all_stats, not pg_monitor. pg_monitor is just a union role of\nactual privileges.\n\nAnother issue would be how to control output of\npg_show_replication_origin_status(). Most of functions that needs\npg_read_all_stats privileges are filtering sensitive columns in each\nrow, instead of hiding the existence of rows. 
Maybe the view\npg_replication_origin_status should show only local_id and hide other\ncolumns from non-pg_read_all_stats users.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 02 Jun 2020 14:01:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi,\n\n> > $ git diff\n> > diff --git a/src/backend/catalog/system_views.sql\n> > b/src/backend/catalog/system_views.sql\n> > index c16061f8f00..97ee72a9cfc 100644\n> > --- a/src/backend/catalog/system_views.sql\n> > +++ b/src/backend/catalog/system_views.sql\n> > @@ -1494,9 +1494,6 @@ GRANT EXECUTE ON FUNCTION\n> > pg_ls_archive_statusdir() TO pg_monitor;\n> > GRANT EXECUTE ON FUNCTION pg_ls_tmpdir() TO pg_monitor;\n> > GRANT EXECUTE ON FUNCTION pg_ls_tmpdir(oid) TO pg_monitor;\n> >\n> > -GRANT EXECUTE ON FUNCTION pg_replication_origin_progress(text,\n> > boolean) TO pg_monitor;\n> > -GRANT EXECUTE ON FUNCTION\n> > pg_replication_origin_session_progress(boolean) TO pg_monitor;\n> > -\n> > GRANT pg_read_all_settings TO pg_monitor;\n> > GRANT pg_read_all_stats TO pg_monitor;\n> > GRANT pg_stat_scan_tables TO pg_monitor;\n>\n>\n> Agreed on this part. The two functions aren't needed to be granted.\n>\n> But, pg_show_replication_origin_status() should be allowed\n> pg_read_all_stats, not pg_monitor. 
pg_monitor is just a union role of\n> actual privileges.\n\nI placed that GRANT on purpose to `pg_monitor`, separated from the\n`pg_read_all_stats` role, because it doesn't match the description for\nthat role.\n\n```\nRead all pg_stat_* views and use various statistics related\nextensions, even those normally visible only to superusers.\n```\n\nI have no problem adding it to this ROLE, but we'd have to amend the\ndoc for default-roles to reflect that SELECT for this view is also\ngranted to `pg_read_all_stats`.\n\n> Another issue would be how to control output of\n> pg_show_replication_origin_status(). Most of functions that needs\n> pg_read_all_stats privileges are filtering sensitive columns in each\n> row, instead of hiding the existence of rows. Maybe the view\n> pg_replication_origin_status should show only local_id and hide other\n> columns from non-pg_read_all_stats users.\n\nI think that the output from `pg_show_replication_origin_status()`\ndoesn't expose any data that `pg_read_all_stats` or `pg_monitor`\nshouldn't be able to read. 
Removing or obfuscating `external_id`\nand/or `remote_lsn` would make the view somehow pointless, in\nparticular for monitoring and diagnostic tools.\n\nI'll upload new patches shortly following Michael's suggestions.\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 11:23:26 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Greetings,\n\n* Martín Marqués (martin@2ndquadrant.com) wrote:\n> > > $ git diff\n> > > diff --git a/src/backend/catalog/system_views.sql\n> > > b/src/backend/catalog/system_views.sql\n> > > index c16061f8f00..97ee72a9cfc 100644\n> > > --- a/src/backend/catalog/system_views.sql\n> > > +++ b/src/backend/catalog/system_views.sql\n> > > @@ -1494,9 +1494,6 @@ GRANT EXECUTE ON FUNCTION\n> > > pg_ls_archive_statusdir() TO pg_monitor;\n> > > GRANT EXECUTE ON FUNCTION pg_ls_tmpdir() TO pg_monitor;\n> > > GRANT EXECUTE ON FUNCTION pg_ls_tmpdir(oid) TO pg_monitor;\n> > >\n> > > -GRANT EXECUTE ON FUNCTION pg_replication_origin_progress(text,\n> > > boolean) TO pg_monitor;\n> > > -GRANT EXECUTE ON FUNCTION\n> > > pg_replication_origin_session_progress(boolean) TO pg_monitor;\n> > > -\n> > > GRANT pg_read_all_settings TO pg_monitor;\n> > > GRANT pg_read_all_stats TO pg_monitor;\n> > > GRANT pg_stat_scan_tables TO pg_monitor;\n> >\n> >\n> > Agreed on this part. The two functions aren't needed to be granted.\n> >\n> > But, pg_show_replication_origin_status() should be allowed\n> > pg_read_all_stats, not pg_monitor. 
pg_monitor is just a union role of\n> > actual privileges.\n> \n> I placed that GRANT on purpose to `pg_monitor`, separated from the\n> `pg_read_all_stats` role, because it doesn't match the description for\n> that role.\n> \n> ```\n> Read all pg_stat_* views and use various statistics related\n> extensions, even those normally visible only to superusers.\n> ```\n> \n> I have no problem adding it to this ROLE, but we'd have to amend the\n> doc for default-roles to reflect that SELECT for this view is also\n> granted to `pg_read_all_stats`.\n\nI agree in general that pg_monitor shouldn't have privileges granted\ndirectly to it. If this needs a new default role, that's an option, but\nit seems like it'd make sense to be part of pg_read_all_stats to me, so\namending the docs looks reasonable from here.\n\n> > Another issue would be how to control output of\n> > pg_show_replication_origin_status(). Most of functions that needs\n> > pg_read_all_stats privileges are filtering sensitive columns in each\n> > row, instead of hiding the existence of rows. Maybe the view\n> > pg_replication_origin_status should show only local_id and hide other\n> > columns from non-pg_read_all_stats users.\n> \n> I think that the output from `pg_show_replication_origin_status()`\n> doesn't expose any data that `pg_read_all_stats` or `pg_monitor`\n> shouldn't be able to read. 
Removing or obfuscating `external_id`\n> and/or `remote_lsn` would make the view somehow pointless, in\n> particular for monitoring and diagnostic tools.\n\nYeah, pg_read_all_stats is a rather privileged role when it comes to\nreading data, consider that it can see basically everything in\npg_stat_activity, for example.\n\nThanks,\n\nStephen", "msg_date": "Tue, 2 Jun 2020 11:11:11 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi Stephen,\n\n> > I have no problem adding it to this ROLE, but we'd have to amend the\n> > doc for default-roles to reflect that SELECT for this view is also\n> > granted to `pg_read_all_stats`.\n>\n> I agree in general that pg_monitor shouldn't have privileges granted\n> directly to it. If this needs a new default role, that's an option, but\n> it seems like it'd make sense to be part of pg_read_all_stats to me, so\n> amending the docs looks reasonable from here.\n\nGood, that's more or less what I had in mind.\n\nHere goes v2 of the patch, now there are 4 files (I could have\nsquashed the docs with the code changes, but hey, that'll be easy to\nmerge if needed :-) )\n\nI did some fiddling to Michaels doc proposal, but it says basically the same.\n\nNot 100% happy with the change to user-manag.sgml, but ok enough to send.\n\nI also added an entry to the commitfest so we can track this there as well.\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Tue, 2 Jun 2020 13:13:18 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "At Tue, 2 Jun 2020 13:13:18 -0300, Martín Marqués <martin@2ndquadrant.com> wrote in \n> > > I have no problem adding it to this ROLE, but 
we'd have to amend the\n> > > doc for default-roles to reflect that SELECT for this view is also\n> > > granted to `pg_read_all_stats`.\n> >\n> > I agree in general that pg_monitor shouldn't have privileges granted\n> > directly to it. If this needs a new default role, that's an option, but\n> > it seems like it'd make sense to be part of pg_read_all_stats to me, so\n> > amending the docs looks reasonable from here.\n> \n> Good, that's more or less what I had in mind.\n> \n> Here goes v2 of the patch, now there are 4 files (I could have\n> squashed the docs with the code changes, but hey, that'll be easy to\n> merge if needed :-) )\n> \n> I did some fiddling to Michaels doc proposal, but it says basically the same.\n> \n> Not 100% happy with the change to user-manag.sgml, but ok enough to send.\n> \n> I also added an entry to the commitfest so we can track this there as well.\n\n0001:\n\nLooks good to me. REVOKE is performed on all functions that called\nreplorigin_check_prerequisites.\n\n0002:\n\nIt is forgetting to grant pg_read_all_stats to execute\npg_show_replication_origin_status. As the result pg_read_all_stats\ngets error on executing the function, not on doing select on the view.\n\nEven if we also granted execution of the function to the specific\nrole, anyone who wants to grant a user for the view also needs to\ngrant execution of the function.\n\nTo avoid such an inconvenience, as I mentioned upthread, the view and\nthe function should be granted to public and the function should just\nmute the output all the rows, or some columns in each row. That can be\ndone the same way with pg_stat_get_activity(), as Stephen said.\n\n\n0003:\n\nIt seems to be a take-after of adminpack's documentation, but a\nsuperuser is not the only one on PostgreSQL. 
The something like the\ndescription in 27.2.2 Viewing Statistics looks more suitable.\n\n> Superusers and members of the built-in role pg_read_all_stats (see\n> also Section 21.5) can see all the information about all sessions.\n\nSection 21.5 is already saying as follows.\n\n> pg_monitor\n> Read/execute various monitoring views and functions. This role is a\n> member of pg_read_all_settings, pg_read_all_stats and\n> pg_stat_scan_tables.\n\n\n0004:\n\nLooks fine by me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Jun 2020 17:29:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi Kyotaro-san,\n\nThank you for taking the time to review my patches. Would you like to\nset yourself as a reviewer in the commit entry here?\nhttps://commitfest.postgresql.org/28/2577/\n\n> 0002:\n>\n> It is forgetting to grant pg_read_all_stats to execute\n> pg_show_replication_origin_status. As the result pg_read_all_stats\n> gets error on executing the function, not on doing select on the view.\n\nSeems I was testing on a cluster where I had already been digging, so\npg_real_all_stats had execute privileges on\npg_show_replication_origin_status (I had manually granted that) and\ndidn't notice because I forgot to drop the cluster and initialize\nagain.\n\nThanks for the pointer here!\n\n> 0003:\n>\n> It seems to be a take-after of adminpack's documentation, but a\n> superuser is not the only one on PostgreSQL. The something like the\n> description in 27.2.2 Viewing Statistics looks more suitable.\n>\n> > Superusers and members of the built-in role pg_read_all_stats (see\n> > also Section 21.5) can see all the information about all sessions.\n>\n> Section 21.5 is already saying as follows.\n>\n> > pg_monitor\n> > Read/execute various monitoring views and functions. 
This role is a\n> > member of pg_read_all_settings, pg_read_all_stats and\n> > pg_stat_scan_tables.\n\nI'm not sure if I got this right, but I added some more text to point\nout that the pg_read_all_stats role can also access one specific\nfunction. I personally think it's a bit too detailed, and if we wanted\nto add details it should be formatted differently, which would require\na more invasive patch (would prefer leaving that out, as it might even\nmean moving parts which are not part of this patch).\n\nIn any case, I hope the change fits what you've kindly pointed out.\n\n> 0004:\n>\n> Looks fine by me.\n\nHere goes v3 of the patch\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Wed, 3 Jun 2020 13:32:28 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi, Martin.\n\nAt Wed, 3 Jun 2020 13:32:28 -0300, Martín Marqués <martin@2ndquadrant.com> wrote in \n> Hi Kyotaro-san,\n> \n> Thank you for taking the time to review my patches. Would you like to\n> set yourself as a reviewer in the commit entry here?\n> https://commitfest.postgresql.org/28/2577/\n\nDone.\n\n> > 0002:\n> >\n> > It is forgetting to grant pg_read_all_stats to execute\n> > pg_show_replication_origin_status. 
As the result pg_read_all_stats\n> > gets error on executing the function, not on doing select on the view.\n> \n> Seems I was testing on a cluster where I had already been digging, so\n> pg_real_all_stats had execute privileges on\n> pg_show_replication_origin_status (I had manually granted that) and\n> didn't notice because I forgot to drop the cluster and initialize\n> again.\n> \n> Thanks for the pointer here!\n\nSorry for not mentioning it at that time, but about the following diff:\n\n+GRANT SELECT ON pg_replication_origin_status TO pg_read_all_stats;\n\nsystem_views.sql already has a REVOKE command on the view. We should\nput the above just below the REVOKE command.\n\nI'm not sure where to put the GRANT on\npg_show_replication_origin_status(), but maybe it also should be at\nthe same place.\n\n> > 0003:\n> >\n> > It seems to be a take-after of adminpack's documentation, but a\n> > superuser is not the only one on PostgreSQL. The something like the\n> > description in 27.2.2 Viewing Statistics looks more suitable.\n> >\n> > > Superusers and members of the built-in role pg_read_all_stats (see\n> > > also Section 21.5) can see all the information about all sessions.\n> >\n> > Section 21.5 is already saying as follows.\n> >\n> > > pg_monitor\n> > > Read/execute various monitoring views and functions. This role is a\n> > > member of pg_read_all_settings, pg_read_all_stats and\n> > > pg_stat_scan_tables.\n> \n> I'm not sure if I got this right, but I added some more text to point\n> out that the pg_read_all_stats role can also access one specific\n> function. 
I personally think it's a bit too detailed, and if we wanted\n> to add details it should be formatted differently, which would require\n> a more invasive patch (would prefer leaving that out, as it might even\n> mean moving parts which are not part of this patch).\n> \n> In any case, I hope the change fits what you've kindly pointed out.\n\nI forgot to mention it at that time, but the function\npg_show_replication_origin_status is a function to back up\nsystem-views, like pg_stat_get_activity(), pg_show_all_file_settings()\nand so on. Such functions are not documented since users don't need to\ncall them. pg_show_replication_origin_status is not documented for the\nsame reason. Thus we don't need to mention the function.\n\nIn the previous comment, one point I meant is that the \"to the\nsuperuser\" should be \"to superusers\", because a PostgreSQL server\n(cluster) can define multiple superusers. Another is that \"permitted\nto other users by using the GRANT command.\" might be obscure for\nusers. In this regard I found a more specific description in the same\nfile:\n\n Computes the total disk space used by the database with the specified\n name or OID. 
To use this function, you must\n have <literal>CONNECT</literal> privilege on the specified database\n (which is granted by default) or be a member of\n the <literal>pg_read_all_stats</literal> role.\n\nSo, as the result it would be like the following: (Note that, as you\nknow, I'm not good at this kind of task..)\n\n Use of functions for replication origin is restricted to superusers.\n Use for these functions may be permitted to other users by granting\n <literal>EXECUTE<literal> privilege on the functions.\n\nAnd in regard to the view, granting privileges on both the view and\nfunction to individual user is not practical so we should mention only\ngranting pg_read_all_stats to users, like the attached patch.\n\n> > 0004:\n> >\n> > Looks fine by me.\n> \n> Here goes v3 of the patch\n\nBy the way, the attachements of your mail are out-of-order. I'm not\nsure that that does something bad, though.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 04 Jun 2020 16:10:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hi Kyotaro-san,\n\n> Sorry for not mentioning it at that time, but about the following diff:\n>\n> +GRANT SELECT ON pg_replication_origin_status TO pg_read_all_stats;\n>\n> system_views.sql already has a REVOKE command on the view. We should\n> put the above just below the REVOKE command.\n>\n> I'm not sure where to put the GRANT on\n> pg_show_replication_origin_status(), but maybe it also should be at\n> the same place.\n\nYes, I agree that it makes the revoking/granting easier to read if\nit's grouped by objects, or groups of objects.\n\nDone.\n\n> In the previous comment, one point I meant is that the \"to the\n> superuser\" should be \"to superusers\", because a PostgreSQL server\n> (cluster) can define multiple superusers. 
Another is that \"permitted\n> to other users by using the GRANT command.\" might be obscure for\n> users. In this regard I found a more specific description in the same\n> file:\n\nOK, now I understand what you were saying. :-)\n\n> Computes the total disk space used by the database with the specified\n> name or OID. To use this function, you must\n> have <literal>CONNECT</literal> privilege on the specified database\n> (which is granted by default) or be a member of\n> the <literal>pg_read_all_stats</literal> role.\n>\n> So, as the result it would be like the following: (Note that, as you\n> know, I'm not good at this kind of task..)\n>\n> Use of functions for replication origin is restricted to superusers.\n> Use for these functions may be permitted to other users by granting\n> <literal>EXECUTE<literal> privilege on the functions.\n>\n> And in regard to the view, granting privileges on both the view and\n> function to individual user is not practical so we should mention only\n> granting pg_read_all_stats to users, like the attached patch.\n\nI did some re-writing of the doc, which is pretty close to what you\nproposed above.\n\n> By the way, the attachements of your mail are out-of-order. 
I'm not\n> sure that that does something bad, though.\n\nThat's likely Gmail giving them random order when you attach multiple\nfiles all at once.\n\nNew patches attached.\n\nRegards,\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Thu, 4 Jun 2020 09:17:18 -0300", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <martin@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Thu, 4 Jun 2020 at 21:17, Martín Marqués <martin@2ndquadrant.com> wrote:\n>\n> Hi Kyotaro-san,\n>\n> > Sorry for not mentioning it at that time, but about the following diff:\n> >\n> > +GRANT SELECT ON pg_replication_origin_status TO pg_read_all_stats;\n> >\n> > system_views.sql already has a REVOKE command on the view. We should\n> > put the above just below the REVOKE command.\n> >\n> > I'm not sure where to put the GRANT on\n> > pg_show_replication_origin_status(), but maybe it also should be at\n> > the same place.\n>\n> Yes, I agree that it makes the revoking/granting easier to read if\n> it's grouped by objects, or groups of objects.\n>\n> Done.\n>\n> > In the previous comment, one point I meant is that the \"to the\n> > superuser\" should be \"to superusers\", because a PostgreSQL server\n> > (cluster) can define multiple superusers. Another is that \"permitted\n> > to other users by using the GRANT command.\" might be obscure for\n> > users. In this regard I found a more specific description in the same\n> > file:\n>\n> OK, now I understand what you were saying. :-)\n>\n> > Computes the total disk space used by the database with the specified\n> > name or OID. 
To use this function, you must\n> > have <literal>CONNECT</literal> privilege on the specified database\n> > (which is granted by default) or be a member of\n> > the <literal>pg_read_all_stats</literal> role.\n> >\n> > So, as the result it would be like the following: (Note that, as you\n> > know, I'm not good at this kind of task..)\n> >\n> > Use of functions for replication origin is restricted to superusers.\n> > Use for these functions may be permitted to other users by granting\n> > <literal>EXECUTE<literal> privilege on the functions.\n> >\n> > And in regard to the view, granting privileges on both the view and\n> > function to individual user is not practical so we should mention only\n> > granting pg_read_all_stats to users, like the attached patch.\n>\n> I did some re-writing of the doc, which is pretty close to what you\n> proposed above.\n>\n> > By the way, the attachements of your mail are out-of-order. I'm not\n> > sure that that does something bad, though.\n>\n> That's likely Gmail giving them random order when you attach multiple\n> files all at once.\n>\n> New patches attached.\n>\n\nI've looked at these patches and have one question:\n\n REVOKE ALL ON pg_replication_origin_status FROM public;\n\n+GRANT SELECT ON pg_replication_origin_status TO pg_read_all_stats;\n\n+REVOKE EXECUTE ON FUNCTION pg_show_replication_origin_status() FROM public;\n+\n+GRANT EXECUTE ON FUNCTION pg_show_replication_origin_status() TO\npg_read_all_stats;\n\nI thought that this patch has pg_replication_origin_status view behave\nlike other pg_stat_* views in terms of privileges but it's slightly\ndifferent. For instance, since we grant all privileges on\npg_stat_replication to public by default, the only user who either is\na member of pg_read_all_stats or is superuser can see all values but\nother users not having such privileges also can access that view and\nsee the part of statistics. 
On the other hand, with this patch, we\nallow only user who either is a member of pg_read_all_stats or is\nsuperuser to access pg_replication_origin_status view. Other users\ncannot even access to that view. Is there any reason why we grant\nselect privilege to only pg_read_all_stats? I wonder if we can have\npg_replication_origin_status accessible by public and filter some\ncolumn data in pg_show_replication_origin_status() that we don't want\nto show to users who neither a member of pg_read_all_stats nor\nsuperuser.\n\nThere is a typo in 0001 patch:\n\n+--\n+-- Permision to execute Replication Origin functions should be\nrevoked from public\n+--\n\ns/Permision/Permission/\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 8 Jun 2020 16:21:45 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hello, Martín.\n\nThanks for the new version.\n\nAt Thu, 4 Jun 2020 09:17:18 -0300, Martín Marqués <martin@2ndquadrant.com> wrote in \n> > I'm not sure where to put the GRANT on\n> > pg_show_replication_origin_status(), but maybe it also should be at\n> > the same place.\n> \n> Yes, I agree that it makes the revoking/granting easier to read if\n> it's grouped by objects, or groups of objects.\n> \n> Done.\n\n0002 looks fine to me.\n\n> > In the previous comment, one point I meant is that the \"to the\n> > superuser\" should be \"to superusers\", because a PostgreSQL server\n> > (cluster) can define multiple superusers. Another is that \"permitted\n> > to other users by using the GRANT command.\" might be obscure for\n> > users. In this regard I found a more specific description in the same\n> > file:\n> \n> OK, now I understand what you were saying. 
:-)\n\nI'm happy to hear that:)\n\n> > So, as the result it would be like the following: (Note that, as you\n> > know, I'm not good at this kind of task..)\n> >\n> > Use of functions for replication origin is restricted to superusers.\n> > Use for these functions may be permitted to other users by granting\n> > <literal>EXECUTE<literal> privilege on the functions.\n> >\n> > And in regard to the view, granting privileges on both the view and\n> > function to individual user is not practical so we should mention only\n> > granting pg_read_all_stats to users, like the attached patch.\n> \n> I did some re-writing of the doc, which is pretty close to what you\n> proposed above.\n\n(0003) Unfortunately, the closing tag of EXECUTE is missing prefixing\n'/'.\n\nI see many nearby occurrences of \"This function is restricted to\nsuperusers by default, but other users can be granted EXECUTE to run\nthe function\". I'm not sure, but it might be better to use the same\nexpression, but I don't insist on that. It's not changed in the\nattached.\n\n> > By the way, the attachements of your mail are out-of-order. I'm not\n> > sure that that does something bad, though.\n> \n> That's likely Gmail giving them random order when you attach multiple\n> files all at once.\n> \n> New patches attached.\n\n- I'm fine with the direction of this patch. Works as expected, that\n is, makes no changes of behavior for replication origin functions,\n and pg_read_all_stats can read the pg_replication_origin_status\n view.\n\n- The patches cleanly applied to the current HEAD and can be compiled\n with a minor fix (fixed in the attached v5).\n\n- The patches should be merged but I'll left that for committer.\n\n- The commit titles are too long. Each of them should be split up into\n a brief title and a description. 
But I think committers would rewrite\n them for the final patch to commit so I don't think we don't need to\n rewrite them right now.\n\nI'll wait for a couple of days for comments from others or opinions\nbefore moving this to Ready for Committer.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 08 Jun 2020 17:22:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "At Mon, 8 Jun 2020 16:21:45 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> I've looked at these patches and have one question:\n> \n> REVOKE ALL ON pg_replication_origin_status FROM public;\n> \n> +GRANT SELECT ON pg_replication_origin_status TO pg_read_all_stats;\n> \n> +REVOKE EXECUTE ON FUNCTION pg_show_replication_origin_status() FROM public;\n> +\n> +GRANT EXECUTE ON FUNCTION pg_show_replication_origin_status() TO\n> pg_read_all_stats;\n> \n> I thought that this patch has pg_replication_origin_status view behave\n> like other pg_stat_* views in terms of privileges but it's slightly\n> different. For instance, since we grant all privileges on\n> pg_stat_replication to public by default, the only user who either is\n> a member of pg_read_all_stats or is superuser can see all values but\n> other users not having such privileges also can access that view and\n> see the part of statistics. On the other hand, with this patch, we\n> allow only user who either is a member of pg_read_all_stats or is\n> superuser to access pg_replication_origin_status view. Other users\n> cannot even access to that view. Is there any reason why we grant\n> select privilege to only pg_read_all_stats? 
I wonder if we can have\n> pg_replication_origin_status accessible by public and filter some\n> column data in pg_show_replication_origin_status() that we don't want\n> to show to users who neither a member of pg_read_all_stats nor\n> superuser.\n\nYeah, I agree to this (and wrote something like that before).\n\nOn the other hand Martín seems to just want to allow other users to\nsee it while preserving the current behavior. I also understand that\nthought.\n\n> There is a typo in 0001 patch:\n> \n> +--\n> +-- Permision to execute Replication Origin functions should be\n> revoked from public\n> +--\n> \n> s/Permision/Permission/\n\nMmm. Right.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Jun 2020 17:44:56 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Mon, Jun 08, 2020 at 05:44:56PM +0900, Kyotaro Horiguchi wrote:\n> Mmm. Right.\n\nYep. I bumped on that myself. I am not sure about 0002 and 0004 yet,\nand IMO they are not mandatory pieces, but from what I can see in the\nset 0001 and 0003 can just be squashed together to remove those\nsuperuser checks, and no spots within the twelve functions calling\nreplorigin_check_prerequisites() are missing a REVOKE. So something\nlike the attached could just happen first, no? If the rights of\npg_read_all_stats need to be extended, it would always be possible to\ndo so once the attached is done with a custom script.\n\nAlso, why don't we use this occasion to do the same thing for the\nfunctions working on replication slots? While we are looking at this\narea, we may as well just do it. 
Here is the set of functions that\nwould be involved:\n- pg_create_physical_replication_slot\n- pg_create_logical_replication_slot\n- pg_replication_slot_advance\n- pg_drop_replication_slot\n- pg_copy_logical_replication_slot (3 functions)\n- pg_copy_physical_replication_slot (2 functions)\n--\nMichael", "msg_date": "Tue, 9 Jun 2020 15:11:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Tue, 9 Jun 2020 at 15:11, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jun 08, 2020 at 05:44:56PM +0900, Kyotaro Horiguchi wrote:\n> > Mmm. Right.\n>\n> Yep. I bumped on that myself. I am not sure about 0002 and 0004 yet,\n> and IMO they are not mandatory pieces, but from what I can see in the\n> set 0001 and 0003 can just be squashed together to remove those\n> superuser checks, and no spots within the twelve functions calling\n> replorigin_check_prerequisites() are missing a REVOKE. So something\n> like the attached could just happen first, no? If the rights of\n> pg_read_all_stats need to be extended, it would always be possible to\n> do so once the attached is done with a custom script.\n\nOne thing I'm concerned with this change is that we will end up\nneeding to grant both execute on pg_show_replication_origin_status()\nand select on pg_replication_origin_status view when we want a\nnon-super user to access pg_replication_origin_status. It’s unlikely\nthat the user can grant both privileges at once as\npg_show_replication_origin_status() is not documented.\n\n>\n> Also, why don't we use this occation to do the same thing for the\n> functions working on replication slots? While we are looking at this\n> area, we may as well just do it. 
Here is the set of functions that\n> would be involved:\n> - pg_create_physical_replication_slot\n> - pg_create_logical_replication_slot\n> - pg_replication_slot_advance\n> - pg_drop_replication_slot\n> - pg_copy_logical_replication_slot (3 functions)\n> - pg_copy_physical_replication_slot (2 functions)\n\nA user having a replication privilege already is able to execute these\nfunctions. Do you mean to ease it so that a user also executes them\nwithout replication privilege?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jun 2020 15:32:24 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Tue, Jun 09, 2020 at 03:32:24PM +0900, Masahiko Sawada wrote:\n> One thing I'm concerned with this change is that we will end up\n> needing to grant both execute on pg_show_replication_origin_status()\n> and select on pg_replication_origin_status view when we want a\n> non-super user to access pg_replication_origin_status. It’s unlikely\n> that the user can grant both privileges at once as\n> pg_show_replication_origin_status() is not documented.\n\nNot sure if that's worth worrying. We have similar cases like that,\ntake for example pg_file_settings with pg_show_all_file_settings()\nwhich requires both a SELECT ACL on pg_file_settings and an EXECUTE\nACL on pg_show_all_file_settings(). My point is that if you issue a\nGRANT SELECT on the catalog view, the user can immediately see when\ntrying to query it that an extra execution is needed.\n\n> A user having a replication privilege already is able to execute these\n> functions. Do you mean to ease it so that a user also executes them\n> without replication privilege?\n\nArf. 
Please forget what I wrote here, the hardcoded check for\nreplication rights would be a problem.\n--\nMichael", "msg_date": "Tue, 9 Jun 2020 16:35:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Tue, 9 Jun 2020 at 16:36, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jun 09, 2020 at 03:32:24PM +0900, Masahiko Sawada wrote:\n> > One thing I'm concerned with this change is that we will end up\n> > needing to grant both execute on pg_show_replication_origin_status()\n> > and select on pg_replication_origin_status view when we want a\n> > non-super user to access pg_replication_origin_status. It’s unlikely\n> > that the user can grant both privileges at once as\n> > pg_show_replication_origin_status() is not documented.\n>\n> Not sure if that's worth worrying. We have similar cases like that,\n> take for example pg_file_settings with pg_show_all_file_settings()\n> which requires both a SELECT ACL on pg_file_settings and an EXECUTE\n> ACL on pg_show_all_file_settings(). My point is that if you issue a\n> GRANT SELECT on the catalog view, the user can immediately see when\n> trying to query it that an extra execution is needed.\n\nOh, I see. 
There might be room for improvement but it's a separate issue.\n\nI agreed with the change you proposed.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jun 2020 17:07:39 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "At Tue, 9 Jun 2020 16:35:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \r\n> On Tue, Jun 09, 2020 at 03:32:24PM +0900, Masahiko Sawada wrote:\r\n> > One thing I'm concerned with this change is that we will end up\r\n> > needing to grant both execute on pg_show_replication_origin_status()\r\n> > and select on pg_replication_origin_status view when we want a\r\n> > non-super user to access pg_replication_origin_status. It’s unlikely\r\n> > that the user can grant both privileges at once as\r\n> > pg_show_replication_origin_status() is not documented.\r\n\r\nI also concerned that, but normally all that we should do to that is\r\nGRANTing pg_read_all_stats to the role. I don't think there is a case\r\nwhere someone wants to allow the view to a user, who should not be\r\nallowed to see other stats views.\r\n\r\n> Not sure if that's worth worrying. We have similar cases like that,\r\n> take for example pg_file_settings with pg_show_all_file_settings()\r\n> which requires both a SELECT ACL on pg_file_settings and an EXECUTE\r\n> ACL on pg_show_all_file_settings(). My point is that if you issue a\r\n> GRANT SELECT on the catalog view, the user can immediately see when\r\n> trying to query it that an extra execution is needed.\r\n\r\nI agree to that as far as that is not the typical use case, and I\r\ndon't think that that's the typical use case.\r\n\r\n> > A user having a replication privilege already is able to execute these\r\n> > functions. 
Do you mean to ease it so that a user also executes them\r\n> > without replication privilege?\r\n> \r\n> Arf. Please forget what I wrote here, the hardcoded check for\r\n> replication rights would be a problem.\r\n\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Tue, 09 Jun 2020 17:13:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "At Tue, 9 Jun 2020 15:11:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Jun 08, 2020 at 05:44:56PM +0900, Kyotaro Horiguchi wrote:\n> > Mmm. Right.\n> \n> Yep. I bumped on that myself. I am not sure about 0002 and 0004 yet,\n> and IMO they are not mandatory pieces, but from what I can see in the\n> set 0001 and 0003 can just be squashed together to remove those\n> superuser checks, and no spots within the twelve functions calling\n> replorigin_check_prerequisites() are missing a REVOKE. So something\n> like the attached could just happen first, no? If the rights of\n\nIt looks fine to me.\n\n> pg_read_all_stats need to be extended, it would always be possible to\n> do so once the attached is done with a custom script.\n\nThe depends on whether it is valid and safe (and useful) to reveal the\nview to pg_read_all_stats. If that is the case the additional\nprivilege should be granted for the monitoring sake by default. And I\nthink revealing the view to the role is valid and safe, and useful.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Jun 2020 17:34:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Tue, Jun 09, 2020 at 05:07:39PM +0900, Masahiko Sawada wrote:\n> Oh, I see. 
There might be room for improvement but it's a separate issue.\n> \n> I agreed with the change you proposed.\n\nOK, thanks. Then let's wait a couple of days to see if anybody has\nany objections with the removal of the hardcoded superuser check\nfor those functions.\n--\nMichael", "msg_date": "Wed, 10 Jun 2020 12:35:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Wed, Jun 10, 2020 at 12:35:49PM +0900, Michael Paquier wrote:\n> On Tue, Jun 09, 2020 at 05:07:39PM +0900, Masahiko Sawada wrote:\n>> I agreed with the change you proposed.\n> \n> OK, thanks. Then let's wait a couple of days to see if anybody has\n> any objections with the removal of the hardcoded superuser check\n> for those functions.\n\nCommitted the part removing the superuser checks as of cc07264.\n--\nMichael", "msg_date": "Sun, 14 Jun 2020 12:46:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 10, 2020 at 12:35:49PM +0900, Michael Paquier wrote:\n>> OK, thanks. 
Then let's wait a couple of days to see if anybody has\n>> any objections with the removal of the hardcoded superuser check\n>> for those functions.\n\n> Committed the part removing the superuser checks as of cc07264.\n\nFWIW, I'd have included a catversion bump in this, to enforce that\nthe modified backend functions are used with matching pg_proc entries.\nIt's not terribly important at this phase of the devel cycle, but still\nsomebody might wonder why the regression tests are failing for them\n(if they tried to skip an initdb).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Jun 2020 00:44:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Mon, Jun 15, 2020 at 12:44:02AM -0400, Tom Lane wrote:\n> FWIW, I'd have included a catversion bump in this, to enforce that\n> the modified backend functions are used with matching pg_proc entries.\n> It's not terribly important at this phase of the devel cycle, but still\n> somebody might wonder why the regression tests are failing for them\n> (if they tried to skip an initdb).\n\nThanks, I am fixing that now.\n\n(It may be effective to print a T-shirt of that at some point and give\nit to people scoring the most in this area..)\n--\nMichael", "msg_date": "Mon, 15 Jun 2020 15:35:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "\n\n> On 14 Jun 2020, at 05:46, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Jun 10, 2020 at 12:35:49PM +0900, Michael Paquier wrote:\n>> On Tue, Jun 09, 2020 at 05:07:39PM +0900, Masahiko Sawada wrote:\n>>> I agreed with the change you proposed.\n>> \n>> OK, thanks. 
Then let's wait a couple of days to see if anybody has\n>> any objections with the removal of the hardcoded superuser check\n>> for those functions.\n> \n> Committed the part removing the superuser checks as of cc07264.\n\nAFAICT from the thread there is nothing left of this changeset to consider, so\nI've marked the entry as committed in the CF app. Feel free to update in case\nI've missed something.\n\ncheers ./daniel\n\n", "msg_date": "Thu, 2 Jul 2020 15:03:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "On Thu, Jul 02, 2020 at 03:03:22PM +0200, Daniel Gustafsson wrote:\n> AFAICT from the thread there is nothing left of this changeset to consider, so\n> I've marked the entry as committed in the CF app. Feel free to update in case\n> I've missed something.\n\nA second item discussed in this thread was if we should try to grant\nmore privileges to pg_read_all_stats when it comes to replication\norigins, that's why I let the entry in the CF app. Now, the thread\nhas stalled and what has been committed in cc07264 allows to do this\nchange anyway, so marking the entry as committed sounds fine to me.\nThanks!\n--\nMichael", "msg_date": "Fri, 3 Jul 2020 14:03:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" }, { "msg_contents": "Hello,\n\nI wanted to resurface this thread.\n\nThe original intention I had with this patch I sent over a year ago\nwas to have the possibility for monitoring ROLEs like pg_monitor and\npg_read_all_stats to have read access for the replication origin\nstatus. Seems the patch only got half way through (we removed the\nsuperuser hardcoded restriction).\n\nToo bad I didn't notice this until 14 got out, or I'd have done this\nmuch earlier. 
Well, maybe it's time to do it now :)\n\nSending a patch to change the privileges on the view and\nfunction called by the view.\n\nThe only thing I'm not sure, but can amend, is if we need tests for\nthis change (that would be something like switching ROLE to\npg_read_all_stats and query the pg_replication_origin_status, checking\nwe get the right result).\n\nKind regards, Martín\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see", "msg_date": "Mon, 15 Nov 2021 14:45:09 -0300", "msg_from": "Martín Marqués <martin.marques@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Read access for pg_monitor to pg_replication_origin_status view" } ]
[ { "msg_contents": "The comment in be-secure-openssl.c didn't get the memo that the hardcoded DH\nparameters moved in 573bd08b99e277026e87bb55ae69c489fab321b8. The attached\nupdates the wording, keeping it generic enough that we won't need to update it\nshould the parameters move again.\n\ncheers ./daniel", "msg_date": "Thu, 28 May 2020 17:15:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Thu, May 28, 2020 at 05:15:17PM +0200, Daniel Gustafsson wrote:\n> The comment in be-secure-openssl.c didn't get the memo that the hardcoded DH\n> parameters moved in 573bd08b99e277026e87bb55ae69c489fab321b8. The attached\n> updates the wording, keeping it generic enough that we won't need to update it\n> should the parameters move again.\n\nIndeed, looks good to me. I'll go fix, just let's wait and see first\nif others have any comments.\n--\nMichael", "msg_date": "Fri, 29 May 2020 14:38:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Fri, May 29, 2020 at 02:38:53PM +0900, Michael Paquier wrote:\n> Indeed, looks good to me. I'll go fix, just let's wait and see first\n> if others have any comments.\n\nActually, I was reading again the new sentence, and did not like its\nfirst part. 
Here is a rework that looks much better to me:\n * Load hardcoded DH parameters.\n *\n- * To prevent problems if the DH parameter files don't even exist, we\n- * can load hardcoded DH parameters supplied with the backend.\n+ * If DH parameters cannot be loaded from a specified file, we can load\n+ * the hardcoded DH parameters supplied with the backend to prevent\n+ * problems.\n\nDaniel, is that fine for you?\n--\nMichael", "msg_date": "Sun, 31 May 2020 15:54:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Sun, May 31, 2020 at 2:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, May 29, 2020 at 02:38:53PM +0900, Michael Paquier wrote:\n> > Indeed, looks good to me. I'll go fix, ust let's wait and see first\n> > if others have any comments.\n>\n> Actually, I was reading again the new sentence, and did not like its\n> first part. Here is a rework that looks much better to me:\n> * Load hardcoded DH parameters.\n> *\n> - * To prevent problems if the DH parameter files don't even exist, we\n> - * can load hardcoded DH parameters supplied with the backend.\n> + * If DH parameters cannot be loaded from a specified file, we can load\n> + * the hardcoded DH parameters supplied with the backend to prevent\n> + * problems.\n>\n> Daniel, is that fine for you?\n\nI don't understand why that change is an improvement.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 31 May 2020 17:47:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Sun, May 31, 2020 at 05:47:01PM -0400, Robert Haas wrote:\n> On Sun, May 31, 2020 at 2:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n> I don't understand why that change is an improvement.\n\nOops. 
I have managed to copy-paste an incorrect diff. The existing\ncomment is that:\n * To prevent problems if the DH parameters files don't even\n * exist, we can load DH parameters hardcoded into this file.\n\nDaniel's suggestion is that:\n * To prevent problems if the DH parameters files don't even \n * exist, we can load hardcoded DH parameters supplied with the backend.\n\nAnd my own suggestion became that:\n * If DH parameters cannot be loaded from a specified file, we can load\n * the hardcoded DH parameters supplied with the backend to prevent\n * problems.\n\nThe problem I have with first and second flavors is that \"DH\nparameters files\" does not sound right. First, the grammar sounds\nincorrect to me as in this case \"parameters\" should not be plural.\nSecond, it is only possible to load one file with ssl_dh_params_file,\nand we only attempt to load this single file within initialize_dh().\n\nOf course it would be possible to just switch to \"DH parameter file\"\nin the first part of the sentence, but I have just finished by\nrewriting the whole thing, as the third flavor.\n--\nMichael", "msg_date": "Mon, 1 Jun 2020 15:06:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "> On 1 Jun 2020, at 08:06, Michael Paquier <michael@paquier.xyz> wrote:\n\n> The problem I have with first and second flavors is that \"DH\n> parameters files\" does not sound right. First, the grammar sounds\n> incorrect to me as in this case \"parameters\" should not be plural.\n\nI think \"parameters\" is the right term here, as the shared secret is determines\na set of Diffie-Hellman parameters.\n\n> Second, it is only possible to load one file with ssl_dh_params_file,\n> and we only attempt to load this single file within initialize_dh().\n\nThats correct, this is a leftover from when we allowed for different DH sizes\nand loaded the appropriate file. 
This was removed in c0a15e07cd718cb6e455e683\nin favor of only using 2048.\n\n> Of course it would be possible to just switch to \"DH parameter file\"\n> in the first part of the sentence, but I have just finished by\n> rewriting the whole thing, as the third flavor.\n\nI don't have a problem with the existing wording of the first sentence, and I\ndon't have a problem with your suggestion either (as long as it's parameters in\nplural).\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jun 2020 10:39:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Mon, Jun 01, 2020 at 10:39:45AM +0200, Daniel Gustafsson wrote:\n> I don't have a problem with the existing wording of the first sentence, and I\n> don't have a problem with your suggestion either (as long as it's parameters in\n> plural).\n\nThanks, that's why I kept the word plural in my own suggestion. I was\njust reading through the whole set again, and still kind of prefer the\nlast flavor, so I think that I'll just fix it this way tomorrow and\ncall it a day.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 21:38:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "> On 3 Jun 2020, at 14:38, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Jun 01, 2020 at 10:39:45AM +0200, Daniel Gustafsson wrote:\n>> I don't have a problem with the existing wording of the first sentence, and I\n>> don't have a problem with your suggestion either (as long as it's parameters in\n>> plural).\n> \n> Thanks, that's why I kept the word plural in my own suggestion. 
I was\n> just reading through the whole set again, and still kind of prefer the\n> last flavor, so I think that I'll just fix it this way tomorrow and\n> call it a day.\n\nSounds good, thanks!\n\ncheers ./daniel\n\n", "msg_date": "Wed, 3 Jun 2020 14:40:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" }, { "msg_contents": "On Wed, Jun 03, 2020 at 02:40:54PM +0200, Daniel Gustafsson wrote:\n> Sounds good, thanks!\n\nOkay, done then.\n--\nMichael", "msg_date": "Thu, 4 Jun 2020 13:16:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment in be-secure-openssl.c" } ]
[ { "msg_contents": "Hi!\n\nI think I’ve found a bug related to the strength of collations. Attached is a\nWIP patch that addresses some other issues too.\n\nIn this example, the conflict of implicit collations propagates correctly:\n\n WITH data (c, posix) AS (\n values ('a' COLLATE \"C\", 'b' COLLATE \"POSIX\")\n )\n SELECT *\n FROM data\n ORDER BY ( c || posix ) || posix\n\n ERROR: collation mismatch between implicit collations \"C\" and \"POSIX\"\n LINE 6: ORDER BY ( c || posix ) || posix;\n ^\n HINT: You can choose the collation by applying the COLLATE clause to one or both expressions.\n\nHowever, if the conflict happens in a subquery, it doesn’t anymore:\n\n WITH data (c, posix) AS (\n values ('a' COLLATE \"C\", 'b' COLLATE \"POSIX\")\n )\n SELECT *\n FROM (SELECT *, c || posix AS none FROM data) data\n ORDER BY none || posix;\n\n c | posix | none\n ---+-------+------\n a | b | ab\n (1 row)\n\nThe problem is in parse_collate.c:566: A Var (and some other nodes) without\nvalid collation gets the strength COLLATE_NONE:\n\n if (OidIsValid(collation))\n strength = COLLATE_IMPLICIT;\n else\n strength = COLLATE_NONE;\n\nHowever, for a Var it should be COLLATE_CONFLICT, which corresponds to the\nstandard's collation derivation “none”. 
I guess there could be other similar\ncases which I don’t know about.\n\nThe patch fixes that, plus necessary consequences (error handling for new\nscenarios) as well as some “cosmetic” improvements I found along the way.\n\nUnnecessary to mention that this patch might break existing code.\n\nWith the patch, the second example from above fails similarly to the first example:\n\n ERROR: collation mismatch between implicit collation \"POSIX\" and unknown collation\n LINE 6: ORDER BY none || posix;\n ^\n HINT: You can choose the collation by applying the COLLATE clause to one or both expressions.\n\nOther changes in the patch:\n\n** Error Handling\n\nThe current parse time error handling of implicit collisions always has both\ncolliding collations. The patch allows a COLLATE_CONFLICT without knowing which\ncollations caused the conflict (it’s not stored in the Var node). Thus we may\nknow two, one or none of the collations that contributed to the collision.\n\nThe new function ereport_implicit_collation_mismatch takes care of that.\n\n** Renaming COLLATE_NONE\n\nI found the name COLLATE_NONE a little bit unfortunate as it can easily be\nmistaken for the collation derivation “none” the SQL standard uses. I renamed\nit to COLLATE_NA for “not applicable”. The standard doesn’t have a word for\nthat as non character string types just don’t have collations.\n\n** Removal of location2\n\nThe code keeps track of the location (for parser_errposition) of both\ncollations that collided, even though it reports only one of them. The patch\nremoves location2.\n\nDue to this, some errors report the other location as before (previously\nthe second, now the first). See collate.out in the patch. 
The patch has TODO\ncomments for code that would be needed to take the other one.\n\nIs there any policy to report the second location or to strictly keep the\nerrors where they used to be?\n\n** General cleanup\n\nThe patch also removes a little code that I think is not needed (anymore).\n\n** Tests\n\nBesides the errposition, only one existing test breaks. It previously triggered\na runtime error, now it triggers a parse time error:\n\n SELECT a, b FROM collate_test1 UNION ALL SELECT a, b FROM collate_test2 ORDER BY 2; -- fail\n -ERROR: could not determine which collation to use for string comparison\n -HINT: Use the COLLATE clause to set the collation explicitly.\n +ERROR: collation mismatch between implicit collations\n +LINE 1: SELECT a, b FROM collate_test1 UNION ALL SELECT a, b FROM co...\n + ^\n +HINT: Use the COLLATE clause to set the collation explicitly\n\nThe patch also adds a new test to trigger the problem in the first place.\n\nThe patch is against REL_12_STABLE but applies to master too.\n\n-markus", "msg_date": "Thu, 28 May 2020 23:22:39 +0200", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": true, "msg_subject": "Conflict of implicit collations doesn't propagate out of subqueries" }, { "msg_contents": "Markus Winand <markus.winand@winand.at> writes:\n> However, if the conflict happens in a subquery, it doesn’t anymore:\n\n> WITH data (c, posix) AS (\n> values ('a' COLLATE \"C\", 'b' COLLATE \"POSIX\")\n> )\n> SELECT *\n> FROM (SELECT *, c || posix AS none FROM data) data\n> ORDER BY none || posix;\n\n> c | posix | none\n> ---+-------+------\n> a | b | ab\n> (1 row)\n\nI'm not exactly convinced this is a bug. Can you cite chapter and verse\nin the spec to justify throwing an error?\n\nAIUI, collation conflicts can only occur within a single expression, and\nthis is not that. 
Moreover, even if data.none arguably has no collation,\ntreating it from outside the sub-query as having collation strength \"none\"\nseems to me to be similar to our policy of promoting unknown-type subquery\noutputs to type \"text\" rather than leaving them to cause trouble later.\nIt's not pedantically correct, but nobody liked the old behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 17:43:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Conflict of implicit collations doesn't propagate out of\n subqueries" }, { "msg_contents": "\n\n> On 28.05.2020, at 23:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Markus Winand <markus.winand@winand.at> writes:\n>> However, if the conflict happens in a subquery, it doesn’t anymore:\n> \n>> WITH data (c, posix) AS (\n>> values ('a' COLLATE \"C\", 'b' COLLATE \"POSIX\")\n>> )\n>> SELECT *\n>> FROM (SELECT *, c || posix AS none FROM data) data\n>> ORDER BY none || posix;\n> \n>> c | posix | none\n>> ---+-------+------\n>> a | b | ab\n>> (1 row)\n> \n> I'm not exactly convinced this is a bug. Can you cite chapter and verse\n> in the spec to justify throwing an error?\n\nI think it is 6.6 Syntax Rule 17:\n\n • 17) If the declared type of a <basic identifier chain> BIC is character string, then the collation derivation of the declared type of BIC is\n\n Case:\n\n • a) If the declared type has a declared type collation DTC, then implicit.\n\n • b) Otherwise, none.\n\nThat gives derivation “none” to the column.\n\nWhen this is concatenated, 9.5 (\"Result of data type combinations”) SR 3 a ii 3 applies:\n\n • ii) The collation derivation and declared type collation of the result are determined as follows. Case:\n\n • 1) If some data type in DTS has an explicit collation derivation [… doesn’t apply]\n • 2) If every data type in DTS has an implicit collation derivation, then [… doesn’t apply beause of “every\"]\n • 3) Otherwise, the collation derivation is none. 
[applies]\n\nAlso, the standard doesn’t have a fourth derivation (strength). It also says that\nnot having a declared type collation implies the derivation “none”. See 4.2.2:\n\n Every declared type that is a character string type has a collation\n derivation, this being either none, implicit, or explicit. The\n collation derivation of a declared type with a declared type collation\n that is explicitly or implicitly specified by a <data type> is implicit.\n If the collation derivation of a declared type that has a declared type\n collation is not implicit, then it is explicit. The collation derivation\n of an expression of character string type that has no declared type\n collation is none.\n\n-markus\n\n\n\n\n", "msg_date": "Fri, 29 May 2020 00:08:49 +0200", "msg_from": "Markus Winand <markus.winand@winand.at>", "msg_from_op": true, "msg_subject": "Re: Conflict of implicit collations doesn't propagate out of\n subqueries" } ]
[ { "msg_contents": "Since OpenSSL is now releasing 3.0.0-alpha versions, I took a look at using it\nwith postgres to see what awaits us. As it is now shipping in releases (with\nGA planned for Q4), users will probably soon start to test against it so I\nwanted to be prepared.\n\nRegarding the deprecations, we can either set preprocessor directives or use\ncompiler flags to silence the warning and do nothing (for now), or we could\nupdate to the new API. We probably want to different things for master vs\nback-branches, but as an illustration of what the latter could look like I've\nimplemented this in 0001.\n\nSSL_CTX_load_verify_locations and X509_STORE_load_locations are deprecated and\nreplaced by individual calls to load files and directories. These are quite\nstraightforward, and are implemented like how we handle the TLS protocol API.\nDH_check has been discouraged to use for quite some time, and is now marked\ndeprecated without a 1:1 replacement. The OpenSSL docs recommends using the\nEVP API, which is also done in 0001. For now I just stuck it in with version\ngated ifdefs, something cleaner than that can clearly be done. 0001 is clearly\nnot proposed for review/commit yet, it's included in case someone else is\ninterested as well.\n\nOpenSSL also deprecates DES keys in 3.0.0, which cause our password callback\ntests to fail with the cryptic error \"fetch failed\", as the test suite keys are\nencrypted with DES. 0002 fixes this by changing to AES256 (randomly chosen\namong the ciphers supported in 1.0.1+ and likely to be around), and could be\napplied already today as there is nothing 3.0.0 specific about it.\n\nOn top of DES keys there are also a lot of deprecations or low-level functions\nwhich breaks pgcrypto in quite a few ways. 
I haven't tackled these yet, but it\nlooks like we have to do the EVP rewrite there.\n\ncheers ./daniel", "msg_date": "Fri, 29 May 2020 00:16:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Fri, 2020-05-29 at 00:16 +0200, Daniel Gustafsson wrote:\n> Since OpenSSL is now releasing 3.0.0-alpha versions, I took a look at using it\n> with postgres to see what awaits us. As it is now shipping in releases (with\n> GA planned for Q4), users will probably soon start to test against it so I\n> wanted to be prepared.\n> \n> Regarding the deprecations, we can either set preprocessor directives or use\n> compiler flags to silence the warning and do nothing (for now), or we could\n> update to the new API. We probably want to different things for master vs\n> back-branches, but as an illustration of what the latter could look like I've\n> implemented this in 0001.\n\nAn important question will be: if we convert to functions that are not deprecated,\nwhat is the earliest OpenSSL version we can support?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 29 May 2020 08:06:30 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 29 May 2020, at 08:06, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n>> Regarding the deprecations, we can either set preprocessor directives or use\n>> compiler flags to silence the warning and do nothing (for now), or we could\n>> update to the new API. 
We probably want to different things for master vs\n>> back-branches, but as an illustration of what the latter could look like I've\n>> implemented this in 0001.\n> \n> An important question will be: if we convert to functions that are not deprecated,\n> what is the earliest OpenSSL version we can support?\n\nThe replacement functions for _locations calls are introduced together with the\ndeprecation in 3.0.0, so there is no overlap.\n\nFor pgcrypto, that remains to be seen once it attempted, but ideally all the\nway down to 1.0.1.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 29 May 2020 09:04:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-29 00:16, Daniel Gustafsson wrote:\n> Regarding the deprecations, we can either set preprocessor directives or use\n> compiler flags to silence the warning and do nothing (for now), or we could\n> update to the new API. We probably want to different things for master vs\n> back-branches, but as an illustration of what the latter could look like I've\n> implemented this in 0001.\n\nI think we should set OPENSSL_API_COMPAT=10001, and move that along with \nwhatever our oldest supported release is going forward. That declares \nour intention, it will silence the deprecation warnings, and IIUC, if \nthe deprecated stuff actually gets removed, you get a clean compiler \nerror that your API level is too low.\n\n> OpenSSL also deprecates DES keys in 3.0.0, which cause our password callback\n> tests to fail with the cryptic error \"fetch failed\", as the test suite keys are\n> encrypted with DES. 0002 fixes this by changing to AES256 (randomly chosen\n> among the ciphers supported in 1.0.1+ and likely to be around), and could be\n> applied already today as there is nothing 3.0.0 specific about it.\n\nIt appears you can load a \"legacy provider\" to support these keys. 
What \nif someone made a key using 0.9.* with an older PostgreSQL version and \nkeeps using the same key? I'm wondering about the implications in \npractice here. Obviously moving off legacy crypto is good in general.\n\nThere is also the question of what to do with the test suites in the \nback branches.\n\nMaybe we will want some user-exposed option about which providers to \nload, since that also affects whether the FIPS provider gets loaded. \nWhat's the plan there?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 29 May 2020 13:34:49 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 29 May 2020, at 13:34, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-05-29 00:16, Daniel Gustafsson wrote:\n>> Regarding the deprecations, we can either set preprocessor directives or use\n>> compiler flags to silence the warning and do nothing (for now), or we could\n>> update to the new API. We probably want to different things for master vs\n>> back-branches, but as an illustration of what the latter could look like I've\n>> implemented this in 0001.\n> \n> I think we should set OPENSSL_API_COMPAT=10001, and move that along with whatever our oldest supported release is going forward. 
That declares our intention, it will silence the deprecation warnings, and IIUC, if the deprecated stuff actually gets removed, you get a clean compiler error that your API level is too low.\n\nI think I know what you mean but just to clarify: I master, back-branches or\nall of the above?\n\nConsidering how little effort it is to not use the deprecated API's I'm not\nentirely convinced, but I don't have too strong opinions there.\n\n>> OpenSSL also deprecates DES keys in 3.0.0, which cause our password callback\n>> tests to fail with the cryptic error \"fetch failed\", as the test suite keys are\n>> encrypted with DES. 0002 fixes this by changing to AES256 (randomly chosen\n>> among the ciphers supported in 1.0.1+ and likely to be around), and could be\n>> applied already today as there is nothing 3.0.0 specific about it.\n> \n> It appears you can load a \"legacy provider\" to support these keys. What if someone made a key using 0.9.* with an older PostgreSQL version and keeps using the same key? I'm wondering about the implications in practice here. Obviously moving off legacy crypto is good in general.\n\nIf they do, then that key will stop working with any OpenSSL 3 enabled software\nunless the legacy provider has been loaded. My understanding is that users can\nload these in openssl.conf, so maybe it's mostly a documentation patch for us?\n\nIff key loading fails with an errormessage that indicates that the algorithm\ncouldn't be fetched (ie fetch failed), we could add an errhint on algorithm\nuse. Currently it's easy to believe that it's the key file that couldn't be\nfetched, as the error message from OpenSSL is quite cryptic and expects the\nuser to understand OpenSSL terminology. 
This could happen already in pre-3\nversions, so maybe it's worthwhile to do?\n\n> There is also the question of what to do with the test suites in the back branches.\n\nIf we don't want to change the testdata in the backbranches, we could add a\nSKIP section for the password key tests iff OpenSSL is 3.0.0+?\n\n> Maybe we will want some user-exposed option about which providers to load, since that also affects whether the FIPS provider gets loaded. What's the plan there?\n\nThis again boils down to if we want to load providers, or if we want to punt\nthat to openssl.conf completely. Since we will support olders versions for a\nlong time still, maybe punting will render the cleanest code?\n\nAFAICT, if care is taken with openssl.conf one could already load providers\nsuch that postgres will operate in FIPS mode due to nothing non-FIPS being\navailable. For libpq I'm not sure if there is anything to do, as we don't\nmandate any cipher use (SSL_CTX_set_cipher_list will simply fail IIUC). For\npgcrypto however it's another story, but there we'd need to rewrite it to not\nuse low-level APIs first since the use of those aren't FIPS compliant. Once\ndone, we could have an option for FIPS mode and pass the \"fips=yes\" property\nwhen loading ciphers to make sure the right version is loaded if multiple ones\nare available.\n\ncheers ./daniel\n\n", "msg_date": "Fri, 29 May 2020 14:45:44 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-29 14:45, Daniel Gustafsson wrote:\n>> I think we should set OPENSSL_API_COMPAT=10001, and move that along with whatever our oldest supported release is going forward. 
That declares our intention, it will silence the deprecation warnings, and IIUC, if the deprecated stuff actually gets removed, you get a clean compiler error that your API level is too low.\n> \n> I think I know what you mean but just to clarify: I master, back-branches or\n> all of the above?\n\nI'm not sure. I don't have a good sense of what OpenSSL versions we \nclaim to support in branches older than PG13. We made a conscious \ndecision for 1.0.1 in PG13, but I seem to recall that that discussion \nalso revealed that the version assumptions before that were quite \ninconsistent. Code in PG12 and before makes references to OpenSSL as \nold as 0.9.6. But OpenSSL 3.0.0 will reject a compat level older than \n0.9.8.\n\nMy proposal would be to introduce OPENSSL_API_COMPAT=10001 into master \nafter the 13/14 branching, along with any other changes to make it \ncompile cleanly against OpenSSL 3.0.0. Once that has survived some \nscrutiny from the buildfarm and also from folks building against \nLibreSSL etc., it should probably be backpatched into PG13. In the \nimmediate future, I wouldn't bother about the older branches (<=PG12) at \nall. As long as they still compile, users can just disable deprecation \nwarnings, and we may add some patches to that effect at some point, but \nit's not like OpenSSL 3.0.0 will be adopted into production builds any \ntime soon.\n\n> Considering how little effort it is to not use the deprecated API's I'm not\n> entirely convinced, but I don't have too strong opinions there.\n\nWell, in the case like X509_STORE_load_locations(), the solution is in \neither case to write a wrapper. It doesn't matter if we write the \nwrapper or OpenSSL writes the wrapper. 
Only OpenSSL has already written \nthe wrapper and has created a well-defined way to declare that you want \nto use the wrapper, so I'd just take that.\n\nIn any case, using OPENSSL_API_COMPAT is also good just for our own \ndocumentation, so we can keep track of what version we claim to support \nin different branches.\n\n> If they do, then that key will stop working with any OpenSSL 3 enabled software\n> unless the legacy provider has been loaded. My understanding is that users can\n> load these in openssl.conf, so maybe it's mostly a documentation patch for us?\n\nYes, it looks like that should work, so no additional work required from us.\n\n>> There is also the question of what to do with the test suites in the back branches.\n> \n> If we don't want to change the testdata in the backbranches, we could add a\n> SKIP section for the password key tests iff OpenSSL is 3.0.0+?\n\nI suggest to update the test data in PG13+, since we require OpenSSL \n1.0.1 there. For the older branches, I would look into changing the \ntest driver setup so that it loads a custom openssl.cnf that loads the \nlegacy providers.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 30 May 2020 11:29:11 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "\nOn 5/28/20 6:16 PM, Daniel Gustafsson wrote:\n>\n> OpenSSL also deprecates DES keys in 3.0.0, which cause our password callback\n> tests to fail with the cryptic error \"fetch failed\", as the test suite keys are\n> encrypted with DES. 0002 fixes this by changing to AES256 (randomly chosen\n> among the ciphers supported in 1.0.1+ and likely to be around), and could be\n> applied already today as there is nothing 3.0.0 specific about it.\n>\n\n+1 for applying this forthwith. 
The key in my recent commit 896fcdb230\nis encrypted with AES256.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Sat, 30 May 2020 08:34:37 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Sat, May 30, 2020 at 11:29:11AM +0200, Peter Eisentraut wrote:\n> I'm not sure. I don't have a good sense of what OpenSSL versions we claim\n> to support in branches older than PG13. We made a conscious decision for\n> 1.0.1 in PG13, but I seem to recall that that discussion also revealed that\n> the version assumptions before that were quite inconsistent. Code in PG12\n> and before makes references to OpenSSL as old as 0.9.6. But OpenSSL 3.0.0\n> will reject a compat level older than 0.9.8.\n\n593d4e4 claims that we only support OpenSSL >= 0.9.8, meaning that\ndown to PG 10 we have this requirement, and that PG 9.6 and 9.5 should\nbe able to work with OpenSSL 0.9.7 and 0.9.6, but little effort has\nbeen put in testing these.\n\n> My proposal would be to introduce OPENSSL_API_COMPAT=10001 into master after\n> the 13/14 branching, along with any other changes to make it compile cleanly\n> against OpenSSL 3.0.0. Once that has survived some scrutiny from the\n> buildfarm and also from folks building against LibreSSL etc., it should\n> probably be backpatched into PG13. In the immediate future, I wouldn't\n> bother about the older branches (<=PG12) at all. 
As long as they still\n> compile, users can just disable deprecation warnings, and we may add some\n> patches to that effect at some point, but it's not like OpenSSL 3.0.0 will\n> be adopted into production builds any time soon.\n\nPlease note that I actually may have to bother about 12 and OpenSSL\n3.0.0 as 1.0.2 deprecation is visibly accelerating a move to newer\nversions at least in my close neighborhood. We are not there yet of\ncourse, so doing this work with 14 and 13 sounds fine to me for now.\n--\nMichael", "msg_date": "Sun, 31 May 2020 11:52:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-30 14:34, Andrew Dunstan wrote:\n> \n> On 5/28/20 6:16 PM, Daniel Gustafsson wrote:\n>>\n>> OpenSSL also deprecates DES keys in 3.0.0, which cause our password callback\n>> tests to fail with the cryptic error \"fetch failed\", as the test suite keys are\n>> encrypted with DES. 0002 fixes this by changing to AES256 (randomly chosen\n>> among the ciphers supported in 1.0.1+ and likely to be around), and could be\n>> applied already today as there is nothing 3.0.0 specific about it.\n>>\n> \n> +1 for applying this forthwith. The key in my recent commit 896fcdb230\n> is encrypted with AES256.\n\nI don't see anything in that commit about how to regenerate those files, \nsuch as a makefile rule. 
Is that missing?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jun 2020 10:33:12 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-31 04:52, Michael Paquier wrote:\n> 593d4e4 claims that we only support OpenSSL >= 0.9.8, meaning that\n> down to PG 10 we have this requirement, and that PG 9.6 and 9.5 should\n> be able to work with OpenSSL 0.9.7 and 0.9.6, but little effort has\n> been put in testing these.\n\nThen we can stick a OPENSSL_API_COMPAT=908 into at least PG10, 11, and \n12, and probably also into PG9.5 and 9.6 without harm.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 1 Jun 2020 10:44:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 30 May 2020, at 11:29, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-05-29 14:45, Daniel Gustafsson wrote:\n>>> I think we should set OPENSSL_API_COMPAT=10001, and move that along with whatever our oldest supported release is going forward. That declares our intention, it will silence the deprecation warnings, and IIUC, if the deprecated stuff actually gets removed, you get a clean compiler error that your API level is too low.\n>> I think I know what you mean but just to clarify: I master, back-branches or\n>> all of the above?\n> \n> I'm not sure. I don't have a good sense of what OpenSSL versions we claim to support in branches older than PG13. We made a conscious decision for 1.0.1 in PG13, but I seem to recall that that discussion also revealed that the version assumptions before that were quite inconsistent. 
Code in PG12 and before makes references to OpenSSL as old as 0.9.6. But OpenSSL 3.0.0 will reject a compat level older than 0.9.8.\n> \n> My proposal would be to introduce OPENSSL_API_COMPAT=10001 into master after the 13/14 branching, along with any other changes to make it compile cleanly against OpenSSL 3.0.0. Once that has survived some scrutiny from the buildfarm and also from folks building against LibreSSL etc., it should probably be backpatched into PG13. In the immediate future, I wouldn't bother about the older branches (<=PG12) at all. As long as they still compile, users can just disable deprecation warnings, and we may add some patches to that effect at some point, but it's not like OpenSSL 3.0.0 will be adopted into production builds any time soon.\n> \n>> Considering how little effort it is to not use the deprecated API's I'm not\n>> entirely convinced, but I don't have too strong opinions there.\n> \n> Well, in the case like X509_STORE_load_locations(), the solution is in either case to write a wrapper. It doesn't matter if we write the wrapper or OpenSSL writes the wrapper. Only OpenSSL has already written the wrapper and has created a well-defined way to declare that you want to use the wrapper, so I'd just take that.\n\nI'll buy that argument.\n\n> In any case, using OPENSSL_API_COMPAT is also good just for our own documentation, so we can keep track of what version we claim to support in different branches.\n\nGood point.\n\n>>> There is also the question of what to do with the test suites in the back branches.\n>> If we don't want to change the testdata in the backbranches, we could add a\n>> SKIP section for the password key tests iff OpenSSL is 3.0.0+?\n> \n> I suggest to update the test data in PG13+, since we require OpenSSL 1.0.1 there. 
For the older branches, I would look into changing the test driver setup so that it loads a custom openssl.cnf that loads the legacy providers.\n\nOk, I'll roll a patch along these lines for master for ~ the 13/14 branch time\nand then we'll see how we deal with PG13 once the dust has settled not only on\nour side but for OpenSSL.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jun 2020 10:49:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "\nOn 6/1/20 4:33 AM, Peter Eisentraut wrote:\n> On 2020-05-30 14:34, Andrew Dunstan wrote:\n>>\n>> On 5/28/20 6:16 PM, Daniel Gustafsson wrote:\n>>>\n>>> OpenSSL also deprecates DES keys in 3.0.0, which cause our password\n>>> callback\n>>> tests to fail with the cryptic error \"fetch failed\", as the test\n>>> suite keys are\n>>> encrypted with DES.� 0002 fixes this by changing to AES256 (randomly\n>>> chosen\n>>> among the ciphers supported in 1.0.1+ and likely to be around), and\n>>> could be\n>>> applied already today as there is nothing 3.0.0 specific about it.\n>>>\n>>\n>> +1 for applying this forthwith. 
The key in my recent commit 896fcdb230\n>> is encrypted with AES256.\n>\n> I don't see anything in that commit about how to regenerate those\n> files, such as a makefile rule.� Is that missing?\n\n\n\nYou missed these comments in the test file:\n\n\n# self-signed cert was generated like this:\n# system('openssl req -new -x509 -days 10000 -nodes -out server.crt\n-keyout server.ckey -subj \"/CN=localhost\"');\n# add the cleartext passphrase to the key, remove the unprotected key\n# system(\"openssl rsa -aes256 -in server.ckey -out server.key -passout\npass:$clearpass\");\n# unlink \"server.ckey\";\n\n\nIf you want I can add a rule for it to the Makefile, although who knows\nwhat commands will actually apply when the certificate runs out?\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 1 Jun 2020 07:58:21 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 1 Jun 2020, at 13:58, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\n> If you want I can add a rule for it to the Makefile, although who knows\n> what commands will actually apply when the certificate runs out?\n\nBeing able to easily regenerate the testdata, regardless of expiration status,\nhas proven very helpful for me when implementing support for new TLS backends.\n+1 for adding it to the Makefile.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 1 Jun 2020 14:03:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 6/1/20 8:03 AM, Daniel Gustafsson wrote:\n>> On 1 Jun 2020, at 13:58, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>> If you want I can add a rule for it to the Makefile, although who knows\n>> what commands will actually apply when the 
certificate runs out?\n> Being able to easily regenerate the testdata, regardless of expiration status,\n> has proven very helpful for me when implementing support for new TLS backends.\n> +1 for adding it to the Makefile.\n>\n\n\nOK, here's a patch.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 1 Jun 2020 09:23:24 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 6/1/20 8:03 AM, Daniel Gustafsson wrote:\n>> +1 for adding it to the Makefile.\n\n> OK, here's a patch.\n\nLikewise +1 for having it in the makefile. But now you have two copies,\nthe other being in comments in the test script. The latter should go\naway, as we surely won't remember to maintain it. (You could replace it\nwith a pointer to the makefile rules if you want.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 10:00:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-06-01 15:23, Andrew Dunstan wrote:\n> \n> On 6/1/20 8:03 AM, Daniel Gustafsson wrote:\n>>> On 1 Jun 2020, at 13:58, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n>>> If you want I can add a rule for it to the Makefile, although who knows\n>>> what commands will actually apply when the certificate runs out?\n>> Being able to easily regenerate the testdata, regardless of expiration status,\n>> has proven very helpful for me when implementing support for new TLS backends.\n>> +1 for adding it to the Makefile.\n>>\n> OK, here's a patch.\n\nIn src/test/ssl/ we have targets sslfiles and sslfiles-clean, and here \nwe have ssl-files and ssl-files-clean. 
Let's keep that consistent.\n\nOr, why not actually use the generated files from src/test/ssl/ instead \nof making another set?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 10:57:40 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "\nOn 6/2/20 4:57 AM, Peter Eisentraut wrote:\n> On 2020-06-01 15:23, Andrew Dunstan wrote:\n>>\n>> On 6/1/20 8:03 AM, Daniel Gustafsson wrote:\n>>>> On 1 Jun 2020, at 13:58, Andrew Dunstan\n>>>> <andrew.dunstan@2ndquadrant.com> wrote:\n>>>> If you want I can add a rule for it to the Makefile, although who\n>>>> knows\n>>>> what commands will actually apply when the certificate runs out?\n>>> Being able to easily regenerate the testdata, regardless of\n>>> expiration status,\n>>> has proven very helpful for me when implementing support for new TLS\n>>> backends.\n>>> +1 for adding it to the Makefile.\n>>>\n>> OK, here's a patch.\n>\n> In src/test/ssl/ we have targets sslfiles and sslfiles-clean, and here\n> we have ssl-files and ssl-files-clean.  Let's keep that consistent.\n>\n> Or, why not actually use the generated files from src/test/ssl/\n> instead of making another set?\n\n\nHonestly, I think we've spent plenty of time on this already. 
I don't\nsee a problem with each module having its own certificate(s) - that\nmakes them more self-contained -  nor any great need to have the targets\nnamed the same.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 14:45:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Jun 02, 2020 at 02:45:11PM -0400, Andrew Dunstan wrote:\n> Honestly, I think we've spent plenty of time on this already. I don't\n> see a problem with each module having its own certificate(s) - that\n> makes them more self-contained -  nor any great need to have the targets\n> named the same.\n\nYeah, I don't see much point in combining both of them as those\nmodules have different assumptions behind the files built. Now I\nagree with Peter's point to use the same Makefile rule names in both\nfiles so as it gets easier to grep for all instances.\n\nSo, src/test/ssl/ being the oldest one, ssl_passphrase_callback should\njust do s/ssl-files/sslfiles/.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 14:26:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Mon, Jun 01, 2020 at 10:44:17AM +0200, Peter Eisentraut wrote:\n> Then we can stick a OPENSSL_API_COMPAT=908 into at least PG10, 11, and 12,\n> and probably also into PG9.5 and 9.6 without harm.\n\nFWIW, I am fine that for PG >= 10, and I don't think that it is worth\nbothering with PG <= 9.6.\n--\nMichael", "msg_date": "Fri, 5 Jun 2020 16:31:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-30 11:29, Peter Eisentraut wrote:\n> I suggest to update the test data in PG13+, 
since we require OpenSSL\n> 1.0.1 there. For the older branches, I would look into changing the\n> test driver setup so that it loads a custom openssl.cnf that loads the\n> legacy providers.\n\nI have pushed your 0002 patch (the one that updates the test data) to \nmaster.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 5 Jun 2020 11:21:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-05-30 11:29, Peter Eisentraut wrote:\n> My proposal would be to introduce OPENSSL_API_COMPAT=10001 into master\n> after the 13/14 branching, along with any other changes to make it\n> compile cleanly against OpenSSL 3.0.0. Once that has survived some\n> scrutiny from the buildfarm and also from folks building against\n> LibreSSL etc., it should probably be backpatched into PG13. In the\n> immediate future, I wouldn't bother about the older branches (<=PG12) at\n> all. As long as they still compile, users can just disable deprecation\n> warnings, and we may add some patches to that effect at some point, but\n> it's not like OpenSSL 3.0.0 will be adopted into production builds any\n> time soon.\n\nTrying to move this along, where would be a good place to define \nOPENSSL_API_COMPAT? The only place that's shared between frontend and \nbackend code is c.h. 
The attached patch does it that way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 7 Jul 2020 19:45:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Trying to move this along, where would be a good place to define \n> OPENSSL_API_COMPAT? The only place that's shared between frontend and \n> backend code is c.h. The attached patch does it that way.\n\npg_config_manual.h, perhaps?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Jul 2020 13:53:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 7 Jul 2020, at 19:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Trying to move this along,\n\nThanks, this has stalled a bit on my TODO.\n\n>> where would be a good place to define \n>> OPENSSL_API_COMPAT? The only place that's shared between frontend and \n>> backend code is c.h. The attached patch does it that way.\n> \n> pg_config_manual.h, perhaps?\n\nI don't have a strong preference. When starting hacking on this I went for the\nquick and simple option of adding it to CFLAGS in configure.in for the time\nbeing since I wasn't sure where to put it.\n\nA slightly more complicated problem arise when trying to run the pgcrypto\nregress tests, and make it run the tests for the now deprecated ciphers, as\nthey require the legacy provider to be loaded via the openssl configuration\nfile. 
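For reference, activating the legacy provider in such a configuration file looks roughly like this under OpenSSL 3.0.0 (a sketch following the 3.0.0 provider configuration format; a real test file would additionally need to chain to, or replace, the system configuration):

```ini
# Sketch of a test-only openssl.cnf for OpenSSL 3.0.0: activate the
# legacy provider next to the default one so the deprecated pgcrypto
# ciphers stay available.
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
default = default_sect
legacy = legacy_sect

[default_sect]
activate = 1

[legacy_sect]
activate = 1
```

With OPENSSL_CONF pointed at a file along these lines, the deprecated algorithms should be fetchable again.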
As was mentioned upthread, this requires us to inject our own\nopenssl.cnf in OPENSSL_CONF, load the legacy provider there and then from that\nfile include the system openssl.cnf (or override the system one completely\nduring testing which doesn't seem like a good idea).\n\nHacking this up in a crude PoC I added a REGRESS_ENV option in pgxs.mk which\nthen pgcrypto/Makefile could use to set an OPENSSL_CONF, which in turn ends\nwith a .include=<path> for my system config. This enables pgcrypto to load\nthe now deprecated ciphers, but even as PoC's goes this is awfully brittle and\na significant amount of bricks shy.\n\nActually running the tests with the legacy provider loaded yields a fair number\nof errors like these, and somewhere around there I ran out of time for now as\nthe CF started.\n\n- decrypt\n-----------------------------\n- Lets try a longer message.\n+ decrypt\n+----------------------------------------------------------\n+ Lets try a longer messag\\177\\177\\177\\177\\177\\177\\177\\177\n\nMemorizing the \"cannot load cipher\" errors in an alternative output and\ndocumenting how to use old ciphers in pgcrypto together with OpenSSL 3.0.0+\nmight be the least bad option? Anyone else have any good ideas on how to get\nthis into the testrunner?\n\ncheers ./daniel\n\n", "msg_date": "Tue, 7 Jul 2020 22:52:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Jul 7, 2020 at 1:46 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Trying to move this along, where would be a good place to define\n> OPENSSL_API_COMPAT? The only place that's shared between frontend and\n> backend code is c.h. The attached patch does it that way.\n\nSo, if we go this way, does that mean that we're not going to pursue\nremoving dependencies on the deprecated interfaces? 
I wonder if we\nreally ought to be doing that too, with preprocessor conditionals.\nOtherwise, aren't we putting ourselves in an uncomfortable situation\nwhen the deprecated stuff eventually goes away upstream?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 8 Jul 2020 10:51:36 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-07-08 16:51, Robert Haas wrote:\n> On Tue, Jul 7, 2020 at 1:46 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> Trying to move this along, where would be a good place to define\n>> OPENSSL_API_COMPAT? The only place that's shared between frontend and\n>> backend code is c.h. The attached patch does it that way.\n> \n> So, if we go this way, does that mean that we're not going to pursue\n> removing dependencies on the deprecated interfaces? I wonder if we\n> really ought to be doing that too, with preprocessor conditionals.\n> Otherwise, aren't we putting ourselves in an uncomfortable situation\n> when the deprecated stuff eventually goes away upstream?\n\nI don't think there is a rush. The 3.0.0 alphas still support \ninterfaces deprecated in 0.9.8 (released 2005). AFAICT, nothing tagged \nunder this API compatibility scheme has ever been removed. 
If they \nstarted doing so, they would presumably do it step by step at the tail \nend, which would still give us several steps before it catches up with us.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 18:03:48 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-07-07 22:52, Daniel Gustafsson wrote:\n> Actually running the tests with the legacy provider loaded yields a fair number\n> of errors like these, and somewhere around there I ran out of time for now as\n> the CF started.\n> \n> - decrypt\n> -----------------------------\n> - Lets try a longer message.\n> + decrypt\n> +----------------------------------------------------------\n> + Lets try a longer messag\\177\\177\\177\\177\\177\\177\\177\\177\n> \n> Memorizing the \"cannot load cipher\" errors in an alternative output and\n> documenting how to use old ciphers in pgcrypto together with OpenSSL 3.0.0+\n> might be the least bad option? Anyone else have any good ideas on how to get\n> this into the testrunner?\n\nI think an alternative test output file would be a legitimate solution \nin the short and mid term.\n\nHowever, as you mention, and looking at the test output, this might also \nrequire a bit of work making the handling of these new various error \nconditions more robust.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 9 Jul 2020 08:43:24 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-07-07 22:52, Daniel Gustafsson wrote:\n>>> where would be a good place to define\n>>> OPENSSL_API_COMPAT? 
The only place that's shared between frontend and\n>>> backend code is c.h. The attached patch does it that way.\n>> pg_config_manual.h, perhaps?\n> I don't have a strong preference. When starting hacking on this I went for the\n> quick and simple option of adding it to CFLAGS in configure.in for the time\n> being since I wasn't sure where to put it.\n\nActually, it would be most formally correct to set it using AC_DEFINE in \nconfigure.in, so that configure tests see it. It doesn't look like we \ncurrently have any configure tests that would really be affected, but it \nseems sensible to do it there anyway.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Jul 2020 09:40:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> where would be a good place to define\n>>> OPENSSL_API_COMPAT?\n\n> Actually, it would be most formally correct to set it using AC_DEFINE in \n> configure.in, so that configure tests see it.\n\nYeah, very good point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Jul 2020 09:38:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-07-13 15:38, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>>> where would be a good place to define\n>>>> OPENSSL_API_COMPAT?\n> \n>> Actually, it would be most formally correct to set it using AC_DEFINE in\n>> configure.in, so that configure tests see it.\n> \n> Yeah, very good point.\n\nNew patch done that way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 16 Jul 2020 10:58:58 +0200", "msg_from": 
"Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Thu, Jul 16, 2020 at 10:58:58AM +0200, Peter Eisentraut wrote:\n> if test \"$with_openssl\" = yes ; then\n> dnl Order matters!\n> + AC_DEFINE(OPENSSL_API_COMPAT, [10001],\n> + [Define to the OpenSSL API version in use. This avoids\n> deprecation warnings from newer OpenSSL versions.])\n> if test \"$PORTNAME\" != \"win32\"; then\n\nI think that you should additionally mention the version number\ndirectly in the description, so as when support for 1.0.1 gets removed\nit is possible to grep for it, and then adjust the number and the\ndescription.\n--\nMichael", "msg_date": "Thu, 16 Jul 2020 20:45:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-07-16 13:45, Michael Paquier wrote:\n> On Thu, Jul 16, 2020 at 10:58:58AM +0200, Peter Eisentraut wrote:\n>> if test \"$with_openssl\" = yes ; then\n>> dnl Order matters!\n>> + AC_DEFINE(OPENSSL_API_COMPAT, [10001],\n>> + [Define to the OpenSSL API version in use. This avoids\n>> deprecation warnings from newer OpenSSL versions.])\n>> if test \"$PORTNAME\" != \"win32\"; then\n> \n> I think that you should additionally mention the version number\n> directly in the description, so as when support for 1.0.1 gets removed\n> it is possible to grep for it, and then adjust the number and the\n> description.\n\nGood point. I have committed it with that adjustment. 
Also, I had the \nformat of the version number wrong, so I changed that, too.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 19 Jul 2020 16:29:54 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Sun, Jul 19, 2020 at 04:29:54PM +0200, Peter Eisentraut wrote:\n> Good point. I have committed it with that adjustment. Also, I had the\n> format of the version number wrong, so I changed that, too.\n\nThanks. I was not paying much attention to the format, but what you\nhave committed is in line with the upstream docs:\nhttps://www.openssl.org/docs/manmaster/man7/OPENSSL_API_COMPAT.html\nSo we are all good.\n--\nMichael", "msg_date": "Tue, 21 Jul 2020 14:32:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Fri, May 29, 2020 at 12:16:47AM +0200, Daniel Gustafsson wrote:\n> SSL_CTX_load_verify_locations and X509_STORE_load_locations are deprecated and\n> replaced by individual calls to load files and directories. These are quite\n> straightforward, and are implemented like how we handle the TLS protocol API.\n> DH_check has been discouraged to use for quite some time, and is now marked\n> deprecated without a 1:1 replacement. The OpenSSL docs recommends using the\n> EVP API, which is also done in 0001. For now I just stuck it in with version\n> gated ifdefs, something cleaner than that can clearly be done. 
0001 is clearly\n> not proposed for review/commit yet, it's included in case someone else is\n> interested as well.\n\nLeaving the problems with pgcrypto aside for now, we have also two\nparts involving DH_check() deprecation and the load-file APIs for the\nbackend.\n\nFrom what I can see, the new APIs to load files are new as of 3.0.0,\nbut these are not marked as deprecated yet, so I am not sure that it\nis worth having now one extra API compatibility layer just for that\nnow as proposed in cert_openssl.c. Most of the the EVP counterparts,\nthough, are much older than 1.0.1, except for EVP_PKEY_param_check()\nintroduced in 1.1.1 :(\n\nBy the way, in the previous patch, EVP_PKEY_CTX_new_from_pkey() was\ngetting used but it is new as of 3.0. We could just use\nEVP_PKEY_CTX_new() which does the same work (see\ncrypto/evp/pmeth_lib.c as we have no library context of engine to pass\ndown), and is much older, but it does not reduce the diffs. Then the\nactual problem is EVP_PKEY_param_check(), new as of 1.1.1, which looks\nto be the expected replacement for DH_check().\n\nIt seems to me that it would not be a bad thing to switch to the EVP\nAPIs on HEAD to be prepared for the future, but I would switch to\nEVP_PKEY_CTX_new() instead of EVP_PKEY_CTX_new_from_pkey() and add a\nconfigure check to see if EVP_PKEY_param_check() is defined or not.\nOPENSSL_VERSION_NUMBER cannot be used because of LibreSSL overriding\nit, and I guess that OPENSSL_VERSION_MAJOR, as used in the original\npatch, would not work with LibreSSL either.\n\nAny thoughts?\n--\nMichael", "msg_date": "Mon, 17 Aug 2020 13:12:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 17 Aug 2020, at 06:12, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Leaving the problems with pgcrypto aside for now\n\nReturning to this subject, I decided to take a stab at fixing pgcrypto since it\nwasn't 
working.\n\nSince we support ciphers that are now deprecated, we have no other choice than\nto load the legacy provider. The other problem was that the cipher context\npadding must be explicitly set, whereas in previous versions relying on the\ndefault worked fine. EVP_CIPHER_CTX_set_padding always returns 1 so thats why\nit isn't checking the returnvalue as the other nearby initialization calls.\nTo avoid problems with the by LibreSSL overloaded OPENSSL_VERSION_NUMBER macro\n(which too is deprecated in 3.0.0), I used the new macro which is only set in\n3.0.0. Not sure if that's considered acceptable or if we should invent our own\nversion macro in autoconf.\n\nFor the main SSL tests, the incorrect password test has a new errormessage\nwhich is added in 0002.\n\nWith these two all SSL tests pass for me in 1.0.1 through 3.0.0-alpha6 (tested\non a mix of Debian and macOS).\n\nThoughts on these?\n\ncheers ./daniel", "msg_date": "Fri, 18 Sep 2020 16:11:13 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Fri, Sep 18, 2020 at 04:11:13PM +0200, Daniel Gustafsson wrote:\n> Since we support ciphers that are now deprecated, we have no other choice than\n> to load the legacy provider.\n\nAh, thanks. I actually tried something similar to that when I had my\nmind on it by loading the legacy providers, but missed the padding\npart. Nice find.\n\n> The other problem was that the cipher context\n> padding must be explicitly set, whereas in previous versions relying on the\n> default worked fine. EVP_CIPHER_CTX_set_padding always returns 1 so thats why\n> it isn't checking the returnvalue as the other nearby initialization calls.\n\nIt seems to me that it would be a good idea to still check for the\nreturn value of EVP_CIPHER_CTX_set_padding() and just return with\na PXE_CIPHER_INIT. 
By looking at the upstream code, it is true that\nit always returns true for <= 1.1.1, but that's not the case for\n3.0.0. Some code paths of upstream also check after it.\n\nAlso, what's the impact with disabling the padding for <= 1.1.1? This\npart of the upstream code is still a bit obscure to me..\n\n> To avoid problems with the by LibreSSL overloaded OPENSSL_VERSION_NUMBER macro\n> (which too is deprecated in 3.0.0), I used the new macro which is only set in\n> 3.0.0. Not sure if that's considered acceptable or if we should invent our own\n> version macro in autoconf.\n\nOSSL_PROVIDER_load() is new as of 3.0.0, so using a configure switch\nsimilarly as what we do for the other functions should be more\nconsistent and enough, no?\n\n> For the main SSL tests, the incorrect password test has a new errormessage\n> which is added in 0002.\n\nHmm. I am linking to a build of alpha6 here, but I still see the\nerror being reported as a bad decrypt for this test. Interesting. \n--\nMichael", "msg_date": "Sat, 19 Sep 2020 11:11:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 19 Sep 2020, at 04:11, Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Sep 18, 2020 at 04:11:13PM +0200, Daniel Gustafsson wrote:\n\n>> The other problem was that the cipher context\n>> padding must be explicitly set, whereas in previous versions relying on the\n>> default worked fine. EVP_CIPHER_CTX_set_padding always returns 1 so thats why\n>> it isn't checking the returnvalue as the other nearby initialization calls.\n> \n> It seems to me that it would be a good idea to still check for the\n> return value of EVP_CIPHER_CTX_set_padding() and just return with\n> a PXE_CIPHER_INIT. By looking at the upstream code, it is true that\n> it always returns true for <= 1.1.1, but that's not the case for\n> 3.0.0. 
Some code paths of upstream also check after it.\n\nThats a good point, it's now provider dependent and can thus fail in case the\nprovider isn't supplying the functionality. We've already loaded a provider\nwhere we know the call is supported, but it's of course better to check. Fixed\nin the attached v2.\n\nI was only reading the docs and not the code, and they haven't been updated to\nreflect this. I'll open a PR to the OpenSSL devs to fix that.\n\n> Also, what's the impact with disabling the padding for <= 1.1.1? This\n> part of the upstream code is still a bit obscure to me..\n\nI'm far from fluent in OpenSSL, but AFAICT it has to do with the new provider\nAPI. The default value for padding is unchanged, but it needs to be propaged\ninto the provider to be set in the context where the old code picked it up\nautomatically. The relevant OpenSSL commit (83f68df32f0067ee7b0) which changes\nthe docs to say the function should be called doesn't contain enough\ninformation to explain why.\n\n>> To avoid problems with the by LibreSSL overloaded OPENSSL_VERSION_NUMBER macro\n>> (which too is deprecated in 3.0.0), I used the new macro which is only set in\n>> 3.0.0. Not sure if that's considered acceptable or if we should invent our own\n>> version macro in autoconf.\n> \n> OSSL_PROVIDER_load() is new as of 3.0.0, so using a configure switch\n> similarly as what we do for the other functions should be more\n> consistent and enough, no?\n\nGood point, fixed in the attached.\n\n>> For the main SSL tests, the incorrect password test has a new errormessage\n>> which is added in 0002.\n> \n> Hmm. I am linking to a build of alpha6 here, but I still see the\n> error being reported as a bad decrypt for this test. Interesting. \n\nTurns out it was coming from when I tested against OpenSSL git HEAD, so it may\ncome in alpha7 (or whatever the next version will be). 
Let's disregard this\nfor now until dust settles, I've dropped the patch from the series.\n\ncheers ./daniel", "msg_date": "Mon, 21 Sep 2020 20:18:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 2020-09-18 16:11, Daniel Gustafsson wrote:\n> Since we support ciphers that are now deprecated, we have no other choice than\n> to load the legacy provider.\n\nWell, we could just have deprecated ciphers fail, unless the user loads \nthe legacy provider in the OS configuration. There might be an argument \nthat that is more proper.\n\nAs a more extreme analogy, what if OpenSSL remove a cipher from the \nlegacy provider? Are we then obliged to reimplement it manually for the \npurpose of pgcrypto? Probably not.\n\nThe code you wrote to load the necessary providers is small enough that \nI think it's fine, but it's worth pondering this question briefly.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 22 Sep 2020 11:37:57 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 22 Sep 2020, at 11:37, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-09-18 16:11, Daniel Gustafsson wrote:\n>> Since we support ciphers that are now deprecated, we have no other choice than\n>> to load the legacy provider.\n> \n> Well, we could just have deprecated ciphers fail, unless the user loads the legacy provider in the OS configuration. There might be an argument that that is more proper.\n\nThats a fair point. The issue I have with that is that we'd impose a system\nwide loading of the legacy provider, impacting other consumers of libssl as\nwell.\n\n> As a more extreme analogy, what if OpenSSL remove a cipher from the legacy provider? 
Are we then obliged to reimplement it manually for the purpose of pgcrypto? Probably not.\n\nI don't think we have made any such guarantees of support, especially since\nit's in contrib/. That doesn't mean that some users wont expect it though.\n\nAnother option would be to follow OpenSSL's deprecations and mark these ciphers\nas deprecated such that we can remove them in case they indeed get removed from\nlibcypto. That would give users a heads-up that they should have a migration\nplan for if that time comes.\n\n> The code you wrote to load the necessary providers is small enough that I think it's fine, but it's worth pondering this question briefly.\n\nAbsolutely.\n\ncheers ./daniel\n\n", "msg_date": "Tue, 22 Sep 2020 14:01:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Mon, Sep 21, 2020 at 08:18:42PM +0200, Daniel Gustafsson wrote:\n> I'm far from fluent in OpenSSL, but AFAICT it has to do with the new provider\n> API. The default value for padding is unchanged, but it needs to be propaged\n> into the provider to be set in the context where the old code picked it up\n> automatically. The relevant OpenSSL commit (83f68df32f0067ee7b0) which changes\n> the docs to say the function should be called doesn't contain enough\n> information to explain why.\n\nHmm. Perhaps we should consider making this part conditional on\n3.0.0? But I don't see an actual reason why we cannot do it\nunconditionally either. This needs careful testing for sure.\n\n> Turns out it was coming from when I tested against OpenSSL git HEAD, so it may\n> come in alpha7 (or whatever the next version will be). Let's disregard this\n> for now until dust settles, I've dropped the patch from the series.\n\nYeah. I have just tried that on the latest HEAD at e771249 and I\ncould reproduce what you saw. It smells to me like a regression\nintroduced by upstream, as per 9a30f40c and c2150f7. 
I'd rather wait\nfor 3.0.0 to be GA before concluding here. If it proves to be\nreproducible with their golden version as a bug (or even not as a\nbug), we will need to have a workaround in any case.\n--\nMichael", "msg_date": "Wed, 23 Sep 2020 17:17:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Sep 22, 2020 at 02:01:18PM +0200, Daniel Gustafsson wrote:\n> Another option would be to follow OpenSSL's deprecations and mark these ciphers\n> as deprecated such that we can remove them in case they indeed get removed from\n> libcypto. That would give users a heads-up that they should have a migration\n> plan for if that time comes.\n\nDoes that mean a deprecation note in the docs as well as a WARNING\nwhen attempting to use those ciphers in pgcryto with the version of\nOpenSSL marking such ciphers as deprecated? I would assume that we\nshould do both, rather than only one of them to bring more visibility\nto the user.\n--\nMichael", "msg_date": "Wed, 23 Sep 2020 17:19:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 23 Sep 2020, at 10:19, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Sep 22, 2020 at 02:01:18PM +0200, Daniel Gustafsson wrote:\n>> Another option would be to follow OpenSSL's deprecations and mark these ciphers\n>> as deprecated such that we can remove them in case they indeed get removed from\n>> libcypto. That would give users a heads-up that they should have a migration\n>> plan for if that time comes.\n> \n> Does that mean a deprecation note in the docs as well as a WARNING\n> when attempting to use those ciphers in pgcryto with the version of\n> OpenSSL marking such ciphers as deprecated? 
I would assume that we\n> should do both, rather than only one of them to bring more visibility\n> to the user.\n\nWe generally don't issue WARNINGs for deprecated functionality do we? The only\none I can see is for GLOBAL TEMPORARY in temp table creation. The relevant\nerrcode ERRCODE_WARNING_DEPRECATED_FEATURE is also not used anywhere.\n\nI'd expect it to just be a note in the documentation, with a prominent\nplacement in the release notes, if we decide to do something like this.\n\ncheers ./daniel\n\n", "msg_date": "Wed, 23 Sep 2020 10:44:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 22 Sep 2020, at 14:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 22 Sep 2020, at 11:37, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>> \n>> On 2020-09-18 16:11, Daniel Gustafsson wrote:\n>>> Since we support ciphers that are now deprecated, we have no other choice than\n>>> to load the legacy provider.\n>> \n>> Well, we could just have deprecated ciphers fail, unless the user loads the legacy provider in the OS configuration. There might be an argument that that is more proper.\n> \n> Thats a fair point. The issue I have with that is that we'd impose a system\n> wide loading of the legacy provider, impacting other consumers of libssl as\n> well.\n\nAfter another round of thinking on this, somewhat spurred on by findings in the\nthe nearby FIPS thread, I'm coming around to this idea.\n\nLooking at SCRAM+FIPS made me realize that we can't enforce FIPS mode in\npgcrypto via the OpenSSL configuration file, since pgcrypto doesn't load the\nconfig. The automatic initialization in 1.1.0+ will however load the config\nfile, so not doing it for older versions makes pgcrypto inconsistent. Thus, I\nthink we should make sure that pgcrypto loads the configuratio for all OpenSSL\nversions, and defer the provider decision in 3.0.0 to the users. 
This makes\nthe patch minimally intrusive while making pgcrypto behave consistently (and\ngiving 3.0.0 users the option to not run legacy).\n\nThe attached adds config loading to pgcrypto for < 1.1.0 and a doc notice for\nenabling the legacy provider in 3.0.0. This will require an alternative output\nfile for non-legacy configs, but that should wait until 3.0.0 is GA since the\nreturned error messages have changed over course of development and may not be\nset in stone just yet.\n\ncheers ./daniel", "msg_date": "Tue, 29 Sep 2020 12:25:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Sep 29, 2020 at 12:25:05PM +0200, Daniel Gustafsson wrote:\n> The attached adds config loading to pgcrypto for < 1.1.0 and a doc notice for\n> enabling the legacy provider in 3.0.0. This will require an alternative output\n> file for non-legacy configs, but that should wait until 3.0.0 is GA since the\n> returned error messages have changed over course of development and may not be\n> set in stone just yet.\n\nFWIW, testing with 3.0.0-alpha9 dev (2d84089), I can see that the\nerror we have in our SSL tests when using a wrong password in the\nprivate PEM key leads now to \"PEM lib\" instead of \"bad decrypt\".\n\nUpthread, we had \"nested asn1 error\":\nhttps://www.postgresql.org/message-id/9CE70AF4-E1A0-4D24-86FA-4C3067077897@yesql.se\nIt looks like not everything is sorted out there yet.\n\npgcrypto is also throwing new errors. 
Daniel, what if we let this\npatch aside until upstream has sorted out their stuff?\n--\nMichael", "msg_date": "Thu, 26 Nov 2020 17:08:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 26 Nov 2020, at 09:08, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Sep 29, 2020 at 12:25:05PM +0200, Daniel Gustafsson wrote:\n>> The attached adds config loading to pgcrypto for < 1.1.0 and a doc notice for\n>> enabling the legacy provider in 3.0.0. This will require an alternative output\n>> file for non-legacy configs, but that should wait until 3.0.0 is GA since the\n>> returned error messages have changed over course of development and may not be\n>> set in stone just yet.\n> \n> FWIW, testing with 3.0.0-alpha9 dev (2d84089), I can see that the\n> error we have in our SSL tests when using a wrong password in the\n> private PEM key leads now to \"PEM lib\" instead of \"bad decrypt\".\n> \n> Upthread, we had \"nested asn1 error\":\n> https://www.postgresql.org/message-id/9CE70AF4-E1A0-4D24-86FA-4C3067077897@yesql.se\n> It looks like not everything is sorted out there yet.\n> \n> pgcrypto is also throwing new errors. Daniel, what if we let this\n> patch aside until upstream has sorted out their stuff?\n\nWell, the patch as it stands isn't changing any expected output at all, and\nonly adds a docs notice for OpenSSL 3.0.0 conformance. The gist of the patch\nis to ensure that all supported versions of OpenSSL are initialized equally as\ncurrently < 1.1.0 are bypassing the local openssl config, where 1.1.0+ isn't.\nSo I still think this patch is worth considering.\n\nRegarding test output: it's clear that we'll need to revisit this as the dust\nsettles on OpenSSL 3.0.0, but as you say there is no use in doing anything\nuntil it has. 
According to their tracker they are, at this time of writing,\n64% complete on the milestone to reach beta readiness [0] (which I believe\nstarted counting on alpha7).\n\ncheers ./daniel\n\n[0] https://github.com/openssl/openssl/milestone/17\n\n", "msg_date": "Mon, 30 Nov 2020 14:05:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "This thread is still in the commit fest, but I don't see any actual \nproposed patch still pending. Most of the activity has moved into other \nthreads.\n\nCould you update the status of this CF entry, and perhaps also on the \nstatus of OpenSSL compatibility in general?\n\n\n", "msg_date": "Wed, 3 Mar 2021 14:55:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Wed, Mar 03, 2021 at 02:55:41PM +0100, Peter Eisentraut wrote:\n> This thread is still in the commit fest, but I don't see any actual proposed\n> patch still pending. Most of the activity has moved into other threads.\n> \n> Could you update the status of this CF entry, and perhaps also on the status\n> of OpenSSL compatibility in general?\n\n3.0.0 has released an alpha 12 on the 18th of February, so their stuff\nis not quite close to GA yet.\n\nI have not looked closely, but my guess is that it would take a couple\nof extra months at least to see a release. 
What if we just waited and\nrevisited this stuff during the next dev cycle, once there is at least\na beta, meaning mostly stable APIs?\n\nDaniel, what do you think?\n--\nMichael", "msg_date": "Thu, 4 Mar 2021 17:15:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 3 Mar 2021, at 14:55, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> This thread is still in the commit fest, but I don't see any actual proposed patch still pending. Most of the activity has moved into other threads.\n\nThe doc changes in the patch proposed on 29/9 still stands, although I see that\nit had an off by one in mentioning MD5 when it should be MD4 et.al; so\nsomething more like the below.\n\ndiff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml\nindex b6bb23de0f..d45464c7ea 100644\n--- a/doc/src/sgml/pgcrypto.sgml\n+++ b/doc/src/sgml/pgcrypto.sgml\n@@ -1234,6 +1234,12 @@ gen_random_uuid() returns uuid\n </tgroup>\n </table>\n\n+ <para>\n+ When compiled against <productname>OpenSSL</productname> 3.0.0, the legacy\n+ provider must be activated in the system <filename>openssl.cnf</filename>\n+ configuration file in order to use older ciphers like DES and Blowfish.\n+ </para>\n+\n <para>\n\n> Could you update the status of this CF entry, and perhaps also on the status of OpenSSL compatibility in general?\n\nLet's just wait for 3.0.0 to ship before we do anything.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 10 Mar 2021 09:23:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 10.03.21 09:23, Daniel Gustafsson wrote:\n>> On 3 Mar 2021, at 14:55, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> This thread is still in the commit fest, but I don't see any actual proposed patch still pending. 
Most of the activity has moved into other threads.\n> \n> The doc changes in the patch proposed on 29/9 still stands, although I see that\n> it had an off by one in mentioning MD5 when it should be MD4 et.al; so\n> something more like the below.\n> \n> diff --git a/doc/src/sgml/pgcrypto.sgml b/doc/src/sgml/pgcrypto.sgml\n> index b6bb23de0f..d45464c7ea 100644\n> --- a/doc/src/sgml/pgcrypto.sgml\n> +++ b/doc/src/sgml/pgcrypto.sgml\n> @@ -1234,6 +1234,12 @@ gen_random_uuid() returns uuid\n> </tgroup>\n> </table>\n> \n> + <para>\n> + When compiled against <productname>OpenSSL</productname> 3.0.0, the legacy\n> + provider must be activated in the system <filename>openssl.cnf</filename>\n> + configuration file in order to use older ciphers like DES and Blowfish.\n> + </para>\n> +\n> <para>\n\nI tested the current master with openssl-3.0.0-alpha12.\n\nEverything builds cleanly.\n\nThe ssl tests fail with a small error message difference that must have \nbeen introduced recently, because I think this was never reported before:\n\n--- a/src/test/ssl/t/001_ssltests.pl\n+++ b/src/test/ssl/t/001_ssltests.pl\n@@ -449,7 +449,7 @@\n test_connect_fails(\n $common_connstr,\n \"user=ssltestuser sslcert=ssl/client.crt \nsslkey=ssl/client-encrypted-pem_tmp.key sslpassword='wrong'\",\n- qr!\\Qprivate key file \"ssl/client-encrypted-pem_tmp.key\": bad \ndecrypt\\E!,\n+ qr!\\Qprivate key file \"ssl/client-encrypted-pem_tmp.key\":\\E (bad \ndecrypt|PEM lib)!,\n \"certificate authorization fails with correct client cert and wrong \npassword in encrypted PEM format\"\n );\n\nThe pgcrypto tests fail all over the place. Some of these failures are \nquite likely because of the disabled legacy provider, but some appear to \nbe due to bad error handling.\n\nThen I tried enabling the legacy provider in openssl.cnf. 
This caused \npg_strong_random() to fail, which causes initdb to fail, like this:\n\nPANIC: could not generate secret authorization token\n\nI tried patching around in pg_strong_random.c to use the /dev/urandom \nvariant as a workaround, but apparently that doesn't work. You get all \nkinds of scary make check failures from md5 and sha256 calls.\n\nSo, we knew pgcrypto was in trouble with openssl 3.0.0, but can someone \nelse get its tests to pass with some kind of openssl.cnf configuration?\n\n\n", "msg_date": "Thu, 11 Mar 2021 11:03:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 11 Mar 2021, at 11:03, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> The ssl tests fail with a small error message difference that must have been introduced recently, because I think this was never reported before:\n\nThis was mentioned by Michael on 26/11, it was earlier in the 3.0.0 cycle\nreported as \"nested asn.1 error\". Waiting for 3.0.0 to go beta or GA before\nchanging saves us from having to change again should they update, but tests\nwill fail for anyone testing out new OpenSSL versions.\n\n> The pgcrypto tests fail all over the place. 
Some of these failures are quite likely because of the disabled legacy provider, but some appear to be due to bad error handling.\n\nThe below ones are likely due to the provider not being loaded, but as you say\nthey are probably cases of poor error handling as they fail in various places\nwhile probably being due to the same root cause:\n\n+ERROR: encrypt error: Key was too big\n\n+ERROR: encrypt error: Cipher cannot be initialized ?\n\n+ERROR: Wrong key or corrupt data\n\nThen there are a few where we get padding back where we really should have\nended up with the \"Cipher cannot be initialized\" error since DES is in the\nlegacy provider:\n\n select decrypt_iv(decode('50735067b073bb93', 'hex'), '0123456', 'abcd', 'des');\n- decrypt_iv\n-------------\n- foo\n+ decrypt_iv\n+----------------------------------\n+ \\177\\177\\177\\177\\177\\177\\177\\177\n (1 row)\n\n> Then I tried enabling the legacy provider in openssl.cnf. This caused pg_strong_random() to fail, which causes initdb to fail, like this:\n\nHuh? Enabling the legacy provider shouldn't affect the CRNG. I see no such\nfailure, and haven't in any alpha version tested. How did you change the\nopenssl config? If one can break pg_strong_random with a config change then we\nneed defensive coding to cope with that.\n\n> So, we knew pgcrypto was in trouble with openssl 3.0.0, but can someone else get its tests to pass with some kind of openssl.cnf configuration?\n\nIf I enable the legacy provider in openssl.cnf like this:\n\n[openssl_init]\nproviders = provider_sect\n\n[provider_sect]\ndefault = default_sect\nlegacy = legacy_sect\n\n[default_sect]\nactivate = 1\n\n[legacy_sect]\nactivate = 1\n\n.. 
and apply the padding changes as proposed in a patch upthread like this (these\nwork for all OpenSSL versions I've tested, and I'm rather more puzzled as to\nwhy we got away with not having them in the past):\n\ndiff --git a/contrib/pgcrypto/openssl.c b/contrib/pgcrypto/openssl.c\nindex ed8e74a2b9..e236b0d79c 100644\n--- a/contrib/pgcrypto/openssl.c\n+++ b/contrib/pgcrypto/openssl.c\n@@ -379,6 +379,8 @@ gen_ossl_decrypt(PX_Cipher *c, const uint8 *data, unsigned dlen,\n {\n if (!EVP_DecryptInit_ex(od->evp_ctx, od->evp_ciph, NULL, NULL, NULL))\n return PXE_CIPHER_INIT;\n+ if (!EVP_CIPHER_CTX_set_padding(od->evp_ctx, 0))\n+ return PXE_CIPHER_INIT;\n if (!EVP_CIPHER_CTX_set_key_length(od->evp_ctx, od->klen))\n return PXE_CIPHER_INIT;\n if (!EVP_DecryptInit_ex(od->evp_ctx, NULL, NULL, od->key, od->iv))\n@@ -403,6 +405,8 @@ gen_ossl_encrypt(PX_Cipher *c, const uint8 *data, unsigned dlen,\n {\n if (!EVP_EncryptInit_ex(od->evp_ctx, od->evp_ciph, NULL, NULL, NULL))\n return PXE_CIPHER_INIT;\n+ if (!EVP_CIPHER_CTX_set_padding(od->evp_ctx, 0))\n+ return PXE_CIPHER_INIT;\n if (!EVP_CIPHER_CTX_set_key_length(od->evp_ctx, od->klen))\n return PXE_CIPHER_INIT;\n if (!EVP_EncryptInit_ex(od->evp_ctx, NULL, NULL, od->key, od->iv))\n\n.. then all the pgcrypto tests pass for me as well as \"make check\", with a single\nSSL test failing on the above mentioned PEM lib error message.\n\nDid you build OpenSSL with anything non-standard?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 11 Mar 2021 11:41:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Thu, Mar 11, 2021 at 11:41:22AM +0100, Daniel Gustafsson wrote:\n> .. 
and apply the padding changes as proposed in a patch upthread\n> like this (these work for all OpenSSL versions I've tested, and I'm\n> rather more puzzled as to why we got away with not having them in\n> the past):\n\nNo objections from here to disable the padding and tighten a bit the\nerror checks on the amount of data encrypted or decrypted based on\nthe block size. This indeed works correctly down to OpenSSL 1.0.1 as\nfar as I have tested, so let's extract this part first, and figure the\nrest after there is a beta.\n--\nMichael", "msg_date": "Thu, 11 Mar 2021 22:08:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 11.03.21 11:41, Daniel Gustafsson wrote:\n> Then there are a few where we get padding back where we really should have\n> ended up with the \"Cipher cannot be initialized\" error since DES is in the\n> legacy provider:\n> \n> select decrypt_iv(decode('50735067b073bb93', 'hex'), '0123456', 'abcd', 'des');\n> - decrypt_iv\n> -------------\n> - foo\n> + decrypt_iv\n> +----------------------------------\n> + \\177\\177\\177\\177\\177\\177\\177\\177\n> (1 row)\n\nThe attached patch appears to address these cases.", "msg_date": "Fri, 12 Mar 2021 00:04:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 12 Mar 2021, at 00:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 11.03.21 11:41, Daniel Gustafsson wrote:\n>> Then there are a few where we get padding back where we really should have\n>> ended up with the \"Cipher cannot be initialized\" error since DES is in the\n>> legacy provider:\n>> select decrypt_iv(decode('50735067b073bb93', 'hex'), '0123456', 'abcd', 'des');\n>> - decrypt_iv\n>> -------------\n>> - foo\n>> + decrypt_iv\n>> +----------------------------------\n>> + 
\\177\\177\\177\\177\\177\\177\\177\\177\n>> (1 row)\n> \n> The attached patch appears to address these cases.\n\n+1, males a lot of sense. This removes said errors when running without the\nlegacy provider enabled, and all tests still pass with it enabled.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 12 Mar 2021 00:22:45 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "\nOn 11.03.21 11:41, Daniel Gustafsson wrote:\n> .. and apply the padding changes as proposed in a patch upthread like this (these\n> work for all OpenSSL versions I've tested, and I'm rather more puzzled as to\n> why we got away with not having them in the past):\n\nYes, before proceeding with this, we should probably understand why \nthese changes are effective and why they haven't been required in the past.\n\n\n", "msg_date": "Fri, 12 Mar 2021 08:51:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 12.03.21 00:22, Daniel Gustafsson wrote:\n>> On 12 Mar 2021, at 00:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 11.03.21 11:41, Daniel Gustafsson wrote:\n>>> Then there are a few where we get padding back where we really should have\n>>> ended up with the \"Cipher cannot be initialized\" error since DES is in the\n>>> legacy provider:\n>>> select decrypt_iv(decode('50735067b073bb93', 'hex'), '0123456', 'abcd', 'des');\n>>> - decrypt_iv\n>>> -------------\n>>> - foo\n>>> + decrypt_iv\n>>> +----------------------------------\n>>> + \\177\\177\\177\\177\\177\\177\\177\\177\n>>> (1 row)\n>>\n>> The attached patch appears to address these cases.\n> \n> +1, males a lot of sense. 
This removes said errors when running without the\n> legacy provider enabled, and all tests still pass with it enabled.\n\nI have committed this to master. I see that the commit fest entry has \nbeen withdrawn in the meantime. I suppose we'll come back to this, \nincluding possible backpatching, when OpenSSL 3.0.0 is in beta.\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:52:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 12.03.21 08:51, Peter Eisentraut wrote:\n> \n> On 11.03.21 11:41, Daniel Gustafsson wrote:\n>> .. and apply the padding changes as proposed in a patch upthread like \n>> this (these\n>> work for all OpenSSL versions I've tested, and I'm rather more puzzled \n>> as to\n>> why we got away with not having them in the past):\n> \n> Yes, before proceeding with this, we should probably understand why \n> these changes are effective and why they haven't been required in the past.\n\nI took another look at this with openssl-3.0.0-beta1. The issue with \nthe garbled padding output is still there. What I found is that \npgcrypto has been using the encryption and decryption API slightly \nincorrectly. You are supposed to call EVP_DecryptUpdate() followed by \nEVP_DecryptFinal_ex() (and similarly for encryption), but pgcrypto \ndoesn't do the second one. (To be fair, this API was added to OpenSSL \nafter pgcrypto first appeared.) The \"final\" functions take care of the \npadding. We have been getting away with it like this because we do the \npadding manually elsewhere. But apparently, something has changed in \nOpenSSL 3.0.0 in that if padding is enabled in OpenSSL, \nEVP_DecryptUpdate() doesn't flush the last normal block until the \n\"final\" function is called.\n\nYour proposed fix was to explicitly disable padding, and then this \nproblem goes away. 
You can still call the \"final\" functions, but they \nwon't do anything, except check that there is no more data left, but we \nalready check that elsewhere.\n\nAnother option is to throw out our own padding code and let OpenSSL \nhandle it. See attached demo patch. But that breaks the non-OpenSSL \ncode in internal.c, so we'd have to re-add the padding code there. So \nthis isn't quite as straightforward an option. (At least, with the \npatch we can confirm that the OpenSSL padding works consistently with \nour own implementation.)\n\nSo I think your proposed patch is sound and a good short-term and \nlow-risk solution.", "msg_date": "Sat, 3 Jul 2021 17:00:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 3 Jul 2021, at 17:00, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 12.03.21 08:51, Peter Eisentraut wrote:\n>> On 11.03.21 11:41, Daniel Gustafsson wrote:\n>>> .. and apply the padding changes as proposed in a patch upthread like this (these\n>>> work for all OpenSSL versions I've tested, and I'm rather more puzzled as to\n>>> why we got away with not having them in the past):\n>> Yes, before proceeding with this, we should probably understand why these changes are effective and why they haven't been required in the past.\n> \n> I took another look at this with openssl-3.0.0-beta1. The issue with the garbled padding output is still there. What I found is that pgcrypto has been using the encryption and decryption API slightly incorrectly. You are supposed to call EVP_DecryptUpdate() followed by EVP_DecryptFinal_ex() (and similarly for encryption), but pgcrypto doesn't do the second one. (To be fair, this API was added to OpenSSL after pgcrypto first appeared.) The \"final\" functions take care of the padding. 
We have been getting away with it like this because we do the padding manually elsewhere.\n\nThat does make a lot of sense, following the code and API docs I concur with\nyour findings.\n\n> ..apparently, something has changed in OpenSSL 3.0.0 in that if padding is enabled in OpenSSL, EVP_DecryptUpdate() doesn't flush the last normal block until the \"final\" function is called.\n\nSkimming the code I wasn't able to find something off the cuff, but there has\nbeen work done to postpone/move padding for constant time operations so maybe\nit's related to that.\n\n> Your proposed fix was to explicitly disable padding, and then this problem goes away. You can still call the \"final\" functions, but they won't do anything, except check that there is no more data left, but we already check that elsewhere.\n\nIn earlier versions, _Final also closed the context to ensure nothing can leak\nfrom there, but I'm not sure if 1.0.1 constitutes as earlier. Still calling it\nfrom the finish function seems like a good idea though.\n\n> Another option is to throw out our own padding code and let OpenSSL handle it. See attached demo patch. But that breaks the non-OpenSSL code in internal.c, so we'd have to re-add the padding code there. So this isn't quite as straightforward an option.\n\nWhile the PX cleanup is nice, since we can't get rid of all the padding we\nsimply shift the complexity to another place where I'd be wary of introducing\nbugs into quite stable code.\n\n> (At least, with the patch we can confirm that the OpenSSL padding works consistently with our own implementation.)\n\n+1\n\n> So I think your proposed patch is sound and a good short-term and low-risk solution\n\nThe attached 0001 disables the padding. I've tested this with OpenSSL 1.0.1,\n1.0.2, 1.1.1 and Git HEAD at e278127cbfa2709d.\n\nAnother aspect of OpenSSL 3 compatibility is that of legacy cipher support, and\nas we concluded upthread it's best to leave that to the user to define in\nopenssl.cnf. 
The attached 0002 adds alternative output files for 3.0.0\ninstallations without the legacy provider loaded, as well as adds a note in the\npgcrypto docs to enable it in case DES is needed. It does annoy me a bit that\nwe don't load the openssl.cnf file for 1.0.1 if we start mentioning it in the\ndocs for other versions, but it's probably not worth the effort to fix it given\nthe lack of complaints so far (it needs a call to OPENSSL_config(NULL); guarded\nto HAVE_ macros for 1.0.1).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 20 Jul 2021 01:23:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Jul 20, 2021 at 01:23:42AM +0200, Daniel Gustafsson wrote:\n> Another aspect of OpenSSL 3 compatibility is that of legacy cipher support, and\n> as we concluded upthread it's best to leave that to the user to define in\n> openssl.cnf. The attached 0002 adds alternative output files for 3.0.0\n> installations without the legacy provider loaded, as well as adds a note in the\n> pgcrypto docs to enable it in case DES is needed. It does annoy me a bit that\n> we don't load the openssl.cnf file for 1.0.1 if we start mentioning it in the\n> docs for other versions, but it's probably not worth the effort to fix it given\n> the lack of complaints so far (it needs a call to OPENSSL_config(NULL); guarded\n> to HAVE_ macros for 1.0.1).\n\nSounds sensible as a whole. Another thing I can notice is that\nOpenSSL 3.0.0beta1 has taken care of the issue causing diffs in the\ntests of src/test/ssl/. 
So once pgcrypto is addressed, it looks like\nthere is nothing left for this thread.\n--\nMichael", "msg_date": "Tue, 20 Jul 2021 16:54:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 20 Jul 2021, at 09:54, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Jul 20, 2021 at 01:23:42AM +0200, Daniel Gustafsson wrote:\n>> Another aspect of OpenSSL 3 compatibility is that of legacy cipher support, and\n>> as we concluded upthread it's best to leave that to the user to define in\n>> openssl.cnf. The attached 0002 adds alternative output files for 3.0.0\n>> installations without the legacy provider loaded, as well as adds a note in the\n>> pgcrypto docs to enable it in case DES is needed. It does annoy me a bit that\n>> we don't load the openssl.cnf file for 1.0.1 if we start mentioning it in the\n>> docs for other versions, but it's probably not worth the effort to fix it given\n>> the lack of complaints so far (it needs a call to OPENSSL_config(NULL); guarded\n>> to HAVE_ macros for 1.0.1).\n> \n> Sounds sensible as a whole.\n\nThanks for reviewing!\n\n> Another thing I can notice is that\n> OpenSSL 3.0.0beta1 has taken care of the issue causing diffs in the\n> tests of src/test/ssl/. So once pgcrypto is addressed, it looks like\n> there is nothing left for this thread.\n\nThat's a good point, I forgot to bring that up.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 20 Jul 2021 23:55:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 20.07.21 01:23, Daniel Gustafsson wrote:\n>> So I think your proposed patch is sound and a good short-term and low-risk solution\n> The attached 0001 disables the padding. 
I've tested this with OpenSSL 1.0.1,\n> 1.0.2, 1.1.1 and Git HEAD at e278127cbfa2709d.\n> \n> Another aspect of OpenSSL 3 compatibility is that of legacy cipher support, and\n> as we concluded upthread it's best to leave that to the user to define in\n> openssl.cnf. The attached 0002 adds alternative output files for 3.0.0\n> installations without the legacy provider loaded, as well as adds a note in the\n> pgcrypto docs to enable it in case DES is needed. It does annoy me a bit that\n> we don't load the openssl.cnf file for 1.0.1 if we start mentioning it in the\n> docs for other versions, but it's probably not worth the effort to fix it given\n> the lack of complaints so far (it needs a call to OPENSSL_config(NULL); guarded\n> to HAVE_ macros for 1.0.1).\n\nAre you going to commit these?\n\n\n", "msg_date": "Fri, 6 Aug 2021 20:41:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Are you going to commit these?\n\nNote that with release wraps scheduled for Monday, we are probably\nalready past the time when it'd be wise to push anything that has\na significant chance of introducing portability issues. 
There's\njust not much time to deal with it if the buildfarm shows problems.\nSo unless you intend this as HEAD-only, I'd counsel waiting for the\nrelease window to pass.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Aug 2021 15:01:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 6 Aug 2021, at 21:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Are you going to commit these?\n\nAbsolutely, a combination of unplanned home renovations and vacations changed\nmy plans a bit recently.\n\n> Note that with release wraps scheduled for Monday, we are probably\n> already past the time when it'd be wise to push anything that has\n> a significant chance of introducing portability issues. There's\n> just not much time to deal with it if the buildfarm shows problems.\n> So unless you intend this as HEAD-only, I'd counsel waiting for the\n> release window to pass.\n\nUntil there is an animal running OpenSSL 3.0.0 in the buildfarm I think this\nshould be HEAD only. Further down the line we need to support OpenSSL 3 in all\nbackbranches IMO since they are all equally likely to be compiled against it,\nbut not until we can regularly test against it in the farm.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 6 Aug 2021 21:14:15 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Until there is an animal running OpenSSL 3.0.0 in the buildfarm I think this\n> should be HEAD only. 
Further down the line we need to support OpenSSL 3 in all\n> backbranches IMO since they are all equally likely to be compiled against it,\n> but not until we can regularly test against it in the farm.\n\nWorks for me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Aug 2021 15:17:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 6 Aug 2021, at 21:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Until there is an animal running OpenSSL 3.0.0 in the buildfarm I think this\n>> should be HEAD only. Further down the line we need to support OpenSSL 3 in all\n>> backbranches IMO since they are all equally likely to be compiled against it,\n>> but not until we can regularly test against it in the farm.\n> \n> Works for me.\n\nThese have now been committed, when OpenSSL 3.0.0 ships and there is coverage\nin the buildfarm I’ll revisit this for the backbranches.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 10 Aug 2021 15:27:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 10 Aug 2021, at 15:27, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> These have now been committed, when OpenSSL 3.0.0 ships and there is coverage\n> in the buildfarm I’ll revisit this for the backbranches.\n\nAs an update to this, I’ve tested the tree frozen for the upcoming 3.0.0\nrelease (scheduled for today AFAIK) and postgres still builds and tests clean\nwith the patches that were applied.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 7 Sep 2021 14:04:23 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Tue, Sep 07, 2021 at 02:04:23PM +0200, Daniel Gustafsson wrote:\n> On 10 
Aug 2021, at 15:27, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> These have now been committed, when OpenSSL 3.0.0 ships and there is coverage\n>> in the buildfarm I’ll revisit this for the backbranches.\n> \n> As an update to this, I’ve tested the tree frozen for the upcoming 3.0.0\n> release (scheduled for today AFAIK) and postgres still builds and tests clean\n> with the patches that were applied.\n\nI think that the time to do a backpatch of 318df8 has come. caiman,\nthat runs Fedora 35, has just failed:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2021-09-22%2006%3A28%3A00\n\nHere is a diff:\n@@ -8,168 +8,88 @@\n decode('0000000000000000', 'hex'),\n decode('0000000000000000', 'hex'),\n 'bf-ecb/pad:none'), 'hex');\n- encode \n-------------------\n- 4ef997456198dd78\n-(1 row)\n-\n+ERROR: encrypt error: Cipher cannot be initialized ?\n\nAnd if I look at the list of packages at the top of Fedora, I see an\nupdate to OpenSSL 3.0.0:\nhttps://fedora.pkgs.org/rawhide/fedora-aarch64/openssl-libs-3.0.0-1.fc36.aarch64.rpm.html\n\nSo the coverage is here. HEAD passes, not the stabele branches. 
At\nleast for 14 it would be nice to do that before the release of next\nweek.\n--\nMichael", "msg_date": "Wed, 22 Sep 2021 16:49:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 22 Sep 2021, at 09:49, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Sep 07, 2021 at 02:04:23PM +0200, Daniel Gustafsson wrote:\n>> On 10 Aug 2021, at 15:27, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> These have now been committed, when OpenSSL 3.0.0 ships and there is coverage\n>>> in the buildfarm I’ll revisit this for the backbranches.\n>> \n>> As an update to this, I’ve tested the tree frozen for the upcoming 3.0.0\n>> release (scheduled for today AFAIK) and postgres still builds and tests clean\n>> with the patches that were applied.\n> \n> I think that the time to do a backpatch of 318df8 has come. caiman,\n> that runs Fedora 35, has just failed:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=caiman&dt=2021-09-22%2006%3A28%3A00\n> \n> Here is a diff:\n> @@ -8,168 +8,88 @@\n> decode('0000000000000000', 'hex'),\n> decode('0000000000000000', 'hex'),\n> 'bf-ecb/pad:none'), 'hex');\n> - encode \n> -------------------\n> - 4ef997456198dd78\n> -(1 row)\n> -\n> +ERROR: encrypt error: Cipher cannot be initialized ?\n\nThat particular error stems from the legacy provider not being enabled in\nopenssl.cnf, so for this we need to backpatch 72bbff4cd as well.\n\n> So the coverage is here. HEAD passes, not the stabele branches. 
At\n> least for 14 it would be nice to do that before the release of next\n> week.\n\nAgreed, I will go ahead and prep backpatches for 318df8 and 72bbff4cd.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 22 Sep 2021 10:06:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 22 Sep 2021, at 10:06, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Agreed, I will go ahead and prep backpatches for 318df8 and 72bbff4cd.\n\nThese commits are enough to keep 14 happy, and I intend to apply them tomorrow\nafter another round of testing and caffeine.\n\nFor the 13- backbranches we also need to backport 22e1943f1 (\"pgcrypto: Check\nfor error return of px_cipher_decrypt()\" by Peter E) in order to avoid\nincorrect results for decrypt tests on disallowed ciphers. Does anyone have\nany concerns about applying this to backbranches?\n\n13 and older will, when compiled against OpenSSL 3.0.0, produce a fair amount\nof compiler warnings on usage of depreceted functionality but there is really\nanything we can do as suppressing that is beyond the scope of a backpatchable\nfix IMHO.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 23 Sep 2021 20:51:08 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Thu, Sep 23, 2021 at 2:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> For the 13- backbranches we also need to backport 22e1943f1 (\"pgcrypto: Check\n> for error return of px_cipher_decrypt()\" by Peter E) in order to avoid\n> incorrect results for decrypt tests on disallowed ciphers. 
Does anyone have\n> any concerns about applying this to backbranches?\n\nTo me it looks like it would be more concerning if we did not apply it\nto back-branches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 Sep 2021 15:54:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On 23.09.21 20:51, Daniel Gustafsson wrote:\n> For the 13- backbranches we also need to backport 22e1943f1 (\"pgcrypto: Check\n> for error return of px_cipher_decrypt()\" by Peter E) in order to avoid\n> incorrect results for decrypt tests on disallowed ciphers. Does anyone have\n> any concerns about applying this to backbranches?\n\nThis should be backpatched as a bug fix.\n\n> 13 and older will, when compiled against OpenSSL 3.0.0, produce a fair amount\n> of compiler warnings on usage of depreceted functionality but there is really\n> anything we can do as suppressing that is beyond the scope of a backpatchable\n> fix IMHO.\n\nRight, that's just a matter of adjusting the compiler warnings.\n\nEarlier in this thread, I had suggested backpatching the \nOPENSSL_API_COMPAT definition to PG13, but now I'm thinking I wouldn't \nbother, since that still wouldn't help with anything older.\n\n\n\n", "msg_date": "Thu, 23 Sep 2021 23:26:46 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 23 Sep 2021, at 23:26, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 23.09.21 20:51, Daniel Gustafsson wrote:\n>> For the 13- backbranches we also need to backport 22e1943f1 (\"pgcrypto: Check\n>> for error return of px_cipher_decrypt()\" by Peter E) in order to avoid\n>> incorrect results for decrypt tests on disallowed ciphers. 
Does anyone have\n>> any concerns about applying this to backbranches?\n> \n> This should be backpatched as a bug fix.\n\nThanks for confirming, I will go ahead and do that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 23 Sep 2021 23:41:26 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 23 Sep 2021, at 23:41, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Thanks for confirming, I will go ahead and do that.\n\nThese have now been pushed to 14 through to 10 ahead of next week releases, I\nwill keep an eye on caiman as it builds these branches. The OpenSSL support in\n9.6 pgcrypto isn't using the EVP API (committed in 5ff4a67f63) so it's a bit\ntrickier to get green, but I'll take a stab at that when my fever goes down a\nbit.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 25 Sep 2021 11:55:03 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Sat, Sep 25, 2021 at 11:55:03AM +0200, Daniel Gustafsson wrote:\n> These have now been pushed to 14 through to 10 ahead of next week releases, I\n> will keep an eye on caiman as it builds these branches. The OpenSSL support in\n> 9.6 pgcrypto isn't using the EVP API (committed in 5ff4a67f63) so it's a bit\n> trickier to get green,\n\nThanks! As 9.6 will be EOL'd in a couple of weeks, is that really\nworth the effort though? It sounds risky to me to introduce an\ninvasive change as that would increase the risk of bugs for existing\nusers. So my vote would be to just let this one go.\n\n> but I'll take a stab at that when my fever goes down a bit.\n\nOuch. 
Take care!\n--\nMichael", "msg_date": "Sat, 25 Sep 2021 19:03:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 25 Sep 2021, at 12:03, Michael Paquier <michael@paquier.xyz> wrote:\n\n> As 9.6 will be EOL'd in a couple of weeks, is that really\n> worth the effort though? It sounds risky to me to introduce an\n> invasive change as that would increase the risk of bugs for existing\n> users. So my vote would be to just let this one go.\n\nAgreed, if it's not a simple fix it's unlikely to be worth it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 25 Sep 2021 12:07:43 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 25 Sep 2021, at 12:03, Michael Paquier <michael@paquier.xyz> wrote:\n>> As 9.6 will be EOL'd in a couple of weeks, is that really\n>> worth the effort though? It sounds risky to me to introduce an\n>> invasive change as that would increase the risk of bugs for existing\n>> users. So my vote would be to just let this one go.\n\n> Agreed, if it's not a simple fix it's unlikely to be worth it.\n\nYeah, there will be no second chance to get 9.6.last right,\nso I'd vote against touching it for this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Sep 2021 09:45:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 25 Sep 2021, at 15:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 25 Sep 2021, at 12:03, Michael Paquier <michael@paquier.xyz> wrote:\n>>> As 9.6 will be EOL'd in a couple of weeks, is that really\n>>> worth the effort though? 
It sounds risky to me to introduce an\n>>> invasive change as that would increase the risk of bugs for existing\n>>> users. So my vote would be to just let this one go.\n> \n>> Agreed, if it's not a simple fix it's unlikely to be worth it.\n> \n> Yeah, there will be no second chance to get 9.6.last right,\n> so I'd vote against touching it for this.\n\nFair point. Should we perhaps instead include a note in the pgcrypto docs for\n9.6 that 3.0.0 isn't supported and leave it at that?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 25 Sep 2021 20:34:44 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "On Wed, 29 Jun 2022 at 10:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n> These have now been pushed to 14 through to 10 ahead of next week releases\n\nI upgraded my OS to Ubuntu 22.04 and it seems that \"Define\nOPENSSL_API_COMPAT\" commit was never backported\n(4d3db13621be64fbac2faf7c01c4879d20885c1b). I now get various\ndeprecation warnings when compiling PG13 on Ubuntu 22.04, because of\nOpenSSL 3.0. Was this simply forgotten, or is there a reason why it\nwasn't backported?\n\n\n", "msg_date": "Wed, 29 Jun 2022 11:02:56 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 29 Jun 2022, at 11:02, Jelte Fennema <postgres@jeltef.nl> wrote:\n> \n> On Wed, 29 Jun 2022 at 10:55, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> These have now been pushed to 14 through to 10 ahead of next week releases\n> \n> I upgraded my OS to Ubuntu 22.04 and it seems that \"Define\n> OPENSSL_API_COMPAT\" commit was never backported\n> (4d3db13621be64fbac2faf7c01c4879d20885c1b). I now get various\n> deprecation warnings when compiling PG13 on Ubuntu 22.04, because of\n> OpenSSL 3.0. 
Was this simply forgotten, or is there a reason why it\n> wasn't backported?\n\nSee upthread in ef5c7896-20cb-843f-e91e-0ee5f7fd932e@enterprisedb.com, below is\nthe relevant portion:\n\n>> 13 and older will, when compiled against OpenSSL 3.0.0, produce a fair amount\n>> of compiler warnings on usage of depreceted functionality but there is really\n>> anything we can do as suppressing that is beyond the scope of a backpatchable\n>> fix IMHO.\n> \n> Right, that's just a matter of adjusting the compiler warnings.\n> \n> Earlier in this thread, I had suggested backpatching the OPENSSL_API_COMPAT definition to PG13, but now I'm thinking I wouldn't bother, since that still wouldn't help with anything older.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 29 Jun 2022 11:25:59 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> See upthread in ef5c7896-20cb-843f-e91e-0ee5f7fd932e@enterprisedb.com\n\nI saw that section, but I thought that only applied before you\nbackpatched the actual fixes to PG13 and below. I mean there's no\nreason anymore not to compile those older versions with OpenSSL 3.0,\nright? If so, it seems confusing for the build to spit out warnings\nthat indicate the contrary.\n\n\n", "msg_date": "Wed, 29 Jun 2022 11:44:00 +0200", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" }, { "msg_contents": "> On 29 Jun 2022, at 11:44, Jelte Fennema <postgres@jeltef.nl> wrote:\n> \n>> See upthread in ef5c7896-20cb-843f-e91e-0ee5f7fd932e@enterprisedb.com\n> \n> I saw that section, but I thought that only applied before you\n> backpatched the actual fixes to PG13 and below. I mean there's no\n> reason anymore not to compile those older versions with OpenSSL 3.0,\n> right? 
If so, it seems confusing for the build to spit out warnings\n> that indicate the contrary.\n\nThe project isn't automatically fixing compiler warnings or library deprecation\nwarnings in back-branches. I guess one could make the argument for this case\ngiven how widespread OpenSSL 3.0, but it comes with a significant testing\neffort to ensure that all back-branches behave correctly with all version of\nOpenSSL so it's not for free (it should be, but with OpenSSL I would personally\nnot trust that). Also, PG12 and below had 0.9.8 as minimum version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 29 Jun 2022 12:59:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: OpenSSL 3.0.0 compatibility" } ]
[ { "msg_contents": "There's the age-old problem of SET NOT NULL being impossible on large actively used tables, because it needs to lock the table and do a table scan to check if there are any existing NULL values. I currently have a table that's not particularly huge but a scan takes 70 seconds, which causes unacceptable downtime for my entire application.\n\nPostgres is not able to use an index when doing this check: https://dba.stackexchange.com/questions/267947\n\nWould it be possible to have Postgres use an index for this check? Given the right index, the check could be instant and the table would only need to be locked for milliseconds.\n\n(I'm sure I'm not the first person to think of this, but I couldn't find any other discussion on this list or elsewhere.)\n\nThanks for reading!\nJohn\n\n\n", "msg_date": "Thu, 28 May 2020 23:24:40 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", "msg_from_op": true, "msg_subject": "feature idea: use index when checking for NULLs before SET NOT NULL" }, { "msg_contents": "Hello\n\nCorrect index lookup is a difficult task. I tried to implement this previously...\n\nBut the answer in SO is a bit incomplete for recent postgresql releases. Seqscan is not the only possible way to set not null in pg12+. 
My patch was committed ( https://commitfest.postgresql.org/22/1389/ ) and now it's possible to do this way:\n\nalter table foos \n add constraint foos_not_null \n check (bar1 is not null) not valid; -- short-time exclusive lock\n\nalter table foos validate constraint foos_not_null; -- still seqscan entire table but without exclusive lock\n\nAnd then another short lock:\nalter table foos alter column bar1 set not null;\nalter table foos drop constraint foos_not_null;\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 29 May 2020 09:56:38 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "On Fri, May 29, 2020 at 8:56 AM Sergei Kornilov <sk@zsrv.org> wrote:\n\n> Hello\n>\n> Correct index lookup is a difficult task. I tried to implement this\n> previously...\n>\n> But the answer in SO is a bit incomplete for recent postgresql releases.\n> Seqscan is not the only possible way to set not null in pg12+. My patch was\n> committed ( https://commitfest.postgresql.org/22/1389/ ) and now it's\n> possible to do this way:\n>\n> alter table foos\n> add constraint foos_not_null\n> check (bar1 is not null) not valid; -- short-time exclusive lock\n>\n> alter table foos validate constraint foos_not_null; -- still seqscan\n> entire table but without exclusive lock\n>\n> And then another short lock:\n> alter table foos alter column bar1 set not null;\n> alter table foos drop constraint foos_not_null;\n>\n\nThat's really good to know, Sergei!\n\nJohn, I think it's worth pointing out that Postgres most likely does a full\ntable scan to validate a constraint by design and not in optimization\noversight. Think of what's gonna happen if the index used for checking is\ncorrupted?\n\nCheers,\n--\nAlex", "msg_date": "Fri, 29 May 2020 09:26:03 +0200", "msg_from": "Oleksandr Shulgin <oleksandr.shulgin@zalando.de>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "Wow! Thank you Sergei for working on this patch, for working for months/years to get it in, and for replying to my email!\n\nFor others reading this later:\n- the feature was introduced in 12\n- the commit is here https://github.com/postgres/postgres/commit/bbb96c3704c041d139181c6601e5bc770e045d26\n\nSergei, a few questions:\n\n- Just to be clear, your recipe does not require any indexes, right? Because the constraint check table scan is inherently concurrent?\n- Was this new behavior mentioned in the release notes?\n- Do you know if there are any blog posts etc. discussing this? (I'm definitely going to write one!!)\n\nJohn\n\n\n> \n> But the answer in SO is a bit incomplete for recent postgresql \n> releases. Seqscan is not the only possible way to set not null in \n> pg12+. 
My patch was committed ( \n> https://commitfest.postgresql.org/22/1389/ ) and now it's possible to \n> do this way:\n> \n> alter table foos \n>      add constraint foos_not_null \n>      check (bar1 is not null) not valid; -- short-time exclusive lock\n> \n> alter table foos validate constraint foos_not_null; -- still seqscan \n> entire table but without exclusive lock\n> \n> And then another short lock:\n> alter table foos alter column bar1 set not null;\n> alter table foos drop constraint foos_not_null;\n> \n> regards, Sergei\n>\n\n\n", "msg_date": "Fri, 29 May 2020 08:08:22 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_feature_idea:_use_index_when_checking_for_NULLs_before_SET?=\n =?UTF-8?Q?_NOT_NULL?=" }, { "msg_contents": ">\n>\n>\n> John, I think it's worth pointing out that Postgres most likely does a\n> full table scan to validate a constraint by design and not in optimization\n> oversight. Think of what's gonna happen if the index used for checking is\n> corrupted?\n>\n>\nThis can't be true: a corrupted index is a failure mode, and failure modes\nare not expected in normal flow. Think of it this way: we must never use\nindex scan, because if index is corrupted the results are going to be\ndisastrous, so we will always do Seq Scans.\n\nIt's ok to assume index is not corrupted.\n\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa", "msg_date": "Fri, 29 May 2020 15:56:41 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "Hi\n\n> Sergei, a few questions:\n>\n> - Just to be clear, your recipe does not require any indexes, right? Because the constraint check table scan is inherently concurrent?\n\nRight. \"alter table validate constraint\" can not use indexes, but does not block concurrent read/write queries. Other queries in this scenario can not use indexes too, but should be fast.\n\n> - Was this new behavior mentioned in the release notes?\n\nYes, small note in documentation and small note in release notes https://www.postgresql.org/docs/12/release-12.html this one:\n\n> Allow ALTER TABLE ... SET NOT NULL to avoid unnecessary table scans (Sergei Kornilov)\n> This can be optimized when the table's column constraints can be recognized as disallowing nulls.\n\n> - Do you know if there are any blog posts etc. discussing this? (I'm definitely going to write one!!)\n\nI do not know. Personally, I mentioned this feature in only a few Russian-speaking communities. And right now I wrote an answer in dba.SO.\n\nregards, Sergei\n\n\n", "msg_date": "Fri, 29 May 2020 16:21:06 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "Hi Sergei - I just used the recipe on my production database. I didn't observe all the expected benefits, I wonder if there were confounding factors or if I did something wrong. If you have time, I'd love to get your feedback. Let me know if you need more info. I'd love to write a blog post informing the world about this potentially game-changing feature!\n\nHere are the commands I did, with some notes. 
All the columns are boolean. The table has about 8,600,000 rows.\n\nThis (blocking operation) was not fast, perhaps 60-100 seconds. maybe running them individually\nwould have been proportionally faster. but even then, not near-instant as expected.\nor, maybe running them together had some sort of aggregate negative effect, so running them individually\nwould have been instant? I don't have much experience with such constraints.\n\nALTER TABLE my_table\n ADD CONSTRAINT my_table_column1_not_null CHECK (column1 IS NOT NULL) NOT VALID,\n ADD CONSTRAINT my_table_column2_not_null CHECK (column2 IS NOT NULL) NOT VALID,\n ADD CONSTRAINT my_table_column3_not_null CHECK (column3 IS NOT NULL) NOT VALID,\n ADD CONSTRAINT my_table_column4_not_null CHECK (column4 IS NOT NULL) NOT VALID;\n\n\nas expected these took as long as a table scan, and as expected they did not block.\n\nALTER TABLE my_table validate CONSTRAINT my_table_column1_not_null;\nALTER TABLE my_table validate CONSTRAINT my_table_column2_not_null;\nALTER TABLE my_table validate CONSTRAINT my_table_column3_not_null;\nALTER TABLE my_table validate CONSTRAINT my_table_column4_not_null;\n\n\nSLOW (table scan speed) - didn't have timing on, but I think about same time as the next one.\nALTER TABLE my_table ALTER COLUMN column1 SET NOT NULL;\n\n01:39 SLOW (table scan speed)\nALTER TABLE my_table ALTER COLUMN column2 SET NOT NULL;\n\n00:22 - 1/4 time of table scan but still not instant like expected\nALTER TABLE my_table ALTER COLUMN column3 SET NOT NULL;\n\n20.403 ms - instant, like expected\nALTER TABLE my_table ALTER COLUMN column4 SET NOT NULL;\n\n\nall < 100ms\nALTER TABLE my_table DROP CONSTRAINT my_table_column1_not_null;\nALTER TABLE my_table DROP CONSTRAINT my_table_column2_not_null;\nALTER TABLE my_table DROP CONSTRAINT my_table_column3_not_null;\nALTER TABLE my_table DROP CONSTRAINT my_table_column4_not_null;\n\n\n", "msg_date": "Fri, 29 May 2020 21:53:14 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", 
"msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_feature_idea:_use_index_when_checking_for_NULLs_before_SET?=\n =?UTF-8?Q?_NOT_NULL?=" }, { "msg_contents": "On Fri, May 29, 2020 at 09:53:14PM -0400, John Bachir wrote:\n> Hi Sergei - I just used the recipe on my production database. I didn't\n> observe all the expected benefits, I wonder if there were confounding factors\n> or if I did something wrong. If you have time, I'd love to get your feedback.\n> Let me know if you need more info. I'd love to write a blog post informing\n> the world about this potentially game-changing feature!\n\nIf you do it right, you can see a DEBUG:\n\npostgres=# CREATE TABLE tn (i int);\npostgres=# ALTER TABLE tn ADD CONSTRAINT nn CHECK (i IS NOT NULL) NOT VALID;\npostgres=# ALTER TABLE tn VALIDATE CONSTRAINT nn;\npostgres=# SET client_min_messages=debug;\npostgres=# ALTER TABLE tn ALTER i SET NOT NULL ;\nDEBUG: existing constraints on column \"tn\".\"i\" are sufficient to prove that it does not contain nulls\n\n> SLOW (table scan speed) - didn't have timing on, but I think about same time as the next one.\n> ALTER TABLE my_table ALTER COLUMN column1 SET NOT NULL;\n> \n> 01:39 SLOW (table scan speed)\n> ALTER TABLE my_table ALTER COLUMN column2 SET NOT NULL;\n> \n> 00:22 - 1/4 time of table scan but still not instant like expected\n> ALTER TABLE my_table ALTER COLUMN column3 SET NOT NULL;\n> \n> 20.403 ms - instant, like expected\n> ALTER TABLE my_table ALTER COLUMN column4 SET NOT NULL;\n\nThat the duration decreased every time may have been due to caching?\nHow big is the table vs RAM ?\nDo you know if the SET NOT NULL blocked or not ?\n\nMaybe something else had a nontrivial lock on the table, and those commands\nwere waiting on lock. 
If you \"SET deadlock_timeout='1'; SET\nlog_lock_waits=on;\", then you could see that.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 29 May 2020 21:10:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "\nOn Fri, May 29, 2020, at 10:10 PM, Justin Pryzby wrote:\n\n> If you do it right, you can see a DEBUG:\n\n> postgres=# SET client_min_messages=debug;\n> postgres=# ALTER TABLE tn ALTER i SET NOT NULL ;\n> DEBUG: existing constraints on column \"tn\".\"i\" are sufficient to prove \n> that it does not contain nulls\n\nThanks! I'll add that to my recipe for the future. Although by that time it would be too late, so to make use of this I would have to set up a cloned test environment and hope that all conditions are correctly cloned. Is there a way to check sufficiency before running the command?\n\n\n> That the duration decreased every time may have been due to caching?\n> How big is the table vs RAM ?\n\nTable is about 10 gigs, machine has 16gigs, I'm hoping OS & PG did not decide to kick out everything else from ram when doing the operation. But even with caching, the final command being 20ms, and the first 2 commands being the same time as a table scan, seems like something other than caching is at play here? IDK!\n\n> Do you know if the SET NOT NULL blocked or not ?\n> Maybe something else had a nontrivial lock on the table, and those commands\n> were waiting on lock. If you \"SET deadlock_timeout='1'; SET\n> log_lock_waits=on;\", then you could see that.\n\nI don't know if it blocked. Great idea! I'll add that to my recipe as well.\n\nJohn\n\n\np.s. current recipe: https://gist.github.com/jjb/fab5cc5f0e1b23af28694db4fc01c55a\np.p.s I think one of the biggest surprises was that setting the NOT NULL condition was slow. 
That's totally unrelated to this feature and out of scope for this list though; I asked about it here https://dba.stackexchange.com/questions/268301/why-is-add-constraint-not-valid-taking-a-long-time\n\n\n", "msg_date": "Mon, 01 Jun 2020 10:49:25 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_feature_idea:_use_index_when_checking_for_NULLs_before_SET?=\n =?UTF-8?Q?_NOT_NULL?=" }, { "msg_contents": "> Maybe something else had a nontrivial lock on the table, and those commands\n> were waiting on lock. If you \"SET deadlock_timeout='1'; SET\n> log_lock_waits=on;\", then you could see that.\n\nJust checking - I think you mean lock_timeout? (although setting deadlock_timeout is also not a bad idea just in case).\n\n\n", "msg_date": "Mon, 01 Jun 2020 21:55:43 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_feature_idea:_use_index_when_checking_for_NULLs_before_SET?=\n =?UTF-8?Q?_NOT_NULL?=" }, { "msg_contents": "On Mon, Jun 01, 2020 at 10:49:25AM -0400, John Bachir wrote:\n> On Fri, May 29, 2020, at 10:10 PM, Justin Pryzby wrote:\n> \n> > If you do it right, you can see a DEBUG:\n> > postgres=# SET client_min_messages=debug;\n> > postgres=# ALTER TABLE tn ALTER i SET NOT NULL ;\n> > DEBUG: existing constraints on column \"tn\".\"i\" are sufficient to prove \n> > that it does not contain nulls\n> \n> Thanks! I'll add that to my recipe for the future. Although by that time it would be too late, so to make use of this I would have to set up a cloned test environment and hope that all conditions are correctly cloned. 
Is there a way to check sufficiency before running the command?\n\nYea, client_min_messages is there to demonstrate that the feature is working\nand allow you to check whether it works using your own recipe.\n\nIf you want to avoid blocking the table for nontrivial time, maybe you'd add:\nSET statement_timeout='1s';\n\nOn Mon, Jun 01, 2020 at 09:55:43PM -0400, John Bachir wrote:\n> > Maybe something else had a nontrivial lock on the table, and those commands\n> > were waiting on lock. If you \"SET deadlock_timeout='1'; SET\n> > log_lock_waits=on;\", then you could see that.\n> \n> Just checking - I think you mean lock_timeout? (although setting deadlock_timeout is also not a bad idea just in case).\n\nNo, actually (but I've had to double check):\n\nhttps://www.postgresql.org/docs/current/runtime-config-locks.html\n|When log_lock_waits is set, this parameter also determines the length of time\n|to wait before a log message is issued about the lock wait. If you are trying\n|to investigate locking delays you might want to set a shorter than normal\n|deadlock_timeout.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 1 Jun 2020 21:04:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: feature idea: use index when checking for NULLs before SET NOT\n NULL" }, { "msg_contents": "Thank you Justin for all that useful info! A couple nitpicky questions, so I can get my recipe right.\n\nOn Mon, Jun 1, 2020, at 10:04 PM, Justin Pryzby wrote:\n> On Mon, Jun 01, 2020 at 10:49:25AM -0400, John Bachir wrote:\n> > Thanks! I'll add that to my recipe for the future. Although by that time it would be too late, so to make use of this I would have to set up a cloned test environment and hope that all conditions are correctly cloned. 
Is there a way to check sufficiency before running the command?\n> \n> Yea, client_min_messages is there to demonstrate that the feature is working\n> and allow you to check whether it work using your own recipe.\n\nGotcha. Okay, just to double-check - you are saying there is _not_ a way to check before running the command, right?\n\n\n> > Just checking - I think you mean lock_timeout? (although setting deadlock_timeout is also not a bad idea just in case).\n> \n> No, actually (but I've had to double check):\n> \n> https://www.postgresql.org/docs/current/runtime-config-locks.html\n> |When log_lock_waits is set, this parameter also determines the length of time\n> |to wait before a log message is issued about the lock wait. If you are trying\n> |to investigate locking delays you might want to set a shorter than normal\n> |deadlock_timeout.\n\nHah! Unexpected and useful.\n\nI think to avoid blocking my application activity, I should _also_ set lock_timeout, and retry if it fails. (maybe this is orthogonal to what you were addressing before, but I'm just wondering what you think).\n\nThanks,\nJohn\n\n\n", "msg_date": "Mon, 01 Jun 2020 22:22:58 -0400", "msg_from": "\"John Bachir\" <j@jjb.cc>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Re:_feature_idea:_use_index_when_checking_for_NULLs_before_SET?=\n =?UTF-8?Q?_NOT_NULL?=" } ]
[ { "msg_contents": "Hi,\n\nIn [1] I mentioned that I'd like to look into coding a data structure\nto allow Node types to be looked up more efficiently than what List\nallows. There are many places in the planner where we must determine\nif a given Node is in some List of Nodes. This is an O(n) operation.\nWe could reduce these to as low as O(log n) with a binary search tree.\n\nA few places in the code that does this which can become a problem\nwhen the lists being searched are large:\n\nget_eclass_for_sort_expr()\ntlist_member()\n\nFor the former, it's quite easy to see this come up in the profiles by\nquerying a partitioned table with many partitions. e.g\n\ncreate table listp (a int) partition by list(a);\nselect 'create table listp'||x::Text || ' partition of listp for\nvalues in('||x::text||');' from generate_Series(1,10000)x;\n\\gexec\ncreate index on listp(a);\nexplain select * from listp order by a;\n\nThere are a few places that this new data structure could be used. My\nprimary interest at the moment is EquivalenceClass.ec_members. There\nare perhaps other interesting places, such as planner targetlists, but\nobviously, any changes would need to be proved worthwhile before\nthey're made.\n\nPer what I mentioned in [1], I'd like this structure to be a binary\nsearch tree, (either a red/black or AVL binary search tree compacted\ninto an array.) So instead of having ->left ->right pointers, we have\nleft and right offsets back into the array. This allows very fast and\ncache-friendly unordered traversals of the tree, as it's an array\nwhenever we want it to be and a tree when we need to perform a lookup\nof a specific value. Because I'm not using a true tree, i.e pointers\nto elements, then it does not appear like I can use rbtree.c\n\nThe first step to make this work is to modify equalfuncs.c to add an\nequalCompare() function which will return an int to indicate the sort\norder of the node against another node. 
I've drafted a patch to\nimplement this and attached it here. I've done this in 2 phases, 0001\ncreates and implements datatype specific macros for the comparisons.\nDoing this allows us to optimise the bool/char field comparison macros\nin 0002 so we can do a simple subtraction rather than 2\ncomparisons. The 0002 patch modifies each comparison macro and changes\nall the equal functions to return int rather than bool. The external\ncomparison function is equalCompare(). equal() just calls\nequalCompare() and checks the value returned is 0.\n\nWith both patches, there is an increase in size of about 17% for the\nobject file for equalfuncs:\n\nequalfuncs.o\nMaster: 56768 bytes\nPatched: 66912 bytes\n\nIf I don't use the macros specifically optimised for bool and char,\nthen the size is 68240 bytes. So I think doing 0001 is worth it.\n\nThis does break the API for ExtensibleNodeMethods as it's no longer\nenough to just have nodeEqual(). In the 0002 patch I've changed this\nto nodeCompare(). Extension authors who are using this will need to\nalter their code to implement a proper comparison function.\n\nFor now, I'm mostly just interested in getting feedback about making\nthis change to equalfuncs.c. Does anyone object to it?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8v-fUG8YpaAGj309ZuALo3aEk7f6cqMHr_AVwz1fKXug%40mail.gmail.com", "msg_date": "Fri, 29 May 2020 17:16:32 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Speeding up parts of the planner using a binary search tree structure\n for nodes" }, { "msg_contents": "On Fri, May 29, 2020 at 10:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Hi,\n>\n> In [1] I mentioned that I'd like to look into coding a data structure\n> to allow Node types to be looked up more efficiently than what List\n> allows. There are many places in the planner where we must determine\n> if a given Node is in some List of Nodes. 
This is an O(n) operation.\n> We could reduce these to as low as O(log n) with a binary search tree.\n>\n> A few places in the code that does this which can become a problem\n> when the lists being searched are large:\n>\n> get_eclass_for_sort_expr()\n\neclass members have relids as their key. Can we use hash table instead\nlike how we manage join relations? I know that there are other cases\nwhere we search for subset of relids instead of exact match. So the\nhash table has to account for that. If we could somehow create a hash\nvalue out of an expression node (we almost hash anything so that\nshould be possible) we should be able to use hash table for searching\nexpression as well.\n\n> tlist_member()\n\nhash table by expression as key here as well?\n\n>\n> For the former, it's quite easy to see this come up in the profiles by\n> querying a partitioned table with many partitions. e.g\n>\n> create table listp (a int) partition by list(a);\n> select 'create table listp'||x::Text || ' partition of listp for\n> values in('||x::text||');' from generate_Series(1,10000)x;\n> \\gexec\n> create index on listp(a);\n> explain select * from listp order by a;\n>\n> There are a few places that this new data structure could be used. My\n> primary interest at the moment is EquivalenceClass.ec_members. There\n> are perhaps other interesting places, such as planner targetlists, but\n> obviously, any changes would need to be proved worthwhile before\n> they're made.\n>\n> Per what I mentioned in [1], I'd like this structure to be a binary\n> search tree, (either a red/black or AVL binary search tree compacted\n> into an array.) So instead of having ->left ->right pointers, we have\n> left and right offsets back into the array. This allows very fast and\n> cache-friendly unordered traversals of the tree, as it's an array\n> whenever we want it to be and a tree when we need to perform a lookup\n> of a specific value. 
Because I'm not using a true tree, i.e pointers\n> to elements, then it does not appear like I can use rbtree.c\n>\n> The first step to make this work is to modify equalfuncs.c to add an\n> equalCompare() function which will return an int to indicate the sort\n> order of the node against another node. I've drafted a patch to\n> implement this and attached it here. I've done this in 2 phases, 0001\n> creates and implements datatype specific macros for the comparisons.\n> Doing this allows us to optimise the bool/char field comparison macros\n> in 0002 so we can do a simple subtraction rather than 2\n> comparisons. The 0002 patch modifies each comparison macro and changes\n> all the equal functions to return int rather than bool. The external\n> comparison function is equalCompare(). equal() just calls\n> equalCompare() and checks the value returned is 0.\n>\n> With both patches, there is an increase in size of about 17% for the\n> object file for equalfuncs:\n>\n> equalfuncs.o\n> Master: 56768 bytes\n> Patched: 66912 bytes\n>\n> If I don't use the macros specifically optimised for bool and char,\n> then the size is 68240 bytes. So I think doing 0001 is worth it.\n>\n> This does break the API for ExtensibleNodeMethods as it's no longer\n> enough to just have nodeEqual(). In the 0002 patch I've changed this\n> to nodeCompare(). Extension authors who are using this will need to\n> alter their code to implement a proper comparison function.\n>\n> For now, I'm mostly just interested in getting feedback about making\n> this change to equalfuncs.c. 
Does anyone object to it?\n>\n> David\n>\n> [1] https://www.postgresql.org/message-id/CAKJS1f8v-fUG8YpaAGj309ZuALo3aEk7f6cqMHr_AVwz1fKXug%40mail.gmail.com\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 29 May 2020 19:22:01 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Speeding up parts of the planner using a binary search tree\n structure for nodes" }, { "msg_contents": "On Sat, 30 May 2020 at 01:52, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, May 29, 2020 at 10:47 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > In [1] I mentioned that I'd like to look into coding a data structure\n> > to allow Node types to be looked up more efficiently than what List\n> > allows. There are many places in the planner where we must determine\n> > if a given Node is in some List of Nodes. This is an O(n) operation.\n> > We could reduce these to as low as O(log n) with a binary search tree.\n> >\n> > A few places in the code that does this which can become a problem\n> > when the lists being searched are large:\n> >\n> > get_eclass_for_sort_expr()\n>\n> eclass members have relids as their key. Can we use hash table instead\n> like how we manage join relations? I know that there are other cases\n> where we search for subset of relids instead of exact match. So the\n> hash table has to account for that. If we could somehow create a hash\n> value out of an expression node (we almost hash anything so that\n> should be possible) we should be able to use hash table for searching\n> expression as well.\n\nThis certainly could be done with hash tables. However, I feel it\nraises the bar a little as it means we either need to maintain a hash\nfunction for each node type or do some pre-processor magic in\nequalfuncs.c to adjust the comparison macros into hashing macros to\nallow it to be compiled again to implement hash functions. 
If\neveryone feels that's an okay thing to go and do then perhaps hash\ntables are a better option. I was just trying to keep the bar to some\nlevel that I thought might be acceptable to the community.\n\nIt does seem likely to me that hash tables would be a more efficient\nstructure, but the gains might not be as big as you think. To perform\na lookup in the table we need to hash the entire node to get the hash\nvalue, then perform at least one equal comparison. With the Binary\nSearch Lists, we'd need more comparisons, but those can be partial\ncomparisons as they can abort early when we find the first mismatching\nfield in the Node. When the number of items in the list is fairly\nsmall that might actually be less work, especially when the Node type\nvaries (since that's the first field we compare). Hash tables are\nlikely to become more efficient when the number of items is larger.\nThe general case is that we'll just have a small number of items. I'd\nlike to improve the less common situation when the number of items is\nlarge with minimal impact for when the number of items is small.\n\n> > tlist_member()\n>\n> hash table by expression as key here as well?\n\nThe order of the tlist does matter in many cases. I'm unsure how we\ncould track the order that items were added to the hash table and then\nobtain the items back in that order in an efficient way. I imagine\nwe'd need to store the item in some subsequent data structure, such as\na List. There's obviously going to be some overhead to doing that.\nThe idea with the Binary Search Lists was that items would be stored\nin an array, similar to List, but the cells of the list would contain\nthe offset index to its parent and left and right child. 
New items\nwould always take the next free slot in the list, just like List does\nso we'd easily be able to get the insert order by looping over the\narray of elements in order.\n\nDavid\n\n> > [1] https://www.postgresql.org/message-id/CAKJS1f8v-fUG8YpaAGj309ZuALo3aEk7f6cqMHr_AVwz1fKXug%40mail.gmail.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 09:42:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Speeding up parts of the planner using a binary search tree\n structure for nodes" }, { "msg_contents": "Hi David,\n\nOn Fri, May 29, 2020 at 2:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> In [1] I mentioned that I'd like to look into coding a data structure\n> to allow Node types to be looked up more efficiently than what List\n> allows. There are many places in the planner where we must determine\n> if a given Node is in some List of Nodes. This is an O(n) operation.\n> We could reduce these to as low as O(log n) with a binary search tree.\n>\n> A few places in the code that does this which can become a problem\n> when the lists being searched are large:\n>\n> get_eclass_for_sort_expr()\n> tlist_member()\n>\n> For the former, it's quite easy to see this come up in the profiles by\n> querying a partitioned table with many partitions. e.g\n>\n> create table listp (a int) partition by list(a);\n> select 'create table listp'||x::Text || ' partition of listp for\n> values in('||x::text||');' from generate_Series(1,10000)x;\n> \\gexec\n> create index on listp(a);\n> explain select * from listp order by a;\n>\n> There are a few places that this new data structure could be used. My\n> primary interest at the moment is EquivalenceClass.ec_members. 
There\n> are perhaps other interesting places, such as planner targetlists, but\n> obviously, any changes would need to be proved worthwhile before\n> they're made.\n\nI'm glad to see you mention this problem.\n\nI have often wondered if we could do something about these data\nstructures growing bigger with the number of partitions in the first\nplace. I could imagine that when \"child\" EC members were designed,\nTom maybe didn't think we'd continue relying on the abstraction for\nthe 10000 partitions case. Making the lookup algorithm smarter with\nbinary or hashed search, as you are trying to do, might help us\nsomewhat, but would it be interesting to consider alternative designs\nfor the underlying abstraction? Sorry, I don't have any concrete\nideas to offer at the moment, but thought sharing this perspective of\nthe problem might inspire others who may have some.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jun 2020 12:14:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Speeding up parts of the planner using a binary search tree\n structure for nodes" }, { "msg_contents": "On Tue, 2 Jun 2020 at 03:13, David Rowley <dgrowleyml@gmail.com> wrote:\n\n>\n>\n> It does seem likely to me that hash tables would be a more efficient\n> structure, but the gains might not be as big as you think. To perform\n> a lookup in the table we need to hash the entire node to get the hash\n> value, then perform at least one equal comparison. With the Binary\n> Search Lists, we'd need more comparisons, but those can be partial\n> comparisons as they can abort early when we find the first mismatching\n> field in the Node. When the number of items in the list is fairly\n> small that might actually be less work, especially when the Node type\n> varies (since that's the first field we compare). 
Hash tables are\n> likely to become more efficient when the number of items is larger.\n> The general case is that we'll just have a small number of items. I'd\n> like to improve the less common situation when the number of items is\n> large with minimal impact for when the number of items is small.\n>\n\nAgree with that. I think we can again borrow from the way we search a join\n- when the number of joins is small use a list and for a larger number use a hash\ntable. I am not suggesting that we use a list in this case, but the idea is\nto use two data structures. Maybe every eclass will use one of them\ndepending upon the number of members. Queries involving partitioned tables\nwith lots of partitions will have a large number of child eclass members.\n\n\n>\n> > > tlist_member()\n> >\n> > hash table by expression as key here as well?\n>\n> The order of the tlist does matter in many cases. I'm unsure how we\n> could track the order that items were added to the hash table and then\n> obtain the items back in that order in an efficient way. I imagine\n> we'd need to store the item in some subsequent data structure, such as\n> a List.\n\n\nIf we use a hash table then we retain the list as well. But your idea below\nlooks better.\n\n\n> There's obviously going to be some overhead to doing that.\n> The idea with the Binary Search Lists was that items would be stored\n> in an array, similar to List, but the cells of the list would contain\n> the offset index to its parent and left and right child. New items\n> would always take the next free slot in the list, just like List does\n> so we'd easily be able to get the insert order by looping over the\n> array of elements in order.\n>\n> That seems like a good idea. 
I am worried that the expression nodes do not\nhave any inherent ordering and we are proposing to use a structure which\nrelies on order.\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Mon, 8 Jun 2020 19:47:59 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Speeding up parts of the planner using a binary search tree\n structure for nodes" } ]
[ { "msg_contents": "Hi,\n\nIs there any extension or option in PG to keep information of any (memory\ncontext/some memory address) of backend process after getting killed in\n(postgres server process/postmaster  process)\n\nThanks\nBraj", "msg_date": "Fri, 29 May 2020 13:42:17 +0530", "msg_from": "brajmohan saxena <braj.saxena.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Does PG server process keep backend info" } ]
[ { "msg_contents": "Hello,\n\nI noticed pg_dump failed to not dump creation or comment commands for public\nschema when we explicitly ask it to dump public schema.\n\nShorter example: pg_dump -n public dump will give:\n\n--\n-- Name: public; Type: SCHEMA; Schema: -; Owner: postgres\n--\n\nCREATE SCHEMA public;\n\n\nALTER SCHEMA public OWNER TO postgres;\n\n--\n-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres\n--\n\nCOMMENT ON SCHEMA public IS 'standard public schema';\n\n\nObviously, it trigger errors when we try to restore it as public schema already\nexists.\n\n\nGit bisect blame this commit (since pg11):\n\ncommit 5955d934194c3888f30318209ade71b53d29777f (refs/bisect/bad)\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jan 25 13:54:42 2018 -0500\n Improve pg_dump's handling of \"special\" built-in objects.\n\nI first tried to add an only_dump_public_schema test. I am not used to how\npg_dump tests works but I do not think it is the best approach due to how many\ntest I had to disable for only_dump_public_schema.\n\nThen I tried to change selectDumpableNamespace in order to apply the same\ntreatment to public schema when we explicitly ask pg_dump to dump public schema.\n\nUnfortunately this broke other tests, all related to how we handle COLLATION.\nFor example:\n\n# Failed test 'only_dump_test_schema: should not dump ALTER COLLATION test0\nOWNER TO'\n\n# Failed test 'only_dump_test_schema: should not dump COMMENT ON COLLATION test0'\n\n# Failed test 'only_dump_test_schema: should not dump CREATE COLLATION test0\nFROM \"C\"'\n\n# Failed test 'only_dump_test_schema: should not dump REVOKE CREATE ON SCHEMA\npublic FROM public'\n\n\nRegards,", "msg_date": "Fri, 29 May 2020 15:13:42 +0200", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "pg_dump fail to not dump public schema orders" }, { "msg_contents": "On Friday, May 29, 2020, Adrien Nayrat <adrien.nayrat@anayrat.info> wrote:\n\n> Hello,\n>\n> I 
noticed pg_dump failed to not dump creation or comment commands for\n> public\n> schema when we explicitly ask it to dump public schema.\n>\n> Shorter example: pg_dump -n public dump will give:\n\n\n\n> [Create schema public....]\n>\n\nAs far as I can tell this is working as intended/documented. The public\nschema doesn’t and doesn’t and shouldn’t get special treatment relative to\nany other user schema here.\n\nDavid J.", "msg_date": "Fri, 29 May 2020 06:56:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump fail to not dump public schema orders" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Friday, May 29, 2020, Adrien Nayrat <adrien.nayrat@anayrat.info> wrote:\n>> I noticed pg_dump failed to not dump creation or comment commands for\n>> public schema when we explicitly ask it to dump public schema.\n\n> As far as I can tell this is working as intended/documented. The public\n> schema doesn’t and doesn’t and shouldn’t get special treatment relative to\n> any other user schema here.\n\nNote this is something we intentionally changed a little while ago\n(v11, looks like), along with a larger refactoring of pg_dump vs.\npg_dumpall. 
But yeah, public is not treated differently from other\nschemas anymore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 May 2020 10:40:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump fail to not dump public schema orders" }, { "msg_contents": "On 5/29/20 3:56 PM, David G. Johnston wrote:\n> On Friday, May 29, 2020, Adrien Nayrat <adrien.nayrat@anayrat.info\n> <mailto:adrien.nayrat@anayrat.info>> wrote:\n> \n> Hello,\n> \n> I noticed pg_dump failed to not dump creation or comment commands for public\n> schema when we explicitly ask it to dump public schema.\n> \n> Shorter example: pg_dump -n public dump will give:\n> \n>  \n> \n> [Create schema public....]\n> \n>  \n> As far as I can tell this is working as intended/documented.  The public schema\n> doesn’t and doesn’t and shouldn’t get special treatment relative to any other\n> user schema here.\n> \n\nI am not sure. See this comment from selectDumpableNamespace:\n\n/*\n * The public schema is a strange beast that sits in a sort of\n * no-mans-land between being a system object and a user object. We\n * don't want to dump creation or comment commands for it, because\n * that complicates matters for non-superuser use of pg_dump. But we\n * should dump any ACL changes that have occurred for it, and of\n * course we should dump contained objects.\n */\n\n\nFYI this behavior appeared with pg11. With pg10 you won't have \"CREATE SCHEMA\npublic\" orders.\n\nRegards,\n\n\n", "msg_date": "Fri, 29 May 2020 16:42:21 +0200", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "Re: pg_dump fail to not dump public schema orders" }, { "msg_contents": "On Fri, May 29, 2020 at 7:42 AM Adrien Nayrat <adrien.nayrat@anayrat.info>\nwrote:\n\n> On 5/29/20 3:56 PM, David G. 
Johnston wrote:\n> > On Friday, May 29, 2020, Adrien Nayrat <adrien.nayrat@anayrat.info\n> > <mailto:adrien.nayrat@anayrat.info>> wrote:\n> >\n> > Hello,\n> >\n> > I noticed pg_dump failed to not dump creation or comment commands\n> for public\n> > schema when we explicitly ask it to dump public schema.\n> >\n> > Shorter example: pg_dump -n public dump will give:\n> >\n> >\n> >\n> > [Create schema public....]\n> >\n> >\n> > As far as I can tell this is working as intended/documented. The public\n> schema\n> > doesn’t and doesn’t and shouldn’t get special treatment relative to any\n> other\n> > user schema here.\n> >\n>\n> I am not sure. See this comment from selectDumpableNamespace:\n>\n>\nThat comment doesn't apply to this situation as it is attached to an\nif/else branch that doesn't handle the \"-n\" option case.\n\n\n> FYI this behavior appeared with pg11. With pg10 you won't have \"CREATE\n> SCHEMA\n> public\" orders.\n>\n\nThat matches what Tom said.\nIt is indeed a behavior change and the commit that caused it to change\ndidn't change the documentation - so either the current behavior is a bug\nor the old documentation is wrong for failing to describe the old behavior\nsufficiently.\n\nI stand by my comment that the current behavior and documentation agree -\nit doesn't call out any special behavior for the public schema being\nspecified in \"-n\" and none is observed (now).\n\nI'm tending to believe that the code benefits that resulted in this change\nare sufficient to keep new behavior as-is and not go back and introduce\nspecial public schema handling code to get it back to the way things were.\nThe public schema has issues and at this point the only reason it should\nexist and be populated in a database is for learning or quick debugging.\nIts not worth breaking stuff to make that point more bluntly but if the\nnatural evolution of the code results in people either adapting or\nabandoning the public schema I find that to be an acceptable price 
for\nprogress.\n\nDavid J.", "msg_date": "Fri, 29 May 2020 10:40:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump fail to not dump public schema orders" }, { "msg_contents": "On 5/29/20 7:40 PM, David G. Johnston wrote:\n>  \n> \n> FYI this behavior appeared with pg11. With pg10 you won't have \"CREATE SCHEMA\n> public\" orders.\n> \n> \n> That matches what Tom said.\n> It is indeed a behavior change and the commit that caused it to change didn't\n> change the documentation - so either the current behavior is a bug or the old\n> documentation is wrong for failing to describe the old behavior sufficiently.\n\nYes, if it is expected it should me mentioned in release notes. As is, it is a\nregression.\n\nFYI, our continuous integration hit this issue: First we restore the schema and\nthen we apply migration step. We do this for every schema and this change broke\nthe initial restoration. It is weird that the restoration can fail on a clear\ndatabase.\n\n> \n> I stand by my comment that the current behavior and documentation agree - it\n> doesn't call out any special behavior for the public schema being specified in\n> \"-n\" and none is observed (now).\n> \n> I'm tending to believe that the code benefits that resulted in this change are\n> sufficient to keep new behavior as-is and not go back and introduce special\n> public schema handling code to get it back to the way things were.  The public\n> schema has issues and at this point the only reason it should exist and be\n> populated in a database is for learning or quick debugging.  
Its not worth\n> breaking stuff to make that point more bluntly but if the natural evolution of\n> the code results in people either adapting or abandoning the public schema I\n> find that to be an acceptable price for progress.\n\n\nExcuse me, but there is no mention that public schema exists for learning or\nquick debugging?\nhttps://www.postgresql.org/docs/11/ddl-schemas.html\n\nI am pretty sure most users use public schema and even postgres default database.\n\nRegards,\n\n\n", "msg_date": "Sat, 30 May 2020 10:23:30 +0200", "msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>", "msg_from_op": true, "msg_subject": "Re: pg_dump fail to not dump public schema orders" } ]
[ { "msg_contents": "Hi\n\none my customer has to specify dumped tables name by name. After years and\nincreasing database size and table numbers he has problem with too short\ncommand line. He need to read the list of tables from file (or from stdin).\n\nI wrote simple PoC patch\n\nComments, notes, ideas?\n\nRegards\n\nPavel", "msg_date": "Fri, 29 May 2020 16:21:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> one my customer has to specify dumped tables name by name. After years and\n> increasing database size and table numbers he has problem with too short\n> command line. He need to read the list of tables from file (or from stdin).\n\nI guess the question is why. That seems like an enormously error-prone\napproach. Can't they switch to selecting schemas? Or excluding the\nhopefully-short list of tables they don't want?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 May 2020 10:28:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 29. 5. 2020 v 16:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > one my customer has to specify dumped tables name by name. After years\n> and\n> > increasing database size and table numbers he has problem with too short\n> > command line. He need to read the list of tables from file (or from\n> stdin).\n>\n> I guess the question is why. That seems like an enormously error-prone\n> approach. Can't they switch to selecting schemas? Or excluding the\n> hopefully-short list of tables they don't want?\n>\n\nIt is not typical application. 
It is a analytic application when the schema\nof database is based on dynamic specification of end user (end user can do\ncustomization every time). So schema is very dynamic.\n\nFor example - typical server has about four thousand databases and every\ndatabase has some between 1K .. 10K tables.\n\nAnother specific are different versions of data in different tables. A user\ncan work with one set of data (one set of tables) and a application\nprepares new set of data (new set of tables). Load can be slow, because\nsometimes bigger tables are filled (about forty GB). pg_dump backups one\nset of tables (little bit like snapshot of data). So it is strange OLAP\n(but successfull) application.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Fri, 29 May 2020 16:47:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:\n> Hi\n> \n> one my customer has to specify dumped tables name by name. After years and\n> increasing database size and table numbers he has problem with too short\n> command line. He need to read the list of tables from file (or from stdin).\n> \n> I wrote simple PoC patch\n> \n> Comments, notes, ideas?\n\nThis seems like a handy addition. What I've done in cases similar to\nthis was to use `grep -f` on the output of `pg_dump -l` to create a\nfile suitable for `pg_dump -L`, or mash them together like this:\n\n    pg_restore -L <(pg_dump -l /path/to/dumpfile | grep -f /path/to/listfile) -d new_db /path/to/dumpfile\n\nThat's a lot of shell magic and obscure corners of commands to expect\npeople to use.\n\nWould it make sense to expand this patch to handle other objects?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Fri, 29 May 2020 17:28:57 +0200", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "David Fetter <david@fetter.org> writes:\n> Would it make sense to expand this patch to handle other objects?\n\nIf we're gonna do something like this, +1 for being more general.\nThe fact that pg_dump only has selection switches for tables and\nschemas has always struck me as an omission.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 May 2020 12:03:19 -0400", "msg_from": "Tom Lane 
<tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 29. 5. 2020 v 18:03 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> David Fetter <david@fetter.org> writes:\n> > Would it make sense to expand this patch to handle other objects?\n>\n\nSure. Just we should to design system (and names of options).\n\n\n>\n> If we're gonna do something like this, +1 for being more general.\n> The fact that pg_dump only has selection switches for tables and\n> schemas has always struck me as an omission.\n>\n\na implementation is trivial, hard is good design of names for these options.\n\nPavel\n\n\n\n\n>\n> regards, tom lane\n>", "msg_date": "Fri, 29 May 2020 18:14:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:\n> one my customer has to specify dumped tables name by name. After years and\n> increasing database size and table numbers he has problem with too short\n> command line. 
He need to read the list of tables from file (or from stdin).\n\n+1 - we would use this.\n\nWe put a regex (actually a pg_dump pattern) of tables to skip (timeseries\npartitions which are older than a few days and which are also dumped once not\nexpected to change, and typically not redumped). We're nowhere near the\nexecve() limit, but it'd be nice if the command was primarily a list of options\nand not a long regex.\n\nPlease also support reading from file for --exclude-table=pattern.\n\nI'm drawing a parallel between this and rsync --include/--exclude and --filter.\n\nWe'd be implementing a new --filter, which might have similar syntax to rsync\n(which I always forget).\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 29 May 2020 13:25:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npá 29. 5. 2020 v 20:25 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:\n> > one my customer has to specify dumped tables name by name. After years\n> and\n> > increasing database size and table numbers he has problem with too short\n> > command line. He need to read the list of tables from file (or from\n> stdin).\n>\n> +1 - we would use this.\n>\n> We put a regex (actually a pg_dump pattern) of tables to skip (timeseries\n> partitions which are older than a few days and which are also dumped once\n> not\n> expected to change, and typically not redumped). 
We're nowhere near the\n> execve() limit, but it'd be nice if the command was primarily a list of\n> options\n> and not a long regex.\n>\n> Please also support reading from file for --exclude-table=pattern.\n>\n> I'm drawing a parallel between this and rsync --include/--exclude and\n> --filter.\n>\n> We'd be implementing a new --filter, which might have similar syntax to\n> rsync\n> (which I always forget).\n>\n\nI implemented support for all \"repeated\" pg_dump options.\n\n--exclude-schemas-file=FILENAME\n--exclude-tables-data-file=FILENAME\n--exclude-tables-file=FILENAME\n--include-foreign-data-file=FILENAME\n--include-schemas-file=FILENAME\n--include-tables-file=FILENAME\n\nRegards\n\nPavel\n\nI invite any help with doc. There is just very raw text\n\n\n\n> --\n> Justin\n>", "msg_date": "Mon, 8 Jun 2020 19:18:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jun 08, 2020 at 07:18:49PM +0200, Pavel Stehule wrote:\n> pá 29. 5. 2020 v 20:25 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:\n> > > one my customer has to specify dumped tables name by name. After years and\n> > > increasing database size and table numbers he has problem with too short\n> > > command line. He need to read the list of tables from file (or from stdin).\n> >\n> > +1 - we would use this.\n> >\n> > We put a regex (actually a pg_dump pattern) of tables to skip (timeseries\n> > partitions which are older than a few days and which are also dumped once not\n> > expected to change, and typically not redumped). 
We're nowhere near the\n> > execve() limit, but it'd be nice if the command was primarily a list of options\n> > and not a long regex.\n> >\n> > Please also support reading from file for --exclude-table=pattern.\n> >\n> > I'm drawing a parallel between this and rsync --include/--exclude and\n> > --filter.\n> >\n> > We'd be implementing a new --filter, which might have similar syntax to rsync\n> > (which I always forget).\n> \n> I implemented support for all \"repeated\" pg_dump options.\n> \n> I invite any help with doc. There is just very raw text\n> \n> + Do not dump data of tables spefified in file.\n\n*specified\n\nI still wonder if a better syntax would use a unified --filter option, whose\nargument would allow including/excluding any type of object:\n\n+[tn] include (t)table/(n)namespace/...\n-[tn] exclude (t)table/(n)namespace/...\n\nIn the past, I looked for a way to exclude extended stats objects, and ended up\nusing a separate schema. An \"extensible\" syntax might be better (although\nreading a file of just patterns has the advantage that the function can just be\ncalled once for each option for each type of object).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 8 Jun 2020 16:29:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 8. 6. 2020 v 23:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Jun 08, 2020 at 07:18:49PM +0200, Pavel Stehule wrote:\n> > pá 29. 5. 2020 v 20:25 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> >\n> > > On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:\n> > > > one my customer has to specify dumped tables name by name. After\n> years and\n> > > > increasing database size and table numbers he has problem with too\n> short\n> > > > command line. 
He need to read the list of tables from file (or from\n> stdin).\n> > >\n> > > +1 - we would use this.\n> > >\n> > > We put a regex (actually a pg_dump pattern) of tables to skip\n> (timeseries\n> > > partitions which are older than a few days and which are also dumped\n> once not\n> > > expected to change, and typically not redumped). We're nowhere near\n> the\n> > > execve() limit, but it'd be nice if the command was primarily a list\n> of options\n> > > and not a long regex.\n> > >\n> > > Please also support reading from file for --exclude-table=pattern.\n> > >\n> > > I'm drawing a parallel between this and rsync --include/--exclude and\n> > > --filter.\n> > >\n> > > We'd be implementing a new --filter, which might have similar syntax\n> to rsync\n> > > (which I always forget).\n> >\n> > I implemented support for all \"repeated\" pg_dump options.\n> >\n> > I invite any help with doc. There is just very raw text\n> >\n> > + Do not dump data of tables spefified in file.\n>\n> *specified\n>\n>\nI am sending updated version - now with own implementation GNU (not POSIX)\nfunction getline\n\nI still wonder if a better syntax would use a unified --filter option, whose\n> argument would allow including/excluding any type of object:\n>\n> +[tn] include (t)table/(n)namespace/...\n> -[tn] exclude (t)table/(n)namespace/...\n>\n> In the past, I looked for a way to exclude extended stats objects, and\n> ended up\n> using a separate schema. 
An \"extensible\" syntax might be better (although\n> reading a file of just patterns has the advantage that the function can\n> just be\n> called once for each option for each type of object).\n>\n\nI tried to implement simple format \"[+-][tndf] objectname\"\n\nplease, check attached patch\n\nRegards\n\nPavel\n\n\n\n>\n> --\n> Justin\n>", "msg_date": "Tue, 9 Jun 2020 11:46:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Tue, Jun 09, 2020 at 11:46:24AM +0200, Pavel Stehule wrote:\n> po 8. 6. 2020 v 23:30 odes�latel Justin Pryzby <pryzby@telsasoft.com> napsal:\n\n> I still wonder if a better syntax would use a unified --filter option, whose\n> > argument would allow including/excluding any type of object:\n> I tried to implement simple format \"[+-][tndf] objectname\"\n\nThanks.\n\n> +\t\t\t\t\t\t/* ignore empty rows */\n> +\t\t\t\t\t\tif (*line != '\\0')\n\nMaybe: if line=='\\0': continue\n\nWe should also support comments.\n\n> +\t\t\t\t\t\t\tbool\t\tinclude_filter = false;\n> +\t\t\t\t\t\t\tbool\t\texclude_filter = false;\n\nI think we only need one bool.\nYou could call it: bool is_exclude = false\n\n> +\n> +\t\t\t\t\t\t\tif (chars < 4)\n> +\t\t\t\t\t\t\t\tinvalid_filter_format(optarg, line, lineno);\n\nI think that check is too lax.\nI think it's ok if we require the first char to be [-+] and the 2nd char to be\n[dntf]\n\n> +\t\t\t\t\t\t\tobjecttype = line[1];\n\n... but I think this is inadequately \"liberal in what it accepts\"; I think it\nshould skip spaces. 
In my proposed scheme, someone might reasonably write:\n\n> +\n> +\t\t\t\t\t\t\tobjectname = &line[3];\n> +\n> +\t\t\t\t\t\t\t/* skip initial spaces */\n> +\t\t\t\t\t\t\twhile (*objectname == ' ')\n> +\t\t\t\t\t\t\t\tobjectname++;\n\nI suggest to use isspace()\n\nI think we should check that *objectname != '\\0', rather than chars>=4, above.\n\n> +\t\t\t\t\t\t\t\tif (include_filter)\n> +\t\t\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\t\t\tsimple_string_list_append(&table_include_patterns, objectname);\n> +\t\t\t\t\t\t\t\t\tdopt.include_everything = false;\n> +\t\t\t\t\t\t\t\t}\n> +\t\t\t\t\t\t\t\telse if (exclude_filter)\n> +\t\t\t\t\t\t\t\t\tsimple_string_list_append(&table_exclude_patterns, objectname);\n\nIf you use bool is_exclude, then this becomes \"else\" and you don't need to\nthink about checking if (!include && !exclude).\n\n> +\t\t\t\t\t\t\telse if (objecttype == 'f')\n> +\t\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\t\tif (include_filter)\n> +\t\t\t\t\t\t\t\t\tsimple_string_list_append(&foreign_servers_include_patterns, objectname);\n> +\t\t\t\t\t\t\t\telse if (exclude_filter)\n> +\t\t\t\t\t\t\t\t\tinvalid_filter_format(optarg, line, lineno);\n> +\t\t\t\t\t\t\t}\n\nI would handle invalid object types as \"else: invalid_filter_format()\" here,\nrather than duplicating above as: !=ALL('d','n','t','f')\n\n> +\n> +\t\t\t\t\tif (ferror(f))\n> +\t\t\t\t\t\tfatal(\"could not read from file \\\"%s\\\": %s\",\n> +\t\t\t\t\t\t\t f == stdin ? \"stdin\" : optarg,\n> +\t\t\t\t\t\t\t strerror(errno));\n\nI think we're allowed to use %m here ?\n\n> +\tprintf(_(\" --filter=FILENAME read object names from file\\n\"));\n\nObject name filter expression, or something..\n\n> + * getline is originaly GNU function, and should not be everywhere still.\noriginally\n\n> + * Use own reduced implementation.\n\nDid you \"reduce\" this from another implementation? Where?\nWhat is its license ?\n\nMaybe a line-reader already exists in the frontend (?) .. 
or maybe it should.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 9 Jun 2020 17:30:42 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Tue, Jun 09, 2020 at 11:46:24AM +0200, Pavel Stehule wrote:\n> > po 8. 6. 2020 v 23:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n>\n> > I still wonder if a better syntax would use a unified --filter option,\n> whose\n> > > argument would allow including/excluding any type of object:\n> > I tried to implement simple format \"[+-][tndf] objectname\"\n>\n\nI had another idea about format - instead using +-, we can use case\nsensitive options same to pg_dump command line (with extending Df -\nbecause these options doesn't exists in short form)\n\nSo format can looks like\n\n[tTnNDf] {objectname}\n\nWhat do you think about this? This format is simpler, and it can work. What\ndo you think about it?\n\n> Did you \"reduce\" this from another implementation? Where?\n> What is its license ?\n\nThe code is 100% mine. It is not copy from gnulib and everybody can simply\ncheck it\n\nhttps://code.woboq.org/userspace/glibc/stdio-common/getline.c.html\nhttps://code.woboq.org/userspace/glibc/libio/iogetdelim.c.html#_IO_getdelim\n\nReduced in functionality sense. There is no full argument check that is\nnecessary for glibc functions. There are no memory checks because\npg_malloc, pg_realloc are used.\n\n", "msg_date": "Wed, 10 Jun 2020 05:03:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Jun 10, 2020 at 05:03:49AM +0200, Pavel Stehule wrote:\n> st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Tue, Jun 09, 2020 at 11:46:24AM +0200, Pavel Stehule wrote:\n> > > po 8. 6.
2020 v 23:30 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> >\n> > > I still wonder if a better syntax would use a unified --filter option, whose\n> > > > argument would allow including/excluding any type of object:\n> > > I tried to implement simple format \"[+-][tndf] objectname\"\n> \n> I had another idea about format - instead using +-, we can use case\n> sensitive options same to pg_dump command line (with extending Df -\n> because these options doesn't exists in short form)\n> \n> So format can looks like\n> \n> [tTnNDf] {objectname}\n> \n> What do you think about this? This format is simpler, and it can work. What\n> do you think about it?\n\nI prefer [-+][dtnf], which is similar to rsync --filter, and clear what it's\ndoing. I wouldn't put much weight on what the short options are.\n\nI wonder if some people would want to be able to use *long* or short options:\n\n-table foo\n+schema baz\n\nOr maybe:\n\nexclude-table=foo\nschema=bar\n\nSome tools use \"long options without leading dashes\" as their configuration\nfile format. Examples: openvpn, mysql. So that could be a good option.\nOTOH, there's only a few \"keys\", so I'm not sure how many people would want to\nrepeat them, if there's enough to bother putting them in the file rather than\nthe cmdline.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 9 Jun 2020 22:57:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Tue, Jun 09, 2020 at 11:46:24AM +0200, Pavel Stehule wrote:\n> > po 8. 6.
2020 v 23:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n>\n> > I still wonder if a better syntax would use a unified --filter option,\n> whose\n> > > argument would allow including/excluding any type of object:\n> > I tried to implement simple format \"[+-][tndf] objectname\"\n>\n> Thanks.\n>\n> > + /* ignore empty rows */\n> > + if (*line != '\\0')\n>\n> Maybe: if line=='\\0': continue\n>\n\nok\n\n\n> We should also support comments.\n>\n> > + bool\n> include_filter = false;\n> > + bool\n> exclude_filter = false;\n>\n> I think we only need one bool.\n> You could call it: bool is_exclude = false\n>\n>\nok\n\n> +\n> > + if (chars < 4)\n> > +\n> invalid_filter_format(optarg, line, lineno);\n>\n> I think that check is too lax.\n> I think it's ok if we require the first char to be [-+] and the 2nd char\n> to be\n> [dntf]\n>\n\n> > + objecttype =\n> line[1];\n>\n> ... but I think this is inadequately \"liberal in what it accepts\"; I think\n> it\n> should skip spaces. In my proposed scheme, someone might reasonably write:\n>\n> > +\n> > + objectname =\n> &line[3];\n> > +\n> > + /* skip initial\n> spaces */\n> > + while (*objectname\n> == ' ')\n> > +\n> objectname++;\n>\n> I suggest to use isspace()\n>\n\nok\n\n\n> I think we should check that *objectname != '\\0', rather than chars>=4,\n> above.\n>\n\ndone\n\n\n> > + if\n> (include_filter)\n> > + {\n> > +\n> simple_string_list_append(&table_include_patterns, objectname);\n> > +\n> dopt.include_everything = false;\n> > + }\n> > + else if\n> (exclude_filter)\n> > +\n> simple_string_list_append(&table_exclude_patterns, objectname);\n>\n> If you use bool is_exclude, then this becomes \"else\" and you don't need to\n> think about checking if (!include && !exclude).\n>\n> > + else if\n> (objecttype == 'f')\n> > + {\n> > + if\n> (include_filter)\n> > +\n> simple_string_list_append(&foreign_servers_include_patterns, objectname);\n> > + else if\n> (exclude_filter)\n> > +\n> invalid_filter_format(optarg, line, 
lineno);\n> > + }\n>\n> I would handle invalid object types as \"else: invalid_filter_format()\"\n> here,\n> rather than duplicating above as: !=ALL('d','n','t','f')\n>\n\ngood idea\n\n\n> > +\n> > + if (ferror(f))\n> > + fatal(\"could not read from\n> file \\\"%s\\\": %s\",\n> > + f == stdin ?\n> \"stdin\" : optarg,\n> > + strerror(errno));\n>\n> I think we're allowed to use %m here ?\n>\n\nchanged\n\n\n> > + printf(_(\" --filter=FILENAME read object names from\n> file\\n\"));\n>\n> Object name filter expression, or something..\n>\n\nyes, it is not object names now\n\n\n> > + * getline is originaly GNU function, and should not be everywhere\n> still.\n> originally\n>\n> > + * Use own reduced implementation.\n>\n> Did you \"reduce\" this from another implementation? Where?\n> What is its license ?\n>\n> Maybe a line-reader already exists in the frontend (?) .. or maybe it\n> should.\n>\n\neverywhere else is used a function fgets. Currently pg_getline is used just\non only one place, so I don't think so moving it to some common part is\nmaybe premature.\n\nMaybe it can be used as replacement of some fgets calls, but then it is\ndifferent topic, I think.\n\nThank you for comments, attached updated patch\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>", "msg_date": "Thu, 11 Jun 2020 09:36:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Jun 11, 2020 at 1:07 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Thank you for comments, attached updated patch\n>\n\nFew comments:\n+invalid_filter_format(char *message, char *filename, char *line, int lineno)\n+{\n+ char *displayname;\n+\n+ displayname = *filename == '-' ? 
\"stdin\" : filename;\n+\n+ pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n+ displayname,\n+ message);\n+\n+ fprintf(stderr, \"%d: %s\\n\", lineno, line);\n+ exit_nicely(1);\n+}\nI think fclose is missing here.\n\n+ if (line[chars - 1] == '\\n')\n+ line[chars - 1] = '\\0';\nShould we check for '\\r' also to avoid failures in some platforms.\n\n+ <varlistentry>\n+ <term><option>--filter=<replaceable\nclass=\"parameter\">filename</replaceable></option></term>\n+ <listitem>\n+ <para>\n+ Read filters from file. Format \"(+|-)(tnfd) objectname:\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nI felt some documentation is missing here. We could include,\noptions tnfd is for controlling table, schema, foreign server data &\ntable exclude patterns.\n\nInstead of using tnfd, if we could use the same options as existing\npg_dump options it will be less confusing.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 18:25:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Jun 11, 2020 at 09:36:18AM +0200, Pavel Stehule wrote:\n> st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> > > > + /* ignore empty rows */\n> > > > + if (*line != '\\0')\n> >\n> > Maybe: if line=='\\0': continue\n> > We should also support comments.\n\nComment support is still missing but easily added :)\n\nI tried this patch and it works for my purposes.\n\nAlso, your getline is dynamically re-allocating lines of arbitrary length.\nPossibly that's not needed. We'll typically read \"+t schema.relname\", which is\n132 chars.
Maybe it's sufficient to do\nchar buf[1024];\nfgets(buf);\nif strchr(buf, '\\n') == NULL: error();\nret = pstrdup(buf);\n\nIn any case, you could have getline return a char* and (rather than following\nGNU) no need to take char**, int* parameters to conflate inputs and outputs.\n\nI realized that --filter has an advantage over the previous implementation\n(with multiple --exclude-* and --include-*) in that it's possible to use stdin\nfor includes *and* excludes.\n\nBy chance, I had the opportunity yesterday to re-use with rsync a regex that\nI'd previously been using with pg_dump and grep. What this patch calls\n\"--filter\" in rsync is called \"--filter-from\". rsync's --filter-from rejects\nfilters of length longer than max filename, so I had to split it up into\nmultiple lines instead of using regex alternation (\"|\"). This option is a\nclose parallel in pg_dump.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 1 Jul 2020 16:24:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "so 27. 6. 2020 v 14:55 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Thu, Jun 11, 2020 at 1:07 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > Thank you for comments, attached updated patch\n> >\n>\n> Few comments:\n> +invalid_filter_format(char *message, char *filename, char *line, int\n> lineno)\n> +{\n> + char *displayname;\n> +\n> + displayname = *filename == '-' ? 
\"stdin\" : filename;\n> +\n> + pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> + displayname,\n> + message);\n> +\n> + fprintf(stderr, \"%d: %s\\n\", lineno, line);\n> + exit_nicely(1);\n> +}\n>\nI think fclose is missing here.\n>\n\ndone\n\n\n>\n> + if (line[chars - 1] ==\n> '\\n')\n> + line[chars - 1] =\n> '\\0';\n> Should we check for '\\r' also to avoid failures in some platforms.\n>\n\nI checked other usage of fgets in Postgres source code, and everywhere is\nused test on \\n\n\nWhen I did some fast research, then\nhttps://stackoverflow.com/questions/12769289/carriage-return-by-fgets \\r in\nthis case should be thrown by libc on Microsoft\n\nhttps://stackoverflow.com/questions/2061334/fgets-linux-vs-mac\n\n\\n should be on Mac OS X .. 2001 year .. I am not sure if Mac OS 9 should\nbe supported.\n\n\n\n\n>\n> + <varlistentry>\n> + <term><option>--filter=<replaceable\n> class=\"parameter\">filename</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Read filters from file. Format \"(+|-)(tnfd) objectname:\n> + </para>\n> + </listitem>\n> + </varlistentry>\n>\n> I felt some documentation is missing here. We could include,\n> options tnfd is for controlling table, schema, foreign server data &\n> table exclude patterns.\n>\n\nI have a plan to completate doc when the design is completed. It was not\nclear if people prefer long or short forms of option names.\n\n\n> Instead of using tnfd, if we could use the same options as existing\n> pg_dump options it will be less confusing.\n>\n\nit almost same\n\n+-t .. tables\n+-n schema\n-d exclude data .. there is not short option for --exclude-table-data\n+f include foreign table .. 
there is not short option for\n--include-foreign-data\n\nSo still, there is a opened question if use +-tnfd system, or system based\non long option\n\ntable foo\nexclude-table foo\nschema xx\nexclude-schema xx\ninclude-foreign-data yyy\nexclude-table-data zzz\n\n\nTypically these files will be generated by scripts and processed via pipe,\nso there I see just two arguments for and aginst:\n\nshort format - there is less probability to do typo error (but there is not\nfull consistency with pg_dump options)\nlong format - it is self documented (and there is full consistency with\npg_dump)\n\nIn this case I prefer short form .. it is more comfortable for users, and\nthere are only a few variants, so it is not necessary to use too verbose\nlanguage (design). But my opinion is not aggressively strong and I'll\naccept any common agreement.\n\nRegards\n\nUpdated patch attached\n\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Sun, 5 Jul 2020 21:50:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 1. 7. 2020 v 23:24 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Thu, Jun 11, 2020 at 09:36:18AM +0200, Pavel Stehule wrote:\n> > st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> > > > + /* ignore empty rows */\n> > > > + if (*line != '\\0')\n> > >\n> > > Maybe: if line=='\\0': continue\n> > > We should also support comments.\n>\n> Comment support is still missing but easily added :)\n>\n> I tried this patch and it works for my purposes.\n>\n> Also, your getline is dynamically re-allocating lines of arbitrary length.\n> Possibly that's not needed. We'll typically read \"+t schema.relname\",\n> which is\n> 132 chars. 
Maybe it's sufficient to do\n> char buf[1024];\n> fgets(buf);\n> if strchr(buf, '\\n') == NULL: error();\n> ret = pstrdup(buf);\n>\n\n63 bytes is max effective identifier size, but it is not max size of\nidentifiers. It is very probably so buff with 1024 bytes will be enough for\nall, but I do not want to increase any new magic limit. More when dynamic\nimplementation is not too hard.\n\nTable name can be very long - sometimes the data names (table names) can be\nstored in external storages with full length and should not be practical to\nrequire truncating in filter file.\n\nFor this case it is very effective, because a resized (increased) buffer is\nused for following rows, so realloc should not be often. So when I have to\nchoose between two implementations with similar complexity, I prefer more\ndynamic code without hardcoded limits. This dynamic hasn't any overhead.\n\n\n> In any case, you could have getline return a char* and (rather than\n> following\n> GNU) no need to take char**, int* parameters to conflate inputs and\n> outputs.\n>\n\nno, it has a special benefit. It eliminates the short malloc/free cycle.\nWhen some lines are longer, then the buffer is increased (and limits), and\nfor other rows with same or less size is not necessary realloc.\n\n\n> I realized that --filter has an advantage over the previous implementation\n> (with multiple --exclude-* and --include-*) in that it's possible to use\n> stdin\n> for includes *and* excludes.\n>\n\nyes, it looks like better choose\n\n\n> By chance, I had the opportunity yesterday to re-use with rsync a regex\n> that\n> I'd previously been using with pg_dump and grep. What this patch calls\n> \"--filter\" in rsync is called \"--filter-from\". rsync's --filter-from\n> rejects\n> filters of length longer than max filename, so I had to split it up into\n> multiple lines instead of using regex alternation (\"|\"). 
This option is a\n> close parallel in pg_dump.\n>\n\nwe can talk about option name - maybe \"--filter-from\" is better than just\n\"--filter\"\n\nRegards\n\nPavel\n\n\n\n\n>\n> --\n> Justin\n>\n", "msg_date": "Sun, 5 Jul 2020 22:08:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Jul 01, 2020 at 04:24:52PM -0500, Justin Pryzby wrote:\n> On Thu, Jun 11, 2020 at 09:36:18AM +0200, Pavel Stehule wrote:\n> > st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> > > > > +                                             /* ignore empty rows */\n> > > > > +                                             if (*line != '\\0')\n> > >\n> > > Maybe: if line=='\\0': continue\n> > > We should also support comments.\n> \n> Comment support is still missing but easily added :)\n\nStill missing from the latest patch.\n\nWith some added documentation, I think this can be RfC.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 5 Jul 2020 15:31:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 5. 7.
2020 v 22:31 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Wed, Jul 01, 2020 at 04:24:52PM -0500, Justin Pryzby wrote:\n> > On Thu, Jun 11, 2020 at 09:36:18AM +0200, Pavel Stehule wrote:\n> > > st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> > > > > +                                             /* ignore empty rows */\n> > > > > +                                             if (*line != '\\0')\n> > > >\n> > > > Maybe: if line=='\\0': continue\n> > > > We should also support comments.\n> >\n> > Comment support is still missing but easily added :)\n>\n> Still missing from the latest patch.\n>\n\nI can implement a comment support. But I am not sure about the format. The\nstart can be \"--\" or classic #.\n\nbut \"--\" can be in this context messy\n\n\n\n> With some added documentation, I think this can be RfC.\n>\n> --\n> Justin\n>\n", "msg_date": "Sun, 5 Jul 2020 22:37:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 5. 7. 2020 v 22:37 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 5. 7.
2020 v 22:31 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n>\n>> On Wed, Jul 01, 2020 at 04:24:52PM -0500, Justin Pryzby wrote:\n>> > On Thu, Jun 11, 2020 at 09:36:18AM +0200, Pavel Stehule wrote:\n>> > > st 10. 6. 2020 v 0:30 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n>> napsal:\n>> > > > > + /* ignore empty\n>> rows */\n>> > > > > + if (*line != '\\0')\n>> > > >\n>> > > > Maybe: if line=='\\0': continue\n>> > > > We should also support comments.\n>> >\n>> > Comment support is still missing but easily added :)\n>>\n>> Still missing from the latest patch.\n>>\n>\n> I can implement a comment support. But I am not sure about the format. The\n> start can be \"--\" or classic #.\n>\n> but \"--\" can be in this context messy\n>\n\nhere is support for comment's line - first char should be #\n\nRegards\n\nPavel\n\n\n>\n>\n>> With some added documentation, I think this can be RfC.\n>>\n>> --\n>> Justin\n>>\n>", "msg_date": "Mon, 6 Jul 2020 06:34:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 06, 2020 at 06:34:15AM +0200, Pavel Stehule wrote:\n> >> > Comment support is still missing but easily added :)\n> >> Still missing from the latest patch.\n> >\n> > I can implement a comment support. But I am not sure about the format. The\n> \n> here is support for comment's line - first char should be #\n\nThanks, that's how I assumed it would look.\n\n> >> With some added documentation, I think this can be RfC.\n\nDo you want to add any more documentation ? \n\nFew more things:\n\n> +exit_invalid_filter_format(FILE *fp, char *filename, char *message, char *line, int lineno)\n> +{\n> +\tpg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> +\t\t\t\t *filename == '-' ? 
\"stdin\" : filename,\n> +\t\t\t\t message);\n\nYou refer to as \"stdin\" any filename beginning with -.\n\nI think you could just print \"-\" and not \"stdin\".\nIn any case, *filename=='-' is wrong since it only checks filename[0].\nIn a few places you compare ==stdin (which is right).\n\nAlso, I think \"f\" isn't as good a name as \"fp\".\n\nYou're adding 139 lines to a 600 line main(), and no other option is more than\n15 lines. Would you put it in a separate function ?\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 11 Jul 2020 18:06:50 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 6, 2020 at 10:05 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> here is support for comment's line - first char should be #\n>\n\nFew comments:\n+ str = fgets(*lineptr + total_chars,\n+ *n - total_chars,\n+ fp);\n+\n+ if (ferror(fp))\n+ return -1;\n\nShould we include any error message in the above case.\n\n+ else\n+ break;\n+ }\n+\n+ if (ferror(fp))\n+ return -1;\n\nSimilar to above.\n\n+ /* check, if there is good enough space for\nnext content */\n+ if (*n - total_chars < 2)\n+ {\n+ *n += 1024;\n+ *lineptr = pg_realloc(*lineptr, *n);\n+ }\nWe could use a macro in place of 1024.\n\n+ if (objecttype == 't')\n+ {\n+ if (is_include)\n+ {\n+\nsimple_string_list_append(&table_include_patterns,\n+\n objectname);\n+\ndopt.include_everything = false;\n+ }\n+ else\n+\nsimple_string_list_append(&table_exclude_patterns,\n+\n objectname);\n+ }\n+ else if (objecttype == 'n')\n+ {\n+ if (is_include)\n+ {\n+\nsimple_string_list_append(&schema_include_patterns,\n+\n objectname);\n+\ndopt.include_everything = false;\n+ }\n+ else\n+\nsimple_string_list_append(&schema_exclude_patterns,\n+\n objectname);\n+ }\nSome of the above code is repetitive in above, can the common code be\nmade into a macro and called?\n\n printf(_(\" --extra-float-digits=NUM 
override default\nsetting for extra_float_digits\\n\"));\n+ printf(_(\" --filter=FILENAME read object name\nfilter expressions from file\\n\"));\n printf(_(\" --if-exists use IF EXISTS when\ndropping objects\\n\"));\nCan this be changed to dump objects and data based on the filter\nexpressions from the filter file.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 12 Jul 2020 07:13:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 12. 7. 2020 v 1:06 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Jul 06, 2020 at 06:34:15AM +0200, Pavel Stehule wrote:\n> > >> > Comment support is still missing but easily added :)\n> > >> Still missing from the latest patch.\n> > >\n> > > I can implement a comment support. But I am not sure about the format.\n> The\n> >\n> > here is support for comment's line - first char should be #\n>\n> Thanks, that's how I assumed it would look.\n>\n> > >> With some added documentation, I think this can be RfC.\n>\n> Do you want to add any more documentation ?\n>\n\ndone\n\n\n> Few more things:\n>\n> > +exit_invalid_filter_format(FILE *fp, char *filename, char *message,\n> char *line, int lineno)\n> > +{\n> > + pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> > + *filename == '-' ? \"stdin\" : filename,\n> > + message);\n>\n> You refer to as \"stdin\" any filename beginning with -.\n>\n> I think you could just print \"-\" and not \"stdin\".\n> In any case, *filename=='-' is wrong since it only checks filename[0].\n> In a few places you compare ==stdin (which is right).\n>\n\ndone\n\n\n> Also, I think \"f\" isn't as good a name as \"fp\".\n>\n\ndone\n\n\n> You're adding 139 lines to a 600 line main(), and no other option is more\n> than\n> 15 lines. 
Would you put it in a separate function ?\n>\n\ndone\n\nplease, check attached patch\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>", "msg_date": "Mon, 13 Jul 2020 08:15:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 12. 7. 2020 v 3:43 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Mon, Jul 6, 2020 at 10:05 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > here is support for comment's line - first char should be #\n> >\n>\n> Few comments:\n> + str = fgets(*lineptr + total_chars,\n> + *n - total_chars,\n> + fp);\n> +\n> + if (ferror(fp))\n> + return -1;\n>\n> Should we include any error message in the above case.\n>\n> + else\n> + break;\n> + }\n> +\n> + if (ferror(fp))\n> + return -1;\n>\n> Similar to above.\n>\n\nit should be ok, both variant finishing by\n\n<-->if (ferror(fp))\n<--><-->fatal(\"could not read from file \\\"%s\\\": %m\", filename);\n\n%m should to print related error message\n\n\n>\n> + /* check, if there is good enough space for\n> next content */\n> + if (*n - total_chars < 2)\n> + {\n> + *n += 1024;\n> + *lineptr = pg_realloc(*lineptr, *n);\n> + }\n> We could use a macro in place of 1024.\n>\n\ndone\n\n\n> + if (objecttype == 't')\n> + {\n> + if (is_include)\n> + {\n> +\n> simple_string_list_append(&table_include_patterns,\n> +\n> objectname);\n> +\n> dopt.include_everything = false;\n> + }\n> + else\n> +\n> simple_string_list_append(&table_exclude_patterns,\n> +\n> objectname);\n> + }\n> + else if (objecttype == 'n')\n> + {\n> + if (is_include)\n> + {\n> +\n> simple_string_list_append(&schema_include_patterns,\n> +\n> objectname);\n> +\n> dopt.include_everything = false;\n> + }\n> + else\n> +\n> simple_string_list_append(&schema_exclude_patterns,\n> +\n> objectname);\n> + }\n> Some of the above code is repetitive in above, can the common code be\n> made into a macro and 
called?\n>\n\nThere are two same fragments and two different fragments. In this case I\ndon't think so using macro or auxiliary function can help with readability.\nCurrent code is well structured and well readable.\n\n\n>\n> printf(_(\" --extra-float-digits=NUM override default\n> setting for extra_float_digits\\n\"));\n> + printf(_(\" --filter=FILENAME read object name\n> filter expressions from file\\n\"));\n> printf(_(\" --if-exists use IF EXISTS when\n> dropping objects\\n\"));\n> Can this be changed to dump objects and data based on the filter\n> expressions from the filter file.\n>\n\nI am sorry, I don't understand. This should work for data from specified by\nfilter without any modification.\n\nattached updated patch\n\nRegards\n\nPavel\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 13 Jul 2020 10:20:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 13 Jul 2020, at 10:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> attached updated patch\n\nSorry for jumping in late, but thinking about this extension to pg_dump:\ndoesn't it make more sense to use an existing file format like JSON for this,\ngiven that virtually all devops/cd/etc tooling know about JSON already?\n\nConsidering its application and the problem space, I'd expect users to generate\nthis file rather than handcraft it with 10 rows of content, and standard file\nformats help there. Creative users could even use the database itself to\neasily manage its content and generate the file (which isn't limited to JSON of\ncourse, but it would be easier). 
Also, we now have backup manifests in JSON\nwhich IMO sets a bit of a precedent, even though thats a separate thing.\n\nAt the very least it seems limiting to not include a file format version\nidentifier since we'd otherwise risk running into backwards compat issues\nshould we want to expand on this in the future.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 13 Jul 2020 12:04:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 13. 7. 2020 v 12:04 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 13 Jul 2020, at 10:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > attached updated patch\n>\n> Sorry for jumping in late, but thinking about this extension to pg_dump:\n> doesn't it make more sense to use an existing file format like JSON for\n> this,\n> given that virtually all devops/cd/etc tooling know about JSON already?\n>\n> Considering its application and the problem space, I'd expect users to\n> generate\n> this file rather than handcraft it with 10 rows of content, and standard\n> file\n> formats help there. Creative users could even use the database itself to\n> easily manage its content and generate the file (which isn't limited to\n> JSON of\n> course, but it would be easier). Also, we now have backup manifests in\n> JSON\n> which IMO sets a bit of a precedent, even though thats a separate thing.\n>\n> At the very least it seems limiting to not include a file format version\n> identifier since we'd otherwise risk running into backwards compat issues\n> should we want to expand on this in the future.\n>\n\nI like JSON format. But why here? For this purpose the JSON is over\nengineered. This input file has no nested structure - it is just a stream\nof lines.\n\nI don't think so introducing JSON here can be a good idea. 
For this feature\ntypical usage can be used in pipe, and the most simple format (what is\npossible) is ideal.\n\nIt is a really different case than pg_dump manifest file - in this case, in\nthis case pg_dump is consument.\n\nRegards\n\nPavel\n\n\n\n\n> cheers ./daniel\n>\n>\n", "msg_date": "Mon, 13 Jul 2020 13:02:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 13, 2020 at 1:51 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> Can this be changed to dump objects and data based on the filter\n>> expressions from the filter file.\n>\n>\n> I am sorry, I don't understand. This should work for data from specified by filter without any modification.\n>\nI meant can this:\nprintf(_(" --filter=FILENAME read object name filter\nexpressions from file\\n"));\nbe changed to:\nprintf(_(" --filter=FILENAME dump objects and data based\non the filter expressions from the filter file\\n"));\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 19:02:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:04:09PM +0200, Daniel Gustafsson wrote:\n> Sorry for jumping in late, but thinking about this extension to pg_dump:\n> doesn't it make more sense to use an existing file format like JSON for this,\n> given that virtually all devops/cd/etc tooling know about JSON already?\n> \n> Considering its application and the problem space, I'd expect users to generate\n> this file rather than handcraft it with 10 rows of content, and standard file\n> formats help there.\n\nI mentioned having tested this patch as we would use it. 
But it's likely I\n*wouldn't* use it if the format was something which required added complexity\nto pipe in from an existing shell script.\n\n> At the very least it seems limiting to not include a file format version\n> identifier since we'd otherwise risk running into backwards compat issues\n> should we want to expand on this in the future.\n\nMaybe .. I'm not sure. The patch would of course be extended to handle\nadditional include/exclude options. Is there any other future behavior we\nmight reasonably anticipate ?\n\nIf at some point we wanted to support another file format, maybe it would look\nlike: --format=v2:filters.txt (or maybe the old one would be v1:filters.txt)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Jul 2020 09:32:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 13 Jul 2020, at 13:02, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I like JSON format. But why here? For this purpose the JSON is over engineered.\n\nI respectfully disagree, JSON is a commonly used and known format in systems\nadministration and most importantly: we already have code to parse it in the\nfrontend.\n\n> This input file has no nested structure - it is just a stream of lines. \n\nWell, it has a set of object types which in turn have objects. There is more\nstructure than meets the eye.\n\nAlso, the current patch allows arbitrary whitespace before object names, but no\nwhitespace before comments etc. Using something where the rules of parsing are\nknown is rarely a bad thing.\n\n> I don't think so introducing JSON here can be a good idea.\n\nQuite possibly it isn't, but not discussing options seems like a worse idea so\nI wanted to bring it up.\n\n> It is a really different case than pg_dump manifest file - in this case, in this case pg_dump is consument. 
\n\nRight, as I said these are two different, while tangentially related, things.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 13 Jul 2020 16:57:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 13. 7. 2020 v 16:57 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 13 Jul 2020, at 13:02, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > I like JSON format. But why here? For this purpose the JSON is over\n> engineered.\n>\n> I respectfully disagree, JSON is a commonly used and known format in\n> systems\n> administration and most importantly: we already have code to parse it in\n> the\n> frontend.\n>\n\nI disagree with the idea so if we have a client side JSON parser we have\nto use it everywhere.\nFor this case, parsing JSON means more code, not less. I checked the\nparse_manifest.c. More\nthe JSON API is DOM type. For this purpose the SAX type is better. But\nstill, things should be simple as possible.\nThere is not any necessity to use it.\n\nJSON is good for a lot of purposes, and can be good if the document uses\nmore lexer types, numeric, ... But nothing is used there\n\n\n> > This input file has no nested structure - it is just a stream of lines.\n>\n> Well, it has a set of object types which in turn have objects. There is\n> more\n> structure than meets the eye.\n>\n\n> Also, the current patch allows arbitrary whitespace before object names,\n> but no\n> whitespace before comments etc. 
Using something where the rules of\n> parsing are\n> known is rarely a bad thing.\n>\n\nif I know - JSON hasn't comments at all.\n\n\n> > I don't think so introducing JSON here can be a good idea.\n>\n> Quite possibly it isn't, but not discussing options seems like a worse\n> idea so\n> I wanted to bring it up.\n>\n> > It is a really different case than pg_dump manifest file - in this case,\n> in this case pg_dump is consument.\n>\n> Right, as I said these are two different, while tangentially related,\n> things.\n>\n\nBackup manifest format has no trivial complexity - and using JSON has\nsense. Input filter file is a trivial - +/- list of strings (and it will be\neverytime).\n\nIn this case I don't see any benefits from JSON - on both sides (producent,\nconsuments). It is harder (little bit) to parse it, it is harder (little\nbit) to generate it.\n\nRegards\n\nPavel\n\n\n> cheers ./daniel\n", "msg_date": "Mon, 13 Jul 2020 17:33:38 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 13, 2020 at 08:15:42AM +0200, Pavel Stehule wrote:\n> > Do you want to add any more documentation ?\n> >\n> \n> done\n\nThanks - I think the documentation was maybe excessive. 
See attached.\n>\n\nI merged your patch - thank you\n\nnew patch with doc changes and text of help change requested by Vignesh\nattached\n\nRegards\n\nPavel\n\n\n\n> --\n> Justin\n>", "msg_date": "Tue, 14 Jul 2020 08:31:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 13. 7. 2020 v 15:33 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Mon, Jul 13, 2020 at 1:51 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> Can this be changed to dump objects and data based on the filter\n> >> expressions from the filter file.\n> >\n> >\n> > I am sorry, I don't understand. This should work for data from specified\n> by filter without any modification.\n> >\n> I meant can this:\n> printf(_(\" --filter=FILENAME read object name filter\n> expressions from file\\n\"));\n> be changed to:\n> printf(_(\" --filter=FILENAME dump objects and data based\n> on the filter expressions from the filter file\\n\"));\n>\n\ndone in today patch\n\nPavel\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\npo 13. 7. 2020 v 15:33 odesílatel vignesh C <vignesh21@gmail.com> napsal:On Mon, Jul 13, 2020 at 1:51 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> Can this be changed to dump objects and data based on the filter\n>> expressions from the filter file.\n>\n>\n> I am sorry, I don't understand. 
This should work for data from specified by filter without any modification.\n> >\n> I meant can this:\n> printf(_("  --filter=FILENAME            read object name filter\n> expressions from file\\n"));\n> be changed to:\n> printf(_("  --filter=FILENAME            dump objects and data based\n> on the filter expressions from the filter file\\n"));\n>\n\ndone in today patch\n\nPavel\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 14 Jul 2020 08:32:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Tue, Jul 14, 2020 at 12:03 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> I meant can this:\n>> printf(_(" --filter=FILENAME read object name filter\n>> expressions from file\\n"));\n>> be changed to:\n>> printf(_(" --filter=FILENAME dump objects and data based\n>> on the filter expressions from the filter file\\n"));\n>\n> done in today patch\n>\n\nThanks for fixing the comments.\nFew comments:\n+ /* use "-" as symbol for stdin */\n+ if (strcmp(filename, "-") != 0)\n+ {\n+ fp = fopen(filename, "r");\n+ if (!fp)\n+ fatal("could not open the input file \\"%s\\": %m",\n+ filename);\n+ }\n+ else\n+ fp = stdin;\n\nWe could use STDIN itself instead of -, it will be a more easier\noption to understand.\n\n+ /* when first char is hash, ignore whole line */\n+ if (*line == '#')\n+ continue;\n\nIf line starts with # we ignore that line, I feel this should be\nincluded in the documentation.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 25 Jul 2020 18:56:31 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Sat, Jul 25, 2020 at 06:56:31PM +0530, vignesh C wrote:\n> On Tue, Jul 14, 2020 at 12:03 PM Pavel Stehule <pavel.stehule@gmail.com> 
2020 v 15:26 odesílatel vignesh C <vignesh21@gmail.com> napsal:\n\n> On Tue, Jul 14, 2020 at 12:03 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >> I meant can this:\n> >> printf(_(\" --filter=FILENAME read object name filter\n> >> expressions from file\\n\"));\n> >> be changed to:\n> >> printf(_(\" --filter=FILENAME dump objects and data based\n> >> on the filter expressions from the filter file\\n\"));\n> >\n> > done in today patch\n> >\n>\n> Thanks for fixing the comments.\n> Few comments:\n> + /* use \"-\" as symbol for stdin */\n> + if (strcmp(filename, \"-\") != 0)\n> + {\n> + fp = fopen(filename, \"r\");\n> + if (!fp)\n> + fatal(\"could not open the input file \\\"%s\\\": %m\",\n> + filename);\n> + }\n> + else\n> + fp = stdin;\n>\n> We could use STDIN itself instead of -, it will be a more easier\n> option to understand.\n>\n> + /* when first char is hash, ignore whole line */\n> + if (*line == '#')\n> + continue;\n>\n> If line starts with # we ignore that line, I feel this should be\n> included in the documentation.\n>\n\n\nGood note - I wrote sentence to doc\n\n+ <para>\n+ The lines starting with symbol <literal>#</literal> are ignored.\n+ Previous white chars (spaces, tabs) are not allowed. These\n+ lines can be used for comments, notes.\n+ </para>\n+\n\n\n> Regards,\n> Vignesh\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 27 Jul 2020 07:25:54 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 26. 7. 
2020 v 21:10 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Jul 25, 2020 at 06:56:31PM +0530, vignesh C wrote:\n> > On Tue, Jul 14, 2020 at 12:03 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >> I meant can this:\n> > >> printf(_(\" --filter=FILENAME read object name filter\n> > >> expressions from file\\n\"));\n> > >> be changed to:\n> > >> printf(_(\" --filter=FILENAME dump objects and data based\n> > >> on the filter expressions from the filter file\\n\"));\n> > >\n> > > done in today patch\n> >\n> > Thanks for fixing the comments.\n> > Few comments:\n> > + /* use \"-\" as symbol for stdin */\n> > + if (strcmp(filename, \"-\") != 0)\n> > + {\n> > + fp = fopen(filename, \"r\");\n> > + if (!fp)\n> > + fatal(\"could not open the input file \\\"%s\\\": %m\",\n> > + filename);\n> > + }\n> > + else\n> > + fp = stdin;\n> >\n> > We could use STDIN itself instead of -, it will be a more easier\n> > option to understand.\n>\n> I think \"-\" is used widely for commandline tools, and STDIN is not (even\n> though\n> it's commonly used by programmers). For example, since last year,\n> pg_restore\n> -f - means stdout.\n>\n\nyes, STDIN is used by programming languages, but it is not usual in command\nline tools. And because it was used by pg_restore, then we should not use\nnew inconsistency.\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>\n\nne 26. 7. 
2020 v 21:10 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Jul 25, 2020 at 06:56:31PM +0530, vignesh C wrote:\n> > On Tue, Jul 14, 2020 at 12:03 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >> I meant can this:\n> > >> printf(_(" --filter=FILENAME read object name filter\n> > >> expressions from file\\n"));\n> > >> be changed to:\n> > >> printf(_(" --filter=FILENAME dump objects and data based\n> > >> on the filter expressions from the filter file\\n"));\n> > >\n> > > done in today patch\n> \n> Thanks for fixing the  comments.\n> Few comments:\n> + /* use "-" as symbol for stdin */\n> + if (strcmp(filename, "-") != 0)\n> + {\n> + fp = fopen(filename, "r");\n> + if (!fp)\n> + fatal("could not open the input file \\"%s\\": %m",\n> +   filename);\n> + }\n> + else\n> + fp = stdin;\n> \n> We could use STDIN itself instead of -, it will be a more easier\n> option to understand.\n\nI think "-" is used widely for commandline tools, and STDIN is not (even though\nit's commonly used by programmers).  For example, since last year, pg_restore\n-f - means stdout.\n\nyes, STDIN is used by programming languages, but it is not usual in command\nline tools. And because it was used by pg_restore, then we should not use\nnew inconsistency.\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>\n", "msg_date": "Mon, 27 Jul 2020 07:28:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Jul 27, 2020 at 07:25:54AM +0200, Pavel Stehule wrote:\n> so 25. 7. 
More when dynamic\n> implementation is not too hard.\n\nMaybe you'd want to use a StrInfo like recent patches (8f8154a50).\n\n> Table name can be very long - sometimes the data names (table names) can be\n> stored in external storages with full length and should not be practical to\n> require truncating in filter file.\n> \n> For this case it is very effective, because a resized (increased) buffer is\n> used for following rows, so realloc should not be often. So when I have to\n> choose between two implementations with similar complexity, I prefer more\n> dynamic code without hardcoded limits. This dynamic hasn't any overhead.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 3 Sep 2020 14:48:02 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 2020-Jul-27, Pavel Stehule wrote:\n\n> +/*\n> + * getline is originally GNU function, and should not be everywhere still.\n> + * Use own reduced implementation.\n> + */\n> +static size_t\n> +pg_getline(char **lineptr, size_t *n, FILE *fp)\n> +{\n\nSo, Tom added a coding pattern for doing this in commit 8f8154a503c7,\nwhich is ostensibly also to be used in pg_regress [1] -- maybe it'd be\nuseful to have this in src/common?\n\n[1] https://postgr.es/m/m_1NfbowTqSJnrC6rq1a9cQK7E-CHQE7B6Kz9w6fNH-OiV-4mcsdMw7UP2oA2_6dZmXvAMjbSPZjW9U7FD2R52D3d9DtaJxcBprsqJqZNBc=@protonmail.com\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 3 Sep 2020 16:08:42 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> So, Tom added a coding pattern for doing this in commit 8f8154a503c7,\n> which is ostensibly also to be used in pg_regress [1] 
-- maybe it'd be\n> useful to have this in src/common?\n\nDone, see pg_get_line() added by 67a472d71.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 20:15:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 4. 9. 2020 v 2:15 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > So, Tom added a coding pattern for doing this in commit 8f8154a503c7,\n> > which is ostensibly also to be used in pg_regress [1] -- maybe it'd be\n> > useful to have this in src/common?\n>\n> Done, see pg_get_line() added by 67a472d71.\n>\n\nHere is updated patch for pg_dump\n\nRegards\n\nPavel\n\n\n\n>\n> regards, tom lane\n>", "msg_date": "Fri, 4 Sep 2020 05:21:37 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 4. 9. 2020 v 5:21 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> pá 4. 9. 
2020 v 2:15 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> > So, Tom added a coding pattern for doing this in commit 8f8154a503c7,\n>> > which is ostensibly also to be used in pg_regress [1] -- maybe it'd be\n>> > useful to have this in src/common?\n>>\n>> Done, see pg_get_line() added by 67a472d71.\n>>\n>\n> Here is updated patch for pg_dump\n>\n\nanother update based on pg_get_line_append function\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>>\n>> regards, tom lane\n>>\n>", "msg_date": "Mon, 7 Sep 2020 07:08:18 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi Pavel\n\nOn Fri, Sep 4, 2020 at 6:22 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n> Here is updated patch for pg_dump\n>\n>\npg_dumpall also has –exclude-database=pattern and –no-comments option\ndoesn't that qualify it to benefits from this feature? And please add a\ntest case for this option\n\nregards\n\nSurafel\n\nHi PavelOn Fri, Sep 4, 2020 at 6:22 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:Here is updated patch for pg_dump\n\npg_dumpall\nalso has –exclude-database=pattern and –no-comments option\ndoesn't that qualify it to benefits from this feature? And please\nadd a test case for this option\nregards\nSurafel", "msg_date": "Mon, 7 Sep 2020 15:14:45 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npo 7. 9. 
2020 v 14:14 odesílatel Surafel Temesgen <surafel3000@gmail.com>\nnapsal:\n\n> Hi Pavel\n>\n> On Fri, Sep 4, 2020 at 6:22 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>> Here is updated patch for pg_dump\n>>\n>>\n> pg_dumpall also has –exclude-database=pattern and –no-comments option\n> doesn't that qualify it to benefits from this feature? And please add a\n> test case for this option\n>\n\nThis patch is related to pg_dump (in this moment), so pg_dumpall options\nare out of scope.\n\nI am not sure if pg_dumpall needs this functionality - maybe, but I can use\nbash or some similar for implementation of this feature. There is no\nrequirement to do it all necessary work under one transaction, one snapshot.\n\nFor pg_dump can be used different format, because it uses different\ngranularity. Some like \"{+/-} dbname\"\n\n\"--no-comments\" is a global parameter without arguments. I don't understand\nhow this parameter can be related to this feature?\n\nI am working on regress tests.\n\nRegards\n\nPavel\n\n\n> regards\n>\n> Surafel\n>\n>\n\nHipo 7. 9. 2020 v 14:14 odesílatel Surafel Temesgen <surafel3000@gmail.com> napsal:Hi PavelOn Fri, Sep 4, 2020 at 6:22 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:Here is updated patch for pg_dump\n\npg_dumpall\nalso has –exclude-database=pattern and –no-comments option\ndoesn't that qualify it to benefits from this feature? And please\nadd a test case for this optionThis patch is related to pg_dump (in this moment), so pg_dumpall options are out of scope.I am not sure if pg_dumpall needs this functionality - maybe, but I can use bash or some similar for implementation of this feature. There is no requirement to do it all necessary work under one transaction, one snapshot.For pg_dump can be used different format, because it uses different granularity.  Some like \"{+/-} dbname\"\"--no-comments\" is a global parameter without arguments. 
I don't understand how this parameter can be related to this feature?I am working on regress tests.RegardsPavel \nregards\nSurafel", "msg_date": "Fri, 11 Sep 2020 10:50:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npá 11. 9. 2020 v 10:50 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> po 7. 9. 2020 v 14:14 odesílatel Surafel Temesgen <surafel3000@gmail.com>\n> napsal:\n>\n>> Hi Pavel\n>>\n>> On Fri, Sep 4, 2020 at 6:22 AM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>>\n>>> Here is updated patch for pg_dump\n>>>\n>>>\n>> pg_dumpall also has –exclude-database=pattern and –no-comments option\n>> doesn't that qualify it to benefits from this feature? And please add a\n>> test case for this option\n>>\n>\n> This patch is related to pg_dump (in this moment), so pg_dumpall options\n> are out of scope.\n>\n> I am not sure if pg_dumpall needs this functionality - maybe, but I can\n> use bash or some similar for implementation of this feature. There is no\n> requirement to do it all necessary work under one transaction, one snapshot.\n>\n> For pg_dump can be used different format, because it uses different\n> granularity. Some like \"{+/-} dbname\"\n>\n> \"--no-comments\" is a global parameter without arguments. 
I don't\n> understand how this parameter can be related to this feature?\n>\n> I am working on regress tests.\n>\n\nThere is a updated version with regress tests\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>> regards\n>>\n>> Surafel\n>>\n>>\n>", "msg_date": "Sat, 12 Sep 2020 10:12:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nrebase + minor change - using pg_get_line_buf instead pg_get_line_append\n\nRegards\n\nPavel", "msg_date": "Thu, 24 Sep 2020 19:47:32 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> rebase + minor change - using pg_get_line_buf instead pg_get_line_append\n\nI started looking at this and went back through the thread and while I\ntend to agree that JSON may not be a good choice for this, it's not the\nonly possible alternative. There is no doubt that pg_dump is already a\nsophisticated data export tool, and likely to continue to gain new\nfeatures, such that having a configuration file for it would be very\nhandy, but this clearly isn't really going in a direction that would\nallow for that.\n\nPerhaps this feature could co-exist with a full blown configuration for\npg_dump, but even then there's certainly issues with what's proposed-\nhow would you handle explicitly asking for a table which is named \n\" mytable\" to be included or excluded? Or a table which has a newline\nin it? 
Using a standardized format which supports the full range of\nwhat we do in a table name, explicitly and clearly, would address these\nissues and also give us the flexibility to extend the options which\ncould be used through the configuration file beyond just the filters in\nthe future.\n\nUnlike for the pg_basebackup manifest, which we generate and read\nentirely programatically, a config file for pg_dump would almost\ncertainly be updated manually (or, at least, parts of it would be and\nperhaps other parts generated), which means it'd really be ideal to have\na proper way to support comments in it (something that the proposed\nformat also doesn't really get right- # must be the *first* character,\nand you can only have whole-line comments..?), avoid extra unneeded\npunctuation (or, at times, allow it- such as trailing commas in lists),\ncleanly handle multi-line strings (consider the oft discussed idea\naround having pg_dump support a WHERE clause for exporting data from\ntables...), etc.\n\nOverall, -1 from me on this approach. Maybe it could be fixed up to\nhandle all the different names of objects that we support today\n(something which, imv, is really a clear requirement for this feature to\nbe committed), but I suspect you'd end up half-way to yet another\nconfiguration format when we could be working to support something like\nTOML or maybe YAML... but if you want my 2c, TOML seems closer to what\nwe do for postgresql.conf and getting that over to something that's\nstandardized, while a crazy long shot, is a general nice idea, imv.\n\nThanks,\n\nStephen", "msg_date": "Tue, 10 Nov 2020 15:09:04 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nút 10. 11. 
2020 v 21:09 odesílatel Stephen Frost <sfrost@snowman.net>\nnapsal:\n\n> Greetings,\n>\n> * Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> > rebase + minor change - using pg_get_line_buf instead pg_get_line_append\n>\n> I started looking at this and went back through the thread and while I\n> tend to agree that JSON may not be a good choice for this, it's not the\n> only possible alternative. There is no doubt that pg_dump is already a\n> sophisticated data export tool, and likely to continue to gain new\n> features, such that having a configuration file for it would be very\n> handy, but this clearly isn't really going in a direction that would\n> allow for that.\n>\n> Perhaps this feature could co-exist with a full blown configuration for\n> pg_dump, but even then there's certainly issues with what's proposed-\n> how would you handle explicitly asking for a table which is named\n> \" mytable\" to be included or excluded? Or a table which has a newline\n> in it? Using a standardized format which supports the full range of\n> what we do in a table name, explicitly and clearly, would address these\n> issues and also give us the flexibility to extend the options which\n> could be used through the configuration file beyond just the filters in\n> the future.\n>\n\nThis is the correct argument - I will check a possibility to use strange\nnames, but there is the same possibility and functionality like we allow\nfrom the command line. So you can use double quoted names. 
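To make the quoting rule concrete, here is a minimal Python sketch of parsing one filter line of the form "[+-]t name". It is purely illustrative: the patch itself does this in C inside pg_dump, and the function name and simplified quoting rules below are assumptions, not the patch's actual behavior.

```python
# Illustrative sketch only; the real patch parses filter lines in C.
DQ = chr(34)  # the double-quote character

def parse_filter_line(line):
    line = line.strip()
    if not line or line.startswith('#'):
        return None                     # skip empty lines and comments
    op, rest = line[0], line[1:]
    if op not in '+-':
        raise ValueError('filter line must start with + or -')
    objtype, _, name = rest.lstrip().partition(' ')
    name = name.strip()
    # A name wrapped in double quotes may contain spaces, as on the
    # pg_dump command line.
    if len(name) >= 2 and name.startswith(DQ) and name.endswith(DQ):
        name = name[1:-1]
    return (op, objtype, name)
```

Under this sketch, a line such as +t mytable parses to ('+', 't', 'mytable'), and a double-quoted name keeps its inner spaces.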
I'll check it.\n\n\n> Unlike for the pg_basebackup manifest, which we generate and read\n> entirely programatically, a config file for pg_dump would almost\n> certainly be updated manually (or, at least, parts of it would be and\n> perhaps other parts generated), which means it'd really be ideal to have\n> a proper way to support comments in it (something that the proposed\n> format also doesn't really get right- # must be the *first* character,\n> and you can only have whole-line comments..?), avoid extra unneeded\n> punctuation (or, at times, allow it- such as trailing commas in lists),\n> cleanly handle multi-line strings (consider the oft discussed idea\n> around having pg_dump support a WHERE clause for exporting data from\n> tables...), etc.\n>\n\nI think the proposed feature is very far to be the config file for pg_dump\n(it implements a option \"--filter\"). This is not the target. It is not\ndesigned for this. This is just an alternative for options like -t, -T, ...\nand I am sure so nobody will generate this file manually. Main target of\nthis patch is eliminating problems with the max length of the command line.\nSo it is really not designed to be the config file for pg_dump.\n\n\n>\n> Overall, -1 from me on this approach. Maybe it could be fixed up to\n> handle all the different names of objects that we support today\n> (something which, imv, is really a clear requirement for this feature to\n> be committed), but I suspect you'd end up half-way to yet another\n> configuration format when we could be working to support something like\n> TOML or maybe YAML... but if you want my 2c, TOML seems closer to what\n> we do for postgresql.conf and getting that over to something that's\n> standardized, while a crazy long shot, is a general nice idea, imv.\n>\n\nI have nothing against TOML, but I don't see a sense of usage in this\npatch. This patch doesn't implement a config file for pg_dump, and I don't\nsee any sense or benefits of it. 
The TOML is designed for different\npurposes. TOML is good for manual creating, but it is not this case.\nTypical usage of this patch is some like, and TOML syntax (or JSON) is not\ngood for this.\n\npsql -c \"select '+t' || quote_ident(relname) from pg_class where relname\n...\" | pg_dump --filter=/dev/stdin\n\nI can imagine some benefits of saved configure files for postgres\napplications - but it should be designed generally and implemented\ngenerally. Probably you would use one for pg_dump, psql, pg_restore, ....\nBut it is a different feature with different usage. This patch doesn't\nimplement option \"--config\", it implements option \"--filter\".\n\nRegards\n\nPavel\n\n\n\n> Thanks,\n>\n> Stephen\n>", "msg_date": "Wed, 11 Nov 2020 06:32:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nPerhaps this feature could co-exist with a full blown configuration for\n>> pg_dump, but even then there's certainly issues with what's proposed-\n>> how would you handle explicitly asking for a table which is named\n>> \" mytable\" to be included or excluded? Or a table which has a newline\n>> in it? 
Using a standardized format which supports the full range of\n>> what we do in a table name, explicitly and clearly, would address these\n>> issues and also give us the flexibility to extend the options which\n>> could be used through the configuration file beyond just the filters in\n>> the future.\n>>\n>\n\nThis is the correct argument - I will check a possibility to use strange\n> names, but there is the same possibility and functionality like we allow\n> from the command line. So you can use double quoted names. I'll check it.\n>\n\nI checked\n\necho \"+t \\\"bad Name\\\"\" | /usr/local/pgsql/master/bin/pg_dump\n--filter=/dev/stdin\n\nIt is working without any problem\n\nRegards\n\nPavel", "msg_date": "Wed, 11 Nov 2020 06:49:43 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nOn Thu, Sep 24, 2020 at 19:47 Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> rebase + minor change - using pg_get_line_buf instead pg_get_line_append\n>\n>\nfresh rebase\n\n\nRegards\n>\n\n> Pavel\n>", "msg_date": "Wed, 11 Nov 2020 06:54:22 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Nov 11, 2020 at 6:32 Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> On Tue, Nov 10, 2020 at 21:09 Stephen Frost <sfrost@snowman.net>\n> wrote:\n>\n>> Greetings,\n>>\n>> * Pavel Stehule (pavel.stehule@gmail.com) wrote:\n>> > rebase + minor change - using pg_get_line_buf instead pg_get_line_append\n>>\n>> I started looking at this and went back through the thread and while I\n>> tend to agree that JSON may not be a good choice for this, it's not the\n>> only possible alternative. There is no doubt that pg_dump is already a\n>> sophisticated data export tool, and likely to continue to gain new\n>> features, such that having a configuration file for it would be very\n>> handy, but this clearly isn't really going in a direction that would\n>> allow for that.\n>>\n>> Perhaps this feature could co-exist with a full blown configuration for\n>> pg_dump, but even then there's certainly issues with what's proposed-\n>> how would you handle explicitly asking for a table which is named\n>> \" mytable\" to be included or excluded? Or a table which has a newline\n>> in it? 
Using a standardized format which supports the full range of\n>> what we do in a table name, explicitly and clearly, would address these\n>> issues and also give us the flexibility to extend the options which\n>> could be used through the configuration file beyond just the filters in\n>> the future.\n>>\n>\n> This is the correct argument - I will check a possibility to use strange\n> names, but there is the same possibility and functionality like we allow\n> from the command line. So you can use double quoted names. I'll check it.\n>\n>\n>> Unlike for the pg_basebackup manifest, which we generate and read\n>> entirely programatically, a config file for pg_dump would almost\n>> certainly be updated manually (or, at least, parts of it would be and\n>> perhaps other parts generated), which means it'd really be ideal to have\n>> a proper way to support comments in it (something that the proposed\n>> format also doesn't really get right- # must be the *first* character,\n>> and you can only have whole-line comments..?), avoid extra unneeded\n>> punctuation (or, at times, allow it- such as trailing commas in lists),\n>> cleanly handle multi-line strings (consider the oft discussed idea\n>> around having pg_dump support a WHERE clause for exporting data from\n>> tables...), etc.\n>>\n>\n> I think the proposed feature is very far to be the config file for pg_dump\n> (it implements a option \"--filter\"). This is not the target. It is not\n> designed for this. This is just an alternative for options like -t, -T, ...\n> and I am sure so nobody will generate this file manually. Main target of\n> this patch is eliminating problems with the max length of the command line.\n> So it is really not designed to be the config file for pg_dump.\n>\n>\n>>\n>> Overall, -1 from me on this approach. 
Maybe it could be fixed up to\n>> handle all the different names of objects that we support today\n>> (something which, imv, is really a clear requirement for this feature to\n>> be committed), but I suspect you'd end up half-way to yet another\n>> configuration format when we could be working to support something like\n>> TOML or maybe YAML... but if you want my 2c, TOML seems closer to what\n>> we do for postgresql.conf and getting that over to something that's\n>> standardized, while a crazy long shot, is a general nice idea, imv.\n>>\n>\n> I have nothing against TOML, but I don't see a sense of usage in this\n> patch. This patch doesn't implement a config file for pg_dump, and I don't\n> see any sense or benefits of it. The TOML is designed for different\n> purposes. TOML is good for manual creating, but it is not this case.\n> Typical usage of this patch is some like, and TOML syntax (or JSON) is not\n> good for this.\n>\n> psql -c \"select '+t' || quote_ident(relname) from pg_class where relname\n> ...\" | pg_dump --filter=/dev/stdin\n>\n> I can imagine some benefits of saved configure files for postgres\n> applications - but it should be designed generally and implemented\n> generally. Probably you would use one for pg_dump, psql, pg_restore, ....\n> But it is a different feature with different usage. This patch doesn't\n> implement option \"--config\", it implements option \"--filter\".\n>\n\nSome generic configuration for postgres binary applications is an\ninteresting idea. And TOML language can be well for this purpose. We can\nparametrize applications by command line and by system variables. But\nfiltering objects is a really different case - although there is some small\nintersection, and it will be used very differently, and I don't think so\none language can be practical for both cases. 
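The machine-generated usage quoted above (psql emitting one "+t name" line per object for pg_dump --filter) can be sketched in a few lines of Python as well. This is an illustrative assumption, not part of the patch, and the quoting condition below is a crude stand-in for PostgreSQL's quote_ident(), whose real rules are broader:

```python
# Illustrative sketch: emit one '+t <name>' filter line per table,
# double-quoting names that are not plain lower-case identifiers
# (a simplified stand-in for PostgreSQL's quote_ident()).
DQ = chr(34)

def filter_lines(tables):
    out = []
    for name in tables:
        plain = name.replace('_', '')
        if not plain.isalnum() or name != name.lower():
            name = DQ + name + DQ       # quote anything unusual
        out.append('+t ' + name)
    return '\n'.join(out)
```

The resulting text can be written to a file (or piped on stdin) and passed to pg_dump through the proposed --filter option.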
The object filtering is an\nindependent feature, and both features can coexist together.\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Thanks,\n>>\n>> Stephen\n>>\n>", "msg_date": "Wed, 11 Nov 2020 08:09:40 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 2020-Nov-11, Pavel Stehule wrote:\n\n> I think the proposed feature is very far to be the config file for pg_dump\n> (it implements a option \"--filter\"). This is not the target. It is not\n> designed for this. This is just an alternative for options like -t, -T, ...\n> and I am sure so nobody will generate this file manually. Main target of\n> this patch is eliminating problems with the max length of the command line.\n> So it is really not designed to be the config file for pg_dump.\n\nI agree that a replacement for existing command line arguments is a good\ngoal, but at the same time it's good to keep in mind the request that\nmore object types are supported as dumpable. 
While it's not necessary\nthat this infrastructure supports all object types in the first cut,\nit'd be good to have it support that. I would propose that instead of a\nsingle letter 't' etc we support keywords, maybe similar to those\nreturned by getObjectTypeDescription() (with additions -- for example\nfor \"table data\"). Then we can extend for other object types later\nwithout struggling to find good letters for each.\n\nOf course we could allow abbreviations for common cases, such that \"t\"\nmeans \"table\".\n\nFor example: it'll be useful to support selective dumping of functions,\nmaterialized views, foreign objects, etc.\n\n\n", "msg_date": "Wed, 11 Nov 2020 12:17:36 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Nov 11, 2020 at 16:17 Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2020-Nov-11, Pavel Stehule wrote:\n>\n> > I think the proposed feature is very far to be the config file for\n> pg_dump\n> > (it implements a option \"--filter\"). This is not the target. It is not\n> > designed for this. This is just an alternative for options like -t, -T,\n> ...\n> > and I am sure so nobody will generate this file manually. Main target of\n> > this patch is eliminating problems with the max length of the command\n> line.\n> > So it is really not designed to be the config file for pg_dump.\n>\n> I agree that a replacement for existing command line arguments is a good\n> goal, but at the same time it's good to keep in mind the request that\n> more object types are supported as dumpable. 
I would propose that instead of a\n> single letter 't' etc we support keywords, maybe similar to those\n> returned by getObjectTypeDescription() (with additions -- for example\n> for \"table data\"). Then we can extend for other object types later\n> without struggling to find good letters for each.\n>\n> Of course we could allow abbrevations for common cases, such that \"t\"\n> means \"table\".\n>\n> For example: it'll be useful to support selective dumping of functions,\n> materialized views, foreign objects, etc.\n>\n\nImplementation of this is trivial.\n\nThe hard work is mapping pg_dump options on database objects. t -> table is\nsimple, but n -> schema looks a little bit inconsistent - although it is\nconsistent with pg_dump. d or D - there is no system object like data. I am\nafraid so there are two independent systems - pg_dump options, and database\nobjects, and it can be hard or not very practical to join these systems.\nUnfortunately there is not good consistency in the short options of pg_dump\ntoday. More - a lot of object names are multi words with inner space. This\nis not too practical.\n\nWhat about supporting two syntaxes?\n\n1. first short current +-tndf filter - but the implementation should not be\nlimited to one char - there can be any string until space\n\n2. 
long syntax - all these pg_dump options has long options, and then we\ncan extend this feature without any problem in future\n\ntable|exclude-table|exclude-table-data|schema|exclude-schema|include-foreign-data=PATTERN\n\nso the content of filter file can looks like:\n\n+t mytable\n+t tabprefix*\n-t bigtable\n\ntable=mytable2\nexclude-table=mytable2\n\nThis format allows quick work for most common database objects, and it is\nextensible and consistent with pg_dump's long options.\n\nWhat do you think about it?\n\nPersonally, I am thinking that it is over-engineering a little bit, maybe\nwe can implement this feature just test first string after +- symbols\n(instead first char like now) - and any enhanced syntax can be implemented\nin future when there will be this requirement. Second syntax can be\nimplemented very simply, because it can be identified by first char\nprocessing. We can implement second syntax only too. It will work too, but\nI think so short syntax is more practical for daily work (for common\noptions). I expect so 99% percent of this objects will be \"+t tablename\".\n\n\nRegards\n\nPavel\n", "msg_date": "Thu, 12 Nov 2020 08:45:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Nov 11, 2020 at 06:49:43AM +0100, Pavel Stehule wrote:\n> Perhaps this feature could co-exist with a full blown configuration for\n> >> pg_dump, but even then there's certainly issues with what's proposed-\n> >> how would you handle explicitly asking for a table which is named\n> >> \" mytable\" to be included or excluded? Or a table which has a newline\n> >> in it? Using a standardized format which supports the full range of\n> >> what we do in a table name, explicitly and clearly, would address these\n> >> issues and also give us the flexibility to extend the options which\n> >> could be used through the configuration file beyond just the filters in\n> >> the future.\n\nI think it's a reasonable question - why would a new configuration file option\ninclude support for only a handful of existing arguments but not the rest.\n\n> > This is the correct argument - I will check a possibility to use strange\n> > names, but there is the same possibility and functionality like we allow\n> > from the command line. So you can use double quoted names. 
I'll check it.\n> \n> I checked\n> echo \"+t \\\"bad Name\\\"\" | /usr/local/pgsql/master/bin/pg_dump --filter=/dev/stdin\n> It is working without any problem\n\nI think it couldn't possibly work with newlines, since you call pg_get_line().\nI realize that entering a newline into the shell would also be a PITA, but that\ncould be one *more* reason to support a config file - to allow terrible table\nnames to be in a file and avoid writing dash tee quote something enter else\nquote in a pg_dump command, or shell script.\n\nI fooled with argument parsing to handle reading from a file in the quickest\nway. As written, this fails to handle multiple config files, and special table\nnames, which need to support arbitrary, logical lines, with quotes surrounding\nnewlines or other special chars. As written, the --config file is parsed\n*after* all other arguments, so it could override previous args (like\n--no-blobs --no-blogs, --file, --format, --compress, --lock-wait), which I\nguess is bad, so the config file should be processed *during* argument parsing.\nUnfortunately, I think that suggests duplicating parsing of all/most the\nargument parsing for config file support - I'd be happy if someone suggested a\nbetter way.\n\nBTW, in your most recent patch:\ns/empty rows/empty lines/\nunbalanced parens: \"invalid option type (use [+-]\"\n\n@cfbot: I renamed the patch so please ignore it.\n\n-- \nJustin", "msg_date": "Tue, 17 Nov 2020 15:53:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Wed, Nov 11, 2020 at 06:49:43AM +0100, Pavel Stehule wrote:\n> > >> Perhaps this feature could co-exist with a full blown configuration for\n> > >> pg_dump, but even then there's certainly issues with what's proposed-\n> > >> how would you handle explicitly asking for a table which is 
named\n> > >> \" mytable\" to be included or excluded? Or a table which has a newline\n> > >> in it? Using a standardized format which supports the full range of\n> > >> what we do in a table name, explicitly and clearly, would address these\n> > >> issues and also give us the flexibility to extend the options which\n> > >> could be used through the configuration file beyond just the filters in\n> > >> the future.\n> \n> I think it's a reasonable question - why would a new configuration file option\n> include support for only a handful of existing arguments but not the rest.\n\nEven if the first version of having a config file for pg_dump only\nsupported some options, that would be reasonable imv, but I dislike the\nidea of building it in such a way that it'll be awkward to add more\noptions to it in the future, something that I definitely think people\nwould like to see (I know I would...).\n\n> > > This is the correct argument - I will check a possibility to use strange\n> > > names, but there is the same possibility and functionality like we allow\n> > > from the command line. So you can use double quoted names. I'll check it.\n> > \n> > I checked\n> > echo \"+t \\\"bad Name\\\"\" | /usr/local/pgsql/master/bin/pg_dump --filter=/dev/stdin\n> > It is working without any problem\n> \n> I think it couldn't possibly work with newlines, since you call pg_get_line().\n\nYeah, I didn't really believe that it actually worked but hadn't had a\nchance to demonstrate that it didn't yet.\n\n> I realize that entering a newline into the shell would also be a PITA, but that\n> could be one *more* reason to support a config file - to allow terrible table\n> names to be in a file and avoid writing dash tee quote something enter else\n> quote in a pg_dump command, or shell script.\n\nAgreed.\n\n> I fooled with argument parsing to handle reading from a file in the quickest\n> way. 
As written, this fails to handle multiple config files, and special table\n> names, which need to support arbitrary, logical lines, with quotes surrounding\n> newlines or other special chars. As written, the --config file is parsed\n> *after* all other arguments, so it could override previous args (like\n> --no-blobs --no-blogs, --file, --format, --compress, --lock-wait), which I\n> guess is bad, so the config file should be processed *during* argument parsing.\n> Unfortunately, I think that suggests duplicating parsing of all/most the\n> argument parsing for config file support - I'd be happy if someone suggested a\n> better way.\n\nThis still feels like we're trying to quickly hack-and-slash at adding a\nconfig file option rather than thinking through what a sensible design\nfor a pg_dump config file would look like. Having a way to avoid having\nmultiple places in the code that has to handle all the possible options\nis a nice idea but, as I tried to allude to up-thread, I fully expect\nthat once we've got this config file capability that we're going to want\nto add things to it that would be difficult to utilize through the\ncommand-line and so I expect these code paths to diverge anyway.\n\nI would imagine something like:\n\n[connection]\ndb-host=whatever\ndb-port=5433\n...\n\n[output]\nowners = true\nprivileges = false\nformat = \"custom\"\nfile = \"myoutput.dump\"\n\n# This is a comment\n[include]\ntables = [ \"sometable\", \"table with spaces\",\n\"table with quoted \\\"\",\n\"\"\"this is my table\nwith a carriage return\"\"\", \"anothertable\" ]\ntable-patterns = [ \"table*\" ]\nschemas = [ \"myschema\" ]\n\n[exclude]\ntables = [ \"similar to include\" ]\nfunctions = [ \"somefunction(int)\" ]\n\netc, etc ...\n\nThanks,\n\nStephen", "msg_date": "Wed, 18 Nov 2020 09:46:30 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 17. 
11. 2020 v 22:53 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Wed, Nov 11, 2020 at 06:49:43AM +0100, Pavel Stehule wrote:\n> > Perhaps this feature could co-exist with a full blown configuration for\n> > >> pg_dump, but even then there's certainly issues with what's proposed-\n> > >> how would you handle explicitly asking for a table which is named\n> > >> \" mytable\" to be included or excluded? Or a table which has a\n> newline\n> > >> in it? Using a standardized format which supports the full range of\n> > >> what we do in a table name, explicitly and clearly, would address\n> these\n> > >> issues and also give us the flexibility to extend the options which\n> > >> could be used through the configuration file beyond just the filters\n> in\n> > >> the future.\n>\n> I think it's a reasonable question - why would a new configuration file\n> option\n> include support for only a handful of existing arguments but not the rest.\n>\n\nI don't see a strong technical problem - enhancing parsing is not hard\nwork, but I miss a use case for this. The option \"--filter\" tries to solve\na problem with limited command line size. This is a clean use case and\nthere and supported options are options that can be used repeatedly on the\ncommand line. Nothing less, nothing more. The format that is used is\ndesigned just for this purpose.\n\nWhen we would implement an alternative configuration to command line and\nsystem environments, then the use case should be defined first. When the\nuse case is defined, we can talk about implementation and about good\nformat. There are a lot of interesting formats, but I miss a reason why the\nusage of this alternative configuration can be helpful for pg_dump. Using\nexternal libraries for richer formats means a new dependency, necessity to\nsolve portability issues, and maybe other issues, and for this there should\nbe a good use case. 
Passing a list of tables for dumping doesn't need a\nrich format.\n\nI cannot imagine using a config file with generated object names and some\nother options together. Maybe if these configurations will not be too long\n(then handy written) configuration can be usable. But when I think about\nusing pg_dump from some bash scripts, then much more practical is using\nusual command line options and passing a list of objects by pipe. I really\nmiss the use case for special pg_dump's config file, and if there is, then\nit is very different from a use case for \"--filter\" option.\n\n\n> > > This is the correct argument - I will check a possibility to use\n> strange\n> > > names, but there is the same possibility and functionality like we\n> allow\n> > > from the command line. So you can use double quoted names. I'll check\n> it.\n> >\n> > I checked\n> > echo \"+t \\\"bad Name\\\"\" | /usr/local/pgsql/master/bin/pg_dump\n> --filter=/dev/stdin\n> > It is working without any problem\n>\n> I think it couldn't possibly work with newlines, since you call\n> pg_get_line().\n> I realize that entering a newline into the shell would also be a PITA, but\n> that\n> could be one *more* reason to support a config file - to allow terrible\n> table\n> names to be in a file and avoid writing dash tee quote something enter else\n> quote in a pg_dump command, or shell script.\n>\n\nNew patch is working with names that contains multilines\n\n[pavel@localhost postgresql.master]$ psql -At -X -c \"select '+t ' ||\nquote_ident(table_name) from information_schema.tables where table_name\nlike 'foo%'\"| /usr/local/pgsql/master/bin/pg_dump --filter=/dev/stdin\n--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 14devel\n-- Dumped by pg_dump version 14devel\n\n-\n-- Name: foo boo; Type: TABLE; Schema: public; Owner: pavel\n--\n\nCREATE TABLE public.\"foo\nboo\" (\n a integer\n);\n\n\nALTER TABLE public.\"foo\nboo\" OWNER TO pavel;\n\n--\n-- Data for Name: foo boo; Type: TABLE 
DATA; Schema: public; Owner: pavel\n--\n\nCOPY public.\"foo\nboo\" (a) FROM stdin;\n\\.\n\n\n--\n-- PostgreSQL database dump complete\n--\n\n\n> I fooled with argument parsing to handle reading from a file in the\n> quickest\n> way. As written, this fails to handle multiple config files, and special\n> table\n> names, which need to support arbitrary, logical lines, with quotes\n> surrounding\n> newlines or other special chars. As written, the --config file is parsed\n> *after* all other arguments, so it could override previous args (like\n> --no-blobs --no-blogs, --file, --format, --compress, --lock-wait), which I\n> guess is bad, so the config file should be processed *during* argument\n> parsing.\n> Unfortunately, I think that suggests duplicating parsing of all/most the\n> argument parsing for config file support - I'd be happy if someone\n> suggested a\n> better way.\n>\n> BTW, in your most recent patch:\n> s/empty rows/empty lines/\n> unbalanced parens: \"invalid option type (use [+-]\"\n>\n\nshould be fixed now, thank you for check\n\nRegards\n\nPavel\n\n\n\n> @cfbot: I renamed the patch so please ignore it.\n>\n> --\n> Justin\n>", "msg_date": "Thu, 19 Nov 2020 20:51:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": ">> BTW, in your most recent patch:\n>> s/empty rows/empty lines/\n>> unbalanced parens: \"invalid option type (use [+-]\"\n>>\n>\n> should be fixed now, thank you for check\n>\n\nminor update - fixed handling of processing names with double quotes inside\n\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> @cfbot: I renamed the patch so please ignore it.\n>>\n>> --\n>> Justin\n>>\n>", "msg_date": "Thu, 19 Nov 2020 20:57:01 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, 
19 Nov 2020 at 19:57, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> minor update - fixed handling of processing names with double quotes inside\n>\n\nI see this is marked RFC, but reading the thread it doesn't feel like\nwe have reached consensus on the design for this feature.\n\nI agree that being able to configure pg_dump via a config file would\nbe very useful, but the syntax proposed here feels much more like a\nhacked-up syntax designed to meet this one use case, rather than a\ngood general-purpose design that can be easily extended.\n\nIMO, a pg_dump config file should be able to specify all options\ncurrently supported through the command line, and vice versa (subject\nto command line length limits), with a single common code path for\nhandling options. That way, any new options we add will work on the\ncommand line and in config files. Likewise, the user should only need\nto learn one set of options, and have the choice of specifying them on\nthe command line or in a config file (or a mix of both).\n\nI can imagine eventually supporting multiple different file formats,\neach just being a different representation of the same data, so\nperhaps this could work with 2 new options:\n\n --option-file-format=plain|yaml|json|...\n --option-file=filename\n\nwith \"plain\" being the default initial implementation, which might be\nsomething like our current postgresql.conf file format.\n\nAlso, I think we should allow multiple \"--option-file\" arguments\n(e.g., to list different object types in different files), and for a\nconfig file to contain its own \"--option-file\" arguments, to allow\nconfig files to include other config files.\n\nThe current design feels far too limited to me, and requires new code\nand new syntax to be added each time we extend it, so I'm -1 on this\npatch as it stands.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 25 Nov 2020 18:25:27 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": 
"Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nst 25. 11. 2020 v 19:25 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\nnapsal:\n\n> On Thu, 19 Nov 2020 at 19:57, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > minor update - fixed handling of processing names with double quotes\n> inside\n> >\n>\n> I see this is marked RFC, but reading the thread it doesn't feel like\n> we have reached consensus on the design for this feature.\n>\n> I agree that being able to configure pg_dump via a config file would\n> be very useful, but the syntax proposed here feels much more like a\n> hacked-up syntax designed to meet this one use case, rather than a\n> good general-purpose design that can be easily extended.\n>\n\nNobody sent a real use case for introducing the config file. There was a\ndiscussion about formats, and you introduce other dimensions and\nvariability.\n\nBut I don't understand why? What is a use case? What is a benefit against\ncommand line, or libpq variables? And why should config files be better as\na solution for limited length of command line, when I need to dump\nthousands of tables exactly specified?\n\nRegards\n\nPavel\n\n\n> IMO, a pg_dump config file should be able to specify all options\n> currently supported through the command line, and vice versa (subject\n> to command line length limits), with a single common code path for\n> handling options. That way, any new options we add will work on the\n> command line and in config files. 
Likewise, the user should only need\n> to learn one set of options, and have the choice of specifying them on\n> the command line or in a config file (or a mix of both).\n>\n> I can imagine eventually supporting multiple different file formats,\n> each just being a different representation of the same data, so\n> perhaps this could work with 2 new options:\n>\n> --option-file-format=plain|yaml|json|...\n> --option-file=filename\n>\n> with \"plain\" being the default initial implementation, which might be\n> something like our current postgresql.conf file format.\n>\n> Also, I think we should allow multiple \"--option-file\" arguments\n> (e.g., to list different object types in different files), and for a\n> config file to contain its own \"--option-file\" arguments, to allow\n> config files to include other config files.\n>\n> The current design feels far too limited to me, and requires new code\n> and new syntax to be added each time we extend it, so I'm -1 on this\n> patch as it stands.\n\nThis new syntax tries to be consistent and simple. It really doesn't try to\nimplement an alternative configuration file for pg_dump. The code is simple\nand can be easily extended.\n\nWhat are the benefits of supporting multiple formats?\n\n> Regards,\n> Dean\n>", "msg_date": "Wed, 25 Nov 2020 20:29:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 25. 11. 2020 v 19:25 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\n> napsal:\n>> I agree that being able to configure pg_dump via a config file would\n>> be very useful, but the syntax proposed here feels much more like a\n>> hacked-up syntax designed to meet this one use case, rather than a\n>> good general-purpose design that can be easily extended.\n\n> But I don't understand why? What is a use case? What is a benefit against\n> command line, or libpq variables? 
On the command line, we largely\ndepend on the shell's quoting behavior to solve that, but we'd not\nhave that infrastructure when reading from a file.)\n\n> What are the benefits of supporting multiple formats?\n\nYeah, that part of Dean's sketch seemed like overkill to me too.\n\nI wasn't very excited about multiple switch files either, though\ndepending on how the implementation is done, that could be simple\nenough to be in the might-as-well category.\n\nOne other point that I'm wondering about is that there's really no\nvalue in doing anything here until you get to some thousands of\ntable names; as long as the list fits in the shell's command line\nlength limit, you might as well just make a shell script file.\nDoes pg_dump really have sane performance for that situation, or\nare we soon going to be fielding requests to make it not be O(N^2)\nin the number of listed tables?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Nov 2020 15:00:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 25. 11. 2020 v 21:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > st 25. 11. 2020 v 19:25 odesílatel Dean Rasheed <\n> dean.a.rasheed@gmail.com>\n> > napsal:\n> >> I agree that being able to configure pg_dump via a config file would\n> >> be very useful, but the syntax proposed here feels much more like a\n> >> hacked-up syntax designed to meet this one use case, rather than a\n> >> good general-purpose design that can be easily extended.\n>\n> > But I don't understand why? What is a use case? What is a benefit against\n> > command line, or libpq variables? 
And why should config files be better\n> as\n> > a solution for limited length of command line, when I need to dump\n> > thousands of tables exactly specified?\n>\n> Because next week somebody will want to dump thousands of functions\n> selected by name, or schemas selected by name, etc etc. I agree with\n> the position that we don't want a single-purpose solution. The idea\n> that the syntax should match the command line switch syntax seems\n> reasonable, though I'm not wedded to it. (One thing to consider is\n> how painful will it be for people to quote table names containing\n> funny characters, for instance. On the command line, we largely\n> depend on the shell's quoting behavior to solve that, but we'd not\n> have that infrastructure when reading from a file.)\n>\n\nThis is not a problem with the current patch - and the last version of this\npatch supports well obscure names.\n\nThere was a requirement for supporting all and future pg_dump options - ok\nit can make sense. I have not a problem to use instead a line format\n\n\"option argument\" or \"long-option=argument\"\n\nThis format - can it be a solution? I'll try to rewrite the parser for this\nformat.\n\nIt is implementable, but this is in collision with Stephen's requirement\nfor human well readable format designed for handy writing. There are\nrequests that have no intersection. Well readable format needs a more\ncomplex parser. 
And machine generating in this format needs more fork -\ngenerating flat file is more simple and more robust than generating JSON or\nYAML.\n\n\n>\n> > What are the benefits of supporting multiple formats?\n>\n> Yeah, that part of Dean's sketch seemed like overkill to me too.\n>\n> I wasn't very excited about multiple switch files either, though\n> depending on how the implementation is done, that could be simple\n> enough to be in the might-as-well category.\n>\n> One other point that I'm wondering about is that there's really no\n> value in doing anything here until you get to some thousands of\n> table names; as long as the list fits in the shell's command line\n> length limit, you might as well just make a shell script file.\n> Does pg_dump really have sane performance for that situation, or\n> are we soon going to be fielding requests to make it not be O(N^2)\n> in the number of listed tables?\n>\n\nPerformance is another factor, but the command line limit can be easily\ntouched when table names have maximum width.\n\n\n> regards, tom lane\n>\n\nst 25. 11. 2020 v 21:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 25. 11. 2020 v 19:25 odesílatel Dean Rasheed <dean.a.rasheed@gmail.com>\n> napsal:\n>> I agree that being able to configure pg_dump via a config file would\n>> be very useful, but the syntax proposed here feels much more like a\n>> hacked-up syntax designed to meet this one use case, rather than a\n>> good general-purpose design that can be easily extended.\n\n> But I don't understand why? What is a use case? What is a benefit against\n> command line, or libpq variables? And why should config files be better as\n> a solution for limited length of command line, when I need to dump\n> thousands of tables exactly specified?\n\nBecause next week somebody will want to dump thousands of functions\nselected by name, or schemas selected by name, etc etc.  
I agree with\nthe position that we don't want a single-purpose solution.  The idea\nthat the syntax should match the command line switch syntax seems\nreasonable, though I'm not wedded to it.  (One thing to consider is\nhow painful will it be for people to quote table names containing\nfunny characters, for instance.  On the command line, we largely\ndepend on the shell's quoting behavior to solve that, but we'd not\nhave that infrastructure when reading from a file.)This is not a problem with the current patch - and the last version of this patch supports well obscure names.There was a requirement for supporting all and future pg_dump options - ok it can make sense. I have not a problem to use instead a line format\"option argument\" or \"long-option=argument\"This format - can it be a solution? I'll try to rewrite the parser for this format.It is implementable, but this is in collision with Stephen's requirement for human well readable format designed for handy writing. There are requests that have no intersection. Well readable format needs a more complex parser. And machine generating in this format needs more fork - generating flat file is more simple and more robust than generating JSON or YAML. 
\n\n> What are the benefits of supporting multiple formats?\n\nYeah, that part of Dean's sketch seemed like overkill to me too.\n\nI wasn't very excited about multiple switch files either, though\ndepending on how the implementation is done, that could be simple\nenough to be in the might-as-well category.\n\nOne other point that I'm wondering about is that there's really no\nvalue in doing anything here until you get to some thousands of\ntable names; as long as the list fits in the shell's command line\nlength limit, you might as well just make a shell script file.\nDoes pg_dump really have sane performance for that situation, or\nare we soon going to be fielding requests to make it not be O(N^2)\nin the number of listed tables?Performance is another factor, but the command line limit can be easily touched when table names have maximum width. \n\n                        regards, tom lane", "msg_date": "Thu, 26 Nov 2020 07:42:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, 26 Nov 2020 at 06:43, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> st 25. 11. 2020 v 21:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>>\n>> (One thing to consider is\n>> how painful will it be for people to quote table names containing\n>> funny characters, for instance. On the command line, we largely\n>> depend on the shell's quoting behavior to solve that, but we'd not\n>> have that infrastructure when reading from a file.)\n>\n> This is not a problem with the current patch - and the last version of this patch supports well obscure names.\n>\n\nActually, that raises a different possible benefit of passing options\nin an options file -- if the user wants to pass in a table name\npattern, it can be a nuisance if the shell's argument processing does\nadditional unwanted things like globbing and environment variable\nsubstitutions. 
Using an options file could provide a handy way to\nensure that any option values are interpreted exactly as written,\nwithout any additional mangling.\n\n> There was a requirement for supporting all and future pg_dump options - ok it can make sense. I have not a problem to use instead a line format\n>\n> \"option argument\" or \"long-option=argument\"\n>\n> This format - can it be a solution? I'll try to rewrite the parser for this format.\n>\n\nYes, that's the sort of thing I was thinking of, to make the feature\nmore general-purpose.\n\n> It is implementable, but this is in collision with Stephen's requirement for human well readable format designed for handy writing. There are requests that have no intersection. Well readable format needs a more complex parser. And machine generating in this format needs more fork - generating flat file is more simple and more robust than generating JSON or YAML.\n>\n\nTo be clear, I wasn't suggesting that this patch implement multiple\nformats. Just the \"plain\" format above. Other formats like YAML might\nwell be more human-readable, and be able to take advantage of values\nthat are lists to avoid repeating option names, and they would have\nthe advantage of being readable and writable by other standard tools,\nwhich might be useful. But I think such things would be for the\nfuture. Maybe no one will ever add support for other formats, or maybe\nsomeone will just write a separate external tool to convert YAML or\nJSON to our plain format. I don't know. 
But supporting all pg_dump\noptions makes more things possible.\n\n>> I wasn't very excited about multiple switch files either, though\n>> depending on how the implementation is done, that could be simple\n>> enough to be in the might-as-well category.\n>>\n\nThat's what I was hoping.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 26 Nov 2020 11:50:34 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Actually, that raises a different possible benefit of passing options\n> in an options file -- if the user wants to pass in a table name\n> pattern, it can be a nuisance if the shell's argument processing does\n> additional unwanted things like globbing and environment variable\n> substitutions. Using an options file could provide a handy way to\n> ensure that any option values are interpreted exactly as written,\n> without any additional mangling.\n\nHuh? Any format we might devise, or borrow, will have to have some\nkind of escaping/quoting convention. The idea that \"we don't need\nthat\" tends to lead to very ugly workarounds later.\n\nI do agree that the shell's quoting conventions are pretty messy\nand so those aren't the ones we should borrow. 
We could do a lot\nworse than to use some established data format like JSON or YAML.\nGiven that we already have src/common/jsonapi.c, it seems like\nJSON would be the better choice of those two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Nov 2020 11:02:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > Actually, that raises a different possible benefit of passing options\n> > in an options file -- if the user wants to pass in a table name\n> > pattern, it can be a nuisance if the shell's argument processing does\n> > additional unwanted things like globbing and environment variable\n> > substitutions. Using an options file could provide a handy way to\n> > ensure that any option values are interpreted exactly as written,\n> > without any additional mangling.\n> \n> Huh? Any format we might devise, or borrow, will have to have some\n> kind of escaping/quoting convention. The idea that \"we don't need\n> that\" tends to lead to very ugly workarounds later.\n\nAgreed.\n\n> I do agree that the shell's quoting conventions are pretty messy\n> and so those aren't the ones we should borrow. We could do a lot\n> worse than to use some established data format like JSON or YAML.\n> Given that we already have src/common/jsonapi.c, it seems like\n> JSON would be the better choice of those two.\n\nJSON doesn't support comments, something that's really useful to have in\nconfiguration files, so I don't agree that it's a sensible thing to use\nin this case. 
JSON also isn't very forgiving, which is also\nunfortunate and makes for a poor choice.\n\nThis is why I was suggesting TOML up-thread, which is MIT licensed, has\nbeen around for a number of years, supports comments, has sensible\nquoting that's easier to deal with than the shell, and has a C (C99)\nimplementation. It's also used in quite a few other projects.\n\nIn a quick look, I suspect it might also be something that could be used\nto replace our existing hand-hacked postgresql.conf parser and give us\nthe ability to handle things a bit cleaner there too...\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Nov 2020 11:56:31 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> > I agree that being able to configure pg_dump via a config file would\n> > be very useful, but the syntax proposed here feels much more like a\n> > hacked-up syntax designed to meet this one use case, rather than a\n> > good general-purpose design that can be easily extended.\n> \n> Nobody sent a real use case for introducing the config file. There was a\n> discussion about formats, and you introduce other dimensions and\n> variability.\n\nI'm a bit baffled by this because it seems abundantly clear to me that\nbeing able to have a config file for pg_dump would be extremely helpful.\nThere's no shortage of times that I've had to hack up a shell script and\nfigure out quoting and set up the right set of options for pg_dump,\nresulting in things like:\n\npg_dump \\\n  --host=myserver.com \\\n  --username=postgres \\\n  --schema=public \\\n  --schema=myschema \\\n  --no-comments \\\n  --no-tablespaces \\\n  --file=somedir \\\n  --format=d \\\n  --jobs=5\n\nwhich really is pretty grotty. 
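[Editorial illustration: the invocation above, rewritten as the kind of one-option-per-line file argued for in this thread. This is a hypothetical sketch (no released pg_dump reads such a file), though the option-per-line shape and `#` comments match what Pavel's options-file patch later in the thread proposes:]

```
# connection
--host=myserver.com
--username=postgres

# what to dump
--schema=public
--schema=myschema

# output
--no-comments
--no-tablespaces
--file=somedir
--format=d
--jobs=5
```

Such a file would be passed with something like `pg_dump --options-file=mydump.conf`, the switch name used by the patch posted downthread.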
Being able to have a config file that\nhas proper comments would be much better and we could start to extend to\nthings like \"please export schema A to directory A, schema B to\ndirectory B\" and other ways of selecting source and destination, and\nimagine if we could validate it too, eg:\n\npg_dump --config=whatever --dry-run\n\nor --check-config maybe.\n\nThis isn't a new concept either- export and import tools for other\ndatabases have similar support, eg: Oracle's imp/exp tool, mysqldump\n(see: https://dev.mysql.com/doc/refman/8.0/en/option-files.html which\nhas a TOML-looking format too), pgloader of course has a config file,\netc. We certainly aren't in novel territory here.\n\nThanks,\n\nStephen", "msg_date": "Fri, 27 Nov 2020 13:45:09 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nst 25. 11. 2020 v 21:00 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > st 25. 11. 2020 v 19:25 odesílatel Dean Rasheed <\n> dean.a.rasheed@gmail.com>\n> > napsal:\n> >> I agree that being able to configure pg_dump via a config file would\n> >> be very useful, but the syntax proposed here feels much more like a\n> >> hacked-up syntax designed to meet this one use case, rather than a\n> >> good general-purpose design that can be easily extended.\n>\n> > But I don't understand why? What is a use case? What is a benefit against\n> > command line, or libpq variables? And why should config files be better\n> as\n> > a solution for limited length of command line, when I need to dump\n> > thousands of tables exactly specified?\n>\n> Because next week somebody will want to dump thousands of functions\n> selected by name, or schemas selected by name, etc etc. I agree with\n> the position that we don't want a single-purpose solution. 
The idea\n> that the syntax should match the command line switch syntax seems\n> reasonable, though I'm not wedded to it. (One thing to consider is\n> how painful will it be for people to quote table names containing\n> funny characters, for instance. On the command line, we largely\n> depend on the shell's quoting behavior to solve that, but we'd not\n> have that infrastructure when reading from a file.)\n>\n> > What are the benefits of supporting multiple formats?\n>\n> Yeah, that part of Dean's sketch seemed like overkill to me too.\n>\n> I wasn't very excited about multiple switch files either, though\n> depending on how the implementation is done, that could be simple\n> enough to be in the might-as-well category.\n>\n> One other point that I'm wondering about is that there's really no\n> value in doing anything here until you get to some thousands of\n> table names; as long as the list fits in the shell's command line\n> length limit, you might as well just make a shell script file.\n> Does pg_dump really have sane performance for that situation, or\n> are we soon going to be fielding requests to make it not be O(N^2)\n> in the number of listed tables?\n>\n\nHere is a fresh implementation. I used the name of a new option -\n\"options-file\". Looks more accurate than \"config\", but the name can be\nchanged easily anytime.\n\nAny short or long option can be read from this file in simple format - one\noption per line. Arguments inside double quotes can be multi lined. Row\ncomments started by # and can be used everywhere.\n\nThe implementation is very generic - support of new options doesn't require\nchange of this new part code. 
The parser can ignore white spaces almost\neverywhere, where it has sense.\n\nThe option should start with \"-\" or \"--\" in the options file too, because\nthis is necessary for good detection if the option is short or long.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Sat, 28 Nov 2020 21:14:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 27. 11. 2020 v 19:45 odesílatel Stephen Frost <sfrost@snowman.net>\nnapsal:\n\n> Greetings,\n>\n> * Pavel Stehule (pavel.stehule@gmail.com) wrote:\n> > > I agree that being able to configure pg_dump via a config file would\n> > > be very useful, but the syntax proposed here feels much more like a\n> > > hacked-up syntax designed to meet this one use case, rather than a\n> > > good general-purpose design that can be easily extended.\n> >\n> > Nobody sent a real use case for introducing the config file. There was a\n> > discussion about formats, and you introduce other dimensions and\n> > variability.\n>\n> I'm a bit baffled by this because it seems abundently clear to me that\n> being able to have a config file for pg_dump would be extremely helpful.\n> There's no shortage of times that I've had to hack up a shell script and\n> figure out quoting and set up the right set of options for pg_dump,\n> resulting in things like:\n>\n> pg_dump \\\n> --host=myserver.com \\\n> --username=postgres \\\n> --schema=public \\\n> --schema=myschema \\\n> --no-comments \\\n> --no-tablespaces \\\n> --file=somedir \\\n> --format=d \\\n> --jobs=5\n>\n> which really is pretty grotty. 
Being able to have a config file that\n> has proper comments would be much better and we could start to extend to\n> things like \"please export schema A to directory A, schema B to\n> directory B\" and other ways of selecting source and destination, and\n> imagine if we could validate it too, eg:\n>\n> pg_dump --config=whatever --dry-run\n>\n> or --check-config maybe.\n>\n> This isn't a new concept either- export and import tools for other\n> databases have similar support, eg: Oracle's imp/exp tool, mysqldump\n> (see: https://dev.mysql.com/doc/refman/8.0/en/option-files.html which\n> has a TOML-looking format too), pgloader of course has a config file,\n> etc. We certainly aren't in novel territory here\n>\n\nStill, I am not a fan of this. pg_dump is a simple tool for simple\npurposes. It is not a pgloader or any ETL tool. It can be changed in\nfuture, maybe, but still, why? And any time, there will be a question if\npg_dump is a good foundation for massive enhancement in ETL direction. The\ndevelopment in C is expensive and pg_dump is too Postgres specific, so I\ncannot imagine so pg_dump will be used for some complex tasks directly, and\nthere will be requirements for special configuration. When we have a\npgloader, then we don't need to move pg_dump in the pgloader direction.\n\nAnyway - new patch allows to store any options (one per line) with possible\ncomments (everywhere in line) and argument's can be across more lines. It\nhasn't any more requirements on memory or CPU.\n\nRegards\n\nPavel\n\n\n>\n> Thanks,\n>\n> Stephen\n>\n", "msg_date": "Sat, 28 Nov 2020 22:14:38 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, 
{ "msg_contents": "On Sat, Nov 28, 2020 at 09:14:35PM +0100, Pavel Stehule wrote:\n> Any short or long option can be read from this file in simple format - one\n> option per line. Arguments inside double quotes can be multi lined. Row\n> comments started by # and can be used everywhere.\n\nDoes this support even funkier table names ?\n\nThis tests a large number and fraction of characters in dbname/username, so all\nof pg_dump has to continue supporting that:\n./src/bin/pg_dump/t/010_dump_connstr.pl\n\nI tested and it seems to work with -t \"fooå\"\nBut it didn't work with -t \"foo\\nbar\" (literal newline). Fix attached.\nIf you send another patch, please consider including a test case for quoted\nnames in long and short options.\n\n> +static char *optsfilename = NULL;\n\n> + * It assign the values of options to related DumpOption fields or to\n> + * some global values. It is called from twice. First, for processing\n> + * the command line argumens. Second, for processing an options from\n> + * options file.\n\nThis didn't support multiple config files, nor config files which include\nconfig files, as Dean and I mentioned. 
I think the argument parsers should\nthemselves call the config file parser, as need be, so the last option\nspecification should override previous ones.\n\nFor example pg_dump --config-file=./pg_dump.conf --blobs should have blobs even\nif the config file says --no-blobs. (Command-line arguments normally take\nprecedence over config files, certainly if the argument is specified \"later\").\nI think it'd be ok if it's recursive. I made a quick hack to do that.\n\nI doubt this will satisfy Stephen. Personally, I would use this if it were a\nplain and simple text config file (which for our purposes I would pass on\nstdin), and I would almost certainly not use it if it were json. But it'd be\nswell if there were a standard config file format, that handled postgresql.conf\nand maybe pg_hba.conf.\n\n-- \nJustin", "msg_date": "Sat, 28 Nov 2020 17:49:45 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nne 29. 11. 2020 v 0:49 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Sat, Nov 28, 2020 at 09:14:35PM +0100, Pavel Stehule wrote:\n> > Any short or long option can be read from this file in simple format -\n> one\n> > option per line. Arguments inside double quotes can be multi lined. Row\n> > comments started by # and can be used everywhere.\n>\n\nhere is updated patch\n\n\n> Does this support even funkier table names ?\n>\n> This tests a large number and fraction of characters in dbname/username,\n> so all\n> of pg_dump has to continue supporting that:\n> ./src/bin/pg_dump/t/010_dump_connstr.pl\n>\n> I tested and it seems to work with -t \"fooå\"\n> But it didn't work with -t \"foo\\nbar\" (literal newline). Fix attached.\n> If you send another patch, please consider including a test case for quoted\n> names in long and short options.\n>\n\nI implemented some basic backslash escaping. 
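[Editorial illustration of what such "basic backslash escaping" of an options-file argument could look like. Python is used here purely for brevity; the posted patch is C, and its exact escape set may differ from the one assumed below:]

```python
def unescape(arg):
    # Resolve backslash escapes in an argument read from an options file.
    # The supported set here (backslash-n, backslash-t, doubled backslash,
    # escaped quote) is an assumption for illustration only.
    out = []
    i = 0
    while i < len(arg):
        if arg[i] == '\\' and i + 1 < len(arg):
            nxt = arg[i + 1]
            # Map the escape character; anything unrecognized is kept literally.
            out.append({'n': '\n', 't': '\t'}.get(nxt, nxt))
            i += 2
        else:
            out.append(arg[i])
            i += 1
    return ''.join(out)
```

With something like this, a line such as `-t "foo\nbar"` in the options file can carry a table name containing a literal newline, which is the case Justin's test hit.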
I will write more tests, when\nthere will be good agreement on the main concept.\n\n\n>\n> > +static char *optsfilename = NULL;\n>\n> > + * It assign the values of options to related DumpOption fields or to\n> > + * some global values. It is called from twice. First, for processing\n> > + * the command line argumens. Second, for processing an options from\n> > + * options file.\n>\n> This didn't support multiple config files, nor config files which include\n> config files, as Dean and I mentioned. I think the argument parsers should\n> themselves call the config file parser, as need be, so the last option\n> specification should override previous ones.\n>\n> For example pg_dump --config-file=./pg_dump.conf --blobs should have blobs\n> even\n> if the config file says --no-blobs. (Command-line arguments normally take\n> precedence over config files, certainly if the argument is specified\n> \"later\").\n> I think it'd be ok if it's recursive. I made a quick hack to do that.\n>\n\nI did it. I used a different design than you. Making \"dopt\" be a global\nvariable looks too invasive. Almost all functions there expect \"dopt\" as an\nargument. But I think it is not necessary.\n\nI implemented two iterations of argument's processing. 1. for options file\n(more options-file options are allowed, and nesting is allowed too), 2. all\nother arguments from the command line. Any options file is processed only\nonce - second processing is ignored. So there is no problem with cycles.\n\nThe name of the new option - \"config-file\" or \"options-file\" ? I prefer\n\"options-file\". \"config-file\" is valid too, but \"options-file\" is more\nspecific, more descriptive (it is self descriptive).\n\nI merged your patch with a fix of typos.\n\nRegards\n\nPavel\n\n\n> I doubt this will satisfy Stephen. 
Personally, I would use this if it\n> were a\n> plain and simple text config file (which for our purposes I would pass on\n> stdin), and I would almost certainly not use it if it were json. But it'd\n> be\n> swell if there were a standard config file format, that handled\n> postgresql.conf\n> and maybe pg_hba.conf.\n>\n> --\n> Justin\n>", "msg_date": "Sun, 29 Nov 2020 15:09:48 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nrebase\n\nRegards\n\nPavel", "msg_date": "Tue, 16 Feb 2021 20:32:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nút 16. 2. 2021 v 20:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n> Hi\n>\n> rebase\n>\n\nfresh rebase\n\nRegards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>", "msg_date": "Sun, 11 Apr 2021 09:48:35 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nne 11. 4. 2021 v 9:48 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> út 16. 2. 
2021 v 20:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>>\n>> Hi\n>>\n>> rebase\n>>\n>\n>\nrebase\n\n\n\n> fresh rebase\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Regards\n>>\n>> Pavel\n>>\n>", "msg_date": "Wed, 12 May 2021 08:22:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nI started looking at the patch allowing to export just functions [1], \nand I got pointed to this patch as an alternative approach (to adding a \nseparate filtering option for every possible object type).\n\nI'm familiar with the customer that inspired Pavel to start working on \nthis, so I understand the use case he's trying to address - a flexible \nway to filter (include/exclude) large number of objects.\n\nIMHO it's a mistake to try to broaden the scope of the patch and require \nimplementing some universal pg_dump config file, particularly if it \nrequires \"complex\" structure or formats like JSON, TOML or whatever. \nMaybe that's worth doing, but in my mind it's orthogonal to what this \npatch aims (or aimed) to do - filtering objects using rules in a file, \nnot on the command line.\n\nI believe it's much closer to .gitignore or rsync --filter than to a \nfull config file. Even if we end up implementing the pg_dump config \nfile, it'd be nice to keep the filter rules in a separate file and just \nreference that file from the config file.\n\nThat also means I find it pointless to use an \"advanced\" format like \nJSON or TOML - I think the format should be as simple as possible. Yes, \nit has to support all valid identifiers, comments and so on. But I don't \nquite see a point in using JSON or similar \"full\" format. 
If a simple \nformat is good enough for rsync or gitignore, why should we insist on \nusing something more complex?\n\nOTOH I don't quite like the current approach of simply reading options \nfrom a file, because that requires adding new command-line options for \neach type of object we want to support. Which seems to contradict the \nidea of \"general filter\" method as mentioned in [1].\n\nSo if it was up to me, I'd go back to the original format or something \nclose to it. So something like this:\n\n[+-] OBJECT_TYPE_PATTERN OBJECT_NAME_PATTERN\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/33/3051/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 10 Jul 2021 17:47:12 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 10 Jul 2021, at 17:47, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> So if it was up to me, I'd go back to the original format or something close to it. So something like this:\n> \n> [+-] OBJECT_TYPE_PATTERN OBJECT_NAME_PATTERN\n\nThat still leaves the parsing with quoting and escaping that needs to be done\nless trivial and more bespoke than what meets the eye, no?\n\nAs mentioned upthread, I'm still hesitant to add a file format which doesn't\nhave any version information of sorts for distinguishing it from when the\ninevitable \"now wouldn't it be nice if we could do this too\" patch which we all\nknow will come. 
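[Editorial illustration of the quoting concern: a toy parser for the `[+-] OBJECT_TYPE_PATTERN OBJECT_NAME_PATTERN` lines quoted above. This is a sketch only; the double-quote syntax and backslash-n escape for patterns are assumptions this sketch adds, since the thread leaves them unspecified:]

```python
import re

# One rule per line: [+-] OBJECT_TYPE PATTERN, plus blank lines and
# '#' comment lines.  A pattern is either a bare token or a double-quoted
# string in which backslash escapes (e.g. \" and \n) are allowed, so that
# names containing spaces or newlines can be expressed on one line.
RULE_RE = re.compile(r'^([+-])\s+(\w+)\s+("(?:[^"\\]|\\.)*"|\S+)$')

def parse_filter(text):
    rules = []
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        m = RULE_RE.match(line)
        if m is None:
            raise ValueError('line %d: unrecognized filter rule' % lineno)
        include, objtype, pattern = m.group(1) == '+', m.group(2), m.group(3)
        if pattern.startswith('"'):
            # Strip the quotes and resolve backslash escapes.
            pattern = re.sub(r'\\(.)',
                             lambda e: '\n' if e.group(1) == 'n' else e.group(1),
                             pattern[1:-1])
        rules.append((include, objtype, pattern))
    return rules
```

Even this toy version needs a bespoke quoting rule the moment an identifier contains whitespace or a newline, which is exactly the point being made here.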
The amount of selectivity switches we have for pg_dump is an\nindication about just how much control users like, this will no doubt be\nsubject to the same.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 13 Jul 2021 00:08:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 7/13/21 12:08 AM, Daniel Gustafsson wrote:\n>> On 10 Jul 2021, at 17:47, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n>> So if it was up to me, I'd go back to the original format or something close it. So something like this:\n>>\n>> [+-] OBJECT_TYPE_PATTERN OBJECT_NAME_PATTERN\n> \n> That still leaves the parsing with quoting and escaping that needs to be done\n> less trivial and more bespoke than what meets the eye, no?\n> \n\nYes, it'd require proper escaping/quoting of the fields/identifiers etc.\n\n> As mentioned upthread, I'm still hesitant to add a file format which deosn't\n> have any version information of sorts for distinguishing it from when the\n> inevitable \"now wouldn't it be nice if we could do this too\" patch which we all\n> know will come. The amount of selectivity switches we have for pg_dump is an\n> indication about just how much control users like, this will no doubt be\n> subject to the same.\n> \n\nI'm not going to fight against some sort of versioning, but I think \nkeeping the scope as narrow as possible would make it unnecessary. 
That \nis, let's stick to the original goal to allow passing filtering rules \nthat would not fit on the command-line, and maybe let's make it a bit \nmore flexible to support other object types etc.\n\nIMHO the filtering rules are simple enough to not really need elaborate \nversioning, and if a more advanced rule is proposed in the future it can \nbe supported in the existing format (extra field, ...).\n\nOf course, maybe my imagination is not wild enough.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 13 Jul 2021 00:47:05 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 2021-Jul-13, Tomas Vondra wrote:\n\n> I'm not going to fight against some sort of versioning, but I think keeping\n> the scope as narrow as possible would make it unnecessary. That is, let's\n> stick to the original goal to allow passing filtering rules that would not\n> fit on the command-line, and maybe let's make it a bit more flexible to\n> support other object types etc.\n> \n> IMHO the filtering rules are simple enough to not really need elaborate\n> versioning, and if a more advanced rule is proposed in the future it can be\n> supported in the existing format (extra field, ...).\n\nI don't understand why is versioning needed for this file. Surely we\ncan just define some line-based grammar that's accepted by the current\npg_dump[1] and that would satisfy the current need as well as allowing\nfor extending the grammar in the future; even JSON or Windows-INI format\n(ugh?) if that's necessary to tailor the output file in some other way\nnot covered by that.\n\n[1] your proposal of \"[+-] OBJTYPE OBJIDENT\" plus empty lines allowed\n plus lines starting with # are comments, seems plenty. 
Any line not\n following that format would cause an error to be thrown.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 12 Jul 2021 18:59:47 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> [1] your proposal of \"[+-] OBJTYPE OBJIDENT\" plus empty lines allowed\n> plus lines starting with # are comments, seems plenty. Any line not\n> following that format would cause an error to be thrown.\n\nI'd like to see some kind of keyword on each line, so that we could extend\nthe command set by adding new keywords. As this stands, I fear we'd end\nup using random punctuation characters in place of [+-], which seems\npretty horrid from a readability standpoint.\n\nI think that this file format should be designed with an eye to allowing\nevery, or at least most, pg_dump options to be written in the file rather\nthan on the command line. I don't say we have to *implement* that right\nnow; but if the format spec is incapable of being extended to meet\nrequests like that one, I think we'll regret it. 
This line of thought\nsuggests that the initial commands ought to match the existing\ninclude/exclude switches, at least approximately.\n\nHence I suggest\n\n\tinclude table PATTERN\n\texclude table PATTERN\n\nwhich ends up being the above but with words not [+-].\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Jul 2021 19:16:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > [1] your proposal of \"[+-] OBJTYPE OBJIDENT\" plus empty lines allowed\n> > plus lines starting with # are comments, seems plenty. Any line not\n> > following that format would cause an error to be thrown.\n> \n> I'd like to see some kind of keyword on each line, so that we could extend\n> the command set by adding new keywords. As this stands, I fear we'd end\n> up using random punctuation characters in place of [+-], which seems\n> pretty horrid from a readability standpoint.\n\nI agree that it'd end up being bad with single characters.\n\n> I think that this file format should be designed with an eye to allowing\n> every, or at least most, pg_dump options to be written in the file rather\n> than on the command line. I don't say we have to *implement* that right\n> now; but if the format spec is incapable of being extended to meet\n> requests like that one, I think we'll regret it. This line of thought\n> suggests that the initial commands ought to match the existing\n> include/exclude switches, at least approximately.\n\nI agree that we want to have an actual config file that allows just\nabout every pg_dump option. 
I'm also fine with saying that we don't\nhave to implement that initially but the format should be one which can\nbe extended to allow that.\n\n> Hence I suggest\n> \n> \tinclude table PATTERN\n> \texclude table PATTERN\n> \n> which ends up being the above but with words not [+-].\n\nWhich ends up inventing yet-another-file-format which people will end up\nwriting generators and parsers for. Which is exactly what I was arguing\nwe really should be trying to avoid doing.\n\nI definitely feel that we should have a way to allow anything that can\nbe created as an object in the database to be explicitly included in the\nfile and that means whatever we do need to be able to handle objects\nthat have names that span multiple lines, etc. It's not clear how the\nabove would. As I recall, the proposed patch didn't have anything for\nhandling that, which was one of the issues I had with it and is why I\nbring it up again.\n\nThanks,\n\nStephen", "msg_date": "Tue, 13 Jul 2021 09:40:13 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "\n\nOn 7/13/21 3:40 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> [1] your proposal of \"[+-] OBJTYPE OBJIDENT\" plus empty lines allowed\n>>> plus lines starting with # are comments, seems plenty. Any line not\n>>> following that format would cause an error to be thrown.\n>>\n>> I'd like to see some kind of keyword on each line, so that we could extend\n>> the command set by adding new keywords. 
As this stands, I fear we'd end\n>> up using random punctuation characters in place of [+-], which seems\n>> pretty horrid from a readability standpoint.\n> \n> I agree that it'd end up being bad with single characters.\n> \n\nThe [+-] format is based on what rsync does, so there's at least some \nprecedent for that, and IMHO it's fairly readable. I agree the rest of \nthe rule (object type, ...) may be a bit more verbose.\n\n>> I think that this file format should be designed with an eye to allowing\n>> every, or at least most, pg_dump options to be written in the file rather\n>> than on the command line. I don't say we have to *implement* that right\n>> now; but if the format spec is incapable of being extended to meet\n>> requests like that one, I think we'll regret it. This line of thought\n>> suggests that the initial commands ought to match the existing\n>> include/exclude switches, at least approximately.\n> \n> I agree that we want to have an actual config file that allows just\n> about every pg_dump option. I'm also fine with saying that we don't\n> have to implement that initially but the format should be one which can\n> be extended to allow that.\n> \n\nI understand the desire to have a config file that may contain all \npg_dump options, but I really don't see why we'd want to mix that with \nthe file containing filter rules.\n\nI think those should be separate, one of the reasons being that I find \nit desirable to be able to \"include\" the filter rules into different \npg_dump configs. 
That also means the format for the filter rules can be \nmuch simpler.\n\nIt's also not clear to me whether the single-file approach would allow \nfiltering not supported by actual pg_dump option, for example.\n\n>> Hence I suggest\n>>\n>> \tinclude table PATTERN\n>> \texclude table PATTERN\n>>\n>> which ends up being the above but with words not [+-].\n> \nWork for me.\n\n> Which ends up inventing yet-another-file-format which people will end up\n> writing generators and parsers for. Which is exactly what I was arguing\n> we really should be trying to avoid doing.\n> \n\nPeople will have to write generators *in any case* because how else \nwould you use this? Unless we also provide tools to manipulate that file \n(which seems rather futile), they'll have to do that. Even if we used \nJSON/YAML/TOML/... they'd still need to deal with the semantics of the \nfile format.\n\nFWIW I don't understand why would they need to write parsers. That's \nsomething we'd need to do to process the file. I think the case when the \nfilter file needs to be modified is rather rare - it certainly is not \nwhat the original use case Pavel tried to address needs. (I know that \ncustomer and the filter would be generated and used for a single dump.)\n\nMy opinion is that the best solution (to make both generators and \nparsers simple) is to keep the format itself as simple as possible. \nWhich is exactly why I'm arguing for only addressing the filtering, not \ntrying to invent a \"universal\" pg_dump config file format.\n\n> I definitely feel that we should have a way to allow anything that can\n> be created as an object in the database to be explicitly included in the\n> file and that means whatever we do need to be able to handle objects\n> that have names that span multiple lines, etc. It's not clear how the\n> above would. 
As I recall, the proposed patch didn't have anything for\n> handling that, which was one of the issues I had with it and is why I\n> bring it up again.\n> \n\nI really don't understand why you think the current format can't do \nescaping/quoting or handle names spanning multiple lines. The fact that \nthe original patch did not handle that correctly is a bug, but it does \nnot mean the format can't handle that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 13 Jul 2021 18:14:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 13 Jul 2021, at 18:14, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> FWIW I don't understand why would they need to write parsers.\n\nIt's quite common to write unit tests for VM recipes/playbooks when using\ntools like Chef etc.; parsing and checking the installed/generated files is part\nof that. This would be one very real use case for writing a parser.\n\n> I think the case when the filter file needs to be modified is rather rare - it certainly is not what the original use case Pavel tried to address needs. (I know that customer and the filter would be generated and used for\na single dump.)\n\nI'm not convinced that basing design decisions on a single customer reference\nwho only wants to use the code once is helpful. 
I hear what you're saying, but\nI think this will see more diverse use cases than what we can foresee here.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 13 Jul 2021 22:44:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 13 Jul 2021, at 00:59, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2021-Jul-13, Tomas Vondra wrote:\n> \n>> I'm not going to fight against some sort of versioning, but I think keeping\n>> the scope as narrow as possible would make it unnecessary. That is, let's\n>> stick to the original goal to allow passing filtering rules that would not\n>> fit on the command-line, and maybe let's make it a bit more flexible to\n>> support other object types etc.\n>> \n>> IMHO the filtering rules are simple enough to not really need elaborate\n>> versioning, and if a more advanced rule is proposed in the future it can be\n>> supported in the existing format (extra field, ...).\n> \n> I don't understand why is versioning needed for this file. Surely we\n> can just define some line-based grammar that's accepted by the current\n> pg_dump[1] and that would satisfy the current need as well as allowing\n> for extending the grammar in the future; even JSON or Windows-INI format\n> (ugh?) if that's necessary to tailor the output file in some other way\n> not covered by that.\n\nI wasn't expressing myself very well; by \"versioning\" I mean a way to be able\nto add to/change/fix the format and still be able to deterministically parse it\nwithout having to resort to ugly heuristics and hacks. Whether that's achieved by\nan explicit version number or by an inherent characteristic of the format\ndoesn't really matter (to me). 
My worry is that the very simple proposed\nformat might not fit that bill, but since I don't know what the future of the\nfeature might bring it's (mostly) a gut feeling.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 13 Jul 2021 22:55:35 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\nOn Tue, Jul 13, 2021 at 16:44 Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 13 Jul 2021, at 18:14, Tomas Vondra <tomas.vondra@enterprisedb.com>\n> wrote:\n>\n> > FWIW I don't understand why would they need to write parsers.\n>\n> It's quite common to write unit tests for VM recipes/playbooks wheen using\n> tools like Chef etc, parsing and checking the installed/generated files is\n> part\n> of that. This would be one very real use case for writing a parser.\n\n\nConsider pgAdmin and the many other tools which essentially embed pg_dump\nand pg_restore. There’s no shortage of use cases for a variety of tools to\nbe able to understand, read, parse, generate, rewrite, and probably do\nmore, with such a pg_dump/restore config file.\n\n> I think the case when the filter file needs to be modified is rather rare\n> - it certainly is not what the original use case Pavel tried to address\n> needs. 
(I know that customer and the filter would be generated and used for\n> a single dump.)\n>\n> I'm not convinced that basing design decisions on a single customer\n> reference\n> who only want to use the code once is helpful.\n\n\nAgreed.\n\nThanks,\n\nStephen", "msg_date": "Tue, 13 Jul 2021 16:55:56 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 7/13/21 10:55 PM, Stephen Frost wrote:\n> Greetings,\n> \n> On Tue, Jul 13, 2021 at 16:44 Daniel Gustafsson <daniel@yesql.se \n> <mailto:daniel@yesql.se>> wrote:\n> \n>     > On 13 Jul 2021, at 18:14, Tomas Vondra\n>     <tomas.vondra@enterprisedb.com\n>     <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> \n>     > FWIW I don't understand why would they need to write parsers.\n> \n>     It's quite common to write unit tests for VM recipes/playbooks wheen\n>     using\n>     tools like Chef etc, parsing and checking the installed/generated\n>     files is part\n>     of that. This would be one very real use case for writing a parser.\n> \n> \n> Consider pgAdmin and the many other tools which essentially embed \n> pg_dump and pg_restore.  There’s no shortage of use cases for a variety \n> of tools to be able to understand, read, parse, generate, rewrite, and \n> probably do more, with such a pg_dump/restore config file.\n> \n\nSure. Which is why I'm advocating for the simplest possible format (and \nnot expanding the scope of this patch beyond filtering), because that \nmakes this kind of processing simpler.\n\n>     > I think the case when the filter file needs to be modified is\n>     rather rare - it certainly is not what the original use case Pavel\n>     tried to address needs. (I know that customer and the filter would\n>     be generated and used for a single dump.)\n> \n>     I'm not convinced that basing design decisions on a single customer\n>     reference\n>     who only want to use the code once is helpful. \n> \n> \n> Agreed.\n> \n\nI wasn't really basing this on a single customer - that was merely an \nexample, of course. 
FWIW Justin Pryzby already stated having to use some \nmore complex format would likely mean they would not use the feature, so \nthat's another data point to consider.\n\nFWIW I believe it's clear what my opinions on this topic are. Repeating \nthat seems a bit pointless, so I'll step aside and let this thread move \nforward in whatever direction.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 13 Jul 2021 23:57:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 7/13/21 10:55 PM, Stephen Frost wrote:\n> >On Tue, Jul 13, 2021 at 16:44 Daniel Gustafsson <daniel@yesql.se\n> ><mailto:daniel@yesql.se>> wrote:\n> >\n> > > On 13 Jul 2021, at 18:14, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com\n> > <mailto:tomas.vondra@enterprisedb.com>> wrote:\n> >\n> > > FWIW I don't understand why would they need to write parsers.\n> >\n> > It's quite common to write unit tests for VM recipes/playbooks wheen\n> > using\n> > tools like Chef etc, parsing and checking the installed/generated\n> > files is part\n> > of that. This would be one very real use case for writing a parser.\n>\n> >Consider pgAdmin and the many other tools which essentially embed pg_dump\n> >and pg_restore.  There’s no shortage of use cases for a variety of tools\n> >to be able to understand, read, parse, generate, rewrite, and probably do\n> >more, with such a pg_dump/restore config file.\n> \n> Sure. 
Which is why I'm advocating for the simplest possible format (and not\n> expanding the scope of this patch beyond filtering), because that makes this\n> kind of processing simpler.\n\nThe simplest possible format isn't going to work with all the different\npg_dump options and it still isn't going to be 'simple' since it needs\nto work with the flexibility that we have in what we support for object\nnames, and is still going to require people write a new parser and\ngenerator for it instead of using something existing.\n\nI don't know that the options that I suggested previously would\ndefinitely work or not but they at least would allow other projects like\npgAdmin to leverage existing code for parsing and generating these\nconfig files. I'm not completely against inventing something new, but\nI'd really prefer that we at least try to make something existing work\nfirst before inventing something new that everyone is going to have to\ndeal with. If we do invent a new thing for $reasons, then we should\nreally look at what exists today and try to design it properly instead\nof just throwing something together and formally document it because\nit's absolutely going to become a standard of sorts that people are\ngoing to almost immediately write their own parsers/generators in\nvarious languages for.\n\nThanks,\n\nStephen", "msg_date": "Tue, 13 Jul 2021 18:32:17 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 2021-Jul-13, Stephen Frost wrote:\n\n> The simplest possible format isn't going to work with all the different\n> pg_dump options and it still isn't going to be 'simple' since it needs\n> to work with the flexibility that we have in what we support for object\n> names,\n\nThat's fine. 
If people want a mechanism that allows changing the other\npg_dump options that are not related to object filtering, they can\nimplement a configuration file for that.\n\n> and is still going to require people write a new parser and\n> generator for it instead of using something existing.\n\nSure. That's not part of this patch.\n\n> I don't know that the options that I suggested previously would\n> definitely work or not but they at least would allow other projects like\n> pgAdmin to leverage existing code for parsing and generating these\n> config files.\n\nKeep in mind that this patch is not intended to help pgAdmin\nspecifically. It would be great if pgAdmin uses the functionality\nimplemented here, but if they decide not to, that's not terrible. They\nhave survived decades without a pg_dump configuration file; they still\ncan.\n\nThere are several votes in this thread for pg_dump to gain functionality\nto filter objects based on a simple specification -- particularly one\nthat can be written using shell pipelines. This patch gives it.\n\n> I'm not completely against inventing something new, but I'd really\n> prefer that we at least try to make something existing work first\n> before inventing something new that everyone is going to have to deal\n> with.\n\nThat was discussed upthread and led nowhere.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. 
Alexandre)\n\n\n", "msg_date": "Tue, 13 Jul 2021 19:00:38 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2021-Jul-13, Stephen Frost wrote:\n> > The simplest possible format isn't going to work with all the different\n> > pg_dump options and it still isn't going to be 'simple' since it needs\n> > to work with the flexibility that we have in what we support for object\n> > names,\n> \n> That's fine. If people want a mechanism that allows changing the other\n> pg_dump options that are not related to object filtering, they can\n> implement a configuration file for that.\n\nIt's been said multiple times that people *do* want that and that they\nwant it to all be part of this one file, and specifically that they\ndon't want to end up with a file structure that actively works against\nallowing other options to be added to it.\n\n> > I don't know that the options that I suggested previously would\n> > definitely work or not but they at least would allow other projects like\n> > pgAdmin to leverage existing code for parsing and generating these\n> > config files.\n> \n> Keep in mind that this patch is not intended to help pgAdmin\n> specifically. It would be great if pgAdmin uses the functionality\n> implemented here, but if they decide not to, that's not terrible. They\n> have survived decades without a pg_dump configuration file; they still\n> can.\n\nThe adding of a config file for pg_dump should specifically be looking\nat pgAdmin as the exact use-case for having such a capability.\n\n> There are several votes in this thread for pg_dump to gain functionality\n> to filter objects based on a simple specification -- particularly one\n> that can be written using shell pipelines. 
This patch gives it.\n\nAnd several votes for having a config file that supports, or at least\ncan support in the future, the various options which pg_dump supports-\nand active voices against having a new file format that doesn't allow\nfor that.\n\n> > I'm not completely against inventing something new, but I'd really\n> > prefer that we at least try to make something existing work first\n> > before inventing something new that everyone is going to have to deal\n> > with.\n> \n> That was discussed upthread and led nowhere.\n\nYou're right- no one followed up on that. Instead, one group continues\nto push for 'simple' and to just accept what's been proposed, while\nanother group counters that we should be looking at the broader design\nquestion and work towards a solution which will work for us down the\nroad, and not just right now.\n\nOne thing remains clear- there's no consensus here.\n\nThanks,\n\nStephen", "msg_date": "Tue, 13 Jul 2021 20:18:35 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\n\n> You're right- no one followed up on that. Instead, one group continues\n> to push for 'simple' and to just accept what's been proposed, while\n> another group counters that we should be looking at the broader design\n> question and work towards a solution which will work for us down the\n> road, and not just right now.\n>\n> One thing remains clear- there's no consensus here.\n>\n\nI think there must be some misunderstanding about the target of this\npatch, and I am afraid there cannot be consensus, because people are\nspeaking about two very different features. It is not possible to merge\nthem into one thing; I am afraid that cannot work.\n\n1. The main target of this patch is to solve the problem of pg_dump's\ncommand line growing too long when there are a lot of dumped objects. 
You need to\ncall pg_dump only once to ensure the dump runs in one transaction. And sometimes it\nis not possible to use wildcard characters effectively, because the state of\nobjects is in different databases. Enlarging the command-line length limit\nis not safe, and there are other production issues. In this case you need\na very simple format - just because you want to use pg_dump in a pipe. This\nformat should be line oriented - and usually it will contain just \"dump\nthis table, dump second table\". Nothing else. Nobody will read this format,\nnobody will edit this format. Because the main platform for this format is\nprobably the UNIX shell, the format should be simple. I really don't see\nany joy in generating JSON and parsing JSON later. These data will be\nprocessed locally. This is a single-purpose format, and it is not\ndesigned for holding configuration. For this purpose a complex format has\nno advantage. Parsing JSON or other formats on the pg_dump side is not a\nproblem, but it is pretty hard to generate valid JSON from a bash script.\nFor a UNIX shell we need the simplest possible format.\nTheoretically this format (this file) could hold any pg_dump option, but\nfor the usual streaming processing only the filter options will be there.\nOriginally this feature had the name \"filter file\". There are a lot of\nexamples of successful filter-file formats in the UNIX world, and I think\nnobody doubts their sense and usability. There is probably a consensus\nthat filter files are not config files.\n\nThe format of the filter file can look like \"+d tablename\" or \"include data\ntablename\". If we reach a consensus that the filter file is a good thing, then\nthe format design and implementation are easy work. It isn't a problem to invent\ncomment lines.\n\n2. It is true that there is only a small step from a filter file to an options\nfile. I rewrote this patch in this direction. 
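To make the preceding point concrete, here is a minimal, hypothetical C sketch of a parser for such a line-oriented filter file. This is only an illustration of how little machinery the simple format needs, not the actual patch; it deliberately ignores quoting and names spanning multiple lines, which a real implementation has to handle:

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical sketch, not the actual patch: parse one line of a
 * line-oriented filter file with rules of the form
 *     include table PATTERN
 *     exclude table PATTERN
 * Blank lines and lines starting with '#' are ignored (returns false),
 * as are lines with an unknown keyword (the caller reports the error).
 */
bool
parse_filter_line(const char *line, bool *is_include,
				  char *objtype, size_t typelen,
				  char *pattern, size_t patlen)
{
	const char *p = line;
	size_t		n;

	while (*p == ' ' || *p == '\t')	/* skip leading whitespace */
		p++;
	if (*p == '\0' || *p == '\n' || *p == '#')
		return false;			/* blank line or comment */

	if (strncmp(p, "include", 7) == 0 && isspace((unsigned char) p[7]))
		*is_include = true;
	else if (strncmp(p, "exclude", 7) == 0 && isspace((unsigned char) p[7]))
		*is_include = false;
	else
		return false;			/* unknown keyword */
	p += 7;

	while (*p == ' ' || *p == '\t')
		p++;

	/* object type is the next whitespace-delimited word */
	n = strcspn(p, " \t\n");
	if (n == 0 || n >= typelen)
		return false;
	memcpy(objtype, p, n);
	objtype[n] = '\0';
	p += n;

	while (*p == ' ' || *p == '\t')
		p++;

	/* the rest of the line (minus the trailing newline) is the pattern */
	n = strcspn(p, "\n");
	if (n == 0 || n >= patlen)
		return false;
	memcpy(pattern, p, n);
	pattern[n] = '\0';
	return true;
}
```

Generating such a file from a shell pipeline stays equally trivial, e.g. prefixing `psql -qAt` output with "include table ".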
The advantage is universality\n- it can support any option without the necessity to modify related code.\nStill, this format is not difficult for producers, and it is simple to\nparse. The format should follow the command-line format: \"-t\ntablename\" or \"--table tablename\" or \"table tablename\". There can be issues\nrelated to the different parsers in the shell and in the implemented code, but they can\nbe solved. It isn't a problem to introduce comment lines. The big advantage is\nsimplicity of usage and simplicity of implementation - moreover, the implementation\nis generic.\n\n3. But the options file is just a small step away from a config file. I can imagine\nsomebody wanting to store a typical configuration (and usual options) for\npsql, pg_dump, pg_restore, pgAdmin, ... somewhere. Config files are\nvery different creatures from filter files. Although they can be\ngenerated, they are usually edited and can be very complex. There can be shared\nparts for all applications, specific sections for psql, and specific\nsections for every database. Config files can be brutally complex. A\nsimple text format is not good for this purpose. Some people prefer\nYAML, some people hate that format. Other people prefer XML or JSON or\nanything else. Sometimes the complexity of config files is too big, and\npeople prefer startup scripting.\n\nAlthough there is an intersection between filter files and config files,\nI see very big differences in usage. Filter files are usually temporary,\ngenerated, and not shared. Config files are persistent, usually\nmanually modified, and can be shared. The requirements are different, and the\nimplementations should be different too. I don't propose any config-file related\nfeatures, and my proposal doesn't block the introduction of a config\nfile in any format in the future. I think these features are very different,\nand should be implemented differently. 
The filter's file or option's file\nwill be a pretty ugly config file, and config's file will be a pretty\nimpractical filter's file.\n\nSo can we talk about implementation of filter's file or option's file? And\ncan we talk about implementation config's files in separate topics? Without\nit, I am afraid so there is no possibility of finding an agreement and\nmoving forward.\n\nRegards\n\nPavel\n\n> Thanks,\n>\n> Stephen\n>", "msg_date": "Wed, 14 Jul 2021 07:00:26 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 7/14/21 2:18 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n>> On 2021-Jul-13, Stephen Frost wrote:\n>>> The simplest possible format isn't going to work with all the different\n>>> pg_dump options and it still isn't going to be 'simple' since it needs\n>>> to work with the flexibility that we have in what we support for object\n>>> names,\n>>\n>> That's fine. If people want a mechanism that allows changing the other\n>> pg_dump options that are not related to object filtering, they can\n>> implement a configuration file for that.\n> \n> It's been said multiple times that people *do* want that and that they\n> want it to all be part of this one file, and specifically that they\n> don't want to end up with a file structure that actively works against\n> allowing other options to be added to it.\n> \n\nI have no problem believing some people want to be able to specify \npg_dump parameters in a file, similarly to IMPDP/EXPDP parameter files \netc. That seems useful, but I doubt they considered the case with many \nfilter rules ... 
which is what \"my people\" want.\n\nNot sure how keeping the filter rules in a separate file (which I assume \nis what you mean by \"file structure\"), with a format tailored for filter \nrules, works *actively* against adding options to the \"main\" config.\n\nI'm not buying the argument that keeping some of the stuff in a separate \nfile is an issue - plenty of established tools do that, the concept of \n\"including\" a config is not a radical new thing, and I don't expect we'd \nhave many options supported by a file.\n\nIn any case, I think user input is important, but ultimately it's up to \nus to reconcile the conflicting requirements coming from various users \nand come up with a reasonable compromise design.\n\n>>> I don't know that the options that I suggested previously would\n>>> definitely work or not but they at least would allow other projects like\n>>> pgAdmin to leverage existing code for parsing and generating these\n>>> config files.\n>>\n>> Keep in mind that this patch is not intended to help pgAdmin\n>> specifically. It would be great if pgAdmin uses the functionality\n>> implemented here, but if they decide not to, that's not terrible. They\n>> have survived decades without a pg_dump configuration file; they still\n>> can.\n> \n> The adding of a config file for pg_dump should specifically be looking\n> at pgAdmin as the exact use-case for having such a capability.\n> \n>> There are several votes in this thread for pg_dump to gain functionality\n>> to filter objects based on a simple specification -- particularly one\n>> that can be written using shell pipelines. 
This patch gives it.\n> \n> And several votes for having a config file that supports, or at least\n> can support in the future, the various options which pg_dump supports-\n> and active voices against having a new file format that doesn't allow\n> for that.\n> \n\nIMHO the whole \"problem\" here stems from the question whether there \nshould be a single universal pg_dump config file, containing everything \nincluding the filter rules. I'm of the opinion it's better to keep the \nfilter rules separate, mainly because:\n\n1) simplicity - Options (key/value) and filter rules (with more internal \nstructure) seem quite different, and mixing them in the same file will \njust make the format more complex.\n\n2) flexibility - Keeping the filter rules in a separate file makes it \neasier to reuse the same set of rules with different pg_dump configs, \nspecified in (much smaller) config files.\n\nSo in principle, the \"main\" config could use e.g. TOML or whatever we \nfind most suitable for this type of key/value config file (or we could \njust use the same format as for postgresql.conf et al). And the filter \nrules could use something as simple as CSV (yes, I know it's not great, \nbut there's plenty of parsers, it handles multi-line strings etc.).\n\n\n>>> I'm not completely against inventing something new, but I'd really\n>>> prefer that we at least try to make something existing work first\n>>> before inventing something new that everyone is going to have to deal\n>>> with.\n>>\n>> That was discussed upthread and led nowhere.\n> \n> You're right- no one followed up on that. 
Instead, one group continues\n> to push for 'simple' and to just accept what's been proposed, while\n> another group counters that we should be looking at the broader design\n> question and work towards a solution which will work for us down the\n> road, and not just right now.\n> \n\nI have quite thick skin, but I have to admit I rather dislike how this \npaints the people arguing for simplicity.\n\nIMO simplicity is a perfectly legitimate (and desirable) design feature, \nand simpler solutions often fare better in the long run. Yes, we need to \nlook at the broader design, no doubt about that.\n\n> One thing remains clear- there's no consensus here.\n> \n\nTrue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 14 Jul 2021 12:08:01 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 12. 5. 2021 v 8:22 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> ne 11. 4. 2021 v 9:48 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> út 16. 2. 2021 v 20:32 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>> napsal:\n>>\n>>>\n>>> Hi\n>>>\n>>> rebase\n>>>\n>>\n>>\n> rebase\n>\n>\n>\n>> fresh rebase\n>>\n>\nfresh rebase\n\n\n\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>", "msg_date": "Wed, 28 Jul 2021 06:18:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nút 13. 7. 2021 v 1:16 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > [1] your proposal of \"[+-] OBJTYPE OBJIDENT\" plus empty lines allowed\n> > plus lines starting with # are comments, seems plenty. 
Any line not\n> > following that format would cause an error to be thrown.\n>\n> I'd like to see some kind of keyword on each line, so that we could extend\n> the command set by adding new keywords. As this stands, I fear we'd end\n> up using random punctuation characters in place of [+-], which seems\n> pretty horrid from a readability standpoint.\n>\n> I think that this file format should be designed with an eye to allowing\n> every, or at least most, pg_dump options to be written in the file rather\n> than on the command line. I don't say we have to *implement* that right\n> now; but if the format spec is incapable of being extended to meet\n> requests like that one, I think we'll regret it. This line of thought\n> suggests that the initial commands ought to match the existing\n> include/exclude switches, at least approximately.\n>\n> Hence I suggest\n>\n> include table PATTERN\n> exclude table PATTERN\n>\n> which ends up being the above but with words not [+-].\n>\n\nHere is an updated implementation of filter's file, that implements syntax\nproposed by you.\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>", "msg_date": "Wed, 28 Jul 2021 09:28:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 28 Jul 2021, at 09:28, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> út 13. 7. 2021 v 1:16 odesílatel Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> napsal:\n\n> Hence I suggest\n> \n> include table PATTERN\n> exclude table PATTERN\n> \n> which ends up being the above but with words not [+-].\n\nOne issue with this syntax is that the include keyword can be quite misleading\nas it's semantic interpretion of \"include table t\" can be different from\n\"--table=t\". The former is less clear about the fact that it means \"exclude\nall other tables than \" then the latter. 
It can be solved with documentation,\nbut I think that needs be to be made clearer.\n\n> Here is an updated implementation of filter's file, that implements syntax proposed by you.\n\nWhile it's not the format I would prefer, it does allow for most (all?) use\ncases expressed in this thread with ample armtwisting applied so let's go ahead\nfrom this point and see if we can agree on it (or a version of it).\n\nA few notes on the patch after a first pass over it:\n\n+(include|exclude)[table|schema|foreign_data|data] <replaceable class=\"parameter\">objectname</replaceable>\nLacks whitespace between keyword and object type. Also, since these are\nmandatory parameters, shouldn't they be within '{..}' ?\n\n\n+\t/* skip initial white spaces */\n+\twhile (isblank(*ptr))\n+\t\tptr += 1;\nWe don't trust isblank() as of 3fd5faed5 due to portability concerns, this\nshould probably use a version of the pg_isblank() we already have (and possibly\nmove that to src/common/string.c as there now are more consumers).\n\n\n+static bool\n+isblank_line(const char *line)\nThis could be replaced with a single call to strspn() as we already do for\nparsing the TOC file.\n\n\n+\t/* when first char is hash, ignore whole line */\n+\tif (*str == '#')\n+\t\tcontinue;\nI think we should strip leading whitespace before this to allow commentlines to\nstart with whitespace, it's easy enough and will make life easier for users.\n\n\n+ pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n+ filename,\n+ message);\n+\n+ fprintf(stderr, \"%d: %s\\n\", lineno, line);\nCan't we just include the lineno in the error logging and skip dumping the\noffending line? Fast-forwarding the pointer to print the offending part is\nless useful than a context marker, and in some cases suboptimal. 
With this\ncoding, if a pattern is omitted for example the below error message is given:\n\n pg_dump: error: invalid format of filter file \"filter.txt\": missing object name\n 1:\n\nThe errormessage and the linenumber in the file should be enough for the user\nto figure out what to fix.\n\n\n+ if (keyword && is_keyword(keyword, size, \"table\"))\n+ objecttype = 't';\nShould this use an enum, or at least a struct translation the literal keyword\nto the internal representation? Magic constants without explicit connection to\ntheir token counterparts can easily be cause of bugs.\n\n\nIf I create a table called \"a\\nb\" and try to dump it I get an error in parsing\nthe file. Isn't this supposed to work?\n $ cat filter.txt\n include table \"a\n b\"\n $ ./bin/pg_dump --filter=filter.txt\n pg_dump: error: invalid format of filter file \"filter.txt\": unexpected chars after object name\n 2:\n\n\nDid you consider implementing this in Bison to abstract some of the messier\nparsing logic?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 13 Sep 2021 15:01:56 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Wed, Jul 28, 2021 at 09:28:17AM +0200, Pavel Stehule wrote:\n> Here is an updated implementation of filter's file, that implements syntax\n> proposed by you.\n\nThanks.\n\nIf there's any traction for this approach. 
I have some comments for the next\nrevision,\n\n> +++ b/doc/src/sgml/ref/pg_dump.sgml\n> @@ -789,6 +789,56 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>--filter=<replaceable class=\"parameter\">filename</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Read objects filters from the specified file.\n> + If you use \"-\" as a filename, the filters are read from stdin.\n\nSay 'Specify \"-\" to read from stdin'\n\n> + The lines starting with symbol <literal>#</literal> are ignored.\n\nRemove \"The\" and \"symbol\"\n\n> + Previous white chars (spaces, tabs) are not allowed. These\n\nPreceding whitespace characters...\n\nBut actually, they are allowed? But if it needs to be explained, maybe they\nshouldn't be - I don't see the utility of it.\n\n> +static bool\n> +isblank_line(const char *line)\n> +{\n> +\twhile (*line)\n> +\t{\n> +\t\tif (!isblank(*line++))\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\treturn true;\n> +}\n\nI don't think this requires nor justifies having a separate function.\nEither don't support blank lines, or use get_keyword() with size==0 for that ?\n\n> +\t\t/* Now we expect sequence of two keywords */\n> +\t\tif (keyword && is_keyword(keyword, size, \"include\"))\n> +\t\t\tis_include = true;\n> +\t\telse if (keyword && is_keyword(keyword, size, \"exclude\"))\n> +\t\t\tis_include = false;\n> +\t\telse\n\nI think this should first check \"if keyword == NULL\".\nThat could give a more specific error message like \"no keyword found\",\n\n> +\t\t\texit_invalid_filter_format(fp,\n> +\t\t\t\t\t\t\t\t\t filename,\n> +\t\t\t\t\t\t\t\t\t \"expected keyword \\\"include\\\" or \\\"exclude\\\"\",\n> +\t\t\t\t\t\t\t\t\t line.data,\n> +\t\t\t\t\t\t\t\t\t lineno);\n\n..and then this one can say \"invalid keyword\".\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Sep 2021 08:11:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: 
possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npo 13. 9. 2021 v 15:01 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 28 Jul 2021, at 09:28, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > út 13. 7. 2021 v 1:16 odesílatel Tom Lane <tgl@sss.pgh.pa.us <mailto:\n> tgl@sss.pgh.pa.us>> napsal:\n>\n> > Hence I suggest\n> >\n> > include table PATTERN\n> > exclude table PATTERN\n> >\n> > which ends up being the above but with words not [+-].\n>\n> One issue with this syntax is that the include keyword can be quite\n> misleading\n> as it's semantic interpretion of \"include table t\" can be different from\n> \"--table=t\". The former is less clear about the fact that it means\n> \"exclude\n> all other tables than \" then the latter. It can be solved with\n> documentation,\n> but I think that needs be to be made clearer.\n>\n\nI invite any documentation enhancing and fixing\n\n>\n> > Here is an updated implementation of filter's file, that implements\n> syntax proposed by you.\n>\n> While it's not the format I would prefer, it does allow for most (all?) use\n> cases expressed in this thread with ample armtwisting applied so let's go\n> ahead\n> from this point and see if we can agree on it (or a version of it).\n>\n> A few notes on the patch after a first pass over it:\n>\n> +(include|exclude)[table|schema|foreign_data|data] <replaceable\n> class=\"parameter\">objectname</replaceable>\n> Lacks whitespace between keyword and object type. 
Also, since these are\n> mandatory parameters, shouldn't they be within '{..}' ?\n>\n> yes, fixed\n\n\n\n>\n> + /* skip initial white spaces */\n> + while (isblank(*ptr))\n> + ptr += 1;\n> We don't trust isblank() as of 3fd5faed5 due to portability concerns, this\n> should probably use a version of the pg_isblank() we already have (and\n> possibly\n> move that to src/common/string.c as there now are more consumers).\n>\n>\nI rewrote this part, and I don't use function isblank ever\n\n\n>\n> +static bool\n> +isblank_line(const char *line)\n> This could be replaced with a single call to strspn() as we already do for\n> parsing the TOC file.\n>\n>\n> + /* when first char is hash, ignore whole line */\n> + if (*str == '#')\n> + continue;\n> I think we should strip leading whitespace before this to allow\n> commentlines to\n> start with whitespace, it's easy enough and will make life easier for\n> users.\n>\n\nnow, the comments can be used as first non blank char or after filter\n\n>\n>\n> + pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> + filename,\n> + message);\n> +\n> + fprintf(stderr, \"%d: %s\\n\", lineno, line);\n> Can't we just include the lineno in the error logging and skip dumping the\n> offending line? Fast-forwarding the pointer to print the offending part is\n> less useful than a context marker, and in some cases suboptimal. 
With this\n> coding, if a pattern is omitted for example the below error message is\n> given:\n>\n>\n pg_dump: error: invalid format of filter file \"filter.txt\": missing\n> object name\n> 1:\n>\n> The errormessage and the linenumber in the file should be enough for the\n> user\n> to figure out what to fix.\n>\n\nI did it like you proposed, but still, I think the content can be useful.\nMore times you read dynamically generated files, or you read data from\nstdin, and in complex environments it can be hard regenerate new content\nfor debugging.\n\n\n>\n> + if (keyword && is_keyword(keyword, size, \"table\"))\n> + objecttype = 't';\n> Should this use an enum, or at least a struct translation the literal\n> keyword\n> to the internal representation? Magic constants without explicit\n> connection to\n> their token counterparts can easily be cause of bugs.\n>\n>\nfixed\n\n\n> If I create a table called \"a\\nb\" and try to dump it I get an error in\n> parsing\n> the file. Isn't this supposed to work?\n> $ cat filter.txt\n> include table \"a\n> b\"\n> $ ./bin/pg_dump --filter=filter.txt\n> pg_dump: error: invalid format of filter file \"filter.txt\": unexpected\n> chars after object name\n> 2:\n\n\nprobably there was some issue, because it should work. I tested a new\nversion and this is tested in new regress tests. Please, check\n\n\n>\n> Did you consider implementing this in Bison to abstract some of the messier\n> parsing logic?\n>\n\nInitially not, but now, when I am thinking about it, I don't think so Bison\nhelps. The syntax of the filter file is nicely linear. Now, the code of the\nparser is a little bit larger than minimalistic, but it is due to nicer\nerror's messages. The raw implementation in Bison raised just \"syntax\nerror\" and positions. I did code refactoring, and now the scanning, parsing\nand processing are divided into separated routines. Parsing related code\nhas 90 lines. 
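
For illustration, a minimal sketch of this kind of linear keyword parsing -- hypothetical code, not the patch's actual implementation -- could look like the following; it ignores quoting of object names for brevity:

```c
/*
 * Sketch of parsing one filter line of the form
 *   {include|exclude} {table|schema|foreign_data|data} <name>
 * Hypothetical code for illustration only; quoting of names is omitted.
 */
#include <stdbool.h>
#include <string.h>

typedef enum { OBJ_TABLE, OBJ_SCHEMA, OBJ_FOREIGN_DATA, OBJ_DATA } objtype;

/* copy the next whitespace-delimited word into buf, return position after it */
static const char *
get_word(const char *p, char *buf, size_t buflen)
{
	size_t		n = 0;

	while (*p == ' ' || *p == '\t')
		p++;
	while (*p && *p != ' ' && *p != '\t' && n < buflen - 1)
		buf[n++] = *p++;
	buf[n] = '\0';
	return p;
}

/* parse one filter line; returns false on any syntax problem */
bool
parse_filter_line(const char *line, bool *is_include, objtype *type,
				  char *name, size_t namelen)
{
	char		word[32];
	const char *p = get_word(line, word, sizeof(word));

	if (strcmp(word, "include") == 0)
		*is_include = true;
	else if (strcmp(word, "exclude") == 0)
		*is_include = false;
	else
		return false;			/* first keyword missing or unknown */

	p = get_word(p, word, sizeof(word));
	if (strcmp(word, "table") == 0)
		*type = OBJ_TABLE;
	else if (strcmp(word, "schema") == 0)
		*type = OBJ_SCHEMA;
	else if (strcmp(word, "foreign_data") == 0)
		*type = OBJ_FOREIGN_DATA;
	else if (strcmp(word, "data") == 0)
		*type = OBJ_DATA;
	else
		return false;			/* unknown object type */

	get_word(p, name, namelen);
	return name[0] != '\0';		/* object name is mandatory */
}
```

A real implementation would additionally have to handle double-quoted names and report the offending line number on error, as discussed above.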
In this case, I don't think using a parser grammar file can\ncarry any benefit. grammar is more readable, sure, but we need to include\nbison, we need to handle errors, and if we want to raise more helpful\nerrors than just \"syntax error\", then the code will be longer.\n\nplease, check attached patch\n\nRegards\n\nPavel\n\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Wed, 15 Sep 2021 19:31:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 13. 9. 2021 v 15:11 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Wed, Jul 28, 2021 at 09:28:17AM +0200, Pavel Stehule wrote:\n> > Here is an updated implementation of filter's file, that implements\n> syntax\n> > proposed by you.\n>\n> Thanks.\n>\n> If there's any traction for this approach. I have some comments for the\n> next\n> revision,\n>\n> > +++ b/doc/src/sgml/ref/pg_dump.sgml\n> > @@ -789,6 +789,56 @@ PostgreSQL documentation\n> > </listitem>\n> > </varlistentry>\n> >\n> > + <varlistentry>\n> > + <term><option>--filter=<replaceable\n> class=\"parameter\">filename</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Read objects filters from the specified file.\n> > + If you use \"-\" as a filename, the filters are read from stdin.\n>\n> Say 'Specify \"-\" to read from stdin'\n>\n> > + The lines starting with symbol <literal>#</literal> are ignored.\n>\n> Remove \"The\" and \"symbol\"\n>\n> > + Previous white chars (spaces, tabs) are not allowed. These\n>\n> Preceding whitespace characters...\n>\n> But actually, they are allowed? 
But if it needs to be explained, maybe\n> they\n> shouldn't be - I don't see the utility of it.\n>\n> > +static bool\n> > +isblank_line(const char *line)\n> > +{\n> > + while (*line)\n> > + {\n> > + if (!isblank(*line++))\n> > + return false;\n> > + }\n> > +\n> > + return true;\n> > +}\n>\n> I don't think this requires nor justifies having a separate function.\n> Either don't support blank lines, or use get_keyword() with size==0 for\n> that ?\n>\n> > + /* Now we expect sequence of two keywords */\n> > + if (keyword && is_keyword(keyword, size, \"include\"))\n> > + is_include = true;\n> > + else if (keyword && is_keyword(keyword, size, \"exclude\"))\n> > + is_include = false;\n> > + else\n>\n> I think this should first check \"if keyword == NULL\".\n> That could give a more specific error message like \"no keyword found\",\n>\n> > + exit_invalid_filter_format(fp,\n> > +\n> filename,\n> > +\n> \"expected keyword \\\"include\\\" or \\\"exclude\\\"\",\n> > +\n> line.data,\n> > +\n> lineno);\n>\n> ..and then this one can say \"invalid keyword\".\n>\n\nI fixed (I hope) mentioned issues. Please check last patch\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>\n\npo 13. 9. 2021 v 15:11 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Wed, Jul 28, 2021 at 09:28:17AM +0200, Pavel Stehule wrote:\n> Here is an updated implementation of filter's file, that implements syntax\n> proposed by you.\n\nThanks.\n\nIf there's any traction for this approach.  
I have some comments for the next\nrevision,\n\n> +++ b/doc/src/sgml/ref/pg_dump.sgml\n> @@ -789,6 +789,56 @@ PostgreSQL documentation\n>        </listitem>\n>       </varlistentry>\n>  \n> +     <varlistentry>\n> +      <term><option>--filter=<replaceable class=\"parameter\">filename</replaceable></option></term>\n> +      <listitem>\n> +       <para>\n> +        Read objects filters from the specified file.\n> +        If you use \"-\" as a filename, the filters are read from stdin.\n\nSay 'Specify \"-\" to read from stdin'\n\n> +        The lines starting with symbol <literal>#</literal> are ignored.\n\nRemove \"The\" and \"symbol\"\n\n> +        Previous white chars (spaces, tabs) are not allowed. These\n\nPreceding whitespace characters...\n\nBut actually, they are allowed?  But if it needs to be explained, maybe they\nshouldn't be - I don't see the utility of it.\n\n> +static bool\n> +isblank_line(const char *line)\n> +{\n> +     while (*line)\n> +     {\n> +             if (!isblank(*line++))\n> +                     return false;\n> +     }\n> +\n> +     return true;\n> +}\n\nI don't think this requires nor justifies having a separate function.\nEither don't support blank lines, or use get_keyword() with size==0 for that ?\n\n> +             /* Now we expect sequence of two keywords */\n> +             if (keyword && is_keyword(keyword, size, \"include\"))\n> +                     is_include = true;\n> +             else if (keyword && is_keyword(keyword, size, \"exclude\"))\n> +                     is_include = false;\n> +             else\n\nI think this should first check \"if keyword == NULL\".\nThat could give a more specific error message like \"no keyword found\",\n\n> +                     exit_invalid_filter_format(fp,\n> +                                                                        filename,\n> +                                                                        \"expected keyword \\\"include\\\" or \\\"exclude\\\"\",\n> +           
                                                             line.data,\n> +                                                                        lineno);\n\n..and then this one can say \"invalid keyword\".I fixed (I hope) mentioned issues. Please check last patchRegardsPavel\n\n-- \nJustin", "msg_date": "Wed, 15 Sep 2021 20:30:45 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nIn yesterday's patch I used strndup, which is not available on win. I am\nsending update when I used pnstrdup instead.\n\nRegards\n\nPavel", "msg_date": "Thu, 16 Sep 2021 06:45:30 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "As there have been a lot of differing opinions raised in this thread, I re-read\nit and tried to summarize the discussion so far to try and figure out where we\nagree and on what (and disagree) before we get deep into the technicalities wrt\nthe current patch. If anyone feel I've misrepresented them below then I\nsincerely do apologize. If I missed a relevant viewpoint I also apologize,\nI've tried to objectively represent the thread.\n\nI proposed JSON in [0] which is where the format discussion to some extent\nstarted, Justin and Pavel had up until that point discussed the format by\nrefining the original proposal.\n\nIn [1] Surafel Temesgen brought up --exclude-database from pg_dumpall and\n--no-comments, and argued for them being handled by this patch. This was\nobjected against on the grounds that pg_dumpall is out of scope, and\nall-or-nothing switches not being applicable in a filter option.\n\nStephen objected to both the proposed, and the suggestion of JSON, in [2] and\nargued for a more holistic configuration file approach. 
TOML was suggested.\nDean then +1'd the config file approach in [3].\n\nIn [4] Tom supported the idea of a more generic config file, and remarked that\nthe proposed filter for table names only makes sense when the number of exclude\npatterns are large enough that we might hit other problems in pg_dump.\nFurther, in [5] Tom commented that a format with established quoting\nconventions would buy us not having to invent our own to cope with complicated\nrelation names.\n\nThe fact that JSON doesn't support comments is brought up in a few emails and\nis a very valid point, as the need for comments regardless of format is brought\nup as well.\n\nTomas Vondra in [6] wanted the object filter be a separate file from a config\nfile, and argued for a simpler format for these lists (while still supporting\nmultiple object types).\n\nAlvaro agreed with Tomas on [+-] OBJTYPE OBJIDENT in [7] and Tom extended the\nproposal to use [include/exclude] keywords in [8] in order to support more than\njust excluding and including. Regardless of stance on format, the use of\nkeywords instead of [+-] is a rare point of consensus in this thread.\n\nStephen and myself have also expressed concern in various parts of the thread\nthat inventing our own format rather than using something with existing broad\nlibrary support will end up with third-parties (like pgAdmin et.al) having to\nall write their own generators and parsers.\n\nA main concern among most (all?) 
participants of the thread, regardless of\nformat supported, is that quoting is hard and must be done right for all object\nnames postgres support (including any not currently in scope by this patch).\n\nBelow is an attempt at summarizing and grouping the proposals so far into the\nset of ideas presented:\n\n A) A keyword+object based format to invoke with a switch to essentially\n allow for more filters than the commandline can handle and nothing more.\n After a set of revisions, the current proposal is:\n [include|exclude] [<objtype>] [<objident>]\n\n B) A format similar to (A) which can also be used for pg_dump configuration\n\n C) The format in (A), or a close variant thereof, with the intention of it\n being included in/referred to from a future configuration file of currently\n unknown format. One reference being a .gitignore type file.\n\n D) An existing format (JSON and TOML have been suggested, with JSON\n being dismissed due to lack of comment support) which has quoting\n conventions that supports postgres' object names and which can be used to\n define a full pg_dump configuration file syntax.\n\nFor B), C) and D) there is implicit consensus in the thread that we don't need\nto implement the full configuration file as of this patch, merely that it\n*must* be possible to do so without having to paint ourselves out of a corner.\n\nAt this point it seems to me that B) and C) has the broadest support. Can the\nC) option may represent the compromise between \"simple\" format for object\nfiltering and a more structured format for configuration? 
Are there other\noptions?\n\nThoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/F6674FF0-5800-4AED-9DC7-13C475707241@yesql.se\n[1] https://postgr.es/m/CALAY4q9u30L7oGhbsfY3dPECQ8SrYa8YO=H-xOn5xWUeiEneeg@mail.gmail.com\n[2] https://postgr.es/m/20201110200904.GU16415@tamriel.snowman.net\n[3] https://postgr.es/m/CAEZATCVKMG7+b+_5tNwrNZ-aNDBy3=FMRNea2bO9O4qGcEvSTg@mail.gmail.com\n[4] https://postgr.es/m/502641.1606334432@sss.pgh.pa.us\n[5] https://postgr.es/m/619671.1606406538@sss.pgh.pa.us\n[6] https://postgr.es/m/cb545d78-2dae-8d27-f062-822a07ca56cf@enterprisedb.com\n[7] https://postgr.es/m/202107122259.n6o5uwb5erza@alvherre.pgsql\n[8] https://postgr.es/m/3183720.1626131795@sss.pgh.pa.us\n\n\n\n", "msg_date": "Fri, 17 Sep 2021 13:18:18 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 15 Sep 2021, at 19:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 13. 9. 2021 v 15:01 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> One issue with this syntax is that the include keyword can be quite misleading\n> as it's semantic interpretion of \"include table t\" can be different from\n> \"--table=t\". The former is less clear about the fact that it means \"exclude\n> all other tables than \" then the latter. It can be solved with documentation,\n> but I think that needs be to be made clearer.\n> \n> I invite any documentation enhancing and fixing \n\nSure, that can be collabored on. 
This gist is though that IMO the keywords in\nthe filter file aren't as clear on the sideeffects as the command line params,\neven though they are equal in functionality.\n\n> + pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> + filename,\n> + message);\n> +\n> + fprintf(stderr, \"%d: %s\\n\", lineno, line);\n> Can't we just include the lineno in the error logging and skip dumping the\n> offending line? Fast-forwarding the pointer to print the offending part is\n> less useful than a context marker, and in some cases suboptimal. With this\n> coding, if a pattern is omitted for example the below error message is given:\n> \n> pg_dump: error: invalid format of filter file \"filter.txt\": missing object name\n> 1:\n> \n> The errormessage and the linenumber in the file should be enough for the user\n> to figure out what to fix.\n> \n> I did it like you proposed, but still, I think the content can be useful.\n\nNot when there is no content in the error message, printing an empty string for\na line number which isn't a blank line doesn't seem terribly helpful. If we\nknow the error context is empty, printing a tailored error hint seems more\nuseful for the user.\n\n> More times you read dynamically generated files, or you read data from stdin, and in complex environments it can be hard regenerate new content for debugging.\n\nThat seems odd given that the arguments for this format has been that it's\nlikely to be handwritten.\n\n> If I create a table called \"a\\nb\" and try to dump it I get an error in parsing\n> the file. Isn't this supposed to work?\n> $ cat filter.txt\n> include table \"a\n> b\"\n> $ ./bin/pg_dump --filter=filter.txt\n> pg_dump: error: invalid format of filter file \"filter.txt\": unexpected chars after object name\n> 2:\n> \n> probably there was some issue, because it should work. I tested a new version and this is tested in new regress tests. 
Please, check\n\nThat seems to work, but I am unable to write a filter statement which can\nhandle this relname:\n\nCREATE TABLE \"a\"\"\n\"\"b\" (a integer);\n\nAre you able to craft one for that?\n\n> Did you consider implementing this in Bison to abstract some of the messier\n> parsing logic?\n> \n> Initially not, but now, when I am thinking about it, I don't think so Bison helps. The syntax of the filter file is nicely linear. Now, the code of the parser is a little bit larger than minimalistic, but it is due to nicer error's messages. The raw implementation in Bison raised just \"syntax error\" and positions. I did code refactoring, and now the scanning, parsing and processing are divided into separated routines. Parsing related code has 90 lines. In this case, I don't think using a parser grammar file can carry any benefit. grammar is more readable, sure, but we need to include bison, we need to handle errors, and if we want to raise more helpful errors than just \"syntax error\", then the code will be longer. \n\nI'm not so concerned by code size, but rather parsing of quotations etc and\nbeing able to reason about it's correctness. IMHO that's easier done by\nreading a defined grammar than parsing a handwritten parser.\n\nWill do a closer review on the patch shortly.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 17 Sep 2021 13:42:09 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\n\n> A main concern among most (all?) participants of the thread, regardless of\n> format supported, is that quoting is hard and must be done right for all\n> object\n> names postgres support (including any not currently in scope by this\n> patch).\n>\n>\nJust a small note - when quoting is calculated to design, then the\nimplementation is easy. 
I am sure, so my last code covers all\npossibilities, and it is about 100 lines of code.\n\n\n\n> Below is an attempt at summarizing and grouping the proposals so far into\n> the\n> set of ideas presented:\n>\n> A) A keyword+object based format to invoke with a switch to essentially\n> allow for more filters than the commandline can handle and nothing\n> more.\n> After a set of revisions, the current proposal is:\n> [include|exclude] [<objtype>] [<objident>]\n>\n> B) A format similar to (A) which can also be used for pg_dump\n> configuration\n>\n> C) The format in (A), or a close variant thereof, with the intention\n> of it\n> being included in/referred to from a future configuration file of\n> currently\n> unknown format. One reference being a .gitignore type file.\n>\n> D) An existing format (JSON and TOML have been suggested, with JSON\n> being dismissed due to lack of comment support) which has quoting\n> conventions that supports postgres' object names and which can be used\n> to\n> define a full pg_dump configuration file syntax.\n>\n> For B), C) and D) there is implicit consensus in the thread that we don't\n> need\n> to implement the full configuration file as of this patch, merely that it\n> *must* be possible to do so without having to paint ourselves out of a\n> corner.\n>\n> At this point it seems to me that B) and C) has the broadest support. Can\n> the\n> C) option may represent the compromise between \"simple\" format for object\n> filtering and a more structured format for configuration? 
Are there other\n> options?\n>\n\nWhat should be a benefit of this variant?\n\nRegards\n\nPavel\n\n\n>\n> Thoughts?\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n> [0] https://postgr.es/m/F6674FF0-5800-4AED-9DC7-13C475707241@yesql.se\n> [1]\n> https://postgr.es/m/CALAY4q9u30L7oGhbsfY3dPECQ8SrYa8YO=H-xOn5xWUeiEneeg@mail.gmail.com\n> [2] https://postgr.es/m/20201110200904.GU16415@tamriel.snowman.net\n> [3]\n> https://postgr.es/m/CAEZATCVKMG7+b+_5tNwrNZ-aNDBy3=FMRNea2bO9O4qGcEvSTg@mail.gmail.com\n> [4] https://postgr.es/m/502641.1606334432@sss.pgh.pa.us\n> [5] https://postgr.es/m/619671.1606406538@sss.pgh.pa.us\n> [6]\n> https://postgr.es/m/cb545d78-2dae-8d27-f062-822a07ca56cf@enterprisedb.com\n> [7] https://postgr.es/m/202107122259.n6o5uwb5erza@alvherre.pgsql\n> [8] https://postgr.es/m/3183720.1626131795@sss.pgh.pa.us\n>\n>", "msg_date": "Fri, 17 Sep 2021 13:42:55 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 15 Sep 2021, at 19:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > po 13. 9. 
2021 v 15:01 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n>\n> > One issue with this syntax is that the include keyword can be quite\n> misleading\n> > as it's semantic interpretion of \"include table t\" can be different from\n> > \"--table=t\". The former is less clear about the fact that it means\n> \"exclude\n> > all other tables than \" then the latter. It can be solved with\n> documentation,\n> > but I think that needs be to be made clearer.\n> >\n> > I invite any documentation enhancing and fixing\n>\n> Sure, that can be collabored on. This gist is though that IMO the\n> keywords in\n> the filter file aren't as clear on the sideeffects as the command line\n> params,\n> even though they are equal in functionality.\n>\n> > + pg_log_error(\"invalid format of filter file \\\"%s\\\": %s\",\n> > + filename,\n> > + message);\n> > +\n> > + fprintf(stderr, \"%d: %s\\n\", lineno, line);\n> > Can't we just include the lineno in the error logging and skip dumping\n> the\n> > offending line? Fast-forwarding the pointer to print the offending part\n> is\n> > less useful than a context marker, and in some cases suboptimal. With\n> this\n> > coding, if a pattern is omitted for example the below error message is\n> given:\n> >\n> > pg_dump: error: invalid format of filter file \"filter.txt\": missing\n> object name\n> > 1:\n> >\n> > The errormessage and the linenumber in the file should be enough for the\n> user\n> > to figure out what to fix.\n> >\n> > I did it like you proposed, but still, I think the content can be useful.\n>\n> Not when there is no content in the error message, printing an empty\n> string for\n> a line number which isn't a blank line doesn't seem terribly helpful. 
If\n> we\n> know the error context is empty, printing a tailored error hint seems more\n> useful for the user.\n>\n> > More times you read dynamically generated files, or you read data from\n> stdin, and in complex environments it can be hard regenerate new content\n> for debugging.\n>\n> That seems odd given that the arguments for this format has been that it's\n> likely to be handwritten.\n>\n> > If I create a table called \"a\\nb\" and try to dump it I get an error in\n> parsing\n> > the file. Isn't this supposed to work?\n> > $ cat filter.txt\n> > include table \"a\n> > b\"\n> > $ ./bin/pg_dump --filter=filter.txt\n> > pg_dump: error: invalid format of filter file \"filter.txt\":\n> unexpected chars after object name\n> > 2:\n> >\n> > probably there was some issue, because it should work. I tested a new\n> version and this is tested in new regress tests. Please, check\n>\n> That seems to work, but I am unable to write a filter statement which can\n> handle this relname:\n>\n> CREATE TABLE \"a\"\"\n> \"\"b\" (a integer);\n>\n> Are you able to craft one for that?\n>\n\nI am not able to dump this directly in pg_dump. Is it possible?\n\n\n\n> > Did you consider implementing this in Bison to abstract some of the\n> messier\n> > parsing logic?\n> >\n> > Initially not, but now, when I am thinking about it, I don't think so\n> Bison helps. The syntax of the filter file is nicely linear. Now, the code\n> of the parser is a little bit larger than minimalistic, but it is due to\n> nicer error's messages. The raw implementation in Bison raised just \"syntax\n> error\" and positions. I did code refactoring, and now the scanning, parsing\n> and processing are divided into separated routines. Parsing related code\n> has 90 lines. In this case, I don't think using a parser grammar file can\n> carry any benefit. 
grammar is more readable, sure, but we need to include\n> bison, we need to handle errors, and if we want to raise more helpful\n> errors than just \"syntax error\", then the code will be longer.\n>\n> I'm not so concerned by code size, but rather parsing of quotations etc and\n> being able to reason about it's correctness. IMHO that's easier done by\n> reading a defined grammar than parsing a handwritten parser.\n>\n> Will do a closer review on the patch shortly.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Fri, 17 Sep 2021 13:51:34 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> I am unable to write a filter statement which can\n> handle this relname:\n> \n> CREATE TABLE \"a\"\"\n> \"\"b\" (a integer);\n> \n> Are you able to craft one for that?\n> \n> I am not able to dump this directly in pg_dump. 
Is it possible?\n\nSure, see below:\n\n$ ./bin/psql filter\npsql (15devel)\nType \"help\" for help.\n\nfilter=# create table \"a\"\"\nfilter\"# \"\"b\" (a integer);\nCREATE TABLE\nfilter=# select relname from pg_class order by oid desc limit 1;\n relname\n---------\n a\" +\n \"b\n(1 row)\n\nfilter=# ^D\\q\n$ ./bin/pg_dump -s filter\n--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 15devel\n-- Dumped by pg_dump version 15devel\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET xmloption = content;\nSET client_min_messages = warning;\nSET row_security = off;\n\nSET default_tablespace = '';\n\nSET default_table_access_method = heap;\n\n--\n-- Name: a\" \"b; Type: TABLE; Schema: public; Owner: danielg\n--\n\nCREATE TABLE public.\"a\"\"\n\"\"b\" (\n a integer\n);\n\n\nALTER TABLE public.\"a\"\"\n\"\"b\" OWNER TO danielg;\n\n--\n-- PostgreSQL database dump complete\n--\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 17 Sep 2021 13:56:46 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n>\n> > I am unable to write a filter statement which can\n> > handle this relname:\n> >\n> > CREATE TABLE \"a\"\"\n> > \"\"b\" (a integer);\n> >\n> > Are you able to craft one for that?\n> >\n> > I am not able to dump this directly in pg_dump. 
Is it possible?\n>\n> Sure, see below:\n>\n> $ ./bin/psql filter\n> psql (15devel)\n> Type \"help\" for help.\n>\n>\nI didn't ask on this\n\nI asked if you can use -t and some for filtering this name\n\n?\n\n\n> filter=# create table \"a\"\"\n> filter\"# \"\"b\" (a integer);\n> CREATE TABLE\n> filter=# select relname from pg_class order by oid desc limit 1;\n> relname\n> ---------\n> a\" +\n> \"b\n> (1 row)\n>\n> filter=# ^D\\q\n> $ ./bin/pg_dump -s filter\n> --\n> -- PostgreSQL database dump\n> --\n>\n> -- Dumped from database version 15devel\n> -- Dumped by pg_dump version 15devel\n>\n> SET statement_timeout = 0;\n> SET lock_timeout = 0;\n> SET idle_in_transaction_session_timeout = 0;\n> SET client_encoding = 'UTF8';\n> SET standard_conforming_strings = on;\n> SELECT pg_catalog.set_config('search_path', '', false);\n> SET check_function_bodies = false;\n> SET xmloption = content;\n> SET client_min_messages = warning;\n> SET row_security = off;\n>\n> SET default_tablespace = '';\n>\n> SET default_table_access_method = heap;\n>\n> --\n> -- Name: a\" \"b; Type: TABLE; Schema: public; Owner: danielg\n> --\n>\n> CREATE TABLE public.\"a\"\"\n> \"\"b\" (\n> a integer\n> );\n>\n>\n> ALTER TABLE public.\"a\"\"\n> \"\"b\" OWNER TO danielg;\n>\n> --\n> -- PostgreSQL database dump complete\n> --\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n\npá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> I am unable to write a filter statement which can\n> handle this relname:\n> \n> CREATE TABLE \"a\"\"\n> \"\"b\" (a integer);\n> \n> Are you able to craft one for that?\n> \n> I am not able to dump this directly in pg_dump. 
Is it possible?\n\nSure, see below:\n\n$ ./bin/psql filter\npsql (15devel)\nType \"help\" for help.\nI didn't ask on thisI asked if you can use -t and some for filtering this name? \nfilter=# create table \"a\"\"\nfilter\"# \"\"b\" (a integer);\nCREATE TABLE\nfilter=# select relname from pg_class order by oid desc limit 1;\n relname\n---------\n a\"     +\n \"b\n(1 row)\n\nfilter=# ^D\\q\n$ ./bin/pg_dump -s filter\n--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 15devel\n-- Dumped by pg_dump version 15devel\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET xmloption = content;\nSET client_min_messages = warning;\nSET row_security = off;\n\nSET default_tablespace = '';\n\nSET default_table_access_method = heap;\n\n--\n-- Name: a\" \"b; Type: TABLE; Schema: public; Owner: danielg\n--\n\nCREATE TABLE public.\"a\"\"\n\"\"b\" (\n    a integer\n);\n\n\nALTER TABLE public.\"a\"\"\n\"\"b\" OWNER TO danielg;\n\n--\n-- PostgreSQL database dump complete\n--\n\n--\nDaniel Gustafsson               https://vmware.com/", "msg_date": "Fri, 17 Sep 2021 13:59:13 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\nOn Fri, Sep 17, 2021 at 13:59 Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se>\n> napsal:\n>\n>> > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>> > pá 17. 9. 
2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se\n>> <mailto:daniel@yesql.se>> napsal:\n>>\n>> > I am unable to write a filter statement which can\n>> > handle this relname:\n>> >\n>> > CREATE TABLE \"a\"\"\n>> > \"\"b\" (a integer);\n>> >\n>> > Are you able to craft one for that?\n>> >\n>> > I am not able to dump this directly in pg_dump. Is it possible?\n>>\n>> Sure, see below:\n>>\n>> $ ./bin/psql filter\n>> psql (15devel)\n>> Type \"help\" for help.\n>>\n>>\n> I didn't ask on this\n>\n> I asked if you can use -t and some for filtering this name\n>\n> ?\n>\n\nFor my part, at least, I don’t see that this particularly matters.. for a\nnew feature that’s being developed to allow users to export specific\ntables, I would think we’d want to support any table names which can exist.\n\nThanks,\n\nStephen\n\nGreetings,On Fri, Sep 17, 2021 at 13:59 Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> I am unable to write a filter statement which can\n> handle this relname:\n> \n> CREATE TABLE \"a\"\"\n> \"\"b\" (a integer);\n> \n> Are you able to craft one for that?\n> \n> I am not able to dump this directly in pg_dump. Is it possible?\n\nSure, see below:\n\n$ ./bin/psql filter\npsql (15devel)\nType \"help\" for help.\nI didn't ask on thisI asked if you can use -t and some for filtering this name?For my part, at least, I don’t see that this particularly matters..  for a new feature that’s being developed to allow users to export specific tables, I would think we’d want to support any table names which can exist. 
Thanks,Stephen", "msg_date": "Fri, 17 Sep 2021 14:06:48 +0200", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 17 Sep 2021, at 13:59, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n> > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> wrote:\n> > pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se> <mailto:daniel@yesql.se <mailto:daniel@yesql.se>>> napsal:\n\n> > I am unable to write a filter statement which can\n> > handle this relname:\n> > \n> > CREATE TABLE \"a\"\"\n> > \"\"b\" (a integer);\n> > \n> > Are you able to craft one for that?\n> > \n> > I am not able to dump this directly in pg_dump. Is it possible?\n> \n> Sure, see below:\n> \n> $ ./bin/psql filter\n> psql (15devel)\n> Type \"help\" for help.\n> \n> I didn't ask on this\n> \n> I asked if you can use -t and some for filtering this name?\n\nI didn't try as I don't see how that's relevant? Surely we're not limiting the\ncapabilities of a filtering file format based on the quoting semantics of a\nshell?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 17 Sep 2021 14:07:12 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Greetings,\n\nOn Fri, Sep 17, 2021 at 14:07 Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 17 Sep 2021, at 13:59, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n> > > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com\n> <mailto:pavel.stehule@gmail.com>> wrote:\n> > > pá 17. 
9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se> <mailto:daniel@yesql.se <mailto:daniel@yesql.se>>>\n> napsal:\n>\n> > > I am unable to write a filter statement which can\n> > > handle this relname:\n> > >\n> > > CREATE TABLE \"a\"\"\n> > > \"\"b\" (a integer);\n> > >\n> > > Are you able to craft one for that?\n> > >\n> > > I am not able to dump this directly in pg_dump. Is it possible?\n> >\n> > Sure, see below:\n> >\n> > $ ./bin/psql filter\n> > psql (15devel)\n> > Type \"help\" for help.\n> >\n> > I didn't ask on this\n> >\n> > I asked if you can use -t and some for filtering this name?\n>\n> I didn't try as I don't see how that's relevant? Surely we're not\n> limiting the\n> capabilities of a filtering file format based on the quoting semantics of a\n> shell?\n\n\nYeah, agreed. I would think that a DBA might specifically want to be able\nto use a config file to get away from having to deal with shell quoting, in\nfact…\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Fri, Sep 17, 2021 at 14:07 Daniel Gustafsson <daniel@yesql.se> wrote:> On 17 Sep 2021, at 13:59, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n> > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> wrote:\n> > pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se> <mailto:daniel@yesql.se <mailto:daniel@yesql.se>>> napsal:\n\n> > I am unable to write a filter statement which can\n> > handle this relname:\n> > \n> > CREATE TABLE \"a\"\"\n> > \"\"b\" (a integer);\n> > \n> > Are you able to craft one for that?\n> > \n> > I am not able to dump this directly in pg_dump. 
Is it possible?\n> \n> Sure, see below:\n> \n> $ ./bin/psql filter\n> psql (15devel)\n> Type \"help\" for help.\n> \n> I didn't ask on this\n> \n> I asked if you can use -t and some for filtering this name?\n\nI didn't try as I don't see how that's relevant?  Surely we're not limiting the\ncapabilities of a filtering file format based on the quoting semantics of a\nshell?Yeah, agreed. I would think that a DBA might specifically want to be able to use a config file to get away from having to deal with shell quoting, in fact…Thanks,Stephen", "msg_date": "Fri, 17 Sep 2021 14:09:32 +0200", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 17. 9. 2021 v 14:07 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 17 Sep 2021, at 13:59, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > pá 17. 9. 2021 v 13:56 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n> > > On 17 Sep 2021, at 13:51, Pavel Stehule <pavel.stehule@gmail.com\n> <mailto:pavel.stehule@gmail.com>> wrote:\n> > > pá 17. 9. 2021 v 13:42 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se> <mailto:daniel@yesql.se <mailto:daniel@yesql.se>>>\n> napsal:\n>\n> > > I am unable to write a filter statement which can\n> > > handle this relname:\n> > >\n> > > CREATE TABLE \"a\"\"\n> > > \"\"b\" (a integer);\n> > >\n> > > Are you able to craft one for that?\n> > >\n> > > I am not able to dump this directly in pg_dump. Is it possible?\n> >\n> > Sure, see below:\n> >\n> > $ ./bin/psql filter\n> > psql (15devel)\n> > Type \"help\" for help.\n> >\n> > I didn't ask on this\n> >\n> > I asked if you can use -t and some for filtering this name?\n>\n> I didn't try as I don't see how that's relevant? 
Surely we're not\n> limiting the\n> capabilities of a filtering file format based on the quoting semantics of a\n> shell?\n>\n\nthis patch just use existing functionality, that can be buggy too.\n\nbut I had a bug in this part - if I detect double double quotes on input I\nhave to send double quotes to output too.\n\nIt should be fixed in attached patch\n\n[pavel@localhost pg_dump]$ echo 'include table \"a\"\"\\n\"\"b\"' | ./pg_dump\n--filter=-\n--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 15devel\n-- Dumped by pg_dump version 15devel\n\n\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Fri, 17 Sep 2021 14:16:15 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> >\n> > Initially not, but now, when I am thinking about it, I don't think so\n> Bison helps. The syntax of the filter file is nicely linear. Now, the code\n> of the parser is a little bit larger than minimalistic, but it is due to\n> nicer error's messages. The raw implementation in Bison raised just \"syntax\n> error\" and positions. I did code refactoring, and now the scanning, parsing\n> and processing are divided into separated routines. Parsing related code\n> has 90 lines. In this case, I don't think using a parser grammar file can\n> carry any benefit. grammar is more readable, sure, but we need to include\n> bison, we need to handle errors, and if we want to raise more helpful\n> errors than just \"syntax error\", then the code will be longer.\n>\n> I'm not so concerned by code size, but rather parsing of quotations etc and\n> being able to reason about it's correctness. IMHO that's easier done by\n> reading a defined grammar than parsing a handwritten parser.\n>\n>\nIn this case the complex part is not a parser, but the scanner is complex\nand writing this in flex is not too easy. 
I wrote so the grammar file can\nbe more readable, but the usual error from Bison is \"syntax error\" and\nposition, so it does not win from the user perspective. When a parser is\nnot linear, then a generated parser can help a lot, but using it at this\nmoment is premature.\n\n\n> Will do a closer review on the patch shortly.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n\n\n> \n> Initially not, but now, when I am thinking about it, I don't think so Bison helps. The syntax of the filter file is nicely linear. Now, the code of the parser is a little bit larger than minimalistic, but it is due to nicer error's messages. The raw implementation in Bison raised just \"syntax error\" and positions. I did code refactoring, and now the scanning, parsing and processing are divided into separated routines. Parsing related code has 90 lines. In this case, I don't think using a parser grammar file can carry any benefit. grammar is more readable, sure, but we need to include bison, we need to handle errors, and if we want to raise more helpful errors than just \"syntax error\", then the code will be longer. \n\nI'm not so concerned by code size, but rather parsing of quotations etc and\nbeing able to reason about it's correctness.  IMHO that's easier done by\nreading a defined grammar than parsing a handwritten parser.\nIn this case the complex part is not a parser, but the scanner is complex and writing this in flex is not too easy. I wrote so the grammar file can be more readable, but the usual error from Bison is \"syntax error\" and position, so it does not win from the user perspective. When a parser is not linear, then a generated parser can help a lot, but using it at this moment is premature.  
\nWill do a closer review on the patch shortly.\n\n--\nDaniel Gustafsson               https://vmware.com/", "msg_date": "Fri, 17 Sep 2021 15:06:51 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> Will do a closer review on the patch shortly.\n\nHad a read through, and tested, the latest posted version today:\n\n+ Read objects filters from the specified file. Specify \"-\" to read from\n+ stdin. Lines of this file must have the following format:\nI think this should be <filename>-</filename> and <literal>STDIN</literal> to\nmatch the rest of the docs.\n\n\n+ <para>\n+ With the following filter file, the dump would include table\n+ <literal>mytable1</literal> and data from foreign tables of\n+ <literal>some_foreign_server</literal> foreign server, but exclude data\n+ from table <literal>mytable2</literal>.\n+<programlisting>\n+include table mytable1\n+include foreign_data some_foreign_server\n+exclude table mytable2\n+</programlisting>\n+ </para>\nThis example is highlighting the issue I've previously raised with the UX/doc\nof this feature. The \"exclude table mytable2\" is totally pointless in the\nabove since the exact match of \"mytable1\" will remove all other objects. What\nwe should be doing instead is use the pattern matching aspect along the lines\nof the below:\n\n include table mytable*\n exclude table mytable2\n\n+ The <option>--filter</option> option works just like the other\n+ options to include or exclude tables, schemas, table data, or foreign\nThis should refer to the actual options by name to make it clear which we are\ntalking about.\n\n\n+ printf(_(\" --filter=FILENAME dump objects and data based on the filter expressions\\n\"\n+ \" from the filter file\\n\"));\nBefore we settle on --filter I think we need to conclude whether this file is\nintended to be included from a config file, or used on it's own. 
If we gow tih\nthe former then we might not want a separate option for just --filter.\n\n\n+ if (filter_is_keyword(keyword, size, \"include\"))\nI would prefer if this function call was replaced by just the pg_strcasecmp()\ncall in filter_is_keyword() and the strlen optimization there removed. The is\nnot a hot-path, we can afford the string comparison in case of errors. Having\nthe string comparison done inline here will improve readability saving the\nreading from jumping to another function to see what it does.\n\n\n+ initStringInfo(&line);\nWhy is this using a StringInfo rather than a PQExpBuffer as the rest of pg_dump\ndoes?\n\n\n+typedef struct\nI think these should be at the top of the file with the other typedefs.\n\n\nWhen testing strange object names, I was unable to express this name in the filter file:\n\n$ ./bin/psql\npsql (15devel)\nType \"help\" for help.\n\ndanielg=# create table \"\ndanielg\"# t\ndanielg\"# t\ndanielg\"# \" (a integer);\nCREATE TABLE\ndanielg=# select relname from pg_class order by oid desc limit 1;\n relname\n---------\n +\n t +\n t +\n\n(1 row)\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 20 Sep 2021 14:10:45 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 20. 9. 2021 v 14:10 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > Will do a closer review on the patch shortly.\n>\n> Had a read through, and tested, the latest posted version today:\n>\n> + Read objects filters from the specified file. Specify \"-\" to read from\n> + stdin. 
Lines of this file must have the following format:\n> I think this should be <filename>-</filename> and <literal>STDIN</literal>\n> to\n> match the rest of the docs.\n>\n>\n> + <para>\n> + With the following filter file, the dump would include table\n> + <literal>mytable1</literal> and data from foreign tables of\n> + <literal>some_foreign_server</literal> foreign server, but\n> exclude data\n> + from table <literal>mytable2</literal>.\n> +<programlisting>\n> +include table mytable1\n> +include foreign_data some_foreign_server\n> +exclude table mytable2\n> +</programlisting>\n> + </para>\n> This example is highlighting the issue I've previously raised with the\n> UX/doc\n> of this feature. The \"exclude table mytable2\" is totally pointless in the\n> above since the exact match of \"mytable1\" will remove all other objects.\n> What\n> we should be doing instead is use the pattern matching aspect along the\n> lines\n> of the below:\n>\n> include table mytable*\n> exclude table mytable2\n>\n> + The <option>--filter</option> option works just like the other\n> + options to include or exclude tables, schemas, table data, or\n> foreign\n> This should refer to the actual options by name to make it clear which we\n> are\n> talking about.\n>\n\nfixed\n\n\n>\n> + printf(_(\" --filter=FILENAME dump objects and data\n> based on the filter expressions\\n\"\n> + \" from the filter\n> file\\n\"));\n> Before we settle on --filter I think we need to conclude whether this file\n> is\n> intended to be included from a config file, or used on it's own. If we\n> gow tih\n> the former then we might not want a separate option for just --filter.\n>\n\nI prefer to separate two files. Although there is some intersection, I\nthink it is good to have two simple separate files for two really different\ntasks.\nIt does filtering, and it should be controlled by option \"--filter\". 
When\nthe implementation will be changed, then this option can be changed too.\nFiltering is just a pg_dump related feature. Revision of client application\nconfiguration is a much more generic task, and if we mix it to one, we can\nbe\nin a trap. It can be hard to find one good format for large script\ngenerated content, and possibly hand written structured content. For\npractical\nreasons it can be good to have two files too. Filters and configurations\ncan have different life cycles.\n\n\n>\n> + if (filter_is_keyword(keyword, size, \"include\"))\n> I would prefer if this function call was replaced by just the\n> pg_strcasecmp()\n> call in filter_is_keyword() and the strlen optimization there removed.\n> The is\n> not a hot-path, we can afford the string comparison in case of errors.\n> Having\n> the string comparison done inline here will improve readability saving the\n> reading from jumping to another function to see what it does.\n>\n\nI agree that this is not a hot-path, just I don't feel well if I need to\nmake a zero end string just for comparison pg_strcasecmp. Current design\nreduces malloc/free cycles. It is used in more places, when Postgres parses\nstrings - SQL parser, plpgsql parser. I am not sure about the benefits and\ncosts - pg_strcasecmp can be more readable, but for any keyword I have to\ncall pstrdup and pfree. Is it necessary? 
My opinion in this part is not too\nstrong - it is a minor issue, maybe I have a little bit different feelings\nabout benefits and costs in this specific case, and if you really think the\nbenefits of rewriting are higher, I'll do it.\n\n\n>\n> + initStringInfo(&line);\n> Why is this using a StringInfo rather than a PQExpBuffer as the rest of\n> pg_dump\n> does?\n>\n\nThe StringInfo is used because I use the pg_get_line_buf function, and this\nfunction uses this API.\n\n\n>\n> +typedef struct\n> I think these should be at the top of the file with the other typedefs.\n>\n>\ndone\n\n\n\n>\n> When testing strange object names, I was unable to express this name in\n> the filter file:\n>\n> $ ./bin/psql\n> psql (15devel)\n> Type \"help\" for help.\n>\n> danielg=# create table \"\n> danielg\"# t\n> danielg\"# t\n> danielg\"# \" (a integer);\n> CREATE TABLE\n> danielg=# select relname from pg_class order by oid desc limit 1;\n> relname\n> ---------\n> +\n> t +\n> t +\n>\n> (1 row)\n>\n>\nGood catch - I had badly placed pg_strip_crlf function, fixed and regress\ntests enhanced\n\nPlease check assigned patch\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Tue, 21 Sep 2021 08:50:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 21 Sep 2021, at 08:50, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> po 20. 9. 2021 v 14:10 odesílatel Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> napsal:\n\n> + printf(_(\" --filter=FILENAME dump objects and data based on the filter expressions\\n\"\n> + \" from the filter file\\n\"));\n> Before we settle on --filter I think we need to conclude whether this file is\n> intended to be included from a config file, or used on it's own. 
If we gow tih\n> the former then we might not want a separate option for just --filter.\n> \n> I prefer to separate two files. Although there is some intersection, I think it is good to have two simple separate files for two really different tasks.\n> It does filtering, and it should be controlled by option \"--filter\". When the implementation will be changed, then this option can be changed too. \n> Filtering is just a pg_dump related feature. Revision of client application configuration is a much more generic task, and if we mix it to one, we can be\n> in a trap. It can be hard to find one good format for large script generated content, and possibly hand written structured content. For practical\n> reasons it can be good to have two files too. Filters and configurations can have different life cycles.\n\nI'm not convinced that we can/should change or remove a commandline parameter\nin a coming version when there might be scripts expecting it to work in a\nspecific way. Having a --filter as well as a --config where the configfile can\nrefer to the filterfile also passed via --filter sounds like problem waiting to\nhappen, so I think we need to settle how we want to interact with this file\nbefore anything goes in.\n\nAny thoughts from those in the thread who have had strong opinions on config\nfiles etc?\n\n> + if (filter_is_keyword(keyword, size, \"include\"))\n> I would prefer if this function call was replaced by just the pg_strcasecmp()\n> call in filter_is_keyword() and the strlen optimization there removed. The is\n> not a hot-path, we can afford the string comparison in case of errors. Having\n> the string comparison done inline here will improve readability saving the\n> reading from jumping to another function to see what it does.\n> \n> I agree that this is not a hot-path, just I don't feel well if I need to make a zero end string just for comparison pg_strcasecmp. Current design reduces malloc/free cycles. 
It is used in more places, when Postgres parses strings - SQL parser, plpgsql parser. I am not sure about the benefits and costs - pg_strcasecmp can be more readable, but for any keyword I have to call pstrdup and pfree. Is it necessary? My opinion in this part is not too strong - it is a minor issue, maybe I have a little bit different feelings about benefits and costs in this specific case, and if you really think the benefits of rewriting are higher, I'll do it\n\nSorry, I typoed my response. What I meant was to move the pg_strncasecmp call\ninline and not do the strlen check, to save readers from jumping around. So\nbasically end up with the below in read_filter_item():\n\n+\t/* Now we expect sequence of two keywords */\n+\tif (pg_strncasecmp(keyword, \"include\", size) == 0)\n+\t\t*is_include = true;\n\n> + initStringInfo(&line);\n> Why is this using a StringInfo rather than a PQExpBuffer as the rest of pg_dump\n> does?\n> \n> The StringInfo is used because I use the pg_get_line_buf function, and this function uses this API.\n\nAh, of course.\n\nA few other comments from another pass over this:\n\n+\texit_nicely(-1);\nWhy -1? 
pg_dump (and all other binaries) exits with 1 on IMO even more serious\nerrors so I think this should use 1 as well.\n\n\n+\tif (!pg_get_line_buf(fstate->fp, line))\n+\t{\n+\t\tif (ferror(fstate->fp))\n+\t\t\tfatal(\"could not read from file \\\"%s\\\": %m\", fstate->filename);\n+\n+\t\texit_invalid_filter_format(fstate,\"unexpected end of file\");\n+\t}\nIn the ferror() case this codepath isn't running fclose() on the file pointer\n(unless stdin) which we do elsewhere, so this should use pg_log_error and\nexit_nicely instead.\n\n\n+\tpg_log_fatal(\"could not read from file \\\"%s\\\": %m\", fstate->filename);\nBased on how other errors are treated in pg_dump I think this should be\ndowngraded to a pg_log_error.\n\nThe above comments are fixed in the attached, as well as a pass over the docs\nand extended tests to actually test matching a foreign server. What do think\nabout this version? I'm still not convinced that there aren't more quoting\nbugs in the parser, but I've left that intact for now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 21 Sep 2021 14:37:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 21. 9. 2021 v 14:37 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 21 Sep 2021, at 08:50, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > po 20. 9. 2021 v 14:10 odesílatel Daniel Gustafsson <daniel@yesql.se\n> <mailto:daniel@yesql.se>> napsal:\n>\n> > + printf(_(\" --filter=FILENAME dump objects and data\n> based on the filter expressions\\n\"\n> > + \" from the filter\n> file\\n\"));\n> > Before we settle on --filter I think we need to conclude whether this\n> file is\n> > intended to be included from a config file, or used on it's own. If we\n> gow tih\n> > the former then we might not want a separate option for just --filter.\n> >\n> > I prefer to separate two files. 
Although there is some intersection, I\n> think it is good to have two simple separate files for two really different\n> tasks.\n> > It does filtering, and it should be controlled by option \"--filter\".\n> When the implementation will be changed, then this option can be changed\n> too.\n> > Filtering is just a pg_dump related feature. Revision of client\n> application configuration is a much more generic task, and if we mix it to\n> one, we can be\n> > in a trap. It can be hard to find one good format for large script\n> generated content, and possibly hand written structured content. For\n> practical\n> > reasons it can be good to have two files too. Filters and configurations\n> can have different life cycles.\n>\n> I'm not convinced that we can/should change or remove a commandline\n> parameter\n> in a coming version when there might be scripts expecting it to work in a\n> specific way. Having a --filter as well as a --config where the\n> configfile can\n> refer to the filterfile also passed via --filter sounds like problem\n> waiting to\n> happen, so I think we need to settle how we want to interact with this file\n> before anything goes in.\n>\n> Any thoughts from those in the thread who have had strong opinions on\n> config\n> files etc?\n>\n> > + if (filter_is_keyword(keyword, size, \"include\"))\n> > I would prefer if this function call was replaced by just the\n> pg_strcasecmp()\n> > call in filter_is_keyword() and the strlen optimization there removed.\n> The is\n> > not a hot-path, we can afford the string comparison in case of errors.\n> Having\n> > the string comparison done inline here will improve readability saving\n> the\n> > reading from jumping to another function to see what it does.\n> >\n> > I agree that this is not a hot-path, just I don't feel well if I need to\n> make a zero end string just for comparison pg_strcasecmp. Current design\n> reduces malloc/free cycles. 
It is used in more places, when Postgres parses
> strings - SQL parser, plpgsql parser. I am not sure about the benefits and
> costs - pg_strcasecmp can be more readable, but for any keyword I have to
> call pstrdup and pfree. Is it necessary? My opinion in this part is not too
> strong - it is a minor issue, maybe I have a little bit different feelings
> about benefits and costs in this specific case, and if you really think the
> benefits of rewriting are higher, I'll do it
>
> Sorry, I typoed my response. What I meant was to move the pg_strncasecmp
> call
> inline and not do the strlen check, to save readers from jumping around.
> So
> basically end up with the below in read_filter_item():
>
> + /* Now we expect sequence of two keywords */
> + if (pg_strncasecmp(keyword, \"include\", size) == 0)
> + *is_include = true;
>
>
I don't think that is safe (strict): pg_strncasecmp(..) is also true for
keywords such as \"includex\" or \"includedsss\", so you have to compare the
size as well.

Regards

Pavel
", "msg_date": "Tue, 21 Sep 2021 14:46:27 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "I definitely agree that we should have two files, one for config and
another one for filter, since their purposes are orthogonal and their
formats are likely different; trying to cram the filter specification in
the config file seems unfriendly because it'd force users to write the
filter in whatever alien grammar is used for the config file.  Also, this
would make it easier to use a single config file with a bunch of
different filter files.

On 2021-Sep-21, Daniel Gustafsson wrote:

> I'm not convinced that we can/should change or remove a commandline parameter
> in a coming version when there might be scripts expecting it to work in a
> specific way.
Having a --filter as well as a --config where the configfile can\n> refer to the filterfile also passed via --filter sounds like problem waiting to\n> happen, so I think we need to settle how we want to interact with this file\n> before anything goes in.\n\nI think both the filter and the hypothetical config file are going to\ninteract (be redundant) with almost all already existing switches, and\nthere's no need to talk about removing anything (e.g., nobody would\nargue for the removal of \"-t\" even though that's redundant with the\nfilter file).\n\nI see no problem with the config file specifying a filter file.\n\nAFAICS if the config file specifies a filter and the user also specifies\na filter in the command line, we have two easy options: raise an error\nabout the redundant option, or have the command line option supersede\nthe one in the config file. The latter strikes me as the more useful\nbehavior, and it's in line with what other tools do in similar cases, so\nthat's what I propose doing.\n\n(There might be less easy options too, such as somehow combining the two\nfilters, but offhand I don't see any reason why this is real-world\nuseful, so I don't propose doing that.)\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"How amazing is that? 
I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n", "msg_date": "Tue, 21 Sep 2021 10:28:51 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 9/21/21 3:28 PM, Alvaro Herrera wrote:\n> I definitely agree that we should have two files, one for config and\n> another one for filter, since their purposes are orthogonal and their\n> formats are likely different; trying to cram the filter specification in\n> the config file seems unfriendly because it'd force users to write the\n> filter in whatever alien grammar used for the config file. Also, this\n> would make it easier to use a single config file with a bunch of\n> different filter files.\n> \n\n+1, that is pretty much excatly what I argued for not too long ago.\n\n> On 2021-Sep-21, Daniel Gustafsson wrote:\n> \n>> I'm not convinced that we can/should change or remove a commandline parameter\n>> in a coming version when there might be scripts expecting it to work in a\n>> specific way. 
Having a --filter as well as a --config where the configfile can\n>> refer to the filterfile also passed via --filter sounds like problem waiting to\n>> happen, so I think we need to settle how we want to interact with this file\n>> before anything goes in.\n> \n> I think both the filter and the hypothetical config file are going to\n> interact (be redundant) with almost all already existing switches, and\n> there's no need to talk about removing anything (e.g., nobody would\n> argue for the removal of \"-t\" even though that's redundant with the\n> filter file).\n> \n> I see no problem with the config file specifying a filter file.\n> \n> AFAICS if the config file specifies a filter and the user also specifies\n> a filter in the command line, we have two easy options: raise an error\n> about the redundant option, or have the command line option supersede\n> the one in the config file. The latter strikes me as the more useful\n> behavior, and it's in line with what other tools do in similar cases, so\n> that's what I propose doing.\n> \n> (There might be less easy options too, such as somehow combining the two\n> filters, but offhand I don't see any reason why this is real-world\n> useful, so I don't propose doing that.)\n> \n\nWell, I think we already have to do decisions like that, because you can\ndo e.g. this:\n\n pg_dump -T t -t t\n\nSo we already do combine the switches, and we do this:\n\n When both -t and -T are given, the behavior is to dump just the\n tables that match at least one -t switch but no -T switches. If -T\n appears without -t, then tables matching -T are excluded from what\n is otherwise a normal dump.\n\nThat seems fairly reasonable, and I don't see why not to use the same\nlogic for combining patterns no matter where we got them (filter file,\ncommand-line option, etc.).\n\nJust combine everything, and then check if there's any \"exclude\" rule.\nIf yes, we're done - exclude. If not, check if there's \"include\" rule.\nIf not, still exclude. 
Otherwise include.

Seems reasonable and consistent to me, and I don't see why not to allow
multiple --filter parameters.


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company", "msg_date": "Tue, 21 Sep 2021 18:06:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi


> The above comments are fixed in the attached, as well as a pass over the
> docs
> and extended tests to actually test matching a foreign server.  What do
> think
> about this version?  I'm still not convinced that there aren't more quoting
> bugs in the parser, but I've left that intact for now.
>

The problematic points are double quotes and the newline character; anything
else is just a sequence of bytes.

I have just one note on your patch: when you use pg_strncasecmp, you have
to check the size too


 char *xxx = \"incl\";
 int xxx_size = 4;

elog(NOTICE, \">>>>%d<<<<\",
 pg_strncasecmp(xxx, \"include\", xxx_size) == 0);

result is NOTICE: >>>>1<<<<

\"incl\" is not the keyword \"include\"

Regards

Pavel
", "msg_date": "Tue, 21 Sep 2021 18:20:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi


> The above comments are fixed in the attached, as well as a pass over the
> docs
> and extended tests to actually test matching a foreign server.  What do
> think
> about this version?  I'm still not convinced that there aren't more quoting
> bugs in the parser, but I've left that intact for now.
>

This patch is based on the version that you sent 21.9. I have just modified
the string comparison in keyword detection. If we don't allow abbreviations
of keywords (and I would dislike that), then the check of size is necessary.
Everything else is unchanged.

Regards

Pavel
", "msg_date": "Fri, 24 Sep 2021 05:59:54 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "I took another pass over this today and touched up the documentation (docs and
code) as well as tweaked the code a bit here and there to both make it fit the
pg_dump style better and to clean up a few small things.
I've also added a set
of additional tests to cover more of the functionality.

I'm still not happy with the docs, I need to take another look there and see if
I can make them more readable, but otherwise I don't think there are any open
issues with this.

As has been discussed upthread, this format strikes a compromise wrt simplicity
and doesn't preclude adding a more structured config file in the future should
we want that.  I think this takes care of most comments and opinions made in
this thread.

--
Daniel Gustafsson		https://vmware.com/", "msg_date": "Fri, 1 Oct 2021 15:19:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 1. 10. 2021 v 15:19 odesílatel Daniel Gustafsson <daniel@yesql.se>
napsal:

> I took another pass over this today and touched up the documentation (docs
> and
> code) as well as tweaked the code a bit here and there to both make it fit
> the
> pg_dump style better and to clean up a few small things. I've also added
> a set
> of additional tests to cover more of the functionality.
>
> I'm still not happy with the docs, I need to take another look there and
> see if
> I can make them more readable, but otherwise I don't think there are any
> open issues
> with this.
>
> As has been discussed upthread, this format strikes a compromise wrt
> simplicity
> and doesn't preclude adding a more structured config file in the future
> should
> we want that.  I think this takes care of most comments and opinions made
> in
> this thread.
>

It looks good.

Thank you

Pavel


> --
> Daniel Gustafsson https://vmware.com/
>
>
", "msg_date": "Fri, 1 Oct 2021 18:00:17 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 10/1/21 3:19 PM, Daniel Gustafsson wrote:
> 
> As has been discussed upthread, this format strikes a compromise wrt simplicity
> and doesn't preclude adding a more structured config file in the future should
> we want that.  I think this takes care of most comments and opinions made in
> this thread.
> 
> --
> Daniel Gustafsson		https://vmware.com/
> 

Hi,

If you try to dump/restore a foreign table from a file_fdw server, the 
restore step will complain and thus leave the return value nonzero.  The 
foreign table will be there, with complete 'data'.

A complete runnable example is a lot of work; I hope the below bits of 
input and output make the problem clear.
Main thing: the pg_restore \ncontains 2 ERROR lines like:\n\npg_restore: error: COPY failed for table \"ireise1\": ERROR: cannot \ninsert into foreign table \"ireise1\"\n\n\n\n----------------------\n From the test bash:\n\necho \"\ninclude table table0 # ok public\ninclude table test.table1 #\ninclude foreign_data goethe # foreign server 'goethe' (file_fdw)\ninclude table gutenberg.ireise1 # foreign table\ninclude table gutenberg.ireise2 # foreign table\n\" > inputfile1.txt\n\npg_dump --create -Fc -c -p $port -d $db1 -f dump1 --filter=inputfile1.txt\necho\n\n# prepare for restore\nserver_name=goethe\necho \"create schema if not exists test;\" | psql -qaXd $db2\necho \"create schema if not exists gutenberg;\" | psql -qaXd $db2\necho \"create server if not exists $server_name foreign data wrapper \nfile_fdw \" \\\n | psql -qaXd $db2\n\necho \"-- pg_restore --if-exists -cvd $db2 dump1 \"\n pg_restore --if-exists -cvd $db2 dump1\nrc=$?\necho \"-- rc [$rc]\" -\necho\n\n----------------------\n\nfrom the output:\n\n-- pg_dump --create -Fc -c -p 6969 -d testdb1 -f dump1 \n--filter=inputfile1.txt\n\n\n-- pg_restore --if-exists -cvd testdb2 dump1\npg_restore: connecting to database for restore\npg_restore: dropping TABLE table1\npg_restore: dropping TABLE table0\npg_restore: dropping FOREIGN TABLE ireise2\npg_restore: dropping FOREIGN TABLE ireise1\npg_restore: creating FOREIGN TABLE \"gutenberg.ireise1\"\npg_restore: creating COMMENT \"gutenberg.FOREIGN TABLE ireise1\"\npg_restore: creating FOREIGN TABLE \"gutenberg.ireise2\"\npg_restore: creating COMMENT \"gutenberg.FOREIGN TABLE ireise2\"\npg_restore: creating TABLE \"public.table0\"\npg_restore: creating TABLE \"test.table1\"\npg_restore: processing data for table \"gutenberg.ireise1\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 5570; 0 23625 TABLE DATA ireise1 aardvark\npg_restore: error: COPY failed for table \"ireise1\": ERROR: cannot \ninsert into foreign table \"ireise1\"\npg_restore: processing 
data for table \"gutenberg.ireise2\"\npg_restore: from TOC entry 5571; 0 23628 TABLE DATA ireise2 aardvark\npg_restore: error: COPY failed for table \"ireise2\": ERROR: cannot \ninsert into foreign table \"ireise2\"\npg_restore: processing data for table \"public.table0\"\npg_restore: processing data for table \"test.table1\"\npg_restore: warning: errors ignored on restore: 2\n-- rc [1]\n\n---------\n\n\nA second, separate practical hickup is that schema's are not restored \nfrom the dumped $schema.$table includes -- but this can be worked \naround; for my inputfile1.txt I had to run separately (as seen above, \nbefore running the pg_restore):\n\ncreate schema if not exists test;\ncreate schema if not exists gutenberg;\ncreate server if not exists goethe foreign data wrapper file_fdw;\n\nA bit annoying but still maybe all right.\n\n\nThanks,\n\nErik Rijkers\n\n\n\n", "msg_date": "Fri, 1 Oct 2021 18:19:56 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 10/1/21 6:19 PM, Erik Rijkers wrote:\n> On 10/1/21 3:19 PM, Daniel Gustafsson wrote:\n>>\n>> As has been discussed upthread, this format strikes a compromise wrt \n>> simplicity\n>> and doesn't preclude adding a more structured config file in the \n>> future should\n\n> \n> If you try to dump/restore a foreign file from a file_fdw server, the \n> restore step will complain and thus leave the returnvalue nonzero. 
The \n> foreign table will be there, with complete 'data'.\n> \n> A complete runnable exampe is a lot of work; I hope the below bits of \n> input and output makes the problem clear.� Main thing: the pg_restore \n> contains 2 ERROR lines like:\n> \n> pg_restore: error: COPY failed for table \"ireise1\": ERROR:� cannot \n> insert into foreign table \"ireise1\"\n\nFurther testing makes clear that the file_fdw-addressing line\n include foreign_data goethe\nwas the culprit: it causes a COPY which of course fails in a readonly \nwrapper like file_fdw. Without that line it works (because I run the \nrestore on the same machine so the underlying file_fdw .txt files are \nthere for testdb2 too)\n\nSo the issue is not as serious as it seemed. The complaint remaining is \nonly that this could somehow be documented better.\n\nI attach a running example (careful, it deletes stuff) of the original \nERROR-producing bash (remove the 'include foreign_data' line from the \ninput file to run it without error).\n\nthanks,\n\nErik Rijkers", "msg_date": "Sat, 2 Oct 2021 08:18:14 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 2 Oct 2021, at 08:18, Erik Rijkers <er@xs4all.nl> wrote:\n\n> So the issue is not as serious as it seemed. \n\nThis is also not related to this patch in any way, or am I missing a point\nhere? This can just as well be achieved without this patch.\n\n> The complaint remaining is only that this could somehow be documented better.\n\nThe pg_dump documentation today have a large highlighted note about this:\n\n \"When --include-foreign-data is specified, pg_dump does not check that the\n foreign table is writable. 
Therefore, there is no guarantee that the\n results of a foreign table dump can be successfully restored.\"\n\nThis was extensively discussed [0] when this went in, is there additional\ndocumentation you'd like to see for this?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/LEJPR01MB0185483C0079D2F651B16231E7FC0@LEJPR01MB0185.DEUPRD01.PROD.OUTLOOK.DE\n\n\n\n", "msg_date": "Mon, 4 Oct 2021 14:54:27 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 1 Oct 2021, at 15:19, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> I'm still not happy with the docs, I need to take another look there and see if\n> I make them more readable but otherwise I don't think there are any open issues\n> with this.\n\nAttached is a rebased version which has rewritten docs which I think are more\nin line with the pg_dump documentation. I've also added tests for\n--strict-names operation, as well subjected it to pgindent and pgperltidy.\n\nUnless there are objections, I think this is pretty much ready to go in.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 5 Oct 2021 14:30:20 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 5. 10. 2021 v 14:30 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 1 Oct 2021, at 15:19, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > I'm still not happy with the docs, I need to take another look there and\n> see if\n> > I make them more readable but otherwise I don't think there are any open\n> issues\n> > with this.\n>\n> Attached is a rebased version which has rewritten docs which I think are\n> more\n> in line with the pg_dump documentation. 
I've also added tests for\n> --strict-names operation, as well subjected it to pgindent and pgperltidy.\n>\n> Unless there are objections, I think this is pretty much ready to go in.\n>\n\ngreat, thank you\n\nPavel\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Tue, 5 Oct 2021 14:37:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Op 05-10-2021 om 14:30 schreef Daniel Gustafsson:\n\n> \n> Unless there are objections, I think this is pretty much ready to go in.\n\nAgreed. One typo:\n\n'This keyword can only be with the exclude keyword.' should be\n'This keyword can only be used with the exclude keyword.'\n\n\nthanks,\n\nErik Rijkers\n\n\n\n\n> \n> --\n> Daniel Gustafsson\t\thttps://vmware.com/\n> \n\n\n", "msg_date": "Tue, 5 Oct 2021 16:39:05 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nút 5. 10. 
2021 v 14:30 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 1 Oct 2021, at 15:19, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > I'm still not happy with the docs, I need to take another look there and\n> see if\n> > I make them more readable but otherwise I don't think there are any open\n> issues\n> > with this.\n>\n> Attached is a rebased version which has rewritten docs which I think are\n> more\n> in line with the pg_dump documentation. I've also added tests for\n> --strict-names operation, as well subjected it to pgindent and pgperltidy.\n>\n> Unless there are objections, I think this is pretty much ready to go in.\n>\n\nI am sending a rebased version of patch pg_dump-filteropt-20211005.patch\nwith fixed regress tests and fixed documentation (reported by Erik).\nI found another issue - the stringinfo line used in filter_get_pattern was\nreleased too early - the line (memory) was used later in check of unexpected\nchars after pattern string. I fixed it by moving this stringinfo buffer to\nfstate structure. 
It can be shared by all routines, and it can be safely\nreleased at\nthe end of filter processing, where we are sure these data can be freed.\n\nRegards\n\nPavel\n\n\n\n\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>", "msg_date": "Wed, 27 Oct 2021 11:15:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nfresh rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 25 Apr 2022 19:39:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "\nOn 2022-04-25 Mo 13:39, Pavel Stehule wrote:\n> Hi\n>\n> fresh rebase\n>\n>\n\n\nIf we're going to do this for pg_dump's include/exclude options,\nshouldn't we also provide an equivalent facility for pg_dumpall's\n--exclude-database option?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 13 Jul 2022 16:49:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "st 13. 7. 2022 v 22:49 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 2022-04-25 Mo 13:39, Pavel Stehule wrote:\n> > Hi\n> >\n> > fresh rebase\n> >\n> >\n>\n>\n> If we're going to do this for pg_dump's include/exclude options,\n> shouldn't we also provide an equivalent facility for pg_dumpall's\n> --exclude-database option?\n>\n>\nIt has sense\n\nRegards\n\nPavel\n\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>", "msg_date": "Thu, 14 Jul 2022 06:54:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nčt 14. 7. 2022 v 6:54 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> st 13. 7. 2022 v 22:49 odesílatel Andrew Dunstan <andrew@dunslane.net>\n> napsal:\n>\n>>\n>> On 2022-04-25 Mo 13:39, Pavel Stehule wrote:\n>> > Hi\n>> >\n>> > fresh rebase\n>> >\n>> >\n>>\n>>\n>> If we're going to do this for pg_dump's include/exclude options,\n>> shouldn't we also provide an equivalent facility for pg_dumpall's\n>> --exclude-database option?\n>>\n>>\n> It has sense\n>\n\nThe attached patch implements the --filter option for pg_dumpall and for\npg_restore too.\n\nRegards\n\nPavel\n\n\n\n>>", "msg_date": "Sun, 17 Jul 2022 08:20:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Thanks for updating the patch.\n\nThis failed to build on windows.\nhttp://cfbot.cputube.org/pavel-stehule.html\n\nSome more comments inline.\n\nOn Sun, Jul 17, 2022 at 08:20:47AM +0200, Pavel Stehule wrote:\n> The attached patch implements the --filter option for pg_dumpall and for\n> pg_restore too.\n\n> diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml\n> index 5efb442b44..ba2920dbee 100644\n> --- a/doc/src/sgml/ref/pg_dump.sgml\n> +++ b/doc/src/sgml/ref/pg_dump.sgml\n> @@ -779,6 
+779,80 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>--filter=<replaceable class=\"parameter\">filename</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Specify a filename from which to read patterns for objects to include\n> + or exclude from the dump. The patterns are interpreted according to the\n> + same rules as the corresponding options:\n> + <option>-t</option>/<option>--table</option> for tables,\n> + <option>-n</option>/<option>--schema</option> for schemas,\n> + <option>--include-foreign-data</option> for data on foreign servers and\n> + <option>--exclude-table-data</option> for table data.\n> + To read from <literal>STDIN</literal> use <filename>-</filename> as the\n\nSTDIN comma\n\n> + <para>\n> + Lines starting with <literal>#</literal> are considered comments and\n> + are ignored. Comments can be placed after filter as well. Blank lines\n\nchange \"are ignored\" to \"ignored\", I think.\n\n> diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml\n> index 8a081f0080..137491340c 100644\n> --- a/doc/src/sgml/ref/pg_dumpall.sgml\n> +++ b/doc/src/sgml/ref/pg_dumpall.sgml\n> @@ -122,6 +122,29 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>--filter=<replaceable class=\"parameter\">filename</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Specify a filename from which to read patterns for databases excluded\n> + from dump. The patterns are interpretted according to the same rules\n> + like <option>--exclude-database</option>.\n\nsame rules *as*\n\n> + To read from <literal>STDIN</literal> use <filename>-</filename> as the\n\ncomma\n\n> + filename. 
The <option>--filter</option> option can be specified in\n> + conjunction with the above listed options for including or excluding\n\nFor dumpall, remove \"for including or\"\nchange \"above listed options\" to \"exclude-database\" ?\n\n> diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml\n> index 526986eadb..5f16c4a333 100644\n> --- a/doc/src/sgml/ref/pg_restore.sgml\n> +++ b/doc/src/sgml/ref/pg_restore.sgml\n> @@ -188,6 +188,31 @@ PostgreSQL documentation\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><option>--filter=<replaceable class=\"parameter\">filename</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Specify a filename from which to read patterns for objects excluded\n> + or included from restore. The patterns are interpretted according to the\n> + same rules like <option>--schema</option>, <option>--exclude-schema</option>,\n\ns/like/as/\n\n> + <option>--function</option>, <option>--index</option>, <option>--table</option>\n> + or <option>--trigger</option>.\n> + To read from <literal>STDIN</literal> use <filename>-</filename> as the\n\nSTDIN comma\n\n> +/*\n> + * filter_get_keyword - read the next filter keyword from buffer\n> + *\n> + * Search for keywords (limited to containing ascii alphabetic characters) in\n\nremove \"containing\"\n\n> +\t/*\n> +\t * If the object name pattern has been quoted we must take care parse out\n> +\t * the entire quoted pattern, which may contain whitespace and can span\n> +\t * over many lines.\n\nquoted comma\n*to parse\nremove \"over\"\n\n> + * The pattern is either simple without any whitespace, or properly quoted\n\ndouble space\n\n> + * in case there is whitespace in the object name. 
The pattern handling follows\n\ns/is/may be/\n\n> +\t\t\tif (size == 7 && pg_strncasecmp(keyword, \"include\", 7) == 0)\n> +\t\t\t\t*is_include = true;\n> +\t\t\telse if (size == 7 && pg_strncasecmp(keyword, \"exclude\", 7) == 0)\n> +\t\t\t\t*is_include = false;\n\nCan't you write strncasecmp(keyword, \"include\", size) to avoid hardcoding \"7\" ?\n\n> +\n> +\t\t\tif (size == 4 && pg_strncasecmp(keyword, \"data\", 4) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_DATA;\n> +\t\t\telse if (size == 8 && pg_strncasecmp(keyword, \"database\", 8) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_DATABASE;\n> +\t\t\telse if (size == 12 && pg_strncasecmp(keyword, \"foreign_data\", 12) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_FOREIGN_DATA;\n> +\t\t\telse if (size == 8 && pg_strncasecmp(keyword, \"function\", 8) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_FUNCTION;\n> +\t\t\telse if (size == 5 && pg_strncasecmp(keyword, \"index\", 5) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_INDEX;\n> +\t\t\telse if (size == 6 && pg_strncasecmp(keyword, \"schema\", 6) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_SCHEMA;\n> +\t\t\telse if (size == 5 && pg_strncasecmp(keyword, \"table\", 5) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_TABLE;\n> +\t\t\telse if (size == 7 && pg_strncasecmp(keyword, \"trigger\", 7) == 0)\n> +\t\t\t\t*objtype = FILTER_OBJECT_TYPE_TRIGGER;\n\nAvoid hardcoding these constants.\n\n> diff --git a/src/bin/pg_dump/filter.h b/src/bin/pg_dump/filter.h\n> new file mode 100644\n> index 0000000000..e4a1a74b10\n> --- /dev/null\n> +++ b/src/bin/pg_dump/filter.h\n...\n> \\ No newline at end of file\n\n:(\n\n\n", "msg_date": "Sun, 17 Jul 2022 09:01:46 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 17. 7. 
2022 v 16:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> Thanks for updating the patch.\n>\n> This failed to build on windows.\n> http://cfbot.cputube.org/pavel-stehule.html\n>\n>\nYes, there was a significant problem with the function exit_nicely, that is\ndifferently implemented in pg_dump and pg_dumpall.\n\n\n\n> Some more comments inline.\n>\n> On Sun, Jul 17, 2022 at 08:20:47AM +0200, Pavel Stehule wrote:\n> > The attached patch implements the --filter option for pg_dumpall and for\n> > pg_restore too.\n>\n> > diff --git a/doc/src/sgml/ref/pg_dump.sgml\n> b/doc/src/sgml/ref/pg_dump.sgml\n> > index 5efb442b44..ba2920dbee 100644\n> > --- a/doc/src/sgml/ref/pg_dump.sgml\n> > +++ b/doc/src/sgml/ref/pg_dump.sgml\n> > @@ -779,6 +779,80 @@ PostgreSQL documentation\n> > </listitem>\n> > </varlistentry>\n> >\n> > + <varlistentry>\n> > + <term><option>--filter=<replaceable\n> class=\"parameter\">filename</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Specify a filename from which to read patterns for objects to\n> include\n> > + or exclude from the dump. The patterns are interpreted\n> according to the\n> > + same rules as the corresponding options:\n> > + <option>-t</option>/<option>--table</option> for tables,\n> > + <option>-n</option>/<option>--schema</option> for schemas,\n> > + <option>--include-foreign-data</option> for data on foreign\n> servers and\n> > + <option>--exclude-table-data</option> for table data.\n> > + To read from <literal>STDIN</literal> use\n> <filename>-</filename> as the\n\nSTDIN comma\n>\n\nfixed\n\n\n>\n> > + <para>\n> > + Lines starting with <literal>#</literal> are considered\n> comments and\n> > + are ignored. Comments can be placed after filter as well. 
Blank\n> lines\n>\n> change \"are ignored\" to \"ignored\", I think.\n>\n\nchanged\n\n\n>\n> > diff --git a/doc/src/sgml/ref/pg_dumpall.sgml\n> b/doc/src/sgml/ref/pg_dumpall.sgml\n> > index 8a081f0080..137491340c 100644\n> > --- a/doc/src/sgml/ref/pg_dumpall.sgml\n> > +++ b/doc/src/sgml/ref/pg_dumpall.sgml\n> > @@ -122,6 +122,29 @@ PostgreSQL documentation\n> > </listitem>\n> > </varlistentry>\n> >\n> > + <varlistentry>\n> > + <term><option>--filter=<replaceable\n> class=\"parameter\">filename</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Specify a filename from which to read patterns for databases\n> excluded\n> > + from dump. The patterns are interpretted according to the same\n> rules\n> > + like <option>--exclude-database</option>.\n>\n> same rules *as*\n>\n\nfixed\n\n\n>\n> > + To read from <literal>STDIN</literal> use\n> <filename>-</filename> as the\n>\n> comma\n>\n\nfixed\n\n\n>\n> > + filename. The <option>--filter</option> option can be\n> specified in\n> > + conjunction with the above listed options for including or\n> excluding\n>\n> For dumpall, remove \"for including or\"\n> change \"above listed options\" to \"exclude-database\" ?\n>\n\nfixed\n\n\n>\n> > diff --git a/doc/src/sgml/ref/pg_restore.sgml\n> b/doc/src/sgml/ref/pg_restore.sgml\n> > index 526986eadb..5f16c4a333 100644\n> > --- a/doc/src/sgml/ref/pg_restore.sgml\n> > +++ b/doc/src/sgml/ref/pg_restore.sgml\n> > @@ -188,6 +188,31 @@ PostgreSQL documentation\n> > </listitem>\n> > </varlistentry>\n> >\n> > + <varlistentry>\n> > + <term><option>--filter=<replaceable\n> class=\"parameter\">filename</replaceable></option></term>\n> > + <listitem>\n> > + <para>\n> > + Specify a filename from which to read patterns for objects\n> excluded\n> > + or included from restore. 
The patterns are interpretted\n> according to the\n> > + same rules like <option>--schema</option>,\n> <option>--exclude-schema</option>,\n>\n> s/like/as/\n>\n\nchanged\n\n\n>\n> > + <option>--function</option>, <option>--index</option>,\n> <option>--table</option>\n> > + or <option>--trigger</option>.\n> > + To read from <literal>STDIN</literal> use\n> <filename>-</filename> as the\n>\n> STDIN comma\n>\n\nfixed\n\n\n\n>\n> > +/*\n> > + * filter_get_keyword - read the next filter keyword from buffer\n> > + *\n> > + * Search for keywords (limited to containing ascii alphabetic\n> characters) in\n>\n> remove \"containing\"\n>\n\nfixed\n\n\n>\n> > + /*\n> > + * If the object name pattern has been quoted we must take care\n> parse out\n> > + * the entire quoted pattern, which may contain whitespace and can\n> span\n> > + * over many lines.\n>\n> quoted comma\n> *to parse\n> remove \"over\"\n>\n\nfixed\n\n\n>\n> > + * The pattern is either simple without any whitespace, or properly\n> quoted\n>\n> double space\n>\n\nfixed\n\n\n>\n> > + * in case there is whitespace in the object name. The pattern handling\n> follows\n>\n> s/is/may be/\n>\n\nfixed\n\n\n>\n> > + if (size == 7 && pg_strncasecmp(keyword,\n> \"include\", 7) == 0)\n> > + *is_include = true;\n> > + else if (size == 7 && pg_strncasecmp(keyword,\n> \"exclude\", 7) == 0)\n> > + *is_include = false;\n>\n> Can't you write strncasecmp(keyword, \"include\", size) to avoid hardcoding\n> \"7\" ?\n>\n\nI need to compare the size of the keyword with expected size, but I can use\nstrlen(conststr). 
I wrote new macro is_keyword_str to fix this issue\n\nfixed\n\n\n>\n> > +\n> > + if (size == 4 && pg_strncasecmp(keyword, \"data\",\n> 4) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_DATA;\n> > + else if (size == 8 && pg_strncasecmp(keyword,\n> \"database\", 8) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_DATABASE;\n> > + else if (size == 12 && pg_strncasecmp(keyword,\n> \"foreign_data\", 12) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_FOREIGN_DATA;\n> > + else if (size == 8 && pg_strncasecmp(keyword,\n> \"function\", 8) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_FUNCTION;\n> > + else if (size == 5 && pg_strncasecmp(keyword,\n> \"index\", 5) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_INDEX;\n> > + else if (size == 6 && pg_strncasecmp(keyword,\n> \"schema\", 6) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_SCHEMA;\n> > + else if (size == 5 && pg_strncasecmp(keyword,\n> \"table\", 5) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_TABLE;\n> > + else if (size == 7 && pg_strncasecmp(keyword,\n> \"trigger\", 7) == 0)\n> > + *objtype = FILTER_OBJECT_TYPE_TRIGGER;\n>\n> Avoid hardcoding these constants.\n>\n\nfixed\n\n\n>\n> > diff --git a/src/bin/pg_dump/filter.h b/src/bin/pg_dump/filter.h\n> > new file mode 100644\n> > index 0000000000..e4a1a74b10\n> > --- /dev/null\n> > +++ b/src/bin/pg_dump/filter.h\n> ...\n> > \\ No newline at end of file\n>\n\nfixed\n\nupdated patch attached\n\nRegards\n\nPavel\n\n\n>\n> :(\n>", "msg_date": "Mon, 18 Jul 2022 20:48:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nI am sending fresh rebase + enhancing tests for pg_dumpall and pg_restore\n\nRegards\n\nPavel", "msg_date": "Mon, 22 Aug 2022 11:52:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "As noted upthread at 
some point, I'm not overly excited about the parser in\nfilter.c, for maintainability and readability reasons. So, I've reimplemented\nthe parser in Flex/Bison in the attached patch, which IMHO provides a clear(er)\npicture of the grammar and is more per project standards. This version of the\npatch is your latest version with just the parser replaced (at a reduction in\nsize as a side benefit).\n\nAll features supported in your latest patch version are present, and it passes\nall the tests added by this patch. It's been an undisclosed amount of years\nsince I wrote a Bison parser (well, yacc really) from scratch so I don't rule\nout having made silly mistakes. I would very much appreciate review from those\nmore well versed in this area.\n\nOne thing this patch version currently lacks is refined error messaging, but if\nwe feel that this approach is a viable path then that can be tweaked. The\nfunction which starts the parser can also be refactored to be shared across\npg_dump, pg_dumpall and pg_restore but I've kept it simple for now.\n\nThoughts? It would be nice to get this patch across the finish line during this\ncommitfest.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 7 Sep 2022 21:45:54 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Op 07-09-2022 om 21:45 schreef Daniel Gustafsson:\n\n> \n> One thing this patch version currently lacks is refined error messaging, but if\n> we feel that this approach is a viable path then that can be tweaked. The\n> function which starts the parser can also be refactored to be shared across\n> pg_dump, pg_dumpall and pg_restore but I've kept it simple for now.\n> \n> Thoughts? 
It would be nice to get this patch across the finish line during this\n> commitfest.\n\n > [0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]\n\nThis seems to dump & restore well (as Pavel's patch does).\n\nI did notice one peculiarity (in your patch) where for each table a few \nspaces are emitted by pg_dump.\n\n-------------\n#! /bin/bash\n\npsql -qXc \"drop database if exists testdb2\"\npsql -qXc \"create database testdb2\"\n\necho \"\ncreate schema if not exists test;\ncreate table table0 (id integer);\ncreate table table1 (id integer);\ninsert into table0 select n from generate_series(1,2) as f(n);\ninsert into table1 select n from generate_series(1,2) as f(n);\n\" | psql -qXad testdb2\n\necho \"include table table0\" > inputfile1.txt\n\necho \"include table table0\ninclude table table1\" > inputfile2.txt\n\n# 1 table, emits 2 spaces\necho -ne \">\"\npg_dump -F p -f plainfile1 --filter=inputfile1.txt -d testdb2\necho \"<\"\n\n# 2 tables, emits 4 spaces\necho -ne \">\"\npg_dump -F p -f plainfile2 --filter=inputfile2.txt -d testdb2\necho \"<\"\n\n# dump without filter emits no spaces\necho -ne \">\"\npg_dump -F c -f plainfile3 -t table0 -t table1 -d testdb2\necho \"<\"\n-------------\n\nIt's probably a small thing -- but I didn't find it.\n\nthanks,\n\nErik Rijkers\n> \n> --\n> Daniel Gustafsson\t\thttps://vmware.com/\n> \n\n\n", "msg_date": "Thu, 8 Sep 2022 12:00:17 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 8 Sep 2022, at 12:00, Erik Rijkers <er@xs4all.nl> wrote:\n> \n> Op 07-09-2022 om 21:45 schreef Daniel Gustafsson:\n>> One thing this patch version currently lacks is refined error messaging, but if\n>> we feel that this approach is a viable path then that can be tweaked. 
The\n>> function which starts the parser can also be refactored to be shared across\n>> pg_dump, pg_dumpall and pg_restore but I've kept it simple for now.\n>> Thoughts? It would be nice to get this patch across the finish line during this\n>> commitfest.\n> \n> > [0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]\n> \n> This seems to dump & restore well (as Pavel's patch does).\n\nThanks for looking!\n\n> I did notice one peculiarity (in your patch) where for each table a few spaces are emitted by pg_dump.\n\nRight, I had that on my TODO to fix before submitting but clearly forgot. 
It\n> boils down to consuming the space between commands and object types and object\n> patterns. The attached v2 fixes that.\n\nI only had a quick look at the parser, and one thing that strikes me is:\n\n+Patterns:\n+\t/* EMPTY */\n+\t| Patterns Pattern\n+\t| Pattern\n+\t;\n+\n+Pattern:\n+\t\tC_INCLUDE include_object pattern { include_item(priv, $2, $3); }\n\nIt seems confusing to mix Pattern(s) (the rules) and pattern (the token).\nMaybe instead using Include(s) or Item(s) on the bison side, and/or\nname_pattern on the lexer side?\n\n\n", "msg_date": "Thu, 8 Sep 2022 19:44:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 8 Sep 2022, at 13:44, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Thu, Sep 08, 2022 at 01:38:42PM +0200, Daniel Gustafsson wrote:\n>>> On 8 Sep 2022, at 12:00, Erik Rijkers <er@xs4all.nl> wrote:\n\n>>> I did notice one peculiarity (in your patch) where for each table a few spaces are emitted by pg_dump.\n>> \n>> Right, I had that on my TODO to fix before submitting but clearly forgot. 
and one thing that strikes me is:\n> \n> +Patterns:\n> +\t/* EMPTY */\n> +\t| Patterns Pattern\n> +\t| Pattern\n> +\t;\n> +\n> +Pattern:\n> +\t\tC_INCLUDE include_object pattern { include_item(priv, $2, $3); }\n> \n> It seems confusing to mix Pattern(s) (the rules) and pattern (the token).\n> Maybe instead using Include(s) or Item(s) on the bison side, and/or\n> name_pattern on the lexer side?\n\nThat makes a lot of sense, I renamed the rules in the parser but kept them in\nthe lexer since that seemed like the clearest scheme.\n\nAlso in the attached is a small refactoring to share parser init between\npg_dump and pg_restore (pg_dumpall shares little with these so not there for\nnow), buffer resize overflow calculation and some error message tweaking.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 8 Sep 2022 14:32:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\n\nst 7. 9. 2022 v 21:46 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> As noted upthread at some point, I'm not overly excited about the parser in\n> filter.c, for maintainability and readability reasons. So, I've\n> reimplemented\n> the parser in Flex/Bison in the attached patch, which IMHO provides a\n> clear(er)\n> picture of the grammar and is more per project standards. This version of\n> the\n> patch is your latest version with just the parser replaced (at a reduction\n> in\n> size as a side benefit).\n>\n> All features supported in your latest patch version are present, and it\n> passes\n> all the tests added by this patch. It's been an undisclosed amount of\n> years\n> since I wrote a Bison parser (well, yacc really) from scratch so I don't\n> rule\n> out having made silly mistakes. 
I would very much appreciate review from\n> those\n> more well versed in this area.\n>\n> One thing this patchversion currently lacks is refined error messaging,\n> but if\n> we feel that this approach is a viable path then that can be tweaked. The\n> function which starts the parser can also be refactored to be shared across\n> pg_dump, pg_dumpall and pg_restore but I've kept it simple for now.\n>\n> Thoughts? It would be nice to get this patch across the finishline during\n> this\n> commitfest.\n>\n>\nI have no objections to this, and thank you so you try to move this patch\nforward.\n\nRegards\n\nPavel\n\n--\n> Daniel Gustafsson https://vmware.com/\n>\n>\n\nHist 7. 9. 2022 v 21:46 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:As noted upthread at some point, I'm not overly excited about the parser in\nfilter.c, for maintainability and readability reasons.  So, I've reimplemented\nthe parser in Flex/Bison in the attached patch, which IMHO provides a clear(er)\npicture of the grammar and is more per project standards.  This version of the\npatch is your latest version with just the parser replaced (at a reduction in\nsize as a side benefit).\n\nAll features supported in your latest patch version are present, and it passes\nall the tests added by this patch.  It's been an undisclosed amount of years\nsince I wrote a Bison parser (well, yacc really) from scratch so I don't rule\nout having made silly mistakes.  I would very much appreciate review from those\nmore well versed in this area.\n\nOne thing this patchversion currently lacks is refined error messaging, but if\nwe feel that this approach is a viable path then that can be tweaked.  The\nfunction which starts the parser can also be refactored to be shared across\npg_dump, pg_dumpall and pg_restore but I've kept it simple for now.\n\nThoughts?  
It would be nice to get this patch across the finishline during this\ncommitfest.\nI have no objections to this, and thank you so you try to move this patch forward.RegardsPavel\n--\nDaniel Gustafsson               https://vmware.com/", "msg_date": "Thu, 8 Sep 2022 17:03:01 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Sep 8, 2022 at 7:32 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> [v3]\n\nNote that the grammar has shift-reduce conflicts. If you run a fairly\nrecent Bison, you can show them like this:\n\nbison -Wno-deprecated -Wcounterexamples -d -o filterparse.c filterparse.y\n\nfilterparse.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]\nfilterparse.y: warning: shift/reduce conflict on token C_INCLUDE\n[-Wcounterexamples]\n Example: • C_INCLUDE include_object pattern\n Shift derivation\n Filters\n ↳ 3: Filter\n ↳ 4: • C_INCLUDE include_object pattern\n Reduce derivation\n Filters\n ↳ 2: Filters Filter\n ↳ 1: ε • ↳ 4: C_INCLUDE include_object pattern\nfilterparse.y: warning: shift/reduce conflict on token C_EXCLUDE\n[-Wcounterexamples]\n Example: • C_EXCLUDE exclude_object pattern\n Shift derivation\n Filters\n ↳ 3: Filter\n ↳ 5: • C_EXCLUDE exclude_object pattern\n Reduce derivation\n Filters\n ↳ 2: Filters Filter\n ↳ 1: ε • ↳ 5: C_EXCLUDE exclude_object pattern\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Sep 2022 14:53:31 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "\n\n\n\n> On Sep 9, 2022, at 5:53 PM, John Naylor <john.naylor@enterprisedb.com> wrote:\n> \n> On Thu, Sep 8, 2022 at 7:32 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> [v3]\n> \n> Note that the grammar has shift-reduce conflicts. 
If you run a fairly\n> recent Bison, you can show them like this:\n> \n> bison -Wno-deprecated -Wcounterexamples -d -o filterparse.c filterparse.y\n> \n> filterparse.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]\n> filterparse.y: warning: shift/reduce conflict on token C_INCLUDE\n> [-Wcounterexamples]\n> Example: • C_INCLUDE include_object pattern\n> Shift derivation\n> Filters\n> ↳ 3: Filter\n> ↳ 4: • C_INCLUDE include_object pattern\n> Reduce derivation\n> Filters\n> ↳ 2: Filters Filter\n> ↳ 1: ε • ↳ 4: C_INCLUDE include_object pattern\n> filterparse.y: warning: shift/reduce conflict on token C_EXCLUDE\n> [-Wcounterexamples]\n> Example: • C_EXCLUDE exclude_object pattern\n> Shift derivation\n> Filters\n> ↳ 3: Filter\n> ↳ 5: • C_EXCLUDE exclude_object pattern\n> Reduce derivation\n> Filters\n> ↳ 2: Filters Filter\n> ↳ 1: ε • ↳ 5: C_EXCLUDE exclude_object pattern\n> \n\n\nLooks like the last rule for Filters should not be there. I do wonder whether we should be using bison/flex here, seems like using a sledgehammer to crack a nut.\n\nCheers\n\nAndrew\n\n", "msg_date": "Fri, 9 Sep 2022 19:00:36 +1000", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 9 Sep 2022, at 11:00, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n>> On Sep 9, 2022, at 5:53 PM, John Naylor <john.naylor@enterprisedb.com> wrote:\n>> \n>> Note that the grammar has shift-reduce conflicts. \n\n> Looks like the last rule for Filters should not be there.\n\nCorrect, fixed in the attached.\n\n> I do wonder whether we should be using bison/flex here, seems like using a\n> sledgehammer to crack a nut.\n\n\nI don't the capabilities of the tool is all that interesting compared to the\nlong term maintainability and readability of the source code. 
Personally I\nthink a simple Bison/Flex parser is easier to read and reason about than the\ncorresponding written in C.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Mon, 12 Sep 2022 09:58:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 12. 9. 2022 v 9:59 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> > On 9 Sep 2022, at 11:00, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> >> On Sep 9, 2022, at 5:53 PM, John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n> >>\n> >> Note that the grammar has shift-reduce conflicts.\n>\n> > Looks like the last rule for Filters should not be there.\n>\n> Correct, fixed in the attached.\n>\n> > I do wonder whether we should be using bison/flex here, seems like using\n> a\n> > sledgehammer to crack a nut.\n>\n>\n> I don't the capabilities of the tool is all that interesting compared to\n> the\n> long term maintainability and readability of the source code. Personally I\n> think a simple Bison/Flex parser is easier to read and reason about than\n> the\n> corresponding written in C.\n>\n\nWhen this work is done, then there is no reason to throw it. The parser in\nbison/flex does the same work and it is true, so code is more readable.\nAlthough for this case, a handy written parser was trivial too.\n\nRegards\n\nPavel\n\n\n\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>
", "msg_date": "Mon, 12 Sep 2022 15:09:24 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Op 12-09-2022 om 09:58 schreef Daniel Gustafsson:\n>> On 9 Sep 2022, at 11:00, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>>> On Sep 9, 2022, at 5:53 PM, John Naylor <john.naylor@enterprisedb.com> wrote:\n\n>>> [v4-0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]\n\nI noticed that pg_restore --filter cannot, or at last not always, be \nused with the same filter-file that was used to produce a dump with \npg_dump --filter.\n\nIs that as designed?  It seems a bit counterintuitive.  It'd be nice if \nthat could be fixed.  Admittedly, the 'same' problem in pg_restore -t, \nalso less than ideal.\n\n(A messy bashdemo below)\n\nthanks,\n\nErik Rijkers\n\n\n#! 
/bin/bash\ndb2='testdb2' db3='testdb3'\ndb2='testdb_source' db3='testdb_target'\nsql_dropdb=\"drop database if exists $db2; drop database if exists $db3;\"\nsql_createdb=\"create database $db2; create database $db3;\"\nschema1=s1 table1=table1 t1=$schema1.$table1\nschema2=s2 table2=table2 t2=$schema2.$table2\nsql_schema_init=\"create schema if not exists $schema1; create schema if \nnot exists $schema2;\"\nsql_test=\"select '$t1', n from $t1 order by n; select '$t2', n from $t2 \norder by n;\"\n\nfunction sqltest()\n{\n for database_name in $db2 $db3 ;do\n port_used=$( echo \"show port\" |psql -qtAX -d $database_name )\n echo -n \"-- $database_name ($port_used): \"\n echo \"$sql_test\" | psql -qtAX -a -d $database_name | md5sum\n done\n echo\n}\n\necho \"setting up orig db $db2, target db $db3\"\necho \"$sql_dropdb\" | psql -qtAX\necho \"$sql_createdb\" | psql -qtAX\n\npsql -X -d $db2 << SQL\n$sql_schema_init\ncreate table $t1 as select n from generate_series(1, (10^1)::int) as f(n);\ncreate table $t2 as select n from generate_series(2, (10^2)::int) as f(n);\nSQL\necho \"\ninclude table $t1\ninclude table $t2\n# include schema $s1\n# include schema $s2\n\" > inputfile1.txt\n\n# in filter; out plain\necho \"-- pg_dump -F p -f plainfile1 --filter=inputfile1.txt -d $db2\"\n pg_dump -F p -f plainfile1 --filter=inputfile1.txt -d $db2\n\necho \"$sql_schema_init\" | psql -qX -d $db3\necho \"-- pg_restore -d $db3 dumpfile1\"\n pg_restore -d $db3 dumpfile1\n rc=$?\necho \"-- pg_restore returned [$rc] -- pg_restore without --filter\"\nsqltest\n\n# enable this to see it fail\nif [[ 1 -eq 1 ]]\nthen\n\n# clean out\necho \"drop schema $schema1 cascade; drop schema $schema2 cascade; \" | \npsql -qtAXad $db3\n\n--filter=inputfile1.txt\"\necho \"$sql_schema_init\" | psql -qX -d $db3\necho \"-- pg_restore -d $db3 --filter=inputfile1.txt dumpfile1\"\n pg_restore -d $db3 --filter=inputfile1.txt dumpfile1\n rc=$?\necho \"-- pg_restore returned [$rc] -- pg_restore without 
--filter\"\nsqltest\n\nfi\n\n\n\n", "msg_date": "Mon, 12 Sep 2022 16:00:07 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "\n\nOp 12-09-2022 om 16:00 schreef Erik Rijkers:\n> Op 12-09-2022 om 09:58 schreef Daniel Gustafsson:\n>>> On 9 Sep 2022, at 11:00, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>\n>>>> On Sep 9, 2022, at 5:53 PM, John Naylor \n>>>> <john.naylor@enterprisedb.com> wrote:\n> \n>>>> [v4-0001-Add-include-exclude-filtering-via-file-in-pg_dump.patch]\n> \n> I noticed that pg_restore --filter cannot, or at last not always, be \n> used with the same filter-file that was used to produce a dump with \n> pg_dump --filter.\n> \n> Is that as designed?  It seems a bit counterintuitive.  It'd be nice if \n> that could be fixed.  Admittedly, the 'same' problem in pg_restore -t, \n> also less than ideal.\n> \n> (A messy bashdemo below)\n\nI hope the issue is still clear, even though in the bash I sent, I \nmessed up the dumpfile name (i.e., in the bash that I sent the pg_dump \ncreates another dump name than what is given to pg_restore. They should \nuse the same dumpname, obviously)\n\n> \n> thanks,\n> \n> Erik Rijkers\n\n\n", "msg_date": "Mon, 12 Sep 2022 16:18:41 +0200", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Sep 12, 2022 at 8:10 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> po 12. 9. 
2022 v 9:59 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n>> I don't the capabilities of the tool is all that interesting compared to the\n>> long term maintainability and readability of the source code.\n\nWith make distprep and maintainer-clean, separate makefile and MSVC\nbuild logic a short time before converting to Meson, I'm not sure that\neven the short term maintainability here is a good trade off for what\nwe're getting.\n\n> The parser in bison/flex does the same work and it is true, so code is more readable. Although for this case, a handy written parser was trivial too.\n\nIf the hand-written version is trivial, then we should prefer it.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Sep 2022 15:46:22 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 13. 9. 2022 v 10:46 odesílatel John Naylor <john.naylor@enterprisedb.com>\nnapsal:\n\n> On Mon, Sep 12, 2022 at 8:10 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > po 12. 9. 2022 v 9:59 odesílatel Daniel Gustafsson <daniel@yesql.se>\n> napsal:\n> >> I don't the capabilities of the tool is all that interesting compared\n> to the\n> >> long term maintainability and readability of the source code.\n>\n> With make distprep and maintainer-clean, separate makefile and MSVC\n> build logic a short time before converting to Meson, I'm not sure that\n> even the short term maintainability here is a good trade off for what\n> we're getting.\n>\n> > The parser in bison/flex does the same work and it is true, so code is\n> more readable. Although for this case, a handy written parser was trivial\n> too.\n>\n> If the hand-written version is trivial, then we should prefer it.\n>\n\nPlease, can you check and compare both versions? 
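(To make the comparison concrete, here is roughly what the hand-written side looks like, sketched in Python for brevity -- the include/exclude keywords follow the filter-file examples upthread, but the function name and error texts are only illustrative, not the patch's API:)

```python
# Rough sketch of a hand-written parser for the filter-file format
# discussed in this thread.  The keyword set ("include"/"exclude" plus an
# object type and a pattern) follows the examples upthread; everything
# else here is illustrative only.

def parse_filter_line(line):
    """Return (command, object_type, pattern), or None for blank/comment lines."""
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    parts = line.split(None, 2)          # at most three whitespace-separated fields
    if len(parts) != 3:
        raise ValueError('expected: {include|exclude} <objtype> <pattern>')
    command, objtype, pattern = parts
    if command not in ('include', 'exclude'):
        raise ValueError('unknown keyword: %s' % command)
    return (command, objtype, pattern)

lines = ['# comment', '', 'include table s1.t1', 'exclude table_data big_*']
print([f for f in (parse_filter_line(l) for l in lines) if f])
# -> [('include', 'table', 's1.t1'), ('exclude', 'table_data', 'big_*')]
```

A sketch like this glosses over the genuinely fiddly part -- patterns containing quoted and qualified identifiers -- which either parser still has to handle in C.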
My view is subjective.\n\nRegards\n\nPavel\n\n\n\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n", "msg_date": "Tue, 13 Sep 2022 11:36:43 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> > On 9 Sep 2022, at 11:00, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> >> On Sep 9, 2022, at 5:53 PM, John Naylor <john.naylor@enterprisedb.com> wrote:\n> >> \n> >> Note that the grammar has shift-reduce conflicts. \n> \n> > Looks like the last rule for Filters should not be there.\n> \n> Correct, fixed in the attached.\n\nDue to the merge of the meson build, this patch now needs to adjust the\nrelevant meson.build. This is the cause of the failures at:\nhttps://cirrus-ci.com/build/5788292678418432\n\nSee e.g. 
src/bin/pgbench/meson.build\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 08:10:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> Correct, fixed in the attached.\n\nUpdated patch adding meson compatibility attached.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 1 Oct 2022 23:56:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-10-01 23:56:59 -0700, Andres Freund wrote:\n> On 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> > Correct, fixed in the attached.\n> \n> Updated patch adding meson compatibility attached.\n\nErr, forgot to amend one hunk :(\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 2 Oct 2022 00:19:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-10-02 00:19:59 -0700, Andres Freund wrote:\n> On 2022-10-01 23:56:59 -0700, Andres Freund wrote:\n> > On 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> > > Correct, fixed in the attached.\n> > \n> > Updated patch adding meson compatibility attached.\n> \n> Err, forgot to amend one hunk :(\n\nThat fixed it on all platforms but windows, due to copy-pasto. 
I really should\nhave stopped earlier yesterday...\n\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * filter.h\n> + *\t Common header file for the parser of filter file\n> + *\n> + * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group\n> + * Portions Copyright (c) 1994, Regents of the University of California\n> + *\n> + * src/bin/pg_dump/filter.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#ifndef FILTER_H\n> +#define FILTER_H\n> +#include \"c.h\"\n\nc.h (and postgres.h, postgres_fe.h) shouldn't be included in headers.\n\n\nThis is a common enough mistake that I'm wondering if we could automate\nwarning about it somehow.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 2 Oct 2022 09:04:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 2 Oct 2022, at 18:04, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-10-02 00:19:59 -0700, Andres Freund wrote:\n>> On 2022-10-01 23:56:59 -0700, Andres Freund wrote:\n>>> On 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n>>>> Correct, fixed in the attached.\n>>> \n>>> Updated patch adding meson compatibility attached.\n>> \n>> Err, forgot to amend one hunk :(\n> \n> That fixed it on all platforms but windows, due to copy-pasto. I really should\n> have stopped earlier yesterday...\n\nThanks for updating the patch!\n\nThe parser in the original submission was -1'd by me, and the current version\nproposed as an alternative. This was subsequently -1'd as well but no updated\npatch with a rewritten parser has been posted. 
So this is now stalled again.\n\nHaving been around in 12 commitfests without a committer feeling confident\nabout pushing this I plan to mark it returned with feedback, and if a new\nparser materializes it can be re-added instead of being dragged along.\n\n> c.h (and postgres.h, postgres_fe.h) shouldn't be included in headers.\n> \n> This is a common enough mistake that I'm wondering if we could automate\n> warning about it somehow.\n\nMaybe we can add a simple git grep invocation in the CompilerWarnings CI job to\ncatch this in the CFBot?  If something like the below sketch matches then we\ncan throw an error. (only for illustration; all three files need to be checked).\n\n\tgit grep "\"c\.h" -- *.h :^src/include/postgres*.h;\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sun, 2 Oct 2022 22:52:33 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 2 Oct 2022, at 18:04, Andres Freund <andres@anarazel.de> wrote:\n>> c.h (and postgres.h, postgres_fe.h) shouldn't be included in headers.\n>> This is a common enough mistake that I'm wondering if we could automate\n>> warning about it somehow.\n\n> Maybe we can add a simple git grep invocation in the CompilerWarnings CI job to\n> catch this in the CFBot?\n\nI'd be inclined to teach headerscheck or cpluspluscheck about it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Oct 2022 17:02:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On 2022-10-02 22:52:33 +0200, Daniel Gustafsson wrote:\n> The parser in the original submission was -1'd by me, and the current version\n> proposed as an alternative. 
This was subsequently -1'd as well but no updated\n> patch with a rewritten parser has been posted. So this is now stalled again.\n> \n> Having been around in 12 commitfests without a committer feeling confident\n> about pushing this I plan to mark it returned with feedback, and if a new\n> parser materializes itc can be readded instead of being dragged along.\n\nMakes sense to me.\n\n\n", "msg_date": "Sun, 2 Oct 2022 14:11:39 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 2. 10. 2022 v 22:52 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 2 Oct 2022, at 18:04, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-02 00:19:59 -0700, Andres Freund wrote:\n> >> On 2022-10-01 23:56:59 -0700, Andres Freund wrote:\n> >>> On 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> >>>> Correct, fixed in the attached.\n> >>>\n> >>> Updated patch adding meson compatibility attached.\n> >>\n> >> Err, forgot to amend one hunk :(\n> >\n> > That fixed it on all platforms but windows, due to copy-pasto. I really\n> should\n> > have stopped earlier yesterday...\n>\n> Thanks for updating the patch!\n>\n> The parser in the original submission was -1'd by me, and the current\n> version\n> proposed as an alternative. This was subsequently -1'd as well but no\n> updated\n> patch with a rewritten parser has been posted. So this is now stalled\n> again.\n>\n\nYou started rewriting it, but you didn't finish it.\n\nUnfortunately, there is not a clean opinion on using bison's parser for\nthis purpose. I understand that the complexity of this language is too low,\nso the benefit of using bison's gramatic is low too. Personally, I have not\nany problem using bison for this purpose. 
For this case, I think we compare\ntwo similarly long ways, but unfortunately, customers that have a problem\nwith long command lines still have this problem.\n\nCan we go forward? Daniel is strongly against handwritten parser. Is there\nsomebody strongly against bison's based parser? There is not any other way.\n\nI am able to complete Daniel's patch, if there will not be objections.\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n> Having been around in 12 commitfests without a committer feeling confident\n> about pushing this I plan to mark it returned with feedback, and if a new\n> parser materializes itc can be readded instead of being dragged along.\n>\n> > c.h (and postgres.h, postgres_fe.h) shouldn't be included in headers.\n> >\n> > This is a common enough mistake that I'm wondering if we could automate\n> > warning about it somehow.\n>\n> Maybe we can add a simple git grep invocation in the CompilerWarnings CI\n> job to\n> catch this in the CFBot? If something like the below sketch matches then\n> we\n> can throw an error. (only for illustration, all three files needs to\n> checked).\n>\n> git grep "\"c\.h" -- *.h :^src/include/postgres*.h;\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>
", "msg_date": "Mon, 3 Oct 2022 06:00:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn Mon, Oct 03, 2022 at 06:00:12AM +0200, Pavel Stehule wrote:\n> ne 2. 10. 
2022 v 22:52 odesílatel Daniel Gustafsson <daniel@yesql.se>\n> napsal:\n> \n> > > On 2 Oct 2022, at 18:04, Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-10-02 00:19:59 -0700, Andres Freund wrote:\n> > >> On 2022-10-01 23:56:59 -0700, Andres Freund wrote:\n> > >>> On 2022-09-12 09:58:37 +0200, Daniel Gustafsson wrote:\n> > >>>> Correct, fixed in the attached.\n> > >>>\n> > >>> Updated patch adding meson compatibility attached.\n> > >>\n> > >> Err, forgot to amend one hunk :(\n> > >\n> > > That fixed it on all platforms but windows, due to copy-pasto. I really\n> > should\n> > > have stopped earlier yesterday...\n> >\n> > Thanks for updating the patch!\n> >\n> > The parser in the original submission was -1'd by me, and the current\n> > version\n> > proposed as an alternative. This was subsequently -1'd as well but no\n> > updated\n> > patch with a rewritten parser has been posted. So this is now stalled\n> > again.\n> >\n> \n> You started rewriting it, but you didn't finish it.\n> \n> Unfortunately, there is not a clean opinion on using bison's parser for\n> this purpose. I understand that the complexity of this language is too low,\n> so the benefit of using bison's gramatic is low too. Personally, I have not\n> any problem using bison for this purpose. For this case, I think we compare\n> two similarly long ways, but unfortunately, customers that have a problem\n> with long command lines still have this problem.\n> \n> Can we go forward? Daniel is strongly against handwritten parser. Is there\n> somebody strongly against bison's based parser? There is not any other way.\n\nI don't have a strong opinion either, but it seems that 2 people argued against\na bison parser (vs only 1 arguing for) and the fact that the current habit is\nto rely on hand written parsers for simple cases (e.g. 
jsonapi.c /\npg_parse_json()), it seems that we should go back to Pavel's original parser.\n\nI only had a quick look but it indeed seems trivial, it just maybe need a bit\nof refactoring to avoid some code duplication (getFiltersFromFile is\nduplicated, and getDatabaseExcludeFiltersFromFile could be removed if\ngetFiltersFromFile knew about the 2 patterns).\n\n\n", "msg_date": "Mon, 3 Oct 2022 12:34:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nI am sending version with handy written parser and meson support\n\npo 3. 10. 2022 v 6:34 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> > You started rewriting it, but you didn't finish it.\n> >\n> > Unfortunately, there is not a clean opinion on using bison's parser for\n> > this purpose. I understand that the complexity of this language is too\n> low,\n> > so the benefit of using bison's gramatic is low too. Personally, I have\n> not\n> > any problem using bison for this purpose. For this case, I think we\n> compare\n> > two similarly long ways, but unfortunately, customers that have a problem\n> > with long command lines still have this problem.\n> >\n> > Can we go forward? Daniel is strongly against handwritten parser. Is\n> there\n> > somebody strongly against bison's based parser? There is not any other\n> way.\n>\n> I don't have a strong opinion either, but it seems that 2 people argued\n> against\n> a bison parser (vs only 1 arguing for) and the fact that the current habit\n> is\n> to rely on hand written parsers for simple cases (e.g. 
jsonapi.c /\n> pg_parse_json()), it seems that we should go back to Pavel's original\n> parser.\n>\n> I only had a quick look but it indeed seems trivial, it just maybe need a\n> bit\n> of refactoring to avoid some code duplication (getFiltersFromFile is\n> duplicated, and getDatabaseExcludeFiltersFromFile could be removed if\n> getFiltersFromFile knew about the 2 patterns).\n>\n\nI checked this code again, and I don't think some refactoring is easy.\ngetFiltersFromFile is not duplicated. It is just probably badly named.\n\nThese routines are used from pg_dump, pg_dumpall and pg_restore. There are\nsignificant differences in supported objects and in types used for returned\nlists (dumpOptions, SimpleStringList, and RestoreOptions). If I have one\nroutine, then I need to implement some mechanism for specification of\nsupported objects, and a special type that can be used as a proxy between\ncaller and parser to hold lists of parsed values. To be names less\nconfusing I renamed them to read_dump_filters, read_dumpall_filters and\nread_restore_filters\n\nRegards\n\nPavel", "msg_date": "Fri, 7 Oct 2022 07:26:08 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Fri, Oct 07, 2022 at 07:26:08AM +0200, Pavel Stehule wrote:\n>\n> I checked this code again, and I don't think some refactoring is easy.\n> getFiltersFromFile is not duplicated. It is just probably badly named.\n>\n> These routines are used from pg_dump, pg_dumpall and pg_restore. There are\n> significant differences in supported objects and in types used for returned\n> lists (dumpOptions, SimpleStringList, and RestoreOptions). If I have one\n> routine, then I need to implement some mechanism for specification of\n> supported objects, and a special type that can be used as a proxy between\n> caller and parser to hold lists of parsed values. 
To be names less\n> confusing I renamed them to read_dump_filters, read_dumpall_filters and\n> read_restore_filters\n\nAh right, I missed the different argument types.  Now that the functions have\nimproved names it looks way clearer, and it seems just fine!\n\n\n", "msg_date": "Fri, 7 Oct 2022 22:03:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 7. 10. 2022 v 16:03 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Fri, Oct 07, 2022 at 07:26:08AM +0200, Pavel Stehule wrote:\n> >\n> > I checked this code again, and I don't think some refactoring is easy.\n> > getFiltersFromFile is not duplicated. It is just probably badly named.\n> >\n> > These routines are used from pg_dump, pg_dumpall and pg_restore. There\n> are\n> > significant differences in supported objects and in types used for\n> returned\n> > lists (dumpOptions, SimpleStringList, and RestoreOptions). If I have one\n> > routine, then I need to implement some mechanism for specification of\n> > supported objects, and a special type that can be used as a proxy between\n> > caller and parser to hold lists of parsed values. To be names less\n> > confusing I renamed them to read_dump_filters, read_dumpall_filters and\n> > read_restore_filters\n>\n> Ah right, I missed the different argument types. Now that the functions\n> have\n> improved names it looks way clearer, and it seems just fine!\n>\n\nThank you for check\n\nPavel
", "msg_date": "Fri, 7 Oct 2022 18:48:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-10-07 07:26:08 +0200, Pavel Stehule wrote:\n> I am sending version with handy written parser and meson support\n\nGiven this is a new approach it seems inaccurate to have the CF entry marked\nready-for-committer. I've updated it to needs-review.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 13 Oct 2022 11:46:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Oct 13, 2022 at 11:46:34AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-10-07 07:26:08 +0200, Pavel Stehule wrote:\n> > I am sending version with handy written parser and meson support\n> \n> Given this is a new approach it seems inaccurate to have the CF entry marked\n> ready-for-committer. I've updated it to needs-review.\n\nI just had a quick look at the rest of the patch.\n\nFor the parser, it seems that filter_get_pattern is reimplementing an\nidentifier parsing function but isn't entirely correct. 
It can correctly parse\nquoted non-qualified identifiers and non-quoted qualified identifiers, but not\nquoted and qualified ones. For instance:\n\n$ echo 'include table nsp.tbl' | pg_dump --filter - >/dev/null\n$echo $?\n0\n\n$ echo 'include table \"TBL\"' | pg_dump --filter - >/dev/null\n$echo $?\n0\n\n$ echo 'include table \"NSP\".\"TBL\"' | pg_dump --filter - >/dev/null\npg_dump: error: invalid format of filter on line 1: unexpected extra data after pattern\n\nThis should also be covered in the regression tests.\n\nI'm wondering if psql's parse_identifier() could be exported and reused here\nrather than creating yet another version. \n\n\nNitpicking: the comments needs some improvements:\n\n+ /*\n+ * Simple routines - just don't repeat same code\n+ *\n+ * Returns true, when filter's file is opened\n+ */\n+ bool\n+ filter_init(FilterStateData *fstate, const char *filename)\n\nalso, is there any reason why this function doesn't call exit_nicely in case of\nerror rather than letting each caller do it without any other cleanup?\n\n+ /*\n+ * Release allocated sources for filter\n+ */\n+ void\n+ filter_free_sources(FilterStateData *fstate)\n\nI'm assuming \"ressources\" not \"sources\"?\n\n+ /*\n+ * log_format_error - Emit error message\n+ *\n+ * This is mostly a convenience routine to avoid duplicating file closing code\n+ * in multiple callsites.\n+ */\n+ void\n+ log_invalid_filter_format(FilterStateData *fstate, char *message)\n\nmismatch between comment and function name (same for filter_read_item)\n\n+ static const char *\n+ filter_object_type_name(FilterObjectType fot)\n\nNo description.\n\n/*\n * Helper routine to reduce duplicated code\n */\nvoid\nlog_unsupported_filter_object_type(FilterStateData *fstate,\n\t\t\t\t\t\t\t\t\tconst char *appname,\n\t\t\t\t\t\t\t\t\tFilterObjectType fot)\n\nNeed more helpful comment.\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:33:29 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": 
"Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\n\nút 18. 10. 2022 v 11:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Thu, Oct 13, 2022 at 11:46:34AM -0700, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-10-07 07:26:08 +0200, Pavel Stehule wrote:\n> > > I am sending version with handy written parser and meson support\n> >\n> > Given this is a new approach it seems inaccurate to have the CF entry\n> marked\n> > ready-for-committer. I've updated it to needs-review.\n>\n> I just had a quick look at the rest of the patch.\n>\n> For the parser, it seems that filter_get_pattern is reimplementing an\n> identifier parsing function but isn't entirely correct. It can correctly\n> parse\n> quoted non-qualified identifiers and non-quoted qualified identifiers, but\n> not\n> quoted and qualified ones. For instance:\n>\n> $ echo 'include table nsp.tbl' | pg_dump --filter - >/dev/null\n> $echo $?\n> 0\n>\n> $ echo 'include table \"TBL\"' | pg_dump --filter - >/dev/null\n> $echo $?\n> 0\n>\n> $ echo 'include table \"NSP\".\"TBL\"' | pg_dump --filter - >/dev/null\n> pg_dump: error: invalid format of filter on line 1: unexpected extra data\n> after pattern\n>\n\nfixed\n\n\n>\n> This should also be covered in the regression tests.\n>\n\ndone\n\n\n>\n> I'm wondering if psql's parse_identifier() could be exported and reused\n> here\n> rather than creating yet another version.\n>\n\nI looked there, and I don't think this parser is usable for this purpose.\nIt is very sensitive on white spaces, and doesn't support multi-lines. 
It\nis designed for support readline tab complete, it is designed for\nsimplicity not for correctness.\n\n\n>\n> Nitpicking: the comments needs some improvements:\n>\n> + /*\n> + * Simple routines - just don't repeat same code\n> + *\n> + * Returns true, when filter's file is opened\n> + */\n> + bool\n> + filter_init(FilterStateData *fstate, const char *filename)\n>\n\ndone\n\n\n>\n> also, is there any reason why this function doesn't call exit_nicely in\n> case of\n> error rather than letting each caller do it without any other cleanup?\n>\n\nIt is commented few lines up\n\n/*\n * Following routines are called from pg_dump, pg_dumpall and pg_restore.\n * Unfortunatelly, implementation of exit_nicely in pg_dump and pg_restore\n * is different from implementation of this rutine in pg_dumpall. So instead\n * direct calling exit_nicely we have to return some error flag (in this\n * case NULL), and exit_nicelly will be executed from caller's routine.\n */\n\n\n>\n> + /*\n> + * Release allocated sources for filter\n> + */\n> + void\n> + filter_free_sources(FilterStateData *fstate)\n>\n> I'm assuming \"ressources\" not \"sources\"?\n>\n\nchanged\n\n\n>\n> + /*\n> + * log_format_error - Emit error message\n> + *\n> + * This is mostly a convenience routine to avoid duplicating file\n> closing code\n> + * in multiple callsites.\n> + */\n> + void\n> + log_invalid_filter_format(FilterStateData *fstate, char *message)\n>\n> mismatch between comment and function name (same for filter_read_item)\n>\n\nfixes\n\n\n>\n> + static const char *\n> + filter_object_type_name(FilterObjectType fot)\n>\n> No description.\n>\n>\nfixed\n\n\n> /*\n> * Helper routine to reduce duplicated code\n> */\n> void\n> log_unsupported_filter_object_type(FilterStateData *fstate,\n>\n> const char *appname,\n>\n> FilterObjectType fot)\n>\n> Need more helpful comment.\n>\n\nfixed\n\nThank you for comments\n\nattached updated patch\n\nRegards\n\nPavel", "msg_date": "Wed, 26 Oct 2022 06:26:26 +0200", 
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn Wed, Oct 26, 2022 at 06:26:26AM +0200, Pavel Stehule wrote:\n>\n> út 18. 10. 2022 v 11:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n> >\n> > I'm wondering if psql's parse_identifier() could be exported and reused\n> > here\n> > rather than creating yet another version.\n> >\n>\n> I looked there, and I don't think this parser is usable for this purpose.\n> It is very sensitive on white spaces, and doesn't support multi-lines. It\n> is designed for support readline tab complete, it is designed for\n> simplicity not for correctness.\n\nAh, sorry I should have checked more thoroughly. I guess it's ok to have\nanother identifier parser for the include file then, as this new one wouldn't\nreally fit the tab-completion use case.\n\n> > also, is there any reason why this function doesn't call exit_nicely in\n> > case of\n> > error rather than letting each caller do it without any other cleanup?\n> >\n>\n> It is commented few lines up\n>\n> /*\n> * Following routines are called from pg_dump, pg_dumpall and pg_restore.\n> * Unfortunatelly, implementation of exit_nicely in pg_dump and pg_restore\n> * is different from implementation of this rutine in pg_dumpall. So instead\n> * direct calling exit_nicely we have to return some error flag (in this\n> * case NULL), and exit_nicelly will be executed from caller's routine.\n> */\n\nOh right, I totally missed it sorry about that!\n\nAbout the new version, I didn't find any problem with the feature itself so\nit's a good thing!\n\nI still have a few comments about the patch. First, about the behavior:\n\n- is that ok to have just \"data\" pattern instead of \"table_data\" or something\n like that, since it's supposed to match --exclude-table-data option?\n\n- the error message are sometimes not super helpful. 
For instance:\n\n$ echo \"include data t1\" | pg_dump --filter -\npg_dump: error: invalid format of filter on line 1: include filter is not allowed for this type of object\n\nIt would be nice if the error message mentioned \"data\" rather than a generic\n\"this type of object\". Also, maybe we should quote \"include\" to outline that\nwe found this keyword?\n\nAbout the patch itself:\nfilter.c:\n\n+#include \"postgres_fe.h\"\n+\n+#include \"filter.h\"\n+\n+#include \"common/logging.h\"\n\nthe filter.h inclusion should be done with the rest of the includes, in\nalphabetical order.\n\n+#define\t\tis_keyword_str(cstr, str, bytes) \\\n+\t((strlen(cstr) == bytes) && (pg_strncasecmp(cstr, str, bytes) == 0))\n\nnit: our guidline is to protect macro arguments with parenthesis. Some\narguments can be evaluated multiple times but I don't think it's worth adding a\ncomment for that.\n\n+ * Unfortunatelly, implementation of exit_nicely in pg_dump and pg_restore\n+ * is different from implementation of this rutine in pg_dumpall. So instead\n+ * direct calling exit_nicely we have to return some error flag (in this\n\ntypos: s/Unfortunatelly/Unfortunately/ and s/rutine/routine/\nAlso, it would probably be better to say \"instead of directly calling...\"\n\n+static const char *\n+filter_object_type_name(FilterObjectType fot)\n+{\n+\tswitch (fot)\n+\t{\n+\t\tcase FILTER_OBJECT_TYPE_NONE:\n+\t\t\treturn \"comment or empty line\";\n+[...]\n+\t}\n+\n+\treturn \"unknown object type\";\n+}\n\nI'm wondering if we should add a pg_unreachable() there, some compilers might\ncomplain otherwise. 
See CreateDestReceiver() for instance for similar pattern.\n\n+ * Emit error message \"invalid format of filter file ...\"\n+ *\n+ * This is mostly a convenience routine to avoid duplicating file closing code\n+ * in multiple callsites.\n+ */\n+void\n+log_invalid_filter_format(FilterStateData *fstate, char *message)\n\nnit: invalid format *in* filter file...?\n\n+void\n+log_unsupported_filter_object_type(FilterStateData *fstate,\n+\t\t\t\t\t\t\t\t\tconst char *appname,\n+\t\t\t\t\t\t\t\t\tFilterObjectType fot)\n+{\n+\tPQExpBuffer str = createPQExpBuffer();\n+\n+\tprintfPQExpBuffer(str,\n+\t\t\t\t\t \"The application \\\"%s\\\" doesn't support filter for object type \\\"%s\\\".\",\n\nnit: there shouldn't be uppercase in error messages, especially since this will\nbe appended to another message by log_invalid_filter_format. I would just just\ndrop \"The application\" entirely for brevity.\n\n+/*\n+ * Release allocated resources for filter\n+ */\n+void\n+filter_free(FilterStateData *fstate)\n\nnit: Release allocated resources for *the given* filter?\n\n+ * Search for keywords (limited to ascii alphabetic characters) in\n+ * the passed in line buffer. Returns NULL, when the buffer is empty or first\n+ * char is not alpha. The length of the found keyword is returned in the size\n+ * parameter.\n+ */\n+static const char *\n+filter_get_keyword(const char **line, int *size)\n+{\n+ [...]\n+\tif (isascii(*ptr) && isalpha(*ptr))\n+\t{\n+\t\tresult = ptr++;\n+\n+\t\twhile (isascii(*ptr) && (isalpha(*ptr) || *ptr == '_'))\n+\t\t\tptr++;\n\nIs there any reason to test isascii()? 
isalpha() should already cover that and\nshould be cheaper to test anyway.\n\nAlso nit: \"Returns NULL when the buffer...\" (unnecessary comma), and the '_'\nchar is also allowed.\n\n+filter_read_item(FilterStateData *fstate,\n+\t\t\t\t bool *is_include,\n+\t\t\t\t char **objname,\n+\t\t\t\t FilterObjectType *objtype)\n+{\n+\tAssert(!fstate->is_error);\n+\n+\tif (pg_get_line_buf(fstate->fp, &fstate->linebuff))\n+\t{\n+\t\tchar\t *str = fstate->linebuff.data;\n+\t\tconst char *keyword;\n+\t\tint\t\t\tsize;\n+\n+\t\tfstate->lineno++;\n+\n+\t\t(void) pg_strip_crlf(str);\n+\n+\t\t/* Skip initial white spaces */\n+\t\twhile (isspace(*str))\n+\t\t\tstr++;\n+[...]\n+\t\t\tkeyword = filter_get_keyword((const char **) &str, &size);\n\nIs there any interest with the initial pg_strip_crlf? AFAICT all the rest of\nthe code will ignore such caracters using isspace() so it wouldn't change\nanything.\n\nDropping both pg_strip_crlf() would allow you to declare str as const rather\nthan doing it in function calls. It would require to add const qualifiers in a\nfew other places, but it seems like an improvement, as for instance right now\nfilter_get_pattern is free to rewrite the str (because it's also calling\npg_strip_crlf, but there's no guarantee that it doesn't do anything else).\n\n+/*\n+ * filter_get_pattern - Read an object identifier pattern from the buffer\n+ *\n+ * Parses an object identifier pattern from the passed in buffer and sets\n+ * objname to a string with object identifier pattern. Returns pointer to the\n+ * first character after the pattern. Returns NULL on error.\n+ */\n+static char *\n+filter_get_pattern(FilterStateData *fstate,\n\nnit: suggestion to reword the comment, maybe something like\n\n/*\n * filter_get_pattern - Identify an object identifier pattern\n *\n * Try to parse an object identifier pattern from the passed buffer. 
If one is\n * found, it sets objname to a string with the object identifier pattern and\n * returns a pointer to the first byte after the found pattern. Otherwise NULL\n * is returned.\n */\n\n+bool\n+filter_read_item(FilterStateData *fstate,\n\nAnother suggestion for comment rewrite:\n\n/*-------------------\n * filter_read_item - Read command/type/pattern triplet from a filter file\n *\n * This will parse one filter item from the filter file, and while it is a\n * row based format a pattern may span more than one line due to how object\n * names can be constructed. The expected format of the filter file is:\n *\n * <command> <object_type> <pattern>\n *\n * command can be \"include\" or \"exclude\"\n * object_type can one of: \"table\", \"schema\", \"foreign_data\", \"data\",\n * \"database\", \"function\", \"trigger\" or \"index\"\n * pattern can be any possibly-quoted and possibly-qualified identifier. It\n * follows the same rules as other object include and exclude functions so it\n * can also use wildcards.\n *\n * Returns true when one filter item was successfully read and parsed. When\n * object name contains \\n chars, then more than one line from input file can\n * be processed. Returns false when the filter file reaches EOF. In case of\n * error, the function will emit an appropriate error message before returning\n * false.\n */\n\nNote also that your original comment said:\n+ * In case of\n+ * errors, the function wont return but will exit with an appropriate error\n+ * message.\n\nBut AFAICS that's not the case: it will indeed log an appropriate error message\nbut will return false. I'm assuming that the comment was outdated as the\ncalling code handles it just fine, so I just modified the comment.\n\nfilter.h:\n\n+#ifndef FILTER_H\n+#define FILTER_H\n+#include \"c.h\"\n\nIt's definitely not ok to include .ch in frontend code. But AFAICS just\nremoving it doesn't cause any problem. 
Note also that there should be an empty\nline after the #define FILTER_H per usual coding style.\n\n\n", "msg_date": "Thu, 3 Nov 2022 12:09:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "čt 3. 11. 2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Wed, Oct 26, 2022 at 06:26:26AM +0200, Pavel Stehule wrote:\n> >\n> > út 18. 10. 2022 v 11:33 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> > napsal:\n> >\n> > >\n> > > I'm wondering if psql's parse_identifier() could be exported and reused\n> > > here\n> > > rather than creating yet another version.\n> > >\n> >\n> > I looked there, and I don't think this parser is usable for this purpose.\n> > It is very sensitive on white spaces, and doesn't support multi-lines.\n> It\n> > is designed for support readline tab complete, it is designed for\n> > simplicity not for correctness.\n>\n> Ah, sorry I should have checked more thoroughly. I guess it's ok to have\n> another identifier parser for the include file then, as this new one\n> wouldn't\n> really fit the tab-completion use case.\n>\n> > > also, is there any reason why this function doesn't call exit_nicely in\n> > > case of\n> > > error rather than letting each caller do it without any other cleanup?\n> > >\n> >\n> > It is commented few lines up\n> >\n> > /*\n> > * Following routines are called from pg_dump, pg_dumpall and pg_restore.\n> > * Unfortunatelly, implementation of exit_nicely in pg_dump and\n> pg_restore\n> > * is different from implementation of this rutine in pg_dumpall. 
So\n> instead\n> > * direct calling exit_nicely we have to return some error flag (in this\n> > * case NULL), and exit_nicelly will be executed from caller's routine.\n> > */\n>\n> Oh right, I totally missed it sorry about that!\n>\n> About the new version, I didn't find any problem with the feature itself so\n> it's a good thing!\n>\n> I still have a few comments about the patch. First, about the behavior:\n>\n> - is that ok to have just \"data\" pattern instead of \"table_data\" or\n> something\n> like that, since it's supposed to match --exclude-table-data option?\n>\n\ndone\n\n\n>\n> - the error message are sometimes not super helpful. For instance:\n>\n> $ echo \"include data t1\" | pg_dump --filter -\n> pg_dump: error: invalid format of filter on line 1: include filter is not\n> allowed for this type of object\n>\n> It would be nice if the error message mentioned \"data\" rather than a\n> generic\n> \"this type of object\". Also, maybe we should quote \"include\" to outline\n> that\n> we found this keyword?\n>\n>\ndone\n\n\n\n> About the patch itself:\n> filter.c:\n>\n> +#include \"postgres_fe.h\"\n> +\n> +#include \"filter.h\"\n> +\n> +#include \"common/logging.h\"\n>\n> the filter.h inclusion should be done with the rest of the includes, in\n> alphabetical order.\n>\n>\ndone\n\n\n\n\n> +#define is_keyword_str(cstr, str, bytes) \\\n> + ((strlen(cstr) == bytes) && (pg_strncasecmp(cstr, str, bytes) ==\n> 0))\n>\n> nit: our guidline is to protect macro arguments with parenthesis. Some\n> arguments can be evaluated multiple times but I don't think it's worth\n> adding a\n> comment for that.\n>\n>\ndone\n\n\n> + * Unfortunatelly, implementation of exit_nicely in pg_dump and pg_restore\n> + * is different from implementation of this rutine in pg_dumpall. 
So\n> instead\n> + * direct calling exit_nicely we have to return some error flag (in this\n>\n> typos: s/Unfortunatelly/Unfortunately/ and s/rutine/routine/\n> Also, it would probably be better to say \"instead of directly calling...\"\n>\n>\ndone\n\n\n> +static const char *\n> +filter_object_type_name(FilterObjectType fot)\n> +{\n> + switch (fot)\n> + {\n> + case FILTER_OBJECT_TYPE_NONE:\n> + return \"comment or empty line\";\n> +[...]\n> + }\n> +\n> + return \"unknown object type\";\n> +}\n>\n> I'm wondering if we should add a pg_unreachable() there, some compilers\n> might\n> complain otherwise. See CreateDestReceiver() for instance for similar\n> pattern.\n>\n\ndone\n\n\n>\n> + * Emit error message \"invalid format of filter file ...\"\n> + *\n> + * This is mostly a convenience routine to avoid duplicating file closing\n> code\n> + * in multiple callsites.\n> + */\n> +void\n> +log_invalid_filter_format(FilterStateData *fstate, char *message)\n>\n> nit: invalid format *in* filter file...?\n>\n\nchanged\n\n\n>\n> +void\n> +log_unsupported_filter_object_type(FilterStateData *fstate,\n> +\n> const char *appname,\n> +\n> FilterObjectType fot)\n> +{\n> + PQExpBuffer str = createPQExpBuffer();\n> +\n> + printfPQExpBuffer(str,\n> + \"The application \\\"%s\\\" doesn't\n> support filter for object type \\\"%s\\\".\",\n>\n> nit: there shouldn't be uppercase in error messages, especially since this\n> will\n> be appended to another message by log_invalid_filter_format. I would just\n> just\n> drop \"The application\" entirely for brevity.\n>\n\nchanged\n\n\n>\n> +/*\n> + * Release allocated resources for filter\n> + */\n> +void\n> +filter_free(FilterStateData *fstate)\n>\n> nit: Release allocated resources for *the given* filter?\n>\n\nchanged\n\n>\n> + * Search for keywords (limited to ascii alphabetic characters) in\n> + * the passed in line buffer. Returns NULL, when the buffer is empty or\n> first\n> + * char is not alpha. 
The length of the found keyword is returned in the\n> size\n> + * parameter.\n> + */\n> +static const char *\n> +filter_get_keyword(const char **line, int *size)\n> +{\n> + [...]\n> + if (isascii(*ptr) && isalpha(*ptr))\n> + {\n> + result = ptr++;\n> +\n> + while (isascii(*ptr) && (isalpha(*ptr) || *ptr == '_'))\n> + ptr++;\n>\n> Is there any reason to test isascii()? isalpha() should already cover\n> that and\n> should be cheaper to test anyway.\n>\n\nchanged. I wanted to limit keyword's char just for basic ascii alphabets,\nbut the benefit probably is not too strong, and the real effect can be\nmessy, so I removed isascii test\n\n\n>\n> Also nit: \"Returns NULL when the buffer...\" (unnecessary comma), and the\n> '_'\n> char is also allowed.\n>\n\ndone\n\n\n>\n> +filter_read_item(FilterStateData *fstate,\n> + bool *is_include,\n> + char **objname,\n> + FilterObjectType *objtype)\n> +{\n> + Assert(!fstate->is_error);\n> +\n> + if (pg_get_line_buf(fstate->fp, &fstate->linebuff))\n> + {\n> + char *str = fstate->linebuff.data;\n> + const char *keyword;\n> + int size;\n> +\n> + fstate->lineno++;\n> +\n> + (void) pg_strip_crlf(str);\n> +\n> + /* Skip initial white spaces */\n> + while (isspace(*str))\n> + str++;\n> +[...]\n> + keyword = filter_get_keyword((const char **) &str,\n> &size);\n>\n> Is there any interest with the initial pg_strip_crlf? AFAICT all the rest\n> of\n> the code will ignore such caracters using isspace() so it wouldn't change\n> anything.\n>\n\nI think reading multiline identifiers is a little bit easier, because I\ndon't need to check the ending \\n and \\r\nWhen I read multiline identifiers, I cannot ignore white spaces.\n\n\n\n\n> Dropping both pg_strip_crlf() would allow you to declare str as const\n> rather\n> than doing it in function calls. 
It would require to add const qualifiers\n> in a\n> few other places, but it seems like an improvement, as for instance right\n> now\n> filter_get_pattern is free to rewrite the str (because it's also calling\n> pg_strip_crlf, but there's no guarantee that it doesn't do anything else).\n>\n> +/*\n> + * filter_get_pattern - Read an object identifier pattern from the buffer\n> + *\n> + * Parses an object identifier pattern from the passed in buffer and sets\n> + * objname to a string with object identifier pattern. Returns pointer to\n> the\n> + * first character after the pattern. Returns NULL on error.\n> + */\n> +static char *\n> +filter_get_pattern(FilterStateData *fstate,\n>\n> nit: suggestion to reword the comment, maybe something like\n>\n> /*\n> * filter_get_pattern - Identify an object identifier pattern\n> *\n> * Try to parse an object identifier pattern from the passed buffer. If\n> one is\n> * found, it sets objname to a string with the object identifier pattern\n> and\n> * returns a pointer to the first byte after the found pattern. Otherwise\n> NULL\n> * is returned.\n> */\n>\n>\nreplaced\n\n\n> +bool\n> +filter_read_item(FilterStateData *fstate,\n>\n> Another suggestion for comment rewrite:\n>\n> /*-------------------\n> * filter_read_item - Read command/type/pattern triplet from a filter file\n> *\n> * This will parse one filter item from the filter file, and while it is a\n> * row based format a pattern may span more than one line due to how object\n> * names can be constructed. 
The expected format of the filter file is:\n> *\n> * <command> <object_type> <pattern>\n> *\n> * command can be \"include\" or \"exclude\"\n> * object_type can one of: \"table\", \"schema\", \"foreign_data\", \"data\",\n> * \"database\", \"function\", \"trigger\" or \"index\"\n> * pattern can be any possibly-quoted and possibly-qualified identifier.\n> It\n> * follows the same rules as other object include and exclude functions so\n> it\n> * can also use wildcards.\n> *\n> * Returns true when one filter item was successfully read and parsed.\n> When\n> * object name contains \\n chars, then more than one line from input file\n> can\n> * be processed. Returns false when the filter file reaches EOF. In case\n> of\n> * error, the function will emit an appropriate error message before\n> returning\n> * false.\n> */\n>\n>\nreplaced, thank you for the text\n\n\n> Note also that your original comment said:\n> + * In case of\n> + * errors, the function wont return but will exit with an appropriate\n> error\n> + * message.\n>\n> But AFAICS that's not the case: it will indeed log an appropriate error\n> message\n> but will return false. I'm assuming that the comment was outdated as the\n> calling code handles it just fine, so I just modified the comment.\n>\n\nyes\n\n\n>\n> filter.h:\n>\n> +#ifndef FILTER_H\n> +#define FILTER_H\n> +#include \"c.h\"\n>\n> It's definitely not ok to include .ch in frontend code. But AFAICS just\n> removing it doesn't cause any problem. 
Note also that there should be an\n> empty\n> line after the #define FILTER_H per usual coding style.\n>\n\nfixed - it looks so it was some garbage\n\nupdated patch attached\n\nbig thanks for these comments and tips\n\nRegards\n\nPavel", "msg_date": "Thu, 3 Nov 2022 22:22:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Nov 03, 2022 at 10:22:15PM +0100, Pavel Stehule wrote:\n> čt 3. 11. 2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n>\n> >\n> > Is there any interest with the initial pg_strip_crlf? AFAICT all the rest\n> > of\n> > the code will ignore such caracters using isspace() so it wouldn't change\n> > anything.\n> >\n>\n> I think reading multiline identifiers is a little bit easier, because I\n> don't need to check the ending \\n and \\r\n> When I read multiline identifiers, I cannot ignore white spaces.\n\nOk. I don't have a strong objection to it.\n\n>\n> updated patch attached\n>\n> big thanks for these comments and tips\n\nThanks for the updated patch! As far as I'm concerned the patch is in a good\nshape, passes the CI and I don't have anything more to say so I'm marking it as\nReady for Committer!\n\n\n", "msg_date": "Fri, 4 Nov 2022 19:59:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Fri, Nov 04, 2022 at 07:59:01PM +0800, Julien Rouhaud wrote:\n> On Thu, Nov 03, 2022 at 10:22:15PM +0100, Pavel Stehule wrote:\n> > čt 3. 11. 2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> > updated patch attached\n> >\n> > big thanks for these comments and tips\n> \n> Thanks for the updated patch! 
As far as I'm concerned the patch is in a good\n> shape, passes the CI and I don't have anything more to say so I'm marking it as\n> Ready for Committer!\n\n+1\n\nI started looking to see if it's possible to simplify the patch at all,\nbut nothing to show yet.\n\nBut one thing I noticed is that \"optarg\" looks wrong here:\n\nsimple_string_list_append(&opts->triggerNames, optarg);\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 4 Nov 2022 07:19:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Fri, Nov 04, 2022 at 07:19:27AM -0500, Justin Pryzby wrote:\n> On Fri, Nov 04, 2022 at 07:59:01PM +0800, Julien Rouhaud wrote:\n> > On Thu, Nov 03, 2022 at 10:22:15PM +0100, Pavel Stehule wrote:\n> > > čt 3. 11. 2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> > > updated patch attached\n> > >\n> > > big thanks for these comments and tips\n> > \n> > Thanks for the updated patch! As far as I'm concerned the patch is in a good\n> > shape, passes the CI and I don't have anything more to say so I'm marking it as\n> > Ready for Committer!\n> \n> +1\n> \n> I started looking to see if it's possible to simplify the patch at all,\n> but nothing to show yet.\n> \n> But one thing I noticed is that \"optarg\" looks wrong here:\n> \n> simple_string_list_append(&opts->triggerNames, optarg);\n\nAh indeed, good catch! Maybe there should be an explicit test for every\n(include|exclude) / objtype combination? It would be a bit verbose (and\npossibly hard to maintain).\n\n\n", "msg_date": "Fri, 4 Nov 2022 21:28:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 4. 11. 
2022 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Fri, Nov 04, 2022 at 07:19:27AM -0500, Justin Pryzby wrote:\n> > On Fri, Nov 04, 2022 at 07:59:01PM +0800, Julien Rouhaud wrote:\n> > > On Thu, Nov 03, 2022 at 10:22:15PM +0100, Pavel Stehule wrote:\n> > > > čt 3. 11. 2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> > > > updated patch attached\n> > > >\n> > > > big thanks for these comments and tips\n> > >\n> > > Thanks for the updated patch! As far as I'm concerned the patch is in\n> a good\n> > > shape, passes the CI and I don't have anything more to say so I'm\n> marking it as\n> > > Ready for Committer!\n> >\n> > +1\n> >\n> > I started looking to see if it's possible to simplify the patch at all,\n> > but nothing to show yet.\n> >\n> > But one thing I noticed is that \"optarg\" looks wrong here:\n> >\n> > simple_string_list_append(&opts->triggerNames, optarg);\n>\n> Ah indeed, good catch! Maybe there should be an explicit test for every\n> (include|exclude) / objtype combination? It would be a bit verbose (and\n> possibly hard to maintain).\n>\n\nI'll do it", "msg_date": "Fri, 4 Nov 2022 14:37:05 +0100", 
2022 v 5:09 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> > > > updated patch attached\n> > > >\n> > > > big thanks for these comments and tips\n> > >\n> > > Thanks for the updated patch! As far as I'm concerned the patch is in\n> a good\n> > > shape, passes the CI and I don't have anything more to say so I'm\n> marking it as\n> > > Ready for Committer!\n> >\n> > +1\n> >\n> > I started looking to see if it's possible to simplify the patch at all,\n> > but nothing to show yet.\n> >\n> > But one thing I noticed is that \"optarg\" looks wrong here:\n> >\n> > simple_string_list_append(&opts->triggerNames, optarg);\n>\n> Ah indeed, good catch! Maybe there should be an explicit test for every\n> (include|exclude) / objtype combination? It would be a bit verbose (and\n> possibly hard to maintain).\n>\n\nyes - pg_restore is not well covered by tests, fixed\n\nI found another issue. The pg_restore requires a full signature of the\nfunction and it is pretty sensitive on white spaces (pg_restore). I made a\nmistake when I partially parsed patterns like SQL identifiers. It can work\nfor simple cases, but when I parse the function's signature it stops\nworking. So I rewrote the parsing pattern part. Now, I just read an input\nstring and I try to reduce spaces. Still multiline identifiers are\nsupported. Against the previous method of pattern parsing, I needed to\nchange just one regress test - now I am not able to detect garbage after\npattern :-/. It is possible to enter types like \"double precision\" or\n\"timestamp with time zone\", without needing to check it on the server side.\n\nWhen I wroted regress tests I found some issues of pg_restore filtering\noptions (not related to this patch)\n\n* function's filtering doesn't support schema - when the name of function\nis specified with schema, then the function is not found\n\n* the function has to be specified with an argument type list - the\nseparator has to be exactly \", \" string. 
Without space or with one space\nmore, the filtering doesn't work (new implementation of pattern parsing\nreduces white spaces sensitivity). This is not a bug, but it is not well\ndocumented.\n\n* the trigger filtering is probably broken (on pg_restore side). The name\nshould be entered in form \"tablename triggername\"\n\nattached updated patch\n\nRegards\n\nPavel", "msg_date": "Sat, 5 Nov 2022 20:54:57 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn Sat, Nov 05, 2022 at 08:54:57PM +0100, Pavel Stehule wrote:\n>\n> pá 4. 11. 2022 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n> > > But one thing I noticed is that \"optarg\" looks wrong here:\n> > >\n> > > simple_string_list_append(&opts->triggerNames, optarg);\n> >\n> > Ah indeed, good catch!  Maybe there should be an explicit test for every\n> > (include|exclude) / objtype combination?  It would be a bit verbose (and\n> > possibly hard to maintain).\n> >\n>\n> yes - pg_restore is not well covered by  tests, fixed\n>\n> I found another issue. The pg_restore requires a full signature of the\n> function and it is pretty sensitive on white spaces (pg_restore).\n\nArgh, indeed.  It's a good thing to have expanded the regression tests :)\n\n> I made a\n> mistake when I partially parsed patterns like SQL identifiers. It can work\n> for simple cases, but when I parse the function's signature it stops\n> working. So I rewrote the parsing pattern part. Now, I just read an input\n> string and I try to reduce spaces. Still multiline identifiers are\n> supported. Against the previous method of pattern parsing, I needed to\n> change just one regress test - now I am not able to detect garbage after\n> pattern :-/.\n\nI'm not sure it's really problematic. 
It looks POLA-violation compatible with\nregular pg_dump options, for instance:\n\n$ echo \"include table t1()\" | pg_dump --filter - | ag CREATE\nCREATE TABLE public.t1 (\n\n$ pg_dump -t \"t1()\" | ag CREATE\nCREATE TABLE public.t1 (\n\n$ echo \"include table t1()blabla\" | pg_dump --filter - | ag CREATE\npg_dump: error: no matching tables were found\n\n$ pg_dump -t \"t1()blabla\" | ag CREATE\npg_dump: error: no matching tables were found\n\nI don't think the file parsing code should try to be smart about checking the\nfound patterns.\n\n> * function's filtering doesn't support schema - when the name of function\n> is specified with schema, then the function is not found\n\nAh I didn't know that. Indeed it only expect a non-qualified identifier, and\nwould restore any function that matches the name (and arguments), possibly\nmultiple ones if there are variants in different schema. That's unrelated to\nthis patch though.\n\n> * the function has to be specified with an argument type list - the\n> separator has to be exactly \", \" string. Without space or with one space\n> more, the filtering doesn't work (new implementation of pattern parsing\n> reduces white spaces sensitivity). This is not a bug, but it is not well\n> documented.\n\nAgreed.\n\n> attached updated patch\n\nIt looks overall good to me! I just have a few minor nitpicking complaints:\n\n- you removed the pg_strip_clrf() calls and declared everything as \"const char\n *\", so there's no need to explicitly cast the filter_get_keyword() arguments\n anymore\n\nNote also that the code now relies on the fact that there are some non-zero\nbytes after a pattern to know that no errors happened. It's not a problem as\nyou should find an EOF marker anyway if CLRF were stripped.\n\n+ * Following routines are called from pg_dump, pg_dumpall and pg_restore.\n+ * Unfortunately, implementation of exit_nicely in pg_dump and pg_restore\n+ * is different from implementation of this routine in pg_dumpall. 
So instead\n+ * of directly calling exit_nicely we have to return some error flag (in this\n+ * case NULL), and exit_nicelly will be executed from caller's routine.\n\nSlight improvement:\n[...]\nUnfortunately, the implementation of exit_nicely in pg_dump and pg_restore is\ndifferent from the one in pg_dumpall, so instead of...\n\n+ * read_pattern - reads an pattern from input. The pattern can be mix of\n+ * single line or multi line subpatterns. Single line subpattern starts first\n+ * non white space char, and ending last non space char on line or by char\n+ * '#'. The white spaces inside are removed (around char \".()\"), or reformated\n+ * around char ',' or reduced (the multiple spaces are replaced by one).\n+ * Multiline subpattern starts by double quote and ending by this char too.\n+ * The escape rules are same like for SQL quoted literal.\n+ *\n+ * Routine signalizes error by returning NULL. Otherwise returns pointer\n+ * to next char after last processed char in input string.\n\n\ntypo: reads \"a\" pattern from input...\n\nI don't fully understand the part about subpatterns, but is that necessary to\ndescribe it? Simply saying that any valid and possibly-quoted identifier can\nbe parsed should make it clear that identifiers containing \\n characters should\nwork too. Maybe also just mention that whitespaces are removed and special\ncare is taken to output routines in exactly the same way calling code will\nexpect it (that is comma-and-single-space type delimiter).\n\n\n", "msg_date": "Fri, 11 Nov 2022 16:11:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "pá 11. 11. 2022 v 9:11 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Sat, Nov 05, 2022 at 08:54:57PM +0100, Pavel Stehule wrote:\n> >\n> > pá 4. 11. 
2022 v 14:28 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> > napsal:\n> >\n> > > > But one thing I noticed is that \"optarg\" looks wrong here:\n> > > >\n> > > > simple_string_list_append(&opts->triggerNames, optarg);\n> > >\n> > > Ah indeed, good catch! Maybe there should be an explicit test for\n> every\n> > > (include|exclude) / objtype combination? It would be a bit verbose\n> (and\n> > > possibly hard to maintain).\n> > >\n> >\n> > yes - pg_restore is not well covered by tests, fixed\n> >\n> > I found another issue. The pg_restore requires a full signature of the\n> > function and it is pretty sensitive on white spaces (pg_restore).\n>\n> Argh, indeed. It's a good thing to have expanded the regression tests :)\n>\n> > I made a\n> > mistake when I partially parsed patterns like SQL identifiers. It can\n> work\n> > for simple cases, but when I parse the function's signature it stops\n> > working. So I rewrote the parsing pattern part. Now, I just read an input\n> > string and I try to reduce spaces. Still multiline identifiers are\n> > supported. Against the previous method of pattern parsing, I needed to\n> > change just one regress test - now I am not able to detect garbage after\n> > pattern :-/.\n>\n> I'm not sure it's really problematic. 
It looks POLA-violation compatible\n> with\n> regular pg_dump options, for instance:\n>\n> $ echo \"include table t1()\" | pg_dump --filter - | ag CREATE\n> CREATE TABLE public.t1 (\n>\n> $ pg_dump -t \"t1()\" | ag CREATE\n> CREATE TABLE public.t1 (\n>\n> $ echo \"include table t1()blabla\" | pg_dump --filter - | ag CREATE\n> pg_dump: error: no matching tables were found\n>\n> $ pg_dump -t \"t1()blabla\" | ag CREATE\n> pg_dump: error: no matching tables were found\n>\n> I don't think the file parsing code should try to be smart about checking\n> the\n> found patterns.\n>\n> > * function's filtering doesn't support schema - when the name of function\n> > is specified with schema, then the function is not found\n>\n> Ah I didn't know that. Indeed it only expect a non-qualified identifier,\n> and\n> would restore any function that matches the name (and arguments), possibly\n> multiple ones if there are variants in different schema. That's unrelated\n> to\n> this patch though.\n>\n> > * the function has to be specified with an argument type list - the\n> > separator has to be exactly \", \" string. Without space or with one space\n> > more, the filtering doesn't work (new implementation of pattern parsing\n> > reduces white spaces sensitivity). This is not a bug, but it is not well\n> > documented.\n>\n> Agreed.\n>\n> > attached updated patch\n>\n> It looks overall good to me! I just have a few minor nitpicking\n> complaints:\n>\n> - you removed the pg_strip_clrf() calls and declared everything as \"const\n> char\n> *\", so there's no need to explicitly cast the filter_get_keyword()\n> arguments\n> anymore\n>\n\nremoved\n\n\n>\n> Note also that the code now relies on the fact that there are some non-zero\n> bytes after a pattern to know that no errors happened. 
It's not a problem\n> as\n> you should find an EOF marker anyway if CLRF were stripped.\n>\n\nI am not sure if I understand this note well?\n\n\n>\n> + * Following routines are called from pg_dump, pg_dumpall and pg_restore.\n> + * Unfortunately, implementation of exit_nicely in pg_dump and pg_restore\n> + * is different from implementation of this routine in pg_dumpall. So\n> instead\n> + * of directly calling exit_nicely we have to return some error flag (in\n> this\n> + * case NULL), and exit_nicelly will be executed from caller's routine.\n>\n> Slight improvement:\n> [...]\n> Unfortunately, the implementation of exit_nicely in pg_dump and pg_restore\n> is\n> different from the one in pg_dumpall, so instead of...\n>\n> + * read_pattern - reads an pattern from input. The pattern can be mix of\n> + * single line or multi line subpatterns. Single line subpattern starts\n> first\n> + * non white space char, and ending last non space char on line or by char\n> + * '#'. The white spaces inside are removed (around char \".()\"), or\n> reformated\n> + * around char ',' or reduced (the multiple spaces are replaced by one).\n> + * Multiline subpattern starts by double quote and ending by this char\n> too.\n> + * The escape rules are same like for SQL quoted literal.\n> + *\n> + * Routine signalizes error by returning NULL. Otherwise returns pointer\n> + * to next char after last processed char in input string.\n>\n>\n> typo: reads \"a\" pattern from input...\n>\n\nfixed\n\n\n>\n> I don't fully understand the part about subpatterns, but is that necessary\n> to\n> describe it? Simply saying that any valid and possibly-quoted identifier\n> can\n> be parsed should make it clear that identifiers containing \\n characters\n> should\n> work too. 
Maybe also just mention that whitespaces are removed and special\n> care is taken to output routines in exactly the same way calling code will\n> expect it (that is comma-and-single-space type delimiter).\n>\n\nIn this case I hit the limits of my English language skills.\n\nI rewrote this comment, but it needs more care. Please, can you look at it?", "msg_date": "Sat, 12 Nov 2022 21:35:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nand updated patch attached\n\nRegards\n\nPavel", "msg_date": "Sat, 12 Nov 2022 21:37:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Sat, Nov 12, 2022 at 09:35:59PM +0100, Pavel Stehule wrote:\n\nThanks for the updated patch.  Apart from the function comment it looks good to\nme.\n\nJustin, did you have any other comment on the patch?\n\n> > I don't fully understand the part about subpatterns, but is that necessary\n> > to\n> > describe it?  Simply saying that any valid and possibly-quoted identifier\n> > can\n> > be parsed should make it clear that identifiers containing \\n characters\n> > should\n> > work too.  Maybe also just mention that whitespaces are removed and special\n> > care is taken to output routines in exactly the same way calling code will\n> > expect it (that is comma-and-single-space type delimiter).\n> >\n>\n> In this case I hit the limits of my English language skills.\n>\n> I rewrote this comment, but it needs more care. Please, can you look at it?\n\nI'm also not a native English speaker so I'm far for writing perfect comments\nmyself :)\n\nMaybe something like\n\n/*\n * read_pattern - reads on object pattern from input\n *\n * This function will parse any valid identifier (quoted or not, qualified or\n * not), which can also includes the full signature for routines.\n * Note that this function takes special care to sanitize the detected\n * identifier (removing extraneous whitespaces or other unnecessary\n * characters). 
This is necessary as most backup/restore filtering functions\n * only recognize identifiers if they are written exactly way as they are\n * regenerated.\n * Returns a pointer to next character after the found identifier, or NULL on\n * error.\n */\n\n\n", "msg_date": "Sun, 13 Nov 2022 16:58:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 13. 11. 2022 v 9:58 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Sat, Nov 12, 2022 at 09:35:59PM +0100, Pavel Stehule wrote:\n>\n> Thanks for the updated patch. Apart from the function comment it looks\n> good to\n> me.\n>\n> Justin, did you have any other comment on the patch?\n>\n> > > I don't fully understand the part about subpatterns, but is that\n> necessary\n> > > to\n> > > describe it? Simply saying that any valid and possibly-quoted\n> identifier\n> > > can\n> > > be parsed should make it clear that identifiers containing \\n\n> characters\n> > > should\n> > > work too. Maybe also just mention that whitespaces are removed and\n> special\n> > > care is taken to output routines in exactly the same way calling code\n> will\n> > > expect it (that is comma-and-single-space type delimiter).\n> > >\n> >\n> > In this case I hit the limits of my English language skills.\n> >\n> > I rewrote this comment, but it needs more care. 
Please, can you look at\n> it?\n>\n> I'm also not a native English speaker so I'm far for writing perfect\n> comments\n> myself :)\n>\n\nfar better than mine :)\n\nThank you very much\n\nupdated patch attached\n\nRegards\n\nPavel\n\n\n>\n> Maybe something like\n>\n> /*\n> * read_pattern - reads on object pattern from input\n> *\n> * This function will parse any valid identifier (quoted or not, qualified\n> or\n> * not), which can also includes the full signature for routines.\n> * Note that this function takes special care to sanitize the detected\n> * identifier (removing extraneous whitespaces or other unnecessary\n> * characters). This is necessary as most backup/restore filtering\n> functions\n> * only recognize identifiers if they are written exactly way as they are\n> * regenerated.\n> * Returns a pointer to next character after the found identifier, or NULL\n> on\n> * error.\n> */\n>", "msg_date": "Sun, 13 Nov 2022 20:32:47 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn Sun, Nov 13, 2022 at 08:32:47PM +0100, Pavel Stehule wrote:\n>\n> updated patch attached\n\nThanks!\n\nSome enhancement could probably be done by a native english speaker, but apart\nfrom that it looks good to me, so hearing no other complaints I'm marking the\nCF entry as Ready for Committer!\n\n\n", "msg_date": "Tue, 22 Nov 2022 13:25:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 22. 11. 
2022 v 6:26 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> Hi,\n>\n> On Sun, Nov 13, 2022 at 08:32:47PM +0100, Pavel Stehule wrote:\n> >\n> > updated patch attached\n>\n> Thanks!\n>\n> Some enhancement could probably be done by a native english speaker, but\n> apart\n> from that it looks good to me, so hearing no other complaints I'm marking\n> the\n> CF entry as Ready for Committer!\n>\n\nThank you very much for check and help\n\nRegards\n\nPavel", "msg_date": "Tue, 22 Nov 2022 06:30:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn 2022-11-13 20:32:47 +0100, Pavel Stehule wrote:\n> updated patch attached\n\nIt fails with address sanitizer that's now part of CI:\n\nhttps://cirrus-ci.com/task/6031397744279552?logs=test_world#L2659\n\n[06:33:11.271] # ==31965==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x619000000480 at pc 0x559f1ac40822 bp 0x7ffea83e1ad0 sp 0x7ffea83e1ac8\n[06:33:11.271] # READ of size 1 at 0x619000000480 thread T0\n[06:33:11.271] #     #0 0x559f1ac40821 in read_pattern /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:302\n[06:33:11.271] #     #1 0x559f1ac40e4d in filter_read_item /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:459\n[06:33:11.271] #     #2 0x559f1abe6fa5 in read_dump_filters /tmp/cirrus-ci-build/src/bin/pg_dump/pg_dump.c:18229\n[06:33:11.271] #     #3 0x559f1ac2bb1b in main /tmp/cirrus-ci-build/src/bin/pg_dump/pg_dump.c:630\n[06:33:11.271] #     #4 
0x7fd91fabfd09 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n[06:33:11.271] # #5 0x559f1abe5d29 in _start (/tmp/cirrus-ci-build/tmp_install/usr/local/pgsql/bin/pg_dump+0x39d29)\n[06:33:11.271] # \n[06:33:11.271] # 0x619000000480 is located 0 bytes to the right of 1024-byte region [0x619000000080,0x619000000480)\n[06:33:11.271] # allocated by thread T0 here:\n[06:33:11.271] # #0 0x7fd91fe14e8f in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145\n[06:33:11.271] # #1 0x559f1ac69f35 in pg_malloc_internal /tmp/cirrus-ci-build/src/common/fe_memutils.c:30\n[06:33:11.271] # #2 0x559f1ac69f35 in palloc /tmp/cirrus-ci-build/src/common/fe_memutils.c:117\n[06:33:11.271] # \n[06:33:11.271] # SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:302 in read_pattern\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Nov 2022 23:39:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 22. 11. 
2022 v 8:39 odesílatel Andres Freund <andres@anarazel.de> napsal:\n\n> Hi,\n>\n> On 2022-11-13 20:32:47 +0100, Pavel Stehule wrote:\n> > updated patch attached\n>\n> It fails with address sanitizer that's now part of CI:\n>\n> https://cirrus-ci.com/task/6031397744279552?logs=test_world#L2659\n>\n> [06:33:11.271] # ==31965==ERROR: AddressSanitizer: heap-buffer-overflow on\n> address 0x619000000480 at pc 0x559f1ac40822 bp 0x7ffea83e1ad0 sp\n> 0x7ffea83e1ac8\n> [06:33:11.271] # READ of size 1 at 0x619000000480 thread T0\n> [06:33:11.271] #     #0 0x559f1ac40821 in read_pattern\n> /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:302\n> [06:33:11.271] #     #1 0x559f1ac40e4d in filter_read_item\n> /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:459\n> [06:33:11.271] #     #2 0x559f1abe6fa5 in read_dump_filters\n> /tmp/cirrus-ci-build/src/bin/pg_dump/pg_dump.c:18229\n> [06:33:11.271] #     #3 0x559f1ac2bb1b in main\n> /tmp/cirrus-ci-build/src/bin/pg_dump/pg_dump.c:630\n> [06:33:11.271] #     #4 0x7fd91fabfd09 in __libc_start_main\n> (/lib/x86_64-linux-gnu/libc.so.6+0x23d09)\n> [06:33:11.271] #     #5 0x559f1abe5d29 in _start\n> (/tmp/cirrus-ci-build/tmp_install/usr/local/pgsql/bin/pg_dump+0x39d29)\n> [06:33:11.271] #\n> [06:33:11.271] # 0x619000000480 is located 0 bytes to the right of\n> 1024-byte region [0x619000000080,0x619000000480)\n> [06:33:11.271] # allocated by thread T0 here:\n> [06:33:11.271] #     #0 0x7fd91fe14e8f in __interceptor_malloc\n> ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:145\n> [06:33:11.271] #     #1 0x559f1ac69f35 in pg_malloc_internal\n> /tmp/cirrus-ci-build/src/common/fe_memutils.c:30\n> [06:33:11.271] #     #2 0x559f1ac69f35 in palloc\n> /tmp/cirrus-ci-build/src/common/fe_memutils.c:117\n> [06:33:11.271] #\n> [06:33:11.271] # SUMMARY: AddressSanitizer: heap-buffer-overflow\n> /tmp/cirrus-ci-build/src/bin/pg_dump/filter.c:302 in read_pattern\n>\n\nI'll check it\n\n\n>\n>\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Tue, 22 Nov 2022 08:41:24 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, 
read_pattern\n>\n>\nshould be fixed in attached patch\n\nI found and fix small memleak 24bytes per filter row (PQExpBufferData)\n\nRegards\n\nPavel\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Tue, 22 Nov 2022 18:02:21 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "So.... This patch has been through a lot of commitfests. And it really\ndoesn't seem that hard to resolve -- Pavel has seemingly been willing\nto go along whichever way the wind has been blowing but honestly it\nkind of seems like he's just gotten drive-by suggestions and he's put\na lot of work into trying to satisfy them.\n\nHe implemented --include-tables-from-file=... etc. Then he implemented\na hand-written parser for a DSL to select objects, then he implemented\na bison parser, then he went back to the hand-written parser.\n\nCan we get some consensus on whether the DSL looks right and whether\nthe hand-written parser is sensible. And if so then can a committer\nstep up to actual review and commit the patch? The last review said it\nmight need a native English speaker to tweak some wording but\notherwise looked good.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 6 Mar 2023 15:45:31 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 6 Mar 2023, at 21:45, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> \n> So.... This patch has been through a lot of commitfests. 
And it really\n> doesn't seem that hard to resolve -- Pavel has seemingly been willing\n> to go along whichever way the wind has been blowing but honestly it\n> kind of seems like he's just gotten drive-by suggestions and he's put\n> a lot of work into trying to satisfy them.\n\nAgreed.\n\n> He implemented --include-tables-from-file=... etc. Then he implemented\n> a hand-written parser for a DSL to select objects, then he implemented\n> a bison parser, then he went back to the hand-written parser.\n\nWell, kind of. I was trying to take the patch to the finishing line but was\nuncomfortable with the hand written parser so I implemented a parser in Bison\nto replace it with. Not that hand-written parsers are bad per se (or that my\nbison parser was perfect), but reading quoted identifiers across line\nboundaries tend to require a fair amount of handwritten code. Pavel did not\nobject to this version, but it was objected to by two other committers.\n\nAt this point [0] I stepped down from trying to finish it as the approach I was\ncomfortable didn't gain traction (which is totally fine).\n\nDownthread from this the patch got a lot of reviews from Julien with the old\nparser back in place.\n\n> Can we get some consensus on whether the DSL looks right\n\nI would consider this pretty settled.\n\n> and whether the hand-written parser is sensible.\n\nThis is the part where a committer who wants to pursue the hand-written parser\nneed to step up. 
With the amount of review received it's hopefully pretty close.\n\n--\nDaniel Gustafsson\n\n[0] 098531E1-FBA9-4B7D-884E-0A4363EEE6DF@yesql.se\n\n\n\n", "msg_date": "Mon, 6 Mar 2023 22:20:32 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 06, 2023 at 10:20:32PM +0100, Daniel Gustafsson wrote:\n> > On 6 Mar 2023, at 21:45, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> >\n> > So.... This patch has been through a lot of commitfests. And it really\n> > doesn't seem that hard to resolve -- Pavel has seemingly been willing\n> > to go along whichever way the wind has been blowing but honestly it\n> > kind of seems like he's just gotten drive-by suggestions and he's put\n> > a lot of work into trying to satisfy them.\n>\n> Agreed.\n\nIndeed, I'm not sure I would have had that much patience.\n\n> > He implemented --include-tables-from-file=... etc. Then he implemented\n> > a hand-written parser for a DSL to select objects, then he implemented\n> > a bison parser, then he went back to the hand-written parser.\n>\n> Well, kind of. I was trying to take the patch to the finishing line but was\n> uncomfortable with the hand written parser so I implemented a parser in Bison\n> to replace it with. Not that hand-written parsers are bad per se (or that my\n> bison parser was perfect), but reading quoted identifiers across line\n> boundaries tend to require a fair amount of handwritten code. 
Pavel did not\n> object to this version, but it was objected to by two other committers.\n>\n> At this point [0] I stepped down from trying to finish it as the approach I was\n> comfortable didn't gain traction (which is totally fine).\n>\n> Downthread from this the patch got a lot of reviews from Julien with the old\n> parser back in place.\n\nYeah, and the current state seems quite good to me.\n\n> > Can we get some consensus on whether the DSL looks right\n>\n> I would consider this pretty settled.\n\nAgreed.\n\n\n", "msg_date": "Tue, 7 Mar 2023 10:47:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 7. 3. 2023 v 3:47 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Mon, Mar 06, 2023 at 10:20:32PM +0100, Daniel Gustafsson wrote:\n> > > On 6 Mar 2023, at 21:45, Gregory Stark (as CFM) <stark.cfm@gmail.com>\n> wrote:\n> > >\n> > > So.... This patch has been through a lot of commitfests. And it really\n> > > doesn't seem that hard to resolve -- Pavel has seemingly been willing\n> > > to go along whichever way the wind has been blowing but honestly it\n> > > kind of seems like he's just gotten drive-by suggestions and he's put\n> > > a lot of work into trying to satisfy them.\n> >\n> > Agreed.\n>\n> Indeed, I'm not sure I would have had that much patience.\n>\n> > > He implemented --include-tables-from-file=... etc. Then he implemented\n> > > a hand-written parser for a DSL to select objects, then he implemented\n> > > a bison parser, then he went back to the hand-written parser.\n> >\n> > Well, kind of. I was trying to take the patch to the finishing line but\n> was\n> > uncomfortable with the hand written parser so I implemented a parser in\n> Bison\n> > to replace it with. 
Not that hand-written parsers are bad per se (or\n> that my\n> > bison parser was perfect), but reading quoted identifiers across line\n> > boundaries tend to require a fair amount of handwritten code. Pavel did\n> not\n> > object to this version, but it was objected to by two other committers.\n> >\n> > At this point [0] I stepped down from trying to finish it as the\n> approach I was\n> > comfortable didn't gain traction (which is totally fine).\n> >\n> > Downthread from this the patch got a lot of reviews from Julien with the\n> old\n> > parser back in place.\n>\n> Yeah, and the current state seems quite good to me.\n>\n> > > Can we get some consensus on whether the DSL looks right\n> >\n> > I would consider this pretty settled.\n>\n> Agreed.\n>\n\nrebase + enhancing about related option from a563c24\n\nRegards\n\nPavel", "msg_date": "Thu, 16 Mar 2023 13:05:41 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nfresh rebase\n\nregards\n\nPavel", "msg_date": "Sat, 18 Mar 2023 20:48:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> rebase + enhancing about related option from a563c24\n\nThanks.\n\nIt looks like this doesn't currently handle extensions, which were added\nat 6568cef26e.\n\n> + <literal>table_and_children</literal>: tables, works like\n> + <option>-t</option>/<option>--table</option>, except that\n> + it also includes any partitions or inheritance child\n> + tables of the table(s) matching the\n> + <replaceable class=\"parameter\">pattern</replaceable>.\n\nWhy doesn't this just say \"works like --table-and-children\" ?\n\nI think as you wrote log_invalid_filter_format(), the messages wouldn't\nbe translated, 
because they're printed via %s. One option is to call\n_() on the message.\n\n> +ok($dump !=~ qr/^CREATE TABLE public\\.bootab/m, \"exclude dumped children table\");\n\n!=~ is being interpretted as as numeric \"!=\" and throwing warnings.\nIt should be a !~ b, right ?\nIt'd be nice if perl warnings during the tests were less easy to miss.\n\n> + * char is not alpha. The char '_' is allowed too (exclude first position).\n\nWhy is it treated specially? Could it be treated the same as alpha?\n\n> +\t\t\t\tlog_invalid_filter_format(&fstate,\n> +\t\t\t\t\t\t\t\t\t\t \"\\\"include\\\" table data filter is not allowed\");\n> +\t\t\t\tlog_invalid_filter_format(&fstate,\n> +\t\t\t\t\t\t\t\t\t\t \"\\\"include\\\" table data and children filter is not allowed\");\n\nFor these, it might be better to write the literal option:\n\n> +\t\t\t\t\t\t\t\t\t\t \"include filter for \\\"table_data_and_children\\\" is not allowed\");\n\nBecause the option is a literal and shouldn't be translated.\nAnd it's probably better to write that using %s, like:\n\n> +\t\t\t\t\t\t\t\t\t\t \"include filter for \\\"%s\\\" is not allowed\");\n\nThat makes shorter and fewer strings.\n\nFind attached a bunch of other corrections as 0002.txt\n\nI also dug up what I'd started in november, trying to reduce the code\nduplication betwen pg_restore/dump/all. This isn't done, but I might\nnever finish it, so I'll at least show what I have in case you think\nit's a good idea. This passes tests on CI, except for autoconf, due to\nusing exit_nicely() differently.\n\n-- \nJustin", "msg_date": "Sun, 19 Mar 2023 09:01:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 19. 3. 
2023 v 15:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> > rebase + enhancing about related option from a563c24\n>\n> Thanks.\n>\n> It looks like this doesn't currently handle extensions, which were added\n> at 6568cef26e.\n>\n> > + <literal>table_and_children</literal>: tables, works like\n> > + <option>-t</option>/<option>--table</option>, except that\n> > + it also includes any partitions or inheritance child\n> > + tables of the table(s) matching the\n> > + <replaceable class=\"parameter\">pattern</replaceable>.\n>\n> Why doesn't this just say \"works like --table-and-children\" ?\n>\n> I think as you wrote log_invalid_filter_format(), the messages wouldn't\n> be translated, because they're printed via %s. One option is to call\n> _() on the message.\n>\n> > +ok($dump !=~ qr/^CREATE TABLE public\\.bootab/m, \"exclude dumped\n> children table\");\n>\n> !=~ is being interpretted as as numeric \"!=\" and throwing warnings.\n> It should be a !~ b, right ?\n> It'd be nice if perl warnings during the tests were less easy to miss.\n>\n> > + * char is not alpha. The char '_' is allowed too (exclude first\n> position).\n>\n> Why is it treated specially? 
Could it be treated the same as alpha?\n>\n> > + log_invalid_filter_format(&fstate,\n> > +\n> \"\\\"include\\\" table data filter is not allowed\");\n> > + log_invalid_filter_format(&fstate,\n> > +\n> \"\\\"include\\\" table data and children filter is not allowed\");\n>\n> For these, it might be better to write the literal option:\n>\n> > +\n> \"include filter for \\\"table_data_and_children\\\" is not allowed\");\n>\n> Because the option is a literal and shouldn't be translated.\n> And it's probably better to write that using %s, like:\n>\n> > +\n> \"include filter for \\\"%s\\\" is not allowed\");\n>\n> That makes shorter and fewer strings.\n>\n> Find attached a bunch of other corrections as 0002.txt\n>\n\nThank you very much - I'll recheck the mentioned points tomorrow.\n\n\n>\n> I also dug up what I'd started in november, trying to reduce the code\n> duplication betwen pg_restore/dump/all. This isn't done, but I might\n> never finish it, so I'll at least show what I have in case you think\n> it's a good idea. This passes tests on CI, except for autoconf, due to\n> using exit_nicely() differently.\n>\n\nYour implementation reduced 60 lines, but the interface and code is more\ncomplex. I cannot say what is significantly better. Personally, in this\ncase, I prefer my variant, because I think it is a little bit more\nreadable, and possible modification can be more simple. But this is just my\nopinion, and I have no problem accepting other opinions. I can imagine to\ndefine some configuration array like getopt, but it looks like over\nengineering\n\nRegards\n\nPavel\n\n\n> --\n> Justin\n>\n\nne 19. 3. 
2023 v 15:01 odesílatel Justin Pryzby <pryzby@telsasoft.com> napsal:On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> rebase + enhancing about related option from a563c24\n\nThanks.\n\nIt looks like this doesn't currently handle extensions, which were added\nat 6568cef26e.\n\n> +           <literal>table_and_children</literal>: tables, works like\n> +           <option>-t</option>/<option>--table</option>, except that\n> +           it also includes any partitions or inheritance child\n> +           tables of the table(s) matching the\n> +           <replaceable class=\"parameter\">pattern</replaceable>.\n\nWhy doesn't this just say \"works like --table-and-children\" ?\n\nI think as you wrote log_invalid_filter_format(), the messages wouldn't\nbe translated, because they're printed via %s.  One option is to call\n_() on the message.\n\n> +ok($dump !=~ qr/^CREATE TABLE public\\.bootab/m,   \"exclude dumped children table\");\n\n!=~ is being interpretted as as numeric \"!=\" and throwing warnings.\nIt should be a !~ b, right ?\nIt'd be nice if perl warnings during the tests were less easy to miss.\n\n> + * char is not alpha. The char '_' is allowed too (exclude first position).\n\nWhy is it treated specially?  
Could it be treated the same as alpha?\n\n> +                             log_invalid_filter_format(&fstate,\n> +                                                                               \"\\\"include\\\" table data filter is not allowed\");\n> +                             log_invalid_filter_format(&fstate,\n> +                                                                               \"\\\"include\\\" table data and children filter is not allowed\");\n\nFor these, it might be better to write the literal option:\n\n> +                                                                               \"include filter for \\\"table_data_and_children\\\" is not allowed\");\n\nBecause the option is a literal and shouldn't be translated.\nAnd it's probably better to write that using %s, like:\n\n> +                                                                               \"include filter for \\\"%s\\\" is not allowed\");\n\nThat makes shorter and fewer strings.\n\nFind attached a bunch of other corrections as 0002.txtThank you very much - I'll recheck the mentioned points tomorrow. \n\nI also dug up what I'd started in november, trying to reduce the code\nduplication betwen pg_restore/dump/all.  This isn't done, but I might\nnever finish it, so I'll at least show what I have in case you think\nit's a good idea.  This passes tests on CI, except for autoconf, due to\nusing exit_nicely() differently.Your implementation reduced 60 lines, but the interface and code is more complex. I cannot say what is significantly better. Personally, in this case, I prefer my variant, because I think it is a little bit more readable, and possible modification can be more simple. But this is just my opinion, and I have no problem accepting other opinions. 
I can imagine to define some configuration array like getopt, but it looks like over engineering RegardsPavel\n\n-- \nJustin", "msg_date": "Sun, 19 Mar 2023 21:27:34 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "ne 19. 3. 2023 v 15:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> > rebase + enhancing about related option from a563c24\n>\n> Thanks.\n>\n> It looks like this doesn't currently handle extensions, which were added\n> at 6568cef26e.\n>\n> > + <literal>table_and_children</literal>: tables, works like\n> > + <option>-t</option>/<option>--table</option>, except that\n> > + it also includes any partitions or inheritance child\n> > + tables of the table(s) matching the\n> > + <replaceable class=\"parameter\">pattern</replaceable>.\n>\n> Why doesn't this just say \"works like --table-and-children\" ?\n>\n\nchanged\n\n\n>\n> I think as you wrote log_invalid_filter_format(), the messages wouldn't\n> be translated, because they're printed via %s. One option is to call\n> _() on the message.\n>\n\nfixed\n\n\n>\n> > +ok($dump !=~ qr/^CREATE TABLE public\\.bootab/m, \"exclude dumped\n> children table\");\n>\n> !=~ is being interpretted as as numeric \"!=\" and throwing warnings.\n> It should be a !~ b, right ?\n> It'd be nice if perl warnings during the tests were less easy to miss.\n>\n\nshould be fixed by you\n\n\n>\n> > + * char is not alpha. The char '_' is allowed too (exclude first\n> position).\n>\n\n\n\n>\n> Why is it treated specially? Could it be treated the same as alpha?\n>\n\nIt is usual behaviour in Postgres for keywords. 
Important is the complete\nsentence \"Returns NULL when the buffer is empty or the first char is not\nalpha.\"\n\nIn this case this implementation has no big impact on behaviour - probably\nyou got a message \"unknown keyword\" instead of \"missing keyword\". But I\nwould\nimplement behaviour consistent with other places. My opinion in this case\nis not extra strong - we can define the form of keywords like we want, just\nthis is consistent\nwith other parsers in Postgres.\n\n\n\n>\n> > + log_invalid_filter_format(&fstate,\n> > +\n> \"\\\"include\\\" table data filter is not allowed\");\n> > + log_invalid_filter_format(&fstate,\n> > +\n> \"\\\"include\\\" table data and children filter is not allowed\");\n>\n> For these, it might be better to write the literal option:\n>\n> > +\n> \"include filter for \\\"table_data_and_children\\\" is not allowed\");\n>\n> Because the option is a literal and shouldn't be translated.\n> And it's probably better to write that using %s, like:\n>\n> > +\n> \"include filter for \\\"%s\\\" is not allowed\");\n>\n\ndone\n\n\n\n>\n> That makes shorter and fewer strings.\n>\n> Find attached a bunch of other corrections as 0002.txt\n>\n\nmerged\n\nRegards\n\nPavel", "msg_date": "Mon, 20 Mar 2023 08:01:13 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "On Mon, Mar 20, 2023 at 08:01:13AM +0100, Pavel Stehule wrote:\n> ne 19. 3. 2023 v 15:01 odes�latel Justin Pryzby <pryzby@telsasoft.com> napsal:\n> \n> > On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> > > rebase + enhancing about related option from a563c24\n> >\n> > Thanks.\n> >\n> > It looks like this doesn't currently handle extensions, which were added\n> > at 6568cef26e.\n\nWhat about this part ? 
Should extension filters be supported ?\n\nI think the comment that I'd patched that lists all the filter types\nshould be minimized, rather than duplicating the list of all the\npossible filters that's already in the user-facing documentation.\n\nOne new typo: childrent\n\n\n", "msg_date": "Tue, 21 Mar 2023 10:32:07 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 21. 3. 2023 v 16:32 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Mar 20, 2023 at 08:01:13AM +0100, Pavel Stehule wrote:\n> > ne 19. 3. 2023 v 15:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> >\n> > > On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> > > > rebase + enhancing about related option from a563c24\n> > >\n> > > Thanks.\n> > >\n> > > It looks like this doesn't currently handle extensions, which were\n> added\n> > > at 6568cef26e.\n>\n> What about this part ? Should extension filters be supported ?\n>\n\nI missed this, yes, it should be supported.\n\n\n\n>\n>\n> I think the comment that I'd patched that lists all the filter types\n> should be minimized, rather than duplicating the list of all the\n> possible filters that's already in the user-facing documentation.\n>\n> One new typo: childrent\n>\n\n", "msg_date": "Tue, 21 Mar 2023 16:33:18 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "út 21. 3. 2023 v 16:32 odesílatel Justin Pryzby <pryzby@telsasoft.com>\nnapsal:\n\n> On Mon, Mar 20, 2023 at 08:01:13AM +0100, Pavel Stehule wrote:\n> > ne 19. 3. 2023 v 15:01 odesílatel Justin Pryzby <pryzby@telsasoft.com>\n> napsal:\n> >\n> > > On Thu, Mar 16, 2023 at 01:05:41PM +0100, Pavel Stehule wrote:\n> > > > rebase + enhancing about related option from a563c24\n> > >\n> > > Thanks.\n> > >\n> > > It looks like this doesn't currently handle extensions, which were\n> added\n> > > at 6568cef26e.\n>\n> What about this part ? Should extension filters be supported ?\n>\n\nshould be fixed\n\n\n>\n> I think the comment that I'd patched that lists all the filter types\n> should be minimized, rather than duplicating the list of all the\n> possible filters that's already in the user-facing documentation.\n>\n\nI modified this comment. Please, check\n\n>\n> One new typo: childrent\n>\n\nfixed\n\nRegards\n\nPavel", "msg_date": "Tue, 21 Mar 2023 23:00:07 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nonly rebase\n\nRegards\n\nPavel", "msg_date": "Mon, 11 Sep 2023 06:34:56 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npo 11. 9. 
2023 v 6:34 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> only rebase\n>\n\nUnfortunately this rebase was not correct. I am sorry.\n\nfixed version\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>", "msg_date": "Mon, 11 Sep 2023 06:57:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\npo 11. 9. 2023 v 6:57 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> po 11. 9. 2023 v 6:34 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>> Hi\n>>\n>> only rebase\n>>\n>\n> Unfortunately this rebase was not correct. I am sorry.\n>\n> fixed version\n>\n\nand fixed forgotten \"break\" in switch\n\nRegards\n\nPavel\n\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>", "msg_date": "Mon, 11 Sep 2023 08:33:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "I went and had another look at this. The patch has been around for 18\ncommitfests and is widely considered to add a good feature, so it seems about\ntime to reach closure.\n\nAs I've mentioned in the past I'm not a big fan of the parser, but the thread\nhas overruled on that. Another thing I think is a bit overcomplicated is the\nlayered error handling for printing log messages, and bubbling up of errors to\nget around not being able to call exit_nicely.\n\nIn the attached version I've boiled down the error logging into a single new\nfunction pg_log_filter_error() which takes a variable format string. This\nremoves a fair bit of the extra calls and makes logging easier. I've also\nadded a function pointer to the FilterStateData for passing the exit function\nvia filter_init. 
This allows the filtering code to exit gracefully regardless\nof which application is using it. Finally, I've also reimplemented the logic\nfor checking the parsed tokens into switch statements without defaults in order\nto get the compiler warning on a missed case. It's easy to miss adding code to\nhandle a state, especially when adding new ones, and this should help highlight\nthat.\n\nOverall, this does shave a bit off the patch in size for what IMHO is better\nreadability and maintainability. (I've also made a pgindent pass over it of\ncourse).\n\nWhat are your thoughts on this version? It's not in a committable state as it\nneeds a bit more comments here and there and a triple-check that nothing was\nmissed in changing this, but I prefer to get your thoughts before spending the\nextra time.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 9 Nov 2023 15:26:10 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\n\n> What are your thoughts on this version? It's not in a committable state\n> as it\n> needs a bit more comments here and there and a triple-check that nothing was\n> missed in changing this, but I prefer to get your thoughts before spending\n> the\n> extra time.\n>\n\nI think using a pointer to the exit function is an elegant solution. I checked\nthe code and I found only one issue. 
I fixed warning\n\n[13:57:22.578] time make -s -j${BUILD_JOBS} world-bin\n[13:58:20.858] filter.c: In function ‘pg_log_filter_error’:\n[13:58:20.858] filter.c:161:2: error: function ‘pg_log_filter_error’ might\nbe a candidate for ‘gnu_printf’ format attribute\n[-Werror=suggest-attribute=format]\n[13:58:20.858] 161 | vsnprintf(buf, sizeof(buf), fmt, argp);\n[13:58:20.858] | ^~~~~~~~~\n[13:58:20.858] cc1: all warnings being treated as errors\n\nand probably copy/paste bug\n\ndiff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c\nindex f647bde28d..ab2abedf5f 100644\n--- a/src/bin/pg_dump/pg_restore.c\n+++ b/src/bin/pg_dump/pg_restore.c\n@@ -535,7 +535,7 @@ read_restore_filters(const char *filename,\nRestoreOptions *opts)\n case FILTER_OBJECT_TYPE_EXTENSION:\n case FILTER_OBJECT_TYPE_FOREIGN_DATA:\n pg_log_filter_error(&fstate, _(\"%s filter for \\\"%s\\\" is\nnot allowed.\"),\n- \"exclude\",\n+ \"include\",\n filter_object_type_name(objtype));\n exit_nicely(1);\n\nRegards\n\nPavel\n\n\n>\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Sun, 12 Nov 2023 14:17:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nne 12. 11. 2023 v 14:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n>\n>> What are your thoughts on this version? It's not in a committable state\n>> as it\n>> needs a bit more comments here and there and a triplecheck that nothing\n>> was\n>> missed in changing this, but I prefer to get your thoughts before\n>> spending the\n>> extra time.\n>>\n>\n> I think using pointer to exit function is an elegant solution. I checked\n> the code and I found only one issue. 
I fixed warning\n>\n> [13:57:22.578] time make -s -j${BUILD_JOBS} world-bin\n> [13:58:20.858] filter.c: In function ‘pg_log_filter_error’:\n> [13:58:20.858] filter.c:161:2: error: function ‘pg_log_filter_error’ might\n> be a candidate for ‘gnu_printf’ format attribute\n> [-Werror=suggest-attribute=format]\n> [13:58:20.858] 161 | vsnprintf(buf, sizeof(buf), fmt, argp);\n> [13:58:20.858] | ^~~~~~~~~\n> [13:58:20.858] cc1: all warnings being treated as errors\n>\n> and probably copy/paste bug\n>\n> diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c\n> index f647bde28d..ab2abedf5f 100644\n> --- a/src/bin/pg_dump/pg_restore.c\n> +++ b/src/bin/pg_dump/pg_restore.c\n> @@ -535,7 +535,7 @@ read_restore_filters(const char *filename,\n> RestoreOptions *opts)\n> case FILTER_OBJECT_TYPE_EXTENSION:\n> case FILTER_OBJECT_TYPE_FOREIGN_DATA:\n> pg_log_filter_error(&fstate, _(\"%s filter for \\\"%s\\\"\n> is not allowed.\"),\n> - \"exclude\",\n> + \"include\",\n> filter_object_type_name(objtype));\n> exit_nicely(1);\n>\n> Regards\n>\n> Pavel\n>\n\nnext update - fix used, but uninitialized \"is_include\" variable, when\nfilter is of FILTER_OBJECT_TYPE_NONE\n\nfix crash\n\n# Running: pg_ctl -w -D\n/tmp/cirrus-ci-build/build-32/testrun/pg_dump/005_pg_dump_filterfile/data/t_005_pg_dump_filterfile_main_data/pgdata\n-l /tmp/cirrus-ci-build/build-32/testrun/pg_dump/005_pg_dump_filterfile/log/005_pg_dump_filterfile_main.log\n-o --cluster-name=main start\nwaiting for server to start.... 
done\nserver started\n# Postmaster PID for node \"main\" is 71352\n# Running: pg_dump -p 65454 -f\n/tmp/cirrus-ci-build/build-32/testrun/pg_dump/005_pg_dump_filterfile/data/t_005_pg_dump_filterfile_main_data/backup/plain.sql\n--filter=/tmp/cirrus-ci-build/build-32/testrun/pg_dump/005_pg_dump_filterfile/data/tmp_test_0mO3/inputfile.txt\npostgres\n../src/bin/pg_dump/pg_dump.c:18800:7: runtime error: load of value 86,\nwhich is not a valid value for type '_Bool'\n==71579==Using libbacktrace symbolizer.\n #0 0x566302cd in read_dump_filters ../src/bin/pg_dump/pg_dump.c:18800\n #1 0x56663429 in main ../src/bin/pg_dump/pg_dump.c:670\n #2 0xf7694e45 in __libc_start_main (/lib/i386-linux-gnu/libc.so.6+0x1ae45)\n #3 0x56624d50 in _start\n(/tmp/cirrus-ci-build/build-32/tmp_install/usr/local/pgsql/bin/pg_dump+0x1ad50)\n\nRegards\n\nPavel\n\n>\n>\n>>\n>> --\n>> Daniel Gustafsson\n>>\n>>", "msg_date": "Mon, 13 Nov 2023 14:15:50 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 13 Nov 2023, at 14:15, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> Hi\n> \n> ne 12. 11. 2023 v 14:17 odesílatel Pavel Stehule <pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> napsal:\n> Hi\n> \n> \n> What are your thoughts on this version? It's not in a committable state as it\n> needs a bit more comments here and there and a triplecheck that nothing was\n> missed in changing this, but I prefer to get your thoughts before spending the\n> extra time. \n> \n> I think using pointer to exit function is an elegant solution. I checked the code and I found only one issue. 
I fixed warning\n> \n> [13:57:22.578] time make -s -j${BUILD_JOBS} world-bin\n> [13:58:20.858] filter.c: In function ‘pg_log_filter_error’:\n> [13:58:20.858] filter.c:161:2: error: function ‘pg_log_filter_error’ might be a candidate for ‘gnu_printf’ format attribute [-Werror=suggest-attribute=format]\n> [13:58:20.858] 161 | vsnprintf(buf, sizeof(buf), fmt, argp);\n> [13:58:20.858] | ^~~~~~~~~\n> [13:58:20.858] cc1: all warnings being treated as errors\n> \n> and probably copy/paste bug\n> \n> diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c\n> index f647bde28d..ab2abedf5f 100644\n> --- a/src/bin/pg_dump/pg_restore.c\n> +++ b/src/bin/pg_dump/pg_restore.c\n> @@ -535,7 +535,7 @@ read_restore_filters(const char *filename, RestoreOptions *opts)\n> case FILTER_OBJECT_TYPE_EXTENSION:\n> case FILTER_OBJECT_TYPE_FOREIGN_DATA:\n> pg_log_filter_error(&fstate, _(\"%s filter for \\\"%s\\\" is not allowed.\"),\n> - \"exclude\",\n> + \"include\",\n> filter_object_type_name(objtype));\n> exit_nicely(1);\n> \n> Regards\n> \n> Pavel\n> \n> next update - fix used, but uninitialized \"is_include\" variable, when filter is of FILTER_OBJECT_TYPE_NONE\n\nThanks, the posted patchset was indeed a bit of a sketch, thanks for fixing up\nthese. I'll go over it again too to clean it up and try to make into something\ncommittable.\n\nI was pondering replacing the is_include handling with returning an enum for\nthe operation, to keep things more future proof in case we add more operations\n(and also a bit less magic IMHO).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 13 Nov 2023 14:39:15 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "po 13. 11. 2023 v 14:39 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 13 Nov 2023, at 14:15, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> >\n> > Hi\n> >\n> > ne 12. 11. 
2023 v 14:17 odesílatel Pavel Stehule <\n> pavel.stehule@gmail.com <mailto:pavel.stehule@gmail.com>> napsal:\n> > Hi\n> >\n> >\n> > What are your thoughts on this version? It's not in a committable state\n> as it\n> > needs a bit more comments here and there and a triplecheck that nothing\n> was\n> > missed in changing this, but I prefer to get your thoughts before\n> spending the\n> > extra time.\n> >\n> > I think using pointer to exit function is an elegant solution. I checked\n> the code and I found only one issue. I fixed warning\n> >\n> > [13:57:22.578] time make -s -j${BUILD_JOBS} world-bin\n> > [13:58:20.858] filter.c: In function ‘pg_log_filter_error’:\n> > [13:58:20.858] filter.c:161:2: error: function ‘pg_log_filter_error’\n> might be a candidate for ‘gnu_printf’ format attribute\n> [-Werror=suggest-attribute=format]\n> > [13:58:20.858] 161 | vsnprintf(buf, sizeof(buf), fmt, argp);\n> > [13:58:20.858] | ^~~~~~~~~\n> > [13:58:20.858] cc1: all warnings being treated as errors\n> >\n> > and probably copy/paste bug\n> >\n> > diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c\n> > index f647bde28d..ab2abedf5f 100644\n> > --- a/src/bin/pg_dump/pg_restore.c\n> > +++ b/src/bin/pg_dump/pg_restore.c\n> > @@ -535,7 +535,7 @@ read_restore_filters(const char *filename,\n> RestoreOptions *opts)\n> > case FILTER_OBJECT_TYPE_EXTENSION:\n> > case FILTER_OBJECT_TYPE_FOREIGN_DATA:\n> > pg_log_filter_error(&fstate, _(\"%s filter for \\\"%s\\\"\n> is not allowed.\"),\n> > - \"exclude\",\n> > + \"include\",\n> >\n> filter_object_type_name(objtype));\n> > exit_nicely(1);\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> > next update - fix used, but uninitialized \"is_include\" variable, when\n> filter is of FILTER_OBJECT_TYPE_NONE\n>\n> Thanks, the posted patchset was indeed a bit of a sketch, thanks for\n> fixing up\n> these. 
I'll go over it again too to clean it up and try to make into\n> something\n> committable.\n>\n> I was pondering replacing the is_include handling with returning an enum\n> for\n> the operation, to keep things more future proof in case we add more\n> operations\n> (and also a bit less magic IMHO).\n>\n\n+1\n\nPavel\n\n\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Mon, 13 Nov 2023 17:07:29 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" },
{ "msg_contents": "Hi\n\nI was pondering replacing the is_include handling with returning an enum for\n>> the operation, to keep things more future proof in case we add more\n>> operations\n>> (and also a bit less magic IMHO).\n>>\n>\n> +1\n>\n\nI did it.\n\nRegards\n\nPavel\n\n\n>\n> Pavel\n>\n>\n>> --\n>> Daniel Gustafsson\n>>\n>>", "msg_date": "Mon, 20 Nov 2023 06:20:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" },
{ "msg_contents": "> On 20 Nov 2023, at 06:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I was pondering replacing the is_include handling with returning an enum for\n> the operation, to keep things more future proof in case we add more operations\n> (and also a bit less magic IMHO).\n> \n> +1\n> \n> I did it.\n\nNice, I think it's an improvement.\n\n+ <literal>extension</literal>: data on foreign servers, works like\n+ <option>--extension</option>. This keyword can only be\n+ used with the <literal>include</literal> keyword.\nThis seems like a copy-pasteo, fixed in the attached.\n\nI've spent some time polishing this version of the patch, among other things\ntrying to make the docs and --help screen consistent across the tools. 
I've\nadded the diff as a txt file to this email (to keep the CFbot from applying\nit), it's mainly reformatting a few comments and making things consistent.\n\nThe attached is pretty close to a committable patch IMO, review is welcome on\nboth the patch and commit message. I tried to identify all reviewers over the\npast 3+ years but I might have missed someone.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 21 Nov 2023 22:10:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Op 11/21/23 om 22:10 schreef Daniel Gustafsson:\n>> On 20 Nov 2023, at 06:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n\n> The attached is pretty close to a committable patch IMO, review is welcome on\n> both the patch and commit message. I tried to identify all reviewers over the\n> past 3+ years but I might have missed someone.\n\nI've tested this, albeit mostly in the initial iterations (*shrug* but \na mention is nice)\n\nErik Rijkers\n\n> \n> --\n> Daniel Gustafsson\n> \n\n\n\n", "msg_date": "Wed, 22 Nov 2023 05:27:14 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 22 Nov 2023, at 05:27, Erik Rijkers <er@xs4all.nl> wrote:\n> \n> Op 11/21/23 om 22:10 schreef Daniel Gustafsson:\n>>> On 20 Nov 2023, at 06:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n>> The attached is pretty close to a committable patch IMO, review is welcome on\n>> both the patch and commit message. 
I tried to identify all reviewers\n>> over the\n>> past 3+ years but I might have missed someone.\n>\n> I took another look at this, found some more polish that was needed, added\n> another testcase and ended up pushing it.\n>\n> > I've tested this, albeit mostly in the initial iterations (*shrug* but\n> a mention is nice)\n>\n> As I mentioned above it's easy to miss when reviewing three years worth of\n> emails, no-one was intentionally left out. I went back and looked and\n> added\n> you as a reviewer. Thanks for letting me know.\n>\n\nThank you very much\n\nRegards\n\nPavel\n\n\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Wed, 29 Nov 2023 16:50:42 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" },
{ "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I took another look at this, found some more polish that was needed, added\n> another testcase and ended up pushing it.\n\nmamba is unhappy because this uses <ctype.h> functions without\ncasting their arguments to unsigned char:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-11-30%2002%3A53%3A25\n\n(I had not realized that we still had buildfarm animals that would\ncomplain about this ... 
but I'm glad we do, because it's a hazard.\nPOSIX is quite clear that the behavior is undefined for signed chars.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 29 Nov 2023 22:39:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "Hi\n\nčt 30. 11. 2023 v 4:40 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > I took another look at this, found some more polish that was needed,\n> added\n> > another testcase and ended up pushing it.\n>\n> mamba is unhappy because this uses <ctype.h> functions without\n> casting their arguments to unsigned char:\n>\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-11-30%2002%3A53%3A25\n>\n> (I had not realized that we still had buildfarm animals that would\n> complain about this ... but I'm glad we do, because it's a hazard.\n> POSIX is quite clear that the behavior is undefined for signed chars.)\n>\n\nhere is a patch\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>", "msg_date": "Thu, 30 Nov 2023 07:13:19 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" }, { "msg_contents": "> On 30 Nov 2023, at 07:13, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> čt 30. 11. 
2023 v 4:40 odesílatel Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> napsal:\n> Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> writes:\n> > I took another look at this, found some more polish that was needed, added\n> > another testcase and ended up pushing it.\n> \n> mamba is unhappy because this uses <ctype.h> functions without\n> casting their arguments to unsigned char:\n\nThanks for the heads-up.\n\n> here is a patch\n\nI agree with this fix, and have applied it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:05:47 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" },
{ "msg_contents": "čt 30. 11. 2023 v 14:05 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 30 Nov 2023, at 07:13, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > čt 30. 11. 2023 v 4:40 odesílatel Tom Lane <tgl@sss.pgh.pa.us <mailto:\n> tgl@sss.pgh.pa.us>> napsal:\n> > Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> writes:\n> > > I took another look at this, found some more polish that was needed,\n> added\n> > > another testcase and ended up pushing it.\n> >\n> > mamba is unhappy because this uses <ctype.h> functions without\n> > casting their arguments to unsigned char:\n>\n> Thanks for the heads-up.\n>\n> > here is a patch\n>\n> I agree with this fix, and have applied it.\n>\n\nThank you\n\nPavel\n\n>\n> --\n> Daniel Gustafsson\n>\n>", "msg_date": "Thu, 30 Nov 2023 16:45:26 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal: possibility to read dumped table's name from file" } ]
[ { "msg_contents": "Hi All,\n\nPlease check the below scenario, with pseudotype \"anyelement\" for IN, OUT\nparameter and the RETURN record in a function.\n\npostgres=# create table tab1(c1 int, c2 int, c3 timestamp) ;\nCREATE TABLE\npostgres=# CREATE OR REPLACE FUNCTION func_any(IN anyelement, IN\nanyelement, OUT v1 anyelement, OUT v2 anyelement)\nRETURNS record\nAS\n$$\nBEGIN\n  SELECT $1 + 1, $2 + 1 into v1, v2;\n  insert into tab1 values(v1, v2, now());\nEND;\n$$\nlanguage 'plpgsql';\nCREATE FUNCTION\npostgres=# SELECT (func_any(1, 2)).*;\n v1 | v2\n----+----\n  2 |  3\n(1 row)\n\npostgres=# select * from tab1;\n c1 | c2 |             c3\n----+----+----------------------------\n  2 |  3 | 2020-05-30 19:26:32.036924\n  2 |  3 | 2020-05-30 19:26:32.036924\n(2 rows)\n\nI hope, the table \"tab1\" should have only a single record, but we are able\nto see 2 records in tab1.\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 29 May 2020 20:14:55 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PG 
function with pseudotype \"anyelement\" for IN, OUT parameter shows\n wrong behaviour." },
{ "msg_contents": "pá 29. 5. 2020 v 16:45 odesílatel Prabhat Sahu <\nprabhat.sahu@enterprisedb.com> napsal:\n\n> Hi All,\n>\n> Please check the below scenario, with pseudotype \"anyelement\" for IN, OUT\n> parameter and the RETURN record in a function.\n>\n> postgres=# create table tab1(c1 int, c2 int, c3 timestamp) ;\n> CREATE TABLE\n> postgres=# CREATE OR REPLACE FUNCTION func_any(IN anyelement, IN\n> anyelement, OUT v1 anyelement, OUT v2 anyelement)\n> RETURNS record\n> AS\n> $$\n> BEGIN\n>   SELECT $1 + 1, $2 + 1 into v1, v2;\n>   insert into tab1 values(v1, v2, now());\n> END;\n> $$\n> language 'plpgsql';\n> CREATE FUNCTION\n> postgres=# SELECT (func_any(1, 2)).*;\n>  v1 | v2\n> ----+----\n>   2 |  3\n> (1 row)\n>\n> postgres=# select * from tab1;\n>  c1 | c2 |             c3\n> ----+----+----------------------------\n>   2 |  3 | 2020-05-30 19:26:32.036924\n>   2 |  3 | 2020-05-30 19:26:32.036924\n> (2 rows)\n>\n> I hope, the table \"tab1\" should have only a single record, but we are able\n> to see 2 records in tab1.\n>\n\nit is correct, because you use composite unpacking syntax\n\nSELECT (func_any(1, 2)).*;\n\nmeans\n\nSELECT (func_any(1, 2)).c1, (func_any(1, 2)).c2;\n\nIf you don't want double execution, you should to run your function in FROM\nclause\n\npostgres=# SELECT * FROM func_any(1, 2);\n┌────┬────┐\n│ v1 │ v2 │\n╞════╪════╡\n│  2 │  3 │\n└────┴────┘\n(1 row)\n\nRegards\n\nPavel\n\n\n\n> --\n>\n> With Regards,\n> Prabhat Kumar Sahu\n> EnterpriseDB: http://www.enterprisedb.com\n>", "msg_date": "Fri, 29 May 2020 16:59:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG function with pseudotype \"anyelement\" for IN, OUT parameter\n shows wrong behaviour." },
{ "msg_contents": "On Fri, May 29, 2020 at 8:30 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> pá 29. 5. 
2020 v 16:45 odesílatel Prabhat Sahu <\n> prabhat.sahu@enterprisedb.com> napsal:\n>\n>> Hi All,\n>>\n>> Please check the below scenario, with pseudotype \"anyelement\" for IN, OUT\n>> parameter and the RETURN record in a function.\n>>\n>> postgres=# create table tab1(c1 int, c2 int, c3 timestamp) ;\n>> CREATE TABLE\n>> postgres=# CREATE OR REPLACE FUNCTION func_any(IN anyelement, IN\n>> anyelement, OUT v1 anyelement, OUT v2 anyelement)\n>> RETURNS record\n>> AS\n>> $$\n>> BEGIN\n>>   SELECT $1 + 1, $2 + 1 into v1, v2;\n>>   insert into tab1 values(v1, v2, now());\n>> END;\n>> $$\n>> language 'plpgsql';\n>> CREATE FUNCTION\n>> postgres=# SELECT (func_any(1, 2)).*;\n>>  v1 | v2\n>> ----+----\n>>   2 |  3\n>> (1 row)\n>>\n>> postgres=# select * from tab1;\n>>  c1 | c2 |             c3\n>> ----+----+----------------------------\n>>   2 |  3 | 2020-05-30 19:26:32.036924\n>>   2 |  3 | 2020-05-30 19:26:32.036924\n>> (2 rows)\n>>\n>> I hope, the table \"tab1\" should have only a single record, but we are\n>> able to see 2 records in tab1.\n>>\n>\n> it is correct, because you use composite unpacking syntax\n>\n> SELECT (func_any(1, 2)).*;\n>\n> means\n>\n> SELECT (func_any(1, 2)).c1, (func_any(1, 2)).c2;\n>\n> If you don't want double execution, you should to run your function in\n> FROM clause\n>\n> postgres=# SELECT * FROM func_any(1, 2);\n> ┌────┬────┐\n> │ v1 │ v2 │\n> ╞════╪════╡\n> │  2 │  3 │\n> └────┴────┘\n> (1 row)\n>\n\nThanks Pavel, for the help, I have verified the same, Now I am getting a\nsingle record in tab1.\npostgres=# SELECT func_any(1, 2);\n func_any\n----------\n (2,3)\n(1 row)\n\npostgres=# select * from tab1;\n c1 | c2 |             c3\n----+----+----------------------------\n  2 |  3 | 2020-05-30 20:17:59.989087\n(1 row)\nThanks,\nPrabhat Sahu", "msg_date": "Fri, 29 May 2020 21:01:53 +0530", "msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PG function with pseudotype \"anyelement\" for IN, OUT parameter\n shows wrong behaviour." } ]
[ { "msg_contents": "Hello!\nWhen parameter cannot be changed without restarting the server postgresql\nwrite:\n\"LOG: configuration file \"/var/lib/postgresql/data/postgresql.auto.conf\"\ncontains errors; unaffected changes were applied\"\nMay be not write this string to LOG?\n\nThis string confuses people. If all log send to ELK, then\nadministrator think that postgresql have error. But postgresql do not have\nerror.\n\n-- \nС уважением, Антон Пацев.\nBest regards, Anton Patsev.", "msg_date": "Sun, 31 May 2020 13:43:45 +0600", "msg_from": "=?UTF-8?B?0JDQvdGC0L7QvSDQn9Cw0YbQtdCy?= <patsev.anton@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal: remove string \"contains errors;\n unaffected changes were applied\"" },
{ "msg_contents": "On Sun, May 31, 2020 at 01:43:45PM +0600, Антон Пацев wrote:\n> Hello!\n> When parameter cannot be changed without restarting the server postgresql\n> write:\n> \"LOG: configuration file \"/var/lib/postgresql/data/postgresql.auto.conf\"\n> contains errors; unaffected changes were applied\"\n> May be not write this string to LOG?\n> \n> This string confuses people. If all log send to ELK, then\n> administrator think that postgresql have error. But postgresql do not have\n> error.\n\nI think you're suggesting that the message should be sent to the client, but\nnot to the log.\n\nBut I think it *should* go to the log; otherwise, a bad change might cause the\nserver to later refuse to start. 
Any admins or monitoring system watching the\nlog should have the ability to see that a change isn't effective. That's also\nwhy (in my mind) we have pg_settings.pending_restart.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 31 May 2020 12:31:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Proposal: remove string \"contains errors; unaffected changes\n were applied\"" }, { "msg_contents": "No. I think the message be sent to the log.\ni created diagram.\nMay be check postgresql.conf of valid or invalid?\n[image: contains errors unaffected changes were applied.png]\n\nвс, 31 мая 2020 г. в 23:31, Justin Pryzby <pryzby@telsasoft.com>:\n\n> On Sun, May 31, 2020 at 01:43:45PM +0600, Антон Пацев wrote:\n> > Hello!\n> > When parameter cannot be changed without restarting the server postgresql\n> > write:\n> > \"LOG: configuration file \"/var/lib/postgresql/data/postgresql.auto.conf\"\n> > contains errors; unaffected changes were applied\"\n> > May be not write this string to LOG?\n> >\n> > This string confuses people. If all log send to ELK, then\n> > administrator think that postgresql have error. But postgresql do not\n> have\n> > error.\n>\n> I think you're suggesting that the message should be sent to the client,\n> but\n> not to the log.\n>\n> But I think it *should* go to the log; otherwise, a bad change might cause\n> the\n> server to later refuse to start. Any admins or monitoring system watching\n> the\n> log should have the ability to see that a change isn't effective. That's\n> also\n> why (in my mind) we have pg_settings.pending_restart.\n>\n> --\n> Justin\n>\n\n\n-- \nС уважением, Антон Пацев.\nBest regards, Anton Patsev.", "msg_date": "Tue, 2 Jun 2020 15:33:33 +0600", "msg_from": "=?UTF-8?B?0JDQvdGC0L7QvSDQn9Cw0YbQtdCy?= <patsev.anton@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal: remove string \"contains errors;\n unaffected changes were applied\"" } ]
[ { "msg_contents": "This thread is a follow-up of thread [1] where I don't have a good writing\nto\ndescribe the issue and solution in my mind. So I start this thread to fix\nthat\nand also enrich the topic by taking the advices from Ashutosh, Tomas and\nTom.\n\nInaccurate statistics is not avoidable and can cause lots of issue, I don't\nthink we can fix much of them, but the issue I want to talk about now like\n:select * from t where a = x and b = y, and we have index on (a, b) and (a,\nc);\nThe current implementation may choose (a, c) at last while I think we should\nalways choose index (a, b) over (a, c).\n\nWhy will the (a, c) be choose? If planner think a = x has only 1 row, it\nwill cost\nthe index (a, b) as \"an index access to with 2 qual checking and get 1 row\n+ table\nscan with the index result,\". the cost of (a, c) as \"an index access with\n1 qual\nand get 1 row, and table scan the 1 row and filter the another qual\". There\nis no cost\ndifference for the qual filter on index scan and table scan, so the final\ncost is\nexactly same. If the cost is exactly same, which path is found first,\nwhich one will be choose at last. 
You can use the attached reproduced.sql\nto\nreproduce this issue.\n\nThe solution in my mind is just hacking the cost model to treat the qual\nfilter\non table scan is slightly higher so that the index (a, b) will be always\nchoose.\nAt the same time, planner still think only 1 row returned which maybe wrong,\nbut that's the issue I can fix here and will not impact on the final index\nchoose\nanyway.\n\nThe one-line fix describe the exact idea in my mind:\n\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double\nloop_count,\n\n cpu_run_cost += cpu_per_tuple * tuples_fetched;\n\n+ /*\n+ * To make the planner more robust to handle some inaccurate\nstatistics\n+ * issue, we will add a extra cost to qpquals so that the less\nqpquals\n+ * the lower cost it has.\n+ */\n+ cpu_run_cost += 0.01 * list_length(qpquals);\n+\n\nThis change do fix the issue above, but will it make some other cases\nworse? My\nanswer is no based on my current knowledge, and this is most important place\nI want to get advised. The mainly impact I can figure out is: it not only\nchange the cost difference between (a, b) and (a, c) it also cause the cost\ndifference between Index scan on (a, c) and seq scan. However the\ncost different between index scan and seq scan are huge by practice so\nI don't think this impact is harmful.\n\nI test TPC-H to see if this change causes any unexpected behavior, the\ndata and index are setup based on [2], the attached normal.log is the plan\nwithout this patch and patched.log is the plan with this patch. In summary,\nI\ndidn't find any unexpected behaviors because of that change. 
All the plans\nwhose cost changed have the following pattern which is expected.\n\nIndex Scan ...\n Index Cond: ...\n Filter: ...\n\nAny thought?\n\n[1] https://postgrespro.com/list/id/8810.1590714246@sss.pgh.pa.us\n[2] https://ankane.org/tpc-h\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 31 May 2020 21:24:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": ">\n>\n>\n> Why will the (a, c) be choose? If planner think a = x has only 1 row ..\n>\n\nI just did more research and found above statement is not accurate,\nthe root cause of this situation is because IndexSelectivity = 0. Even\nthrough I\ndon't think we can fix anything here since IndexSelectivity is calculated\nfrom\nstatistics and we don't know it is 1% wrong or 10% wrong or more just like\nWhat\nTomas said.\n\nThe way of fixing it is just add a \"small\" extra cost for pqquals for index\nscan. that should be small enough to not have impacts on others part. I will\ndiscuss how small is small later with details, we can say it just a guc\nvariable\nfor now.\n\n+++ b/src/backend/optimizer/path/costsize.c\n@@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double\nloop_count,\n\n cpu_run_cost += cpu_per_tuple * tuples_fetched;\n\n+ /*\n+ * To make the planner more robust to handle some inaccurate\nstatistics\n+ * issue, we will add a extra cost to qpquals so that the less\nqpquals\n+ * the lower cost it has.\n+ */\n+ cpu_run_cost += stat_stale_cost * list_length(qpquals);\n+\n\nIf we want to reduce the impact of this change further, we can only add\nthis if\nthe IndexSelecivity == 0.\n\nHow to set the value of stat_stale_cost? Since the minimum cost for a query\nshould be a cpu_tuple_cost which is 0.01 default. Adding an 0.01 cost for\neach\npqqual in index scan should not make a big difference. 
However sometimes\nwe may\nset it to 0.13 if we consider index->tree_height was estimated wrongly for\n1 (cost is\n50 * 0.0025 = 0.125). I don't know how it happened, but looks it do happen\nin prod\nenvironment. At the same time it is unlikely index->tree_height is estimated\nwrongly for 2 or more. so basically we can set this value to 0(totally\ndisable\nthis feature), 0.01 (should be ok for most case), 0.13 (A bit aggressive).\n\nThe wrong estimation of IndexSelectitity = 0 might be common case and if\npeople just have 2 related index like (A, B) and (A, C). we have 50%\nchances to\nhave a wrong decision, so I would say this case worth the troubles. My\ncurrent\nimplementation looks not cool, so any suggestion to research further is\npretty\nwelcome.\n\n-- \nBest Regards\nAndy Fan\n\nWhy will the (a, c) be choose?  If planner think a = x has only 1 row ..I just did more research and found above statement is not accurate,the root cause of this situation is because IndexSelectivity = 0. Even through Idon't think we can fix anything here since IndexSelectivity is calculated fromstatistics and we don't know it is 1% wrong or 10% wrong or more just like WhatTomas said.The way of fixing it is just add a \"small\" extra cost for pqquals for indexscan. that should be small enough to not have impacts on others part. I willdiscuss how small is small later with details, we can say it just a guc variablefor now.+++ b/src/backend/optimizer/path/costsize.c@@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count,        cpu_run_cost += cpu_per_tuple * tuples_fetched;+       /*+        * To make the planner more robust to handle some inaccurate statistics+        * issue, we will add a extra cost to qpquals so that the less qpquals+        * the lower cost it has.+        */+       cpu_run_cost += stat_stale_cost * list_length(qpquals);+If we want to reduce the impact of this change further, we can only add this ifthe IndexSelecivity == 0.  
How to set the value of stat_stale_cost? Since the minimum cost for a queryshould be a cpu_tuple_cost which is 0.01 default. Adding an 0.01 cost for eachpqqual in index scan should not make a big difference.  However sometimes we mayset it to 0.13 if we consider index->tree_height was estimated wrongly for 1 (cost is50 * 0.0025 = 0.125). I don't know how it happened, but looks it do happen in prodenvironment. At the same time it is unlikely index->tree_height is estimatedwrongly for 2 or more. so basically we can set this value to 0(totally disablethis feature), 0.01 (should be ok for most case), 0.13 (A bit aggressive).The wrong estimation of IndexSelectitity = 0 might be common case and ifpeople just have 2 related index like (A, B) and (A, C). we have 50% chances tohave a wrong decision, so I would say this case worth the troubles. My currentimplementation looks not cool, so any suggestion to research further is prettywelcome.-- Best RegardsAndy Fan", "msg_date": "Fri, 5 Jun 2020 11:32:43 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> The one-line fix describe the exact idea in my mind:\n>\n> +++ b/src/backend/optimizer/path/costsize.c\n> @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count,\n>\n> cpu_run_cost += cpu_per_tuple * tuples_fetched;\n>\n> + /*\n> + * To make the planner more robust to handle some inaccurate statistics\n> + * issue, we will add a extra cost to qpquals so that the less qpquals\n> + * the lower cost it has.\n> + */\n> + cpu_run_cost += 0.01 * list_length(qpquals);\n> +\n>\n> This change do fix the issue above, but will it make some other cases worse? My\n> answer is no based on my current knowledge, and this is most important place\n> I want to get advised. 
The mainly impact I can figure out is: it not only\n> change the cost difference between (a, b) and (a, c) it also cause the cost\n> difference between Index scan on (a, c) and seq scan. However the\n> cost different between index scan and seq scan are huge by practice so\n> I don't think this impact is harmful.\n\nDidn't that idea already get shot down in the final paragraph on [1]?\n\nI understand that you wish to increase the cost by some seemingly\ninnocent constant to fix your problem case. Here are my thoughts\nabout that: Telling lies is not really that easy to pull off. Bad\nliers think it's easy and good ones know it's hard. The problem is\nthat the lies can start small, but then at some point the future you\nmust fashion some more lies to account for your initial lie. Rinse and\nrepeat that a few times and before you know it, your course is set\nwell away from the point of truth. I feel the part about \"rinse and\nrepeat\" applies reasonably well to how join costing works. The lie is\nlikely to be amplified as the join level gets deeper.\n\nI think you need to think of a more generic solution and propose that\ninstead. There are plenty of other quirks in the planner that can\ncause suffering due to inaccurate or non-existing statistics. For\nexample, due to how we multiply individual selectivity estimates,\nhaving a few correlated columns in a join condition can cause the\nnumber of join rows to be underestimated. Subsequent joins can then\nend up making bad choices on which join operator to use based on those\ninaccurate row estimates. 
There's also a problem with WHERE <x> ORDER\nBY col LIMIT n; sometimes choosing an index that provides pre-sorted\ninput to the ORDER BY but cannot use <x> as an indexable condition.\nWe don't record any stats to make better choices there, maybe we\nshould, but either way, we're taking a bit risk there as all the rows\nmatching <x> might be right at the end of the index and we might need\nto scan the entire thing before hitting the LIMIT. For now, we just\nassume completely even distribution of rows. i.e. If there are 50 rows\nestimated in the path and the limit is for 5 rows, then we'll assume\nwe need to read 10% of those before finding all the ones we need. In\nreality, everything matching <x> might be 95% through the index and we\ncould end up reading 100% of rows. That particular problem is not just\ncaused by the uneven distribution of rows in the index, but also from\nselectivity underestimation.\n\nI'd more recently wondered if we shouldn't have some sort of \"risk\"\nfactor to the cost model. I really don't have ideas on how exactly we\nwould calculate the risk factor in all cases, but in this case, say\nthe risk factor always starts out as 1. We could multiply that risk\nfactor by some >1 const each time we find another index filter qual.\nadd_path() can prefer lower risk paths when the costs are similar.\nUnsure what the exact add_path logic would be. Perhaps a GUC would\nneed to assist with the decision there. Likewise, with\nNestedLoopPaths which have a large number of join quals, the risk\nfactor could go up a bit with those so that we take a stronger\nconsideration for hash or merge joins instead.\n\nAnyway, it's pretty much a large research subject which would take a\nlot of work to iron out even just the design. 
It's likely not a\nperfect idea, but I think it has a bit more merit than trying to\nintroduce lies to the cost model to account for a single case where\nthere is a problem.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200529001602.eu7vuiouuuiclpgb%40development\n\n\n", "msg_date": "Fri, 5 Jun 2020 18:18:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "pá 5. 6. 
Here are my thoughts\n> about that: Telling lies is not really that easy to pull off. Bad\n> liers think it's easy and good ones know it's hard. The problem is\n> that the lies can start small, but then at some point the future you\n> must fashion some more lies to account for your initial lie. Rinse and\n> repeat that a few times and before you know it, your course is set\n> well away from the point of truth. I feel the part about \"rinse and\n> repeat\" applies reasonably well to how join costing works. The lie is\n> likely to be amplified as the join level gets deeper.\n>\n> I think you need to think of a more generic solution and propose that\n> instead. There are plenty of other quirks in the planner that can\n> cause suffering due to inaccurate or non-existing statistics. For\n> example, due to how we multiply individual selectivity estimates,\n> having a few correlated columns in a join condition can cause the\n> number of join rows to be underestimated. Subsequent joins can then\n> end up making bad choices on which join operator to use based on those\n> inaccurate row estimates. There's also a problem with WHERE <x> ORDER\n> BY col LIMIT n; sometimes choosing an index that provides pre-sorted\n> input to the ORDER BY but cannot use <x> as an indexable condition.\n> We don't record any stats to make better choices there, maybe we\n> should, but either way, we're taking a bit risk there as all the rows\n> matching <x> might be right at the end of the index and we might need\n> to scan the entire thing before hitting the LIMIT. For now, we just\n> assume completely even distribution of rows. i.e. If there are 50 rows\n> estimated in the path and the limit is for 5 rows, then we'll assume\n> we need to read 10% of those before finding all the ones we need. In\n> reality, everything matching <x> might be 95% through the index and we\n> could end up reading 100% of rows. 
That particular problem is not just\n> caused by the uneven distribution of rows in the index, but also from\n> selectivity underestimation.\n>\n> I'd more recently wondered if we shouldn't have some sort of \"risk\"\n> factor to the cost model. I really don't have ideas on how exactly we\n> would calculate the risk factor in all cases, but in this case, say\n> the risk factor always starts out as 1. We could multiply that risk\n> factor by some >1 const each time we find another index filter qual.\n> add_path() can prefer lower risk paths when the costs are similar.\n> Unsure what the exact add_path logic would be. Perhaps a GUC would\n> need to assist with the decision there. Likewise, with\n> NestedLoopPaths which have a large number of join quals, the risk\n> factor could go up a bit with those so that we take a stronger\n> consideration for hash or merge joins instead.\n>\n>\nI thought about these ideas too. And I am not alone.\n\nhttps://hal.archives-ouvertes.fr/hal-01316823/document\n\nRegards\n\nPavel\n\nAnyway, it's pretty much a large research subject which would take a\n> lot of work to iron out even just the design. It's likely not a\n> perfect idea, but I think it has a bit more merit that trying to\n> introduce lies to the cost modal to account for a single case where\n> there is a problem.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/20200529001602.eu7vuiouuuiclpgb%40development\n>\n>\n>\n\npá 5. 6. 
2020 v 8:19 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> The one-line fix describe the exact idea in my mind:\n>\n> +++ b/src/backend/optimizer/path/costsize.c\n> @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count,\n>\n>         cpu_run_cost += cpu_per_tuple * tuples_fetched;\n>\n> +       /*\n> +        * To make the planner more robust to handle some inaccurate statistics\n> +        * issue, we will add a extra cost to qpquals so that the less qpquals\n> +        * the lower cost it has.\n> +        */\n> +       cpu_run_cost += 0.01 * list_length(qpquals);\n> +\n>\n> This change do fix the issue above, but will it make some other cases worse? My\n> answer is no based on my current knowledge, and this is most important place\n> I want to get advised. The mainly impact I can figure out is: it not only\n> change the cost difference between (a, b) and (a, c) it also cause the cost\n> difference between Index scan on (a, c) and seq scan.  However the\n> cost different between index scan and seq scan are huge by practice so\n> I don't think this impact is harmful.\n\nDidn't that idea already get shot down in the final paragraph on [1]?\n\nI understand that you wish to increase the cost by some seemingly\ninnocent constant to fix your problem case.  Here are my thoughts\nabout that: Telling lies is not really that easy to pull off. Bad\nliers think it's easy and good ones know it's hard. The problem is\nthat the lies can start small, but then at some point the future you\nmust fashion some more lies to account for your initial lie. Rinse and\nrepeat that a few times and before you know it, your course is set\nwell away from the point of truth.  I feel the part about \"rinse and\nrepeat\" applies reasonably well to how join costing works.  
The lie is\nlikely to be amplified as the join level gets deeper.\n\nI think you need to think of a more generic solution and propose that\ninstead.  There are plenty of other quirks in the planner that can\ncause suffering due to inaccurate or non-existing statistics. For\nexample, due to how we multiply individual selectivity estimates,\nhaving a few correlated columns in a join condition can cause the\nnumber of join rows to be underestimated. Subsequent joins can then\nend up making bad choices on which join operator to use based on those\ninaccurate row estimates.  There's also a problem with WHERE <x> ORDER\nBY col LIMIT n; sometimes choosing an index that provides pre-sorted\ninput to the ORDER BY but cannot use <x> as an indexable condition.\nWe don't record any stats to make better choices there, maybe we\nshould, but either way, we're taking a bit risk there as all the rows\nmatching <x> might be right at the end of the index and we might need\nto scan the entire thing before hitting the LIMIT. For now, we just\nassume completely even distribution of rows. i.e. If there are 50 rows\nestimated in the path and the limit is for 5 rows, then we'll assume\nwe need to read 10% of those before finding all the ones we need. In\nreality, everything matching <x> might be 95% through the index and we\ncould end up reading 100% of rows. That particular problem is not just\ncaused by the uneven distribution of rows in the index, but also from\nselectivity underestimation.\n\nI'd more recently wondered if we shouldn't have some sort of \"risk\"\nfactor to the cost model. I really don't have ideas on how exactly we\nwould calculate the risk factor in all cases, but in this case,  say\nthe risk factor always starts out as 1. We could multiply that risk\nfactor by some >1 const each time we find another index filter qual.\nadd_path() can prefer lower risk paths when the costs are similar.\nUnsure what the exact add_path logic would be. 
Perhaps a GUC would\nneed to assist with the decision there.   Likewise, with\nNestedLoopPaths which have a large number of join quals, the risk\nfactor could go up a bit with those so that we take a stronger\nconsideration for hash or merge joins instead.\nI thought about these ideas too. And I am not alone. https://hal.archives-ouvertes.fr/hal-01316823/documentRegardsPavel\nAnyway, it's pretty much a large research subject which would take a\nlot of work to iron out even just the design. It's likely not a\nperfect idea, but I think it has a bit more merit that trying to\nintroduce lies to the cost modal to account for a single case where\nthere is a problem.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20200529001602.eu7vuiouuuiclpgb%40development", "msg_date": "Fri, 5 Jun 2020 08:30:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "I know one project where they used PostgreSQL code base to detect\n\"robust plans\". https://dsl.cds.iisc.ac.in/projects/PICASSO/. Some of\nthe papers cited in https://www.vldb.org/pvldb/vldb2010/papers/D01.pdf\ndescribe the idea.\n\nIn short, the idea is to annotate a plan with a \"bandwidth\" i.e. how\ndoes the plan fair with degradation of statistics. A plan which has a\nslightly higher cost which doesn't degrade much with degradation of\nstatistics is preferred over a low cost plan whose cost rises sharply\nwith degradation of statistics. This is similar to what David is\nsuggesting.\n\n\nOn Fri, Jun 5, 2020 at 12:00 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n>\n>\n> pá 5. 6. 
2020 v 8:19 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n>>\n>> On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> > The one-line fix describe the exact idea in my mind:\n>> >\n>> > +++ b/src/backend/optimizer/path/costsize.c\n>> > @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count,\n>> >\n>> > cpu_run_cost += cpu_per_tuple * tuples_fetched;\n>> >\n>> > + /*\n>> > + * To make the planner more robust to handle some inaccurate statistics\n>> > + * issue, we will add a extra cost to qpquals so that the less qpquals\n>> > + * the lower cost it has.\n>> > + */\n>> > + cpu_run_cost += 0.01 * list_length(qpquals);\n>> > +\n>> >\n>> > This change do fix the issue above, but will it make some other cases worse? My\n>> > answer is no based on my current knowledge, and this is most important place\n>> > I want to get advised. The mainly impact I can figure out is: it not only\n>> > change the cost difference between (a, b) and (a, c) it also cause the cost\n>> > difference between Index scan on (a, c) and seq scan. However the\n>> > cost different between index scan and seq scan are huge by practice so\n>> > I don't think this impact is harmful.\n>>\n>> Didn't that idea already get shot down in the final paragraph on [1]?\n>>\n>> I understand that you wish to increase the cost by some seemingly\n>> innocent constant to fix your problem case. Here are my thoughts\n>> about that: Telling lies is not really that easy to pull off. Bad\n>> liers think it's easy and good ones know it's hard. The problem is\n>> that the lies can start small, but then at some point the future you\n>> must fashion some more lies to account for your initial lie. Rinse and\n>> repeat that a few times and before you know it, your course is set\n>> well away from the point of truth. I feel the part about \"rinse and\n>> repeat\" applies reasonably well to how join costing works. 
The lie is\n>> likely to be amplified as the join level gets deeper.\n>>\n>> I think you need to think of a more generic solution and propose that\n>> instead. There are plenty of other quirks in the planner that can\n>> cause suffering due to inaccurate or non-existing statistics. For\n>> example, due to how we multiply individual selectivity estimates,\n>> having a few correlated columns in a join condition can cause the\n>> number of join rows to be underestimated. Subsequent joins can then\n>> end up making bad choices on which join operator to use based on those\n>> inaccurate row estimates. There's also a problem with WHERE <x> ORDER\n>> BY col LIMIT n; sometimes choosing an index that provides pre-sorted\n>> input to the ORDER BY but cannot use <x> as an indexable condition.\n>> We don't record any stats to make better choices there, maybe we\n>> should, but either way, we're taking a bit risk there as all the rows\n>> matching <x> might be right at the end of the index and we might need\n>> to scan the entire thing before hitting the LIMIT. For now, we just\n>> assume completely even distribution of rows. i.e. If there are 50 rows\n>> estimated in the path and the limit is for 5 rows, then we'll assume\n>> we need to read 10% of those before finding all the ones we need. In\n>> reality, everything matching <x> might be 95% through the index and we\n>> could end up reading 100% of rows. That particular problem is not just\n>> caused by the uneven distribution of rows in the index, but also from\n>> selectivity underestimation.\n>>\n>> I'd more recently wondered if we shouldn't have some sort of \"risk\"\n>> factor to the cost model. I really don't have ideas on how exactly we\n>> would calculate the risk factor in all cases, but in this case, say\n>> the risk factor always starts out as 1. 
We could multiply that risk\n>> factor by some >1 const each time we find another index filter qual.\n>> add_path() can prefer lower risk paths when the costs are similar.\n>> Unsure what the exact add_path logic would be. Perhaps a GUC would\n>> need to assist with the decision there. Likewise, with\n>> NestedLoopPaths which have a large number of join quals, the risk\n>> factor could go up a bit with those so that we take a stronger\n>> consideration for hash or merge joins instead.\n>>\n>\n> I thought about these ideas too. And I am not alone.\n>\n> https://hal.archives-ouvertes.fr/hal-01316823/document\n>\n> Regards\n>\n> Pavel\n>\n>> Anyway, it's pretty much a large research subject which would take a\n>> lot of work to iron out even just the design. It's likely not a\n>> perfect idea, but I think it has a bit more merit that trying to\n>> introduce lies to the cost modal to account for a single case where\n>> there is a problem.\n>>\n>> David\n>>\n>> [1] https://www.postgresql.org/message-id/20200529001602.eu7vuiouuuiclpgb%40development\n>>\n>>\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 8 Jun 2020 19:45:56 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "On Fri, Jun 5, 2020 at 2:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > The one-line fix describe the exact idea in my mind:\n> >\n> > +++ b/src/backend/optimizer/path/costsize.c\n> > @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root,\n> double loop_count,\n> >\n> > cpu_run_cost += cpu_per_tuple * tuples_fetched;\n> >\n> > + /*\n> > + * To make the planner more robust to handle some inaccurate\n> statistics\n> > + * issue, we will add a extra cost to qpquals so that the less\n> qpquals\n> > + * the lower cost it has.\n> > + */\n> > + cpu_run_cost 
+= 0.01 * list_length(qpquals);\n> > +\n> >\n> > This change do fix the issue above, but will it make some other cases\n> worse? My\n> > answer is no based on my current knowledge, and this is most important\n> place\n> > I want to get advised. The mainly impact I can figure out is: it not only\n> > change the cost difference between (a, b) and (a, c) it also cause the\n> cost\n> > difference between Index scan on (a, c) and seq scan. However the\n> > cost different between index scan and seq scan are huge by practice so\n> > I don't think this impact is harmful.\n>\n> Didn't that idea already get shot down in the final paragraph on [1]?\n>\n>\nThanks for chiming in. I treat this as I didn't describe my idea clearly\nenough then\nboth Tomas and Tom didn't spend much time to read it (no offense, and I\nunderstand they need to do lots of things every day), so I re-summarize the\nissue to make it easier to read.\n\nIn Tomas's reply, he raises concerns about how to fix the issue, since we\ndon't know how much it errored 1%, 10% and so on, so I emphasized I don't\ntouch that part actually. Even the wrong estimation still plays a bad role\non\nlater join, but that would not be the issue I would fix here.\n\n\nI understand that you wish to increase the cost by some seemingly\n> innocent constant to fix your problem case. Here are my thoughts\n> about that: Telling lies is not really that easy to pull off. Bad\n> liers think it's easy and good ones know it's hard. The problem is\n> that the lies can start small, but then at some point the future you\n> must fashion some more lies to account for your initial lie. Rinse and\n> repeat that a few times and before you know it, your course is set\n> well away from the point of truth. I feel the part about \"rinse and\n> repeat\" applies reasonably well to how join costing works. The lie is\n> likely to be amplified as the join level gets deeper.\n>\n\nI agree with that to some extent. 
However we can just provide more options\nto\nusers. At the same time, I still believe we should provide such options\nvery carefully.\n\n\n> Unsure what the exact add_path logic would be. Perhaps a GUC would\n> need to assist with the decision there. Likewise, with\n> NestedLoopPaths which have a large number of join quals, the risk\n> factor could go up a bit with those so that we take a stronger\n> consideration for hash or merge joins instead.\n\n\nI probably underestimated the impacts for a large number of join quals.\nThis looks like a weakness we can't ignore confomforably, so I checked\nthe code further, maybe we don't need a risk factor for this case.\n\nFor query WHERE a = 1 AND b = 2, both Selectivity(a = 1) and\nSelectivity(b = 2) are greater than 0 even the statistics are stale enough,\nso the IndexSelectivity is greater than 0 as well. Based on this,\nIndexSelectivity(A, B) should be less than IndexSelectivity(A, C) for the\nabove query However they still generate the same cost because of the\nbelow code. (genericcostestimate and btcostestimate)\n\nnumIndexTuples = indexSelectivity * index->rel->tuples;\nnumIndexTuples = rint(numIndexTuples / num_sa_scans);\n\nif (numIndexTuples < 1.0)\n\n numIndexTuples = 1.0;\n\nlater numIndexTuples is used to calculate cost. The above code eats the\ndifference of indexSelectivity.\n\nMany places say we need to \"round to integer\" but I am still not figuring\nout\nwhy it is a must. 
so If we can \"round to integer\" just after the\nIndexCost\nis calculated, the issue can be fixed as well.\n\nThe attached is the patch in my mind, since this patch may lower the index\ncosts if the numIndexTuples < 1.0, so it makes the optimizer prefer to use\nnest loop rather than a hash join if the loop is small, which is a common\ncase in our regression test where we don't have much data, so there are\nseveral changes like that.\n\nAnother impact I found in the regression test is that optimizer choose an\nIndexScan of a conditional index rather than IndexOnlyScan a normal index.\nI checked the code and didn't find anything wrong, so I'd like to say that\nis\nbecause the data is too small.\n\nI also tested TPC-H workload, Query-12 changed hash join to nest loop,\nThe execution time changed from1605.118ms to 699.032ms.\n\n\n> the root cause of this situation is because IndexSelectivity = 0.\n\nThis is wrong. I got the 0 by elog(INFO, \"%f\", IndexSelectivity).\nthat reported 0 while the value is pretty small.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 9 Jun 2020 11:33:03 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "On Fri, Jun 5, 2020 at 2:30 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> pá 5. 6. 
2020 v 8:19 odesílatel David Rowley <dgrowleyml@gmail.com>\n> napsal:\n>\n>> On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> > The one-line fix describe the exact idea in my mind:\n>> >\n>> > +++ b/src/backend/optimizer/path/costsize.c\n>> > @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root,\n>> double loop_count,\n>> >\n>> > cpu_run_cost += cpu_per_tuple * tuples_fetched;\n>> >\n>> > + /*\n>> > + * To make the planner more robust to handle some inaccurate\n>> statistics\n>> > + * issue, we will add a extra cost to qpquals so that the less\n>> qpquals\n>> > + * the lower cost it has.\n>> > + */\n>> > + cpu_run_cost += 0.01 * list_length(qpquals);\n>> > +\n>> >\n>> > This change do fix the issue above, but will it make some other cases\n>> worse? My\n>> > answer is no based on my current knowledge, and this is most important\n>> place\n>> > I want to get advised. The mainly impact I can figure out is: it not\n>> only\n>> > change the cost difference between (a, b) and (a, c) it also cause the\n>> cost\n>> > difference between Index scan on (a, c) and seq scan. However the\n>> > cost different between index scan and seq scan are huge by practice so\n>> > I don't think this impact is harmful.\n>>\n>> Didn't that idea already get shot down in the final paragraph on [1]?\n>>\n>> I understand that you wish to increase the cost by some seemingly\n>> innocent constant to fix your problem case. Here are my thoughts\n>> about that: Telling lies is not really that easy to pull off. Bad\n>> liers think it's easy and good ones know it's hard. The problem is\n>> that the lies can start small, but then at some point the future you\n>> must fashion some more lies to account for your initial lie. Rinse and\n>> repeat that a few times and before you know it, your course is set\n>> well away from the point of truth. I feel the part about \"rinse and\n>> repeat\" applies reasonably well to how join costing works. 
The lie is\n>> likely to be amplified as the join level gets deeper.\n>>\n>> I think you need to think of a more generic solution and propose that\n>> instead. There are plenty of other quirks in the planner that can\n>> cause suffering due to inaccurate or non-existing statistics. For\n>> example, due to how we multiply individual selectivity estimates,\n>> having a few correlated columns in a join condition can cause the\n>> number of join rows to be underestimated. Subsequent joins can then\n>> end up making bad choices on which join operator to use based on those\n>> inaccurate row estimates. There's also a problem with WHERE <x> ORDER\n>> BY col LIMIT n; sometimes choosing an index that provides pre-sorted\n>> input to the ORDER BY but cannot use <x> as an indexable condition.\n>> We don't record any stats to make better choices there, maybe we\n>> should, but either way, we're taking a bit risk there as all the rows\n>> matching <x> might be right at the end of the index and we might need\n>> to scan the entire thing before hitting the LIMIT. For now, we just\n>> assume completely even distribution of rows. i.e. If there are 50 rows\n>> estimated in the path and the limit is for 5 rows, then we'll assume\n>> we need to read 10% of those before finding all the ones we need. In\n>> reality, everything matching <x> might be 95% through the index and we\n>> could end up reading 100% of rows. That particular problem is not just\n>> caused by the uneven distribution of rows in the index, but also from\n>> selectivity underestimation.\n>>\n>> I'd more recently wondered if we shouldn't have some sort of \"risk\"\n>> factor to the cost model. I really don't have ideas on how exactly we\n>> would calculate the risk factor in all cases, but in this case, say\n>> the risk factor always starts out as 1. 
We could multiply that risk\n>> factor by some >1 const each time we find another index filter qual.\n>> add_path() can prefer lower risk paths when the costs are similar.\n>> Unsure what the exact add_path logic would be. Perhaps a GUC would\n>> need to assist with the decision there. Likewise, with\n>> NestedLoopPaths which have a large number of join quals, the risk\n>> factor could go up a bit with those so that we take a stronger\n>> consideration for hash or merge joins instead.\n>>\n>>\n> I thought about these ideas too. And I am not alone.\n>\n> https://hal.archives-ouvertes.fr/hal-01316823/document\n>\n>\nThanks for the documents, after checking it, that is more like\n oracle's statistics feedback[1]. Hope we can have it someday:)\n\n[1] https://blogs.oracle.com/optimizer/cardinality-feedback\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 9 Jun 2020 13:09:45 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" }, { "msg_contents": "On Mon, Jun 8, 2020 at 10:16 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> I know one project where they used PostgreSQL code base to detect\n> \"robust plans\". https://dsl.cds.iisc.ac.in/projects/PICASSO/. Some of\n> the papers cited in https://www.vldb.org/pvldb/vldb2010/papers/D01.pdf\n> describe the idea.\n\n\n>\nIn short, the idea is to annotate a plan with a \"bandwidth\" i.e. how\n> does the plan fair with degradation of statistics. A plan which has a\n> slightly higher cost which doesn't degrade much with degradation of\n> statistics is preferred over a low cost plan whose cost rises sharply\n> with degradation of statistics. This is similar to what David is\n> suggesting.\n>\n>\nGreat to know them, thank you for sharing it, links have been bookmarked.\n\nOn Fri, Jun 5, 2020 at 12:00 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > pá 5. 6. 
2020 v 8:19 odesílatel David Rowley <dgrowleyml@gmail.com>\n> napsal:\n> >>\n> >> On Mon, 1 Jun 2020 at 01:24, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >> > The one-line fix describe the exact idea in my mind:\n> >> >\n> >> > +++ b/src/backend/optimizer/path/costsize.c\n> >> > @@ -730,6 +730,13 @@ cost_index(IndexPath *path, PlannerInfo *root,\n> double loop_count,\n> >> >\n> >> > cpu_run_cost += cpu_per_tuple * tuples_fetched;\n> >> >\n> >> > + /*\n> >> > + * To make the planner more robust to handle some inaccurate\n> statistics\n> >> > + * issue, we will add a extra cost to qpquals so that the\n> less qpquals\n> >> > + * the lower cost it has.\n> >> > + */\n> >> > + cpu_run_cost += 0.01 * list_length(qpquals);\n> >> > +\n> >> >\n> >> > This change do fix the issue above, but will it make some other cases\n> worse? My\n> >> > answer is no based on my current knowledge, and this is most\n> important place\n> >> > I want to get advised. The mainly impact I can figure out is: it not\n> only\n> >> > change the cost difference between (a, b) and (a, c) it also cause\n> the cost\n> >> > difference between Index scan on (a, c) and seq scan. However the\n> >> > cost different between index scan and seq scan are huge by practice so\n> >> > I don't think this impact is harmful.\n> >>\n> >> Didn't that idea already get shot down in the final paragraph on [1]?\n> >>\n> >> I understand that you wish to increase the cost by some seemingly\n> >> innocent constant to fix your problem case. Here are my thoughts\n> >> about that: Telling lies is not really that easy to pull off. Bad\n> >> liers think it's easy and good ones know it's hard. The problem is\n> >> that the lies can start small, but then at some point the future you\n> >> must fashion some more lies to account for your initial lie. Rinse and\n> >> repeat that a few times and before you know it, your course is set\n> >> well away from the point of truth. 
I feel the part about \"rinse and\n> >> repeat\" applies reasonably well to how join costing works. The lie is\n> >> likely to be amplified as the join level gets deeper.\n> >>\n> >> I think you need to think of a more generic solution and propose that\n> >> instead. There are plenty of other quirks in the planner that can\n> >> cause suffering due to inaccurate or non-existing statistics. For\n> >> example, due to how we multiply individual selectivity estimates,\n> >> having a few correlated columns in a join condition can cause the\n> >> number of join rows to be underestimated. Subsequent joins can then\n> >> end up making bad choices on which join operator to use based on those\n> >> inaccurate row estimates. There's also a problem with WHERE <x> ORDER\n> >> BY col LIMIT n; sometimes choosing an index that provides pre-sorted\n> >> input to the ORDER BY but cannot use <x> as an indexable condition.\n> >> We don't record any stats to make better choices there, maybe we\n> >> should, but either way, we're taking a bit risk there as all the rows\n> >> matching <x> might be right at the end of the index and we might need\n> >> to scan the entire thing before hitting the LIMIT. For now, we just\n> >> assume completely even distribution of rows. i.e. If there are 50 rows\n> >> estimated in the path and the limit is for 5 rows, then we'll assume\n> >> we need to read 10% of those before finding all the ones we need. In\n> >> reality, everything matching <x> might be 95% through the index and we\n> >> could end up reading 100% of rows. That particular problem is not just\n> >> caused by the uneven distribution of rows in the index, but also from\n> >> selectivity underestimation.\n> >>\n> >> I'd more recently wondered if we shouldn't have some sort of \"risk\"\n> >> factor to the cost model. I really don't have ideas on how exactly we\n> >> would calculate the risk factor in all cases, but in this case, say\n> >> the risk factor always starts out as 1. 
We could multiply that risk\n> >> factor by some >1 const each time we find another index filter qual.\n> >> add_path() can prefer lower risk paths when the costs are similar.\n> >> Unsure what the exact add_path logic would be. Perhaps a GUC would\n> >> need to assist with the decision there. Likewise, with\n> >> NestedLoopPaths which have a large number of join quals, the risk\n> >> factor could go up a bit with those so that we take a stronger\n> >> consideration for hash or merge joins instead.\n> >>\n> >\n> > I thought about these ideas too. And I am not alone.\n> >\n> > https://hal.archives-ouvertes.fr/hal-01316823/document\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >> Anyway, it's pretty much a large research subject which would take a\n> >> lot of work to iron out even just the design. It's likely not a\n> >> perfect idea, but I think it has a bit more merit that trying to\n> >> introduce lies to the cost modal to account for a single case where\n> >> there is a problem.\n> >>\n> >> David\n> >>\n> >> [1]\n> https://www.postgresql.org/message-id/20200529001602.eu7vuiouuuiclpgb%40development\n> >>\n> >>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Tue, 9 Jun 2020 13:56:40 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A wrong index choose issue because of inaccurate statistics" } ]
[ { "msg_contents": "I noticed that the PostgreSQL entry in a pan-database feature matrix by\nModern SQL was not reflecting the reality of our features.[1]\n\nIt turns out that test case used by the author produced an error which\nthe tool took to mean the feature was not implemented. I don't have the\nactual test, but here is a simulation of it:\n\n\n postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n postgres-# ORDER BY n;\n\n ERROR: function lag(numeric, integer, integer) does not exist\n LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n ^\n HINT: No function matches the given name and argument types. You\nmight need to add explicit type casts.\n\n\nAttached is a patch that fixes this issue using the new anycompatible\npseudotype. I am hoping this can be slipped into 13 even though it\nrequires a catversion bump after BETA1.\n\nI looked for other functions with a similar issue but didn't find any.\n\n\n[1] https://twitter.com/pg_xocolatl/status/1266694496194093057\n-- \nVik Fearing", "msg_date": "Sun, 31 May 2020 19:20:10 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Compatible defaults for LEAD/LAG" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n> postgres-# ORDER BY n;\n> ERROR: function lag(numeric, integer, integer) does not exist\n> LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> ^\n\nYeah, we have similar issues elsewhere.\n\n> Attached is a patch that fixes this issue using the new anycompatible\n> pseudotype. I am hoping this can be slipped into 13 even though it\n> requires a catversion bump after BETA1.\n\nWhen the anycompatible patch went in, I thought for a little bit about\ntrying to use it with existing built-in functions, but didn't have the\ntime to investigate the issue in detail. 
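For reference, the failing call quoted above can already be satisfied on released versions by giving the default the same type as the lagged column, so that both anyelement parameters resolve identically; a hedged sketch using the example data from the message:

```sql
-- Workaround sketch for lag(anyelement, integer, anyelement):
-- cast the default so both anyelement arguments agree on numeric.
SELECT LAG(n, 1, -99::numeric) OVER (ORDER BY n)
  FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)
 ORDER BY n;
```

With the explicit cast the call resolves to lag(numeric, integer, numeric), so the first row receives the -99 default instead of raising the no-matching-function error shown above.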
I'm not in favor of hacking\nthings one-function-at-a-time here; we should look through the whole\nlibrary and see what we've got.\n\nThe main thing that makes this perhaps-not-trivial is that an\nanycompatible-ified function will match more cases than it would have\nbefore, possibly causing conflicts if the function or operator name\nis overloaded. We'd have to look at such cases and decide what we\nwant to do --- one answer would be to drop some of the alternatives\nand rely on the parser to add casts, but that might slow things down.\n\nAnyway, I agree that this is a good direction to pursue, but not in\na last-minute-hack-for-v13 way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 May 2020 15:53:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "On 5/31/20 9:53 PM, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n>> postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n>> postgres-# ORDER BY n;\n>> ERROR: function lag(numeric, integer, integer) does not exist\n>> LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n>> ^\n> \n> Yeah, we have similar issues elsewhere.\n> \n>> Attached is a patch that fixes this issue using the new anycompatible\n>> pseudotype. I am hoping this can be slipped into 13 even though it\n>> requires a catversion bump after BETA1.\n> \n> When the anycompatible patch went in, I thought for a little bit about\n> trying to use it with existing built-in functions, but didn't have the\n> time to investigate the issue in detail. 
I'm not in favor of hacking\n> things one-function-at-a-time here; we should look through the whole\n> library and see what we've got.\n> \n> The main thing that makes this perhaps-not-trivial is that an\n> anycompatible-ified function will match more cases than it would have\n> before, possibly causing conflicts if the function or operator name\n> is overloaded. We'd have to look at such cases and decide what we\n> want to do --- one answer would be to drop some of the alternatives\n> and rely on the parser to add casts, but that might slow things down.\n> \n> Anyway, I agree that this is a good direction to pursue, but not in\n> a last-minute-hack-for-v13 way.\n\nFair enough. I put it in the commitfest app for version 14.\nhttps://commitfest.postgresql.org/28/2574/\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 31 May 2020 22:02:29 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "On 5/31/20 9:53 PM, Tom Lane wrote:\n> When the anycompatible patch went in, I thought for a little bit about\n> trying to use it with existing built-in functions, but didn't have the\n> time to investigate the issue in detail. I'm not in favor of hacking\n> things one-function-at-a-time here; we should look through the whole\n> library and see what we've got.\n\nBTW, I did go through pg_proc.dat to see what we've got and these were\nthe only two I found. I mentioned that in a part you didn't quote. Now\nI went through again, this time using a query on pg_proc itself, and I\nmissed array_replace during my manual scan.\n\narray_replace, lead, and lag are the only functions we have that take\nmore than one anyelement.\n\nThere are many functions (seemingly all operator implementations) that\ntake multiple anyarray, anyrange, and anyenum; but none with anynonarray\nand only those three for anyelement. 
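The pg_proc scan described here can be approximated with a catalog query; this is a hedged reconstruction (not necessarily the exact query used), relying only on pg_proc.proargtypes and the anyelement pseudotype:

```sql
-- Find functions whose signature uses anyelement more than once.
SELECT p.oid::regprocedure
  FROM pg_proc AS p,
       unnest(p.proargtypes::oid[]) AS arg(typoid)
 WHERE arg.typoid = 'anyelement'::regtype
 GROUP BY p.oid
HAVING count(*) > 1;
```

On the versions under discussion this should agree with the count above: array_replace, lag, and lead.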
Are you sure we don't want to give\nat least the anycompatible type a nice public workout with this?\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 1 Jun 2020 01:30:21 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 5/31/20 9:53 PM, Tom Lane wrote:\n>> When the anycompatible patch went in, I thought for a little bit about\n>> trying to use it with existing built-in functions, but didn't have the\n>> time to investigate the issue in detail. I'm not in favor of hacking\n>> things one-function-at-a-time here; we should look through the whole\n>> library and see what we've got.\n\n> array_replace, lead, and lag are the only functions we have that take\n> more than one anyelement.\n\nThat's just the tip of the iceberg, though. If you consider all the\nold-style polymorphic types, we have\n\n proc \n-----------------------------------------------\n array_append(anyarray,anyelement)\n array_cat(anyarray,anyarray)\n array_eq(anyarray,anyarray)\n array_ge(anyarray,anyarray)\n array_gt(anyarray,anyarray)\n array_larger(anyarray,anyarray)\n array_le(anyarray,anyarray)\n array_lt(anyarray,anyarray)\n array_ne(anyarray,anyarray)\n array_position(anyarray,anyelement)\n array_position(anyarray,anyelement,integer)\n array_positions(anyarray,anyelement)\n array_prepend(anyelement,anyarray)\n array_remove(anyarray,anyelement)\n array_replace(anyarray,anyelement,anyelement)\n array_smaller(anyarray,anyarray)\n arraycontained(anyarray,anyarray)\n arraycontains(anyarray,anyarray)\n arrayoverlap(anyarray,anyarray)\n btarraycmp(anyarray,anyarray)\n elem_contained_by_range(anyelement,anyrange)\n lag(anyelement,integer,anyelement)\n lead(anyelement,integer,anyelement)\n range_adjacent(anyrange,anyrange)\n range_after(anyrange,anyrange)\n range_before(anyrange,anyrange)\n range_cmp(anyrange,anyrange)\n 
range_contained_by(anyrange,anyrange)\n range_contains(anyrange,anyrange)\n range_contains_elem(anyrange,anyelement)\n range_eq(anyrange,anyrange)\n range_ge(anyrange,anyrange)\n range_gist_same(anyrange,anyrange,internal)\n range_gt(anyrange,anyrange)\n range_intersect(anyrange,anyrange)\n range_le(anyrange,anyrange)\n range_lt(anyrange,anyrange)\n range_merge(anyrange,anyrange)\n range_minus(anyrange,anyrange)\n range_ne(anyrange,anyrange)\n range_overlaps(anyrange,anyrange)\n range_overleft(anyrange,anyrange)\n range_overright(anyrange,anyrange)\n range_union(anyrange,anyrange)\n width_bucket(anyelement,anyarray)\n(45 rows)\n\n(I ignored anyenum here, since it has no mapping to the anycompatible\nfamily.) Some of these are internal or can otherwise be discounted,\nbut surely there'd be a market for, say, \"int8[] || int4\". The big\nquestion that raises is can we do it without creating impossible confusion\nwith text-style concatenation.\n\n> Are you sure we don't want to give\n> at least the anycompatible type a nice public workout with this?\n\nYes, I'm quite sure. If the idea crashes and burns for some reason,\nwe'll be glad we didn't buy into it full-speed-ahead right away.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 31 May 2020 22:07:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "po 1. 6. 2020 v 4:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > On 5/31/20 9:53 PM, Tom Lane wrote:\n> >> When the anycompatible patch went in, I thought for a little bit about\n> >> trying to use it with existing built-in functions, but didn't have the\n> >> time to investigate the issue in detail. 
I'm not in favor of hacking\n> >> things one-function-at-a-time here; we should look through the whole\n> >> library and see what we've got.\n>\n> > array_replace, lead, and lag are the only functions we have that take\n> > more than one anyelement.\n>\n> That's just the tip of the iceberg, though. If you consider all the\n> old-style polymorphic types, we have\n>\n> proc\n> -----------------------------------------------\n> array_append(anyarray,anyelement)\n> array_cat(anyarray,anyarray)\n> array_eq(anyarray,anyarray)\n> array_ge(anyarray,anyarray)\n> array_gt(anyarray,anyarray)\n> array_larger(anyarray,anyarray)\n> array_le(anyarray,anyarray)\n> array_lt(anyarray,anyarray)\n> array_ne(anyarray,anyarray)\n> array_position(anyarray,anyelement)\n> array_position(anyarray,anyelement,integer)\n> array_positions(anyarray,anyelement)\n> array_prepend(anyelement,anyarray)\n> array_remove(anyarray,anyelement)\n> array_replace(anyarray,anyelement,anyelement)\n> array_smaller(anyarray,anyarray)\n> arraycontained(anyarray,anyarray)\n> arraycontains(anyarray,anyarray)\n> arrayoverlap(anyarray,anyarray)\n> btarraycmp(anyarray,anyarray)\n>\n\nI am not sure, if using anycompatible for buildin's array functions can be\ngood idea. Theoretically a users can do new performance errors due hidden\ncast of a longer array.\n\nFor example array_positions(int[], numeric). In this case conversion int[]\nto numeric[] can be bad idea in some cases. Reversely in this case we want\nto convert numeric to int. When it is not possible without loss, then we\ncan return false, or maybe raise exception. I designed anycompatible* for\nusage when the parameters has \"symmetric weight\", and cast to most common\ntype should not to have performance issue. It is not this case. 
When I\nthough about this cases, and about designing functions compatible with\nOracle I though about another generic family (families) with different\nbehave (specified by suffix or maybe with typemod or with some syntax):\n\na) force_cast .. it can be good for array's functions -\narray_position(anyarray, anyelement_force_cast), array_position(anyarray,\nanyelement(force_cast)) .. this is often behave in Oracle\nb) force_safe_cast .. special kind of casting - safer variant of tolerant\nOracle's casting - 1.0::int is valid, 1.1::int is not valid in this type of\ncasting\n\nanycompatible* family can helps with some cases, but it is not silver\nbullet for all unfriendly, or not compatible situation (mainly for buildin\nfunctionality).\n\nRegards\n\nPavel\n\n\n\n> elem_contained_by_range(anyelement,anyrange)\n> lag(anyelement,integer,anyelement)\n> lead(anyelement,integer,anyelement)\n> range_adjacent(anyrange,anyrange)\n> range_after(anyrange,anyrange)\n> range_before(anyrange,anyrange)\n> range_cmp(anyrange,anyrange)\n> range_contained_by(anyrange,anyrange)\n> range_contains(anyrange,anyrange)\n> range_contains_elem(anyrange,anyelement)\n> range_eq(anyrange,anyrange)\n> range_ge(anyrange,anyrange)\n> range_gist_same(anyrange,anyrange,internal)\n> range_gt(anyrange,anyrange)\n> range_intersect(anyrange,anyrange)\n> range_le(anyrange,anyrange)\n> range_lt(anyrange,anyrange)\n> range_merge(anyrange,anyrange)\n> range_minus(anyrange,anyrange)\n> range_ne(anyrange,anyrange)\n> range_overlaps(anyrange,anyrange)\n> range_overleft(anyrange,anyrange)\n> range_overright(anyrange,anyrange)\n> range_union(anyrange,anyrange)\n> width_bucket(anyelement,anyarray)\n> (45 rows)\n>\n> (I ignored anyenum here, since it has no mapping to the anycompatible\n> family.) Some of these are internal or can otherwise be discounted,\n> but surely there'd be a market for, say, \"int8[] || int4\". 
The big\n> question that raises is can we do it without creating impossible confusion\n> with text-style concatenation.\n>\n> > Are you sure we don't want to give\n> > at least the anycompatible type a nice public workout with this?\n>\n> Yes, I'm quite sure. If the idea crashes and burns for some reason,\n> we'll be glad we didn't buy into it full-speed-ahead right away.\n>\n> regards, tom lane\n>\n>\n>\n\npo 1. 6. 2020 v 4:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Vik Fearing <vik@postgresfriends.org> writes:\n> On 5/31/20 9:53 PM, Tom Lane wrote:\n>> When the anycompatible patch went in, I thought for a little bit about\n>> trying to use it with existing built-in functions, but didn't have the\n>> time to investigate the issue in detail.  I'm not in favor of hacking\n>> things one-function-at-a-time here; we should look through the whole\n>> library and see what we've got.\n\n> array_replace, lead, and lag are the only functions we have that take\n> more than one anyelement.\n\nThat's just the tip of the iceberg, though.  If you consider all the\nold-style polymorphic types, we have\n\n                     proc                      \n-----------------------------------------------\n array_append(anyarray,anyelement)\n array_cat(anyarray,anyarray)\n array_eq(anyarray,anyarray)\n array_ge(anyarray,anyarray)\n array_gt(anyarray,anyarray)\n array_larger(anyarray,anyarray)\n array_le(anyarray,anyarray)\n array_lt(anyarray,anyarray)\n array_ne(anyarray,anyarray)\n array_position(anyarray,anyelement)\n array_position(anyarray,anyelement,integer)\n array_positions(anyarray,anyelement)\n array_prepend(anyelement,anyarray)\n array_remove(anyarray,anyelement)\n array_replace(anyarray,anyelement,anyelement)\n array_smaller(anyarray,anyarray)\n arraycontained(anyarray,anyarray)\n arraycontains(anyarray,anyarray)\n arrayoverlap(anyarray,anyarray)\n btarraycmp(anyarray,anyarray)I am not sure, if using anycompatible for buildin's array functions can be good idea. 
Theoretically a users can do new performance errors due hidden cast of a longer array.For example array_positions(int[], numeric). In this case conversion int[] to numeric[] can be bad idea in some cases. Reversely in this case we want to convert numeric to int. When it is not possible without loss, then we can return false, or maybe raise exception. I designed anycompatible* for usage when the parameters has \"symmetric weight\", and cast to most common type should not to have performance issue. It is not this case. When I though about this cases, and about designing functions compatible with Oracle I though about another generic family (families) with different behave (specified by suffix or maybe with typemod or with some syntax):a) force_cast .. it can be good for array's functions - array_position(anyarray, anyelement_force_cast), array_position(anyarray, anyelement(force_cast)) .. this is often behave in Oracleb) force_safe_cast .. special kind of casting - safer variant of tolerant Oracle's casting - 1.0::int is valid, 1.1::int is not valid in this type of castinganycompatible* family can helps with some cases, but it is not silver bullet for all unfriendly, or not compatible situation (mainly for buildin functionality).RegardsPavel \n elem_contained_by_range(anyelement,anyrange)\n lag(anyelement,integer,anyelement)\n lead(anyelement,integer,anyelement)\n range_adjacent(anyrange,anyrange)\n range_after(anyrange,anyrange)\n range_before(anyrange,anyrange)\n range_cmp(anyrange,anyrange)\n range_contained_by(anyrange,anyrange)\n range_contains(anyrange,anyrange)\n range_contains_elem(anyrange,anyelement)\n range_eq(anyrange,anyrange)\n range_ge(anyrange,anyrange)\n range_gist_same(anyrange,anyrange,internal)\n range_gt(anyrange,anyrange)\n range_intersect(anyrange,anyrange)\n range_le(anyrange,anyrange)\n range_lt(anyrange,anyrange)\n range_merge(anyrange,anyrange)\n range_minus(anyrange,anyrange)\n range_ne(anyrange,anyrange)\n 
range_overlaps(anyrange,anyrange)\n range_overleft(anyrange,anyrange)\n range_overright(anyrange,anyrange)\n range_union(anyrange,anyrange)\n width_bucket(anyelement,anyarray)\n(45 rows)\n\n(I ignored anyenum here, since it has no mapping to the anycompatible\nfamily.)  Some of these are internal or can otherwise be discounted,\nbut surely there'd be a market for, say, \"int8[] || int4\".  The big\nquestion that raises is can we do it without creating impossible confusion\nwith text-style concatenation.\n\n> Are you sure we don't want to give\n> at least the anycompatible type a nice public workout with this?\n\nYes, I'm quite sure.  If the idea crashes and burns for some reason,\nwe'll be glad we didn't buy into it full-speed-ahead right away.\n\n                        regards, tom lane", "msg_date": "Mon, 1 Jun 2020 05:35:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> po 1. 6. 2020 v 4:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> That's just the tip of the iceberg, though. If you consider all the\n>> old-style polymorphic types, we have [for example]\n>> array_append(anyarray,anyelement)\n\n> I am not sure, if using anycompatible for buildin's array functions can be\n> good idea. Theoretically a users can do new performance errors due hidden\n> cast of a longer array.\n\nI don't buy that argument. If the query requires casting int4[] to\nint8[], making the user do it by hand isn't going to improve performance\nover having the parser insert the coercion automatically. Sure, there\nwill be some fraction of queries that could be rewritten to avoid the\nneed for any cast, but so what? 
Often the performance difference isn't\ngoing to matter; and when it does, I don't see that this is any different\nfrom hundreds of other cases in which there are more-efficient and\nless-efficient ways to write a query. SQL is full of performance traps\nand always will be. You're also assuming that when the user gets an\n\"operator does not exist\" error from \"int4[] || int8\", that will magically\nlead them to choosing an optimal substitute. I know of no reason to\nbelieve that --- it's at least as likely that they'll conclude it just\ncan't be done, as in the LAG() example we started the thread with. So\nI think most people would be much happier if the system just did something\nreasonable, and they can optimize later if it's important.\n\n> When I\n> though about this cases, and about designing functions compatible with\n> Oracle I though about another generic family (families) with different\n> behave (specified by suffix or maybe with typemod or with some syntax):\n\nSo we're already deciding anycompatible can't get the job done? Maybe\nit's a good thing we had this conversation now. It's not too late to\nrip the feature out of v13 altogether, and try again later. But if\nyou think I'm going to commit yet another variant of polymorphism on\ntop of this one, you're mistaken.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 11:36:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "po 1. 6. 2020 v 17:36 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > po 1. 6. 2020 v 4:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> That's just the tip of the iceberg, though. If you consider all the\n> >> old-style polymorphic types, we have [for example]\n> >> array_append(anyarray,anyelement)\n>\n> > I am not sure, if using anycompatible for buildin's array functions can\n> be\n> > good idea. 
Theoretically a users can do new performance errors due hidden\n> > cast of a longer array.\n>\n> I don't buy that argument. If the query requires casting int4[] to\n> int8[], making the user do it by hand isn't going to improve performance\n> over having the parser insert the coercion automatically. Sure, there\n> will be some fraction of queries that could be rewritten to avoid the\n> need for any cast, but so what? Often the performance difference isn't\n> going to matter; and when it does, I don't see that this is any different\n> from hundreds of other cases in which there are more-efficient and\n> less-efficient ways to write a query. SQL is full of performance traps\n> and always will be. You're also assuming that when the user gets an\n> \"operator does not exist\" error from \"int4[] || int8\", that will magically\n> lead them to choosing an optimal substitute. I know of no reason to\n> believe that --- it's at least as likely that they'll conclude it just\n> can't be done, as in the LAG() example we started the thread with. So\n> I think most people would be much happier if the system just did something\n> reasonable, and they can optimize later if it's important.\n>\n> > When I\n> > though about this cases, and about designing functions compatible with\n> > Oracle I though about another generic family (families) with different\n> > behave (specified by suffix or maybe with typemod or with some syntax):\n>\n> So we're already deciding anycompatible can't get the job done? Maybe\n> it's a good thing we had this conversation now. It's not too late to\n> rip the feature out of v13 altogether, and try again later. But if\n> you think I'm going to commit yet another variant of polymorphism on\n> top of this one, you're mistaken.\n>\n\nanycompatible types are fully conssistent with Postgres buildin functions\nsupported by parser. 
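A minimal sketch of the behavior under discussion, assuming lag() gains the proposed signature lag(anycompatible, integer, anycompatible) from the patch, is the example that started the thread, which would then type-resolve instead of failing:

```sql
-- Hypothetical result under the proposed anycompatible-based signature:
-- the parser unifies the value type (numeric) and the default type
-- (integer) to a common type, so the call no longer errors out with
-- "function lag(numeric, integer, integer) does not exist".
SELECT LAG(n, 1, -99) OVER (ORDER BY n)
FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)
ORDER BY n;
-- first row gets the default -99; the result column is numeric
```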
It is main benefit and important benefit.\nWithout anycompatible types is pretty hard to write variadic functions with\nfriendly behave like buildin functions has.\nIt can be perfect for LAG() example. It does almost same work what we did\nmanually in parser.\n\nSurely, it is not compatible with Oracle's polymorphism rules, because\n\na) Our and Postgres type system is different (sometimes very different).\n\nb) Oracle's casting rules depends on argument positions and some specific\nexceptions - so it is not possible to map it on Postgres type system (or\nsystem of polymorphic types).\n\nI wrote and I spent lot of time on this feature to be used - by core\ndevelopers, by extension's developers. Like lot of other feature - it can\ncarry more comfort with some risk of performance issues.\n\nOn second hand if we use this feature for buildin functions, there is zero\nimpact of current applications - there should not be any problem with\ncompatibility or performance.\n\nMaybe I am too old, and last year was too terrible so I have a problem to\nimagine a \"intelligent\" user now :)\n\nRegards\n\nPavel\n\nAlthough I can imagine other enhancing of polymorphic types, I don't\npropose any new, and I don't expect any enhancing in near feature. Is true,\nso there are not requests and I think so \"anycompatible\" and \"any\" are more\nthan good enough for 99.99% developers.\n\nI am little bit surprised so semi compatibility mode implemented in Orafce\nis good enough and full compatibility with Oracle a) isn't possible, b)\nisn't requested by visible group of users (or users who need it are\nsatisfied by EDB).\n\n\n\n>\n> regards, tom lane\n>\n\npo 1. 6. 2020 v 17:36 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> po 1. 6. 2020 v 4:07 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> That's just the tip of the iceberg, though.  
If you consider all the\n>> old-style polymorphic types, we have [for example]\n>> array_append(anyarray,anyelement)\n\n> I am not sure, if using anycompatible for buildin's array functions can be\n> good idea. Theoretically a users can do new performance errors due hidden\n> cast of a longer array.\n\nI don't buy that argument.  If the query requires casting int4[] to\nint8[], making the user do it by hand isn't going to improve performance\nover having the parser insert the coercion automatically.  Sure, there\nwill be some fraction of queries that could be rewritten to avoid the\nneed for any cast, but so what?  Often the performance difference isn't\ngoing to matter; and when it does, I don't see that this is any different\nfrom hundreds of other cases in which there are more-efficient and\nless-efficient ways to write a query.  SQL is full of performance traps\nand always will be.  You're also assuming that when the user gets an\n\"operator does not exist\" error from \"int4[] || int8\", that will magically\nlead them to choosing an optimal substitute.  I know of no reason to\nbelieve that --- it's at least as likely that they'll conclude it just\ncan't be done, as in the LAG() example we started the thread with.  So\nI think most people would be much happier if the system just did something\nreasonable, and they can optimize later if it's important.\n\n> When I\n> though about this cases, and about designing functions compatible with\n> Oracle I though about another generic family (families) with different\n> behave (specified by suffix or maybe with typemod or with some syntax):\n\nSo we're already deciding anycompatible can't get the job done?  Maybe\nit's a good thing we had this conversation now.  It's not too late to\nrip the feature out of v13 altogether, and try again later.  
But if\nyou think I'm going to commit yet another variant of polymorphism on\ntop of this one, you're mistaken.anycompatible types are fully conssistent with Postgres buildin functions  supported by parser. It is main benefit and important benefit. Without anycompatible types is pretty hard to write variadic functions with friendly behave like buildin functions has.It can be perfect for LAG() example. It does almost same work what we did manually in parser.Surely, it is not compatible with Oracle's polymorphism rules, becausea) Our and Postgres type system is different (sometimes very different). b) Oracle's casting rules depends on argument positions and some specific exceptions - so it is not possible to map it on Postgres type system (or system of polymorphic types).I wrote and I spent lot of time on this feature to be used - by core developers, by extension's developers. Like lot of other feature - it can carry  more comfort with some risk of performance issues.On second hand if we use this feature for buildin functions, there is zero impact of current applications - there should not be any problem with compatibility or performance. Maybe I am too old, and last year was too terrible so I have a problem to imagine a \"intelligent\" user now :)RegardsPavelAlthough I can imagine other enhancing of polymorphic types, I don't propose any new, and I don't expect any enhancing in near feature. Is true, so there are not requests and I think so \"anycompatible\" and \"any\" are more than good enough for 99.99% developers.I am little bit surprised so semi compatibility mode implemented in Orafce is good enough and full compatibility with Oracle a) isn't possible, b) isn't requested by visible group of users (or users who need it are satisfied by EDB). 
\n\n                        regards, tom lane", "msg_date": "Mon, 1 Jun 2020 18:26:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Hi\n\nne 31. 5. 2020 v 22:02 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 5/31/20 9:53 PM, Tom Lane wrote:\n> > Vik Fearing <vik@postgresfriends.org> writes:\n> >> postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> >> postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n> >> postgres-# ORDER BY n;\n> >> ERROR: function lag(numeric, integer, integer) does not exist\n> >> LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> >> ^\n> >\n> > Yeah, we have similar issues elsewhere.\n> >\n> >> Attached is a patch that fixes this issue using the new anycompatible\n> >> pseudotype. I am hoping this can be slipped into 13 even though it\n> >> requires a catversion bump after BETA1.\n> >\n> > When the anycompatible patch went in, I thought for a little bit about\n> > trying to use it with existing built-in functions, but didn't have the\n> > time to investigate the issue in detail. I'm not in favor of hacking\n> > things one-function-at-a-time here; we should look through the whole\n> > library and see what we've got.\n> >\n> > The main thing that makes this perhaps-not-trivial is that an\n> > anycompatible-ified function will match more cases than it would have\n> > before, possibly causing conflicts if the function or operator name\n> > is overloaded. We'd have to look at such cases and decide what we\n> > want to do --- one answer would be to drop some of the alternatives\n> > and rely on the parser to add casts, but that might slow things down.\n> >\n> > Anyway, I agree that this is a good direction to pursue, but not in\n> > a last-minute-hack-for-v13 way.\n>\n> Fair enough. 
I put it in the commitfest app for version 14.\n> https://commitfest.postgresql.org/28/2574/\n> --\n> Vik Fearing\n>\n\nThe patch is ok. It is almost trivial. It solves one issue, but maybe it\nintroduces a new issue.\n\nOther databases try to coerce default constant expression to value\nexpression. I found a description about this behaviour for MSSQL, Sybase,\nBigQuery.\n\nI didn't find related documentation for Oracle, and I have not a access to\nsome running instance of Oracle to test it.\n\nAnsi SQL say - type of default expression should be compatible with lag\nexpression, and declared type of result should be type of value expression.\n\nIWD 9075-2:201?(E) 6.10 <window function> - j) v)\n\nCurrent implementation is limited (and the behaviour is not user friendly\nin all details), but new behaviour (implemented by patch) is in half cases\nout of standard :(.\n\nThese cases are probably not often - and they are really corner cases, but\nI cannot to qualify how much important this fact is.\n\nFor users, the implemented feature is better, and it is safe.\nImplementation is trivial - is significantly simpler than implementation\nthat is 100% standard, although it should not be a hard problem too (in\nanalyze stage it probably needs a few lines too).\n\nThere has to be a decision, because now we can go in both directions. After\nchoosing there will not be a way back.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 13 Jul 2020 19:23:57 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "> On 13 Jul 2020, at 19:23, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> ne 31. 5. 
2020 v 22:02 odesílatel Vik Fearing <vik@postgresfriends.org <mailto:vik@postgresfriends.org>> napsal:\n> On 5/31/20 9:53 PM, Tom Lane wrote:\n> > Vik Fearing <vik@postgresfriends.org <mailto:vik@postgresfriends.org>> writes:\n> >> postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> >> postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n> >> postgres-# ORDER BY n;\n> >> ERROR: function lag(numeric, integer, integer) does not exist\n> >> LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> >> ^\n> > \n> > Yeah, we have similar issues elsewhere.\n> > \n> >> Attached is a patch that fixes this issue using the new anycompatible\n> >> pseudotype. I am hoping this can be slipped into 13 even though it\n> >> requires a catversion bump after BETA1.\n> > \n> > When the anycompatible patch went in, I thought for a little bit about\n> > trying to use it with existing built-in functions, but didn't have the\n> > time to investigate the issue in detail. I'm not in favor of hacking\n> > things one-function-at-a-time here; we should look through the whole\n> > library and see what we've got.\n> > \n> > The main thing that makes this perhaps-not-trivial is that an\n> > anycompatible-ified function will match more cases than it would have\n> > before, possibly causing conflicts if the function or operator name\n> > is overloaded. We'd have to look at such cases and decide what we\n> > want to do --- one answer would be to drop some of the alternatives\n> > and rely on the parser to add casts, but that might slow things down.\n> > \n> > Anyway, I agree that this is a good direction to pursue, but not in\n> > a last-minute-hack-for-v13 way.\n> \n> Fair enough. I put it in the commitfest app for version 14.\n> https://commitfest.postgresql.org/28/2574/ <https://commitfest.postgresql.org/28/2574/>\n> -- \n> Vik Fearing\n> \n> The patch is ok. It is almost trivial. It solves one issue, but maybe it introduces a new issue. 
\n> \n> Other databases try to coerce default constant expression to value expression. I found a description about this behaviour for MSSQL, Sybase, BigQuery.\n> \n> I didn't find related documentation for Oracle, and I have not a access to some running instance of Oracle to test it.\n> \n> Ansi SQL say - type of default expression should be compatible with lag expression, and declared type of result should be type of value expression.\n> \n> IWD 9075-2:201?(E) 6.10 <window function> - j) v)\n> \n> Current implementation is limited (and the behaviour is not user friendly in all details), but new behaviour (implemented by patch) is in half cases out of standard :(.\n> \n> These cases are probably not often - and they are really corner cases, but I cannot to qualify how much important this fact is.\n> \n> For users, the implemented feature is better, and it is safe. Implementation is trivial - is significantly simpler than implementation that is 100% standard, although it should not be a hard problem too (in analyze stage it probably needs a few lines too).\n> \n> There has to be a decision, because now we can go in both directions. After choosing there will not be a way back.\n\nThis patch is marked waiting for author, but it's not clear from this review\nwhat we're waiting on. Should this be moved to Needs review or Ready for\nCommitter instead to solicit the input from a committer? ISTM the latter is\nmore suitable. Or did I misunderstand your review?\n\ncheers ./daniel\n\n\n\n", "msg_date": "Thu, 23 Jul 2020 13:29:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "čt 23. 7. 2020 v 13:29 odesílatel Daniel Gustafsson <daniel@yesql.se>\nnapsal:\n\n> > On 13 Jul 2020, at 19:23, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > ne 31. 5. 
2020 v 22:02 odesílatel Vik Fearing <vik@postgresfriends.org\n> <mailto:vik@postgresfriends.org>> napsal:\n> > On 5/31/20 9:53 PM, Tom Lane wrote:\n> > > Vik Fearing <vik@postgresfriends.org <mailto:vik@postgresfriends.org>>\n> writes:\n> > >> postgres=# SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> > >> postgres-# FROM (VALUES (1.1), (2.2), (3.3)) AS v (n)\n> > >> postgres-# ORDER BY n;\n> > >> ERROR: function lag(numeric, integer, integer) does not exist\n> > >> LINE 1: SELECT LAG(n, 1, -99) OVER (ORDER BY n)\n> > >> ^\n> > >\n> > > Yeah, we have similar issues elsewhere.\n> > >\n> > >> Attached is a patch that fixes this issue using the new anycompatible\n> > >> pseudotype. I am hoping this can be slipped into 13 even though it\n> > >> requires a catversion bump after BETA1.\n> > >\n> > > When the anycompatible patch went in, I thought for a little bit about\n> > > trying to use it with existing built-in functions, but didn't have the\n> > > time to investigate the issue in detail. I'm not in favor of hacking\n> > > things one-function-at-a-time here; we should look through the whole\n> > > library and see what we've got.\n> > >\n> > > The main thing that makes this perhaps-not-trivial is that an\n> > > anycompatible-ified function will match more cases than it would have\n> > > before, possibly causing conflicts if the function or operator name\n> > > is overloaded. We'd have to look at such cases and decide what we\n> > > want to do --- one answer would be to drop some of the alternatives\n> > > and rely on the parser to add casts, but that might slow things down.\n> > >\n> > > Anyway, I agree that this is a good direction to pursue, but not in\n> > > a last-minute-hack-for-v13 way.\n> >\n> > Fair enough. I put it in the commitfest app for version 14.\n> > https://commitfest.postgresql.org/28/2574/ <\n> https://commitfest.postgresql.org/28/2574/>\n> > --\n> > Vik Fearing\n> >\n> > The patch is ok. It is almost trivial. 
It solves one issue, but maybe it\n> introduces a new issue.\n> >\n> > Other databases try to coerce default constant expression to value\n> expression. I found a description about this behaviour for MSSQL, Sybase,\n> BigQuery.\n> >\n> > I didn't find related documentation for Oracle, and I have not a access\n> to some running instance of Oracle to test it.\n> >\n> > Ansi SQL say - type of default expression should be compatible with lag\n> expression, and declared type of result should be type of value expression.\n> >\n> > IWD 9075-2:201?(E) 6.10 <window function> - j) v)\n> >\n> > Current implementation is limited (and the behaviour is not user\n> friendly in all details), but new behaviour (implemented by patch) is in\n> half cases out of standard :(.\n> >\n> > These cases are probably not often - and they are really corner cases,\n> but I cannot to qualify how much important this fact is.\n> >\n> > For users, the implemented feature is better, and it is safe.\n> Implementation is trivial - is significantly simpler than implementation\n> that is 100% standard, although it should not be a hard problem too (in\n> analyze stage it probably needs a few lines too).\n> >\n> > There has to be a decision, because now we can go in both directions.\n> After choosing there will not be a way back.\n>\n> This patch is marked waiting for author, but it's not clear from this\n> review\n> what we're waiting on. Should this be moved to Needs review or Ready for\n> Committer instead to solicit the input from a committer? ISTM the latter\n> is\n> more suitable. 
Or did I misunderstand your review?\n>\n\nUnfortunately, I don't know - I am not sure about committing this patch,\nand I am not able to qualify for impact.\n\nThis is nice example of usage of anycompatible type (that is consistent\nwith other things in Postgres), but standard says something else.\n\nIt can be easily solved with https://commitfest.postgresql.org/28/2081/ -\nbut Tom doesn't like this patch.\n\nI am more inclined to think so this feature should be implemented\ndifferently - there is no strong reason to go against standard or against\nthe implementations of other databases (and increase the costs of porting).\nNow the implementation is limited, but allowed behaviour is 100% ANSI.\n\nOn second hand, the implemented feature (behaviour) is pretty small, and\nreally corner case, so maybe a simple solution (although it isn't perfect)\nis good.\n\nSo I would listen to the opinions of other people.\n\nRegards\n\nPavel\n\n\n\n> cheers ./daniel\n>\n", "msg_date": "Thu, 23 Jul 2020 13:55:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> This is nice example of usage of anycompatible type (that is consistent\n> with other things in Postgres), but standard says something else.\n> It can be easily solved with https://commitfest.postgresql.org/28/2081/ -\n> but Tom doesn't like this patch.\n> I am more inclined to think so this feature should be implemented\n> differently - there is no strong reason to go against standard or against\n> the implementations of other databases (and increase the costs of porting).\n> Now the implementation is limited, but allowed behaviour is 100% ANSI.\n\nI don't particularly buy this argument.  The case at hand is what to do\nif we have, say,\n\n\tselect lag(integer_column, 1, 1.2) over ...\n\nThe proposed patch would result in the output being of type numeric,\nand any rows using the default would show \"1.2\".  The spec says that\nthe right thing is to return integer, and we should round the default\nto \"1\" to make that work.  But\n\n(1) I doubt that anybody actually writes such things;\n\n(2) For anyone who does write it, the spec's behavior fails to meet\nthe principle of least surprise.  It is not normally the case that\nany information-losing cast would be applied silently within an\nexpression.\n\nSo this deviation from spec doesn't bother me; we have much bigger ones.\n\nMy concern with this patch is what I said upthread: I'm not sure that\nwe should be adjusting this behavior in a piecemeal fashion.  
I looked\nthrough pg_proc to find all the functions that have more than one any*\nargument, and found these:\n\n oid | prorettype \n-----------------------------------------------+------------\n lag(anyelement,integer,anyelement) | anyelement\n lead(anyelement,integer,anyelement) | anyelement\n width_bucket(anyelement,anyarray) | integer\n btarraycmp(anyarray,anyarray) | integer\n array_eq(anyarray,anyarray) | boolean\n array_ne(anyarray,anyarray) | boolean\n array_lt(anyarray,anyarray) | boolean\n array_gt(anyarray,anyarray) | boolean\n array_le(anyarray,anyarray) | boolean\n array_ge(anyarray,anyarray) | boolean\n array_append(anyarray,anyelement) | anyarray\n array_prepend(anyelement,anyarray) | anyarray\n array_cat(anyarray,anyarray) | anyarray\n array_larger(anyarray,anyarray) | anyarray\n array_smaller(anyarray,anyarray) | anyarray\n array_position(anyarray,anyelement) | integer\n array_position(anyarray,anyelement,integer) | integer\n array_positions(anyarray,anyelement) | integer[]\n array_remove(anyarray,anyelement) | anyarray\n array_replace(anyarray,anyelement,anyelement) | anyarray\n arrayoverlap(anyarray,anyarray) | boolean\n arraycontains(anyarray,anyarray) | boolean\n arraycontained(anyarray,anyarray) | boolean\n elem_contained_by_range(anyelement,anyrange) | boolean\n range_contains_elem(anyrange,anyelement) | boolean\n range_eq(anyrange,anyrange) | boolean\n range_ne(anyrange,anyrange) | boolean\n range_overlaps(anyrange,anyrange) | boolean\n range_contains(anyrange,anyrange) | boolean\n range_contained_by(anyrange,anyrange) | boolean\n range_adjacent(anyrange,anyrange) | boolean\n range_before(anyrange,anyrange) | boolean\n range_after(anyrange,anyrange) | boolean\n range_overleft(anyrange,anyrange) | boolean\n range_overright(anyrange,anyrange) | boolean\n range_union(anyrange,anyrange) | anyrange\n range_merge(anyrange,anyrange) | anyrange\n range_intersect(anyrange,anyrange) | anyrange\n range_minus(anyrange,anyrange) | anyrange\n 
range_cmp(anyrange,anyrange) | integer\n range_lt(anyrange,anyrange) | boolean\n range_le(anyrange,anyrange) | boolean\n range_ge(anyrange,anyrange) | boolean\n range_gt(anyrange,anyrange) | boolean\n range_gist_same(anyrange,anyrange,internal) | internal\n(45 rows)\n\nNow, there's no point in changing range_eq and the later entries in this\ntable (the ones taking two anyrange's); our rather lame definition of\nanycompatiblerange means that we'd get no benefit from doing so. But\nI think there's a strong case for changing everything before range_eq.\nIt'd be nice if something like\n\nSELECT array[1] < array[1.1];\n\nwould work the same way that \"SELECT 1 < 1.1\" would.\n\nI checked the other concern that I had about whether the more flexible\npolymorphic definitions would create any new ambiguities, and I don't\nsee any problems with this list. As functions, none of these names are\noverloaded, except with different numbers of arguments so there's no\nambiguity. Most of the array functions are also operators, but the\ncompeting operators do not take arrays, so it doesn't look like there's\nany issue on that side either.\n\nSo I think we should be more ambitious and generalize all of these\nto use anycompatiblearray etc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Aug 2020 17:59:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "ne 30. 8. 
2020 v 23:59 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > This is nice example of usage of anycompatible type (that is consistent\n> > with other things in Postgres), but standard says something else.\n> > It can be easily solved with https://commitfest.postgresql.org/28/2081/\n> -\n> > but Tom doesn't like this patch.\n> > I am more inclined to think so this feature should be implemented\n> > differently - there is no strong reason to go against standard or against\n> > the implementations of other databases (and increase the costs of\n> porting).\n> > Now the implementation is limited, but allowed behaviour is 100% ANSI.\n>\n> I don't particularly buy this argument. The case at hand is what to do\n> if we have, say,\n>\n> select lag(integer_column, 1, 1.2) over ...\n>\n> The proposed patch would result in the output being of type numeric,\n> and any rows using the default would show \"1.2\". The spec says that\n> the right thing is to return integer, and we should round the default\n> to \"1\" to make that work. But\n>\n> (1) I doubt that anybody actually writes such things;\n>\n> (2) For anyone who does write it, the spec's behavior fails to meet\n> the principle of least surprise. It is not normally the case that\n> any information-losing cast would be applied silently within an\n> expression.\n>\n\npostgres=# create table foo(a int);\nCREATE TABLE\npostgres=# insert into foo values(1.1);\nINSERT 0 1\n\npostgres=# create table foo(a int default 1.1);\nCREATE TABLE\npostgres=# insert into foo values(default);\nINSERT 0 1\npostgres=# select * from foo;\n┌───┐\n│ a │\n╞═══╡\n│ 1 │\n└───┘\n(1 row)\n\nIt is maybe strange, but it is not an unusual pattern in SQL. I think it\ncan be analogy with default clause\n\nDECLARE a int DEFAULT 1.2;\n\nThe default value doesn't change a type of variable. This is maybe too\nstupid example. 
There can be other little bit more realistic\n\nCREATE OR REPLACE FUNCTION foo(a numeric, b numeric, ...\nDECLARE x int DEFAULT a;\nBEGIN\n ...\n\nI am afraid about performance - if default value can change type, then some\nother things can stop work - like using index.\n\nFor *this* case we don't speak about some operations between two\nindependent operands or function arguments. We are speaking about\nthe value and about a *default* for the value.\n\n\n> So this deviation from spec doesn't bother me; we have much bigger ones.\n>\n\nok, if it is acceptable for other people, I can accept it too - I\nunderstand well so it is a corner case and there is some consistency with\nother Postgres features.\n\nMaybe this difference should be mentioned in documentation.\n\n\n> My concern with this patch is what I said upthread: I'm not sure that\n> we should be adjusting this behavior in a piecemeal fashion. I looked\n> through pg_proc to find all the functions that have more than one any*\n> argument, and found these:\n>\n> oid | prorettype\n> -----------------------------------------------+------------\n> lag(anyelement,integer,anyelement) | anyelement\n> lead(anyelement,integer,anyelement) | anyelement\n> width_bucket(anyelement,anyarray) | integer\n> btarraycmp(anyarray,anyarray) | integer\n> array_eq(anyarray,anyarray) | boolean\n> array_ne(anyarray,anyarray) | boolean\n> array_lt(anyarray,anyarray) | boolean\n> array_gt(anyarray,anyarray) | boolean\n> array_le(anyarray,anyarray) | boolean\n> array_ge(anyarray,anyarray) | boolean\n> array_append(anyarray,anyelement) | anyarray\n> array_prepend(anyelement,anyarray) | anyarray\n> array_cat(anyarray,anyarray) | anyarray\n> array_larger(anyarray,anyarray) | anyarray\n> array_smaller(anyarray,anyarray) | anyarray\n> array_position(anyarray,anyelement) | integer\n> array_position(anyarray,anyelement,integer) | integer\n> array_positions(anyarray,anyelement) | integer[]\n> array_remove(anyarray,anyelement) | anyarray\n> 
array_replace(anyarray,anyelement,anyelement) | anyarray\n> arrayoverlap(anyarray,anyarray) | boolean\n> arraycontains(anyarray,anyarray) | boolean\n> arraycontained(anyarray,anyarray) | boolean\n> elem_contained_by_range(anyelement,anyrange) | boolean\n> range_contains_elem(anyrange,anyelement) | boolean\n> range_eq(anyrange,anyrange) | boolean\n> range_ne(anyrange,anyrange) | boolean\n> range_overlaps(anyrange,anyrange) | boolean\n> range_contains(anyrange,anyrange) | boolean\n> range_contained_by(anyrange,anyrange) | boolean\n> range_adjacent(anyrange,anyrange) | boolean\n> range_before(anyrange,anyrange) | boolean\n> range_after(anyrange,anyrange) | boolean\n> range_overleft(anyrange,anyrange) | boolean\n> range_overright(anyrange,anyrange) | boolean\n> range_union(anyrange,anyrange) | anyrange\n> range_merge(anyrange,anyrange) | anyrange\n> range_intersect(anyrange,anyrange) | anyrange\n> range_minus(anyrange,anyrange) | anyrange\n> range_cmp(anyrange,anyrange) | integer\n> range_lt(anyrange,anyrange) | boolean\n> range_le(anyrange,anyrange) | boolean\n> range_ge(anyrange,anyrange) | boolean\n> range_gt(anyrange,anyrange) | boolean\n> range_gist_same(anyrange,anyrange,internal) | internal\n> (45 rows)\n>\n> Now, there's no point in changing range_eq and the later entries in this\n> table (the ones taking two anyrange's); our rather lame definition of\n> anycompatiblerange means that we'd get no benefit from doing so. But\n> I think there's a strong case for changing everything before range_eq.\n> It'd be nice if something like\n>\n> SELECT array[1] < array[1.1];\n>\n> would work the same way that \"SELECT 1 < 1.1\" would.\n>\n\nThere it has sense without any discussion. But it is a little bit different\nthan using the default value in the LAG function.\n\n\n> I checked the other concern that I had about whether the more flexible\n> polymorphic definitions would create any new ambiguities, and I don't\n> see any problems with this list. 
As functions, none of these names are\n> overloaded, except with different numbers of arguments so there's no\n> ambiguity.  Most of the array functions are also operators, but the\n> competing operators do not take arrays, so it doesn't look like there's\n> any issue on that side either.\n>\n> So I think we should be more ambitious and generalize all of these\n> to use anycompatiblearray etc.\n>\n\n+1\n\nPavel\n\n>\n>             regards, tom lane\n>\n", "msg_date": "Mon, 31 Aug 2020 07:05:41 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "po 31. 8. 2020 v 7:05 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> ne 30. 8. 2020 v 23:59 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > This is nice example of usage of anycompatible type (that is consistent\n>> > with other things in Postgres), but standard says something else.\n>> > It can be easily solved with https://commitfest.postgresql.org/28/2081/\n>> -\n>> > but Tom doesn't like this patch.\n>> > I am more inclined to think so this feature should be implemented\n>> > differently - there is no strong reason to go against standard or\n>> against\n>> > the implementations of other databases (and increase the costs of\n>> porting).\n>> > Now the implementation is limited, but allowed behaviour is 100% ANSI.\n>>\n>> I don't particularly buy this argument.  The case at hand is what to do\n>> if we have, say,\n>>\n>>         select lag(integer_column, 1, 1.2) over ...\n>>\n>> The proposed patch would result in the output being of type numeric,\n>> and any rows using the default would show \"1.2\".  The spec says that\n>> the right thing is to return integer, and we should round the default\n>> to \"1\" to make that work.  But\n>>\n>> (1) I doubt that anybody actually writes such things;\n>>\n>> (2) For anyone who does write it, the spec's behavior fails to meet\n>> the principle of least surprise.  
It is not normally the case that\n>> any information-losing cast would be applied silently within an\n>> expression.\n>>\n>\n> postgres=# create table foo(a int);\n> CREATE TABLE\n> postgres=# insert into foo values(1.1);\n> INSERT 0 1\n>\n> postgres=# create table foo(a int default 1.1);\n> CREATE TABLE\n> postgres=# insert into foo values(default);\n> INSERT 0 1\n> postgres=# select * from foo;\n> ┌───┐\n> │ a │\n> ╞═══╡\n> │ 1 │\n> └───┘\n> (1 row)\n>\n> It is maybe strange, but it is not an unusual pattern in SQL. I think it\n> can be analogy with default clause\n>\n> DECLARE a int DEFAULT 1.2;\n>\n> The default value doesn't change a type of variable. This is maybe too\n> stupid example. There can be other little bit more realistic\n>\n> CREATE OR REPLACE FUNCTION foo(a numeric, b numeric, ...\n> DECLARE x int DEFAULT a;\n> BEGIN\n> ...\n>\n> I am afraid about performance - if default value can change type, then\n> some other things can stop work - like using index.\n>\n> For *this* case we don't speak about some operations between two\n> independent operands or function arguments. We are speaking about\n> the value and about a *default* for the value.\n>\n>\n>> So this deviation from spec doesn't bother me; we have much bigger ones.\n>>\n>\n> ok, if it is acceptable for other people, I can accept it too - I\n> understand well so it is a corner case and there is some consistency with\n> other Postgres features.\n>\n> Maybe this difference should be mentioned in documentation.\n>\n\nI thought more about this problem, and I think so ANSI specification is\nsemantically fully correct - it is consistent with application of default\nvalue elsewhere in SQL environment.\n\nIn this case the optional argument is not \"any\" expression. It is the\ndefault value for some expression . In other cases we usually use forced\nexplicit cast.\n\nUnfortunately we do not have good tools for generic implementation of this\nsituation. 
In other cases there the functions have special support in\nparser for this case (example xmltable)\n\nI see few possibilities how to finish and close this issue:\n\n1. use anycompatible type and add note to documentation so expression of\noptional argument can change a result type, and so this is Postgres\nspecific - other databases and ANSI SQL disallow this.\nIt is the most simple solution, and it is good enough for this case, that\nis not extra important.\n\n2. choose the correct type somewhere inside the parser - for these two\nfunctions.\n\n3. introduce and implement some generic solution for this case - it can be\nimplemented on C level via some function helper or on syntax level\n\n CREATE OR REPLACE FUNCTION lag(a anyelement, offset int, default defval\na%type) ...\n\n\"defval argname%type\" means for caller's expression \"CAST(defval AS\ntypeof(argname))\"\n\n@3 can be a very interesting and useful feature, but it needs an agreement\nand harder work\n@2 this is 100% correct solution without hard work (but I am not sure if\nthere can be an agreement on this implementation)\n@1 it is good enough for this issue without harder work and probably there\nwe can find an agreement simply.\n\nRegards\n\nPavel\n", "msg_date": "Fri, 4 Sep 2020 08:33:36 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I see few possibilities how to finish and close this issue:\n> 1. use anycompatible type and add note to documentation so expression of\n> optional argument can change a result type, and so this is Postgres\n> specific - other databases and ANSI SQL disallow this.\n> It is the most simple solution, and it is good enough for this case, that\n> is not extra important.\n> 2. choose the correct type somewhere inside the parser - for these two\n> functions.\n> 3. introduce and implement some generic solution for this case - it can be\n> implemented on C level via some function helper or on syntax level\n>    CREATE OR REPLACE FUNCTION lag(a anyelement, offset int, default defval\n> a%type) ...\n> \"defval argname%type\" means for caller's expression \"CAST(defval AS\n> typeof(argname))\"\n\nI continue to feel that the spec's definition of this is not so\nobviously right that we should jump through hoops to duplicate it.\nIn fact, I don't even agree that we need a disclaimer in the docs\nsaying that it's not quite the same.  Only the most nitpicky\nlanguage lawyers would ever even notice.\n\nIn hopes of moving this along a bit, I experimented with converting\nthe other functions I listed to use anycompatible.  
I soon found that\ntouching any functions/operators that are listed in operator classes\nwould open a can of worms far larger than I'd previously supposed.\nTo maintain consistency, we'd have to propagate the datatype changes\nto *every* function/operator associated with those opfamilies --- many\nof which only have one any-foo input and thus aren't on my original\nlist. (An example here is that extending btree array_ops will require\nchanging the max(anyarray) and min(anyarray) aggregates too.) We'd\nthen end up with a situation that would be rather confusing from a\nuser's standpoint, in that it'd be quite un-obvious why some array\nfunctions use anyarray while other ones use anycompatiblearray.\n\nSo I backed off to just changing the functions/operators that have\nno opclass connections, such as array_cat. Even that has some\ndownsides --- for example, in the attached patch, it's necessary\nto change some polymorphism.sql examples that explicitly reference\narray_cat(anyarray). I wonder whether this change would break a\nsignificant number of user-defined aggregates or operators.\n\n(Note that I think we'd have to resist the temptation to fix that\nby letting CREATE AGGREGATE et al accept support functions that\ntake anyarray/anycompatiblearray (etc) interchangeably. A lot of\nthe security analysis that went into CVE-2020-14350 depended on\nthe assumption that utility commands only do exact lookups of\nsupport functions. 
If we tried to be lax about this, we'd\nre-introduce the possibility for hostile capture of function\nreferences in extension scripts.)\n\nAnother interesting issue, not seen in the attached but which\ncame up while I was experimenting with the more aggressive patch,\nwas this failure in the polymorphism test:\n\n select max(histogram_bounds) from pg_stats where tablename = 'pg_am';\n-ERROR: cannot compare arrays of different element types\n+ERROR: function max(anyarray) does not exist\n\nThat's because we don't accept pg_stats.histogram_bounds (of\ndeclared type anyarray) as a valid input for anycompatiblearray.\nI wonder if that isn't a bug we need to fix in the anycompatible\npatch, independently of anything else. We'd not be able to deduce\nan actual element type from such an input, but we already cannot\ndo so when we match it to an anyarray parameter.\n\nAnyway, attached find\n\n0001 - updates Vik's original patch to HEAD and tweaks the\ngrammar in the docs a bit.\n\n0002 - add-on patch to convert array_append, array_prepend,\narray_cat, array_position, array_positions, array_remove,\narray_replace, and width_bucket to use anycompatiblearray.\n\nI think 0001 is committable, but 0002 is just WIP since\nI didn't touch the docs. I'm slightly discouraged about\nwhether 0002 is worth proceeding with. Any thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 21 Sep 2020 20:33:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "út 22. 9. 2020 v 2:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I see few possibilities how to finish and close this issue:\n> > 1. 
use anycompatible type and add note to documentation so expression of\n> > optional argument can change a result type, and so this is Postgres\n> > specific - other databases and ANSI SQL disallow this.\n> > It is the most simple solution, and it is good enough for this case, that\n> > is not extra important.\n> > 2. choose the correct type somewhere inside the parser - for these two\n> > functions.\n> > 3. introduce and implement some generic solution for this case - it can\n> be\n> > implemented on C level via some function helper or on syntax level\n> > CREATE OR REPLACE FUNCTION lag(a anyelement, offset int, default\n> defval\n> > a%type) ...\n> > \"defval argname%type\" means for caller's expression \"CAST(defval AS\n> > typeof(argname))\"\n>\n> I continue to feel that the spec's definition of this is not so\n> obviously right that we should jump through hoops to duplicate it.\n> In fact, I don't even agree that we need a disclaimer in the docs\n> saying that it's not quite the same. Only the most nitpicky\n> language lawyers would ever even notice.\n>\n> In hopes of moving this along a bit, I experimented with converting\n> the other functions I listed to use anycompatible. I soon found that\n> touching any functions/operators that are listed in operator classes\n> would open a can of worms far larger than I'd previously supposed.\n> To maintain consistency, we'd have to propagate the datatype changes\n> to *every* function/operator associated with those opfamilies --- many\n> of which only have one any-foo input and thus aren't on my original\n> list. (An example here is that extending btree array_ops will require\n> changing the max(anyarray) and min(anyarray) aggregates too.) 
We'd\n> then end up with a situation that would be rather confusing from a\n> user's standpoint, in that it'd be quite un-obvious why some array\n> functions use anyarray while other ones use anycompatiblearray.\n>\n> So I backed off to just changing the functions/operators that have\n> no opclass connections, such as array_cat. Even that has some\n> downsides --- for example, in the attached patch, it's necessary\n> to change some polymorphism.sql examples that explicitly reference\n> array_cat(anyarray). I wonder whether this change would break a\n> significant number of user-defined aggregates or operators.\n>\n> (Note that I think we'd have to resist the temptation to fix that\n> by letting CREATE AGGREGATE et al accept support functions that\n> take anyarray/anycompatiblearray (etc) interchangeably. A lot of\n> the security analysis that went into CVE-2020-14350 depended on\n> the assumption that utility commands only do exact lookups of\n> support functions. If we tried to be lax about this, we'd\n> re-introduce the possibility for hostile capture of function\n> references in extension scripts.)\n>\n> Another interesting issue, not seen in the attached but which\n> came up while I was experimenting with the more aggressive patch,\n> was this failure in the polymorphism test:\n>\n> select max(histogram_bounds) from pg_stats where tablename = 'pg_am';\n> -ERROR: cannot compare arrays of different element types\n> +ERROR: function max(anyarray) does not exist\n>\n> That's because we don't accept pg_stats.histogram_bounds (of\n> declared type anyarray) as a valid input for anycompatiblearray.\n> I wonder if that isn't a bug we need to fix in the anycompatible\n> patch, independently of anything else. 
We'd not be able to deduce\n> an actual element type from such an input, but we already cannot\n> do so when we match it to an anyarray parameter.\n>\n> Anyway, attached find\n>\n> 0001 - updates Vik's original patch to HEAD and tweaks the\n> grammar in the docs a bit.\n>\n> 0002 - add-on patch to convert array_append, array_prepend,\n> array_cat, array_position, array_positions, array_remove,\n> array_replace, and width_bucket to use anycompatiblearray.\n>\n> I think 0001 is committable, but 0002 is just WIP since\n> I didn't touch the docs. I'm slightly discouraged about\n> whether 0002 is worth proceeding with. Any thoughts?\n>\n\nI think so 0002 has sense - more than doc I miss related regress tests, but\nit is partially covered by anycompatible tests\n\nAnyway I tested both patches and there is not problem with compilation,\nbuilding doc, and make check-world passed\n\nI'll mark this patch as ready for committer\n\nBest regards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 24 Sep 2020 21:34:03 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> út 22. 9. 2020 v 2:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Anyway, attached find\n>> 0001 - updates Vik's original patch to HEAD and tweaks the\n>> grammar in the docs a bit.\n>> 0002 - add-on patch to convert array_append, array_prepend,\n>> array_cat, array_position, array_positions, array_remove,\n>> array_replace, and width_bucket to use anycompatiblearray.\n>> I think 0001 is committable, but 0002 is just WIP since\n>> I didn't touch the docs.  I'm slightly discouraged about\n>> whether 0002 is worth proceeding with.  Any thoughts?\n\n> I think so 0002 has sense - more than doc I miss related regress tests, but\n> it is partially covered by anycompatible tests\n\nI didn't see any need for particularly exhaustive testing, but\nI did add one new test for an operator and one for a function.\nPushed with that and the necessary docs work.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 04 Nov 2020 16:12:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" }, { "msg_contents": "st 4. 11. 2020 v 22:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > út 22. 9. 
2020 v 2:33 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Anyway, attached find\n> >> 0001 - updates Vik's original patch to HEAD and tweaks the\n> >> grammar in the docs a bit.\n> >> 0002 - add-on patch to convert array_append, array_prepend,\n> >> array_cat, array_position, array_positions, array_remove,\n> >> array_replace, and width_bucket to use anycompatiblearray.\n> >> I think 0001 is committable, but 0002 is just WIP since\n> >> I didn't touch the docs. I'm slightly discouraged about\n> >> whether 0002 is worth proceeding with. Any thoughts?\n>\n> > I think so 0002 has sense - more than doc I miss related regress tests,\n> but\n> > it is partially covered by anycompatible tests\n>\n> I didn't see any need for particularly exhaustive testing, but\n> I did add one new test for an operator and one for a function.\n> Pushed with that and the necessary docs work.\n>\n\nok, Thank you\n\nPavel\n\n\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 4 Nov 2020 23:00:17 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Compatible defaults for LEAD/LAG" } ]
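The behavior this thread converged on — lag()/lead() defaults declared with anycompatible, so that a default of a wider type widens the window function's result type instead of failing to coerce — can be modeled in a few lines. The sketch below is illustrative Python, not PostgreSQL's parser logic; the promotion table is a made-up stand-in for SQL's numeric type hierarchy:

```python
# Toy model of anycompatible-style resolution for lag(value, offset, default).
# The value column and the default are unified to one common type, which
# becomes the result type (hypothetical lattice, not PostgreSQL's real rules).
PROMOTION = {"int": 0, "bigint": 1, "numeric": 2}

def common_type(*types):
    """Return the most general of the given types under the toy lattice."""
    return max(types, key=lambda t: PROMOTION[t])

def lag(values, offset=1, default=None, value_type="int", default_type="int"):
    """lag() over a list: rows with no predecessor yield `default`.

    Returns (rows, result_type); the result type is the common type of
    the column and the default, mirroring the anycompatible signature.
    """
    rows = [values[i - offset] if i - offset >= 0 else default
            for i in range(len(values))]
    return rows, common_type(value_type, default_type)
```

In SQL terms (per the committed patch), `lag(int_col, 1, 2.5)` now resolves both the column and the default to numeric, where the old anyelement signature rejected the non-integer default.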
[ { "msg_contents": "Hi, hackers!\n\nCurrently i see, COPY FROM insertion into the partitioned table with \nforeign partitions is not optimal: even if table constraints allows can \ndo multi insert copy, we will flush the buffers and prepare new INSERT \nquery for each tuple, routed into the foreign partition.\nTo solve this problem i tried to use the multi insert buffers for \nforeign tuples too. Flushing of these buffers performs by the analogy \nwith 'COPY .. FROM STDIN' machinery as it is done by the psql '\\copy' \ncommand.\nThe patch in attachment was prepared from the private scratch developed \nby Arseny Sher a couple of years ago.\nBenchmarks shows that it speeds up COPY FROM operation:\nCommand \"COPY pgbench_accounts FROM ...\" (test file contains 1e7 tuples, \ncopy to three partitions) executes on my laptop in 14 minutes without \nthe patch and in 1.5 minutes with the patch. Theoretical minimum here \n(with infinite buffer size) is 40 seconds.\n\nA couple of questions:\n1. Can this feature be interesting for the PostgreSQL core or not?\n2. If this is a useful feature, is the correct way chosen?\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 1 Jun 2020 14:29:23 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "[POC] Fast COPY FROM command for the table with foreign partitions" }, { "msg_contents": "Hi Andrey,\n\nOn Mon, Jun 1, 2020 at 6:29 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Currently i see, COPY FROM insertion into the partitioned table with\n> foreign partitions is not optimal: even if table constraints allows can\n> do multi insert copy, we will flush the buffers and prepare new INSERT\n> query for each tuple, routed into the foreign partition.\n> To solve this problem i tried to use the multi insert buffers for\n> foreign tuples too. Flushing of these buffers performs by the analogy\n> with 'COPY .. 
FROM STDIN' machinery as it is done by the psql '\\copy'\n> command.\n> The patch in attachment was prepared from the private scratch developed\n> by Arseny Sher a couple of years ago.\n> Benchmarks shows that it speeds up COPY FROM operation:\n> Command \"COPY pgbench_accounts FROM ...\" (test file contains 1e7 tuples,\n> copy to three partitions) executes on my laptop in 14 minutes without\n> the patch and in 1.5 minutes with the patch. Theoretical minimum here\n> (with infinite buffer size) is 40 seconds.\n\nGreat!\n\n> A couple of questions:\n> 1. Can this feature be interesting for the PostgreSQL core or not?\n\nYeah, I think this is especially useful for sharding.\n\n> 2. If this is a useful feature, is the correct way chosen?\n\nI think I also thought something similar to this before [1]. Will take a look.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/23990375-45a6-5823-b0aa-a6a7a6a957f0%40lab.ntt.co.jp\n\n\n", "msg_date": "Tue, 2 Jun 2020 09:02:10 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Thank you for the answer,\n\n02.06.2020 05:02, Etsuro Fujita пишет:\n> I think I also thought something similar to this before [1]. Will take a look.\n\n> [1] https://www.postgresql.org/message-id/23990375-45a6-5823-b0aa-a6a7a6a957f0%40lab.ntt.co.jp\n> \nI have looked into the thread.\nMy first version of the patch was like your idea. But when developing \nthe “COPY FROM” code, the following features were discovered:\n1. Two or more partitions can be placed at the same node. We need to \nfinish COPY into one partition before start COPY into another partition \nat the same node.\n2. On any error we need to send EOF to all started \"COPY .. FROM STDIN\" \noperations. 
Otherwise FDW can't cancel operation.\n\nHiding the COPY code under the buffers management machinery allows us to \ngeneralize buffers machinery, execute one COPY operation on each buffer \nand simplify error handling.\n\nAs i understand, main idea of the thread, mentioned by you, is to add \n\"COPY FROM\" support without changes in FDW API.\nIt is possible to remove BeginForeignCopy() and EndForeignCopy() from \nthe patch. But it is not trivial to change ExecForeignInsert() for the \nCOPY purposes.\nAll that I can offer in this place now is to introduce one new \nExecForeignBulkInsert(buf) routine that will execute single \"COPY FROM \nSTDIN\" operation, send tuples and close the operation. We can use the \nExecForeignInsert() routine for each buffer tuple if \nExecForeignBulkInsert() is not supported.\n\nOne of main questions here is to use COPY TO machinery for serializing a \ntuple. It is needed (if you will take a look into the patch) to \ntransform the CopyTo() routine to an iterative representation: \nstart/next/finish. May it be acceptable?\n\nIn the attachment there is a patch with the correction of a stupid error.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 2 Jun 2020 10:51:22 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Thanks Andrey for the patch. I am glad that the patch has taken care\nof some corner cases already but there exist still more.\n\nCOPY command constructed doesn't take care of dropped columns. There\nis code in deparseAnalyzeSql which constructs list of columns for a\ngiven foreign relation. 0002 patch attached here, moves that code to a\nseparate function and reuses it for COPY. If you find that code change\nuseful please include it in the main patch.\n\nWhile working on that, I found two issues\n1. 
The COPY command constructed an empty columns list when there were\nno non-dropped columns in the relation. This caused a syntax error.\nFixed that in 0002.\n2. In the same case, if the foreign table declared locally didn't have\nany non-dropped columns but the relation that it referred to on the\nforeign server had some non-dropped columns, COPY command fails. I\nadded a test case for this in 0002 but haven't fixed it.\n\nI think this work is useful. Please add it to the next commitfest so\nthat it's tracked.\n\nOn Tue, Jun 2, 2020 at 11:21 AM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n>\n> Thank you for the answer,\n>\n> 02.06.2020 05:02, Etsuro Fujita пишет:\n> > I think I also thought something similar to this before [1]. Will take a look.\n>\n> > [1] https://www.postgresql.org/message-id/23990375-45a6-5823-b0aa-a6a7a6a957f0%40lab.ntt.co.jp\n> >\n> I have looked into the thread.\n> My first version of the patch was like your idea. But when developing\n> the “COPY FROM” code, the following features were discovered:\n> 1. Two or more partitions can be placed at the same node. We need to\n> finish COPY into one partition before start COPY into another partition\n> at the same node.\n> 2. On any error we need to send EOF to all started \"COPY .. FROM STDIN\"\n> operations. Otherwise FDW can't cancel operation.\n>\n> Hiding the COPY code under the buffers management machinery allows us to\n> generalize buffers machinery, execute one COPY operation on each buffer\n> and simplify error handling.\n>\n> As i understand, main idea of the thread, mentioned by you, is to add\n> \"COPY FROM\" support without changes in FDW API.\n> It is possible to remove BeginForeignCopy() and EndForeignCopy() from\n> the patch. 
But it is not trivial to change ExecForeignInsert() for the\n> COPY purposes.\n> All that I can offer in this place now is to introduce one new\n> ExecForeignBulkInsert(buf) routine that will execute single \"COPY FROM\n> STDIN\" operation, send tuples and close the operation. We can use the\n> ExecForeignInsert() routine for each buffer tuple if\n> ExecForeignBulkInsert() is not supported.\n>\n> One of main questions here is to use COPY TO machinery for serializing a\n> tuple. It is needed (if you will take a look into the patch) to\n> transform the CopyTo() routine to an iterative representation:\n> start/next/finish. May it be acceptable?\n>\n> In the attachment there is a patch with the correction of a stupid error.\n>\n> --\n> Andrey Lepikhov\n> Postgres Professional\n> https://postgrespro.com\n> The Russian Postgres Company\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 15 Jun 2020 10:56:12 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 6/15/20 10:26 AM, Ashutosh Bapat wrote:\n> Thanks Andrey for the patch. I am glad that the patch has taken care\n> of some corner cases already but there exist still more.\n> \n> COPY command constructed doesn't take care of dropped columns. There\n> is code in deparseAnalyzeSql which constructs list of columns for a\n> given foreign relation. 0002 patch attached here, moves that code to a\n> separate function and reuses it for COPY. If you find that code change\n> useful please include it in the main patch.\n\nThanks, i included it.\n\n> 2. In the same case, if the foreign table declared locally didn't have\n> any non-dropped columns but the relation that it referred to on the\n> foreign server had some non-dropped columns, COPY command fails. I\n> added a test case for this in 0002 but haven't fixed it.\n\nI fixed it.\nThis is very special corner case. 
The problem was that COPY FROM does \nnot support semantics like the \"INSERT INTO .. DEFAULT VALUES\". To \nsimplify the solution, i switched off bulk copying for this case.\n\n > I think this work is useful. Please add it to the next commitfest so\n > that it's tracked.\nOk.\n\n-- \nAndrey Lepikhov\nPostgres Professional\nhttps://postgrespro.com", "msg_date": "Wed, 17 Jun 2020 11:24:48 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Tue, Jun 2, 2020 at 2:51 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> 02.06.2020 05:02, Etsuro Fujita пишет:\n> > I think I also thought something similar to this before [1]. Will take a look.\n\nI'm still reviewing the patch, but let me comment on it.\n\n> > [1] https://www.postgresql.org/message-id/23990375-45a6-5823-b0aa-a6a7a6a957f0%40lab.ntt.co.jp\n\n> I have looked into the thread.\n> My first version of the patch was like your idea. But when developing\n> the “COPY FROM” code, the following features were discovered:\n> 1. Two or more partitions can be placed at the same node. We need to\n> finish COPY into one partition before start COPY into another partition\n> at the same node.\n> 2. On any error we need to send EOF to all started \"COPY .. FROM STDIN\"\n> operations. 
Otherwise FDW can't cancel operation.\n>\n> Hiding the COPY code under the buffers management machinery allows us to\n> generalize buffers machinery, execute one COPY operation on each buffer\n> and simplify error handling.\n\nI'm not sure that it's really a good idea that the bulk-insert API is\ndesigned the way it's tightly coupled with the bulk-insert machinery\nin the core, because 1) some FDWs might want to send tuples provided\nby the core to the remote, one by one, without storing them in a\nbuffer, or 2) some other FDWs might want to store the tuples in the\nbuffer and send them in a lump as postgres_fdw in the proposed patch\nbut might want to do so independently of MAX_BUFFERED_TUPLES and/or\nMAX_BUFFERED_BYTES defined in the bulk-insert machinery.\n\nI agree that we would need special handling for cases you mentioned\nabove if we design this API based on something like the idea I\nproposed in that thread.\n\n> As i understand, main idea of the thread, mentioned by you, is to add\n> \"COPY FROM\" support without changes in FDW API.\n\nI don't think so; I think we should introduce new API for this feature\nto keep the ExecForeignInsert() API simple.\n\n> All that I can offer in this place now is to introduce one new\n> ExecForeignBulkInsert(buf) routine that will execute single \"COPY FROM\n> STDIN\" operation, send tuples and close the operation. We can use the\n> ExecForeignInsert() routine for each buffer tuple if\n> ExecForeignBulkInsert() is not supported.\n\nAgreed.\n\n> One of main questions here is to use COPY TO machinery for serializing a\n> tuple. It is needed (if you will take a look into the patch) to\n> transform the CopyTo() routine to an iterative representation:\n> start/next/finish. 
May it be acceptable?\n\n+1 for the general idea.\n\n> In the attachment there is a patch with the correction of a stupid error.\n\nThanks for the patch!\n\nSorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 19 Jun 2020 23:58:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "19.06.2020 19:58, Etsuro Fujita пишет:\n> On Tue, Jun 2, 2020 at 2:51 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> Hiding the COPY code under the buffers management machinery allows us to\n>> generalize buffers machinery, execute one COPY operation on each buffer\n>> and simplify error handling.\n> \n> I'm not sure that it's really a good idea that the bulk-insert API is\n> designed the way it's tightly coupled with the bulk-insert machinery\n> in the core, because 1) some FDWs might want to send tuples provided\n> by the core to the remote, one by one, without storing them in a\n> buffer, or 2) some other FDWs might want to store the tuples in the\n> buffer and send them in a lump as postgres_fdw in the proposed patch\n> but might want to do so independently of MAX_BUFFERED_TUPLES and/or\n> MAX_BUFFERED_BYTES defined in the bulk-insert machinery.\n> \n> I agree that we would need special handling for cases you mentioned\n> above if we design this API based on something like the idea I\n> proposed in that thread.\nAgreed\n> \n>> As i understand, main idea of the thread, mentioned by you, is to add\n>> \"COPY FROM\" support without changes in FDW API.\n> \n> I don't think so; I think we should introduce new API for this feature\n> to keep the ExecForeignInsert() API simple.\nOk\n> \n>> All that I can offer in this place now is to introduce one new\n>> ExecForeignBulkInsert(buf) routine that will execute single \"COPY FROM\n>> STDIN\" operation, send tuples and close the operation. 
We can use the\n>> ExecForeignInsert() routine for each buffer tuple if\n>> ExecForeignBulkInsert() is not supported.\n> \n> Agreed.\nIn the next version (see attachment) of the patch i removed Begin/End \nfdwapi routines. Now we have only the ExecForeignBulkInsert() routine.\n\n-- \nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 22 Jun 2020 11:13:30 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Wed, 17 Jun 2020 at 11:54, Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 6/15/20 10:26 AM, Ashutosh Bapat wrote:\n> > Thanks Andrey for the patch. I am glad that the patch has taken care\n> > of some corner cases already but there exist still more.\n> >\n> > COPY command constructed doesn't take care of dropped columns. There\n> > is code in deparseAnalyzeSql which constructs list of columns for a\n> > given foreign relation. 0002 patch attached here, moves that code to a\n> > separate function and reuses it for COPY. If you find that code change\n> > useful please include it in the main patch.\n>\n> Thanks, i included it.\n>\n> > 2. In the same case, if the foreign table declared locally didn't have\n> > any non-dropped columns but the relation that it referred to on the\n> > foreign server had some non-dropped columns, COPY command fails. I\n> > added a test case for this in 0002 but haven't fixed it.\n>\n> I fixed it.\n> This is very special corner case. The problem was that COPY FROM does\n> not support semantics like the \"INSERT INTO .. DEFAULT VALUES\". To\n> simplify the solution, i switched off bulk copying for this case.\n>\n> > I think this work is useful. 
Please add it to the next commitfest so\n> > that it's tracked.\n> Ok.\n>\n\nIt looks like we call BeginForeignInsert and EndForeignInsert even though\nactual copy is performed using BeginForeignCopy, ExecForeignCopy\nand EndForeignCopy. BeginForeignInsert constructs the INSERT query which\nlooks unnecessary. Also some of the other PgFdwModifyState members are\ninitialized unnecessarily. It also gives an impression that we are using\nINSERT underneath the copy. Instead a better way would be to\ncall BeginForeignCopy instead of BeginForeignInsert and EndForeignCopy\ninstead of EndForeignInsert, if we are going to use COPY protocol to copy\ndata to the foreign server. Corresponding postgres_fdw implementations need\nto change in order to do that.\n\nThis isn't a full review. I will continue reviewing this patch further.\n-- \nBest Wishes,\nAshutosh\n", "msg_date": "Mon, 22 Jun 2020 17:41:05 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 6/22/20 5:11 PM, Ashutosh Bapat wrote:\n\n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> It looks like we call BeginForeignInsert and EndForeignInsert even \n> though actual copy is performed using BeginForeignCopy, ExecForeignCopy \n> and EndForeignCopy. BeginForeignInsert constructs the INSERT query which \n> looks unnecessary. Also some of the other PgFdwModifyState members are \n> initialized unnecessarily. It also gives an impression that we are using \n> INSERT underneath the copy. Instead a better way would be to \n> call BeginForeignCopy instead of BeginForeignInsert and EndForeignCopy \n> instead of EndForeignInsert, if we are going to use COPY protocol to \n> copy data to the foreign server. 
Corresponding postgres_fdw \n> implementations need to change in order to do that.\n> \nI did not answer for a long time, because of waiting for the results of \nthe discussion on Tomas approach to bulk INSERT/UPDATE/DELETE. It seems \nmore general.\nI can move the query construction into the first execution of INSERT or \nCOPY operation. But another changes seems more invasive because \nBeginForeignInsert/EndForeignInsert are used in the execPartition.c \nmodule. We will need to pass copy/insert state of operation into \nExecFindPartition() and ExecCleanupTupleRouting().\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 3 Jul 2020 10:15:46 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "22.06.2020 17:11, Ashutosh Bapat пишет:\n> \n> \n> On Wed, 17 Jun 2020 at 11:54, Andrey V. Lepikhov \n> <a.lepikhov@postgrespro.ru <mailto:a.lepikhov@postgrespro.ru>> wrote:\n> \n> On 6/15/20 10:26 AM, Ashutosh Bapat wrote:\n> > Thanks Andrey for the patch. I am glad that the patch has taken care\n> > of some corner cases already but there exist still more.\n> >\n> > COPY command constructed doesn't take care of dropped columns. There\n> > is code in deparseAnalyzeSql which constructs list of columns for a\n> > given foreign relation. 0002 patch attached here, moves that code\n> to a\n> > separate function and reuses it for COPY. If you find that code\n> change\n> > useful please include it in the main patch.\n> \n> Thanks, i included it.\n> \n> > 2. In the same case, if the foreign table declared locally didn't\n> have\n> > any non-dropped columns but the relation that it referred to on the\n> > foreign server had some non-dropped columns, COPY command fails. I\n> > added a test case for this in 0002 but haven't fixed it.\n> \n> I fixed it.\n> This is very special corner case. 
The problem was that COPY FROM does\n> not support semantics like the \"INSERT INTO .. DEFAULT VALUES\". To\n> simplify the solution, i switched off bulk copying for this case.\n> \n>  > I think this work is useful. Please add it to the next commitfest so\n>  > that it's tracked.\n> Ok.\n> \n> \n> It looks like we call BeginForeignInsert and EndForeignInsert even \n> though actual copy is performed using BeginForeignCopy, ExecForeignCopy \n> and EndForeignCopy. BeginForeignInsert constructs the INSERT query which \n> looks unnecessary. Also some of the other PgFdwModifyState members are \n> initialized unnecessarily. It also gives an impression that we are using \n> INSERT underneath the copy. Instead a better way would be to \n> call BeginForeignCopy instead of BeginForeignInsert and EndForeignCopy \n> instead of EndForeignInsert, if we are going to use COPY protocol to \n> copy data to the foreign server. Corresponding postgres_fdw \n> implementations need to change in order to do that.\nFixed.\nI replaced names of CopyIn FDW API. Also the partition routing \ninitializer calls BeginForeignInsert or BeginForeignCopyIn routines in \naccordance with value of ResultRelInfo::UseBulkModifying.\nI introduced this parameter because foreign partitions can be placed at \nforeign servers with different types of foreign wrapper. Not all \nwrappers can support CopyIn API.\nAlso I ran the Tomas Vondra benchmark. At my laptop we have results:\n* regular: 5000 ms.\n* Tomas buffering patch: 11000 ms.\n* This CopyIn patch: 8000 ms.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Sun, 12 Jul 2020 22:46:16 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nThanks for this work. 
I have been reading through your patch and\nhere's a what I understand it does and how:\n\nThe patch aims to fix the restriction that COPYing into a foreign\ntable can't use multi-insert buffer mechanism effectively. That's\nbecause copy.c currently uses the ExecForeignInsert() FDW API which\ncan be passed only 1 row at a time. postgres_fdw's implementation\nissues an `INSERT INTO remote_table VALUES (...)` statement to the\nremote side for each row, which is pretty inefficient for bulk loads.\nThe patch introduces a new FDW API ExecForeignCopyIn() that can\nreceive multiple rows and copy.c now calls it every time it flushes\nthe multi-insert buffer so that all the flushed rows can be sent to\nthe remote side in one go. postgres_fdw's now issues a `COPY\nremote_table FROM STDIN` to the remote server and\npostgresExecForeignCopyIn() funnels the tuples flushed by the local\ncopy to the server side waiting for tuples on the COPY protocol.\n\nHere are some comments on the patch.\n\n* Why the \"In\" in these API names?\n\n+ /* COPY a bulk of tuples into a foreign relation */\n+ BeginForeignCopyIn_function BeginForeignCopyIn;\n+ EndForeignCopyIn_function EndForeignCopyIn;\n+ ExecForeignCopyIn_function ExecForeignCopyIn;\n\n* fdwhandler.sgml should be updated with the description of these new APIs.\n\n* As far as I can tell, the following copy.h additions are for an FDW\nto use copy.c to obtain an external representation (char string) to\nsend to the remote side of the individual rows that are passed to\nExecForeignCopyIn():\n\n+typedef void (*copy_data_dest_cb) (void *outbuf, int len);\n+extern CopyState BeginForeignCopyTo(Relation rel);\n+extern char *NextForeignCopyRow(CopyState cstate, TupleTableSlot *slot);\n+extern void EndForeignCopyTo(CopyState cstate);\n\nSo, an FDW's ExecForeignCopyIn() calls copy.c: NextForeignCopyRow()\nwhich in turn calls copy.c: CopyOneRowTo() which fills\nCopyState.fe_msgbuf. 
The data_dest_cb() callback that runs after\nfe_msgbuf contains the full row simply copies it into a palloc'd char\nbuffer whose pointer is returned back to ExecForeignCopyIn(). I\nwonder why not let FDWs implement the callback and pass it to copy.c\nthrough BeginForeignCopyTo()? For example, you could implement a\npgfdw_copy_data_dest_cb() in postgres_fdw.c which gets a direct\npointer of fe_msgbuf to send it to the remote server.\n\nDo you think all FDWs would want to use copy,c like above? If not,\nmaybe the above APIs are really postgres_fdw-specific? Anyway, adding\ncomments above the definitions of these functions would be helpful.\n\n* I see that the remote copy is performed from scratch on every call\nof postgresExecForeignCopyIn(), but wouldn't it be more efficient to\nsend the `COPY remote_table FROM STDIN` in\npostgresBeginForeignCopyIn() and end it in postgresEndForeignCopyIn()\nwhen there are no errors during the copy?\n\nI tried implementing these two changes -- pgfdw_copy_data_dest_cb()\nand sending `COPY remote_table FROM STDIN` only once instead of on\nevery flush -- and I see significant speedup. Please check the\nattached patch that applies on top of yours. One problem I spotted\nwhen trying my patch but didn't spend much time debugging is that\nlocal COPY cannot be interrupted by Ctrl+C anymore, but that should be\nfixable by adjusting PG_TRY() blocks.\n\n* ResultRelInfo.UseBulkModifying should be ri_usesBulkModify for consistency.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jul 2020 18:14:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 7/16/20 2:14 PM, Amit Langote wrote:\n> Hi Andrey,\n> \n> Thanks for this work. 
I have been reading through your patch and\n> here's a what I understand it does and how:\n> \n> The patch aims to fix the restriction that COPYing into a foreign\n> table can't use multi-insert buffer mechanism effectively. That's\n> because copy.c currently uses the ExecForeignInsert() FDW API which\n> can be passed only 1 row at a time. postgres_fdw's implementation\n> issues an `INSERT INTO remote_table VALUES (...)` statement to the\n> remote side for each row, which is pretty inefficient for bulk loads.\n> The patch introduces a new FDW API ExecForeignCopyIn() that can\n> receive multiple rows and copy.c now calls it every time it flushes\n> the multi-insert buffer so that all the flushed rows can be sent to\n> the remote side in one go. postgres_fdw's now issues a `COPY\n> remote_table FROM STDIN` to the remote server and\n> postgresExecForeignCopyIn() funnels the tuples flushed by the local\n> copy to the server side waiting for tuples on the COPY protocol.\n\nFine\n\n> Here are some comments on the patch.\n> \n> * Why the \"In\" in these API names?\n> \n> + /* COPY a bulk of tuples into a foreign relation */\n> + BeginForeignCopyIn_function BeginForeignCopyIn;\n> + EndForeignCopyIn_function EndForeignCopyIn;\n> + ExecForeignCopyIn_function ExecForeignCopyIn;\n\nI used an analogy from copy.c.\n\n> * fdwhandler.sgml should be updated with the description of these new APIs.\n\n\n> * As far as I can tell, the following copy.h additions are for an FDW\n> to use copy.c to obtain an external representation (char string) to\n> send to the remote side of the individual rows that are passed to\n> ExecForeignCopyIn():\n> \n> +typedef void (*copy_data_dest_cb) (void *outbuf, int len);\n> +extern CopyState BeginForeignCopyTo(Relation rel);\n> +extern char *NextForeignCopyRow(CopyState cstate, TupleTableSlot *slot);\n> +extern void EndForeignCopyTo(CopyState cstate);\n> \n> So, an FDW's ExecForeignCopyIn() calls copy.c: NextForeignCopyRow()\n> which in turn calls 
copy.c: CopyOneRowTo() which fills\n> CopyState.fe_msgbuf. The data_dest_cb() callback that runs after\n> fe_msgbuf contains the full row simply copies it into a palloc'd char\n> buffer whose pointer is returned back to ExecForeignCopyIn(). I\n> wonder why not let FDWs implement the callback and pass it to copy.c\n> through BeginForeignCopyTo()? For example, you could implement a\n> pgfdw_copy_data_dest_cb() in postgres_fdw.c which gets a direct\n> pointer of fe_msgbuf to send it to the remote server.\nIt is a good point! Thank you.\n\n> Do you think all FDWs would want to use copy,c like above? If not,\n> maybe the above APIs are really postgres_fdw-specific? Anyway, adding\n> comments above the definitions of these functions would be helpful.\nAgreed.\n> \n> * I see that the remote copy is performed from scratch on every call\n> of postgresExecForeignCopyIn(), but wouldn't it be more efficient to\n> send the `COPY remote_table FROM STDIN` in\n> postgresBeginForeignCopyIn() and end it in postgresEndForeignCopyIn()\n> when there are no errors during the copy?\nIt is not possible. FDWs share one connection between all foreign\nrelations from a server. If two or more partitions are placed at one\nforeign server, you will have problems with concurrent COPY commands.\nMaybe we can create a new connection for each partition?\n> \n> I tried implementing these two changes -- pgfdw_copy_data_dest_cb()\n> and sending `COPY remote_table FROM STDIN` only once instead of on\n> every flush -- and I see significant speedup. 
Please check the\n> attached patch that applies on top of yours.\nI integrated first change and rejected the second by the reason as above.\n One problem I spotted\n> when trying my patch but didn't spend much time debugging is that\n> local COPY cannot be interrupted by Ctrl+C anymore, but that should be\n> fixable by adjusting PG_TRY() blocks.\nThanks\n> \n> * ResultRelInfo.UseBulkModifying should be ri_usesBulkModify for consistency.\n+1\n\nI will post a new version of the patch a little bit later.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Wed, 22 Jul 2020 14:09:51 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 7/16/20 2:14 PM, Amit Langote wrote:\n> Amit Langote\n> EnterpriseDB: http://www.enterprisedb.com\n> \n\nVersion 5 of the patch. With changes caused by Amit's comments.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 23 Jul 2020 11:23:42 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nOn 2020-07-23 09:23, Andrey V. Lepikhov wrote:\n> On 7/16/20 2:14 PM, Amit Langote wrote:\n>> Amit Langote\n>> EnterpriseDB: http://www.enterprisedb.com\n>> \n> \n> Version 5 of the patch. With changes caused by Amit's comments.\n\nJust got a segfault with your v5 patch by deleting from a foreign table. 
\nHere is a part of backtrace:\n\n * frame #0: 0x00000001029069ec \npostgres`ExecShutdownForeignScan(node=0x00007ff28c8909b0) at \nnodeForeignscan.c:385:3\n frame #1: 0x00000001028e7b06 \npostgres`ExecShutdownNode(node=0x00007ff28c8909b0) at \nexecProcnode.c:779:4\n frame #2: 0x000000010299b3fa \npostgres`planstate_walk_members(planstates=0x00007ff28c8906d8, nplans=1, \nwalker=(postgres`ExecShutdownNode at execProcnode.c:752), \ncontext=0x0000000000000000) at nodeFuncs.c:3998:7\n frame #3: 0x000000010299b010 \npostgres`planstate_tree_walker(planstate=0x00007ff28c8904c0, \nwalker=(postgres`ExecShutdownNode at execProcnode.c:752), \ncontext=0x0000000000000000) at nodeFuncs.c:3914:8\n frame #4: 0x00000001028e7ab7 \npostgres`ExecShutdownNode(node=0x00007ff28c8904c0) at \nexecProcnode.c:771:2\n\n(lldb) f 0\nframe #0: 0x00000001029069ec \npostgres`ExecShutdownForeignScan(node=0x00007ff28c8909b0) at \nnodeForeignscan.c:385:3\n 382 \t\tFdwRoutine *fdwroutine = node->fdwroutine;\n 383\n 384 \t\tif (fdwroutine->ShutdownForeignScan)\n-> 385 \t\t\tfdwroutine->ShutdownForeignScan(node);\n 386 \t}\n(lldb) p node->fdwroutine->ShutdownForeignScan\n(ShutdownForeignScan_function) $1 = 0x7f7f7f7f7f7f7f7f\n\nIt seems that ShutdownForeignScan inside node->fdwroutine doesn't have a \ncorrect pointer to the required function.\n\nI haven't had a chance to look closer on the code, but you can easily \nreproduce this error with the attached script (patched Postgres binaries \nshould be available in the PATH). 
It works well with master and fails \nwith your patch applied.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company", "msg_date": "Mon, 27 Jul 2020 19:34:46 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "\n\n27.07.2020 21:34, Alexey Kondratov пишет:\n> Hi Andrey,\n> \n> On 2020-07-23 09:23, Andrey V. Lepikhov wrote:\n>> On 7/16/20 2:14 PM, Amit Langote wrote:\n>>> Amit Langote\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>>\n>>\n>> Version 5 of the patch. With changes caused by Amit's comments.\n> \n> Just got a segfault with your v5 patch by deleting from a foreign table. \n> It seems that ShutdownForeignScan inside node->fdwroutine doesn't have a \n> correct pointer to the required function.\n> \n> I haven't had a chance to look closer on the code, but you can easily \n> reproduce this error with the attached script (patched Postgres binaries \n> should be available in the PATH). It works well with master and fails \n> with your patch applied.\n\nI used master a3ab7a707d and v5 version of the patch with your script.\nNo errors found. Can you check your test case?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 28 Jul 2020 05:33:19 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2020-07-28 03:33, Andrey Lepikhov wrote:\n> 27.07.2020 21:34, Alexey Kondratov пишет:\n>> Hi Andrey,\n>> \n>> On 2020-07-23 09:23, Andrey V. Lepikhov wrote:\n>>> On 7/16/20 2:14 PM, Amit Langote wrote:\n>>>> Amit Langote\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>> \n>>> \n>>> Version 5 of the patch. 
With changes caused by Amit's comments.\n>> \n>> Just got a segfault with your v5 patch by deleting from a foreign \n>> table. It seems that ShutdownForeignScan inside node->fdwroutine \n>> doesn't have a correct pointer to the required function.\n>> \n>> I haven't had a chance to look closer on the code, but you can easily \n>> reproduce this error with the attached script (patched Postgres \n>> binaries should be available in the PATH). It works well with master \n>> and fails with your patch applied.\n> \n> I used master a3ab7a707d and v5 version of the patch with your script.\n> No errors found. Can you check your test case?\n\nYes, my bad. I forgot to re-install the postgres_fdw extension and only \ndid it for the postgres core, sorry for the noise.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 28 Jul 2020 10:54:22 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nThanks for updating the patch. I will try to take a look later.\n\nOn Wed, Jul 22, 2020 at 6:09 PM Andrey V. 
Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 7/16/20 2:14 PM, Amit Langote wrote:\n> > * Why the \"In\" in these API names?\n> >\n> > + /* COPY a bulk of tuples into a foreign relation */\n> > + BeginForeignCopyIn_function BeginForeignCopyIn;\n> > + EndForeignCopyIn_function EndForeignCopyIn;\n> > + ExecForeignCopyIn_function ExecForeignCopyIn;\n>\n> I used an analogy from copy.c.\n\nHmm, if we were going to also need *ForeignCopyOut APIs, maybe it\nmakes sense to have \"In\" here, but maybe we don't, so how about\nleaving out the \"In\" for clarity?\n\n> > * I see that the remote copy is performed from scratch on every call\n> > of postgresExecForeignCopyIn(), but wouldn't it be more efficient to\n> > send the `COPY remote_table FROM STDIN` in\n> > postgresBeginForeignCopyIn() and end it in postgresEndForeignCopyIn()\n> > when there are no errors during the copy?\n>\n> It is not possible. FDW share one connection between all foreign\n> relations from a server. If two or more partitions will be placed at one\n> foreign server you will have problems with concurrent COPY command.\n\nAh, you're right. I didn't consider multiple foreign partitions\npointing to the same server. Indeed, we would need separate\nconnections to a given server to COPY to multiple remote relations on\nthat server in parallel.\n\n> May be we can create new connection for each partition?\n\nYeah, perhaps, although it sounds like something that might be more\ngenerally useful and so we should work on that separately if at all.\n\n> > I tried implementing these two changes -- pgfdw_copy_data_dest_cb()\n> > and sending `COPY remote_table FROM STDIN` only once instead of on\n> > every flush -- and I see significant speedup. 
Please check the\n> > attached patch that applies on top of yours.\n>\n> I integrated first change and rejected the second by the reason as above.\n\nThanks.\n\nWill send more comments after reading the v5 patch.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 29 Jul 2020 17:03:02 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 7/29/20 1:03 PM, Amit Langote wrote:\n> Hi Andrey,\n> \n> Thanks for updating the patch. I will try to take a look later.\n> \n> On Wed, Jul 22, 2020 at 6:09 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 7/16/20 2:14 PM, Amit Langote wrote:\n>>> * Why the \"In\" in these API names?\n>>>\n>>> + /* COPY a bulk of tuples into a foreign relation */\n>>> + BeginForeignCopyIn_function BeginForeignCopyIn;\n>>> + EndForeignCopyIn_function EndForeignCopyIn;\n>>> + ExecForeignCopyIn_function ExecForeignCopyIn;\n>>\n>> I used an analogy from copy.c.\n> \n> Hmm, if we were going to also need *ForeignCopyOut APIs, maybe it\n> makes sense to have \"In\" here, but maybe we don't, so how about\n> leaving out the \"In\" for clarity?\nOk, sounds good.\n> \n>>> * I see that the remote copy is performed from scratch on every call\n>>> of postgresExecForeignCopyIn(), but wouldn't it be more efficient to\n>>> send the `COPY remote_table FROM STDIN` in\n>>> postgresBeginForeignCopyIn() and end it in postgresEndForeignCopyIn()\n>>> when there are no errors during the copy?\n>>\n>> It is not possible. FDW share one connection between all foreign\n>> relations from a server. If two or more partitions will be placed at one\n>> foreign server you will have problems with concurrent COPY command.\n> \n> Ah, you're right. I didn't consider multiple foreign partitions\n> pointing to the same server. 
Indeed, we would need separate\n> connections to a given server to COPY to multiple remote relations on\n> that server in parallel.\n> \n>> May be we can create new connection for each partition?\n> \n> Yeah, perhaps, although it sounds like something that might be more\n> generally useful and so we should work on that separately if at all.\nI will try to prepare a separate patch.\n> \n>>> I tried implementing these two changes -- pgfdw_copy_data_dest_cb()\n>>> and sending `COPY remote_table FROM STDIN` only once instead of on\n>>> every flush -- and I see significant speedup. Please check the\n>>> attached patch that applies on top of yours.\n>>\n>> I integrated first change and rejected the second by the reason as above.\n> \n> Thanks.\n> \n> Will send more comments after reading the v5 patch.\n> \nOk. I'll be waiting for the end of your review.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Wed, 29 Jul 2020 13:36:01 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nOn Wed, Jul 29, 2020 at 5:36 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> > Will send more comments after reading the v5 patch.\n> >\n> Ok. I'll be waiting for the end of your review.\n\nSorry about the late reply.\n\nIf you'd like to send a new version for other reviewers, please feel\nfree. 
I haven't managed to take more than a brief look at the v5\npatch, but will try to look at it (or maybe the new version if you\npost) more closely this week.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 Aug 2020 20:38:57 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Mon, Aug 3, 2020 at 8:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jul 29, 2020 at 5:36 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n> > > Will send more comments after reading the v5 patch.\n> > >\n> > Ok. I'll be waiting for the end of your review.\n>\n> Sorry about the late reply.\n>\n> If you'd like to send a new version for other reviewers, please feel\n> free. I haven't managed to take more than a brief look at the v5\n> patch, but will try to look at it (or maybe the new version if you\n> post) more closely this week.\n\nI was playing around with v5 and I noticed an assertion failure which\nI concluded is due to improper setting of ri_usesBulkModify. 
You can\nreproduce it with these steps.\n\ncreate extension postgres_fdw;\ncreate server lb foreign data wrapper postgres_fdw ;\ncreate user mapping for current_user server lb;\ncreate table foo (a int, b int) partition by list (a);\ncreate table foo1 (like foo);\ncreate foreign table ffoo1 partition of foo for values in (1) server\nlb options (table_name 'foo1');\ncreate table foo2 (like foo);\ncreate foreign table ffoo2 partition of foo for values in (2) server\nlb options (table_name 'foo2');\ncreate function print_new_row() returns trigger language plpgsql as $$\nbegin raise notice '%', new; return new; end; $$;\ncreate trigger ffoo1_br_trig before insert on ffoo1 for each row\nexecute function print_new_row();\ncopy foo from stdin csv;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1,2\n>> 2,3\n>> \\.\nNOTICE: (1,2)\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\n#0 0x00007f2d5e266337 in raise () from /lib64/libc.so.6\n#1 0x00007f2d5e267a28 in abort () from /lib64/libc.so.6\n#2 0x0000000000aafd5d in ExceptionalCondition\n(conditionName=0x7f2d37b468d0 \"!resultRelInfo->ri_usesBulkModify ||\nresultRelInfo->ri_FdwRoutine->BeginForeignCopyIn == NULL\",\n errorType=0x7f2d37b46680 \"FailedAssertion\",\nfileName=0x7f2d37b4677f \"postgres_fdw.c\", lineNumber=1863) at\nassert.c:67\n#3 0x00007f2d37b3b0fe in postgresExecForeignInsert (estate=0x2456320,\nresultRelInfo=0x23a8f58, slot=0x23a9480, planSlot=0x0) at\npostgres_fdw.c:1862\n#4 0x000000000066362a in CopyFrom (cstate=0x23a8d40) at copy.c:3331\n\nThe problem is that partition ffoo1's BR trigger prevents it from\nusing multi-insert, but its ResultRelInfo.ri_usesBulkModify is true,\nwhich is copied from its parent. 
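To make the failure mode concrete, here is a tiny standalone C model of the
flag inheritance (illustrative only: the struct and function names below are
invented for this sketch and are not the real PostgreSQL definitions):

```c
#include <stdbool.h>

/*
 * Invented stand-ins for ResultRelInfo and its multi-insert flag;
 * NOT the real PostgreSQL structures.
 */
typedef struct RelInfoModel
{
    bool has_br_insert_trigger; /* BEFORE ROW INSERT trigger present? */
    bool uses_multi_insert;     /* may tuples be buffered and sent in bulk? */
} RelInfoModel;

/*
 * Buggy initialization: the partition blindly inherits the parent's
 * flag, ignoring the partition's own properties.
 */
static void
init_partition_buggy(RelInfoModel *part, const RelInfoModel *parent)
{
    part->uses_multi_insert = parent->uses_multi_insert;
}

/*
 * Safer initialization: derive the flag from the partition's own
 * properties, applying the same rule to parent and partition alike.
 */
static void
init_partition_checked(RelInfoModel *part, const RelInfoModel *parent)
{
    (void) parent;              /* the parent's flag is irrelevant here */
    part->uses_multi_insert = !part->has_br_insert_trigger;
}
```

With the buggy version, a partition that has a BR insert trigger still ends
up with uses_multi_insert = true, which is exactly the inconsistent state the
assertion trips on.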
We should really check the same\nthings for a partition that CopyFrom() checks for the main target\nrelation (root parent) when deciding whether to use multi-insert.\n\nHowever instead of duplicating the same logic to do so in two places\n(CopyFrom and ExecInitPartitionInfo), I think it might be a good idea\nto refactor the code to decide if multi-insert mode can be used for a\ngiven relation by checking its properties and put it in some place\nthat both the main target relation and partitions need to invoke.\nInitResultRelInfo() seems to be one such place.\n\nAlso, it might be a good idea to use ri_usesBulkModify more generally\nthan only for foreign relations as the patch currently does, because I\ncan see that it can replace the variable insertMethod in CopyFrom().\nHaving both insertMethod and ri_usesBulkModify in each ResultRelInfo\nseems confusing and bug-prone.\n\nFinally, I suggest renaming ri_usesBulkModify to ri_usesMultiInsert to\nreflect its scope.\n\nPlease check the attached delta patch that applies on top of v5 to see\nwhat that would look like.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 7 Aug 2020 18:14:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 8/7/20 2:14 PM, Amit Langote wrote:\n> I was playing around with v5 and I noticed an assertion failure which\n> I concluded is due to improper setting of ri_usesBulkModify. 
You can\n> reproduce it with these steps.\n> \n> create extension postgres_fdw;\n> create server lb foreign data wrapper postgres_fdw ;\n> create user mapping for current_user server lb;\n> create table foo (a int, b int) partition by list (a);\n> create table foo1 (like foo);\n> create foreign table ffoo1 partition of foo for values in (1) server\n> lb options (table_name 'foo1');\n> create table foo2 (like foo);\n> create foreign table ffoo2 partition of foo for values in (2) server\n> lb options (table_name 'foo2');\n> create function print_new_row() returns trigger language plpgsql as $$\n> begin raise notice '%', new; return new; end; $$;\n> create trigger ffoo1_br_trig before insert on ffoo1 for each row\n> execute function print_new_row();\n> copy foo from stdin csv;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n>>> 1,2\n>>> 2,3\n>>> \\.\n> NOTICE: (1,2)\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> \n> #0 0x00007f2d5e266337 in raise () from /lib64/libc.so.6\n> #1 0x00007f2d5e267a28 in abort () from /lib64/libc.so.6\n> #2 0x0000000000aafd5d in ExceptionalCondition\n> (conditionName=0x7f2d37b468d0 \"!resultRelInfo->ri_usesBulkModify ||\n> resultRelInfo->ri_FdwRoutine->BeginForeignCopyIn == NULL\",\n> errorType=0x7f2d37b46680 \"FailedAssertion\",\n> fileName=0x7f2d37b4677f \"postgres_fdw.c\", lineNumber=1863) at\n> assert.c:67\n> #3 0x00007f2d37b3b0fe in postgresExecForeignInsert (estate=0x2456320,\n> resultRelInfo=0x23a8f58, slot=0x23a9480, planSlot=0x0) at\n> postgres_fdw.c:1862\n> #4 0x000000000066362a in CopyFrom (cstate=0x23a8d40) at copy.c:3331\n> \n> The problem is that partition ffoo1's BR trigger prevents it from\n> using multi-insert, but its ResultRelInfo.ri_usesBulkModify is true,\n> which is copied from its parent. 
We should really check the same\n> things for a partition that CopyFrom() checks for the main target\n> relation (root parent) when deciding whether to use multi-insert.\nThnx, I added a TAP test for this problem.\n> However instead of duplicating\n> the same logic to do so in two places\n> (CopyFrom and ExecInitPartitionInfo), I think it might be a good idea\n> to refactor the code to decide if multi-insert mode can be used for a\n> given relation by checking its properties and put it in some place\n> that both the main target relation and partitions need to invoke.\n> InitResultRelInfo() seems to be one such place.\n+1\n> \n> Also, it might be a good idea to use ri_usesBulkModify more generally\n> than only for foreign relations as the patch currently does, because I\n> can see that it can replace the variable insertMethod in CopyFrom().\n> Having both insertMethod and ri_usesBulkModify in each ResultRelInfo\n> seems confusing and bug-prone.\n> \n> Finally, I suggest renaming ri_usesBulkModify to ri_usesMultiInsert to\n> reflect its scope.\n> \n> Please check the attached delta patch that applies on top of v5 to see\n> what that would look like.\nI merged your delta patch (see v6 in the attachment) into the main patch.\nCurrently it seems more committable than before.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 21 Aug 2020 17:19:06 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nOn Fri, Aug 21, 2020 at 9:19 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 8/7/20 2:14 PM, Amit Langote wrote:\n> > I was playing around with v5 and I noticed an assertion failure which\n> > I concluded is due to improper setting of ri_usesBulkModify. 
You can\n> > reproduce it with these steps.\n> >\n> > create extension postgres_fdw;\n> > create server lb foreign data wrapper postgres_fdw ;\n> > create user mapping for current_user server lb;\n> > create table foo (a int, b int) partition by list (a);\n> > create table foo1 (like foo);\n> > create foreign table ffoo1 partition of foo for values in (1) server\n> > lb options (table_name 'foo1');\n> > create table foo2 (like foo);\n> > create foreign table ffoo2 partition of foo for values in (2) server\n> > lb options (table_name 'foo2');\n> > create function print_new_row() returns trigger language plpgsql as $$\n> > begin raise notice '%', new; return new; end; $$;\n> > create trigger ffoo1_br_trig before insert on ffoo1 for each row\n> > execute function print_new_row();\n> > copy foo from stdin csv;\n> > Enter data to be copied followed by a newline.\n> > End with a backslash and a period on a line by itself, or an EOF signal.\n> >>> 1,2\n> >>> 2,3\n> >>> \\.\n> > NOTICE: (1,2)\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n>\n> Thnx, I added TAP-test on this problem> However instead of duplicating\n> the same logic to do so in two places\n\nGood call.\n\n> > (CopyFrom and ExecInitPartitionInfo), I think it might be a good idea\n> > to refactor the code to decide if multi-insert mode can be used for a\n> > given relation by checking its properties and put it in some place\n> > that both the main target relation and partitions need to invoke.\n> > InitResultRelInfo() seems to be one such place.\n> +1\n> >\n> > Also, it might be a good idea to use ri_usesBulkModify more generally\n> > than only for foreign relations as the patch currently does, because I\n> > can see that it can replace the variable insertMethod in CopyFrom().\n> > Having both insertMethod and ri_usesBulkModify in each ResultRelInfo\n> > seems confusing and bug-prone.\n> >\n> > Finally, I 
suggest renaming ri_usesBulkModify to ri_usesMultiInsert to\n> > reflect its scope.\n> >\n> > Please check the attached delta patch that applies on top of v5 to see\n> > what that would look like.\n>\n> I merged your delta patch (see v6 in attachment) to the main patch.\n> Currently it seems more commitable than before.\n\nThanks for accepting the changes.\n\nActually, I was thinking that it might be better to make the patch that\nreplaces the CopyInsertMethod enum with ri_usesMultiInsert separate from\nthe rest, as I see it as independent refactoring. Attached\nis how the division would look.\n\nI would\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 24 Aug 2020 16:18:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Mon, Aug 24, 2020 at 4:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I would\n\nOops, thought I'd continue writing, but hit send before actually doing\nthat. Please ignore.\n\nI have some comments on v6, which I will share later this week.\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Aug 2020 18:19:28 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Mon, Aug 24, 2020 at 06:19:28PM +0900, Amit Langote wrote:\n> On Mon, Aug 24, 2020 at 4:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I would\n> \n> Oops, thought I'd continue writing, but hit send before actually doing\n> that. Please ignore.\n> \n> I have some comments on v6, which I will share later this week.\n\nWhile on it, the CF bot is telling that the documentation of the patch\nfails to compile. 
This needs to be fixed.\n--\nMichael", "msg_date": "Mon, 7 Sep 2020 16:26:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 9/7/20 12:26 PM, Michael Paquier wrote:\n> On Mon, Aug 24, 2020 at 06:19:28PM +0900, Amit Langote wrote:\n>> On Mon, Aug 24, 2020 at 4:18 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> I would\n>>\n>> Oops, thought I'd continue writing, but hit send before actually doing\n>> that. Please ignore.\n>>\n>> I have some comments on v6, which I will share later this week.\n> \n> While on it, the CF bot is telling that the documentation of the patch\n> fails to compile. This needs to be fixed.\n> --\n> Michael\n> \nv.7 (in attachment) fixes this problem.\nI also accepted Amit's suggestion to rename all fdwapi routines such as \nForeignCopyIn to *ForeignCopy.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 7 Sep 2020 15:31:09 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nOn Mon, Sep 7, 2020 at 7:31 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 9/7/20 12:26 PM, Michael Paquier wrote:\n> > While on it, the CF bot is telling that the documentation of the patch\n> > fails to compile. 
This needs to be fixed.\n> > --\n> > Michael\n> >\n> v.7 (in attachment) fixes this problem.\n> I also accepted Amit's suggestion to rename all fdwapi routines such as\n> ForeignCopyIn to *ForeignCopy.\n\nAny thoughts on the taking out the refactoring changes out of the main\npatch as I suggested?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Sep 2020 16:34:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\n\nI've started doing a review of v7 yesterday.\n\nOn 2020-09-08 10:34, Amit Langote wrote:\n> On Mon, Sep 7, 2020 at 7:31 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> >\n>> v.7 (in attachment) fixes this problem.\n>> I also accepted Amit's suggestion to rename all fdwapi routines such \n>> as\n>> ForeignCopyIn to *ForeignCopy.\n> \n\nIt seems that naming is quite inconsistent now:\n\n+\t/* COPY a bulk of tuples into a foreign relation */\n+\tBeginForeignCopyIn_function BeginForeignCopy;\n+\tEndForeignCopyIn_function EndForeignCopy;\n+\tExecForeignCopyIn_function ExecForeignCopy;\n\nYou get rid of this 'In' in the function names, but the types are still \nwith it:\n\n+typedef void (*BeginForeignCopyIn_function) (ModifyTableState *mtstate,\n+\t\tResultRelInfo *rinfo);\n+\n+typedef void (*EndForeignCopyIn_function) (EState *estate,\n+\t\tResultRelInfo *rinfo);\n+\n+typedef void (*ExecForeignCopyIn_function) (ResultRelInfo *rinfo,\n+\t\tTupleTableSlot **slots,\n+\t\tint nslots);\n\nAlso docs refer to old function names:\n\n+void\n+BeginForeignCopyIn(ModifyTableState *mtstate,\n+ ResultRelInfo *rinfo);\n\nI think that it'd be better to choose either of these two naming schemes \nand use it everywhere for consistency.\n\n> \n> Any thoughts on the taking out the refactoring changes out of the main\n> patch as I suggested?\n> \n\n+1 for splitting the 
patch. It was rather difficult for me to \ndistinguish changes required by COPY via postgres_fdw from this \nrefactoring.\n\nAnother ambiguous part of the refactoring was in changing \nInitResultRelInfo() arguments:\n\n@@ -1278,6 +1280,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,\n \t\t\t\t Relation resultRelationDesc,\n \t\t\t\t Index resultRelationIndex,\n \t\t\t\t Relation partition_root,\n+\t\t\t\t bool use_multi_insert,\n \t\t\t\t int instrument_options)\n\nWhy do we need to pass this use_multi_insert flag here? Would it be \nbetter to set resultRelInfo->ri_usesMultiInsert in the \nInitResultRelInfo() unconditionally like it is done for \nri_usesFdwDirectModify? And after that it will be up to the caller \nwhether to use multi-insert or not based on their own circumstances. \nOtherwise now we have a flag to indicate that we want to check for \nanother flag, while this check doesn't look costly.\n\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 08 Sep 2020 12:29:14 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 9/8/20 12:34 PM, Amit Langote wrote:\n> Hi Andrey,\n> \n> On Mon, Sep 7, 2020 at 7:31 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 9/7/20 12:26 PM, Michael Paquier wrote:\n>>> While on it, the CF bot is telling that the documentation of the patch\n>>> fails to compile. This needs to be fixed.\n>>> --\n>>> Michael\n>>>\n>> v.7 (in attachment) fixes this problem.\n>> I also accepted Amit's suggestion to rename all fdwapi routines such as\n>> ForeignCopyIn to *ForeignCopy.\n> \n> Any thoughts on the taking out the refactoring changes out of the main\n> patch as I suggested?\n> \nSorry I thought you asked to ignore your previous letter. 
I'll look into \nthis patch set shortly.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 8 Sep 2020 15:21:33 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Alexey,\n\nOn Tue, Sep 8, 2020 at 6:29 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> On 2020-09-08 10:34, Amit Langote wrote:\n> > Any thoughts on the taking out the refactoring changes out of the main\n> > patch as I suggested?\n> >\n>\n> +1 for splitting the patch. It was rather difficult for me to\n> distinguish changes required by COPY via postgres_fdw from this\n> refactoring.\n>\n> Another ambiguous part of the refactoring was in changing\n> InitResultRelInfo() arguments:\n>\n> @@ -1278,6 +1280,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,\n> Relation resultRelationDesc,\n> Index resultRelationIndex,\n> Relation partition_root,\n> + bool use_multi_insert,\n> int instrument_options)\n>\n> Why do we need to pass this use_multi_insert flag here? Would it be\n> better to set resultRelInfo->ri_usesMultiInsert in the\n> InitResultRelInfo() unconditionally like it is done for\n> ri_usesFdwDirectModify? And after that it will be up to the caller\n> whether to use multi-insert or not based on their own circumstances.\n> Otherwise now we have a flag to indicate that we want to check for\n> another flag, while this check doesn't look costly.\n\nHmm, I think having two flags seems confusing and bug prone,\nespecially if you consider partitions. For example, if a partition's\nri_usesMultiInsert is true, but CopyFrom()'s local flag is false, then\nexecPartition.c: ExecInitPartitionInfo() would wrongly perform\nBeginForeignCopy() based on only ri_usesMultiInsert, because it\nwouldn't know CopyFrom()'s local flag. 
Am I missing something?\n\n-- \nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Sep 2020 23:00:12 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2020-09-08 17:00, Amit Langote wrote:\n> Hi Alexey,\n> \n> On Tue, Sep 8, 2020 at 6:29 PM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> On 2020-09-08 10:34, Amit Langote wrote:\n>> > Any thoughts on the taking out the refactoring changes out of the main\n>> > patch as I suggested?\n>> >\n>> \n>> +1 for splitting the patch. It was rather difficult for me to\n>> distinguish changes required by COPY via postgres_fdw from this\n>> refactoring.\n>> \n>> Another ambiguous part of the refactoring was in changing\n>> InitResultRelInfo() arguments:\n>> \n>> @@ -1278,6 +1280,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,\n>> Relation resultRelationDesc,\n>> Index resultRelationIndex,\n>> Relation partition_root,\n>> + bool use_multi_insert,\n>> int instrument_options)\n>> \n>> Why do we need to pass this use_multi_insert flag here? Would it be\n>> better to set resultRelInfo->ri_usesMultiInsert in the\n>> InitResultRelInfo() unconditionally like it is done for\n>> ri_usesFdwDirectModify? And after that it will be up to the caller\n>> whether to use multi-insert or not based on their own circumstances.\n>> Otherwise now we have a flag to indicate that we want to check for\n>> another flag, while this check doesn't look costly.\n> \n> Hmm, I think having two flags seems confusing and bug prone,\n> especially if you consider partitions. For example, if a partition's\n> ri_usesMultiInsert is true, but CopyFrom()'s local flag is false, then\n> execPartition.c: ExecInitPartitionInfo() would wrongly perform\n> BeginForeignCopy() based on only ri_usesMultiInsert, because it\n> wouldn't know CopyFrom()'s local flag. 
Am I missing something?\n\nNo, you're right. If someone want to share a state and use ResultRelInfo \n(RRI) for that purpose, then it's fine, but CopyFrom() may simply \noverride RRI->ri_usesMultiInsert if needed and pass this RRI further.\n\nThis is how it's done for RRI->ri_usesFdwDirectModify. \nInitResultRelInfo() initializes it to false and then \nExecInitModifyTable() changes the flag if needed.\n\nProbably this is just a matter of personal choice, but for me the \ncurrent implementation with additional argument in InitResultRelInfo() \ndoesn't look completely right. Maybe because a caller now should pass an \nadditional argument (as false) even if it doesn't care about \nri_usesMultiInsert at all. It also adds additional complexity and feels \nlike abstractions leaking.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Tue, 08 Sep 2020 18:34:31 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 9/8/20 8:34 PM, Alexey Kondratov wrote:\n> On 2020-09-08 17:00, Amit Langote wrote:\n>> <a.kondratov@postgrespro.ru> wrote:\n>>> On 2020-09-08 10:34, Amit Langote wrote:\n>>> Another ambiguous part of the refactoring was in changing\n>>> InitResultRelInfo() arguments:\n>>>\n>>> @@ -1278,6 +1280,7 @@ InitResultRelInfo(ResultRelInfo *resultRelInfo,\n>>>                                   Relation resultRelationDesc,\n>>>                                   Index resultRelationIndex,\n>>>                                   Relation partition_root,\n>>> +                                 bool use_multi_insert,\n>>>                                   int instrument_options)\n>>>\n>>> Why do we need to pass this use_multi_insert flag here? 
Would it be\n>>> better to set resultRelInfo->ri_usesMultiInsert in the\n>>> InitResultRelInfo() unconditionally like it is done for\n>>> ri_usesFdwDirectModify? And after that it will be up to the caller\n>>> whether to use multi-insert or not based on their own circumstances.\n>>> Otherwise now we have a flag to indicate that we want to check for\n>>> another flag, while this check doesn't look costly.\n>>\n>> Hmm, I think having two flags seems confusing and bug prone,\n>> especially if you consider partitions.  For example, if a partition's\n>> ri_usesMultiInsert is true, but CopyFrom()'s local flag is false, then\n>> execPartition.c: ExecInitPartitionInfo() would wrongly perform\n>> BeginForeignCopy() based on only ri_usesMultiInsert, because it\n>> wouldn't know CopyFrom()'s local flag.  Am I missing something?\n> \n> No, you're right. If someone want to share a state and use ResultRelInfo \n> (RRI) for that purpose, then it's fine, but CopyFrom() may simply \n> override RRI->ri_usesMultiInsert if needed and pass this RRI further.\n> \n> This is how it's done for RRI->ri_usesFdwDirectModify. \n> InitResultRelInfo() initializes it to false and then \n> ExecInitModifyTable() changes the flag if needed.\n> \n> Probably this is just a matter of personal choice, but for me the \n> current implementation with additional argument in InitResultRelInfo() \n> doesn't look completely right. Maybe because a caller now should pass an \n> additional argument (as false) even if it doesn't care about \n> ri_usesMultiInsert at all. It also adds additional complexity and feels \n> like abstractions leaking.\nI didn't feel what the problem was and prepared a patch version \naccording to Alexey's suggestion (see Alternate.patch).\nThis does not seem very convenient and will lead to errors in the \nfuture. So, I agree with Amit.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 9 Sep 2020 13:45:26 +0500", "msg_from": "\"Andrey V. 
Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Version 8 split into two patches (in accordance with Amit suggestion).\nAlso I eliminate naming inconsistency (thanks to Alexey).\nBased on master, f481d28232.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 9 Sep 2020 14:38:20 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2020-09-09 11:45, Andrey V. Lepikhov wrote:\n> On 9/8/20 8:34 PM, Alexey Kondratov wrote:\n>> On 2020-09-08 17:00, Amit Langote wrote:\n>>> <a.kondratov@postgrespro.ru> wrote:\n>>>> On 2020-09-08 10:34, Amit Langote wrote:\n>>>> Another ambiguous part of the refactoring was in changing\n>>>> InitResultRelInfo() arguments:\n>>>> \n>>>> @@ -1278,6 +1280,7 @@ InitResultRelInfo(ResultRelInfo \n>>>> *resultRelInfo,\n>>>>                                   Relation resultRelationDesc,\n>>>>                                   Index resultRelationIndex,\n>>>>                                   Relation partition_root,\n>>>> +                                 bool use_multi_insert,\n>>>>                                   int instrument_options)\n>>>> \n>>>> Why do we need to pass this use_multi_insert flag here? Would it be\n>>>> better to set resultRelInfo->ri_usesMultiInsert in the\n>>>> InitResultRelInfo() unconditionally like it is done for\n>>>> ri_usesFdwDirectModify? And after that it will be up to the caller\n>>>> whether to use multi-insert or not based on their own circumstances.\n>>>> Otherwise now we have a flag to indicate that we want to check for\n>>>> another flag, while this check doesn't look costly.\n>>> \n>>> Hmm, I think having two flags seems confusing and bug prone,\n>>> especially if you consider partitions.  
For example, if a partition's\n>>> ri_usesMultiInsert is true, but CopyFrom()'s local flag is false, \n>>> then\n>>> execPartition.c: ExecInitPartitionInfo() would wrongly perform\n>>> BeginForeignCopy() based on only ri_usesMultiInsert, because it\n>>> wouldn't know CopyFrom()'s local flag.  Am I missing something?\n>> \n>> No, you're right. If someone want to share a state and use \n>> ResultRelInfo (RRI) for that purpose, then it's fine, but CopyFrom() \n>> may simply override RRI->ri_usesMultiInsert if needed and pass this \n>> RRI further.\n>> \n>> This is how it's done for RRI->ri_usesFdwDirectModify. \n>> InitResultRelInfo() initializes it to false and then \n>> ExecInitModifyTable() changes the flag if needed.\n>> \n>> Probably this is just a matter of personal choice, but for me the \n>> current implementation with additional argument in InitResultRelInfo() \n>> doesn't look completely right. Maybe because a caller now should pass \n>> an additional argument (as false) even if it doesn't care about \n>> ri_usesMultiInsert at all. It also adds additional complexity and \n>> feels like abstractions leaking.\n> I didn't feel what the problem was and prepared a patch version\n> according to Alexey's suggestion (see Alternate.patch).\n\nYes, that's very close to what I've meant.\n\n+\tleaf_part_rri->ri_usesMultiInsert = (leaf_part_rri->ri_usesMultiInsert \n&&\n+\t\trootResultRelInfo->ri_usesMultiInsert) ? true : false;\n\nThis could be just:\n\n+\tleaf_part_rri->ri_usesMultiInsert = (leaf_part_rri->ri_usesMultiInsert \n&&\n+\t\trootResultRelInfo->ri_usesMultiInsert);\n\n> This does not seem very convenient and will lead to errors in the\n> future. So, I agree with Amit.\n\nAnd InitResultRelInfo() may set ri_usesMultiInsert to false by default, \nsince it's used only by COPY now. 
Then you won't need this in several \nplaces:\n\n+\tresultRelInfo->ri_usesMultiInsert = false;\n\nWhile the logic of turning multi-insert on with all the validations \nrequired could be factored out of InitResultRelInfo() to a separate \nroutine.\n\nAnyway, I don't insist at all and think it's fine to stick to the \noriginal v7's logic.\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n", "msg_date": "Wed, 09 Sep 2020 12:42:09 +0300", "msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Wed, Sep 9, 2020 at 6:42 PM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n> On 2020-09-09 11:45, Andrey V. Lepikhov wrote:\n> > This does not seem very convenient and will lead to errors in the\n> > future. So, I agree with Amit.\n>\n> And InitResultRelInfo() may set ri_usesMultiInsert to false by default,\n> since it's used only by COPY now. Then you won't need this in several\n> places:\n>\n> + resultRelInfo->ri_usesMultiInsert = false;\n>\n> While the logic of turning multi-insert on with all the validations\n> required could be factored out of InitResultRelInfo() to a separate\n> routine.\n\nInteresting idea. Maybe better to have a separate routine like Alexey says.\n\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Sep 2020 21:51:49 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 9/9/20 5:51 PM, Amit Langote wrote:\n> On Wed, Sep 9, 2020 at 6:42 PM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> On 2020-09-09 11:45, Andrey V. Lepikhov wrote:\n>>> This does not seem very convenient and will lead to errors in the\n>>> future. 
So, I agree with Amit.\n>>\n>> And InitResultRelInfo() may set ri_usesMultiInsert to false by default,\n>> since it's used only by COPY now. Then you won't need this in several\n>> places:\n>>\n>> + resultRelInfo->ri_usesMultiInsert = false;\n>>\n>> While the logic of turning multi-insert on with all the validations\n>> required could be factored out of InitResultRelInfo() to a separate\n>> routine.\n> \n> Interesting idea. Maybe better to have a separate routine like Alexey says.\nOk. I rewrited the patch 0001 with the Alexey suggestion.\nPatch 0002 required only minor changes (see the new version in the attachment).\n\nI also added some optimizations (see the 0003 and 0004 patches). Here we \nexecute 'COPY .. FROM STDIN' on the foreign server only once, in the \nBeginForeignCopy routine. These are proof-of-concept patches.\n\nI also see that error message processing needs to be rewritten. Unlike \nthe INSERT operation, which is applied to each row, here we find out about \ncopy errors only after sending the end of the copy. Currently, implementations \n0002 and 0004 provide uninformative error messages for some cases.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 10 Sep 2020 14:57:43 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Thu, Sep 10, 2020 at 6:57 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 9/9/20 5:51 PM, Amit Langote wrote:\n> > On Wed, Sep 9, 2020 at 6:42 PM Alexey Kondratov <a.kondratov@postgrespro.ru> wrote:\n> >> And InitResultRelInfo() may set ri_usesMultiInsert to false by default,\n> >> since it's used only by COPY now. 
Then you won't need this in several\n> >> places:\n> >>\n> >> + resultRelInfo->ri_usesMultiInsert = false;\n> >>\n> >> While the logic of turning multi-insert on with all the validations\n> >> required could be factored out of InitResultRelInfo() to a separate\n> >> routine.\n> >\n> > Interesting idea. Maybe better to have a separate routine like Alexey says.\n> Ok. I rewrited the patch 0001 with the Alexey suggestion.\n\nThank you. Some mostly cosmetic suggestions on that:\n\n+bool\n+checkMultiInsertMode(const ResultRelInfo *rri, const ResultRelInfo *parent)\n\nI think we should put this definition in executor.c and export in\nexecutor.h, not execPartition.c/h. Also, better to match the naming\nstyle of surrounding executor routines, say,\nExecRelationAllowsMultiInsert? I'm not sure if we need the 'parent'\nparameter but as it's pretty specific to partition's case, maybe\npartition_root is a better name.\n\n+ if (!checkMultiInsertMode(target_resultRelInfo, NULL))\n+ {\n+ /*\n+ * Do nothing. Can't allow multi-insert mode if previous conditions\n+ * checking disallow this.\n+ */\n+ }\n\nPersonally, I find this notation with empty blocks a bit strange.\nMaybe it's easier to read this instead:\n\n if (!cstate->volatile_defexprs &&\n !contain_volatile_functions(cstate->whereClause) &&\n ExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\n target_resultRelInfo->ri_usesMultiInsert = true;\n\nAlso, I don't really understand why we need\nlist_length(cstate->attnumlist) > 0 to use multi-insert on foreign\ntables but apparently we do. The next patch should add that condition\nhere along with a brief note on that in the comment.\n\n- if (resultRelInfo->ri_FdwRoutine != NULL &&\n- resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n- resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n- resultRelInfo);\n+ /*\n+ * Init COPY into foreign table. 
Initialization of copying into foreign\n+ * partitions will be done later.\n+ */\n+ if (target_resultRelInfo->ri_FdwRoutine != NULL &&\n+ target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n+ target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n+ resultRelInfo);\n\n\n@@ -3349,11 +3302,10 @@ CopyFrom(CopyState cstate)\n if (target_resultRelInfo->ri_FdwRoutine != NULL &&\n target_resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n target_resultRelInfo->ri_FdwRoutine->EndForeignInsert(estate,\n-\ntarget_resultRelInfo);\n+ target_resultRelInfo);\n\nThese two hunks seem unnecessary, which I think I introduced into this\npatch when breaking it out of the main one.\n\nPlease check the attached delta patch which contains the above changes.\n\n--\nAmit Langote\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 16 Sep 2020 18:10:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "16.09.2020 12:10, Amit Langote wrote:\n> On Thu, Sep 10, 2020 at 6:57 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 9/9/20 5:51 PM, Amit Langote wrote:\n>> Ok. I rewrited the patch 0001 with the Alexey suggestion.\n> \n> Thank you. Some mostly cosmetic suggestions on that:\n> \n> +bool\n> +checkMultiInsertMode(const ResultRelInfo *rri, const ResultRelInfo *parent)\n> \n> I think we should put this definition in executor.c and export in\n> executor.h, not execPartition.c/h. Also, better to match the naming\n> style of surrounding executor routines, say,\n> ExecRelationAllowsMultiInsert? I'm not sure if we need the 'parent'\n> parameter but as it's pretty specific to partition's case, maybe\n> partition_root is a better name.\nAgreed\n\n> + if (!checkMultiInsertMode(target_resultRelInfo, NULL))\n> + {\n> + /*\n> + * Do nothing. 
Can't allow multi-insert mode if previous conditions\n> + * checking disallow this.\n> + */\n> + }\n> \n> Personally, I find this notation with empty blocks a bit strange.\n> Maybe it's easier to read this instead:\n> \n> if (!cstate->volatile_defexprs &&\n> !contain_volatile_functions(cstate->whereClause) &&\n> ExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\n> target_resultRelInfo->ri_usesMultiInsert = true;\nAgreed\n\n> Also, I don't really understand why we need\n> list_length(cstate->attnumlist) > 0 to use multi-insert on foreign\n> tables but apparently we do. The next patch should add that condition\n> here along with a brief note on that in the comment.\nThis is a feature of the COPY command. It can't be used without at least \none column in the column list. However, foreign tables without columns can exist.\nYou can see this problem if you apply the 0002 patch on top of your \ndelta patch. Ashutosh noticed this problem in [1] and added a \nregression test for it.\nI included this expression (with comments) in the 0002 patch.\n\n> \n> - if (resultRelInfo->ri_FdwRoutine != NULL &&\n> - resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> - resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n> - resultRelInfo);\n> + /*\n> + * Init COPY into foreign table. 
Initialization of copying into foreign\n> + * partitions will be done later.\n> + */\n> + if (target_resultRelInfo->ri_FdwRoutine != NULL &&\n> + target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> + target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n> + resultRelInfo);\n> \n> \n> @@ -3349,11 +3302,10 @@ CopyFrom(CopyState cstate)\n> if (target_resultRelInfo->ri_FdwRoutine != NULL &&\n> target_resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n> target_resultRelInfo->ri_FdwRoutine->EndForeignInsert(estate,\n> -\n> target_resultRelInfo);\n> + target_resultRelInfo);\n> \n> These two hunks seem unnecessary, which I think I introduced into this\n> patch when breaking it out of the main one.\n> \n> Please check the attached delta patch which contains the above changes.\nI applied your delta patch to the 0001 patch and fixed the 0002 patch in \naccordance with these changes.\nPatches 0003 and 0004 are experimental, and I will not maintain them \nuntil their applicability has been discussed.\n\n[1] \nhttps://www.postgresql.org/message-id/CAExHW5uAtyAVL-iuu1Hsd0fycqS5UHoHCLfauYDLQwRucwC9Og%40mail.gmail.com\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Sun, 20 Sep 2020 12:12:08 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "This patch currently looks very ready for use. And I'm taking a close \nlook at the error reporting. 
Here we have a difference in behavior between \nlocal and foreign tables:\n\nThe regression test in postgres_fdw.sql:\ncopy rem2 from stdin;\n-1\txyzzy\n\\.\n\nreports error (1):\n=================\nERROR: new row for relation \"loc2\" violates check constraint...\nDETAIL: Failing row contains (-1, xyzzy).\nCONTEXT: COPY loc2, line 1: \"-1\txyzzy\"\nremote SQL command: COPY public.loc2(f1, f2) FROM STDIN\nCOPY rem2, line 2\n\nBut a local COPY into loc2 reports another error (2):\n===================================================\ncopy loc2 from stdin;\nERROR: new row for relation \"loc2\" violates check constraint...\nDETAIL: Failing row contains (-1, xyzzy).\nCONTEXT: COPY loc2, line 1: \"-1\txyzzy\"\n\nReport (2) is shorter and more specific.\nReport (1) contains meaningless information.\n\nMaybe we need to improve the error report? For example, like this:\nERROR: Failed COPY into foreign table \"rem2\":\nnew row for relation \"loc2\" violates check constraint...\nDETAIL: Failing row contains (-1, xyzzy).\nremote SQL command: COPY public.loc2(f1, f2) FROM STDIN\nCOPY rem2, line 1\n\nThe problem here is that we run into the error only after the COPY FROM \ncommand completes, and we need to translate the lineno from the foreign server \nto the lineno of the overall COPY command.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 21 Sep 2020 14:20:16 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hello Andrey-san,\r\n\r\n\r\nThank you for taking on an interesting feature. 
Below are my review comments.\r\n\r\n\r\n(1)\r\n-\t/* for use by copy.c when performing multi-inserts */\r\n+\t/*\r\n+\t * The following fields are currently only relevant to copy.c.\r\n+\t *\r\n+\t * True if okay to use multi-insert on this relation\r\n+\t */\r\n+\tbool ri_usesMultiInsert;\r\n+\r\n+\t/* Buffer allocated to this relation when using multi-insert mode */\r\n \tstruct CopyMultiInsertBuffer *ri_CopyMultiInsertBuffer;\r\n } ResultRelInfo;\r\n\r\nIt's better to place the new bool member next to an existing bool member, so that the structure doesn't get larger.\r\n\r\n\r\n(2)\r\n+\tAssert(rri->ri_usesMultiInsert == false);\r\n\r\nAs the above assertion represents, I'm afraid the semantics of ExecRelationAllowsMultiInsert() and ResultRelInfo->ri_usesMultiInsert are unclear. In CopyFrom(), ri_usesMultiInsert is set by also considering the COPY-specific conditions:\r\n\r\n+\tif (!cstate->volatile_defexprs &&\r\n+\t\t!contain_volatile_functions(cstate->whereClause) &&\r\n+\t\tExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\r\n+\t\ttarget_resultRelInfo->ri_usesMultiInsert = true;\r\n\r\nOn the other hand, in below ExecInitPartitionInfo(), ri_usesMultiInsert is set purely based on the relation's characteristics.\r\n\r\n+\tleaf_part_rri->ri_usesMultiInsert =\r\n+\t\tExecRelationAllowsMultiInsert(leaf_part_rri, rootResultRelInfo);\r\n\r\nIn addition to these differences, I think it's a bit confusing that the function itself doesn't record the check result in ri_usesMultiInsert.\r\n\r\nIt's probably easy to understand to not add ri_usesMultiInsert, and the function just encapsulates the check logic based solely on the relation characteristics and returns the result. So, the argument is just one ResultRelInfo. The caller (e.g. 
COPY) combines the function result with other specific conditions.\r\n\r\n\r\n(3)\r\n+typedef void (*BeginForeignCopy_function) (ModifyTableState *mtstate,\r\n+\t\t\t\t\t\t\t\t\t\t\t ResultRelInfo *rinfo);\r\n+\r\n+typedef void (*EndForeignCopy_function) (EState *estate,\r\n+\t\t\t\t\t\t\t\t\t\t ResultRelInfo *rinfo);\r\n+\r\n+typedef void (*ExecForeignCopy_function) (ResultRelInfo *rinfo,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t TupleTableSlot **slots,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t int nslots);\r\n\r\nTo align with other function groups, it's better to place the functions in order of Begin, Exec, and End.\r\n\r\n\r\n(4)\r\n+\t/* COPY a bulk of tuples into a foreign relation */\r\n+\tBeginForeignCopy_function BeginForeignCopy;\r\n+\tEndForeignCopy_function EndForeignCopy;\r\n+\tExecForeignCopy_function ExecForeignCopy;\r\n\r\nTo align with the other functions' comment, the comment should be:\r\n\t/* Support functions for COPY */\r\n\r\n\r\n(5)\r\n+<programlisting>\r\n+TupleTableSlot *\r\n+ExecForeignCopy(ResultRelInfo *rinfo,\r\n+ TupleTableSlot **slots,\r\n+ int nslots);\r\n+</programlisting>\r\n+\r\n+ Copy a bulk of tuples into the foreign table.\r\n+ <literal>estate</literal> is global execution state for the query.\r\n\r\nThe return type is void.\r\n\r\n\r\n(6)\r\n+ <literal>nslots</literal> cis a number of tuples in the <literal>slots</literal>\r\n\r\ncis -> is\r\n\r\n\r\n(7)\r\n+ <para>\r\n+ If the <function>ExecForeignCopy</function> pointer is set to\r\n+ <literal>NULL</literal>, attempts to insert into the foreign table will fail\r\n+ with an error message.\r\n+ </para>\r\n\r\n\"attempts to insert into\" should be \"attempts to run COPY on\", because it's used for COPY.\r\nFurthermore, if ExecForeignCopy is NULL, COPY should use ExecForeignInsert() instead, right? 
Otherwise, existing FDWs would become unable to be used for COPY.\r\n\r\n\r\n(8)\r\n+\tbool\t\tpipe = (filename == NULL) && (data_dest_cb == NULL);\r\n\r\nThe above pipe in BeginCopyTo() is changed to not match pipe in DoCopyTo(), which only refers to filename. Should pipe in DoCopyTo() also be changed? If no, the use of the same variable name for different conditions is confusing.\r\n\r\n\r\n(9)\r\n-\t * partitions will be done later.\r\n+-\t * partitions will be done later.\r\n\r\nThis is an unintended addition of '-'?\r\n\r\n\r\n(10)\r\n-\tif (resultRelInfo->ri_FdwRoutine != NULL &&\r\n-\t\tresultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\r\n-\t\tresultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\r\n-\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\tif (target_resultRelInfo->ri_FdwRoutine != NULL)\r\n+\t{\r\n+\t\tif (target_resultRelInfo->ri_usesMultiInsert)\r\n+\t\t\ttarget_resultRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\t\telse if (target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\r\n+\t\t\ttarget_resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tresultRelInfo);\r\n+\t}\r\n\r\nBeginForeignCopy() should be called if it's defined, because BeginForeignCopy() is an optional function.\r\n\r\n\r\n(11) \r\n+\t\toldcontext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\r\n+\r\n+\t\ttable_multi_insert(resultRelInfo->ri_RelationDesc,\r\n+\t\t\t\t\t\t slots,\r\n\r\nThe extra empty line seems unintended.\r\n\r\n\r\n(12)\r\n@@ -585,7 +583,8 @@ CopySendEndOfRow(CopyState cstate)\r\n \t\t\t(void) pq_putmessage('d', fe_msgbuf->data, fe_msgbuf->len);\r\n \t\t\tbreak;\r\n \t\tcase COPY_CALLBACK:\r\n-\t\t\tAssert(false);\t\t/* Not yet supported. 
*/\r\n+\t\t\tCopySendChar(cstate, '\\n');\r\n+\t\t\tcstate->data_dest_cb(fe_msgbuf->data, fe_msgbuf->len);\r\n\r\nAs in the COPY_FILENAME case, shouldn't the line terminator be sent only in text format, and be changed to \\r\\n on Windows? I'm asking this as I'm probably a bit confused about in what situation COPY_CALLBACK could be used. I thought the binary format and \\r\\n line terminator could be necessary depending on the FDW implementation.\r\n\r\n\r\n(13)\r\n@@ -1001,9 +1001,13 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\r\n \t * If the partition is a foreign table, let the FDW init itself for\r\n \t * routing tuples to the partition.\r\n \t */\r\n-\tif (partRelInfo->ri_FdwRoutine != NULL &&\r\n-\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\r\n-\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\r\n+\tif (partRelInfo->ri_FdwRoutine != NULL)\r\n+\t{\r\n+\t\tif (partRelInfo->ri_usesMultiInsert)\r\n+\t\t\tpartRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate, partRelInfo);\r\n+\t\telse if (partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\r\n+\t\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\r\n+\t}\r\n\r\nBeginForeignCopy() should be called only if it's defined, because BeginForeignCopy() is an optional function.\r\n\r\n\r\n(14)\r\n@@ -1205,10 +1209,18 @@ ExecCleanupTupleRouting(ModifyTableState *mtstate,\r\n \t\tResultRelInfo *resultRelInfo = proute->partitions[i];\r\n \r\n \t\t/* Allow any FDWs to shut down */\r\n-\t\tif (resultRelInfo->ri_FdwRoutine != NULL &&\r\n-\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\r\n-\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\r\n-\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\t\tif (resultRelInfo->ri_FdwRoutine != NULL)\r\n+\t\t{\r\n+\t\t\tif (resultRelInfo->ri_usesMultiInsert)\r\n+\t\t\t{\r\n+\t\t\t\tAssert(resultRelInfo->ri_FdwRoutine->EndForeignCopy != 
NULL);\r\n+\t\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignCopy(mtstate->ps.state,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\t\t\t}\r\n+\t\t\telse if (resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\r\n+\t\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\t\t}\r\n\r\nEndForeignCopy() is an optional function, isn't it? That is, it's called if it's defined.\r\n\r\n\r\n(15)\r\n+static void\r\n+pgfdw_copy_dest_cb(void *buf, int len)\r\n+{\r\n+\tPGconn *conn = copy_fmstate->conn;\r\n+\r\n+\tif (PQputCopyData(conn, (char *) buf, len) <= 0)\r\n+\t{\r\n+\t\tPGresult *res = PQgetResult(conn);\r\n+\r\n+\t\tpgfdw_report_error(ERROR, res, conn, true, copy_fmstate->query);\r\n+\t}\r\n+}\r\n\r\nThe following page says \"Use PQerrorMessage to retrieve details if the return value is -1.\" So, it's correct to not use PGresult here and pass NULL as the second argument to pgfdw_report_error().\r\n\r\nhttps://www.postgresql.org/docs/devel/libpq-copy.html\r\n\r\n\r\n(16)\r\n+\t\tfor (i = 0; i < nslots; i++)\r\n+\t\t\tCopyOneRowTo(fmstate->cstate, slots[i]);\r\n+\r\n+\t\tstatus = true;\r\n+\t}\r\n\r\nI'm afraid it's not intuitive what \"status is true\" means. I think copy_data_sent or copy_send_success would be better for the variable name.\r\n\r\n\r\n(17)\r\n+\t\tif (PQputCopyEnd(conn, status ? 
NULL : _(\"canceled by server\")) <= 0 ||\r\n+\t\t\tPQflush(conn))\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\t(errmsg(\"error returned by PQputCopyEnd: %s\",\r\n+\t\t\t\t\t\t\tPQerrorMessage(conn))));\r\n\r\nAs the places that call PQsendQuery(), it seems preferrable to call pgfdw_report_error() here too.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Mon, 19 Oct 2020 04:12:25 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "19.10.2020 09:12, tsunakawa.takay@fujitsu.com пишет:\n> Hello Andrey-san,\n> \n> \n> Thank you for challenging an interesting feature. Below are my review comments.\n> \n> \n> (1)\n> -\t/* for use by copy.c when performing multi-inserts */\n> +\t/*\n> +\t * The following fields are currently only relevant to copy.c.\n> +\t *\n> +\t * True if okay to use multi-insert on this relation\n> +\t */\n> +\tbool ri_usesMultiInsert;\n> +\n> +\t/* Buffer allocated to this relation when using multi-insert mode */\n> \tstruct CopyMultiInsertBuffer *ri_CopyMultiInsertBuffer;\n> } ResultRelInfo;\n> \n> It's better to place the new bool member next to an existing bool member, so that the structure doesn't get larger.\nHere the variable position chosen in accordance with the logical \nmeaning. I don't see large problem with size of this structure.\n> \n> \n> (2)\n> +\tAssert(rri->ri_usesMultiInsert == false);\n> \n> As the above assertion represents, I'm afraid the semantics of ExecRelationAllowsMultiInsert() and ResultRelInfo->ri_usesMultiInsert are unclear. 
In CopyFrom(), ri_usesMultiInsert is set by also considering the COPY-specific conditions:\n> \n> +\tif (!cstate->volatile_defexprs &&\n> +\t\t!contain_volatile_functions(cstate->whereClause) &&\n> +\t\tExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\n> +\t\ttarget_resultRelInfo->ri_usesMultiInsert = true;\n> \n> On the other hand, in below ExecInitPartitionInfo(), ri_usesMultiInsert is set purely based on the relation's characteristics.\n> \n> +\tleaf_part_rri->ri_usesMultiInsert =\n> +\t\tExecRelationAllowsMultiInsert(leaf_part_rri, rootResultRelInfo);\n> \n> In addition to these differences, I think it's a bit confusing that the function itself doesn't record the check result in ri_usesMultiInsert.\n> \n> It's probably easy to understand to not add ri_usesMultiInsert, and the function just encapsulates the check logic based solely on the relation characteristics and returns the result. So, the argument is just one ResultRelInfo. The caller (e.g. COPY) combines the function result with other specific conditions.\nI can't fully agreed with this suggestion. We do so because in the \nfuture anyone can call this code from another subsystem for another \npurposes. And we want all the relation-related restrictions contains in \none routine. 
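As a rough C sketch of the split being argued for — relation-level restrictions collected in one reusable routine, with the CopyState-specific conditions layered on top only by the COPY caller. The field and function names are invented for illustration and do not match the real executor structures:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified relation descriptor: only the properties
 * that matter for the relation-level multi-insert decision. */
typedef struct ResultRelInfoSketch
{
    bool        has_before_row_insert_trigger;
    bool        is_foreign_without_copy_api;
} ResultRelInfoSketch;

/* Hypothetical, simplified CopyState: COPY-only conditions. */
typedef struct CopyStateSketch
{
    bool        volatile_defexprs;
    bool        where_clause_is_volatile;
} CopyStateSketch;

/* Relation characteristics only: callable from any subsystem. */
bool
relation_allows_multi_insert(const ResultRelInfoSketch *rri)
{
    return !rri->has_before_row_insert_trigger &&
           !rri->is_foreign_without_copy_api;
}

/* COPY combines the reusable check with its own conditions,
 * which stay local to the COPY code. */
bool
copy_uses_multi_insert(const CopyStateSketch *cstate,
                       const ResultRelInfoSketch *rri)
{
    return !cstate->volatile_defexprs &&
           !cstate->where_clause_is_volatile &&
           relation_allows_multi_insert(rri);
}
```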
CopyState-related restrictions used in copy.c only and \ntaken out of this function.\n> \n> \n> (3)\n> +typedef void (*BeginForeignCopy_function) (ModifyTableState *mtstate,\n> +\t\t\t\t\t\t\t\t\t\t\t ResultRelInfo *rinfo);\n> +\n> +typedef void (*EndForeignCopy_function) (EState *estate,\n> +\t\t\t\t\t\t\t\t\t\t ResultRelInfo *rinfo);\n> +\n> +typedef void (*ExecForeignCopy_function) (ResultRelInfo *rinfo,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t TupleTableSlot **slots,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t int nslots);\n> \n> To align with other function groups, it's better to place the functions in order of Begin, Exec, and End.\nOk, thanks.\n> \n> \n> (4)\n> +\t/* COPY a bulk of tuples into a foreign relation */\n> +\tBeginForeignCopy_function BeginForeignCopy;\n> +\tEndForeignCopy_function EndForeignCopy;\n> +\tExecForeignCopy_function ExecForeignCopy;\n> \n> To align with the other functions' comment, the comment should be:\n> \t/* Support functions for COPY */\nAgreed\n> \n> \n> (5)\n> +<programlisting>\n> +TupleTableSlot *\n> +ExecForeignCopy(ResultRelInfo *rinfo,\n> + TupleTableSlot **slots,\n> + int nslots);\n> +</programlisting>\n> +\n> + Copy a bulk of tuples into the foreign table.\n> + <literal>estate</literal> is global execution state for the query.\n> \n> The return type is void.\nAgreed\n> \n> \n> (6)\n> + <literal>nslots</literal> cis a number of tuples in the <literal>slots</literal>\n> \n> cis -> is\nOk\n> \n> \n> (7)\n> + <para>\n> + If the <function>ExecForeignCopy</function> pointer is set to\n> + <literal>NULL</literal>, attempts to insert into the foreign table will fail\n> + with an error message.\n> + </para>\n> \n> \"attempts to insert into\" should be \"attempts to run COPY on\", because it's used for COPY.\n> Furthermore, if ExecForeignCopy is NULL, COPY should use ExecForeignInsert() instead, right? 
Otherwise, existing FDWs would become unable to be used for COPY.\nThanks\n> \n> \n> (8)\n> +\tbool\t\tpipe = (filename == NULL) && (data_dest_cb == NULL);\n> \n> The above pipe in BeginCopyTo() is changed to not match pipe in DoCopyTo(), which only refers to filename. Should pipe in DoCopyTo() also be changed? If no, the use of the same variable name for different conditions is confusing.\nOk\n> \n> \n> (9)\n> -\t * partitions will be done later.\n> +-\t * partitions will be done later.\n> \n> This is an unintended addition of '-'?\nOk\n> \n> \n> (10)\n> -\tif (resultRelInfo->ri_FdwRoutine != NULL &&\n> -\t\tresultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> -\t\tresultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\n> +\tif (target_resultRelInfo->ri_FdwRoutine != NULL)\n> +\t{\n> +\t\tif (target_resultRelInfo->ri_usesMultiInsert)\n> +\t\t\ttarget_resultRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\n> +\t\telse if (target_resultRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> +\t\t\ttarget_resultRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tresultRelInfo);\n> +\t}\n> \n> BeginForeignCopy() should be called if it's defined, because BeginForeignCopy() is an optional function.\nMaybe\n> \n> \n> (11)\n> +\t\toldcontext = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));\n> +\n> +\t\ttable_multi_insert(resultRelInfo->ri_RelationDesc,\n> +\t\t\t\t\t\t slots,\n> \n> The extra empty line seems unintended.\n> \nOk\n> \n> (12)\n> @@ -585,7 +583,8 @@ CopySendEndOfRow(CopyState cstate)\n> \t\t\t(void) pq_putmessage('d', fe_msgbuf->data, fe_msgbuf->len);\n> \t\t\tbreak;\n> \t\tcase COPY_CALLBACK:\n> -\t\t\tAssert(false);\t\t/* Not yet supported. 
*/\n> +\t\t\tCopySendChar(cstate, '\\n');\n> +\t\t\tcstate->data_dest_cb(fe_msgbuf->data, fe_msgbuf->len);\n> \n> As in the COPY_FILENAME case, shouldn't the line terminator be sent only in text format, and be changed to \\r\\n on Windows? I'm asking this as I'm probably a bit confused about in what situation COPY_CALLBACK could be used. I thought the binary format and \\r\\n line terminator could be necessary depending on the FDW implementation.\n> \nOk. I don't want to allow binary format in callback mode right now. It \nis not a subject of this patch. Maybe it will be done later.\n> \n> (13)\n> @@ -1001,9 +1001,13 @@ ExecInitRoutingInfo(ModifyTableState *mtstate,\n> \t * If the partition is a foreign table, let the FDW init itself for\n> \t * routing tuples to the partition.\n> \t */\n> -\tif (partRelInfo->ri_FdwRoutine != NULL &&\n> -\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> -\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\n> +\tif (partRelInfo->ri_FdwRoutine != NULL)\n> +\t{\n> +\t\tif (partRelInfo->ri_usesMultiInsert)\n> +\t\t\tpartRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate, partRelInfo);\n> +\t\telse if (partRelInfo->ri_FdwRoutine->BeginForeignInsert != NULL)\n> +\t\t\tpartRelInfo->ri_FdwRoutine->BeginForeignInsert(mtstate, partRelInfo);\n> +\t}\n> \n> BeginForeignCopy() should be called only if it's defined, because BeginForeignCopy() is an optional function.\nOk\n> \n> \n> (14)\n> @@ -1205,10 +1209,18 @@ ExecCleanupTupleRouting(ModifyTableState *mtstate,\n> \t\tResultRelInfo *resultRelInfo = proute->partitions[i];\n> \n> \t\t/* Allow any FDWs to shut down */\n> -\t\tif (resultRelInfo->ri_FdwRoutine != NULL &&\n> -\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n> -\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\n> +\t\tif (resultRelInfo->ri_FdwRoutine != NULL)\n> +\t\t{\n> +\t\t\tif 
(resultRelInfo->ri_usesMultiInsert)\n> +\t\t\t{\n> +\t\t\t\tAssert(resultRelInfo->ri_FdwRoutine->EndForeignCopy != NULL);\n> +\t\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignCopy(mtstate->ps.state,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\n> +\t\t\t}\n> +\t\t\telse if (resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\n> +\t\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\n> +\t\t}\n> \n> EndForeignCopy() is an optional function, isn't it? That is, it's called if it's defined.\n> \nri_usesMultiInsert must guarantee that we will use multi-insertions. And \nwe use only assertions to control this.\n> \n> (15)\n> +static void\n> +pgfdw_copy_dest_cb(void *buf, int len)\n> +{\n> +\tPGconn *conn = copy_fmstate->conn;\n> +\n> +\tif (PQputCopyData(conn, (char *) buf, len) <= 0)\n> +\t{\n> +\t\tPGresult *res = PQgetResult(conn);\n> +\n> +\t\tpgfdw_report_error(ERROR, res, conn, true, copy_fmstate->query);\n> +\t}\n> +}\n> \n> The following page says \"Use PQerrorMessage to retrieve details if the return value is -1.\" So, it's correct to not use PGresult here and pass NULL as the second argument to pgfdw_report_error().\n> \n> https://www.postgresql.org/docs/devel/libpq-copy.html\nOk\n> \n> \n> (16)\n> +\t\tfor (i = 0; i < nslots; i++)\n> +\t\t\tCopyOneRowTo(fmstate->cstate, slots[i]);\n> +\n> +\t\tstatus = true;\n> +\t}\n> \n> I'm afraid it's not intuitive what \"status is true\" means. I think copy_data_sent or copy_send_success would be better for the variable name.\nAgreed. renamed to 'OK'. In accordance with psql/copy.c.\n> \n> \n> (17)\n> +\t\tif (PQputCopyEnd(conn, status ? 
NULL : _(\"canceled by server\")) <= 0 ||\n> +\t\t\tPQflush(conn))\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errmsg(\"error returned by PQputCopyEnd: %s\",\n> +\t\t\t\t\t\t\tPQerrorMessage(conn))));\n> \n> As the places that call PQsendQuery(), it seems preferrable to call pgfdw_report_error() here too.\nAgreed\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 19 Oct 2020 19:58:34 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey-san,\r\n\r\n\r\nThanks for the revision. The patch looks good except for the following two items.\r\n\r\n\r\n(18)\r\n+\tif (target_resultRelInfo->ri_FdwRoutine != NULL)\r\n+\t{\r\n+\t\tif (target_resultRelInfo->ri_usesMultiInsert)\r\n+\t\t{\r\n+\t\t\tAssert(target_resultRelInfo->ri_FdwRoutine->BeginForeignCopy != NULL);\r\n+\t\t\ttarget_resultRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t resultRelInfo);\r\n+\t\t}\r\n\r\n> > (14)\r\n> > @@ -1205,10 +1209,18 @@ ExecCleanupTupleRouting(ModifyTableState\r\n> *mtstate,\r\n> > \t\tResultRelInfo *resultRelInfo = proute->partitions[i];\r\n> >\r\n> > \t\t/* Allow any FDWs to shut down */\r\n> > -\t\tif (resultRelInfo->ri_FdwRoutine != NULL &&\r\n> > -\t\t\tresultRelInfo->ri_FdwRoutine->EndForeignInsert !=\r\n> NULL)\r\n> > -\r\n> \tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\r\n> > -\r\n> \t\t\t\t\t resultRelInfo);\r\n> > +\t\tif (resultRelInfo->ri_FdwRoutine != NULL)\r\n> > +\t\t{\r\n> > +\t\t\tif (resultRelInfo->ri_usesMultiInsert)\r\n> > +\t\t\t{\r\n> > +\r\n> \tAssert(resultRelInfo->ri_FdwRoutine->EndForeignCopy != NULL);\r\n> > +\r\n> \tresultRelInfo->ri_FdwRoutine->EndForeignCopy(mtstate->ps.state,\r\n> > +\r\n> \t\t\t\t\t\t resultRelInfo);\r\n> > +\t\t\t}\r\n> > +\t\t\telse if\r\n> (resultRelInfo->ri_FdwRoutine->EndForeignInsert != NULL)\r\n> > 
+\r\n> \tresultRelInfo->ri_FdwRoutine->EndForeignInsert(mtstate->ps.state,\r\n> > +\r\n> \t\t\t\t\t\t resultRelInfo);\r\n> > +\t\t}\r\n> >\r\n> > EndForeignCopy() is an optional function, isn't it? That is, it's called if it's\r\n> defined.\r\n> >\r\n> ri_usesMultiInsert must guarantee that we will use multi-insertions. And we\r\n> use only assertions to control this.\r\n\r\nThe code appears to require both BeginForeignCopy and EndForeignCopy, while the following documentation says they are optional. Which is correct? (I suppose the latter is correct just like other existing Begin/End functions are optional.)\r\n\r\n+ If the <function>BeginForeignCopy</function> pointer is set to\r\n+ <literal>NULL</literal>, no action is taken for the initialization.\r\n\r\n+ If the <function>EndForeignCopy</function> pointer is set to\r\n+ <literal>NULL</literal>, no action is taken for the termination.\r\n\r\n\r\n\r\n\r\n> > (2)\r\n> > +\tAssert(rri->ri_usesMultiInsert == false);\r\n> >\r\n> > As the above assertion represents, I'm afraid the semantics of\r\n> ExecRelationAllowsMultiInsert() and ResultRelInfo->ri_usesMultiInsert are\r\n> unclear. 
In CopyFrom(), ri_usesMultiInsert is set by also considering the\r\n> COPY-specific conditions:\r\n> >\r\n> > +\tif (!cstate->volatile_defexprs &&\r\n> > +\t\t!contain_volatile_functions(cstate->whereClause) &&\r\n> > +\t\tExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\r\n> > +\t\ttarget_resultRelInfo->ri_usesMultiInsert = true;\r\n> >\r\n> > On the other hand, in below ExecInitPartitionInfo(), ri_usesMultiInsert is set\r\n> purely based on the relation's characteristics.\r\n> >\r\n> > +\tleaf_part_rri->ri_usesMultiInsert =\r\n> > +\t\tExecRelationAllowsMultiInsert(leaf_part_rri,\r\n> rootResultRelInfo);\r\n> >\r\n> > In addition to these differences, I think it's a bit confusing that the function\r\n> itself doesn't record the check result in ri_usesMultiInsert.\r\n> >\r\n> > It's probably easy to understand to not add ri_usesMultiInsert, and the\r\n> function just encapsulates the check logic based solely on the relation\r\n> characteristics and returns the result. So, the argument is just one\r\n> ResultRelInfo. The caller (e.g. COPY) combines the function result with other\r\n> specific conditions.\r\n\r\n> I can't fully agreed with this suggestion. We do so because in the future anyone\r\n> can call this code from another subsystem for another purposes. And we want\r\n> all the relation-related restrictions contains in one routine. CopyState-related\r\n> restrictions used in copy.c only and taken out of this function.\r\n\r\nI'm sorry if I'm misinterpreting you, but I think the following simply serves its role sufficiently and cleanly without using ri_usesMultiInsert.\r\n\r\nbool\r\nExecRelationAllowsMultiInsert(RelationRelInfo *rri)\r\n{\r\n\tcheck if the relation allows multiinsert based on its characteristics;\r\n\treturn true or false;\r\n}\r\n\r\nI'm concerned that if one subsystem sets ri_usesMultiInsert to true based on its additional specific conditions, it might lead to another subsystem's misjudgment. 
For example, when subsystem A and B want to do different things respectively:\r\n\r\n[Subsystem A]\r\nif (ExecRelationAllowsMultiInsert(rri) && {A's conditions})\r\n\trri->ri_usesMultiInsert = true;\r\n...\r\nif (rri->ri_usesMultiInsert)\r\n\tdo A's business;\r\n\r\n[Subsystem B]\r\nif (rri->ri_usesMultiInsert)\r\n\tdo B's business;\r\n\r\nHere, what if subsystem A and B don't want each other's specific conditions to hold true? That is, A wants to do A's business only if B's specific conditions don't hold true. If A sets rri->ri_usesMultiInsert to true and passes rri to B, then B wrongly does B's business despite that A's specific conditions are true.\r\n\r\n(I think this is due to some form of violation of encapsulation.)\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Tue, 20 Oct 2020 02:30:57 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\n\nI needed to look at this patch while working on something related, and I\nfound it got broken by 6973533650c a couple days ago. So here's a fixed\nversion, to keep cfbot happy. I haven't done any serious review yet.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 10 Nov 2020 19:17:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey-san,\r\n\r\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\r\n> I needed to look at this patch while working on something related, and I found it\r\n> got broken by 6973533650c a couple days ago. So here's a fixed version, to keep\r\n> cfbot happy. I haven't done any serious review yet.\r\n\r\nCould I or my colleague continue this patch in a few days? 
It looks it's stalled over one month.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Mon, 23 Nov 2020 02:49:59 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "\n\nOn 11/23/20 7:49 AM, tsunakawa.takay@fujitsu.com wrote:\n> Hi Andrey-san,\n> \n> From: Tomas Vondra <tomas.vondra@enterprisedb.com>\n>> I needed to look at this patch while working on something related, and I found it\n>> got broken by 6973533650c a couple days ago. So here's a fixed version, to keep\n>> cfbot happy. I haven't done any serious review yet.\n> \n> Could I or my colleague continue this patch in a few days? It looks it's stalled over one month.\n\nI don't found any problems with this patch that needed to be corrected. \nIt is wait for actions from committers side, i think.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 23 Nov 2020 13:39:00 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Mon, Nov 23, 2020 at 5:39 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 11/23/20 7:49 AM, tsunakawa.takay@fujitsu.com wrote:\n> > Could I or my colleague continue this patch in a few days? It looks it's stalled over one month.\n>\n> I don't found any problems with this patch that needed to be corrected.\n> It is wait for actions from committers side, i think.\n\nI'm planning to review this patch. 
I think it would be better for\nanother pair of eyes to take a look at it, though.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 24 Nov 2020 11:56:20 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Andrey-san, Fujita-san,\r\n\r\nFrom: Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> On Mon, Nov 23, 2020 at 5:39 PM Andrey Lepikhov\r\n> <a.lepikhov@postgrespro.ru> wrote:\r\n> > On 11/23/20 7:49 AM, tsunakawa.takay@fujitsu.com wrote:\r\n> > > Could I or my colleague continue this patch in a few days? It looks it's\r\n> stalled over one month.\r\n> >\r\n> > I don't found any problems with this patch that needed to be corrected.\r\n> > It is wait for actions from committers side, i think.\r\n> \r\n> I'm planning to review this patch. I think it would be better for\r\n> another pair of eyes to take a look at it, though.\r\n\r\n\r\nThere are the following two issues left untouched.\r\n\r\nhttps://www.postgresql.org/message-id/TYAPR01MB2990DC396B338C98F27C8ED3FE1F0%40TYAPR01MB2990.jpnprd01.prod.outlook.com\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 24 Nov 2020 04:27:07 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "\n\nOn 11/24/20 9:27 AM, tsunakawa.takay@fujitsu.com wrote:\n> Andrey-san, Fujita-san,\n> \n> From: Etsuro Fujita <etsuro.fujita@gmail.com>\n>> On Mon, Nov 23, 2020 at 5:39 PM Andrey Lepikhov\n>> <a.lepikhov@postgrespro.ru> wrote:\n>>> On 11/23/20 7:49 AM, tsunakawa.takay@fujitsu.com wrote:\n>>>> Could I or my colleague continue this patch in a few days? 
It looks it's\n>> stalled over one month.\n>>>\n>>> I don't found any problems with this patch that needed to be corrected.\n>>> It is wait for actions from committers side, i think.\n>>\n>> I'm planning to review this patch. I think it would be better for\n>> another pair of eyes to take a look at it, though.\n> \n> \n> There are the following two issues left untouched.\n> \n> https://www.postgresql.org/message-id/TYAPR01MB2990DC396B338C98F27C8ED3FE1F0%40TYAPR01MB2990.jpnprd01.prod.outlook.com\n\nI disagree with your opinion about changing the interface of the \nExecRelationAllowsMultiInsert routine. If you insist on the need for \nthis change, we need another opinion.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 24 Nov 2020 11:04:27 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 20, 2020 at 11:31 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> > > (2)\n> > > + Assert(rri->ri_usesMultiInsert == false);\n> > >\n> > > As the above assertion represents, I'm afraid the semantics of\n> > ExecRelationAllowsMultiInsert() and ResultRelInfo->ri_usesMultiInsert are\n> > unclear. 
In CopyFrom(), ri_usesMultiInsert is set by also considering the\n> > COPY-specific conditions:\n> > >\n> > > + if (!cstate->volatile_defexprs &&\n> > > + !contain_volatile_functions(cstate->whereClause) &&\n> > > + ExecRelationAllowsMultiInsert(target_resultRelInfo, NULL))\n> > > + target_resultRelInfo->ri_usesMultiInsert = true;\n> > >\n> > > On the other hand, in below ExecInitPartitionInfo(), ri_usesMultiInsert is set\n> > purely based on the relation's characteristics.\n> > >\n> > > + leaf_part_rri->ri_usesMultiInsert =\n> > > + ExecRelationAllowsMultiInsert(leaf_part_rri,\n> > rootResultRelInfo);\n> > >\n> > > In addition to these differences, I think it's a bit confusing that the function\n> > itself doesn't record the check result in ri_usesMultiInsert.\n> > >\n> > > It's probably easy to understand to not add ri_usesMultiInsert, and the\n> > function just encapsulates the check logic based solely on the relation\n> > characteristics and returns the result. So, the argument is just one\n> > ResultRelInfo. The caller (e.g. COPY) combines the function result with other\n> > specific conditions.\n>\n> > I can't fully agreed with this suggestion. We do so because in the future anyone\n> > can call this code from another subsystem for another purposes. And we want\n> > all the relation-related restrictions contains in one routine. CopyState-related\n> > restrictions used in copy.c only and taken out of this function.\n>\n> I'm sorry if I'm misinterpreting you, but I think the following simply serves its role sufficiently and cleanly without using ri_usesMultiInsert.\n>\n> bool\n> ExecRelationAllowsMultiInsert(RelationRelInfo *rri)\n> {\n> check if the relation allows multiinsert based on its characteristics;\n> return true or false;\n> }\n>\n> I'm concerned that if one subsystem sets ri_usesMultiInsert to true based on its additional specific conditions, it might lead to another subsystem's misjudgment. 
For example, when subsystem A and B want to do different things respectively:\n>\n> [Subsystem A]\n> if (ExecRelationAllowsMultiInsert(rri) && {A's conditions})\n> rri->ri_usesMultiInsert = true;\n> ...\n> if (rri->ri_usesMultiInsert)\n> do A's business;\n>\n> [Subsystem B]\n> if (rri->ri_usesMultiInsert)\n> do B's business;\n>\n> Here, what if subsystem A and B don't want each other's specific conditions to hold true? That is, A wants to do A's business only if B's specific conditions don't hold true. If A sets rri->ri_usesMultiInsert to true and passes rri to B, then B wrongly does B's business despite that A's specific conditions are true.\n>\n> (I think this is due to some form of violation of encapsulation.)\n\nSorry about chiming in late, but I think Tsunakawa-san raises some\nvalid concerns.\n\nFirst, IIUC, is whether we need the ri_usesMultiInsert flag at all. I\nthink yes, because computing that information repeatedly for every row\nseems wasteful, especially for a bulk operation, and even more so if\nwe're going to call a function when doing so.\n\nSecond is whether the interface for setting ri_usesMultiInsert\nencourages situations where different modules could possibly engage in\nconflicting behaviors. I can't think of a real-life example of that\nwith the current implementation, but maybe the interface provided in\nthe patch makes it harder to ensure that that remains true in the\nfuture. Tsunakawa-san, have you encountered an example of this, maybe\nwhen trying to integrate this patch with some other?\n\nAnyway, one thing we could do is rename\nExecRelationAllowsMultiInsert() to ExecSetRelationUsesMultiInsert(),\nthat is, to make it actually set ri_usesMultiInsert and have places\nlike CopyFrom() call it if (and only if) its local logic allows\nmulti-insert to be used. So, ri_usesMultiInsert starts out set to\nfalse and if a module wants to use multi-insert for a given target\nrelation, it calls ExecSetRelationUsesMultiInsert() to turn the flag\non. 
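The setter interface proposed here can be sketched roughly as follows — the flag starts out false, a caller that wants multi-insert asks one routine to turn it on, and that routine may still refuse based on the relation's characteristics. All names are hypothetical (this is a proposal, not existing code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical relation descriptor: two stand-in properties that
 * would forbid multi-insert, plus the flag itself. */
typedef struct RelInfoSketch
{
    bool        has_instead_of_triggers;    /* stand-in property */
    bool        fdw_lacks_copy_routines;    /* stand-in property */
    bool        ri_usesMultiInsert;         /* starts out false */
} RelInfoSketch;

/* The proposed setter: turns the flag on only when the relation's
 * characteristics allow it; otherwise the caller lives with "off". */
void
exec_set_relation_uses_multi_insert(RelInfoSketch *rri)
{
    if (rri->has_instead_of_triggers || rri->fdw_lacks_copy_routines)
        return;                 /* decision stays with this routine */
    rri->ri_usesMultiInsert = true;
}

/* A caller (e.g. COPY) opts in only when its own local logic allows
 * multi-insert, then honors whatever the setter decided. */
bool
try_enable_multi_insert(RelInfoSketch *rri, bool caller_wants_it)
{
    if (caller_wants_it)
        exec_set_relation_uses_multi_insert(rri);
    return rri->ri_usesMultiInsert;
}
```

Because only the setter ever turns the flag on, the flag's meaning stays consistent across subsystems, which is the point of renaming the check function into a setter.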
Also, given the confusion regarding how execPartition.c\nmanipulates the flag, maybe change ExecFindPartition() to accept a\nBoolean parameter multi_insert, which it will pass down to\nExecInitPartitionInfo(), which in turn will call\nExecSetRelationUsesMultiInsert() for a given partition. Of course, if\nthe logic in ExecSetRelationUsesMultiInsert() determines that\nmulti-insert can't be used, for the reasons listed in the function,\nthen the caller will have to live with that decision.\n\nAny other ideas on how to make this work and look better?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/d3fbf3bc93b7bcd99ff7fa9ee41e0e20%40postgrespro.ru\n\n\n", "msg_date": "Wed, 25 Nov 2020 18:47:51 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> Second is whether the interface for setting ri_usesMultiInsert\r\n> encourages situations where different modules could possibly engage in\r\n> conflicting behaviors. I can't think of a real-life example of that\r\n> with the current implementation, but maybe the interface provided in\r\n> the patch makes it harder to ensure that that remains true in the\r\n> future. Tsunakawa-san, have you encountered an example of this, maybe\r\n> when trying to integrate this patch with some other?\r\n\r\nThanks. No, I pointed out purely from the standpoint of program modularity (based on structured programming?)\r\n\r\n\r\n> Anyway, one thing we could do is rename\r\n> ExecRelationAllowsMultiInsert() to ExecSetRelationUsesMultiInsert(),\r\n> that is, to make it actually set ri_usesMultiInsert and have places\r\n> like CopyFrom() call it if (and only if) its local logic allows\r\n> multi-insert to be used. 
So, ri_usesMultiInsert starts out set to\r\n> false and if a module wants to use multi-insert for a given target\r\n> relation, it calls ExecSetRelationUsesMultiInsert() to turn the flag\r\n> on. Also, given the confusion regarding how execPartition.c\r\n\r\nI think separating the setting and inspection of the property into different functions will be good, at least.\r\n\r\n\r\n> manipulates the flag, maybe change ExecFindPartition() to accept a\r\n> Boolean parameter multi_insert, which it will pass down to\r\n> ExecInitPartitionInfo(), which in turn will call\r\n> ExecSetRelationUsesMultiInsert() for a given partition. Of course, if\r\n> the logic in ExecSetRelationUsesMultiInsert() determines that\r\n> multi-insert can't be used, for the reasons listed in the function,\r\n> then the caller will have to live with that decision.\r\n\r\nI can't say for sure, but it looks strange to me, because I can't find a good description of multi_insert argument for ExecFindPartition(). If we add multi_insert, I'm afraid we may want to add further arguments for other properties in the future like \"Hey, get me the partition that has triggers.\", \"Next, pass me a partition that uses a foreign table.\", etc. I think the current ExecFindPartition() is good -- \"Get me a partition that accepts this row.\"\r\n\r\nI wonder if ri_usesMultiInsert is really necessary. 
Would it cut down enough costs in the intended use case(s), say the heavyweight COPY FROM?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Thu, 26 Nov 2020 02:42:09 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Thu, Nov 26, 2020 at 11:42 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > Anyway, one thing we could do is rename\n> > ExecRelationAllowsMultiInsert() to ExecSetRelationUsesMultiInsert(),\n> > that is, to make it actually set ri_usesMultiInsert and have places\n> > like CopyFrom() call it if (and only if) its local logic allows\n> > multi-insert to be used. So, ri_usesMultiInsert starts out set to\n> > false and if a module wants to use multi-insert for a given target\n> > relation, it calls ExecSetRelationUsesMultiInsert() to turn the flag\n> > on. Also, given the confusion regarding how execPartition.c\n>\n> I think separating the setting and inspection of the property into different functions will be good, at least.\n>\n> > manipulates the flag, maybe change ExecFindPartition() to accept a\n> > Boolean parameter multi_insert, which it will pass down to\n> > ExecInitPartitionInfo(), which in turn will call\n> > ExecSetRelationUsesMultiInsert() for a given partition. Of course, if\n> > the logic in ExecSetRelationUsesMultiInsert() determines that\n> > multi-insert can't be used, for the reasons listed in the function,\n> > then the caller will have to live with that decision.\n>\n> I can't say for sure, but it looks strange to me, because I can't find a good description of multi_insert argument for ExecFindPartition(). 
If we add multi_insert, I'm afraid we may want to add further arguments for other properties in the future like \"Hey, get me the partition that has triggers.\", \"Next, pass me a partition that uses a foreign table.\", etc. I think the current ExecFindPartition() is good -- \"Get me a partition that accepts this row.\"\n>\n> I wonder if ri_usesMultiInsert is really necessary. Would it cut down enough costs in the intended use case(s), say the heavyweight COPY FROM?\n\nThinking on this more, I think I'm starting to agree with you on this.\nI skimmed the CopyFrom()'s main loop again today and indeed it doesn't\nseem that the cost of checking the individual conditions for whether\nor not to buffer the current tuple for the given target relation is\nall that big to save with ri_usesMultiInsert. So my argument that it\nis good for performance is perhaps not that strong.\n\nAndrey's original patch had the flag to, as I understand it, make the\npartitioning case work correctly. When inserting into a\nnon-partitioned table, there's only one relation to care about. In\nthat case, CopyFrom() can use either the new COPY interface or the\nINSERT interface for the entire operation when talking to a foreign\ntarget relation's FDW driver. With partitions, that has to be\nconsidered separately for each partition. What complicates the matter\nfurther is that while the original target relation (the root\npartitioned table in the partitioning case) is fully initialized in\nCopyFrom(), partitions are lazily initialized by ExecFindPartition().\nNote that the initialization of a given target relation can also\noptionally involve calling the FDW to perform any pre-COPY\ninitializations. So if a given partition is a foreign table, whether\nthe copy operation was initialized using the COPY interface or the\nINSERT interface is determined away from CopyFrom(). 
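The per-partition choice just described can be sketched roughly as follows — illustrative Python only, with hypothetical names (`uses_multi_insert`, `fdw_supports_copy`, and so on are simplified stand-ins, not the patch's actual code):

```python
# Rough model: each lazily-initialized partition decides once which FDW
# interface its copy operation was begun with, and remembers the decision
# so the matching Exec/End calls can be made later. All names are
# hypothetical illustrations, not PostgreSQL code.

def multi_insert_allowed(rel):
    if rel.get("has_before_insert_triggers"):
        return False
    if rel.get("is_foreign") and not rel.get("fdw_supports_copy"):
        return False
    return True

def init_partition(rel):
    # Plays the role described for ri_usesMultiInsert: set at partition
    # initialization, consulted on every later interaction with the driver.
    rel["uses_multi_insert"] = multi_insert_allowed(rel)
    rel["begin_called"] = ("BeginForeignCopy" if rel["uses_multi_insert"]
                           else "BeginForeignInsert")
    return rel

copy_part = init_partition({"is_foreign": True, "fdw_supports_copy": True})
ins_part = init_partition({"is_foreign": True, "fdw_supports_copy": False})
print(copy_part["begin_called"], ins_part["begin_called"])
```

In the real executor the decision of course involves more conditions; the point is only that it is made once per partition, away from CopyFrom(), which is why something has to remember the outcome.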
Andrey created\nri_usesMultiInsert to remember which was used so that CopyFrom() can\nuse the correct interface during the subsequent interactions with the\npartition's driver.\n\nNow, it does not seem outright impossible to do this without the flag,\nbut maybe Andrey thinks it is good for readability? If it is\nconfusing from a modularity standpoint, maybe we should rethink that.\nThat said, I still think that there should be a way for CopyFrom() to\ntell ExecFindPartition() which FDW interface to initialize a given\nforeign table partition's copy operation with -- COPY if the copy\nallows multi-insert, INSERT if not. Maybe the multi_insert parameter\nI mentioned earlier would serve that purpose.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Nov 2020 23:06:47 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> Andrey's original patch had the flag to, as I understand it, make the\r\n> partitioning case work correctly. When inserting into a\r\n> non-partitioned table, there's only one relation to care about. In\r\n> that case, CopyFrom() can use either the new COPY interface or the\r\n> INSERT interface for the entire operation when talking to a foreign\r\n> target relation's FDW driver. With partitions, that has to be\r\n> considered separately for each partition. 
What complicates the matter\r\n> further is that while the original target relation (the root\r\n> partitioned table in the partitioning case) is fully initialized in\r\n> CopyFrom(), partitions are lazily initialized by ExecFindPartition().\r\n\r\nYeah, I felt it a bit confusing to see the calls to Begin/EndForeignInsert() in both CopyFrom() and ExecInitRoutingInfo().\r\n\r\n\r\n> Note that the initialization of a given target relation can also\r\n> optionally involve calling the FDW to perform any pre-COPY\r\n> initializations. So if a given partition is a foreign table, whether\r\n> the copy operation was initialized using the COPY interface or the\r\n> INSERT interface is determined away from CopyFrom(). Andrey created\r\n> ri_usesMultiInsert to remember which was used so that CopyFrom() can\r\n> use the correct interface during the subsequent interactions with the\r\n> partition's driver.\r\n> \r\n> Now, it does not seem outright impossible to do this without the flag,\r\n> but maybe Andrey thinks it is good for readability? If it is\r\n> confusing from a modularity standpoint, maybe we should rethink that.\r\n> That said, I still think that there should be a way for CopyFrom() to\r\n> tell ExecFindPartition() which FDW interface to initialize a given\r\n> foreign table partition's copy operation with -- COPY if the copy\r\n> allows multi-insert, INSERT if not. Maybe the multi_insert parameter\r\n> I mentioned earlier would serve that purpose.\r\n\r\nI agree with your idea of adding multi_insert argument to ExecFindPartition() to request a multi-insert-capable partition. At first, I thought ExecFindPartition() is used for all operations, insert/delete/update/select, so I found it odd to add multi_insert argument. 
But ExecFindPartition() is used only for insert, so multi_insert argument seems okay.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Tue, 1 Dec 2020 05:39:59 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Tue, Dec 1, 2020 at 2:40 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > Andrey's original patch had the flag to, as I understand it, make the\n> > partitioning case work correctly. When inserting into a\n> > non-partitioned table, there's only one relation to care about. In\n> > that case, CopyFrom() can use either the new COPY interface or the\n> > INSERT interface for the entire operation when talking to a foreign\n> > target relation's FDW driver. With partitions, that has to be\n> > considered separately for each partition. What complicates the matter\n> > further is that while the original target relation (the root\n> > partitioned table in the partitioning case) is fully initialized in\n> > CopyFrom(), partitions are lazily initialized by ExecFindPartition().\n>\n> Yeah, I felt it a bit confusing to see the calls to Begin/EndForeignInsert() in both CopyFrom() and ExecInitRoutingInfo().\n>\n> > Note that the initialization of a given target relation can also\n> > optionally involve calling the FDW to perform any pre-COPY\n> > initializations. So if a given partition is a foreign table, whether\n> > the copy operation was initialized using the COPY interface or the\n> > INSERT interface is determined away from CopyFrom(). 
Andrey created\n> > ri_usesMultiInsert to remember which was used so that CopyFrom() can\n> > use the correct interface during the subsequent interactions with the\n> > partition's driver.\n> >\n> > Now, it does not seem outright impossible to do this without the flag,\n> > but maybe Andrey thinks it is good for readability? If it is\n> > confusing from a modularity standpoint, maybe we should rethink that.\n> > That said, I still think that there should be a way for CopyFrom() to\n> > tell ExecFindPartition() which FDW interface to initialize a given\n> > foreign table partition's copy operation with -- COPY if the copy\n> > allows multi-insert, INSERT if not. Maybe the multi_insert parameter\n> > I mentioned earlier would serve that purpose.\n>\n> I agree with your idea of adding multi_insert argument to ExecFindPartition() to request a multi-insert-capable partition. At first, I thought ExecFindPartition() is used for all operations, insert/delete/update/select, so I found it odd to add multi_insert argument. But ExecFindPartition() is used only for insert, so multi_insert argument seems okay.\n\nGood. Andrey, any thoughts on this?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Dec 2020 18:02:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 12/1/20 2:02 PM, Amit Langote wrote:\n> On Tue, Dec 1, 2020 at 2:40 PM tsunakawa.takay@fujitsu.com\n> <tsunakawa.takay@fujitsu.com> wrote:\n>> From: Amit Langote <amitlangote09@gmail.com>\n >> The code appears to require both BeginForeignCopy and EndForeignCopy,\n >> while the following documentation says they are optional. Which is\n >> correct? 
(I suppose the latter is correct just like other existing\n >> Begin/End functions are optional.)\n\nFixed.\n\n > Anyway, one thing we could do is rename\n > ExecRelationAllowsMultiInsert() to ExecSetRelationUsesMultiInsert(\n\nRenamed.\n\n >> I agree with your idea of adding multi_insert argument to \nExecFindPartition() to request a multi-insert-capable partition. At \nfirst, I thought ExecFindPartition() is used for all operations, \ninsert/delete/update/select, so I found it odd to add multi_insert \nargument. But ExecFindPartition() is used only for insert, so \nmulti_insert argument seems okay.\n >\n > Good. Andrey, any thoughts on this?\n\nI have no serious technical arguments against this, other than code \nreadability and the reduction of routine parameters. Maybe we will be \nrethinking it later?\n\nThe new version rebased on commit 525e60b742 is attached.\n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 14 Dec 2020 14:06:12 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\r\n\r\nThere is an error report in your patch as follows. 
Please take a look.\n> \n> https://travis-ci.org/github/postgresql-cfbot/postgresql/jobs/750682857#L1519\n> \n>> copyfrom.c:374:21: error: ‘save_cur_lineno’ is used uninitialized in this function [-Werror=uninitialized]\n> \n> Regards,\n> Tang\n> \n> \n\nThank you,\nsee new version in attachment.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 23 Dec 2020 14:00:18 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi\r\n\r\n> see new version in attachment.\r\n\r\nI took a look at the patch, and have some comments.\r\n\r\n1.\r\n+\tPG_FINALLY();\r\n+\t{\r\n+\t\tcopy_fmstate = NULL; /* Detect problems */\r\nI don't quite understand this comment,\r\ndoes it mean we want to detect something like a null reference?\r\n\r\n\r\n2.\r\n+\tPG_FINALLY();\r\n+\t{\r\n\t...\r\n+\t\tif (!OK)\r\n+\t\t\tPG_RE_THROW();\r\n+\t}\r\nIs this PG_RE_THROW() necessary? \r\nIMO, PG_FINALLY will reproduce the PG_RE_THROW action if we get to the code block due to an error being thrown.\r\n\r\n3.\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\t(errmsg(\"unexpected extra results during COPY of table: %s\",\r\n+\t\t\t\t\t\t\tPQerrorMessage(conn))));\r\n\r\nI found a similar message, like the following:\r\n\r\n\t\t\tpg_log_warning(\"unexpected extra results during COPY of table \\\"%s\\\"\",\r\n\t\t\t\t\t\t tocEntryTag);\r\nHow about using the existing message style?\r\n\r\n4.\r\nI noticed some non-standard code comments[1].\r\nI think it's better to comment like:\r\n/*\r\n * line 1\r\n * line 2\r\n */\r\n\r\n[1]-----------\r\n+\t\t/* Finish COPY IN protocol. 
It is needed to do after successful copy or\r\n+\t\t * after an error.\r\n\t\t */\r\n\r\n\r\n+/*\r\n+ *\r\n+ * postgresExecForeignCopy\r\n\r\n+/*\r\n+ *\r\n+ * postgresBeginForeignCopy\r\n\r\n-----------\r\nBest regards,\r\nHouzj\r\n\r\n\n\n", "msg_date": "Tue, 29 Dec 2020 11:20:47 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 29.12.2020 16:20, Hou, Zhijie wrote:\n>> see new version in attachment.\n> \n> I took a look at the patch, and have some comments.\n> \n> 1.\n> +\tPG_FINALLY();\n> +\t{\n> +\t\tcopy_fmstate = NULL; /* Detect problems */\n> I don't quite understand this comment,\n> does it mean we want to detect something like a null reference?\n> \n> \n> 2.\n> +\tPG_FINALLY();\n> +\t{\n> \t...\n> +\t\tif (!OK)\n> +\t\t\tPG_RE_THROW();\n> +\t}\n> Is this PG_RE_THROW() necessary?\n> IMO, PG_FINALLY will reproduce the PG_RE_THROW action if we get to the code block due to an error being thrown.\n\nThese are atavisms from the debugging stage. Fixed.\n> \n> 3.\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errmsg(\"unexpected extra results during COPY of table: %s\",\n> +\t\t\t\t\t\t\tPQerrorMessage(conn))));\n> \n> I found a similar message, like the following:\n> \n> \t\t\tpg_log_warning(\"unexpected extra results during COPY of table \\\"%s\\\"\",\n> \t\t\t\t\t\t tocEntryTag);\n> How about using the existing message style?\n\nThis style is intended for use in frontend utilities, not for contrib \nextensions, I think.\n> \n> 4.\n> I noticed some non-standard code comments[1].\n> I think it's better to comment like:\n> /*\n> * line 1\n> * line 2\n> */\n> \n> [1]-----------\n> +\t\t/* Finish COPY IN protocol. 
It is needed to do after successful copy or\n> +\t\t * after an error.\n> +\t\t */\n> \n> \n> +/*\n> + *\n> + * postgresExecForeignCopy\n> \n> +/*\n> + *\n> + * postgresBeginForeignCopy\nThanks, fixed.\nThe patch in attachment rebased on 107a2d4204.\n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 30 Dec 2020 12:16:07 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\r\n\r\nI had a general look at this extension feature, I think it's beneficial for some application scenarios of PostgreSQL. So I did 7 performance cases test on your patch(v13). The results are really good. As you can see below we can get 7-10 times improvement with this patch.\r\n\r\nPSA test_copy_from.sql shows my test cases detail(I didn't attach my data file since it's too big). \r\n\r\nBelow are the test results:\r\n'Test No' corresponds to the number(0 1...6) in attached test_copy_from.sql.\r\n%reg=(Patched-Unpatched)/Unpatched), Unit is millisecond.\r\n\r\n|Test No| Test Case |Patched(ms) | Unpatched(ms) |%reg |\r\n|-------|-----------------------------------------------------------------------------------------|-------------|---------------|-------|\r\n|0 |COPY FROM insertion into the partitioned table(parition is foreign table) | 102483.223 | 1083300.907 | -91% |\r\n|1 |COPY FROM insertion into the partitioned table(parition is foreign partition) | 104779.893 | 1207320.287 | -91% |\r\n|2 |COPY FROM insertion into the foreign table(without partition) | 100268.730 | 1077309.158 | -91% |\r\n|3 |COPY FROM insertion into the partitioned table(part of foreign partitions) | 104110.620 | 1134781.855 | -91% |\r\n|4 |COPY FROM insertion into the partitioned table with constraint(part of foreign partition)| 136356.201 | 1238539.603 | -89% |\r\n|5 |COPY FROM insertion into the foreign table with constraint(without 
partition) | 136818.262 | 1189921.742 | -89% |\r\n|6 |\\copy insertion into the partitioned table with constraint. | 140368.072 | 1242689.924 | -89% |\r\n\r\nIf there is any question on my tests, please feel free to ask.\r\n\r\nBest Regard,\r\nTang", "msg_date": "Mon, 11 Jan 2021 11:59:13 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\n\nUnfortunately, this no longer applies :-( I tried to apply this on top\nof c532d15ddd (from 2020/12/30) but even that has non-trivial conflicts.\n\nCan you send a rebased version?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 11 Jan 2021 19:16:45 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 1/11/21 4:59 PM, Tang, Haiying wrote:\n> Hi Andrey,\n> \n> I had a general look at this extension feature, I think it's beneficial for some application scenarios of PostgreSQL. So I did 7 performance cases test on your patch(v13). The results are really good. 
As you can see below we can get 7-10 times improvement with this patch.\n> \n> PSA test_copy_from.sql shows my test cases detail(I didn't attach my data file since it's too big).\n> \n> Below are the test results:\n> 'Test No' corresponds to the number(0 1...6) in attached test_copy_from.sql.\n> %reg=(Patched-Unpatched)/Unpatched), Unit is millisecond.\n> \n> |Test No| Test Case |Patched(ms) | Unpatched(ms) |%reg |\n> |-------|-----------------------------------------------------------------------------------------|-------------|---------------|-------|\n> |0 |COPY FROM insertion into the partitioned table(parition is foreign table) | 102483.223 | 1083300.907 | -91% |\n> |1 |COPY FROM insertion into the partitioned table(parition is foreign partition) | 104779.893 | 1207320.287 | -91% |\n> |2 |COPY FROM insertion into the foreign table(without partition) | 100268.730 | 1077309.158 | -91% |\n> |3 |COPY FROM insertion into the partitioned table(part of foreign partitions) | 104110.620 | 1134781.855 | -91% |\n> |4 |COPY FROM insertion into the partitioned table with constraint(part of foreign partition)| 136356.201 | 1238539.603 | -89% |\n> |5 |COPY FROM insertion into the foreign table with constraint(without partition) | 136818.262 | 1189921.742 | -89% |\n> |6 |\\copy insertion into the partitioned table with constraint. | 140368.072 | 1242689.924 | -89% |\n> \n> If there is any question on my tests, please feel free to ask.\n> \n> Best Regard,\n> Tang\nThank you for this work.\nSometimes before i suggested additional optimization [1] which can \nadditionally speed up COPY by 2-4 times. Maybe you can perform the \nbenchmark for this solution too?\n\n[1] \nhttps://www.postgresql.org/message-id/da7ed3f5-b596-2549-3710-4cc2a602ec17%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 12 Jan 2021 08:45:19 +0500", "msg_from": "\"Andrey V. 
Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 1/11/21 11:16 PM, Tomas Vondra wrote:\n> Hi Andrey,\n> \n> Unfortunately, this no longer applies :-( I tried to apply this on top\n> of c532d15ddd (from 2020/12/30) but even that has non-trivial conflicts.\n> \n> Can you send a rebased version?\n> \n> regards\n> \nApplied on 044aa9e70e.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Tue, 12 Jan 2021 09:13:05 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi Andrey,\r\n\r\n> Sometimes before i suggested additional optimization [1] which can \r\n> additionally speed up COPY by 2-4 times. Maybe you can perform the \r\n> benchmark for this solution too?\r\n\r\nSorry for the late reply, I just have time to take this test now.\r\nBut the patch no longer applies, I tried to apply on e42b3c3bd6(2021/1/26) but failed.\r\n\r\nCan you send a rebased version?\r\n\r\nRegards,\r\nTang\r\n\r\n\n\n", "msg_date": "Thu, 28 Jan 2021 02:40:32 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hello, Andrey-san,\r\n\r\n\r\nFrom: Tang, Haiying <tanghy.fnst@cn.fujitsu.com>\r\n> > Sometimes before i suggested additional optimization [1] which can\r\n> > additionally speed up COPY by 2-4 times. Maybe you can perform the\r\n> > benchmark for this solution too?\r\n...\r\n> But the patch no longer applies, I tried to apply on e42b3c3bd6(2021/1/26) but\r\n> failed.\r\n> \r\n> Can you send a rebased version?\r\n\r\nI think the basic part of this patch set is the following. 
The latter file unfortunately no longer applies to HEAD.\r\n\r\nv13-0001-Move-multi-insert-decision-logic-into-executor.patch\r\nv13_3-0002-Fast-COPY-FROM-into-the-foreign-or-sharded-table.patch\r\n\r\nPlus, as Tang-san said, I'm afraid the following files are older and don't apply.\r\n\r\nv9-0003-Add-separated-connections-into-the-postgres_fdw.patch\r\nv9-0004-Optimized-version-of-the-Fast-COPY-FROM-feature\r\n\r\nWhen do you think you can submit the rebased version of them? (IIUC at the off-list HighGo meeting, you're planning to post them late this week after the global snapshot patch.) 
Just in case you are not going to do them for the moment, can we rebase and/or further modify them so that they can be committed in PG 14?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n", "msg_date": "Tue, 2 Feb 2021 06:57:04 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2/2/21 11:57, tsunakawa.takay@fujitsu.com wrote:\n> Hello, Andrey-san,\n> \n> \n> From: Tang, Haiying <tanghy.fnst@cn.fujitsu.com>\n>>> Sometimes before i suggested additional optimization [1] which can\n>>> additionally speed up COPY by 2-4 times. Maybe you can perform the\n>>> benchmark for this solution too?\n> ...\n>> But the patch no longer applies, I tried to apply on e42b3c3bd6(2021/1/26) but\n>> failed.\n>>\n>> Can you send a rebased version?\n> When do you think you can submit the rebased version of them? (IIUC at the off-list HighGo meeting, you're planning to post them late this week after the global snapshot patch.) Just in case you are not going to do them for the moment, can we rebase and/or further modify them so that they can be committed in PG 14?\nOf course, you can rebase it.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 2 Feb 2021 13:28:27 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\r\n> Of course, you can rebase it.\r\n\r\nThank you. I might modify the basic part to incorporate my past proposal about improving the layering or modularity related to ri_usesMultiInsert. (But I may end up giving up due to lack of energy.)\r\n\r\nAlso, I might defer working on the extended part (v9 0003 and 0004) and further separate them in a different thread, if it seems to take longer.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Tue, 2 Feb 2021 09:13:29 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\r\n> From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\r\n> > Of course, you can rebase it.\r\n> \r\n> Thank you. I might modify the basic part to incorporate my past proposal\r\n> about improving the layering or modularity related to ri_usesMultiInsert. (But I\r\n> may end up giving up due to lack of energy.)\r\n\r\nRebased to HEAD with the following modifications. 
It passes make check in the top directory and contrib/postgres_fdw.\r\n\r\n(1)\r\nPlaced and ordered the three new FDW functions consistently among their documentation, declaration and definition.\r\n\r\n\r\n(2)\r\nCheck if BeginForeignCopy is not NULL before calling it, because the documentation says it's not mandatory.\r\n\r\n\r\n(3)\r\nChanged the function name ExecSetRelationUsesMultiInsert() to ExecMultiInsertAllowed() because it does *not* set anything but returns a boolean value to indicate whether the relation allows multi-insert. I was bothered by this function's interface and the use of ri_usesMultiInsert in ResultRelInfo. I still feel a bit uneasy about things like whether the function should really take the partition root (parent) argument, and whether it's a good design that ri_usesMultiInsert is used for the executor functions to determine which of Begin/EndForeignCopy() or Begin/EndForeignInsert() should be called. I'm fine with COPY using the executor, but it feels a bit uncomfortable for the executor functions to be aware of COPY.\r\n\r\n\r\nThat said, with the reviews from some people and good performance results, I think this can be ready for committer.\r\n\r\n\r\n> Also, I might defer working on the extended part (v9 0003 and 0004) and further\r\n> separate them in a different thread, if it seems to take longer.\r\n\r\nI reviewed them but haven't rebased them (it seems to take more labor.)\r\nAndrey-san, could you tell us:\r\n\r\n* Why is a separate FDW connection established for each COPY? To avoid using the same FDW connection for multiple foreign table partitions in a single COPY run?\r\n\r\n* In what kind of test did you get 2-4x performance gain? 
COPY into many foreign table partitions where the input rows are ordered randomly enough that many rows don't accumulate in the COPY buffer?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Tue, 9 Feb 2021 04:35:03 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2/9/21 9:35 AM, tsunakawa.takay@fujitsu.com wrote:\n> \tFrom: tsunakawa.takay@fujitsu.com <tsunakawa.takay@fujitsu.com>\n>> From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\n>> Also, I might defer working on the extended part (v9 0003 and 0004) and further\n>> separate them in a different thread, if it seems to take longer.\n> \n> I reviewed them but haven't rebased them (it seems to take more labor.)\n> Andrey-san, could you tell us:\n> \n> * Why is a separate FDW connection established for each COPY? To avoid using the same FDW connection for multiple foreign table partitions in a single COPY run?\nWith separate connection you can init a 'COPY FROM' session for each \nforeign partition just one time on partition initialization.\n> \n> * In what kind of test did you get 2-4x performance gain? COPY into many foreign table partitions where the input rows are ordered randomly enough that many rows don't accumulate in the COPY buffer?\nI used 'INSERT INTO .. SELECT * FROM generate_series(1, N)' to generate \ntest data and HASH partitioning to avoid skews.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 9 Feb 2021 10:57:44 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Andrey V. 
Lepikhov <a.lepikhov@postgrespro.ru>\r\n> On 2/9/21 9:35 AM, tsunakawa.takay@fujitsu.com wrote:\r\n> > * Why is a separate FDW connection established for each COPY? To avoid\r\n> using the same FDW connection for multiple foreign table partitions in a single\r\n> COPY run?\r\n> With separate connection you can init a 'COPY FROM' session for each\r\n> foreign partition just one time on partition initialization.\r\n> >\r\n> > * In what kind of test did you get 2-4x performance gain? COPY into many\r\n> foreign table partitions where the input rows are ordered randomly enough that\r\n> many rows don't accumulate in the COPY buffer?\r\n> I used 'INSERT INTO .. SELECT * FROM generate_series(1, N)' to generate\r\n> test data and HASH partitioning to avoid skews.\r\n\r\nI guess you used many hash partitions. Sadly, The current COPY implementation only accumulates either 1,000 rows or 64 KB of input data (very small!) before flushing all CopyMultiInsertBuffers. One CopyMultiInsertBuffer corresponds to one partition. Flushing a CopyMultiInsertBuffer calls ExecForeignCopy() once, which connects to a remote database, runs COPY FROM STDIN, and disconnects. Here, the flushing trigger (1,000 rows or 64 KB input data, whichever comes first) is so small that if there are many target partitions, the amount of data for each partition is small.\r\n\r\nLooking at the triggering threshold values, the description (of MAX_BUFFERED_TUPLES at least) seems to indicate that they take effect per CopyMultiInsertBuffer:\r\n\r\n\r\n/*\r\n * No more than this many tuples per CopyMultiInsertBuffer\r\n *\r\n * Caution: Don't make this too big, as we could end up with this many\r\n * CopyMultiInsertBuffer items stored in CopyMultiInsertInfo's\r\n * multiInsertBuffers list. 
Increasing this can cause quadratic growth in\r\n* memory requirements during copies into partitioned tables with a large\r\n * number of partitions.\r\n */\r\n#define MAX_BUFFERED_TUPLES 1000\r\n\r\n/*\r\n * Flush buffers if there are >= this many bytes, as counted by the input\r\n * size, of tuples stored.\r\n */\r\n#define MAX_BUFFERED_BYTES 65535\r\n\r\n\r\nBut these threshold take effect across all CopyMultiInsertBuffers:\r\n\r\n\r\n/*\r\n * Returns true if the buffers are full\r\n */\r\nstatic inline bool\r\nCopyMultiInsertInfoIsFull(CopyMultiInsertInfo *miinfo)\r\n{\r\n if (miinfo->bufferedTuples >= MAX_BUFFERED_TUPLES ||\r\n miinfo->bufferedBytes >= MAX_BUFFERED_BYTES)\r\n return true;\r\n return false;\r\n}\r\n\r\n\r\nSo, I think the direction to take is to allow more data to accumulate before flushing. I'm not very excited about the way 0003 and 0004 establishes a new connection for each partition; it adds flags to many places, and postgresfdw_xact_callback() has to be aware of COPY-specific processing. Plus, we have to take care of the message difference you found in the regression test.\r\n\r\nWhy don't we focus on committing the basic part and addressing the extended part (0003 and 0004) separately later? As Tang-san and you showed, the basic part already demonstrated impressive improvement. If there's no objection, I'd like to make this ready for committer in a few days.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Tue, 9 Feb 2021 07:47:05 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2/9/21 12:47 PM, tsunakawa.takay@fujitsu.com wrote:\n> From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\n> I guess you used many hash partitions. Sadly, The current COPY implementation only accumulates either 1,000 rows or 64 KB of input data (very small!) 
before flushing all CopyMultiInsertBuffers. One CopyMultiInsertBuffer corresponds to one partition. Flushing a CopyMultiInsertBuffer calls ExecForeignCopy() once, which connects to a remote database, runs COPY FROM STDIN, and disconnects. Here, the flushing trigger (1,000 rows or 64 KB input data, whichever comes first) is so small that if there are many target partitions, the amount of data for each partition is small.\nI tried to use 1E4 - 1E8 rows in a tuple buffer. But the results weren't \nimpressive.\nWe can use one more GUC instead of a precompiled constant.\n\n> Why don't we focus on committing the basic part and addressing the extended part (0003 and 0004) separately later?\nI focused only on the 0001 and 0002 patches.\n> As Tang-san and you showed, the basic part already demonstrated impressive improvement. If there's no objection, I'd like to make this ready for committer in a few days.\nGood.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 9 Feb 2021 13:22:33 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\r\n> I tried to use 1E4 - 1E8 rows in a tuple buffer. But the results weren't\r\n> impressive.\r\n\r\nI guess that's because the 64 KB threshold came first.\r\n\r\n\r\n> We can use one more GUC instead of a precompiled constant.\r\n\r\nYes, agreed.\r\n\r\n\r\n> > Why don't we focus on committing the basic part and addressing the\r\n> extended part (0003 and 0004) separately later?\r\n> I focused only on the 0001 and 0002 patches.\r\n> > As Tang-san and you showed, the basic part already demonstrated\r\n> impressive improvement. 
If there's no objection, I'd like to make this ready for\r\n> committer in a few days.\r\n> Good.\r\n\r\nGlad to hear that.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Tue, 9 Feb 2021 08:30:37 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>\r\n> On 2/9/21 12:47 PM, tsunakawa.takay@fujitsu.com wrote:\r\n> > As Tang-san and you showed, the basic part already demonstrated\r\n> impressive improvement. If there's no objection, I'd like to make this ready for\r\n> committer in a few days.\r\n> Good.\r\n\r\nI've marked this as ready for committer. Good luck.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n", "msg_date": "Fri, 12 Feb 2021 07:43:00 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Tue, Feb 09, 2021 at 04:35:03AM +0000, tsunakawa.takay@fujitsu.com wrote:\n> Rebased to HEAD with the following modifications. 
It passes make check in the top directory and contrib/postgres_fdw.\n> That said, with the reviews from some people and good performance results, I think this can be ready for committer.\n\nThis is crashing during fdw check.\nhttp://cfbot.cputube.org/andrey-lepikhov.html\n\nMaybe it's related to this patch:\n|commit 6214e2b2280462cbc3aa1986e350e167651b3905\n| Fix permission checks on constraint violation errors on partitions.\n| Security: CVE-2021-3393\n\nTRAP: FailedAssertion(\"n >= 0 && n < list->length\", File: \"../../src/include/nodes/pg_list.h\", Line: 259, PID: 19780)\n\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51\n#1 0x00007fd33a557801 in __GI_abort () at abort.c:79\n#2 0x000055f7f53bbc88 in ExceptionalCondition (conditionName=conditionName@entry=0x7fd33b81bc40 \"n >= 0 && n < list->length\", errorType=errorType@entry=0x7fd33b81b698 \"FailedAssertion\", \n fileName=fileName@entry=0x7fd33b81be70 \"../../src/include/nodes/pg_list.h\", lineNumber=lineNumber@entry=259) at assert.c:69\n#3 0x00007fd33b816b54 in list_nth_cell (n=<optimized out>, list=<optimized out>) at ../../src/include/nodes/pg_list.h:259\n#4 list_nth (n=<optimized out>, list=<optimized out>) at ../../src/include/nodes/pg_list.h:281\n#5 exec_rt_fetch (estate=<optimized out>, rti=<optimized out>) at ../../src/include/executor/executor.h:558\n#6 postgresBeginForeignCopy (mtstate=<optimized out>, resultRelInfo=<optimized out>) at postgres_fdw.c:2208\n#7 0x000055f7f5114bb4 in ExecInitRoutingInfo (mtstate=mtstate@entry=0x55f7f710a508, estate=estate@entry=0x55f7f71a7d50, proute=proute@entry=0x55f7f710a720, dispatch=dispatch@entry=0x55f7f710a778, \n partRelInfo=partRelInfo@entry=0x55f7f710eb20, partidx=partidx@entry=0) at execPartition.c:1004\n#8 0x000055f7f511618d in ExecInitPartitionInfo (partidx=0, rootResultRelInfo=0x55f7f710a278, dispatch=0x55f7f710a778, proute=0x55f7f710a720, estate=0x55f7f71a7d50, mtstate=0x55f7f710a508) at execPartition.c:742\n#9 
ExecFindPartition () at execPartition.c:400\n#10 0x000055f7f50a2718 in CopyFrom () at copyfrom.c:857\n#11 0x000055f7f50a1b06 in DoCopy () at copy.c:299\n\n(gdb) up\n#7 0x000055f7f5114bb4 in ExecInitRoutingInfo (mtstate=mtstate@entry=0x55f7f710a508, estate=estate@entry=0x55f7f71a7d50, proute=proute@entry=0x55f7f710a720, dispatch=dispatch@entry=0x55f7f710a778, \n partRelInfo=partRelInfo@entry=0x55f7f710eb20, partidx=partidx@entry=0) at execPartition.c:1004\n1004 partRelInfo->ri_FdwRoutine->BeginForeignCopy(mtstate, partRelInfo);\n(gdb) p partRelInfo->ri_RangeTableIndex\n$7 = 0\n(gdb) p *estate->es_range_table\n$9 = {type = T_List, length = 1, max_length = 5, elements = 0x55f7f717a2c0, initial_elements = 0x55f7f717a2c0}\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 14 Feb 2021 15:03:38 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Justin Pryzby <pryzby@telsasoft.com>\n> This is crashing during fdw check.\n> http://cfbot.cputube.org/andrey-lepikhov.html\n> \n> Maybe it's related to this patch:\n> |commit 6214e2b2280462cbc3aa1986e350e167651b3905\n> | Fix permission checks on constraint violation errors on partitions.\n> | Security: CVE-2021-3393\n\nThank you for your kind detailed investigation. 
The rebased version is attached.\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Mon, 15 Feb 2021 04:54:00 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Tsunakawa-san, Andrey,\n\nOn Mon, Feb 15, 2021 at 1:54 PM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Justin Pryzby <pryzby@telsasoft.com>\n> > This is crashing during fdw check.\n> > http://cfbot.cputube.org/andrey-lepikhov.html\n> >\n> > Maybe it's related to this patch:\n> > |commit 6214e2b2280462cbc3aa1986e350e167651b3905\n> > | Fix permission checks on constraint violation errors on partitions.\n> > | Security: CVE-2021-3393\n>\n> Thank you for your kind detailed investigation. The rebased version is attached.\n\nThanks for updating the patch.\n\nThe commit message says this:\n\n Move that decision logic into InitResultRelInfo which now sets a new\n boolean field ri_usesMultiInsert of ResultRelInfo when a target\n relation is first initialized. That prevents repeated computation\n of the same information in some cases, especially for partitions,\n and the new arrangement results in slightly more readability.\n Enum CopyInsertMethod removed. This logic implements by ri_usesMultiInsert\n field of the ResultRelInfo structure.\n\nHowever, it is no longer InitResultRelInfo() that sets\nri_usesMultiInsert. 
Doing that is now left for concerned functions\nwho set it when they have enough information to do that correctly.\nMaybe update the message to make that clear to interested readers.\n\n+ /*\n+ * Use multi-insert mode if the condition checking passes for the\n+ * parent and its child.\n+ */\n+ leaf_part_rri->ri_usesMultiInsert =\n+ ExecMultiInsertAllowed(leaf_part_rri, rootResultRelInfo);\n\nThink I have mentioned upthread that this looks better as:\n\nif (rootResultRelInfo->ri_usesMultiInsert)\n leaf_part_rri->ri_usesMultiInsert = ExecMultiInsertAllowed(leaf_part_rri);\n\nThis keeps the logic confined to ExecInitPartitionInfo() where it\nbelongs. No point in burdening other callers of\nExecMultiInsertAllowed() in deciding whether or not it should pass a\nvalid value for the 2nd parameter.\n\n+static void\n+postgresBeginForeignCopy(ModifyTableState *mtstate,\n+ ResultRelInfo *resultRelInfo)\n+{\n...\n+ if (resultRelInfo->ri_RangeTableIndex == 0)\n+ {\n+ ResultRelInfo *rootResultRelInfo = resultRelInfo->ri_RootResultRelInfo;\n+\n+ rte = exec_rt_fetch(rootResultRelInfo->ri_RangeTableIndex, estate);\n\nIt's better to add an Assert(rootResultRelInfo != NULL) here.\nApparently, there are cases where ri_RangeTableIndex == 0 without\nri_RootResultRelInfo being set. The Assert will ensure that\nBeginForeignCopy() is not mistakenly called on such ResultRelInfos.\n\n+/*\n+ * Deparse COPY FROM into given buf.\n+ * We need to use list of parameters at each query.\n+ */\n+void\n+deparseCopyFromSql(StringInfo buf, Relation rel)\n+{\n+ appendStringInfoString(buf, \"COPY \");\n+ deparseRelation(buf, rel);\n+ (void) deparseRelColumnList(buf, rel, true);\n+\n+ appendStringInfoString(buf, \" FROM STDIN \");\n+}\n\nI can't parse what the function's comment says about \"using list of\nparameters\". Maybe it means to say \"list of columns\" specified in the\nCOPY FROM statement. 
How about writing this as:\n\n/*\n * Deparse remote COPY FROM statement\n *\n * Note that this explicitly specifies the list of COPY's target columns\n * to account for the fact that the remote table's columns may not match\n * exactly with the columns declared in the local definition.\n */\n\nI'm hoping that I'm interpreting the original note correctly. Andrey?\n\n+ <para>\n+ <literal>mtstate</literal> is the overall state of the\n+ <structname>ModifyTable</structname> plan node being executed;\nglobal data about\n+ the plan and execution state is available via this structure.\n...\n+typedef void (*BeginForeignCopy_function) (ModifyTableState *mtstate,\n+ ResultRelInfo *rinfo);\n\nMaybe a bit late realizing this, but why does BeginForeignCopy()\naccept a ModifyTableState pointer whereas maybe just an EState pointer\nwill do? I can't imagine why an FDW would want to look at the\nModifyTableState. Case in point, I see that\npostgresBeginForeignCopy() only uses the EState from the\nModifyTableState passed to it. I think the ResultRelInfo that's being\npassed to the Copy APIs contains most of the necessary information.\nAlso, EndForeignCopy() seems fine with just receiving the EState.\n\n+ TupleDesc tupDesc; /* COPY TO will be used for manual tuple copying\n+ * into the destination */\n...\n@@ -382,19 +393,24 @@ EndCopy(CopyToState cstate)\n CopyToState\n BeginCopyTo(ParseState *pstate,\n Relation rel,\n+ TupleDesc srcTupDesc,\n\nI think that either the commentary around tupDesc/srcTupDesc needs to\nbe improved or we should really find a way to do this without\nmaintaining TupleDesc separately from the CopyState.rel. 
IIUC, this\nchange is merely to allow postgres_fdw's ExecForeignCopy() to use\nCopyOneRowTo() which needs to be passed a valid CopyState.\npostgresBeginForeignCopy() initializes one by calling BeginCopyTo(),\nbut it can't just pass the target foreign Relation to it, because\ngeneric BeginCopyTo() has this:\n\n if (rel != NULL && rel->rd_rel->relkind != RELKIND_RELATION)\n {\n ...\n else if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n ereport(ERROR,\n (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n errmsg(\"cannot copy from foreign table \\\"%s\\\"\",\n RelationGetRelationName(rel)),\n errhint(\"Try the COPY (SELECT ...) TO variant.\")));\n\nIf the intention is to only prevent this error, maybe the condition\nabove could be changed as this:\n\n /*\n * Check whether we support copying data out of the specified relation,\n * unless the caller also passed a non-NULL data_dest_cb, in which case,\n * the callback will take care of it\n */\n if (rel != NULL && rel->rd_rel->relkind != RELKIND_RELATION &&\n data_dest_cb == NULL)\n\nI just checked that this works or at least doesn't break any newly added tests.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:31:11 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> Think I have mentioned upthread that this looks better as:\r\n> \r\n> if (rootResultRelInfo->ri_usesMultiInsert)\r\n> leaf_part_rri->ri_usesMultiInsert = ExecMultiInsertAllowed(leaf_part_rri);\r\n> \r\n> This keeps the logic confined to ExecInitPartitionInfo() where it\r\n> belongs. No point in burdening other callers of\r\n> ExecMultiInsertAllowed() in deciding whether or not it should pass a\r\n> valid value for the 2nd parameter.\r\n\r\nOh, that's a good idea. 
(Why didn't I think of such a simple idea?)\r\n\r\n\r\n\r\n> Maybe a bit late realizing this, but why does BeginForeignCopy()\r\n> accept a ModifyTableState pointer whereas maybe just an EState pointer\r\n> will do? I can't imagine why an FDW would want to look at the\r\n> ModifyTableState. Case in point, I see that\r\n> postgresBeginForeignCopy() only uses the EState from the\r\n> ModifyTableState passed to it. I think the ResultRelInfo that's being\r\n> passed to the Copy APIs contains most of the necessary information.\r\n\r\nYou're right. COPY is not under the control of a ModifyTable plan, so it's strange to pass ModifyTableState.\r\n\r\n\r\n> Also, EndForeignCopy() seems fine with just receiving the EState.\r\n\r\nI think this can have the ResultRelInfo like EndForeignInsert() and EndForeignModify() to correspond to the Begin function: \"begin/end COPYing into this relation.\"\r\n\r\n\r\n> /*\r\n> * Check whether we support copying data out of the specified relation,\r\n> * unless the caller also passed a non-NULL data_dest_cb, in which case,\r\n> * the callback will take care of it\r\n> */\r\n> if (rel != NULL && rel->rd_rel->relkind != RELKIND_RELATION &&\r\n> data_dest_cb == NULL)\r\n> \r\n> I just checked that this works or at least doesn't break any newly added tests.\r\n\r\nGood idea, too. The code has become more readable.\r\n\r\nThank you a lot. Your other comments that are not mentioned above are also reflected. 
The attached patch passes the postgres_fdw regression test.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa", "msg_date": "Tue, 16 Feb 2021 05:39:57 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 2/15/21 1:31 PM, Amit Langote wrote:\n> Tsunakawa-san, Andrey,\n> +static void\n> +postgresBeginForeignCopy(ModifyTableState *mtstate,\n> + ResultRelInfo *resultRelInfo)\n> +{\n> ...\n> + if (resultRelInfo->ri_RangeTableIndex == 0)\n> + {\n> + ResultRelInfo *rootResultRelInfo = resultRelInfo->ri_RootResultRelInfo;\n> +\n> + rte = exec_rt_fetch(rootResultRelInfo->ri_RangeTableIndex, estate);\n> \n> It's better to add an Assert(rootResultRelInfo != NULL) here.\n> Apparently, there are cases where ri_RangeTableIndex == 0 without\n> ri_RootResultRelInfo being set. The Assert will ensure that\n> BeginForeignCopy() is not mistakenly called on such ResultRelInfos.\n\n+1\n\n> I can't parse what the function's comment says about \"using list of\n> parameters\". Maybe it means to say \"list of columns\" specified in the\n> COPY FROM statement. How about writing this as:\n> \n> /*\n> * Deparse remote COPY FROM statement\n> *\n> * Note that this explicitly specifies the list of COPY's target columns\n> * to account for the fact that the remote table's columns may not match\n> * exactly with the columns declared in the local definition.\n> */\n> \n> I'm hoping that I'm interpreting the original note correctly. 
Andrey?\n\nYes, this is a good option.\n\n> \n> + <para>\n> + <literal>mtstate</literal> is the overall state of the\n> + <structname>ModifyTable</structname> plan node being executed;\n> global data about\n> + the plan and execution state is available via this structure.\n> ...\n> +typedef void (*BeginForeignCopy_function) (ModifyTableState *mtstate,\n> + ResultRelInfo *rinfo);\n> \n> Maybe a bit late realizing this, but why does BeginForeignCopy()\n> accept a ModifyTableState pointer whereas maybe just an EState pointer\n> will do? I can't imagine why an FDW would want to look at the\n> ModifyTableState. Case in point, I see that\n> postgresBeginForeignCopy() only uses the EState from the\n> ModifyTableState passed to it. I think the ResultRelInfo that's being\n> passed to the Copy APIs contains most of the necessary information.\n> Also, EndForeignCopy() seems fine with just receiving the EState.\n\n+1\n\n> If the intention is to only prevent this error, maybe the condition\n> above could be changed as this:\n> \n> /*\n> * Check whether we support copying data out of the specified relation,\n> * unless the caller also passed a non-NULL data_dest_cb, in which case,\n> * the callback will take care of it\n> */\n> if (rel != NULL && rel->rd_rel->relkind != RELKIND_RELATION &&\n> data_dest_cb == NULL)\n\nAgreed. This is an atavism. In the first versions, I did not use the \ndata_dest_cb routine. But now this is a redundant parameter.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 16 Feb 2021 13:23:34 +0500", "msg_from": "\"Andrey V. 
Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Find attached some language fixes.\n\n|/* Do this to ensure we've pumped libpq back to idle state */\n\nI don't know why you mean by \"pumped\"?\n\nThe CopySendEndOfRow \"case COPY_CALLBACK:\" should have a \"break;\"\n\nThis touches some of the same parts as my \"bulk insert\" patch:\nhttps://commitfest.postgresql.org/32/2553/\n\n-- \nJustin", "msg_date": "Wed, 3 Mar 2021 14:27:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Justin Pryzby <pryzby@telsasoft.com>\n> Find attached some language fixes.\n\nThanks a lot! (I wish there will be some tool like \"pgEnglish\" that corrects English in code comments and docs.)\n\n\n> |/* Do this to ensure we've pumped libpq back to idle state */\n> \n> I don't know why you mean by \"pumped\"?\n\nI changed it to \"have not gotten extra results\" to match the error message.\n\n\n> The CopySendEndOfRow \"case COPY_CALLBACK:\" should have a \"break;\"\n\nAdded.\n\n> This touches some of the same parts as my \"bulk insert\" patch:\n> https://commitfest.postgresql.org/32/2553/\n\nMy colleague will be reviewing it.\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Thu, 4 Mar 2021 02:24:24 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\n\nThis feature enables bulk COPY into foreign table in the case of\nmulti inserts is possible\n\n'is possible' -> 'if possible'\n\nFDWAPI was extended by next routines:\n\nnext routines -> the following routines\n\nFor postgresExecForeignCopy():\n\n+ if ((!OK && PQresultStatus(res) != PGRES_FATAL_ERROR) 
||\n\nIs PGRES_FATAL_ERROR handled somewhere else ? I don't seem to find that in\nthe patch.\n\nCheers\n\nOn Wed, Mar 3, 2021 at 6:24 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> From: Justin Pryzby <pryzby@telsasoft.com>\n> > Find attached some language fixes.\n>\n> Thanks a lot! (I wish there will be some tool like \"pgEnglish\" that\n> corrects English in code comments and docs.)\n>\n>\n> > |/* Do this to ensure we've pumped libpq back to idle state */\n> >\n> > I don't know why you mean by \"pumped\"?\n>\n> I changed it to \"have not gotten extra results\" to match the error message.\n>\n>\n> > The CopySendEndOfRow \"case COPY_CALLBACK:\" should have a \"break;\"\n>\n> Added.\n>\n> > This touches some of the same parts as my \"bulk insert\" patch:\n> > https://commitfest.postgresql.org/32/2553/\n>\n> My colleague will be reviewing it.\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n", "msg_date": "Wed, 3 Mar 2021 18:51:39 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Zhihong Yu <zyu@yugabyte.com> \r\n> This feature enables bulk COPY into foreign table in the case of\r\n> multi inserts is possible\r\n> \r\n> 'is possible' -> 'if possible'\r\n> \r\n> FDWAPI was extended by next routines:\r\n> \r\n> next routines -> the following routines\r\n\r\nThank you, fixed slightly differently. (I feel the need for pgEnglish again.)\r\n\r\n\r\n> +       if ((!OK && PQresultStatus(res) != PGRES_FATAL_ERROR) ||\r\n> \r\n> Is PGRES_FATAL_ERROR handled somewhere else ? I don't seem to find that in the patch.\r\n\r\nGood catch.
ok doesn't need to be consulted here, because failure during row transmission causes PQputCopyEnd() to receive non-NULL for its second argument, which in turn makes PQgetResult() return non-COMMAND_OK.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Thu, 4 Mar 2021 07:39:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Thu, Mar 4, 2021 at 12:40 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> From: Zhihong Yu <zyu@yugabyte.com>\n> > This feature enables bulk COPY into foreign table in the case of\n> > multi inserts is possible\n> >\n> > 'is possible' -> 'if possible'\n> >\n> > FDWAPI was extended by next routines:\n> >\n> > next routines -> the following routines\n>\n> Thank you, fixed slightly differently. (I feel the need for pgEnglish\n> again.)\n>\n>\n> > +       if ((!OK && PQresultStatus(res) != PGRES_FATAL_ERROR) ||\n> >\n> > Is PGRES_FATAL_ERROR handled somewhere else ? I don't seem to find that\n> in the patch.\n>\n> Good catch. ok doesn't need to be consulted here, because failure during\n> row transmission causes PQputCopyEnd() to receive non-NULL for its second\n> argument, which in turn makes PQgetResult() return non-COMMAND_OK.\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\nThis patch set no longer applies\nhttp://cfbot.cputube.org/patch_32_2601.log\n\nCan we get a rebase?\n\nI am marking the patch \"Waiting on Author\"\n\n-- \nIbrar Ahmed\n", "msg_date": "Thu, 4 Mar 2021 16:02:26 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "I think this change to the regression tests is suspicious:\n\n-CONTEXT: remote SQL command: INSERT INTO public.loc2(f1, f2) VALUES ($1, $2)\n-COPY rem2, line 1: \"-1 xyzzy\"\n+CONTEXT: COPY loc2, line 1: \"-1 xyzzy\"\n+remote SQL command: COPY public.loc2(f1, f2) FROM STDIN \n+COPY rem2, line 2\n\nI think it shouldn't say \"COPY rem2, line 2\" but rather a remote version of the\nsame:\n|COPY loc2, line 1: \"-1 xyzzy\"\n\nI have rebased this on my side over yesterday's libpq changes - I'll send it if\nyou want, but it's probably just as easy if you do it.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 4 Mar 2021 14:39:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Justin Pryzby <pryzby@telsasoft.com>\n> I think this change to the regression tests is suspicious:\n> \n> -CONTEXT: remote SQL command: INSERT INTO public.loc2(f1, f2) VALUES\n> ($1, $2)\n> -COPY rem2, line 1: \"-1 xyzzy\"\n> +CONTEXT: COPY loc2, line 1: \"-1 xyzzy\"\n> +remote SQL command: COPY public.loc2(f1, f2) FROM STDIN\n> 
+COPY rem2, line 2\n> \n> I think it shouldn't say \"COPY rem2, line 2\" but rather a remote version of the\n> same:\n> |COPY loc2, line 1: \"-1 xyzzy\"\n\nNo, the output is OK. The remote message is included as the first line of the CONTEXT message field. The last line of the CONTEXT field is something that was added by the local COPY command. (Anyway, useful enough information is included in the message -- the constraint violation and the data that caused it.)\n\n\n> I have rebased this on my side over yesterday's libpq changes - I'll send it if\n> you want, but it's probably just as easy if you do it.\n\nI've managed to rebased it, although it took unexpectedly long. The patch is attached. It passes make check against core and postgres_fdw. I'll turn the CF status back to ready for committer shortly.\n\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Fri, 5 Mar 2021 16:54:17 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Justin Pryzby <pryzby@telsasoft.com>\n> Could you rebase again and send an updated patch ?\n> I could do it if you want.\n\nRebased and attached. Fortunately, there was no rebase conflict this time. make check passed for PG core and postgres_fdw.\n\n\t\nRegards\nTakayuki Tsunakawa", "msg_date": "Fri, 12 Mar 2021 05:54:29 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On 5/3/21 21:54, tsunakawa.takay@fujitsu.com wrote:\n> I've managed to rebased it, although it took unexpectedly long. The patch is attached. It passes make check against core and postgres_fdw. I'll turn the CF status back to ready for committer shortly.\n\nMacros _() at the postgresExecForeignCopy routine:\nif (PQputCopyEnd(conn, OK ? 
NULL : _(\"canceled by server\")) <= 0)\n\nuses gettext. Under linux it is compiled ok, because (as i understood) \nuses standard implementation of gettext:\nobjdump -t contrib/postgres_fdw/postgres_fdw.so | grep 'gettext'\ngettext@@GLIBC_2.2.5\n\nbut in MacOS (and maybe somewhere else) we need to explicitly link \nlibintl library in the Makefile:\nSHLIB_LINK += $(filter -lintl, $(LIBS)\n\nAlso, we may not use gettext at all in this part of the code.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 22 Mar 2021 23:33:23 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\n> Macros _() at the postgresExecForeignCopy routine:\n> if (PQputCopyEnd(conn, OK ? NULL : _(\"canceled by server\")) <= 0)\n> \n> uses gettext. Under linux it is compiled ok, because (as i understood)\n> uses standard implementation of gettext:\n> objdump -t contrib/postgres_fdw/postgres_fdw.so | grep 'gettext'\n> gettext@@GLIBC_2.2.5\n> \n> but in MacOS (and maybe somewhere else) we need to explicitly link\n> libintl library in the Makefile:\n> SHLIB_LINK += $(filter -lintl, $(LIBS)\n> \n> Also, we may not use gettext at all in this part of the code.\n\nI'm afraid so, because no extension in contrib/ has po/ directory. I just removed _() and rebased the patch on HEAD.\n\n\n\tRegards\nTakayuki \tTsunakawa", "msg_date": "Tue, 23 Mar 2021 02:01:56 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\nIn the description:\n\nwith data_dest_cb callback. 
It is used for send text representation of a\ntuple to a custom destination.\n\nsend text -> sending text\n\n    struct PgFdwModifyState *aux_fmstate;   /* foreign-insert state, if\n                                             * created */\n+   CopyToState cstate; /* foreign COPY state, if used */\n\nSince foreign COPY is optional, should cstate be a pointer ? That would be\nin line with aux_fmstate.\n\nCheers\n\nOn Mon, Mar 22, 2021 at 7:02 PM tsunakawa.takay@fujitsu.com <\ntsunakawa.takay@fujitsu.com> wrote:\n\n> From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\n> > Macros _() at the postgresExecForeignCopy routine:\n> > if (PQputCopyEnd(conn, OK ? NULL : _(\"canceled by server\")) <= 0)\n> >\n> > uses gettext. Under linux it is compiled ok, because (as i understood)\n> > uses standard implementation of gettext:\n> > objdump -t contrib/postgres_fdw/postgres_fdw.so | grep 'gettext'\n> > gettext@@GLIBC_2.2.5\n> >\n> > but in MacOS (and maybe somewhere else) we need to explicitly link\n> > libintl library in the Makefile:\n> > SHLIB_LINK += $(filter -lintl, $(LIBS)\n> >\n> > Also, we may not use gettext at all in this part of the code.\n>\n> I'm afraid so, because no extension in contrib/ has po/ directory.  I just\n> removed _() and rebased the patch on HEAD.\n>\n>\n> Regards\n> Takayuki Tsunakawa\n>\n>\n>\n
", "msg_date": "Mon, 22 Mar 2021 20:18:56 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Mon, Mar 22, 2021 at 08:18:56PM -0700, Zhihong Yu wrote:\n> with data_dest_cb callback. It is used for send text representation of a\n> tuple to a custom destination.\n> \n> send text -> sending text\n\nI would say \"It is used to send the text representation ...\"\n\n>     struct PgFdwModifyState *aux_fmstate;   /* foreign-insert state, if\n>                                              * created */\n> +   CopyToState cstate; /* foreign COPY state, if used */\n> \n> Since foreign COPY is optional, should cstate be a pointer ?
That would be\n> in line with aux_fmstate.\n\nIt's actually a pointer:\nsrc/include/commands/copy.h:typedef struct CopyToStateData *CopyToState;\n\nThere's many data structures like this, where a structure is typedefed with a\n\"Data\" suffix and the pointer is typedefed without the \"Data\"\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Mar 2021 22:23:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "From: Justin Pryzby <pryzby@telsasoft.com>\n> On Mon, Mar 22, 2021 at 08:18:56PM -0700, Zhihong Yu wrote:\n> > with data_dest_cb callback. It is used for send text representation of a\n> > tuple to a custom destination.\n> >\n> > send text -> sending text\n> \n> I would say \"It is used to send the text representation ...\"\n\nI took Justin-san's suggestion. (It feels like I'm in a junior English class...)\n\n\n> It's actually a pointer:\n> src/include/commands/copy.h:typedef struct CopyToStateData *CopyToState;\n> \n> There's many data structures like this, where a structure is typedefed with a\n> \"Data\" suffix and the pointer is typedefed without the \"Data\"\n\nYes. 
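[Editor's note: to make the typedef convention Justin describes concrete, here is a minimal compilable illustration; the struct name and members are invented for the example.]

```c
#include <assert.h>
#include <stdlib.h>

/* The struct itself gets a "Data" suffix ... */
typedef struct DemoCopyStateData
{
	int			processed;		/* tuples handled so far */
} DemoCopyStateData;

/* ... while the bare name is typedef'ed as a pointer to it. */
typedef DemoCopyStateData *DemoCopyState;

static DemoCopyState
demo_begin_copy(void)
{
	/* allocate the Data struct; callers pass the pointer typedef around */
	DemoCopyState cstate = calloc(1, sizeof(DemoCopyStateData));

	return cstate;
}
```

So a member declared as `CopyToState cstate;` is already a pointer, matching `aux_fmstate`, even though no `*` appears in the declaration.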
Thank you for good explanation, Justin-san.\n\n\n\n\tRegards\nTakayuki \tTsunakawa", "msg_date": "Tue, 23 Mar 2021 05:05:48 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "I rebased this patch to resolve a trivial 1 line conflict from c5b7ba4e6.\n\n-- \nJustin", "msg_date": "Thu, 8 Apr 2021 19:49:56 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Thu, Apr 8, 2021 at 5:49 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I rebased this patch to resolve a trivial 1 line conflict from c5b7ba4e6.\n>\n> --\n> Justin\n>\n\nHi,\nIn src/backend/commands/copyfrom.c :\n\n+ if (resultRelInfo->ri_RelationDesc->rd_rel->relkind ==\nRELKIND_FOREIGN_TABLE)\n\nThere are a few steps of indirection. Adding assertion before the if\nstatement on resultRelInfo->ri_RelationDesc, etc would help catch potential\ninvalid pointer.\n\n+CopyToStart(CopyToState cstate)\n...\n+CopyToFinish(CopyToState cstate)\n\nSince 'copy to' is the action, it would be easier to read the method names\nif they're called StartCopyTo, FinishCopyTo, respectively.\nThat way, the method names would be consistent with existing ones, such as:\n extern uint64 DoCopyTo(CopyToState cstate);\n\n+ * If a partition's root parent isn't allowed to use it, neither is the\n\nIn the above sentence, 'it' refers to multi insert. 
It would be more\nreadable to explicitly mention 'multi insert' instead of 'it'\n\nCheers", "msg_date": "Thu, 8 Apr 2021 18:42:40 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Fri, Apr 9, 2021 at 9:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> I rebased this patch to resolve a trivial 1 line conflict from c5b7ba4e6.\n>\n> --\n> Justin\n\n\n", "msg_date": "Fri, 9 Apr 2021 12:50:28 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "On Fri, Apr 9, 2021 at 9:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I rebased this patch to resolve a trivial 1 line conflict from c5b7ba4e6.\n\nThanks for rebasing!\n\nActually, I've started reviewing this, but I couldn't finish my\nreview. 
My apologies for not having much time on this. I'll continue\nto work on it for PG15.\n\nSorry for the empty email.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 9 Apr 2021 12:53:55 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [POC] Fast COPY FROM command for the table with foreign\n partitions" }, { "msg_contents": "Hi,\nWe still have slow 'COPY FROM' operation for foreign tables in current \nmaster.\nNow we have a foreign batch insert operation And I tried to rewrite the \npatch [1] with this machinery.\n\nThe patch (see in attachment) smaller than [1] and no changes required \nin FDW API.\n\nBenchmarking\n============\nI used two data sets: with a number of 1E6 and 1E7 tuples. As a foreign \nserver emulation I used loopback FDW links.\n\nTest table:\nCREATE TABLE test(a int, payload varchar(80));\n\nExecution time of COPY FROM into single foreign table:\nversion | 1E6 tuples | 1E7 tuples |\nmaster: | 64s | 775s |\nPatch [1]: | 5s | 50s |\nCurrent: | 4s | 42s |\nExecution time of the COPY operation into a plane table is 0.8s for 1E6 \ntuples and 8s for 1E7 tuples.\n\nExecution time of COPY FROM into the table partitioned by three foreign \npartitions:\nversion | 1E6 tuples | 1E7 tuples |\nmaster: | 85s | 900s |\nPatch [1]: | 10s | 100s |\nCurrent: | 3.5s | 34s |\n\nBut the bulk insert execution time in current implementation strongly \ndepends on MAX_BUFFERED_TUPLES/BYTES value and in my experiments was \nreduced to 50s.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/3d0909dc-3691-a576-208a-90986e55489f%40postgrespro.ru\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 4 Jun 2021 13:26:29 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Fast COPY FROM based on batch insert" }, { "msg_contents": "From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\r\n> We still have slow 'COPY FROM' operation for foreign 
tables in current master.\r\n> Now we have a foreign batch insert operation And I tried to rewrite the patch [1]\r\n> with this machinery.\r\n\r\nI haven't looked at the patch, but nice performance.\r\n\r\nHowever, I see the following problems. What do you think about them?\r\n\r\n1)\r\nNo wonder why the user would think like \"Why are INSERTs run on the remote server? I ran COPY.\"\r\n\r\n\r\n2)\r\nWithout the FDW API for COPY, other FDWs won't get a chance to optimize for bulk data loading. For example, oracle_fdw might use conventional path insert for the FDW batch insert, and the direct path insert for the FDW COPY.\r\n\r\n\r\n3)\r\nINSERT and COPY in Postgres differs in whether the rule is invoked:\r\n\r\nhttps://www.postgresql.org/docs/devel/sql-copy.html\r\n\r\n\"COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not invoke rules.\"\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n", "msg_date": "Fri, 4 Jun 2021 08:45:55 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 4/6/21 13:45, tsunakawa.takay@fujitsu.com wrote:\n> From: Andrey Lepikhov <a.lepikhov@postgrespro.ru>\n>> We still have slow 'COPY FROM' operation for foreign tables in current master.\n>> Now we have a foreign batch insert operation And I tried to rewrite the patch [1]\n>> with this machinery.\n> \n> I haven't looked at the patch, but nice performance.\n> \n> However, I see the following problems. What do you think about them?\nI agree with your fears.\nThink about this patch as an intermediate step on the way to fast COPY \nFROM. This patch contains all logic of the previous patch, except of \ntransport machinery (bulk insertion api).\nIt may be simpler to understand advantages of proposed 'COPY' FDW API \nhaving committed 'COPY FROM ...' 
feature based on the bulk insert FDW API.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 7 Jun 2021 11:36:18 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "Second version of the patch fixes problems detected by the FDW \nregression tests and shows differences of error reports in \ntuple-by-tuple and batched COPY approaches.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 7 Jun 2021 16:16:58 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 2021-06-07 16:16:58 +0500, Andrey Lepikhov wrote:\n> Second version of the patch fixes problems detected by the FDW regression\n> tests and shows differences of error reports in tuple-by-tuple and batched\n> COPY approaches.\n\nPatch doesn't apply and likely hasn't for a while...\n\n\n", "msg_date": "Mon, 21 Mar 2022 16:58:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Fri, Jun 4, 2021 at 5:26 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> We still have slow 'COPY FROM' operation for foreign tables in current\n> master.\n> Now we have a foreign batch insert operation And I tried to rewrite the\n> patch [1] with this machinery.\n\nI’d been reviewing the previous version of the patch without noticing\nthis. (Gmail grouped it in a new thread due to the subject change,\nbut I overlooked the whole thread.)\n\nI agree with you that the first step for fast copy into foreign\ntables/partitions is to use the foreign-batch-insert API. 
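[Editor's note: a back-of-the-envelope way to see why the batch-insert approach helps is that the number of remote round trips drops from one per tuple to one per batch. A small standalone sketch; the numbers in the test are illustrative, not taken from the benchmarks reported in this thread.]

```c
#include <assert.h>

/* Remote INSERT round trips needed to ship ntuples at a given batch size. */
static int
round_trips(int ntuples, int batch_size)
{
	if (batch_size < 1)
		batch_size = 1;			/* tuple-at-a-time fallback */

	/* ceiling division: the last batch may be partially filled */
	return (ntuples + batch_size - 1) / batch_size;
}
```

For one million tuples, a batch size of 100 turns 1,000,000 round trips into 10,000 — the kind of reduction behind the large speed-ups measured earlier in this thread.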
(Actually,\nI was also thinking the same while reviewing the previous version.)\nThanks for the new version of the patch!\n\nThe patch has been rewritten to something essentially different, but\nno one reviewed it. (Tsunakawa-san gave some comments without looking\nat it, though.) So the right status of the patch is “Needs review”,\nrather than “Ready for Committer”? Anyway, here are a few review\ncomments from me:\n\n* I don’t think this assumption is correct:\n\n@@ -359,6 +386,12 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo *miinfo,\n (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n {\n+ /*\n+ * AFTER ROW triggers aren't allowed with the foreign bulk insert\n+ * method.\n+ */\n+ Assert(resultRelInfo->ri_RelationDesc->rd_rel->relkind !=\nRELKIND_FOREIGN_TABLE);\n+\n\nIn postgres_fdw we disable foreign batch insert when the target table\nhas AFTER ROW triggers, but the core allows it even in that case. No?\n\n* To allow foreign multi insert, the patch made an invasive change to\nthe existing logic to determine whether to use multi insert for the\ntarget relation, adding a new member ri_usesMultiInsert to the\nResultRelInfo struct, as well as introducing a new function\nExecMultiInsertAllowed(). But I’m not sure we really need such a\nchange. 
Isn’t it reasonable to *adjust* the existing logic to allow\nforeign multi insert when possible?\n\nI didn’t finish my review, but I’ll mark this as “Waiting on Author”.\n\nMy apologies for the long long delay.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 22 Mar 2022 10:54:00 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Mar 22, 2022 at 8:58 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-06-07 16:16:58 +0500, Andrey Lepikhov wrote:\n> > Second version of the patch fixes problems detected by the FDW regression\n> > tests and shows differences of error reports in tuple-by-tuple and batched\n> > COPY approaches.\n>\n> Patch doesn't apply and likely hasn't for a while...\n\nActually, it has bit-rotted due to the recent fix for cross-partition\nupdates (i.e., commit ba9a7e392).\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 22 Mar 2022 11:36:00 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 3/22/22 06:54, Etsuro Fujita wrote:\n> On Fri, Jun 4, 2021 at 5:26 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> We still have slow 'COPY FROM' operation for foreign tables in current\n>> master.\n>> Now we have a foreign batch insert operation And I tried to rewrite the\n>> patch [1] with this machinery.\n> \n> The patch has been rewritten to something essentially different, but\n> no one reviewed it. (Tsunakawa-san gave some comments without looking\n> at it, though.) So the right status of the patch is “Needs review”,\n> rather than “Ready for Committer”? 
Anyway, here are a few review\n> comments from me:\n> \n> * I don’t think this assumption is correct:\n> \n> @@ -359,6 +386,12 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo *miinfo,\n> (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n> resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n> {\n> + /*\n> + * AFTER ROW triggers aren't allowed with the foreign bulk insert\n> + * method.\n> + */\n> + Assert(resultRelInfo->ri_RelationDesc->rd_rel->relkind !=\n> RELKIND_FOREIGN_TABLE);\n> +\n> \n> In postgres_fdw we disable foreign batch insert when the target table\n> has AFTER ROW triggers, but the core allows it even in that case. No?\nAgree\n\n> * To allow foreign multi insert, the patch made an invasive change to\n> the existing logic to determine whether to use multi insert for the\n> target relation, adding a new member ri_usesMultiInsert to the\n> ResultRelInfo struct, as well as introducing a new function\n> ExecMultiInsertAllowed(). But I’m not sure we really need such a\n> change. Isn’t it reasonable to *adjust* the existing logic to allow\n> foreign multi insert when possible?\nOf course, such approach would look much better, if we implemented it. \nI'll ponder how to do it.\n\n> I didn’t finish my review, but I’ll mark this as “Waiting on Author”.\nI rebased the patch onto current master. Now it works correctly. I'll \nmark it as \"Waiting for review\".\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 24 Mar 2022 11:43:37 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "2022年3月24日(木) 15:44 Andrey V. 
Lepikhov <a.lepikhov@postgrespro.ru>:\n >\n > On 3/22/22 06:54, Etsuro Fujita wrote:\n > > On Fri, Jun 4, 2021 at 5:26 PM Andrey Lepikhov\n > > <a.lepikhov@postgrespro.ru> wrote:\n > >> We still have slow 'COPY FROM' operation for foreign tables in current\n > >> master.\n > >> Now we have a foreign batch insert operation And I tried to rewrite the\n > >> patch [1] with this machinery.\n > >\n > > The patch has been rewritten to something essentially different, but\n > > no one reviewed it. (Tsunakawa-san gave some comments without looking\n > > at it, though.) So the right status of the patch is “Needs review”,\n > > rather than “Ready for Committer”? Anyway, here are a few review\n > > comments from me:\n > >\n > > * I don’t think this assumption is correct:\n > >\n > > @@ -359,6 +386,12 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo *miinfo,\n > > (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n > > resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n > > {\n > > + /*\n > > + * AFTER ROW triggers aren't allowed with the foreign bulk insert\n > > + * method.\n > > + */\n > > + Assert(resultRelInfo->ri_RelationDesc->rd_rel->relkind !=\n > > RELKIND_FOREIGN_TABLE);\n > > +\n > >\n > > In postgres_fdw we disable foreign batch insert when the target table\n > > has AFTER ROW triggers, but the core allows it even in that case. No?\n > Agree\n >\n > > * To allow foreign multi insert, the patch made an invasive change to\n > > the existing logic to determine whether to use multi insert for the\n > > target relation, adding a new member ri_usesMultiInsert to the\n > > ResultRelInfo struct, as well as introducing a new function\n > > ExecMultiInsertAllowed(). But I’m not sure we really need such a\n > > change. 
Isn’t it reasonable to *adjust* the existing logic to allow\n > > foreign multi insert when possible?\n > Of course, such approach would look much better, if we implemented it.\n > I'll ponder how to do it.\n >\n > > I didn’t finish my review, but I’ll mark this as “Waiting on Author”.\n > I rebased the patch onto current master. Now it works correctly. I'll\n > mark it as \"Waiting for review\".\n\nI took a look at this patch as it would a useful optimization to have.\n\nIt applies cleanly to current HEAD, but as-is, with a large data set, it\nreproducibly fails like this (using postgres_fdw):\n\n postgres=# COPY foo FROM '/tmp/fast-copy-from/test.csv' WITH (format csv);\n ERROR: bind message supplies 0 parameters, but prepared statement \"pgsql_fdw_prep_19422\" requires 6\n CONTEXT: remote SQL command: INSERT INTO public.foo_part_1(t, v1, v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n COPY foo, line 17281589\n\nThis occurs because not all multi-insert buffers being flushed actually contain\ntuples; the fix is simply not to call ExecForeignBatchInsert() if that's the case,\ne.g:\n\n\n /* Flush into foreign table or partition */\n do {\n int size = (resultRelInfo->ri_BatchSize < nused - sent) ?\n resultRelInfo->ri_BatchSize : (nused - sent);\n\n if (size)\n {\n int inserted = size;\n\n resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert(estate,\n resultRelInfo,\n &slots[sent],\n NULL,\n &inserted);\n sent += size;\n }\n } while (sent < nused);\n\n\nThere might a case for arguing that the respective FDW should check that it has\nactually received some tuples to insert, but IMHO it's much preferable to catch\nthis as early as possible and avoid a superfluous call.\n\nFWIW, with the above fix in place, with a simple local test the patch produces a\nconsistent speed-up of about 8 times compared to the existing functionality.\n\n\nRegards\n\nIan Barwick\n\n--\n\nEnterpriseDB - https://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:14:44 +0900", 
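[Editor's note: the shape of Ian's fix can be checked in isolation. The standalone sketch below mirrors the do-while loop, with a counter standing in for the ExecForeignBatchInsert() call; note that a buffer with zero pending tuples produces zero calls, which is exactly what avoids the "bind message supplies 0 parameters" error.]

```c
#include <assert.h>

/*
 * Simplified model of the flush loop: ship nused tuples in chunks of at
 * most batch_size, skipping the call entirely when a chunk would be empty.
 * Returns the number of tuples sent; *ncalls counts batch-insert calls.
 */
static int
flush_in_batches(int nused, int batch_size, int *ncalls)
{
	int			sent = 0;

	*ncalls = 0;
	do
	{
		int			size = (batch_size < nused - sent) ?
			batch_size : (nused - sent);

		if (size)
		{
			(*ncalls)++;		/* stand-in for ExecForeignBatchInsert() */
			sent += size;
		}
	} while (sent < nused);

	return sent;
}
```

An empty buffer (nused == 0) falls straight through the guard and the loop exits without ever issuing a call, while non-empty buffers are sent in at most ceil(nused / batch_size) calls.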
"msg_from": "Ian Barwick <ian.barwick@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 7/7/2022 06:14, Ian Barwick wrote:\n> 2022年3月24日(木) 15:44 Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>:\n> >\n> > On 3/22/22 06:54, Etsuro Fujita wrote:\n> > > On Fri, Jun 4, 2021 at 5:26 PM Andrey Lepikhov\n> > > <a.lepikhov@postgrespro.ru> wrote:\n> > >> We still have slow 'COPY FROM' operation for foreign tables in \n> current\n> > >> master.\n> > >> Now we have a foreign batch insert operation And I tried to \n> rewrite the\n> > >> patch [1] with this machinery.\n> > >\n> > > The patch has been rewritten to something essentially different, but\n> > > no one reviewed it.  (Tsunakawa-san gave some comments without looking\n> > > at it, though.)  So the right status of the patch is “Needs review”,\n> > > rather than “Ready for Committer”?  Anyway, here are a few review\n> > > comments from me:\n> > >\n> > > * I don’t think this assumption is correct:\n> > >\n> > > @@ -359,6 +386,12 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo \n> *miinfo,\n> > > \n> (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n> > >                    resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n> > >          {\n> > > +           /*\n> > > +            * AFTER ROW triggers aren't allowed with the foreign \n> bulk insert\n> > > +            * method.\n> > > +            */\n> > > +           Assert(resultRelInfo->ri_RelationDesc->rd_rel->relkind !=\n> > > RELKIND_FOREIGN_TABLE);\n> > > +\n> > >\n> > > In postgres_fdw we disable foreign batch insert when the target table\n> > > has AFTER ROW triggers, but the core allows it even in that case.  
No?\n> > Agree\n> >\n> > > * To allow foreign multi insert, the patch made an invasive change to\n> > > the existing logic to determine whether to use multi insert for the\n> > > target relation, adding a new member ri_usesMultiInsert to the\n> > > ResultRelInfo struct, as well as introducing a new function\n> > > ExecMultiInsertAllowed().  But I’m not sure we really need such a\n> > > change.  Isn’t it reasonable to *adjust* the existing logic to allow\n> > > foreign multi insert when possible?\n> > Of course, such approach would look much better, if we implemented it.\n> > I'll ponder how to do it.\n> >\n> > > I didn’t finish my review, but I’ll mark this as “Waiting on Author”.\n> > I rebased the patch onto current master. Now it works correctly. I'll\n> > mark it as \"Waiting for review\".\n> \n> I took a look at this patch as it would a useful optimization to have.\n> \n> It applies cleanly to current HEAD, but as-is, with a large data set, it\n> reproducibly fails like this (using postgres_fdw):\n> \n>     postgres=# COPY foo FROM '/tmp/fast-copy-from/test.csv' WITH \n> (format csv);\n>     ERROR:  bind message supplies 0 parameters, but prepared statement \n> \"pgsql_fdw_prep_19422\" requires 6\n>     CONTEXT:  remote SQL command: INSERT INTO public.foo_part_1(t, v1, \n> v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n>     COPY foo, line 17281589\n> \n> This occurs because not all multi-insert buffers being flushed actually \n> contain\n> tuples; the fix is simply not to call ExecForeignBatchInsert() if that's \n> the case,\n> e.g:\n> \n> \n>         /* Flush into foreign table or partition */\n>         do {\n>             int size = (resultRelInfo->ri_BatchSize < nused - sent) ?\n>                         resultRelInfo->ri_BatchSize : (nused - sent);\n> \n>             if (size)\n>             {\n>                 int inserted = size;\n> \n> \n> resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert(estate,\n> \n> resultRelInfo,\n> \n> &slots[sent],\n>     
                                                                 NULL,\n> \n> &inserted);\n>                 sent += size;\n>             }\n>         } while (sent < nused);\n> \n> \n> There might a case for arguing that the respective FDW should check that \n> it has\n> actually received some tuples to insert, but IMHO it's much preferable \n> to catch\n> this as early as possible and avoid a superfluous call.\n> \n> FWIW, with the above fix in place, with a simple local test the patch \n> produces a\n> consistent speed-up of about 8 times compared to the existing \n> functionality.\nThank you for the attention to the patch.\nI have a couple of questions:\n1. It's a problem for me to reproduce the case you reported. Can you \ngive more details on the reproduction?\n2. Have you tried to use previous version, based on bulk COPY machinery, \nnot bulk INSERT? Which approach looks better and have better performance \nin your opinion?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 7 Jul 2022 16:51:54 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 07/07/2022 22:51, Andrey Lepikhov wrote:\n > On 7/7/2022 06:14, Ian Barwick wrote:\n >> 2022年3月24日(木) 15:44 Andrey V. Lepikhov <a.lepikhov@postgrespro.ru>:\n >> >\n >> > On 3/22/22 06:54, Etsuro Fujita wrote:\n >> > > On Fri, Jun 4, 2021 at 5:26 PM Andrey Lepikhov\n >> > > <a.lepikhov@postgrespro.ru> wrote:\n >> > >> We still have slow 'COPY FROM' operation for foreign tables in current\n >> > >> master.\n >> > >> Now we have a foreign batch insert operation And I tried to rewrite the\n >> > >> patch [1] with this machinery.\n >> > >\n >> > > The patch has been rewritten to something essentially different, but\n >> > > no one reviewed it. (Tsunakawa-san gave some comments without looking\n >> > > at it, though.) 
So the right status of the patch is “Needs review”,\n >> > > rather than “Ready for Committer”? Anyway, here are a few review\n >> > > comments from me:\n >> > >\n >> > > * I don’t think this assumption is correct:\n >> > >\n >> > > @@ -359,6 +386,12 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo *miinfo,\n >> > > (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n >> > > resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n >> > > {\n >> > > + /*\n >> > > + * AFTER ROW triggers aren't allowed with the foreign bulk insert\n >> > > + * method.\n >> > > + */\n >> > > + Assert(resultRelInfo->ri_RelationDesc->rd_rel->relkind !=\n >> > > RELKIND_FOREIGN_TABLE);\n >> > > +\n >> > >\n >> > > In postgres_fdw we disable foreign batch insert when the target table\n >> > > has AFTER ROW triggers, but the core allows it even in that case. No?\n >> > Agree\n >> >\n >> > > * To allow foreign multi insert, the patch made an invasive change to\n >> > > the existing logic to determine whether to use multi insert for the\n >> > > target relation, adding a new member ri_usesMultiInsert to the\n >> > > ResultRelInfo struct, as well as introducing a new function\n >> > > ExecMultiInsertAllowed(). But I’m not sure we really need such a\n >> > > change. Isn’t it reasonable to *adjust* the existing logic to allow\n >> > > foreign multi insert when possible?\n >> > Of course, such approach would look much better, if we implemented it.\n >> > I'll ponder how to do it.\n >> >\n >> > > I didn’t finish my review, but I’ll mark this as “Waiting on Author”.\n >> > I rebased the patch onto current master. Now it works correctly. 
I'll\n >> > mark it as \"Waiting for review\".\n >>\n >> I took a look at this patch as it would a useful optimization to have.\n >>\n >> It applies cleanly to current HEAD, but as-is, with a large data set, it\n >> reproducibly fails like this (using postgres_fdw):\n >>\n >> postgres=# COPY foo FROM '/tmp/fast-copy-from/test.csv' WITH (format csv);\n >> ERROR: bind message supplies 0 parameters, but prepared statement \"pgsql_fdw_prep_19422\" requires 6\n >> CONTEXT: remote SQL command: INSERT INTO public.foo_part_1(t, v1, v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n >> COPY foo, line 17281589\n >>\n >> This occurs because not all multi-insert buffers being flushed actually contain\n >> tuples; the fix is simply not to call ExecForeignBatchInsert() if that's the case,\n >> e.g:\n >>\n >>\n >> /* Flush into foreign table or partition */\n >> do {\n >> int size = (resultRelInfo->ri_BatchSize < nused - sent) ?\n >> resultRelInfo->ri_BatchSize : (nused - sent);\n >>\n >> if (size)\n >> {\n >> int inserted = size;\n >>\n >> resultRelInfo->ri_FdwRoutine->ExecForeignBatchInsert(estate,\n >> resultRelInfo,\n >> &slots[sent],\n >> NULL,\n >> &inserted);\n >> sent += size;\n >> }\n >> } while (sent < nused);\n >>\n >>\n >> There might a case for arguing that the respective FDW should check that it has\n >> actually received some tuples to insert, but IMHO it's much preferable to catch\n >> this as early as possible and avoid a superfluous call.\n >>\n >> FWIW, with the above fix in place, with a simple local test the patch produces a\n >> consistent speed-up of about 8 times compared to the existing functionality.\n >\n > Thank you for the attention to the patch.\n > I have a couple of questions:\n >\n > 1. It's a problem for me to reproduce the case you reported. 
Can you give more\n > details on the reproduction?\n\nThe issue seems to occur when the data spans more than one foreign partition,\nprobably because the accumulated data for one partition needs to be flushed\nbefore moving on to the next partition, but not all pre-allocated multi-insert\nbuffers have been filled.\n\nThe reproduction method I have, which is pared down from the original bulk insert\nwhich triggered the error, is as follows:\n\n1. Create some data using the attached script:\n\n perl data.pl > /tmp/data.csv\n\n2. Create two nodes (A and B)\n\n3. On node B, create tables as follows:\n\n CREATE TABLE foo_part_1 (t timestamptz, v1 int, v2 int, v3 int, v4 text, v5 text);\n CREATE TABLE foo_part_2 (t timestamptz, v1 int, v2 int, v3 int, v4 text, v5 text);\n\n4. On node A, create FDW and partitioned table as follows:\n\n -- adjust parameters as appropriate\n\n CREATE EXTENSION postgres_fdw;\n\n CREATE SERVER pg_fdw\n FOREIGN DATA WRAPPER postgres_fdw\n OPTIONS (\n host 'localhost',\n port '6301',\n dbname 'postgres',\n batch_size '100'\n );\n\n CREATE USER MAPPING FOR CURRENT_USER SERVER pg_fdw\n OPTIONS(user 'postgres');\n\n -- create parition table and partitions\n\n CREATE TABLE foo (t timestamptz, v1 int, v2 int, v3 int, v4 text, v5 text) PARTITION BY RANGE(t);\n\n CREATE FOREIGN TABLE foo_part_1\n PARTITION OF foo\n FOR VALUES FROM ('2022-05-19 00:00:00') TO ('2022-05-20 00:00:00')\n SERVER pg_fdw;\n\n CREATE FOREIGN TABLE foo_part_2\n PARTITION OF foo\n FOR VALUES FROM ('2022-05-20 00:00:00') TO ('2022-05-21 00:00:00')\n SERVER pg_fdw;\n\n5. On node A, load the previously generated data with COPY:\n\n COPY foo FROM '/tmp/data.csv' with (format 'csv');\n\nThis will fail like this:\n\n ERROR: bind message supplies 0 parameters, but prepared statement \"pgsql_fdw_prep_178\" requires 6\n CONTEXT: remote SQL command: INSERT INTO public.foo_part_1(t, v1, v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n COPY foo, line 88160\n\n\n > 2. 
Have you tried to use previous version, based on bulk COPY machinery, not\n > bulk INSERT? > Which approach looks better and have better performance in\n > your opinion?\n\nAha, I didn't see that, I'll take a look.\n\n\nRegards\n\nIan Barwick\n\n-- \n\nEnterpriseDB - https://www.enterprisedb.com", "msg_date": "Fri, 8 Jul 2022 11:12:28 +0900", "msg_from": "Ian Barwick <ian.barwick@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 8/7/2022 05:12, Ian Barwick wrote:\n>     ERROR:  bind message supplies 0 parameters, but prepared statement \n> \"pgsql_fdw_prep_178\" requires 6\n>     CONTEXT:  remote SQL command: INSERT INTO public.foo_part_1(t, v1, \n> v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n>     COPY foo, line 88160\nThanks, I got it. MultiInsertBuffer are created on the first non-zero \nflush of tuples into the partition and isn't deleted from the buffers \nlist until the end of COPY. And on a subsequent flush in the case of \nempty buffer we catch the error.\nYour fix is correct, but I want to propose slightly different change \n(see in attachment).\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Fri, 8 Jul 2022 18:09:35 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 09/07/2022 00:09, Andrey Lepikhov wrote:\n> On 8/7/2022 05:12, Ian Barwick wrote:\n>>      ERROR:  bind message supplies 0 parameters, but prepared statement \"pgsql_fdw_prep_178\" requires 6\n>>      CONTEXT:  remote SQL command: INSERT INTO public.foo_part_1(t, v1, v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n>>      COPY foo, line 88160\n> Thanks, I got it. MultiInsertBuffer are created on the first non-zero flush of tuples into the partition and isn't deleted from the buffers list until the end of COPY. 
And on a subsequent flush in the case of empty buffer we catch the error.\n> Your fix is correct, but I want to propose slightly different change (see in attachment).\n\nLGTM.\n\nRegards\n\nIan Barwick\n\n--\nhttps://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 11 Jul 2022 10:12:26 +0900", "msg_from": "Ian Barwick <ian.barwick@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 11/7/2022 04:12, Ian Barwick wrote:\n> On 09/07/2022 00:09, Andrey Lepikhov wrote:\n>> On 8/7/2022 05:12, Ian Barwick wrote:\n>>>      ERROR:  bind message supplies 0 parameters, but prepared \n>>> statement \"pgsql_fdw_prep_178\" requires 6\n>>>      CONTEXT:  remote SQL command: INSERT INTO public.foo_part_1(t, \n>>> v1, v2, v3, v4, v5) VALUES ($1, $2, $3, $4, $5, $6)\n>>>      COPY foo, line 88160\n>> Thanks, I got it. MultiInsertBuffer are created on the first non-zero \n>> flush of tuples into the partition and isn't deleted from the buffers \n>> list until the end of COPY. And on a subsequent flush in the case of \n>> empty buffer we catch the error.\n>> Your fix is correct, but I want to propose slightly different change \n>> (see in attachment).\n> \n> LGTM.\nNew version (with aforementioned changes) is attached.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Mon, 11 Jul 2022 08:54:08 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Thu, Mar 24, 2022 at 3:43 PM Andrey V. Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 3/22/22 06:54, Etsuro Fujita wrote:\n> > * To allow foreign multi insert, the patch made an invasive change to\n> > the existing logic to determine whether to use multi insert for the\n> > target relation, adding a new member ri_usesMultiInsert to the\n> > ResultRelInfo struct, as well as introducing a new function\n> > ExecMultiInsertAllowed(). 
But I’m not sure we really need such a\n> > change. Isn’t it reasonable to *adjust* the existing logic to allow\n> > foreign multi insert when possible?\n> Of course, such approach would look much better, if we implemented it.\n\n> I'll ponder how to do it.\n\nI rewrote the decision logic to something much simpler and much less\ninvasive, which reduces the patch size significantly. Attached is an\nupdated patch. What do you think about that?\n\nWhile working on the patch, I fixed a few issues as well:\n\n+ if (resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize != NULL)\n+ resultRelInfo->ri_BatchSize =\n+\nresultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n\nWhen determining the batch size, I think we should check if the\nExecForeignBatchInsert callback routine is also defined, like other\nplaces such as execPartition.c. For consistency I fixed this by\ncopying-and-pasting the code from that file.\n\n+ * Also, the COPY command requires a non-zero input list of attributes.\n+ * Therefore, the length of the attribute list is checked here.\n+ */\n+ if (!cstate->volatile_defexprs &&\n+ list_length(cstate->attnumlist) > 0 &&\n+ !contain_volatile_functions(cstate->whereClause))\n+ target_resultRelInfo->ri_usesMultiInsert =\n+ ExecMultiInsertAllowed(target_resultRelInfo);\n\nI think “list_length(cstate->attnumlist) > 0” in the if-test would\nbreak COPY FROM; it currently supports multi-inserting into *plain*\ntables even in the case where they have no columns, but this would\ndisable the multi-insertion support in that case. postgres_fdw would\nnot be able to batch into zero-column foreign tables due to the INSERT\nsyntax limitation (i.e., the syntax does not allow inserting multiple\nempty rows into a zero-column table in a single INSERT statement).\nWhich is the reason why this was added to the if-test? 
But I think\nsome other FDWs might be able to, so I think we should let the FDW\ndecide whether to allow batching even in that case, when called from\nGetForeignModifyBatchSize. So I removed the attnumlist test from the\npatch, and modified postgresGetForeignModifyBatchSize as such. I\nmight miss something, though.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 18 Jul 2022 17:22:49 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 18/7/2022 13:22, Etsuro Fujita wrote:\n> On Thu, Mar 24, 2022 at 3:43 PM Andrey V. Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 3/22/22 06:54, Etsuro Fujita wrote:\n>>> * To allow foreign multi insert, the patch made an invasive change to\n>>> the existing logic to determine whether to use multi insert for the\n>>> target relation, adding a new member ri_usesMultiInsert to the\n>>> ResultRelInfo struct, as well as introducing a new function\n>>> ExecMultiInsertAllowed(). But I’m not sure we really need such a\n>>> change. Isn’t it reasonable to *adjust* the existing logic to allow\n>>> foreign multi insert when possible?\n>> Of course, such approach would look much better, if we implemented it.\n> \n>> I'll ponder how to do it.\n> \n> I rewrote the decision logic to something much simpler and much less\n> invasive, which reduces the patch size significantly. Attached is an\n> updated patch. What do you think about that?\n> \n> While working on the patch, I fixed a few issues as well:\n> \n> + if (resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize != NULL)\n> + resultRelInfo->ri_BatchSize =\n> +\n> resultRelInfo->ri_FdwRoutine->GetForeignModifyBatchSize(resultRelInfo);\n> \n> When determining the batch size, I think we should check if the\n> ExecForeignBatchInsert callback routine is also defined, like other\n> places such as execPartition.c. 
For consistency I fixed this by\n> copying-and-pasting the code from that file.\n> \n> + * Also, the COPY command requires a non-zero input list of attributes.\n> + * Therefore, the length of the attribute list is checked here.\n> + */\n> + if (!cstate->volatile_defexprs &&\n> + list_length(cstate->attnumlist) > 0 &&\n> + !contain_volatile_functions(cstate->whereClause))\n> + target_resultRelInfo->ri_usesMultiInsert =\n> + ExecMultiInsertAllowed(target_resultRelInfo);\n> \n> I think “list_length(cstate->attnumlist) > 0” in the if-test would\n> break COPY FROM; it currently supports multi-inserting into *plain*\n> tables even in the case where they have no columns, but this would\n> disable the multi-insertion support in that case. postgres_fdw would\n> not be able to batch into zero-column foreign tables due to the INSERT\n> syntax limitation (i.e., the syntax does not allow inserting multiple\n> empty rows into a zero-column table in a single INSERT statement).\n> Which is the reason why this was added to the if-test? But I think\n> some other FDWs might be able to, so I think we should let the FDW\n> decide whether to allow batching even in that case, when called from\n> GetForeignModifyBatchSize. So I removed the attnumlist test from the\n> patch, and modified postgresGetForeignModifyBatchSize as such. 
I\n> might miss something, though.\nThanks a lot,\nmaybe you forgot this code:\n/*\n * If a partition's root parent isn't allowed to use it, neither is the\n * partition.\n*/\nif (rootResultRelInfo->ri_usesMultiInsert)\n\tleaf_part_rri->ri_usesMultiInsert =\n\t\t\t\tExecMultiInsertAllowed(leaf_part_rri);\n\nAlso, maybe we should describe in the documentation that, if the value of batch_size is \nmore than 1, the ExecForeignBatchInsert routine has a chance to be called?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 19 Jul 2022 14:35:22 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Jul 19, 2022 at 6:35 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 18/7/2022 13:22, Etsuro Fujita wrote:\n> > I rewrote the decision logic to something much simpler and much less\n> > invasive, which reduces the patch size significantly. Attached is an\n> > updated patch. What do you think about that?\n\n> maybe you forgot this code:\n> /*\n> * If a partition's root parent isn't allowed to use it, neither is the\n> * partition.\n> */\n> if (rootResultRelInfo->ri_usesMultiInsert)\n> leaf_part_rri->ri_usesMultiInsert =\n> ExecMultiInsertAllowed(leaf_part_rri);\n\nI think the patch accounts for that. 
Consider this bit to determine\nwhether to use batching for the partition chosen by\nExecFindPartition():\n\n@@ -910,12 +962,14 @@ CopyFrom(CopyFromState cstate)\n\n /*\n * Disable multi-inserts when the partition has BEFORE/INSTEAD\n- * OF triggers, or if the partition is a foreign partition.\n+ * OF triggers, or if the partition is a foreign partition\n+ * that can't use batching.\n */\n leafpart_use_multi_insert = insertMethod == CIM_MULTI_CONDITION\\\nAL &&\n !has_before_insert_row_trig &&\n !has_instead_insert_row_trig &&\n- resultRelInfo->ri_FdwRoutine == NULL;\n+ (resultRelInfo->ri_FdwRoutine == NULL ||\n+ resultRelInfo->ri_BatchSize > 1);\n\nIf the root parent isn't allowed to use batching, then we have\ninsertMethod=CIM_SINGLE for the parent before we get here. So in that\ncase we have leafpart_use_multi_insert=false for the chosen partition,\nmeaning that the partition isn't allowed to use batching, either.\n(The patch just extends the existing decision logic to the\nforeign-partition case.)\n\n> Also, maybe to describe in documentation, if the value of batch_size is\n> more than 1, the ExecForeignBatchInsert routine have a chance to be called?\n\nYeah, but I think that is the existing behavior, and that the patch\ndoesn't change the behavior, so I would leave that for another patch.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 20 Jul 2022 17:10:43 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 7/20/22 13:10, Etsuro Fujita wrote:\n> On Tue, Jul 19, 2022 at 6:35 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> On 18/7/2022 13:22, Etsuro Fujita wrote:\n>>> I rewrote the decision logic to something much simpler and much less\n>>> invasive, which reduces the patch size significantly. Attached is an\n>>> updated patch. 
What do you think about that?\n> \n>> maybe you forgot this code:\n>> /*\n>> * If a partition's root parent isn't allowed to use it, neither is the\n>> * partition.\n>> */\n>> if (rootResultRelInfo->ri_usesMultiInsert)\n>> leaf_part_rri->ri_usesMultiInsert =\n>> ExecMultiInsertAllowed(leaf_part_rri);\n> \n> I think the patch accounts for that. Consider this bit to determine\n> whether to use batching for the partition chosen by\n> ExecFindPartition():\nAgreed.\n\nAnalyzing multi-level heterogeneous partitioned configurations, I \nrealized that a single write into a partition with a trigger will flush \nbuffers for all other partitions of the parent table even if the parent \nhasn't any triggers.\nIt relates to the code:\nelse if (insertMethod == CIM_MULTI_CONDITIONAL &&\n !CopyMultiInsertInfoIsEmpty(&multiInsertInfo))\n{\n /*\n * Flush pending inserts if this partition can't use\n * batching, so rows are visible to triggers etc.\n */\n CopyMultiInsertInfoFlush(&multiInsertInfo, resultRelInfo);\n}\n\nWhy is such a cascade flush really necessary, especially for BEFORE and \nINSTEAD OF triggers? 
An AFTER trigger should see all rows of the table, but \nif no such trigger exists on the parent, I think we aren't obligated to \nguarantee the order of COPY into two different tables.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 22 Jul 2022 11:39:23 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Fri, Jul 22, 2022 at 3:39 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> Analyzing multi-level heterogeneous partitioned configurations I\n> realized, that single write into a partition with a trigger will flush\n> buffers for all other partitions of the parent table even if the parent\n> haven't any triggers.\n> It relates to the code:\n> else if (insertMethod == CIM_MULTI_CONDITIONAL &&\n> !CopyMultiInsertInfoIsEmpty(&multiInsertInfo))\n> {\n> /*\n> * Flush pending inserts if this partition can't use\n> * batching, so rows are visible to triggers etc.\n> */\n> CopyMultiInsertInfoFlush(&multiInsertInfo, resultRelInfo);\n> }\n>\n> Why such cascade flush is really necessary, especially for BEFORE and\n> INSTEAD OF triggers?\n\nBEFORE triggers on the chosen partition might query the parent table,\nnot just the partition, so I think we need to do this so that such\ntriggers can see all the rows that have been inserted into the parent\ntable until then.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 22 Jul 2022 17:14:00 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 7/22/22 13:14, Etsuro Fujita wrote:\n> On Fri, Jul 22, 2022 at 3:39 PM Andrey Lepikhov\n>> Why such cascade flush is really necessary, especially for BEFORE and\n>> INSTEAD OF triggers?\n> \n> BEFORE triggers on the chosen partition might query the parent table,\n> not just the partition, so I think we need to do this so that 
such\n> triggers can see all the rows that have been inserted into the parent\n> table until then.\nThanks for the explanation of your point of view. So, maybe switch \nstatus of this patch to 'Ready for committer'?\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 22 Jul 2022 13:42:43 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Fri, Jul 22, 2022 at 5:42 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> So, maybe switch\n> status of this patch to 'Ready for committer'?\n\nYeah, I think the patch is getting better, but I noticed some issues,\nso I'm working on them. I think I can post a new version in the next\nfew days.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 26 Jul 2022 19:19:43 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 7/22/22 13:14, Etsuro Fujita wrote:\n> On Fri, Jul 22, 2022 at 3:39 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> Analyzing multi-level heterogeneous partitioned configurations I\n>> realized, that single write into a partition with a trigger will flush\n>> buffers for all other partitions of the parent table even if the parent\n>> haven't any triggers.\n>> It relates to the code:\n>> else if (insertMethod == CIM_MULTI_CONDITIONAL &&\n>> !CopyMultiInsertInfoIsEmpty(&multiInsertInfo))\n>> {\n>> /*\n>> * Flush pending inserts if this partition can't use\n>> * batching, so rows are visible to triggers etc.\n>> */\n>> CopyMultiInsertInfoFlush(&multiInsertInfo, resultRelInfo);\n>> }\n>>\n>> Why such cascade flush is really necessary, especially for BEFORE and\n>> INSTEAD OF triggers?\n> \n> BEFORE triggers on the chosen partition might query the parent table,\n> not just the partition, so I think we need to do this so that such\n> 
triggers can see all the rows that have been inserted into the parent\n> table until then.\nIf you'll excuse me, I will add one more argument.\nIt wasn't clear, so I've 
made an experiment: result of a SELECT in an\n> INSERT trigger function shows only data, existed in the parent table\n> before the start of COPY.\n\nIs the trigger function declared VOLATILE? If so, the trigger should\nsee modifications to the parent table as well. See:\n\nhttps://www.postgresql.org/docs/15/trigger-datachanges.html\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:00:13 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Jul 26, 2022 at 7:19 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> Yeah, I think the patch is getting better, but I noticed some issues,\n> so I'm working on them. I think I can post a new version in the next\n> few days.\n\n* When running AFTER ROW triggers in CopyMultiInsertBufferFlush(), the\npatch uses the slots passed to ExecForeignBatchInsert(), not the ones\nreturned by the callback function, but I don't think that that is\nalways correct, as the documentation about the callback function says:\n\n The return value is an array of slots containing the data that was\n actually inserted (this might differ from the data supplied, for\n example as a result of trigger actions.)\n The passed-in <literal>slots</literal> can be re-used for this purpose.\n\npostgres_fdw re-uses the passed-in slots, but other FDWs might not, so\nI modified the patch to reference the returned slots when running the\nAFTER ROW triggers. I also modified the patch to initialize the\ntts_tableOid. 
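As a simplified sketch of that pattern (the `Toy*` names are invented for illustration; this is not the executor code):

```c
#include <assert.h>

/*
 * Simplified sketch of the pattern just described; the types and names
 * are illustrative, not the actual executor code.  The batch-insert
 * callback may return fewer slots than it was given, and possibly
 * different slot objects, so per-row post-processing (the AFTER ROW
 * trigger loop) must walk the *returned* slots, not the passed-in ones.
 */
typedef struct ToySlot
{
    int tableOid;               /* stands in for tts_tableOid */
    int value;
} ToySlot;

/*
 * Toy callback: "inserts" only non-negative values (as if a remote
 * trigger suppressed the rest) and hands back its own result slots,
 * which an FDW that does not re-use the input is allowed to do.
 */
static ToySlot *
toy_batch_insert(ToySlot **slots, int nslots, ToySlot *result, int *inserted)
{
    int n = 0;

    for (int i = 0; i < nslots; i++)
    {
        if (slots[i]->value >= 0)
            result[n++] = *slots[i];
    }
    *inserted = n;
    return result;
}

/* Run per-row work over the slots the callback actually returned. */
static void
toy_after_row(ToySlot *rslots, int inserted, int relid)
{
    for (int i = 0; i < inserted; i++)
        rslots[i].tableOid = relid; /* initialize the table OID analogue */
}
```

The point is just that the per-row loop indexes the array the callback returned, whose length is the reported insert count.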
Attached is an updated patch, in which I made some\nminor adjustments to CopyMultiInsertBufferFlush() as well.\n\n* The patch produces incorrect error context information:\n\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw options\n(dbname 'postgres');\ncreate user mapping for current_user server loopback;\ncreate table t1 (f1 int, f2 text);\ncreate foreign table ft1 (f1 int, f2 text) server loopback options\n(table_name 't1');\nalter table t1 add constraint t1_f1positive check (f1 >= 0);\nalter foreign table ft1 add constraint ft1_f1positive check (f1 >= 0);\n\n— single insert\ncopy ft1 from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> -1 foo\n>> 1 bar\n>> \\.\nERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\nDETAIL: Failing row contains (-1, foo).\nCONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES ($1, $2)\nCOPY ft1, line 1: \"-1 foo\"\n\n— batch insert\nalter server loopback options (add batch_size '2');\ncopy ft1 from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> -1 foo\n>> 1 bar\n>> \\.\nERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\nDETAIL: Failing row contains (-1, foo).\nCONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES\n($1, $2), ($3, $4)\nCOPY ft1, line 3\n\nIn single-insert mode the error context information is correct, but in\nbatch-insert mode it isn’t (i.e., the line number isn’t correct).\n\nThe error occurs on the remote side, so I'm not sure if there is a\nsimple fix. What I came up with is to just suppress error context\ninformation other than the relation name, like the attached. 
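In outline, the idea is simply this (an illustrative sketch, not the attached patch itself):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the idea only (illustrative code, not the attached patch):
 * when a whole batch fails on the remote side, the local line counter no
 * longer identifies the failing row, so the COPY error context reports
 * just the relation name instead of a specific line.
 */
static void
toy_copy_error_context(char *out, size_t outlen, const char *relname,
                       long cur_lineno, int batching)
{
    if (batching)
        snprintf(out, outlen, "COPY %s", relname);  /* failing row unknown */
    else
        snprintf(out, outlen, "COPY %s, line %ld", relname, cur_lineno);
}
```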
What do\nyou think about that?\n\n(In CopyMultiInsertBufferFlush() your patch sets cstate->cur_lineno to\nbuffer->linenos[i] even when running AFTER ROW triggers for the i-th\nrow returned by ExecForeignBatchInsert(), but that wouldn’t always be\ncorrect, as the i-th returned row might not correspond to the i-th row\noriginally stored in the buffer as the callback function returns only\nthe rows that were actually inserted on the remote side. I think the\nproposed fix would address this issue as well.)\n\n* The patch produces incorrect row count in cases where some/all of\nthe rows passed to ExecForeignBatchInsert() weren’t inserted on the\nremote side:\n\ncreate function trig_null() returns trigger as $$ begin return NULL;\nend $$ language plpgsql;\ncreate trigger trig_null before insert on t1 for each row execute\nfunction trig_null();\n\n— single insert\nalter server loopback options (drop batch_size);\ncopy ft1 from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 0 foo\n>> 1 bar\n>> \\.\nCOPY 0\n\n— batch insert\nalter server loopback options (add batch_size '2');\ncopy ft1 from stdin;\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 0 foo\n>> 1 bar\n>> \\.\nCOPY 2\n\nThe row count is correct in single-insert mode, but isn’t in batch-insert mode.\n\nThe reason is that in batch-insert mode the row counter is updated\nimmediately after adding the row to the buffer, not after doing\nExecForeignBatchInsert(), which might ignore the row. To fix, I\nmodified the patch to delay updating the row counter (and the progress\nof the COPY command) until after doing the callback function. For\nconsistency, I also modified the patch to delay it even when batching\ninto plain tables. 
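A toy model of the difference (illustrative names, not the real CopyFrom() code):

```c
#include <assert.h>

/*
 * Toy model of the row-count fix (illustrative only, not the real code):
 * credit rows to the COPY row count when the batch callback reports how
 * many were actually inserted, not when they are added to the buffer,
 * since a trigger on the remote side may suppress some of them.
 */

/* Toy remote batch insert: a trigger suppresses every second row. */
static int
toy_remote_batch_insert(int nrows)
{
    return nrows / 2;
}

/* Premature counting: credit rows as they are buffered (wrong total). */
static long
toy_count_eager(int nrows)
{
    long processed = nrows;     /* counted before the callback runs */

    (void) toy_remote_batch_insert(nrows);
    return processed;
}

/* Fixed: credit only what the callback reports as actually inserted. */
static long
toy_count_delayed(int nrows)
{
    return toy_remote_batch_insert(nrows);
}
```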
IMO I think that that would be more consistent\nwith the single-insert mode, as in that mode we update them after\nwriting the tuple out to the table or sending it to the remote side.\n\n* I modified the patch so that when batching into foreign tables we\nskip useless steps in CopyMultiInsertBufferInit() and\nCopyMultiInsertBufferCleanup().\n\nThat’s all I have for now. Sorry for the delay.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 9 Aug 2022 20:44:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Aug 9, 2022 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> On Tue, Jul 26, 2022 at 7:19 PM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n> > Yeah, I think the patch is getting better, but I noticed some issues,\n> > so I'm working on them. I think I can post a new version in the next\n> > few days.\n>\n> * When running AFTER ROW triggers in CopyMultiInsertBufferFlush(), the\n> patch uses the slots passed to ExecForeignBatchInsert(), not the ones\n> returned by the callback function, but I don't think that that is\n> always correct, as the documentation about the callback function says:\n>\n> The return value is an array of slots containing the data that was\n> actually inserted (this might differ from the data supplied, for\n> example as a result of trigger actions.)\n> The passed-in <literal>slots</literal> can be re-used for this\n> purpose.\n>\n> postgres_fdw re-uses the passed-in slots, but other FDWs might not, so\n> I modified the patch to reference the returned slots when running the\n> AFTER ROW triggers. I also modified the patch to initialize the\n> tts_tableOid. 
Attached is an updated patch, in which I made some\n> minor adjustments to CopyMultiInsertBufferFlush() as well.\n>\n> * The patch produces incorrect error context information:\n>\n> create extension postgres_fdw;\n> create server loopback foreign data wrapper postgres_fdw options\n> (dbname 'postgres');\n> create user mapping for current_user server loopback;\n> create table t1 (f1 int, f2 text);\n> create foreign table ft1 (f1 int, f2 text) server loopback options\n> (table_name 't1');\n> alter table t1 add constraint t1_f1positive check (f1 >= 0);\n> alter foreign table ft1 add constraint ft1_f1positive check (f1 >= 0);\n>\n> — single insert\n> copy ft1 from stdin;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n> >> -1 foo\n> >> 1 bar\n> >> \\.\n> ERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\n> DETAIL: Failing row contains (-1, foo).\n> CONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES ($1, $2)\n> COPY ft1, line 1: \"-1 foo\"\n>\n> — batch insert\n> alter server loopback options (add batch_size '2');\n> copy ft1 from stdin;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n> >> -1 foo\n> >> 1 bar\n> >> \\.\n> ERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\n> DETAIL: Failing row contains (-1, foo).\n> CONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES\n> ($1, $2), ($3, $4)\n> COPY ft1, line 3\n>\n> In single-insert mode the error context information is correct, but in\n> batch-insert mode it isn’t (i.e., the line number isn’t correct).\n>\n> The error occurs on the remote side, so I'm not sure if there is a\n> simple fix. What I came up with is to just suppress error context\n> information other than the relation name, like the attached. 
What do\n> you think about that?\n>\n> (In CopyMultiInsertBufferFlush() your patch sets cstate->cur_lineno to\n> buffer->linenos[i] even when running AFTER ROW triggers for the i-th\n> row returned by ExecForeignBatchInsert(), but that wouldn’t always be\n> correct, as the i-th returned row might not correspond to the i-th row\n> originally stored in the buffer as the callback function returns only\n> the rows that were actually inserted on the remote side. I think the\n> proposed fix would address this issue as well.)\n>\n> * The patch produces incorrect row count in cases where some/all of\n> the rows passed to ExecForeignBatchInsert() weren’t inserted on the\n> remote side:\n>\n> create function trig_null() returns trigger as $$ begin return NULL;\n> end $$ language plpgsql;\n> create trigger trig_null before insert on t1 for each row execute\n> function trig_null();\n>\n> — single insert\n> alter server loopback options (drop batch_size);\n> copy ft1 from stdin;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n> >> 0 foo\n> >> 1 bar\n> >> \\.\n> COPY 0\n>\n> — batch insert\n> alter server loopback options (add batch_size '2');\n> copy ft1 from stdin;\n> Enter data to be copied followed by a newline.\n> End with a backslash and a period on a line by itself, or an EOF signal.\n> >> 0 foo\n> >> 1 bar\n> >> \\.\n> COPY 2\n>\n> The row count is correct in single-insert mode, but isn’t in batch-insert\n> mode.\n>\n> The reason is that in batch-insert mode the row counter is updated\n> immediately after adding the row to the buffer, not after doing\n> ExecForeignBatchInsert(), which might ignore the row. To fix, I\n> modified the patch to delay updating the row counter (and the progress\n> of the COPY command) until after doing the callback function. For\n> consistency, I also modified the patch to delay it even when batching\n> into plain tables. 
IMO I think that that would be more consistent\n> with the single-insert mode, as in that mode we update them after\n> writing the tuple out to the table or sending it to the remote side.\n>\n> * I modified the patch so that when batching into foreign tables we\n> skip useless steps in CopyMultiInsertBufferInit() and\n> CopyMultiInsertBufferCleanup().\n>\n> That’s all I have for now. Sorry for the delay.\n>\n> Best regards,\n> Etsuro Fujita\n>\n\nHi,\n\n+           /* If any rows were inserted, run AFTER ROW INSERT triggers. */\n...\n+               for (i = 0; i < inserted; i++)\n+               {\n+                   TupleTableSlot *slot = rslots[i];\n...\n+                   slot->tts_tableOid =\n+                       RelationGetRelid(resultRelInfo->ri_RelationDesc);\n\nIt seems the return value of\n`RelationGetRelid(resultRelInfo->ri_RelationDesc)` can be stored in a\nvariable outside the for loop.\nInside the for loop, assign this variable to slot->tts_tableOid.\n\nCheers", "msg_date": "Tue, 9 Aug 2022 09:13:02 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "Hi,\n\nOn Wed, Aug 10, 2022 at 1:06 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Tue, Aug 9, 2022 at 4:45 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> * When running AFTER ROW triggers in CopyMultiInsertBufferFlush(), the\n>> patch uses the slots passed to ExecForeignBatchInsert(), not the ones\n>> returned by the callback function, but I don't think that that is\n>> always correct, as the documentation about the callback function says:\n>>\n>>     The return value is an array of slots containing the data that was\n>>     actually inserted (this might differ from the data supplied, for\n>>     example as a result of trigger actions.)\n>>     The passed-in <literal>slots</literal> can be re-used for this purpose.\n>>\n>> postgres_fdw re-uses the passed-in slots, but other FDWs might not, so\n>> I modified the patch to reference the returned slots when running 
the\n>> AFTER ROW triggers.\n\nI noticed that my explanation was not correct. Let me explain.\nBefore commit 82593b9a3, when batching into a view referencing a\npostgres_fdw foreign table that has WCO constraints, postgres_fdw used\nthe passed-in slots to store the first tuple that was actually\ninserted to the remote table. But that commit disabled batching in\nthat case, so postgres_fdw wouldn’t use the passed-in slots (until we\nsupport batching when there are WCO constraints from the parent views\nand/or AFTER ROW triggers on the foreign table).\n\n> + /* If any rows were inserted, run AFTER ROW INSERT triggers. */\n> ...\n> + for (i = 0; i < inserted; i++)\n> + {\n> + TupleTableSlot *slot = rslots[i];\n> ...\n> + slot->tts_tableOid =\n> + RelationGetRelid(resultRelInfo->ri_RelationDesc);\n>\n> It seems the return value of `RelationGetRelid(resultRelInfo->ri_RelationDesc)` can be stored in a variable outside the for loop.\n> Inside the for loop, assign this variable to slot->tts_tableOid.\n\nActually, I did this to match the code in ExecBatchInsert(), but that\nseems like a good idea, so I’ll update the patch as such in the next\nversion.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 10 Aug 2022 17:30:20 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 8/9/22 16:44, Etsuro Fujita wrote:\n>>> -1 foo\n>>> 1 bar\n>>> \\.\n> ERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\n> DETAIL: Failing row contains (-1, foo).\n> CONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES\n> ($1, $2), ($3, $4)\n> COPY ft1, line 3\n> \n> In single-insert mode the error context information is correct, but in\n> batch-insert mode it isn’t (i.e., the line number isn’t correct).\n> \n> The error occurs on the remote side, so I'm not sure if there is a\n> simple fix. 
What I came up with is to just suppress error context\n> information other than the relation name, like the attached. What do\n> you think about that?\nI've put a lot of effort into this problem too. Your solution has a \nrationale and looks fine.\nI only think we should add a bit of info to the error report to \nmake it clearer why no specific line is pointed to here. For example:\n'COPY %s (buffered)'\nor\n'COPY FOREIGN TABLE %s'\n\nor, if instead of relname_only field to save a MultiInsertBuffer \npointer, we might add min/max linenos into the report:\n'COPY %s, line between %llu and %llu'\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 15 Aug 2022 10:29:22 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Mon, Aug 15, 2022 at 2:29 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 8/9/22 16:44, Etsuro Fujita wrote:\n> >>> -1 foo\n> >>> 1 bar\n> >>> \\.\n> > ERROR: new row for relation \"t1\" violates check constraint \"t1_f1positive\"\n> > DETAIL: Failing row contains (-1, foo).\n> > CONTEXT: remote SQL command: INSERT INTO public.t1(f1, f2) VALUES\n> > ($1, $2), ($3, $4)\n> > COPY ft1, line 3\n> >\n> > In single-insert mode the error context information is correct, but in\n> > batch-insert mode it isn’t (i.e., the line number isn’t correct).\n> >\n> > The error occurs on the remote side, so I'm not sure if there is a\n> > simple fix. What I came up with is to just suppress error context\n> > information other than the relation name, like the attached. What do\n> > you think about that?\n\n> I've put a lot of effort into this problem too. Your solution has a\n> rationale and looks fine.\n> I only think we should add a bit of info to the error report to\n> make it clearer why no specific line is pointed to here. 
For example:\n> 'COPY %s (buffered)'\n> or\n> 'COPY FOREIGN TABLE %s'\n>\n> or, if instead of relname_only field to save a MultiInsertBuffer\n> pointer, we might add min/max linenos into the report:\n> 'COPY %s, line between %llu and %llu'\n\nI think the latter is more consistent with the existing error context\ninformation when in CopyMultiInsertBufferFlush(). Actually, I thought\nthis too, and I think this would be useful when the COPY FROM command\nis executed on a foreign table. My concern, however, is the case when\nthe command is executed on a partitioned table containing foreign\npartitions; in that case the input data would not always be sorted in\nthe partition order, so the range for an error-occurring foreign\npartition might contain many lines with rows from other partitions,\nwhich I think makes the range information less useful. Maybe I'm too\nworried about that, though.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 22 Aug 2022 17:44:27 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 22/8/2022 11:44, Etsuro Fujita wrote:\n> I think the latter is more consistent with the existing error context\n> information when in CopyMultiInsertBufferFlush(). Actually, I thought\n> this too, and I think this would be useful when the COPY FROM command\n> is executed on a foreign table. My concern, however, is the case when\n> the command is executed on a partitioned table containing foreign\n> partitions; in that case the input data would not always be sorted in\n> the partition order, so the range for an error-occurring foreign\n> partition might contain many lines with rows from other partitions,\n> which I think makes the range information less useful. Maybe I'm too\n> worried about that, though.\nI got your point. 
Indeed, perhaps such info doesn't really need to be \nincluded in the core, at least for now.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Tue, 23 Aug 2022 08:58:22 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Aug 23, 2022 at 2:58 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 22/8/2022 11:44, Etsuro Fujita wrote:\n> > I think the latter is more consistent with the existing error context\n> > information when in CopyMultiInsertBufferFlush(). Actually, I thought\n> > this too, and I think this would be useful when the COPY FROM command\n> > is executed on a foreign table. My concern, however, is the case when\n> > the command is executed on a partitioned table containing foreign\n> > partitions; in that case the input data would not always be sorted in\n> > the partition order, so the range for an error-occurring foreign\n> > partition might contain many lines with rows from other partitions,\n> > which I think makes the range information less useful. Maybe I'm too\n> > worried about that, though.\n\n> I got your point. Indeed, perhaps such info doesn't really need to be\n> included in the core, at least for now.\n\nOk. Sorry for the late response.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:47:45 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Wed, Aug 10, 2022 at 5:30 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Aug 10, 2022 at 1:06 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + /* If any rows were inserted, run AFTER ROW INSERT triggers. 
*/\n> > ...\n> > + for (i = 0; i < inserted; i++)\n> > + {\n> > + TupleTableSlot *slot = rslots[i];\n> > ...\n> > + slot->tts_tableOid =\n> > + RelationGetRelid(resultRelInfo->ri_RelationDesc);\n> >\n> > It seems the return value of `RelationGetRelid(resultRelInfo->ri_RelationDesc)` can be stored in a variable outside the for loop.\n> > Inside the for loop, assign this variable to slot->tts_tableOid.\n>\n> Actually, I did this to match the code in ExecBatchInsert(), but that\n> seems like a good idea, so I’ll update the patch as such in the next\n> version.\n\nDone. I also adjusted the code in CopyMultiInsertBufferFlush() a bit\nfurther. No functional changes. I put back in the original position\nan assertion ensuring the FDW supports batching. Sorry for the back\nand forth. Attached is an updated version of the patch.\n\nOther changes are:\n\n* The previous patch modified postgres_fdw.sql so that the existing\ntest cases for COPY FROM were tested in batch-insert mode. But I\nthink we should keep them as-is to test the default behavior, so I\nadded test cases for this feature by copying-and-pasting some of the\nexisting test cases. Also, the previous patch added this:\n\n+create table foo (a int) partition by list (a);\n+create table foo1 (like foo);\n+create foreign table ffoo1 partition of foo for values in (1)\n+ server loopback options (table_name 'foo1');\n+create table foo2 (like foo);\n+create foreign table ffoo2 partition of foo for values in (2)\n+ server loopback options (table_name 'foo2');\n+create function print_new_row() returns trigger language plpgsql as $$\n+ begin raise notice '%', new; return new; end; $$;\n+create trigger ffoo1_br_trig before insert on ffoo1\n+ for each row execute function print_new_row();\n+\n+copy foo from stdin;\n+1\n+2\n+\\.\n\nRather than doing so, I think it would be better to use a partitioned\ntable defined in the above section “test tuple routing for\nforeign-table partitions”, to save cycles. 
So I changed this as such.\n\n* I modified comments a bit further and updated docs.\n\nThat is it. I will review the patch a bit more, but I feel that it is\nin good shape.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Tue, 27 Sep 2022 18:03:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Sep 27, 2022 at 6:03 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I will review the patch a bit more, but I feel that it is\n> in good shape.\n\nOne thing I noticed is this bit added to CopyMultiInsertBufferFlush()\nto run triggers on the foreign table.\n\n+ /* Run AFTER ROW INSERT triggers */\n+ if (resultRelInfo->ri_TrigDesc != NULL &&\n+ (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n+ resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n+ {\n+ Oid relid =\nRelationGetRelid(resultRelInfo->ri_RelationDesc);\n+\n+ for (i = 0; i < inserted; i++)\n+ {\n+ TupleTableSlot *slot = rslots[i];\n+\n+ /*\n+ * AFTER ROW Triggers might reference the tableoid column,\n+ * so (re-)initialize tts_tableOid before evaluating them.\n+ */\n+ slot->tts_tableOid = relid;\n+\n+ ExecARInsertTriggers(estate, resultRelInfo,\n+ slot, NIL,\n+ cstate->transition_capture);\n+ }\n+ }\n\nSince foreign tables cannot have transition tables, we have\ntrig_insert_new_table=false. So I simplified the if test and added an\nassertion ensuring trig_insert_new_table=false. Attached is a new\nversion of the patch. I tweaked some comments a bit as well. I think\nthe patch is committable. 
So I plan on committing it next week if\nthere are no objections.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Fri, 7 Oct 2022 15:18:59 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 10/7/22 11:18, Etsuro Fujita wrote:\n> On Tue, Sep 27, 2022 at 6:03 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> I will review the patch a bit more, but I feel that it is\n>> in good shape.\n> \n> One thing I noticed is this bit added to CopyMultiInsertBufferFlush()\n> to run triggers on the foreign table.\n> \n> + /* Run AFTER ROW INSERT triggers */\n> + if (resultRelInfo->ri_TrigDesc != NULL &&\n> + (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n> + resultRelInfo->ri_TrigDesc->trig_insert_new_table))\n> + {\n> + Oid relid =\n> RelationGetRelid(resultRelInfo->ri_RelationDesc);\n> +\n> + for (i = 0; i < inserted; i++)\n> + {\n> + TupleTableSlot *slot = rslots[i];\n> +\n> + /*\n> + * AFTER ROW Triggers might reference the tableoid column,\n> + * so (re-)initialize tts_tableOid before evaluating them.\n> + */\n> + slot->tts_tableOid = relid;\n> +\n> + ExecARInsertTriggers(estate, resultRelInfo,\n> + slot, NIL,\n> + cstate->transition_capture);\n> + }\n> + }\n> \n> Since foreign tables cannot have transition tables, we have\n> trig_insert_new_table=false. So I simplified the if test and added an\n> assertion ensuring trig_insert_new_table=false. Attached is a new\n> version of the patch. I tweaked some comments a bit as well. I think\n> the patch is committable. So I plan on committing it next week if\n> there are no objections.\nI reviewed the patch one more time. Only one question: bistate and \nri_FdwRoutine are strongly bounded. Maybe to add some assertion on \n(ri_FdwRoutine XOR bistate) ? 
Just to prevent possible errors in future.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Tue, 11 Oct 2022 11:06:57 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Tue, Oct 11, 2022 at 3:06 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> I reviewed the patch one more time. Only one question: bistate and\n> ri_FdwRoutine are strongly bounded. Maybe to add some assertion on\n> (ri_FdwRoutine XOR bistate) ? Just to prevent possible errors in future.\n\nYou mean the bistate member of CopyMultiInsertBuffer?\n\nWe do not use that member at all for foreign tables, so the patch\navoids initializing that member in CopyMultiInsertBufferInit() when\ncalled for a foreign table. So we have bistate = NULL for foreign\ntables (and bistate != NULL for plain tables), as you mentioned above.\nI think it is a good idea to add such assertions. How about adding\nthem to CopyMultiInsertBufferFlush() and\nCopyMultiInsertBufferCleanup() like the attached? In the attached I\nupdated comments a bit further as well.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 12 Oct 2022 11:56:17 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 10/12/22 07:56, Etsuro Fujita wrote:\n> On Tue, Oct 11, 2022 at 3:06 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> I reviewed the patch one more time. Only one question: bistate and\n>> ri_FdwRoutine are strongly bounded. Maybe to add some assertion on\n>> (ri_FdwRoutine XOR bistate) ? 
Just to prevent possible errors in future.\n> \n> You mean the bistate member of CopyMultiInsertBuffer?\nYes\n> \n> We do not use that member at all for foreign tables, so the patch\n> avoids initializing that member in CopyMultiInsertBufferInit() when\n> called for a foreign table. So we have bistate = NULL for foreign\n> tables (and bistate != NULL for plain tables), as you mentioned above.\n> I think it is a good idea to add such assertions. How about adding\n> them to CopyMultiInsertBufferFlush() and\n> CopyMultiInsertBufferCleanup() like the attached? In the attached I\n> updated comments a bit further as well.\nYes, quite enough.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:38:05 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Thu, Oct 13, 2022 at 1:38 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 10/12/22 07:56, Etsuro Fujita wrote:\n> > On Tue, Oct 11, 2022 at 3:06 PM Andrey Lepikhov\n> > <a.lepikhov@postgrespro.ru> wrote:\n> >> I reviewed the patch one more time. Only one question: bistate and\n> >> ri_FdwRoutine are strongly bounded. Maybe to add some assertion on\n> >> (ri_FdwRoutine XOR bistate) ? Just to prevent possible errors in future.\n> >\n> > You mean the bistate member of CopyMultiInsertBuffer?\n> Yes\n> >\n> > We do not use that member at all for foreign tables, so the patch\n> > avoids initializing that member in CopyMultiInsertBufferInit() when\n> > called for a foreign table. So we have bistate = NULL for foreign\n> > tables (and bistate != NULL for plain tables), as you mentioned above.\n> > I think it is a good idea to add such assertions. How about adding\n> > them to CopyMultiInsertBufferFlush() and\n> > CopyMultiInsertBufferCleanup() like the attached? 
In the attached I\n> > updated comments a bit further as well.\n> Yes, quite enough.\n\nI have committed the patch after tweaking comments a little bit further.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 13 Oct 2022 18:58:14 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Thu, Oct 13, 2022 at 6:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I have committed the patch after tweaking comments a little bit further.\n\nI think there is another patch that improves performance of COPY FROM\nfor foreign tables using COPY FROM STDIN, but if Andrey (or anyone\nelse) want to work on it again, I think it would be better to create a\nnew CF entry for it (and start a new thread for it). So I plan to\nclose this in the November CF unless they think otherwise.\n\nAnyway, thanks for the patch, Andrey! Thanks for reviewing, Ian and Zhihong!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 28 Oct 2022 19:12:52 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On 28/10/2022 16:12, Etsuro Fujita wrote:\n> On Thu, Oct 13, 2022 at 6:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>> I have committed the patch after tweaking comments a little bit further.\n> \n> I think there is another patch that improves performance of COPY FROM\n> for foreign tables using COPY FROM STDIN, but if Andrey (or anyone\n> else) want to work on it again, I think it would be better to create a\n> new CF entry for it (and start a new thread for it). So I plan to\n> close this in the November CF unless they think otherwise.\n> \n> Anyway, thanks for the patch, Andrey! 
Thanks for reviewing, Ian and Zhihong!\nThanks,\n\nI studied performance of this code in comparison to bulk INSERTions.\nThis patch seems to improve speed of insertion by about 20%. Also, this \npatch is very invasive. So, I don't have any plans to work on it now.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 28 Oct 2022 16:53:17 +0600", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Fast COPY FROM based on batch insert" }, { "msg_contents": "On Fri, Oct 28, 2022 at 7:53 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 28/10/2022 16:12, Etsuro Fujita wrote:\n> > I think there is another patch that improves performance of COPY FROM\n> > for foreign tables using COPY FROM STDIN, but if Andrey (or anyone\n> > else) want to work on it again, I think it would be better to create a\n> > new CF entry for it (and start a new thread for it). So I plan to\n> > close this in the November CF unless they think otherwise.\n\n> I studied performance of this code in comparison to bulk INSERTions.\n> This patch seems to improve speed of insertion by about 20%. Also, this\n> patch is very invasive. So, I don't have any plans to work on it now.\n\nOk, let's leave that for future work. I closed this entry in the November CF.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Mon, 31 Oct 2022 17:50:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fast COPY FROM based on batch insert" } ]
[ { "msg_contents": "Hello hackers\n\nI think I found a bug in the estimated width of a table column when I\nrun SQL with a UNION statement.\n\nI think table column width of UNION statement should be equal one of UNION ALL.\nBut they don't match. This can be reproduced on HEAD.\n\nSee the following example.\n\n--CREATE TEST TABLE\nDROP TABLE union_test;DROP TABLE union_test2;\nCREATE TABLE union_test AS SELECT md5(g::text)::char(84) as data FROM\ngenerate_series(1,1000) as g;\nCREATE TABLE union_test2 AS SELECT md5(g::text)::char(84) as data FROM\ngenerate_series(1,1000) as g;\nANALYZE union_test;\nANALYZE union_test2;\n\n--width of union_test is 85.\nEXPLAIN ANALYZE\nSELECT * FROM union_test;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on union_test (cost=0.00..25.00 rows=1000 width=85) (actual\ntime=0.591..1.166 rows=1000 loops=1)\n Planning Time: 10.559 ms\n Execution Time: 2.974 ms\n(3 rows)\n\n--width of UNION is 340(wrong)\nEXPLAIN ANALYZE\nSELECT * FROM union_test\nUNION\nSELECT * FROM union_test2;\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=85.00..105.00 rows=2000 width=*340*) (actual\ntime=3.323..3.672 rows=1000 loops=1)\nGroup Key: union_test.data\nPeak Memory Usage: 369 kB\n-> Append (cost=0.00..80.00 rows=2000 width=340) (actual\ntime=0.021..1.191 rows=2000 loops=1)\n-> Seq Scan on union_test (cost=0.00..25.00 rows=1000 width=85)\n(actual time=0.019..0.393 rows=1000 loops=1)\n-> Seq Scan on union_test2 (cost=0.00..25.00 rows=1000 width=85)\n(actual time=0.027..0.302 rows=1000 loops=1)\nPlanning Time: 0.096 ms\nExecution Time: 3.908 ms\n(8 rows)\n\n--width of UNION ALL is 85\nEXPLAIN ANALYZE\nSELECT * FROM union_test\nUNION ALL\nSELECT * FROM union_test2;\n\nQUERY 
PLAN\n-------------------------------------------------------------------------------------------------------------------\nAppend (cost=0.00..60.00 rows=2000 width=85) (actual\ntime=0.017..1.187 rows=2000 loops=1)\n-> Seq Scan on union_test (cost=0.00..25.00 rows=1000 width=85)\n(actual time=0.017..0.251 rows=1000 loops=1)\n-> Seq Scan on union_test2 (cost=0.00..25.00 rows=1000 width=85)\n(actual time=0.018..0.401 rows=1000 loops=1)\nPlanning Time: 0.213 ms\nExecution Time: 1.444 ms\n(5 rows)\n\nI think this is a bug, is it right?\n\nRegards\nKenichiro Tanaka.\n\n\n", "msg_date": "Mon, 1 Jun 2020 22:35:02 +0900", "msg_from": "Kenichiro Tanaka <kenichirotanakapg@gmail.com>", "msg_from_op": true, "msg_subject": "Wrong width of UNION statement" }, { "msg_contents": "Kenichiro Tanaka <kenichirotanakapg@gmail.com> writes:\n> I think table column width of UNION statement should be equal one of UNION ALL.\n\nI don't buy that argument, because there could be type coercions involved,\nso that the result width isn't necessarily equal to any one of the inputs.\n\nHaving said that, the example you give shows that we make use of\npg_statistic.stawidth values when estimating the width of immediate\nrelation outputs, but that data isn't available by the time we get to\na UNION output. So we fall back to get_typavgwidth, which in this\ncase is going to produce something involving the typmod times the\nmaximum encoding length. (I suppose you're using UTF8 encoding...)\n\nThere's room for improvement there, but this is all bound up in the legacy\nmess that we have in prepunion.c. For example, because we don't have\nRelOptInfo nodes associated with individual set-operation outputs, it's\ndifficult to figure out where we might store data about the widths of such\noutputs. Nor could we easily access the data if we had it, since the\nassociated Vars don't have valid RTE indexes. 
So to my mind, that code\nneeds to be thrown away and rewritten, using actual relations to represent\nthe different setop results and Paths to represent possible computations.\nIn the meantime, it's hard to get excited about layering some additional\nhacks on top of what's there now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 11:04:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Wrong width of UNION statement" }, { "msg_contents": "Hello,\n\nThank you for your quick response and sorry for my late reply.\n\n> (I suppose you're using UTF8 encoding...)\nThat is right.\nAs you said, my database encoding is UTF8.\n\n>There's room for improvement there, but this is all bound up in the legacy\n>mess that we have in prepunion.c.\nAt first, I thought it would be easy to fix,\nbecause I thought it could be fixed by calculating in the same way\nas UNION ALL.\nBut now I understand it is not so easy.\n\nI'll report back if I find a reason strong enough to throw away and\nrewrite prepunion.c.\n\nThank you.\n\nRegards\nKenichiro Tanaka.\n\nOn Tue, Jun 2, 2020 at 0:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kenichiro Tanaka <kenichirotanakapg@gmail.com> writes:\n> > I think table column width of UNION statement should be equal one of UNION ALL.\n>\n> I don't buy that argument, because there could be type coercions involved,\n> so that the result width isn't necessarily equal to any one of the inputs.\n>\n> Having said that, the example you give shows that we make use of\n> pg_statistic.stawidth values when estimating the width of immediate\n> relation outputs, but that data isn't available by the time we get to\n> a UNION output. So we fall back to get_typavgwidth, which in this\n> case is going to produce something involving the typmod times the\n> maximum encoding length. 
(I suppose you're using UTF8 encoding...)\n>\n> There's room for improvement there, but this is all bound up in the legacy\n> mess that we have in prepunion.c. For example, because we don't have\n> RelOptInfo nodes associated with individual set-operation outputs, it's\n> difficult to figure out where we might store data about the widths of such\n> outputs. Nor could we easily access the data if we had it, since the\n> associated Vars don't have valid RTE indexes. So to my mind, that code\n> needs to be thrown away and rewritten, using actual relations to represent\n> the different setop results and Paths to represent possible computations.\n> In the meantime, it's hard to get excited about layering some additional\n> hacks on top of what's there now.\n>\n> regards, tom lane\n\n\n", "msg_date": "Thu, 4 Jun 2020 23:53:15 +0900", "msg_from": "Kenichiro Tanaka <kenichirotanakapg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Wrong width of UNION statement" }, { "msg_contents": "On Mon, Jun 1, 2020 at 8:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kenichiro Tanaka <kenichirotanakapg@gmail.com> writes:\n> > I think table column width of UNION statement should be equal one of UNION ALL.\n>\n> I don't buy that argument, because there could be type coercions involved,\n> so that the result width isn't necessarily equal to any one of the inputs.\n>\n> Having said that, the example you give shows that we make use of\n> pg_statistic.stawidth values when estimating the width of immediate\n> relation outputs, but that data isn't available by the time we get to\n> a UNION output. So we fall back to get_typavgwidth, which in this\n> case is going to produce something involving the typmod times the\n> maximum encoding length. (I suppose you're using UTF8 encoding...)\n>\n> There's room for improvement there, but this is all bound up in the legacy\n> mess that we have in prepunion.c. 
For example, because we don't have\n> RelOptInfo nodes associated with individual set-operation outputs,\n\nWe already have that infrastructure, IIUC through commit\n\ncommit c596fadbfe20ff50a8e5f4bc4b4ff5b7c302ecc0\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Mar 19 11:55:38 2018 -0400\n\n Generate a separate upper relation for each stage of setop planning.\n\nCan we use that to fix this bug?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 8 Jun 2020 18:41:36 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wrong width of UNION statement" }, { "msg_contents": "Hello,Ashutosh\nThank you for your response.\n\n>We already have that infrastructure, IIUC through commit\nThat's good!\n\n>Can we use that to fix this bug?\nI'll try it.\nBut this is my first hack,I think I need time.\n\nRegards\nKenichiro Tanaka\n\n2020年6月8日(月) 22:11 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>:\n>\n> On Mon, Jun 1, 2020 at 8:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Kenichiro Tanaka <kenichirotanakapg@gmail.com> writes:\n> > > I think table column width of UNION statement should be equal one of UNION ALL.\n> >\n> > I don't buy that argument, because there could be type coercions involved,\n> > so that the result width isn't necessarily equal to any one of the inputs.\n> >\n> > Having said that, the example you give shows that we make use of\n> > pg_statistic.stawidth values when estimating the width of immediate\n> > relation outputs, but that data isn't available by the time we get to\n> > a UNION output. So we fall back to get_typavgwidth, which in this\n> > case is going to produce something involving the typmod times the\n> > maximum encoding length. (I suppose you're using UTF8 encoding...)\n> >\n> > There's room for improvement there, but this is all bound up in the legacy\n> > mess that we have in prepunion.c. 
For example, because we don't have\n> > RelOptInfo nodes associated with individual set-operation outputs,\n>\n> We already have that infrastructure, IIUC through commit\n>\n> commit c596fadbfe20ff50a8e5f4bc4b4ff5b7c302ecc0\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: Mon Mar 19 11:55:38 2018 -0400\n>\n> Generate a separate upper relation for each stage of setop planning.\n>\n> Can we use that to fix this bug?\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n", "msg_date": "Sat, 13 Jun 2020 01:40:42 +0900", "msg_from": "Kenichiro Tanaka <kenichirotanakapg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Wrong width of UNION statement" } ]
[ { "msg_contents": "One line change to remove a duplicate check.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 1 Jun 2020 08:37:12 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Small code cleanup" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> One line change to remove a duplicate check.\n\nThe comment just above this mentions a connection to the \"Finish printing\nthe footer information about a table\" stanza below. I think some work is\nneeded to clarify what's going on there --- it doesn't seem actually\nbuggy, but there are multiple lies embedded in these comments. I'm also\nquestioning somebody's decision to wedge partitioning into this logic\nwithout refactoring any existing if's, as they seem to have done. At the\nvery least we're issuing useless queries here, for instance looking for\ninheritance parents of matviews.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 11:53:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small code cleanup" }, { "msg_contents": "\n\n> On Jun 1, 2020, at 8:53 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> One line change to remove a duplicate check.\n> \n> The comment just above this mentions a connection to the \"Finish printing\n> the footer information about a table\" stanza below. I think some work is\n> needed to clarify what's going on there --- it doesn't seem actually\n> buggy, but there are multiple lies embedded in these comments. I'm also\n> questioning somebody's decision to wedge partitioning into this logic\n> without refactoring any existing if's, as they seem to have done. 
At the\n> very least we're issuing useless queries here, for instance looking for\n> inheritance parents of matviews.\n\nYeah, I noticed the `git blame` last night when writing the patch that you had originally wrote the code around 2017, and that the duplication was introduced in a patch committed by others around 2018. I was hoping that you, as the original author, or somebody involved in the 2018 patch, might have a deeper understanding of what's being done and volunteer to clean up the comments.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Jun 2020 09:23:07 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Small code cleanup" }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Yeah, I noticed the `git blame` last night when writing the patch that you had originally wrote the code around 2017, and that the duplication was introduced in a patch committed by others around 2018. I was hoping that you, as the original author, or somebody involved in the 2018 patch, might have a deeper understanding of what's being done and volunteer to clean up the comments.\n\nI don't think there's any deep dark mystery here. We have a collection of\nthings we need to do, each one applying to some subset of relkinds, and\nthe issue is how to express the control logic in a maintainable and\nnot-too-confusing way. Unfortunately people have pasted in new things\nwith little focus on \"not too confusing\" and more focus on \"how can I make\nthis individual patch as short as possible\". It's probably time to take a\nstep back and refactor.\n\nMy immediate annoyance was that the \"Finish printing the footer\ninformation about a table\" comment has been made a lie by adding\npartitioned indexes to the set of relkinds handled; I can cope with\nconsidering a matview to be a table, but surely an index is not. 
Plus, if\npartitioned indexes need to be handled here, why not also regular indexes?\nThe lack of any comments explaining this is really not good.\n\nI'm inclined to think that maybe having that overall if-test just after\nthat comment is obsolete, and we ought to break it down into separate\nsegments. For instance there's no obvious reason why the first\n\"print foreign server name\" stanza should be inside that if-test;\nand the sections related to partitioning would be better off restricted\nto relkinds that, um, can have partitions.\n\nI have to admit that I don't any longer see what the connection is\nbetween the two \"footer information about a table\" sections. Maybe\nit was more obvious before all the partitioning stuff got shoved in,\nor maybe there never was any essential connection.\n\nAnyway the whole thing is overdue for a cosmetic workover. Do you\nwant to have a go at that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jun 2020 12:59:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Small code cleanup" }, { "msg_contents": "\n\n> On Jun 1, 2020, at 9:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> Yeah, I noticed the `git blame` last night when writing the patch that you had originally wrote the code around 2017, and that the duplication was introduced in a patch committed by others around 2018. I was hoping that you, as the original author, or somebody involved in the 2018 patch, might have a deeper understanding of what's being done and volunteer to clean up the comments.\n> \n> I don't think there's any deep dark mystery here. We have a collection of\n> things we need to do, each one applying to some subset of relkinds, and\n> the issue is how to express the control logic in a maintainable and\n> not-too-confusing way. 
Unfortunately people have pasted in new things\n> with little focus on \"not too confusing\" and more focus on \"how can I make\n> this individual patch as short as possible\". It's probably time to take a\n> step back and refactor.\n> \n> My immediate annoyance was that the \"Finish printing the footer\n> information about a table\" comment has been made a lie by adding\n> partitioned indexes to the set of relkinds handled; I can cope with\n> considering a matview to be a table, but surely an index is not. Plus, if\n> partitioned indexes need to be handled here, why not also regular indexes?\n> The lack of any comments explaining this is really not good.\n> \n> I'm inclined to think that maybe having that overall if-test just after\n> that comment is obsolete, and we ought to break it down into separate\n> segments. For instance there's no obvious reason why the first\n> \"print foreign server name\" stanza should be inside that if-test;\n> and the sections related to partitioning would be better off restricted\n> to relkinds that, um, can have partitions.\n> \n> I have to admit that I don't any longer see what the connection is\n> between the two \"footer information about a table\" sections. Maybe\n> it was more obvious before all the partitioning stuff got shoved in,\n> or maybe there never was any essential connection.\n> \n> Anyway the whole thing is overdue for a cosmetic workover. Do you\n> want to have a go at that?\n\nOk, sure, I'll see if I can clean that up. I ran into this while doing some refactoring of about 160 files, so I wasn't really focused on this particular file, or its features.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Jun 2020 10:07:26 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Small code cleanup" } ]
[ { "msg_contents": "Hi all,\n\nWhen reading pg_stat_replication doc of PG13, I thought it's better to\nmention that tracking of spilled transactions works only for logical\nreplication like we already mentioned about replication lag tracking:\n\n <para>\n Lag times work automatically for physical replication. Logical decoding\n plugins may optionally emit tracking messages; if they do not, the tracking\n mechanism will simply display NULL lag.\n </para>\n\nWhat do you think? Please find attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 2 Jun 2020 12:40:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Small doc improvement about spilled txn tracking" }, { "msg_contents": "On Tue, Jun 2, 2020 at 9:10 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi all,\n>\n> When reading pg_stat_replication doc of PG13, I thought it's better to\n> mention that tracking of spilled transactions works only for logical\n> replication like we already mentioned about replication lag tracking:\n>\n> <para>\n> Lag times work automatically for physical replication. 
Logical decoding\n> plugins may optionally emit tracking messages; if they do not, the tracking\n> mechanism will simply display NULL lag.\n> </para>\n>\n> What do you think?\n>\n\n+1.\n\n> Please find attached patch.\n>\n\nOn a quick look, it seems fine but I will look in more detail and let\nyou know if I have any feedback.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 10:22:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small doc improvement about spilled txn tracking" }, { "msg_contents": "On Tue, Jun 2, 2020 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 2, 2020 at 9:10 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n>\n> > Please find attached patch.\n> >\n>\n> On a quick look, it seems fine but I will look in more detail and let\n> you know if I have any feedback.\n>\n\nI am not sure if we need to add \"Logical decoding plugins may\noptionally emit tracking message.\" as the stats behavior should be the\nsame for decoding plugin and logical replication. 
Apart from removing\nthis line, I have made a few other minor changes, see what you think\nof attached?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 2 Jun 2020 11:19:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small doc improvement about spilled txn tracking" }, { "msg_contents": "On Tue, 2 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 2, 2020 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 2, 2020 at 9:10 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> >\n> > > Please find attached patch.\n> > >\n> >\n> > On a quick look, it seems fine but I will look in more detail and let\n> > you know if I have any feedback.\n> >\n>\n> I am not sure if we need to add \"Logical decoding plugins may\n> optionally emit tracking message.\" as the stats behavior should be the\n> same for decoding plugin and logical replication. Apart from removing\n> this line, I have made a few other minor changes, see what you think\n> of attached?\n>\n\nI'm okay with removing that sentence. 
The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 14:59:55 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Small doc improvement about spilled txn tracking" }, { "msg_contents": "On Tue, Jun 2, 2020 at 11:30 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 2 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 2, 2020 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 2, 2020 at 9:10 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > >\n> > > > Please find attached patch.\n> > > >\n> > >\n> > > On a quick look, it seems fine but I will look in more detail and let\n> > > you know if I have any feedback.\n> > >\n> >\n> > I am not sure if we need to add \"Logical decoding plugins may\n> > optionally emit tracking message.\" as the stats behavior should be the\n> > same for decoding plugin and logical replication. Apart from removing\n> > this line, I have made a few other minor changes, see what you think\n> > of attached?\n> >\n>\n> I'm okay with removing that sentence. 
The patch looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 11:45:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small doc improvement about spilled txn tracking" }, { "msg_contents": "On Tue, 2 Jun 2020 at 15:15, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 2, 2020 at 11:30 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 2 Jun 2020 at 14:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 2, 2020 at 10:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jun 2, 2020 at 9:10 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > >\n> > > > > Please find attached patch.\n> > > > >\n> > > >\n> > > > On a quick look, it seems fine but I will look in more detail and let\n> > > > you know if I have any feedback.\n> > > >\n> > >\n> > > I am not sure if we need to add \"Logical decoding plugins may\n> > > optionally emit tracking message.\" as the stats behavior should be the\n> > > same for decoding plugin and logical replication. Apart from removing\n> > > this line, I have made a few other minor changes, see what you think\n> > > of attached?\n> > >\n> >\n> > I'm okay with removing that sentence. The patch looks good to me.\n> >\n>\n> Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 15:18:15 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Small doc improvement about spilled txn tracking" } ]
[ { "msg_contents": "Hi all,\n\nTracking of spilled transactions has been introduced to PG13. These\nnew statistics values, spill_txns, spill_count, and spill_bytes, are\ncumulative total values unlike other statistics values in\npg_stat_replication. How can we reset these values? We can reset\nstatistics values in other statistics views using by\npg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\nthe only option to reset spilled transactions is to restart logical\nreplication but it's surely high cost.\n\nIt might have been discussed during development but it's worth having\na SQL function to reset these statistics?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 15:17:36 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "At Tue, 2 Jun 2020 15:17:36 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> Hi all,\n> \n> Tracking of spilled transactions has been introduced to PG13. These\n> new statistics values, spill_txns, spill_count, and spill_bytes, are\n> cumulative total values unlike other statistics values in\n> pg_stat_replication. How can we reset these values? We can reset\n> statistics values in other statistics views using by\n> pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\n> the only option to reset spilled transactions is to restart logical\n> replication but it's surely high cost.\n> \n> It might have been discussed during development but it's worth having\n> a SQL function to reset these statistics?\n\nActually, I don't see pg_stat_reset() useful so much except for our\nregression test (or might be rather harmful for monitoring aids). 
So\nI doubt the usefulness of the feature, but having it makes things more\nconsistent.\n\nAnyway I think the most significant point of implementing the feature\nwould be user interface. Adding pg_stat_replication_reset(pid int)\ndoesn't seem to be a good thing to do..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 02 Jun 2020 16:00:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "\n\nOn 2020/06/02 16:00, Kyotaro Horiguchi wrote:\n> At Tue, 2 Jun 2020 15:17:36 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n>> Hi all,\n>>\n>> Tracking of spilled transactions has been introduced to PG13. These\n>> new statistics values, spill_txns, spill_count, and spill_bytes, are\n>> cumulative total values unlike other statistics values in\n>> pg_stat_replication.\n\nBasically I don't think it's good design to mix dynamic and collected\nstats in one view. It might be better to separate them into different\nnew stats view. It's too late to add new stats view for v13, though....\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Jun 2020 17:22:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi all,\n>\n> Tracking of spilled transactions has been introduced to PG13. These\n> new statistics values, spill_txns, spill_count, and spill_bytes, are\n> cumulative total values unlike other statistics values in\n> pg_stat_replication. How can we reset these values? 
We can reset\n> statistics values in other statistics views using by\n> pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\n> the only option to reset spilled transactions is to restart logical\n> replication but it's surely high cost.\n>\n\nI see your point but I don't see a pressing need for such a function\nfor PG13. Basically, these counters will be populated when we have\nlarge transactions in the system so not sure how much is the use case\nfor such a function. Note that we need to add additional column\nstats_reset in pg_stat_replication view as well similar to what we\nhave in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\nbig reason for not having such a function for PG14.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 15:04:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 2 Jun 2020 at 17:22, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/06/02 16:00, Kyotaro Horiguchi wrote:\n> > At Tue, 2 Jun 2020 15:17:36 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> >> Hi all,\n> >>\n> >> Tracking of spilled transactions has been introduced to PG13. These\n> >> new statistics values, spill_txns, spill_count, and spill_bytes, are\n> >> cumulative total values unlike other statistics values in\n> >> pg_stat_replication.\n>\n> Basically I don't think it's good design to mix dynamic and collected\n> stats in one view. It might be better to separate them into different\n> new stats view. 
It's too late to add new stats view for v13, though....\n\nYeah, actually I had the same impression when studying this feature.\nAnother benefit of having a separate view for such statistics would be\nthat it can also support logical decoding invoked by SQL interface.\nCurrently, reorder buffer always tracks statistics of spilled\ntransactions but we can see it only in using logical replication.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 18:38:36 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 2, 2020 at 1:52 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/02 16:00, Kyotaro Horiguchi wrote:\n> > At Tue, 2 Jun 2020 15:17:36 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> >> Hi all,\n> >>\n> >> Tracking of spilled transactions has been introduced to PG13. These\n> >> new statistics values, spill_txns, spill_count, and spill_bytes, are\n> >> cumulative total values unlike other statistics values in\n> >> pg_stat_replication.\n>\n> Basically I don't think it's good design to mix dynamic and collected\n> stats in one view. It might be better to separate them into different\n> new stats view.\n>\n\nI think this is worth considering but note that we already have a\nsimilar mix in other views like pg_stat_archiver (archived_count and\nfailed_count are dynamic whereas other columns are static). 
On the\none hand, it is good if we have a separate view for such dynamic\ninformation but OTOH users need to consult more views for replication\ninformation.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 15:10:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 2 Jun 2020 at 16:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 2 Jun 2020 15:17:36 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > Hi all,\n> >\n> > Tracking of spilled transactions has been introduced to PG13. These\n> > new statistics values, spill_txns, spill_count, and spill_bytes, are\n> > cumulative total values unlike other statistics values in\n> > pg_stat_replication. How can we reset these values? We can reset\n> > statistics values in other statistics views using by\n> > pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\n> > the only option to reset spilled transactions is to restart logical\n> > replication but it's surely high cost.\n> >\n> > It might have been discussed during development but it's worth having\n> > a SQL function to reset these statistics?\n>\n> Actually, I don't see pg_stat_reset() useful so much except for our\n> regression test (or might be rather harmful for monitoring aids). So\n> I doubt the usefulness of the feature, but having it makes things more\n> consistent.\n\nIMO these reset functions are useful for verifications. 
I often use\nthem before starting performance evaluations.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 2 Jun 2020 22:24:16 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi all,\n> >\n> > Tracking of spilled transactions has been introduced to PG13. These\n> > new statistics values, spill_txns, spill_count, and spill_bytes, are\n> > cumulative total values unlike other statistics values in\n> > pg_stat_replication. How can we reset these values? We can reset\n> > statistics values in other statistics views using by\n> > pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\n> > the only option to reset spilled transactions is to restart logical\n> > replication but it's surely high cost.\n> >\n>\n> I see your point but I don't see a pressing need for such a function\n> for PG13. Basically, these counters will be populated when we have\n> large transactions in the system so not sure how much is the use case\n> for such a function. Note that we need to add additional column\n> stats_reset in pg_stat_replication view as well similar to what we\n> have in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\n> big reason for not having such a function for PG14.\n\nOk. I think the reset function is mostly for evaluations or rare\ncases. 
In either case, since it's not an urgent case we can postpone\nit to PG14 if necessary.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 9 Jun 2020 16:11:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Hi all,\n> > >\n> > > Tracking of spilled transactions has been introduced to PG13. These\n> > > new statistics values, spill_txns, spill_count, and spill_bytes, are\n> > > cumulative total values unlike other statistics values in\n> > > pg_stat_replication. How can we reset these values? We can reset\n> > > statistics values in other statistics views using by\n> > > pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me that\n> > > the only option to reset spilled transactions is to restart logical\n> > > replication but it's surely high cost.\n>\n\nYou just have to \"bounce\" the worker though, right? You don't have to\nactually restart logical replication, just disconnect and reconnect?\n\n\n> I see your point but I don't see a pressing need for such a function\n> > for PG13. Basically, these counters will be populated when we have\n> > large transactions in the system so not sure how much is the use case\n> > for such a function. Note that we need to add additional column\n> > stats_reset in pg_stat_replication view as well similar to what we\n> > have in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\n> > big reason for not having such a function for PG14.\n>\n> Ok. 
I think the reset function is mostly for evaluations or rare\n> cases. In either case, since it's not an urgent case we can postpone\n> it to PG14 if necessary.\n>\n\nReading through this thread, I agree that it's kind of weird to keep\ncumulative stats mixed with non-cumulative stats. (it always irks me, for\nexample, that we have numbackends in pg_stat_database which behaves\ndifferent from every other column in it)\n\nHowever, I don't see how they *are* cumulative. They are only cumulative\nwhile the client is connected -- as soon as it disconnects they go away. In\nthat regard, they're more like the pg_stat_progress_xyz views for example.\n\nWhich makes it mostly useless for long-term tracking anyway. Because no\nmatter which way you snapshot it, you will lose data.\n\nISTM the logical places to keep cumulative stats would be\npg_replication_slots? (Or go create a pg_stat_replication_slots?) That is,\nthat the logical grouping of these statistics for long-term is the\nreplication slot, not the walsender?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi all,\n> >\n> > Tracking of spilled transactions has been introduced to PG13. These\n> > new statistics values, spill_txns, spill_count, and spill_bytes, are\n> > cumulative total values unlike other statistics values in\n> > pg_stat_replication. How can we reset these values? We can reset\n> > statistics values in other statistics views using by\n> > pg_stat_reset_shared(), pg_stat_reset() and so on. 
It seems to me that\n> > the only option to reset spilled transactions is to restart logical\n> > replication but it's surely high cost.You just have to \"bounce\" the worker though, right? You don't have to actually restart logical replication, just disconnect and reconnect?> I see your point but I don't see a pressing need for such a function\n> for PG13.  Basically, these counters will be populated when we have\n> large transactions in the system so not sure how much is the use case\n> for such a function. Note that we need to add additional column\n> stats_reset in pg_stat_replication view as well similar to what we\n> have in pg_stat_archiver and pg_stat_bgwriter.  OTOH, I don't see any\n> big reason for not having such a function for PG14.\n\nOk. I think the reset function is mostly for evaluations or rare\ncases. In either case, since it's not an urgent case we can postpone\nit to PG14 if necessary.Reading through this thread, I agree that it's kind of weird to keep cumulative stats mixed with non-cumulative stats. (it always irks me, for example, that we have numbackends in pg_stat_database which behaves different from every other column in it)However, I don't see how they *are* cumulative. They are only cumulative while the client is connected -- as soon as it disconnects they go away. In that regard, they're more like the pg_stat_progress_xyz views for example.Which makes it mostly useless for long-term tracking anyway. Because no matter which way you snapshot it, you will lose data.ISTM the logical places to keep cumulative stats would be pg_replication_slots? (Or go create a pg_stat_replication_slots?) 
That is, that the logical grouping of these statistics for long-term is the replication slot, not the walsender?--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 9 Jun 2020 10:24:21 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 9, 2020 at 1:54 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > I see your point but I don't see a pressing need for such a function\n>> > for PG13. Basically, these counters will be populated when we have\n>> > large transactions in the system so not sure how much is the use case\n>> > for such a function. Note that we need to add additional column\n>> > stats_reset in pg_stat_replication view as well similar to what we\n>> > have in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\n>> > big reason for not having such a function for PG14.\n>>\n>> Ok. I think the reset function is mostly for evaluations or rare\n>> cases. In either case, since it's not an urgent case we can postpone\n>> it to PG14 if necessary.\n>\n>\n> Reading through this thread, I agree that it's kind of weird to keep cumulative stats mixed with non-cumulative stats. (it always irks me, for example, that we have numbackends in pg_stat_database which behaves different from every other column in it)\n>\n> However, I don't see how they *are* cumulative. They are only cumulative while the client is connected -- as soon as it disconnects they go away. In that regard, they're more like the pg_stat_progress_xyz views for example.\n>\n> Which makes it mostly useless for long-term tracking anyway. 
Because no matter which way you snapshot it, you will lose data.\n>\n> ISTM the logical places to keep cumulative stats would be pg_replication_slots? (Or go create a pg_stat_replication_slots?) That is, that the logical grouping of these statistics for long-term is the replication slot, not the walsender?\n>\n\nI think I see one advantage of displaying these stats at slot level.\nCurrently, we won't be able to see these stats when we use SQL\nInterface APIs (like pg_logical_get_slot_changes) to decode the WAL\nbut if we display at slot level, then we should be able to see it.\n\nI would prefer to display it in pg_replication_slots just to avoid\ncreating more views but OTOH, if a new view like\npg_stat_replication_slots sounds better place for these stats then I\nam fine with it too.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Jun 2020 09:20:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 9 Jun 2020 at 17:24, Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n>\n> On Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n>> > <masahiko.sawada@2ndquadrant.com> wrote:\n>> > >\n>> > > Hi all,\n>> > >\n>> > > Tracking of spilled transactions has been introduced to PG13. These\n>> > > new statistics values, spill_txns, spill_count, and spill_bytes, are\n>> > > cumulative total values unlike other statistics values in\n>> > > pg_stat_replication. How can we reset these values? We can reset\n>> > > statistics values in other statistics views using by\n>> > > pg_stat_reset_shared(), pg_stat_reset() and so on. 
It seems to me that\n>> > > the only option to reset spilled transactions is to restart logical\n>> > > replication but it's surely high cost.\n>\n>\n> You just have to \"bounce\" the worker though, right? You don't have to actually restart logical replication, just disconnect and reconnect?\n\nRight.\n\n>\n>\n>> > I see your point but I don't see a pressing need for such a function\n>> > for PG13. Basically, these counters will be populated when we have\n>> > large transactions in the system so not sure how much is the use case\n>> > for such a function. Note that we need to add additional column\n>> > stats_reset in pg_stat_replication view as well similar to what we\n>> > have in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\n>> > big reason for not having such a function for PG14.\n>>\n>> Ok. I think the reset function is mostly for evaluations or rare\n>> cases. In either case, since it's not an urgent case we can postpone\n>> it to PG14 if necessary.\n>\n>\n> Reading through this thread, I agree that it's kind of weird to keep cumulative stats mixed with non-cumulative stats. (it always irks me, for example, that we have numbackends in pg_stat_database which behaves different from every other column in it)\n>\n> However, I don't see how they *are* cumulative. They are only cumulative while the client is connected -- as soon as it disconnects they go away. In that regard, they're more like the pg_stat_progress_xyz views for example.\n>\n> Which makes it mostly useless for long-term tracking anyway. Because no matter which way you snapshot it, you will lose data.\n>\n> ISTM the logical places to keep cumulative stats would be pg_replication_slots? (Or go create a pg_stat_replication_slots?) That is, that the logical grouping of these statistics for long-term is the replication slot, not the walsender?\n\nI personally prefer to display these values in pg_replication_slots.\nIf we create a new stats view, it's only for logical replication\nslots? 
Or displaying both types of slots as physical replication slots\nmight have statistics in the future?\n\nIf we move these values to replication slots, I think we can change\nthe code so that these statistics are managed by replication slots\n(e.g. ReplicationSlot struct). Once ReplicationSlot has these values,\nwe can keep them beyond reconnections or multiple calls of SQL\ninterface functions. Of course, these values don’t need to be\npersisted.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 10 Jun 2020 16:00:45 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Jun 10, 2020 at 9:01 AM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n> On Tue, 9 Jun 2020 at 17:24, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> >\n> >\n> > On Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <\n> masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> >\n> >> > On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n> >> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >> > >\n> >> > > Hi all,\n> >> > >\n> >> > > Tracking of spilled transactions has been introduced to PG13. These\n> >> > > new statistics values, spill_txns, spill_count, and spill_bytes, are\n> >> > > cumulative total values unlike other statistics values in\n> >> > > pg_stat_replication. How can we reset these values? We can reset\n> >> > > statistics values in other statistics views using by\n> >> > > pg_stat_reset_shared(), pg_stat_reset() and so on. It seems to me\n> that\n> >> > > the only option to reset spilled transactions is to restart logical\n> >> > > replication but it's surely high cost.\n> >\n> >\n> > You just have to \"bounce\" the worker though, right? 
You don't have to\n> actually restart logical replication, just disconnect and reconnect?\n>\n> Right.\n>\n> >\n> >\n> >> > I see your point but I don't see a pressing need for such a function\n> >> > for PG13. Basically, these counters will be populated when we have\n> >> > large transactions in the system so not sure how much is the use case\n> >> > for such a function. Note that we need to add additional column\n> >> > stats_reset in pg_stat_replication view as well similar to what we\n> >> > have in pg_stat_archiver and pg_stat_bgwriter. OTOH, I don't see any\n> >> > big reason for not having such a function for PG14.\n> >>\n> >> Ok. I think the reset function is mostly for evaluations or rare\n> >> cases. In either case, since it's not an urgent case we can postpone\n> >> it to PG14 if necessary.\n> >\n> >\n> > Reading through this thread, I agree that it's kind of weird to keep\n> cumulative stats mixed with non-cumulative stats. (it always irks me, for\n> example, that we have numbackends in pg_stat_database which behaves\n> different from every other column in it)\n> >\n> > However, I don't see how they *are* cumulative. They are only cumulative\n> while the client is connected -- as soon as it disconnects they go away. In\n> that regard, they're more like the pg_stat_progress_xyz views for example.\n> >\n> > Which makes it mostly useless for long-term tracking anyway. Because no\n> matter which way you snapshot it, you will lose data.\n> >\n> > ISTM the logical places to keep cumulative stats would be\n> pg_replication_slots? (Or go create a pg_stat_replication_slots?) That is,\n> that the logical grouping of these statistics for long-term is the\n> replication slot, not the walsender?\n>\n> I personally prefer to display these values in pg_replication_slots.\n> If we create a new stats view, it's only for logical replication\n> slots? 
Or displaying both types of slots as physical replication slots\n> might have statistics in the future?\n>\n\nYeah, I think it's kind of a weird situation. There's already some things\nin pg_replication_slots that should probably be in a stat_ view, so if we\nwere to create one we would have to move those, and probably needlessly\nbreak things for people.\n\ni guess we could have separate views for logical and pysical slots since\nthere are things that only one of them will have. But there is that already\n-- the database for example, and xmins. Splitting that apart now should be\na bigger thing, but I don't think it's worth it.\n\n\nIf we move these values to replication slots, I think we can change\n> the code so that these statistics are managed by replication slots\n> (e.g. ReplicationSlot struct). Once ReplicationSlot has these values,\n> we can keep them beyond reconnections or multiple calls of SQL\n> interface functions. Of course, these values don’t need to be\n> persisted.\n>\n\nEh, why should they not be persisted? The comparison would be temp_files\nand temp_bytes in pg_stat_database, and those *are* persisted.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Jun 10, 2020 at 9:01 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:On Tue, 9 Jun 2020 at 17:24, Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n>\n> On Tue, Jun 9, 2020 at 9:12 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Tue, 2 Jun 2020 at 18:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> > On Tue, Jun 2, 2020 at 11:48 AM Masahiko Sawada\n>> > <masahiko.sawada@2ndquadrant.com> wrote:\n>> > >\n>> > > Hi all,\n>> > >\n>> > > Tracking of spilled transactions has been introduced to PG13. 
", "msg_date": "Wed, 10 Jun 2020 21:05:26 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 11, 2020 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Jun 10, 2020 at 9:01 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>\n>> If we move these values to replication slots, I think we can change\n>> the code so that these statistics are managed by replication slots\n>> (e.g. ReplicationSlot struct). Once ReplicationSlot has these values,\n>> we can keep them beyond reconnections or multiple calls of SQL\n>> interface functions. Of course, these values don’t need to be\n>> persisted.\n>\n>\n> Eh, why should they not be persisted?\n>\n\nBecause these stats correspond to a ReorderBuffer (logical\ndecoding), which is allocated fresh after a restart and has no\nrelation to what happened before the restart.\n\nNow, thinking about this again, I am not sure if these stats are\ndirectly related to slots. These are stats for logical decoding which\ncan be performed either via WALSender or a decoding plugin (via APIs).\nSo, why not have them displayed in a new view like pg_stat_logical (or\npg_stat_logical_decoding/pg_stat_logical_replication)? 
In future, we\nwill need to add similar stats for streaming of in-progress\ntransactions as well (see patch 0007-Track-statistics-for-streaming at\n[1]), so having a separate view for these doesn't sound illogical.\n\n> The comparison would be temp_files and temp_bytes in pg_stat_database, and those *are* persisted.\n\nI am not able to see a one-on-one mapping of those stats with what is\nbeing discussed here.\n\n[1] - https://www.postgresql.org/message-id/CAFiTN-vXQx_161WC-a9HvNaF25nwO%3DJJRpRdTtyfGQHbM3Bd1Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 09:00:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 12:35 AM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Wed, Jun 10, 2020 at 9:01 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >\n> >> If we move these values to replication slots, I think we can change\n> >> the code so that these statistics are managed by replication slots\n> >> (e.g. ReplicationSlot struct). Once ReplicationSlot has these values,\n> >> we can keep them beyond reconnections or multiple calls of SQL\n> >> interface functions. Of course, these values don’t need to be\n> >> persisted.\n> >\n> >\n> > Eh, why should they not be persisted?\n> >\n>\n> Because these stats are corresponding to a ReoderBuffer (logical\n> decoding) which will be allocated fresh after restart and have no\n> relation with what has happened before restart.\n\nI thought the same. But I now think there is no difference between\nreconnecting replication and server restart in terms of the logical\ndecoding context. 
Even if we persist these values, we might want to\nreset these values after crash recovery, as the stats collector does.\n\n>\n> Now, thinking about this again, I am not sure if these stats are\n> directly related to slots. These are stats for logical decoding which\n> can be performed either via WALSender or decoding plugin (via APIs).\n> So, why not have them displayed in a new view like pg_stat_logical (or\n> pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> will need to add similar stats for streaming of in-progress\n> transactions as well (see patch 0007-Track-statistics-for-streaming at\n> [1]), so having a separate view for these doesn't sound illogical.\n>\n\nI think we need to decide how long we want to retain these statistics\nvalues. That is, if we were to have such a pg_stat_logical view, these\nvalues would remain only until logical decoding finished, since I think the\nview would display only running logical decoding. OTOH, if we were to\ncorrespond these stats to slots, these values would remain beyond\nmultiple logical decoding SQL API calls.\n\nI think one of the main use-cases of these statistics is the tuning of\nlogical_decoding_work_mem. This seems similar to a combination of\npg_stat_database.temp_files/temp_bytes and work_mem. From this\nperspective, I guess it’s useful for users if these values remain\nuntil the slots are removed or the server crashes. Given that the kinds\nof logical decoding statistics might grow, having a separate view\ndedicated to replication slots makes sense to me.\n\nFor updating these statistics, if we correspond these statistics to\nlogical decoding or replication slots, we can change the strategy of\nupdating statistics so that it doesn’t depend on the logical decoding\nplugin implementation. 
If updating statistics doesn’t affect\nperformance much, it’s better to track the statistics regardless of\nplugins.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Jun 2020 17:16:19 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Now, thinking about this again, I am not sure if these stats are\n> > directly related to slots. These are stats for logical decoding which\n> > can be performed either via WALSender or decoding plugin (via APIs).\n> > So, why not have them displayed in a new view like pg_stat_logical (or\n> > pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> > will need to add similar stats for streaming of in-progress\n> > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > [1]), so having a separate view for these doesn't sound illogical.\n> >\n>\n> I think we need to decide how long we want to remain these statistics\n> values. That is, if we were to have such pg_stat_logical view, these\n> values would remain until logical decoding finished since I think the\n> view would display only running logical decoding. OTOH, if we were to\n> correspond these stats to slots, these values would remain beyond\n> multiple logical decoding SQL API calls.\n>\n\nI thought of having these till the process that performs these\noperations exist. 
So for a WALSender, the stats will be valid until it\nis restarted for some reason, and when decoding is performed via a backend, the\nstats will be valid until the corresponding backend exits.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 14:40:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Now, thinking about this again, I am not sure if these stats are\n> > > directly related to slots. These are stats for logical decoding which\n> > > can be performed either via WALSender or decoding plugin (via APIs).\n> > > So, why not have them displayed in a new view like pg_stat_logical (or\n> > > pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> > > will need to add similar stats for streaming of in-progress\n> > > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > > [1]), so having a separate view for these doesn't sound illogical.\n> > >\n> >\n> > I think we need to decide how long we want to remain these statistics\n> > values. That is, if we were to have such pg_stat_logical view, these\n> > values would remain until logical decoding finished since I think the\n> > view would display only running logical decoding. OTOH, if we were to\n> > correspond these stats to slots, these values would remain beyond\n> > multiple logical decoding SQL API calls.\n> >\n>\n> I thought of having these till the process that performs these\n> operations exist. 
So for WALSender, the stats will be valid till it\n> is not restarted due to some reason or when performed via backend, the\n> stats will be valid till the corresponding backend exits.\n>\n\nThe number of rows of that view could be up to (max_backends +\nmax_wal_senders). Is that right? What if different backends used the\nsame replication slot one after the other?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Jun 2020 18:36:27 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Now, thinking about this again, I am not sure if these stats are\n> > > > directly related to slots. These are stats for logical decoding which\n> > > > can be performed either via WALSender or decoding plugin (via APIs).\n> > > > So, why not have them displayed in a new view like pg_stat_logical (or\n> > > > pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> > > > will need to add similar stats for streaming of in-progress\n> > > > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > > > [1]), so having a separate view for these doesn't sound illogical.\n> > > >\n> > >\n> > > I think we need to decide how long we want to remain these statistics\n> > > values. 
That is, if we were to have such pg_stat_logical view, these\n> > > values would remain until logical decoding finished since I think the\n> > > view would display only running logical decoding. OTOH, if we were to\n> > > correspond these stats to slots, these values would remain beyond\n> > > multiple logical decoding SQL API calls.\n> > >\n> >\n> > I thought of having these till the process that performs these\n> > operations exist. So for WALSender, the stats will be valid till it\n> > is not restarted due to some reason or when performed via backend, the\n> > stats will be valid till the corresponding backend exits.\n> >\n>\n> The number of rows of that view could be up to (max_backends +\n> max_wal_senders). Is that right? What if different backends used the\n> same replication slot one after the other?\n>\n\nYeah, it would be tricky if multiple slots are used by the same\nbackend. We could probably track the number of times decoding has\nhappened per session, which would probably help us in averaging the\nspill amount. If we think that the aim is to help users to tune\nlogical_decoding_work_mem to avoid frequent spilling or streaming, then\nhow would maintaining them at the slot level help? 
As you said previously\nwe could track it only for running logical decoding context but if we\ndo that then data will be temporary and the user needs to constantly\nmonitor the same to make sense of it but maybe that is fine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jun 2020 16:32:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 11 Jun 2020 at 20:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Now, thinking about this again, I am not sure if these stats are\n> > > > > directly related to slots. These are stats for logical decoding which\n> > > > > can be performed either via WALSender or decoding plugin (via APIs).\n> > > > > So, why not have them displayed in a new view like pg_stat_logical (or\n> > > > > pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> > > > > will need to add similar stats for streaming of in-progress\n> > > > > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > > > > [1]), so having a separate view for these doesn't sound illogical.\n> > > > >\n> > > >\n> > > > I think we need to decide how long we want to remain these statistics\n> > > > values. That is, if we were to have such pg_stat_logical view, these\n> > > > values would remain until logical decoding finished since I think the\n> > > > view would display only running logical decoding. 
OTOH, if we were to\n> > > > correspond these stats to slots, these values would remain beyond\n> > > > multiple logical decoding SQL API calls.\n> > > >\n> > >\n> > > I thought of having these till the process that performs these\n> > > operations exist. So for WALSender, the stats will be valid till it\n> > > is not restarted due to some reason or when performed via backend, the\n> > > stats will be valid till the corresponding backend exits.\n> > >\n> >\n> > The number of rows of that view could be up to (max_backends +\n> > max_wal_senders). Is that right? What if different backends used the\n> > same replication slot one after the other?\n> >\n>\n> Yeah, it would be tricky if multiple slots are used by the same\n> backend. We could probably track the number of times decoding has\n> happened by the session that will probably help us in averaging the\n> spill amount. If we think that the aim is to help users to tune\n> logical_decoding_work_mem to avoid frequent spilling or streaming then\n> how would maintaining at slot level will help?\n\nSince the logical decoding intermediate files are written under per-slot\ndirectories, I thought that corresponding these statistics to\nreplication slots is also understandable for users. I was thinking of\nsomething like a pg_stat_logical_replication_slot view, which shows\nslot_name and statistics of only logical replication slots. The view\nalways shows as many rows as there are existing replication slots, regardless of\nwhether logical decoding is running. I think there is no big difference in\nhow users use these statistics values between maintaining at slot\nlevel and at logical decoding level.\n\nIn the logical replication case, since we generally don’t support setting\na different logical_decoding_work_mem per wal sender, every wal sender\nwill decode the same WAL stream with the same setting, meaning they\nwill similarly spill intermediate files. Maybe the same is true for\nstatistics of streaming. 
So having these statistics per logical\nreplication might not help as of now.\n\n> As you said previously\n> we could track it only for running logical decoding context but if we\n> do that then data will be temporary and the user needs to constantly\n> monitor the same to make sense of it but maybe that is fine.\n\nAgreed, in general, it's better not to frequently reset cumulative\nvalues. I personally would like these statistics to be valid even\nafter the process executing logical decoding exits (or even after\nserver restart), and to reset them as needed.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 11 Jun 2020 23:09:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 11, 2020 at 7:39 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 11 Jun 2020 at 20:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > Now, thinking about this again, I am not sure if these stats are\n> > > > > > directly related to slots. These are stats for logical decoding which\n> > > > > > can be performed either via WALSender or decoding plugin (via APIs).\n> > > > > > So, why not have them displayed in a new view like pg_stat_logical (or\n> > > > > > pg_stat_logical_decoding/pg_stat_logical_replication)? 
In future, we\n> > > > > > will need to add similar stats for streaming of in-progress\n> > > > > > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > > > > > [1]), so having a separate view for these doesn't sound illogical.\n> > > > > >\n> > > > >\n> > > > > I think we need to decide how long we want to remain these statistics\n> > > > > values. That is, if we were to have such pg_stat_logical view, these\n> > > > > values would remain until logical decoding finished since I think the\n> > > > > view would display only running logical decoding. OTOH, if we were to\n> > > > > correspond these stats to slots, these values would remain beyond\n> > > > > multiple logical decoding SQL API calls.\n> > > > >\n> > > >\n> > > > I thought of having these till the process that performs these\n> > > > operations exist. So for WALSender, the stats will be valid till it\n> > > > is not restarted due to some reason or when performed via backend, the\n> > > > stats will be valid till the corresponding backend exits.\n> > > >\n> > >\n> > > The number of rows of that view could be up to (max_backends +\n> > > max_wal_senders). Is that right? What if different backends used the\n> > > same replication slot one after the other?\n> > >\n> >\n> > Yeah, it would be tricky if multiple slots are used by the same\n> > backend. We could probably track the number of times decoding has\n> > happened by the session that will probably help us in averaging the\n> > spill amount. If we think that the aim is to help users to tune\n> > logical_decoding_work_mem to avoid frequent spilling or streaming then\n> > how would maintaining at slot level will help?\n>\n> Since the logical decoding intermediate files are written at per slots\n> directory, I thought that corresponding these statistics to\n> replication slots is also understandable for users.\n>\n\nWhat I wanted to know is how will it help users to tune\nlogical_decoding_work_mem? 
Different backends can process from the\nsame slot, so it is not clear how user will be able to make any\nmeaning out of those stats. OTOH, it is easier to see how to make\nmeaning of these stats if we display them w.r.t process. Basically,\nwe have spill_count and spill_size which can be used to tune\nlogical_decoding_work_mem and also the activity of spilling happens at\nprocess level, so it sounds like one-to-one mapping. I am not telling\nto rule out maintaining a slot level but trying to see if we can come\nup with a clear definition.\n\n> I was thinking\n> something like pg_stat_logical_replication_slot view which shows\n> slot_name and statistics of only logical replication slots. The view\n> always shows rows as many as existing replication slots regardless of\n> logical decoding being running. I think there is no big difference in\n> how users use these statistics values between maintaining at slot\n> level and at logical decoding level.\n>\n> In logical replication case, since we generally don’t support setting\n> different logical_decoding_work_mem per wal senders, every wal sender\n> will decode the same WAL stream with the same setting, meaning they\n> will similarly spill intermediate files.\n>\n\nI am not sure this will be true in every case. 
We do have a\nslot_advance functionality, so some plugin might use that and that\nwill lead to different files getting spilled.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jun 2020 08:51:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "\n\nOn 2020/06/12 12:21, Amit Kapila wrote:\n> On Thu, Jun 11, 2020 at 7:39 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Thu, 11 Jun 2020 at 20:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n>>> <masahiko.sawada@2ndquadrant.com> wrote:\n>>>>\n>>>> On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>>\n>>>>> On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n>>>>> <masahiko.sawada@2ndquadrant.com> wrote:\n>>>>>>\n>>>>>> On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>> Now, thinking about this again, I am not sure if these stats are\n>>>>>>> directly related to slots. These are stats for logical decoding which\n>>>>>>> can be performed either via WALSender or decoding plugin (via APIs).\n>>>>>>> So, why not have them displayed in a new view like pg_stat_logical (or\n>>>>>>> pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n>>>>>>> will need to add similar stats for streaming of in-progress\n>>>>>>> transactions as well (see patch 0007-Track-statistics-for-streaming at\n>>>>>>> [1]), so having a separate view for these doesn't sound illogical.\n>>>>>>>\n>>>>>>\n>>>>>> I think we need to decide how long we want to remain these statistics\n>>>>>> values. That is, if we were to have such pg_stat_logical view, these\n>>>>>> values would remain until logical decoding finished since I think the\n>>>>>> view would display only running logical decoding. 
OTOH, if we were to\n>>>>>> correspond these stats to slots, these values would remain beyond\n>>>>>> multiple logical decoding SQL API calls.\n>>>>>>\n>>>>>\n>>>>> I thought of having these till the process that performs these\n>>>>> operations exist. So for WALSender, the stats will be valid till it\n>>>>> is not restarted due to some reason or when performed via backend, the\n>>>>> stats will be valid till the corresponding backend exits.\n>>>>>\n>>>>\n>>>> The number of rows of that view could be up to (max_backends +\n>>>> max_wal_senders). Is that right? What if different backends used the\n>>>> same replication slot one after the other?\n>>>>\n>>>\n>>> Yeah, it would be tricky if multiple slots are used by the same\n>>> backend. We could probably track the number of times decoding has\n>>> happened by the session that will probably help us in averaging the\n>>> spill amount. If we think that the aim is to help users to tune\n>>> logical_decoding_work_mem to avoid frequent spilling or streaming then\n>>> how would maintaining at slot level will help?\n>>\n>> Since the logical decoding intermediate files are written at per slots\n>> directory, I thought that corresponding these statistics to\n>> replication slots is also understandable for users.\n>>\n> \n> What I wanted to know is how will it help users to tune\n> logical_decoding_work_mem? Different backends can process from the\n> same slot, so it is not clear how user will be able to make any\n> meaning out of those stats. OTOH, it is easier to see how to make\n> meaning of these stats if we display them w.r.t process. Basically,\n> we have spill_count and spill_size which can be used to tune\n> logical_decoding_work_mem and also the activity of spilling happens at\n> process level, so it sounds like one-to-one mapping. 
I am not telling\n> to rule out maintaining a slot level but trying to see if we can come\n> up with a clear definition.\n> \n>> I was thinking\n>> something like pg_stat_logical_replication_slot view which shows\n>> slot_name and statistics of only logical replication slots. The view\n>> always shows rows as many as existing replication slots regardless of\n>> logical decoding being running. I think there is no big difference in\n>> how users use these statistics values between maintaining at slot\n>> level and at logical decoding level.\n>>\n>> In logical replication case, since we generally don’t support setting\n>> different logical_decoding_work_mem per wal senders, every wal sender\n>> will decode the same WAL stream with the same setting, meaning they\n>> will similarly spill intermediate files.\n\nI was thinking we support that. We can create multiple replication users\nwith different logical_decoding_work_mem settings. Also each walsender\ncan use logical_decoding_work_mem configured in its user. 
No?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 12 Jun 2020 12:56:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, 12 Jun 2020 at 12:56, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/06/12 12:21, Amit Kapila wrote:\n> > On Thu, Jun 11, 2020 at 7:39 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Thu, 11 Jun 2020 at 20:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>\n> >>> On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n> >>> <masahiko.sawada@2ndquadrant.com> wrote:\n> >>>>\n> >>>> On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>>\n> >>>>> On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> >>>>> <masahiko.sawada@2ndquadrant.com> wrote:\n> >>>>>>\n> >>>>>> On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>>>>\n> >>>>>>>\n> >>>>>>> Now, thinking about this again, I am not sure if these stats are\n> >>>>>>> directly related to slots. These are stats for logical decoding which\n> >>>>>>> can be performed either via WALSender or decoding plugin (via APIs).\n> >>>>>>> So, why not have them displayed in a new view like pg_stat_logical (or\n> >>>>>>> pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> >>>>>>> will need to add similar stats for streaming of in-progress\n> >>>>>>> transactions as well (see patch 0007-Track-statistics-for-streaming at\n> >>>>>>> [1]), so having a separate view for these doesn't sound illogical.\n> >>>>>>>\n> >>>>>>\n> >>>>>> I think we need to decide how long we want to remain these statistics\n> >>>>>> values. 
That is, if we were to have such pg_stat_logical view, these\n> >>>>>> values would remain until logical decoding finished since I think the\n> >>>>>> view would display only running logical decoding. OTOH, if we were to\n> >>>>>> correspond these stats to slots, these values would remain beyond\n> >>>>>> multiple logical decoding SQL API calls.\n> >>>>>>\n> >>>>>\n> >>>>> I thought of having these till the process that performs these\n> >>>>> operations exist. So for WALSender, the stats will be valid till it\n> >>>>> is not restarted due to some reason or when performed via backend, the\n> >>>>> stats will be valid till the corresponding backend exits.\n> >>>>>\n> >>>>\n> >>>> The number of rows of that view could be up to (max_backends +\n> >>>> max_wal_senders). Is that right? What if different backends used the\n> >>>> same replication slot one after the other?\n> >>>>\n> >>>\n> >>> Yeah, it would be tricky if multiple slots are used by the same\n> >>> backend. We could probably track the number of times decoding has\n> >>> happened by the session that will probably help us in averaging the\n> >>> spill amount. If we think that the aim is to help users to tune\n> >>> logical_decoding_work_mem to avoid frequent spilling or streaming then\n> >>> how would maintaining at slot level will help?\n> >>\n> >> Since the logical decoding intermediate files are written at per slots\n> >> directory, I thought that corresponding these statistics to\n> >> replication slots is also understandable for users.\n> >>\n> >\n> > What I wanted to know is how will it help users to tune\n> > logical_decoding_work_mem? Different backends can process from the\n> > same slot, so it is not clear how user will be able to make any\n> > meaning out of those stats. OTOH, it is easier to see how to make\n> > meaning of these stats if we display them w.r.t process. 
Basically,\n> > we have spill_count and spill_size which can be used to tune\n> > logical_decoding_work_mem and also the activity of spilling happens at\n> > process level, so it sounds like one-to-one mapping. I am not telling\n> > to rule out maintaining a slot level but trying to see if we can come\n> > up with a clear definition.\n> >\n> >> I was thinking\n> >> something like pg_stat_logical_replication_slot view which shows\n> >> slot_name and statistics of only logical replication slots. The view\n> >> always shows rows as many as existing replication slots regardless of\n> >> logical decoding being running. I think there is no big difference in\n> >> how users use these statistics values between maintaining at slot\n> >> level and at logical decoding level.\n> >>\n> >> In logical replication case, since we generally don’t support setting\n> >> different logical_decoding_work_mem per wal senders, every wal sender\n> >> will decode the same WAL stream with the same setting, meaning they\n> >> will similarly spill intermediate files.\n>\n> I was thinking we support that. We can create multiple replication users\n> with different logical_decoding_work_mem settings. Also each walsender\n> can use logical_decoding_work_mem configured in its user. No?\n>\n\nYes, you're right. 
I had missed that way.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 12 Jun 2020 13:28:32 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, 12 Jun 2020 at 12:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 11, 2020 at 7:39 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 11 Jun 2020 at 20:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jun 11, 2020 at 3:07 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 11 Jun 2020 at 18:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Jun 11, 2020 at 1:46 PM Masahiko Sawada\n> > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > > On Thu, 11 Jun 2020 at 12:30, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > Now, thinking about this again, I am not sure if these stats are\n> > > > > > > directly related to slots. These are stats for logical decoding which\n> > > > > > > can be performed either via WALSender or decoding plugin (via APIs).\n> > > > > > > So, why not have them displayed in a new view like pg_stat_logical (or\n> > > > > > > pg_stat_logical_decoding/pg_stat_logical_replication)? In future, we\n> > > > > > > will need to add similar stats for streaming of in-progress\n> > > > > > > transactions as well (see patch 0007-Track-statistics-for-streaming at\n> > > > > > > [1]), so having a separate view for these doesn't sound illogical.\n> > > > > > >\n> > > > > >\n> > > > > > I think we need to decide how long we want to remain these statistics\n> > > > > > values. 
That is, if we were to have such pg_stat_logical view, these\n> > > > > > values would remain until logical decoding finished since I think the\n> > > > > > view would display only running logical decoding. OTOH, if we were to\n> > > > > > correspond these stats to slots, these values would remain beyond\n> > > > > > multiple logical decoding SQL API calls.\n> > > > > >\n> > > > >\n> > > > > I thought of having these till the process that performs these\n> > > > > operations exist. So for WALSender, the stats will be valid till it\n> > > > > is not restarted due to some reason or when performed via backend, the\n> > > > > stats will be valid till the corresponding backend exits.\n> > > > >\n> > > >\n> > > > The number of rows of that view could be up to (max_backends +\n> > > > max_wal_senders). Is that right? What if different backends used the\n> > > > same replication slot one after the other?\n> > > >\n> > >\n> > > Yeah, it would be tricky if multiple slots are used by the same\n> > > backend. We could probably track the number of times decoding has\n> > > happened by the session that will probably help us in averaging the\n> > > spill amount. If we think that the aim is to help users to tune\n> > > logical_decoding_work_mem to avoid frequent spilling or streaming then\n> > > how would maintaining at slot level will help?\n> >\n> > Since the logical decoding intermediate files are written at per slots\n> > directory, I thought that corresponding these statistics to\n> > replication slots is also understandable for users.\n> >\n>\n> What I wanted to know is how will it help users to tune\n> logical_decoding_work_mem? Different backends can process from the\n> same slot, so it is not clear how user will be able to make any\n> meaning out of those stats.\n\nI thought that the user needs to constantly monitor them during one\nprocess is executing logical decoding and to see the increments. 
I\nmight not fully understand but I guess the same is true for displaying\nthem w.r.t. process. Since a process can do logical decoding several\ntimes using the same slot with a different setting, the user will need\nto monitor them several times.\n\n> OTOH, it is easier to see how to make\n> meaning of these stats if we display them w.r.t process. Basically,\n> we have spill_count and spill_size which can be used to tune\n> logical_decoding_work_mem and also the activity of spilling happens at\n> process level, so it sounds like one-to-one mapping.\n\nDisplaying them w.r.t process also seems a good idea but I'm still\nunclear what to display and how long these values are valid. The view\nwill have the following columns for example?\n\n* pid\n* slot_name\n* spill_txns\n* spill_count\n* spill_bytes\n* exec_count\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 12 Jun 2020 14:50:12 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jun 12, 2020 at 11:20 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 12 Jun 2020 at 12:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Since the logical decoding intermediate files are written at per slots\n> > > directory, I thought that corresponding these statistics to\n> > > replication slots is also understandable for users.\n> > >\n> >\n> > What I wanted to know is how will it help users to tune\n> > logical_decoding_work_mem? Different backends can process from the\n> > same slot, so it is not clear how user will be able to make any\n> > meaning out of those stats.\n>\n> I thought that the user needs to constantly monitor them during one\n> process is executing logical decoding and to see the increments. 
I\n> might not fully understand but I guess the same is true for displaying\n> them w.r.t. process. Since a process can do logical decoding several\n> times using the same slot with a different setting, the user will need\n> to monitor them several times.\n>\n\nYeah, I think we might not be able to get exact measure but if we\ndivide total_size spilled by exec_count, we will get some rough idea\nof what should be the logical_decoding_work_mem for that particular\nsession. For ex. consider the logical_decoding_work_mem is 100bytes\nfor a particular backend and the size spilled by that backend is 100\nthen I think you can roughly keep it to 200bytes if you want to avoid\nspilling. Similarly one can compute its average value over multiple\nexecutions. Does this make sense to you?\n\n> > OTOH, it is easier to see how to make\n> > meaning of these stats if we display them w.r.t process. Basically,\n> > we have spill_count and spill_size which can be used to tune\n> > logical_decoding_work_mem and also the activity of spilling happens at\n> > process level, so it sounds like one-to-one mapping.\n>\n> Displaying them w.r.t process also seems a good idea but I'm still\n> unclear what to display and how long these values are valid.\n>\n\nI feel till the lifetime of a process if we want to display the values\nat process level but I am open to hear others (including yours) views\non this.\n\n> The view\n> will have the following columns for example?\n>\n> * pid\n> * slot_name\n> * spill_txns\n> * spill_count\n> * spill_bytes\n> * exec_count\n>\n\nYeah, these appear to be what I have in mind. 
Note that we can have\nmultiple entries of the same pid here because of slotname, there is\nsome value to display slotname but I am not completely sure if that is\na good idea but I am fine if you have a reason to include slotname?\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jun 2020 13:53:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Fri, Jun 12, 2020 at 11:20 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 12 Jun 2020 at 12:21, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > > Since the logical decoding intermediate files are written at per\n> slots\n> > > > directory, I thought that corresponding these statistics to\n> > > > replication slots is also understandable for users.\n> > > >\n> > >\n> > > What I wanted to know is how will it help users to tune\n> > > logical_decoding_work_mem? Different backends can process from the\n> > > same slot, so it is not clear how user will be able to make any\n> > > meaning out of those stats.\n> >\n> > I thought that the user needs to constantly monitor them during one\n> > process is executing logical decoding and to see the increments. I\n> > might not fully understand but I guess the same is true for displaying\n> > them w.r.t. process. Since a process can do logical decoding several\n> > times using the same slot with a different setting, the user will need\n> > to monitor them several times.\n> >\n>\n> Yeah, I think we might not be able to get exact measure but if we\n> divide total_size spilled by exec_count, we will get some rough idea\n> of what should be the logical_decoding_work_mem for that particular\n> session. For ex. 
consider the logical_decoding_work_mem is 100bytes\n> for a particular backend and the size spilled by that backend is 100\n> then I think you can roughly keep it to 200bytes if you want to avoid\n> spilling. Similarly one can compute its average value over multiple\n> executions. Does this make sense to you?\n>\n\nThe thing that becomes really interesting is to analyze this across time.\nFor example to identify patterns where it always spills at the same time as\ncertain other things are happening. For that usecase, having a \"predictable\npersistence\" is important. You may not be able to afford setting\nlogical_decoding_work_mem high enough to cover every possible scenario (if\nyou did, then we would basically not need the spilling..), so you want to\ntrack down in relation to the rest of your application exactly when and how\nthis is happening.\n\n\n>\n> > > OTOH, it is easier to see how to make\n> > > meaning of these stats if we display them w.r.t process. Basically,\n> > > we have spill_count and spill_size which can be used to tune\n> > > logical_decoding_work_mem and also the activity of spilling happens at\n> > > process level, so it sounds like one-to-one mapping.\n> >\n> > Displaying them w.r.t process also seems a good idea but I'm still\n> > unclear what to display and how long these values are valid.\n> >\n>\n> I feel till the lifetime of a process if we want to display the values\n> at process level but I am open to hear others (including yours) views\n> on this.\n>\n\nThe problem with \"lifetime of a process\" is that it's not predictable. A\nreplication process might \"bounce\" for any reason, and it is normally not a\nproblem. But if you suddenly lose your stats when you do that, it starts to\nmatter a lot more. Especially when you don't know if it bounced. 
(Sure you\ncan look at the backend_start time, but that adds a whole different sets of\ncomplexitites).\n\n\n> > The view\n\n> > will have the following columns for example?\n> >\n> > * pid\n> > * slot_name\n> > * spill_txns\n> > * spill_count\n> > * spill_bytes\n> > * exec_count\n> >\n>\n> Yeah, these appear to be what I have in mind. Note that we can have\n> multiple entries of the same pid here because of slotname, there is\n> some value to display slotname but I am not completely sure if that is\n> a good idea but I am fine if you have a reason to include slotname?\n>\n\nWell, it's a general view so you can always GROUP BY that away if you want\nat reading point?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Jun 12, 2020 at 11:20 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 12 Jun 2020 at 12:21, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Since the logical decoding intermediate files are written at per slots\n> > > directory, I thought that corresponding these statistics to\n> > > replication slots is also understandable for users.\n> > >\n> >\n> > What I wanted to know is how will it help users to tune\n> > logical_decoding_work_mem?  Different backends can process from the\n> > same slot, so it is not clear how user will be able to make any\n> > meaning out of those stats.\n>\n> I thought that the user needs to constantly monitor them during one\n> process is executing logical decoding and to see the increments. I\n> might not fully understand but I guess the same is true for displaying\n> them w.r.t. process. 
Since a process can do logical decoding several\n> times using the same slot with a different setting, the user will need\n> to monitor them several times.\n>\n\nYeah, I think we might not be able to get exact measure but if we\ndivide total_size spilled by exec_count, we will get some rough idea\nof what should be the logical_decoding_work_mem for that particular\nsession.  For ex. consider the logical_decoding_work_mem is 100bytes\nfor a particular backend and the size spilled by that backend is 100\nthen I think you can roughly keep it to 200bytes if you want to avoid\nspilling.  Similarly one can compute its average value over multiple\nexecutions.  Does this make sense to you?The thing that becomes really interesting is to analyze this across time. For example to identify patterns where it always spills at the same time as certain other things are happening. For that usecase, having a \"predictable persistence\" is important. You may not be able to afford setting logical_decoding_work_mem high enough to cover every possible scenario (if you did, then we would basically not need the spilling..), so you want to track down in relation to the rest of your application exactly when and how this is happening. \n\n> > OTOH, it is easier to see how to make\n> > meaning of these stats if we display them w.r.t process.  Basically,\n> > we have spill_count and spill_size which can be used to tune\n> > logical_decoding_work_mem and also the activity of spilling happens at\n> > process level, so it sounds like one-to-one mapping.\n>\n> Displaying them w.r.t process also seems a good idea but I'm still\n> unclear what to display and how long these values are valid.\n>\n\nI feel till the lifetime of a process if we want to display the values\nat process level but I am open to hear others (including yours) views\non this.The problem with \"lifetime of a process\" is that it's not predictable. 
A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).> > The view\n> will have the following columns for example?\n>\n> * pid\n> * slot_name\n> * spill_txns\n> * spill_count\n> * spill_bytes\n> * exec_count\n>\n\nYeah, these appear to be what I have in mind.  Note that we can have\nmultiple entries of the same pid here because of slotname, there is\nsome value to display slotname but I am not completely sure if that is\na good idea but I am fine if you have a reason to include slotname?Well, it's a general view so you can always GROUP BY that away if you want at reading point?--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Jun 2020 14:41:42 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>\n>\n> The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n>\n\nIt is not clear to me what is a good way to display the stats for a\nprocess that has exited or bounced due to whatever reason. 
OTOH, if\nwe just display per-slot stats, it is difficult to imagine how the\nuser can make any sense out of it or in other words how such stats can\nbe useful to users.\n\n>\n> > > The view\n>>\n>> > will have the following columns for example?\n>> >\n>> > * pid\n>> > * slot_name\n>> > * spill_txns\n>> > * spill_count\n>> > * spill_bytes\n>> > * exec_count\n>> >\n>>\n>> Yeah, these appear to be what I have in mind. Note that we can have\n>> multiple entries of the same pid here because of slotname, there is\n>> some value to display slotname but I am not completely sure if that is\n>> a good idea but I am fine if you have a reason to include slotname?\n>\n>\n> Well, it's a general view so you can always GROUP BY that away if you want at reading point?\n>\n\nOkay, that is a valid point.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 13 Jun 2020 10:53:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "\n\nOn 2020/06/13 14:23, Amit Kapila wrote:\n> On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>> On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>\n>>\n>> The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n>>\n> \n> It is not clear to me what is a good way to display the stats for a\n> process that has exited or bounced due to whatever reason. 
OTOH, if\n> we just display per-slot stats, it is difficult to imagine how the\n> user can make any sense out of it or in other words how such stats can\n> be useful to users.\n\nIf we allow users to set logical_decoding_work_mem per slot,\nmaybe the users can tune it directly from the stats?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 13 Jun 2020 20:37:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, Jun 13, 2020 at 5:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n> On 2020/06/13 14:23, Amit Kapila wrote:\n> > On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >>\n> >> On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>\n> >>\n> >>\n> >> The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n> >>\n> >\n> > It is not clear to me what is a good way to display the stats for a\n> > process that has exited or bounced due to whatever reason. OTOH, if\n> > we just display per-slot stats, it is difficult to imagine how the\n> > user can make any sense out of it or in other words how such stats can\n> > be useful to users.\n>\n> If we allow users to set logical_decoding_work_mem per slot,\n> maybe the users can tune it directly from the stats?\n>\n\nHow will it behave when same slot is used from multiple sessions? 
I\nthink it will be difficult to make sense of the stats for slots unless\nwe also somehow see which process has lead to that stats and if we do\nso then there won't be much difference w.r.t what we can do now?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jun 2020 10:28:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, 13 Jun 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >\n> >\n> > The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n> >\n>\n> It is not clear to me what is a good way to display the stats for a\n> process that has exited or bounced due to whatever reason. OTOH, if\n> we just display per-slot stats, it is difficult to imagine how the\n> user can make any sense out of it or in other words how such stats can\n> be useful to users.\n\nIf we have the reset function, the user can reset before doing logical\ndecoding so that the user can use the stats directly. 
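As a sketch of that reset-then-measure workflow (the reset function and the stats view below are hypothetical names used only for illustration; of the three calls, only pg_logical_slot_get_changes() exists today):

```sql
-- Hypothetical workflow: clear the slot's spill counters, run a single
-- decoding pass, and afterwards the counters describe exactly that pass.
SELECT pg_stat_reset_replication_slot('test_slot');          -- assumed API
SELECT count(*) FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);
SELECT spill_txns, spill_count, spill_bytes
  FROM pg_stat_replication_slots                             -- assumed view
 WHERE slot_name = 'test_slot';
```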
Or I think we\ncan automatically reset the stats when logical decoding is performed\nwith different logical_decoding_work_mem value than the previous one.\nIn either way, since the stats correspond to the logical decoding\nusing the same slot with the same parameter value the user can use\nthem directly.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 17 Jun 2020 17:03:41 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Jun 17, 2020 at 1:34 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 13 Jun 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > > On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >>\n> > >\n> > >\n> > > The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. (Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n> > >\n> >\n> > It is not clear to me what is a good way to display the stats for a\n> > process that has exited or bounced due to whatever reason. OTOH, if\n> > we just display per-slot stats, it is difficult to imagine how the\n> > user can make any sense out of it or in other words how such stats can\n> > be useful to users.\n>\n> If we have the reset function, the user can reset before doing logical\n> decoding so that the user can use the stats directly. 
Or I think we\n> can automatically reset the stats when logical decoding is performed\n> with different logical_decoding_work_mem value than the previous one.\n>\n\nI had written above in the context of persisting these stats. I mean\nto say if the process has bounced or server has restarted then the\nprevious stats might not make much sense because we were planning to\nuse pid [1], so the stats from process that has exited might not make\nmuch sense or do you think that is okay? If we don't want to persist\nand the lifetime of these stats is till the process is alive then we\nare fine.\n\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k5nqeFdhpnCULpTh9TR%2B15rHZSbz0SDC6sZhr_v99SeKA%40mail.gmail.com\n\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Jun 2020 16:44:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 17, 2020 at 1:34 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sat, 13 Jun 2020 at 14:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 12, 2020 at 6:11 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > > >\n> > > > On Fri, Jun 12, 2020 at 10:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >>\n> > > >\n> > > >\n> > > > The problem with \"lifetime of a process\" is that it's not predictable. A replication process might \"bounce\" for any reason, and it is normally not a problem. But if you suddenly lose your stats when you do that, it starts to matter a lot more. Especially when you don't know if it bounced. 
(Sure you can look at the backend_start time, but that adds a whole different sets of complexitites).\n> > > >\n> > >\n> > > It is not clear to me what is a good way to display the stats for a\n> > > process that has exited or bounced due to whatever reason. OTOH, if\n> > > we just display per-slot stats, it is difficult to imagine how the\n> > > user can make any sense out of it or in other words how such stats can\n> > > be useful to users.\n> >\n> > If we have the reset function, the user can reset before doing logical\n> > decoding so that the user can use the stats directly. Or I think we\n> > can automatically reset the stats when logical decoding is performed\n> > with different logical_decoding_work_mem value than the previous one.\n> >\n>\n> I had written above in the context of persisting these stats. I mean\n> to say if the process has bounced or server has restarted then the\n> previous stats might not make much sense because we were planning to\n> use pid [1], so the stats from process that has exited might not make\n> much sense or do you think that is okay? If we don't want to persist\n> and the lifetime of these stats is till the process is alive then we\n> are fine.\n>\n\nSorry for confusing you. The above my idea is about having the stats\nper slots. That is, we add spill_txns, spill_count and spill_bytes to\npg_replication_slots or a new view pg_stat_logical_replication_slots\nwith some columns: slot_name plus these stats columns and stats_reset.\nThe idea is that the stats values accumulate until either the slot is\ndropped, the server crashed, the user executes the reset function, or\nlogical decoding is performed with different logical_decoding_work_mem\nvalue than the previous time. In other words, the stats values are\nreset in either case. 
That way, I think the stats values always\ncorrespond to logical decoding using the same slot with the same\nlogical_decoding_work_mem value.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 18 Jun 2020 11:30:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 18, 2020 at 8:01 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I had written above in the context of persisting these stats. I mean\n> > to say if the process has bounced or server has restarted then the\n> > previous stats might not make much sense because we were planning to\n> > use pid [1], so the stats from process that has exited might not make\n> > much sense or do you think that is okay? If we don't want to persist\n> > and the lifetime of these stats is till the process is alive then we\n> > are fine.\n> >\n>\n> Sorry for confusing you. The above my idea is about having the stats\n> per slots. That is, we add spill_txns, spill_count and spill_bytes to\n> pg_replication_slots or a new view pg_stat_logical_replication_slots\n> with some columns: slot_name plus these stats columns and stats_reset.\n> The idea is that the stats values accumulate until either the slot is\n> dropped, the server crashed, the user executes the reset function, or\n> logical decoding is performed with different logical_decoding_work_mem\n> value than the previous time. In other words, the stats values are\n> reset in either case. 
That way, I think the stats values always\n> correspond to logical decoding using the same slot with the same\n> logical_decoding_work_mem value.\n>\n\nWhat if the decoding has been performed by multiple backends using the\nsame slot? In that case, it will be difficult to make the judgment\nfor the value of logical_decoding_work_mem based on stats. It would\nmake sense if we provide a way to set logical_decoding_work_mem for a\nslot but not sure if that is better than what we have now.\n\nWhat problems do we see in displaying these for each process? I think\nusers might want to see the stats for the exited processes or after\nserver restart but I think both of those are not even possible today.\nI think the stats are available till the corresponding WALSender\nprocess is active.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jun 2020 12:21:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Hi,\n\nSorry for neglecting this thread for the last couple days ...\n\nIn general, I agree it's somewhat unfortunate the stats are reset when\nthe walsender exits. This was mostly fine for tuning of the spilling\n(change value -> restart -> see stats) but for proper monitoring this\nis somewhat problematic. I simply considered these fields somewhat\nsimilar to lag monitoring, not from the \"monitoring\" POV.\n\n\nOn Thu, Jun 11, 2020 at 11:09:00PM +0900, Masahiko Sawada wrote:\n>\n> ...\n>\n>Since the logical decoding intermediate files are written at per slots\n>directory, I thought that corresponding these statistics to\n>replication slots is also understandable for users. I was thinking\n>something like pg_stat_logical_replication_slot view which shows\n>slot_name and statistics of only logical replication slots. 
The view\n>always shows rows as many as existing replication slots regardless of\n>logical decoding being running. I think there is no big difference in\n>how users use these statistics values between maintaining at slot\n>level and at logical decoding level.\n>\n>In logical replication case, since we generally don’t support setting\n>different logical_decoding_work_mem per wal senders, every wal sender\n>will decode the same WAL stream with the same setting, meaning they\n>will similarly spill intermediate files. Maybe the same is true\n>statistics of streaming. So having these statistics per logical\n>replication might not help as of now.\n>\n\nI think the idea to track these stats per replication slot (rather than\nper walsender) is the right approach. We should extend statistics\ncollector to keep one entry per replication slot and have a new stats\nview called e.g. pg_stat_replication_slots, which could be reset just\nlike other stats in the collector.\n\nI don't quite understand the discussion about different backends using\nlogical_decoding_work_mem - why would this be an issue? Surely we have\nthis exact issue e.g. with tracking index vs. sequential scans and GUCs\nlike random_page_cost. 
That can change over time too, different backends\nmay use different values, and yet we don't worry about resetting the\nnumber of index scans for a table etc.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 20 Jun 2020 23:48:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jun 18, 2020 at 12:21:17PM +0530, Amit Kapila wrote:\n>On Thu, Jun 18, 2020 at 8:01 AM Masahiko Sawada\n><masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >\n>> >\n>> > I had written above in the context of persisting these stats. I mean\n>> > to say if the process has bounced or server has restarted then the\n>> > previous stats might not make much sense because we were planning to\n>> > use pid [1], so the stats from process that has exited might not make\n>> > much sense or do you think that is okay? If we don't want to persist\n>> > and the lifetime of these stats is till the process is alive then we\n>> > are fine.\n>> >\n>>\n>> Sorry for confusing you. The above my idea is about having the stats\n>> per slots. That is, we add spill_txns, spill_count and spill_bytes to\n>> pg_replication_slots or a new view pg_stat_logical_replication_slots\n>> with some columns: slot_name plus these stats columns and stats_reset.\n>> The idea is that the stats values accumulate until either the slot is\n>> dropped, the server crashed, the user executes the reset function, or\n>> logical decoding is performed with different logical_decoding_work_mem\n>> value than the previous time. In other words, the stats values are\n>> reset in either case. 
That way, I think the stats values always\n>> correspond to logical decoding using the same slot with the same\n>> logical_decoding_work_mem value.\n>>\n>\n>What if the decoding has been performed by multiple backends using the\n>same slot? In that case, it will be difficult to make the judgment\n>for the value of logical_decoding_work_mem based on stats. It would\n>make sense if we provide a way to set logical_decoding_work_mem for a\n>slot but not sure if that is better than what we have now.\n>\n>What problems do we see in displaying these for each process? I think\n>users might want to see the stats for the exited processes or after\n>server restart but I think both of those are not even possible today.\n>I think the stats are available till the corresponding WALSender\n>process is active.\n>\n\nI don't quite see what the problem is. We're in this exact position with\nmany other stats we track and various GUCs. If you decide to tune the\nsetting for a particular slot, you simply need to be careful which\nbackends decode the slot and what GUC values they used.\n\nBut I don't think this situation (multiple backends decoding the same\nslot with different logical_decoding_work_mem values) is very common. In\nmost cases the backends/walsenders will all use the same value. 
If you\nchange that, you better remember that.\n\nI really think we should not be inventing something that automatically\nresets the stats when someone happens to change the GUC.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 20 Jun 2020 23:57:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sun, Jun 21, 2020 at 3:27 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 12:21:17PM +0530, Amit Kapila wrote:\n> >On Thu, Jun 18, 2020 at 8:01 AM Masahiko Sawada\n> ><masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >\n> >> >\n> >> > I had written above in the context of persisting these stats. I mean\n> >> > to say if the process has bounced or server has restarted then the\n> >> > previous stats might not make much sense because we were planning to\n> >> > use pid [1], so the stats from process that has exited might not make\n> >> > much sense or do you think that is okay? If we don't want to persist\n> >> > and the lifetime of these stats is till the process is alive then we\n> >> > are fine.\n> >> >\n> >>\n> >> Sorry for confusing you. The above my idea is about having the stats\n> >> per slots. That is, we add spill_txns, spill_count and spill_bytes to\n> >> pg_replication_slots or a new view pg_stat_logical_replication_slots\n> >> with some columns: slot_name plus these stats columns and stats_reset.\n> >> The idea is that the stats values accumulate until either the slot is\n> >> dropped, the server crashed, the user executes the reset function, or\n> >> logical decoding is performed with different logical_decoding_work_mem\n> >> value than the previous time. 
In other words, the stats values are\n> >> reset in either case. That way, I think the stats values always\n> >> correspond to logical decoding using the same slot with the same\n> >> logical_decoding_work_mem value.\n> >>\n> >\n> >What if the decoding has been performed by multiple backends using the\n> >same slot? In that case, it will be difficult to make the judgment\n> >for the value of logical_decoding_work_mem based on stats. It would\n> >make sense if we provide a way to set logical_decoding_work_mem for a\n> >slot but not sure if that is better than what we have now.\n> >\n> >What problems do we see in displaying these for each process? I think\n> >users might want to see the stats for the exited processes or after\n> >server restart but I think both of those are not even possible today.\n> >I think the stats are available till the corresponding WALSender\n> >process is active.\n> >\n>\n> I don't quite see what the problem is. We're in this exact position with\n> many other stats we track and various GUCs. If you decide to tune the\n> setting for a particular slot, you simply need to be careful which\n> backends decode the slot and what GUC values they used.\n>\n\nWhat problem do you if we allow it to display per-process (WALSender\nor backend)? They are incurred by the WALSender or by backends so\ndisplaying them accordingly seems more straightforward and logical to\nme.\n\nAs of now, we don't allow it to be set for a slot, so it won't be\nconvenient for the user to tune it per slot. 
I think we can allow to\nset it per-slot but not sure if there is any benefit for the same.\n\n> I really think we should not be inventing something that automatically\n> resets the stats when someone happens to change the GUC.\n>\n\nI agree with that.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jun 2020 08:22:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Jun 22, 2020 at 8:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jun 21, 2020 at 3:27 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Thu, Jun 18, 2020 at 12:21:17PM +0530, Amit Kapila wrote:\n> > >On Thu, Jun 18, 2020 at 8:01 AM Masahiko Sawada\n> > ><masahiko.sawada@2ndquadrant.com> wrote:\n> > >>\n> > >> On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> >\n> > >> >\n> > >> > I had written above in the context of persisting these stats. I mean\n> > >> > to say if the process has bounced or server has restarted then the\n> > >> > previous stats might not make much sense because we were planning to\n> > >> > use pid [1], so the stats from process that has exited might not make\n> > >> > much sense or do you think that is okay? If we don't want to persist\n> > >> > and the lifetime of these stats is till the process is alive then we\n> > >> > are fine.\n> > >> >\n> > >>\n> > >> Sorry for confusing you. The above my idea is about having the stats\n> > >> per slots. 
That is, we add spill_txns, spill_count and spill_bytes to\n> > >> pg_replication_slots or a new view pg_stat_logical_replication_slots\n> > >> with some columns: slot_name plus these stats columns and stats_reset.\n> > >> The idea is that the stats values accumulate until either the slot is\n> > >> dropped, the server crashed, the user executes the reset function, or\n> > >> logical decoding is performed with different logical_decoding_work_mem\n> > >> value than the previous time. In other words, the stats values are\n> > >> reset in either case. That way, I think the stats values always\n> > >> correspond to logical decoding using the same slot with the same\n> > >> logical_decoding_work_mem value.\n> > >>\n> > >\n> > >What if the decoding has been performed by multiple backends using the\n> > >same slot? In that case, it will be difficult to make the judgment\n> > >for the value of logical_decoding_work_mem based on stats. It would\n> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > >slot but not sure if that is better than what we have now.\n> > >\n> > >What problems do we see in displaying these for each process? I think\n> > >users might want to see the stats for the exited processes or after\n> > >server restart but I think both of those are not even possible today.\n> > >I think the stats are available till the corresponding WALSender\n> > >process is active.\n> > >\n> >\n> > I don't quite see what the problem is. We're in this exact position with\n> > many other stats we track and various GUCs. If you decide to tune the\n> > setting for a particular slot, you simply need to be careful which\n> > backends decode the slot and what GUC values they used.\n> >\n>\n> What problem do you if we allow it to display per-process (WALSender\n> or backend)? 
They are incurred by the WALSender or by backends so\n> displaying them accordingly seems more straightforward and logical to\n> me.\n>\n> As of now, we don't allow it to be set for a slot, so it won't be\n> convenient for the user to tune it per slot. I think we can allow to\n> set it per-slot but not sure if there is any benefit for the same.\n>\n\nIf we display stats as discussed in email [1] (pid, slot_name,\nspill_txns, spill_count, etc.), then we can even find the stats w.r.t\neach slot.\n\n\n[1] - https://www.postgresql.org/message-id/CA%2Bfd4k5nqeFdhpnCULpTh9TR%2B15rHZSbz0SDC6sZhr_v99SeKA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jun 2020 08:26:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Jun 18, 2020 at 12:21:17PM +0530, Amit Kapila wrote:\n> >On Thu, Jun 18, 2020 at 8:01 AM Masahiko Sawada\n> ><masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Wed, 17 Jun 2020 at 20:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> >\n> >> >\n> >> > I had written above in the context of persisting these stats. I mean\n> >> > to say if the process has bounced or server has restarted then the\n> >> > previous stats might not make much sense because we were planning to\n> >> > use pid [1], so the stats from process that has exited might not make\n> >> > much sense or do you think that is okay? If we don't want to persist\n> >> > and the lifetime of these stats is till the process is alive then we\n> >> > are fine.\n> >> >\n> >>\n> >> Sorry for confusing you. The above my idea is about having the stats\n> >> per slots. 
That is, we add spill_txns, spill_count and spill_bytes to\n> >> pg_replication_slots or a new view pg_stat_logical_replication_slots\n> >> with some columns: slot_name plus these stats columns and stats_reset.\n> >> The idea is that the stats values accumulate until either the slot is\n> >> dropped, the server crashed, the user executes the reset function, or\n> >> logical decoding is performed with different logical_decoding_work_mem\n> >> value than the previous time. In other words, the stats values are\n> >> reset in either case. That way, I think the stats values always\n> >> correspond to logical decoding using the same slot with the same\n> >> logical_decoding_work_mem value.\n> >>\n> >\n> >What if the decoding has been performed by multiple backends using the\n> >same slot? In that case, it will be difficult to make the judgment\n> >for the value of logical_decoding_work_mem based on stats. It would\n> >make sense if we provide a way to set logical_decoding_work_mem for a\n> >slot but not sure if that is better than what we have now.\n> >\n\nI thought that the stats are relevant to what\nlogical_decoding_work_mem value was but not with who performed logical\ndecoding. So even if multiple backends perform logical decoding using\nthe same slot, the user can directly use stats as long as\nlogical_decoding_work_mem value doesn’t change.\n\n> >What problems do we see in displaying these for each process? I think\n> >users might want to see the stats for the exited processes or after\n> >server restart but I think both of those are not even possible today.\n> >I think the stats are available till the corresponding WALSender\n> >process is active.\n\nI might want to see the stats for the exited processes or after server\nrestart. But I'm inclined to agree with displaying the stats per\nprocess if the stats are displayed on a separate view (e.g.\npg_stat_replication_slots).\n\n> >\n>\n> I don't quite see what the problem is. 
We're in this exact position with\n> many other stats we track and various GUCs. If you decide to tune the\n> setting for a particular slot, you simply need to be careful which\n> backends decode the slot and what GUC values they used.\n>\n> But I don't think this situation (multiple backends decoding the same\n> slot with different logical_decoding_work_mem values) is very common. In\n> most cases the backends/walsenders will all use the same value. If you\n> change that, you better remember that.\n>\n> I really think we should not be inventing something that automatically\n> resets the stats when someone happens to change the GUC.\n\nAgreed. But what I had in mind is simpler: storing the\nlogical_decoding_work_mem value along with the stats in the logical\nreplication slot and resetting the stats if the\nlogical_decoding_work_mem value differs from the stored value\nwhen performing logical decoding.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Jun 2020 13:01:45 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > >\n> > >What if the decoding has been performed by multiple backends using the\n> > >same slot? In that case, it will be difficult to make the judgment\n> > >for the value of logical_decoding_work_mem based on stats. It would\n> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > >slot but not sure if that is better than what we have now.\n> > >\n>\n> I thought that the stats are relevant to what\n> logical_decoding_work_mem value was but not with who performed logical\n> decoding. 
So even if multiple backends perform logical decoding using\n> the same slot, the user can directly use stats as long as\n> logical_decoding_work_mem value doesn’t change.\n>\n\nI think if you maintain these stats at the slot level, you probably\nneed to use spinlock or atomic ops in order to update those as slots\ncan be used from multiple backends whereas currently, we don't need\nthat.\n\n> > >What problems do we see in displaying these for each process? I think\n> > >users might want to see the stats for the exited processes or after\n> > >server restart but I think both of those are not even possible today.\n> > >I think the stats are available till the corresponding WALSender\n> > >process is active.\n>\n> I might want to see the stats for the exited processes or after server\n> restart. But I'm inclined to agree with displaying the stats per\n> process if the stats are displayed on a separate view (e.g.\n> pg_stat_replication_slots).\n>\n\nYeah, as told previously, this makes more sense to me.\n\nDo you think we should try to write a POC patch using a per-process\nentry approach and see what difficulties we are facing and does it\ngive the stats in a way we are imagining but OTOH, we can wait for\nsome more to see if there is clear winner approach here?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jun 2020 10:58:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n>On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n><masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>> >\n>> > >\n>> > >What if the decoding has been performed by multiple backends using the\n>> > >same slot? 
In that case, it will be difficult to make the judgment\n>> > >for the value of logical_decoding_work_mem based on stats. It would\n>> > >make sense if we provide a way to set logical_decoding_work_mem for a\n>> > >slot but not sure if that is better than what we have now.\n>> > >\n>>\n>> I thought that the stats are relevant to what\n>> logical_decoding_work_mem value was but not with who performed logical\n>> decoding. So even if multiple backends perform logical decoding using\n>> the same slot, the user can directly use stats as long as\n>> logical_decoding_work_mem value doesn’t change.\n>>\n>\n>I think if you maintain these stats at the slot level, you probably\n>need to use spinlock or atomic ops in order to update those as slots\n>can be used from multiple backends whereas currently, we don't need\n>that.\n\nIMHO storing the stats in the slot itself is a bad idea. We have the\nstatistics collector for exactly this purpose, and it's receiving data\nover UDP without any extra locking etc.\n>\n>> > >What problems do we see in displaying these for each process? I think\n>> > >users might want to see the stats for the exited processes or after\n>> > >server restart but I think both of those are not even possible today.\n>> > >I think the stats are available till the corresponding WALSender\n>> > >process is active.\n>>\n>> I might want to see the stats for the exited processes or after server\n>> restart. 
But I'm inclined to agree with displaying the stats per\n>> process if the stats are displayed on a separate view (e.g.\n>> pg_stat_replication_slots).\n>>\n>\n>Yeah, as told previously, this makes more sense to me.\n>\n>Do you think we should try to write a POC patch using a per-process\n>entry approach and see what difficulties we are facing and does it\n>give the stats in a way we are imagining but OTOH, we can wait for\n>some more to see if there is clear winner approach here?\n>\n\nI may be missing something obvious, but I still see no point in tracking\nper-process stats. We don't have that for other stats, and I'm not sure\nhow common is the scenario when a given slot is decoded by many\nbackends. I'd say vast majority of cases are simply running decoding\nfrom a walsender, which may occasionally restart, but I doubt the users\nare interested in per-pid data - they probably want aggregated data.\n\nCan someone explain a plausible scenario for which tracking per-process\nstats would be needed, and simply computing deltas would not work? How\nwill you know which old PID is which, what will you do when a PID is\nreused, and so on?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 23 Jun 2020 12:18:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 23, 2020 at 3:48 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> ><masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >> >\n> >> > >\n> >> > >What if the decoding has been performed by multiple backends using the\n> >> > >same slot? 
In that case, it will be difficult to make the judgment\n> >> > >for the value of logical_decoding_work_mem based on stats. It would\n> >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> >> > >slot but not sure if that is better than what we have now.\n> >> > >\n> >>\n> >> I thought that the stats are relevant to what\n> >> logical_decoding_work_mem value was but not with who performed logical\n> >> decoding. So even if multiple backends perform logical decoding using\n> >> the same slot, the user can directly use stats as long as\n> >> logical_decoding_work_mem value doesn’t change.\n> >>\n> >\n> >I think if you maintain these stats at the slot level, you probably\n> >need to use spinlock or atomic ops in order to update those as slots\n> >can be used from multiple backends whereas currently, we don't need\n> >that.\n>\n> IMHO storing the stats in the slot itself is a bad idea. We have the\n> statistics collector for exactly this purpose, and it's receiving data\n> over UDP without any extra locking etc.\n> >\n> >> > >What problems do we see in displaying these for each process? I think\n> >> > >users might want to see the stats for the exited processes or after\n> >> > >server restart but I think both of those are not even possible today.\n> >> > >I think the stats are available till the corresponding WALSender\n> >> > >process is active.\n> >>\n> >> I might want to see the stats for the exited processes or after server\n> >> restart. 
But I'm inclined to agree with displaying the stats per\n> >> process if the stats are displayed on a separate view (e.g.\n> >> pg_stat_replication_slots).\n> >>\n> >\n> >Yeah, as told previously, this makes more sense to me.\n> >\n> >Do you think we should try to write a POC patch using a per-process\n> >entry approach and see what difficulties we are facing and does it\n> >give the stats in a way we are imagining but OTOH, we can wait for\n> >some more to see if there is clear winner approach here?\n> >\n>\n> I may be missing something obvious, but I still see no point in tracking\n> per-process stats. We don't have that for other stats,\n>\n\nWon't we display per-process information in pg_stat_replication?\nThese stats are currently displayed in that view and one of the\nshortcomings was that it won't display these stats when we decode via\nbackend.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Jun 2020 18:39:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 23, 2020 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 3:48 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> > >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> > ><masahiko.sawada@2ndquadrant.com> wrote:\n> > >>\n> > >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > >> >\n> > >> > >\n> > >> > >What if the decoding has been performed by multiple backends using the\n> > >> > >same slot? In that case, it will be difficult to make the judgment\n> > >> > >for the value of logical_decoding_work_mem based on stats. 
It would\n> > >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > >> > >slot but not sure if that is better than what we have now.\n> > >> > >\n> > >>\n> > >> I thought that the stats are relevant to what\n> > >> logical_decoding_work_mem value was but not with who performed logical\n> > >> decoding. So even if multiple backends perform logical decoding using\n> > >> the same slot, the user can directly use stats as long as\n> > >> logical_decoding_work_mem value doesn’t change.\n> > >>\n\nToday, I thought about it again, and if we consider the point that\nlogical_decoding_work_mem value doesn’t change much then having the\nstats at slot-level would also allow computing\nlogical_decoding_work_mem based on stats. Do you think it is a\nreasonable assumption that users won't change\nlogical_decoding_work_mem for different processes (WALSender, etc.)?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Jun 2020 16:05:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 25 Jun 2020 at 19:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 23, 2020 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 23, 2020 at 3:48 PM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> > > >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> > > ><masahiko.sawada@2ndquadrant.com> wrote:\n> > > >>\n> > > >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > >> >\n> > > >> > >\n> > > >> > >What if the decoding has been performed by multiple backends using the\n> > > >> > >same slot? In that case, it will be difficult to make the judgment\n> > > >> > >for the value of logical_decoding_work_mem based on stats. 
It would\n> > > >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > > >> > >slot but not sure if that is better than what we have now.\n> > > >> > >\n> > > >>\n> > > >> I thought that the stats are relevant to what\n> > > >> logical_decoding_work_mem value was but not with who performed logical\n> > > >> decoding. So even if multiple backends perform logical decoding using\n> > > >> the same slot, the user can directly use stats as long as\n> > > >> logical_decoding_work_mem value doesn’t change.\n> > > >>\n>\n> Today, I thought about it again, and if we consider the point that\n> logical_decoding_work_mem value doesn’t change much then having the\n> stats at slot-level would also allow computing\n> logical_decoding_work_mem based on stats. Do you think it is a\n> reasonable assumption that users won't change\n> logical_decoding_work_mem for different processes (WALSender, etc.)?\n\nFWIW, if we use logical_decoding_work_mem as a threshold of starting\nof sending changes to a subscriber, I think there might be use cases\nwhere the user wants to set different logical_decoding_work_mem values\nto different wal senders. 
For example, setting a lower value to\nminimize the latency of synchronous logical replication to a near-site\nwhereas setting a large value to minimize the amount of data sent to a\nfar site.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 26 Jun 2020 15:01:22 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jun 26, 2020 at 11:31 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 25 Jun 2020 at 19:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 23, 2020 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 23, 2020 at 3:48 PM Tomas Vondra\n> > > <tomas.vondra@2ndquadrant.com> wrote:\n> > > >\n> > > > On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> > > > >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> > > > ><masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >>\n> > > > >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > > >> >\n> > > > >> > >\n> > > > >> > >What if the decoding has been performed by multiple backends using the\n> > > > >> > >same slot? In that case, it will be difficult to make the judgment\n> > > > >> > >for the value of logical_decoding_work_mem based on stats. It would\n> > > > >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > > > >> > >slot but not sure if that is better than what we have now.\n> > > > >> > >\n> > > > >>\n> > > > >> I thought that the stats are relevant to what\n> > > > >> logical_decoding_work_mem value was but not with who performed logical\n> > > > >> decoding. 
So even if multiple backends perform logical decoding using\n> > > > >> the same slot, the user can directly use stats as long as\n> > > > >> logical_decoding_work_mem value doesn’t change.\n> > > > >>\n> >\n> > Today, I thought about it again, and if we consider the point that\n> > logical_decoding_work_mem value doesn’t change much then having the\n> > stats at slot-level would also allow computing\n> > logical_decoding_work_mem based on stats. Do you think it is a\n> > reasonable assumption that users won't change\n> > logical_decoding_work_mem for different processes (WALSender, etc.)?\n>\n> FWIW, if we use logical_decoding_work_mem as a threshold of starting\n> of sending changes to a subscriber, I think there might be use cases\n> where the user wants to set different logical_decoding_work_mem values\n> to different wal senders. For example, setting a lower value to\n> minimize the latency of synchronous logical replication to a near-site\n> whereas setting a large value to minimize the amount of data sent to a\n> far site.\n>\n\nHow does setting a large value can minimize the amount of data sent?\nOne possibility is if there are a lot of transaction aborts and\ntransactions are not large enough that they cross\nlogical_decoding_work_mem threshold but such cases shouldn't be many.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jun 2020 14:22:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 23, 2020 at 12:18 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> ><masahiko.sawada@2ndquadrant.com> wrote:\n> >>\n> >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote:\n> >> >\n> >> > >\n> >> > >What if the 
decoding has been performed by multiple backends using\n> the\n> >> > >same slot? In that case, it will be difficult to make the judgment\n> >> > >for the value of logical_decoding_work_mem based on stats. It would\n> >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> >> > >slot but not sure if that is better than what we have now.\n> >> > >\n> >>\n> >> I thought that the stats are relevant to what\n> >> logical_decoding_work_mem value was but not with who performed logical\n> >> decoding. So even if multiple backends perform logical decoding using\n> >> the same slot, the user can directly use stats as long as\n> >> logical_decoding_work_mem value doesn’t change.\n> >>\n> >\n> >I think if you maintain these stats at the slot level, you probably\n> >need to use spinlock or atomic ops in order to update those as slots\n> >can be used from multiple backends whereas currently, we don't need\n> >that.\n>\n> IMHO storing the stats in the slot itself is a bad idea. We have the\n> statistics collector for exactly this purpose, and it's receiving data\n> over UDP without any extra locking etc.\n>\n\nYeah, that seems much more appropriate. Of course, where they are exposed\nis a different question.\n\n\n>> > >What problems do we see in displaying these for each process? I think\n> >> > >users might want to see the stats for the exited processes or after\n> >> > >server restart but I think both of those are not even possible today.\n> >> > >I think the stats are available till the corresponding WALSender\n> >> > >process is active.\n> >>\n> >> I might want to see the stats for the exited processes or after server\n> >> restart. 
But I'm inclined to agree with displaying the stats per\n> >> process if the stats are displayed on a separate view (e.g.\n> >> pg_stat_replication_slots).\n> >>\n> >\n> >Yeah, as told previously, this makes more sense to me.\n> >\n> >Do you think we should try to write a POC patch using a per-process\n> >entry approach and see what difficulties we are facing and does it\n> >give the stats in a way we are imagining but OTOH, we can wait for\n> >some more to see if there is clear winner approach here?\n> >\n>\n> I may be missing something obvious, but I still see no point in tracking\n> per-process stats. We don't have that for other stats, and I'm not sure\n> how common is the scenario when a given slot is decoded by many\n> backends. I'd say vast majority of cases are simply running decoding\n> from a walsender, which may occasionally restart, but I doubt the users\n> are interested in per-pid data - they probably want aggregated data.\n>\n\nWell, technically we do -- we have the pg_stat_xact_* views. However, those\nare only viewable from *inside* the session itself (which can sometimes be\nquite annoying).\n\nThis does somewhat apply in that normal transactions send their stats\nbatches at transaction end. 
If this is data we'd be interested in viewing\ninside of that, a more direct exposure would be needed -- such as the way\nwe do with LSNs in pg_stat_replication or whatever.\n\nFor long-term monitoring, people definitely want aggregate data I'd say.\nThe \"realtime data\" if we call it that is in my experience mostly\ninteresting if you want to define alerts etc (\"replication standby is too\nfar behind\" is alertable through that, whereas things like \"total amount of\nreplication traffic over the past hour\" is something that's more\ntrend-alertable which is typically handled in a separate system pulling the\naggregate stats)\n\n\nCan someone explain a plausible scenario for which tracking per-process\n> stats would be needed, and simply computing deltas would not work? How\n> will you know which old PID is which, what will you do when a PID is\n> reused, and so on?\n>\n\nI fail to see that one as well, in a real-world scenario. Maybe if you want\nto do a one-off point-tuning of one tiny piece of a system? But you will\nthen also need to long term statistics to follow-up if what you did was\ncorrect anyway...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 26 Jun 2020 11:08:08 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, 26 Jun 2020 at 17:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 26, 2020 at 11:31 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 25 Jun 2020 at 19:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 23, 2020 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jun 23, 2020 at 3:48 PM Tomas Vondra\n> > > > <tomas.vondra@2ndquadrant.com> wrote:\n> > > > >\n> > > > > On Tue, Jun 23, 2020 at 10:58:18AM +0530, Amit Kapila wrote:\n> > > > > >On Tue, Jun 23, 2020 at 9:32 AM Masahiko Sawada\n> > > > > ><masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >>\n> > > > > >> On Sun, 21 Jun 2020 at 06:57, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > > > >> >\n> > > > > >> > >\n> > > > > >> > >What if the decoding has been performed by multiple backends using the\n> > > > > >> > >same slot? In that case, it will be difficult to make the judgment\n> > > > > >> > >for the value of logical_decoding_work_mem based on stats. It would\n> > > > > >> > >make sense if we provide a way to set logical_decoding_work_mem for a\n> > > > > >> > >slot but not sure if that is better than what we have now.\n> > > > > >> > >\n> > > > > >>\n> > > > > >> I thought that the stats are relevant to what\n> > > > > >> logical_decoding_work_mem value was but not with who performed logical\n> > > > > >> decoding. 
So even if multiple backends perform logical decoding using\n> > > > > >> the same slot, the user can directly use stats as long as\n> > > > > >> logical_decoding_work_mem value doesn’t change.\n> > > > > >>\n> > >\n> > > Today, I thought about it again, and if we consider the point that\n> > > logical_decoding_work_mem value doesn’t change much then having the\n> > > stats at slot-level would also allow computing\n> > > logical_decoding_work_mem based on stats. Do you think it is a\n> > > reasonable assumption that users won't change\n> > > logical_decoding_work_mem for different processes (WALSender, etc.)?\n> >\n> > FWIW, if we use logical_decoding_work_mem as a threshold of starting\n> > of sending changes to a subscriber, I think there might be use cases\n> > where the user wants to set different logical_decoding_work_mem values\n> > to different wal senders. For example, setting a lower value to\n> > minimize the latency of synchronous logical replication to a near-site\n> > whereas setting a large value to minimize the amount of data sent to a\n> > far site.\n> >\n>\n> How does setting a large value can minimize the amount of data sent?\n> One possibility is if there are a lot of transaction aborts and\n> transactions are not large enough that they cross\n> logical_decoding_work_mem threshold but such cases shouldn't be many.\n\nYeah, this is what I meant.\n\nI agree that it would not be a common case that the user sets\ndifferent values for different processes. Based on that assumption, I\nalso think having the stats at slot-level is a good idea. 
But I might\nwant to have the reset function.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 29 Jun 2020 13:55:56 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Jun 29, 2020 at 10:26 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 26 Jun 2020 at 17:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jun 26, 2020 at 11:31 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 25 Jun 2020 at 19:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Today, I thought about it again, and if we consider the point that\n> > > > logical_decoding_work_mem value doesn’t change much then having the\n> > > > stats at slot-level would also allow computing\n> > > > logical_decoding_work_mem based on stats. Do you think it is a\n> > > > reasonable assumption that users won't change\n> > > > logical_decoding_work_mem for different processes (WALSender, etc.)?\n> > >\n> > > FWIW, if we use logical_decoding_work_mem as a threshold of starting\n> > > of sending changes to a subscriber, I think there might be use cases\n> > > where the user wants to set different logical_decoding_work_mem values\n> > > to different wal senders. 
For example, setting a lower value to\n> > > minimize the latency of synchronous logical replication to a near-site\n> > > whereas setting a large value to minimize the amount of data sent to a\n> > > far site.\n> > >\n> >\n> > How does setting a large value can minimize the amount of data sent?\n> > One possibility is if there are a lot of transaction aborts and\n> > transactions are not large enough that they cross\n> > logical_decoding_work_mem threshold but such cases shouldn't be many.\n>\n> Yeah, this is what I meant.\n>\n> I agree that it would not be a common case that the user sets\n> different values for different processes. Based on that assumption, I\n> also think having the stats at slot-level is a good idea.\n>\n\nOkay.\n\n> But I might\n> want to have the reset function.\n>\n\nI don't mind but lets fist see how the patch for the basic feature\nlooks and what is required to implement it? Are you interested in\nwriting the patch for this work?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 17:07:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 29 Jun 2020 at 20:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 29, 2020 at 10:26 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Fri, 26 Jun 2020 at 17:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jun 26, 2020 at 11:31 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 25 Jun 2020 at 19:35, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Today, I thought about it again, and if we consider the point that\n> > > > > logical_decoding_work_mem value doesn’t change much then having the\n> > > > > stats at slot-level would also allow computing\n> > > > > 
logical_decoding_work_mem based on stats. Do you think it is a\n> > > > > reasonable assumption that users won't change\n> > > > > logical_decoding_work_mem for different processes (WALSender, etc.)?\n> > > >\n> > > > FWIW, if we use logical_decoding_work_mem as a threshold of starting\n> > > > of sending changes to a subscriber, I think there might be use cases\n> > > > where the user wants to set different logical_decoding_work_mem values\n> > > > to different wal senders. For example, setting a lower value to\n> > > > minimize the latency of synchronous logical replication to a near-site\n> > > > whereas setting a large value to minimize the amount of data sent to a\n> > > > far site.\n> > > >\n> > >\n> > > How does setting a large value can minimize the amount of data sent?\n> > > One possibility is if there are a lot of transaction aborts and\n> > > transactions are not large enough that they cross\n> > > logical_decoding_work_mem threshold but such cases shouldn't be many.\n> >\n> > Yeah, this is what I meant.\n> >\n> > I agree that it would not be a common case that the user sets\n> > different values for different processes. Based on that assumption, I\n> > also think having the stats at slot-level is a good idea.\n> >\n>\n> Okay.\n>\n> > But I might\n> > want to have the reset function.\n> >\n>\n> I don't mind but lets fist see how the patch for the basic feature\n> looks and what is required to implement it? 
Are you interested in\n> writing the patch for this work?\n\nYes, I'll write the draft patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 30 Jun 2020 10:08:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jun 30, 2020 at 6:38 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 29 Jun 2020 at 20:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jun 29, 2020 at 10:26 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > I agree that it would not be a common case that the user sets\n> > > different values for different processes. Based on that assumption, I\n> > > also think having the stats at slot-level is a good idea.\n> > >\n> >\n> > Okay.\n> >\n> > > But I might\n> > > want to have the reset function.\n> > >\n> >\n> > I don't mind but lets fist see how the patch for the basic feature\n> > looks and what is required to implement it? Are you interested in\n> > writing the patch for this work?\n>\n> Yes, I'll write the draft patch.\n>\n\nGreat, thanks. One thing we can consider is that instead of storing\nthe stats directly in the slot we can consider sending it to stats\ncollector as suggested by Tomas. Basically that can avoid contention\naround slots (See discussion in email [1]). I have not evaluated any\nof the approaches in detail so you can let us know the advantage of\none over another. 
Now, you might be already considering this but I\nthought it is better to share what I have in mind rather than saying\nthat later once you have the draft patch ready.\n\n[1] - https://www.postgresql.org/message-id/20200623101831.it6lzwbm37xwquco%40development\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jun 2020 09:28:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 30 Jun 2020 at 12:58, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 30, 2020 at 6:38 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 29 Jun 2020 at 20:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Jun 29, 2020 at 10:26 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > I agree that it would not be a common case that the user sets\n> > > > different values for different processes. Based on that assumption, I\n> > > > also think having the stats at slot-level is a good idea.\n> > > >\n> > >\n> > > Okay.\n> > >\n> > > > But I might\n> > > > want to have the reset function.\n> > > >\n> > >\n> > > I don't mind but lets fist see how the patch for the basic feature\n> > > looks and what is required to implement it? Are you interested in\n> > > writing the patch for this work?\n> >\n> > Yes, I'll write the draft patch.\n> >\n>\n> Great, thanks. One thing we can consider is that instead of storing\n> the stats directly in the slot we can consider sending it to stats\n> collector as suggested by Tomas. Basically that can avoid contention\n> around slots (See discussion in email [1]). I have not evaluated any\n> of the approaches in detail so you can let us know the advantage of\n> one over another. 
Now, you might be already considering this but I\n> thought it is better to share what I have in mind rather than saying\n> that later once you have the draft patch ready.\n\nThanks! Yes, I'm working on this patch while considering to send the\nstats to stats collector.\n\nI've attached PoC patch that implements a simple approach. I'd like to\ndiscuss how we collect the replication slot statistics in the stats\ncollector before I bring the patch to completion.\n\nIn this PoC patch, we have an array of PgStat_ReplSlotStats structs\nwith max_replication_slots entries. The backend and wal sender\nsend the slot statistics to the stats collector when decoding a commit\nWAL record.\n\ntypedef struct PgStat_ReplSlotStats\n{\n    char slotname[NAMEDATALEN];\n    PgStat_Counter spill_txns;\n    PgStat_Counter spill_count;\n    PgStat_Counter spill_bytes;\n} PgStat_ReplSlotStats;\n\nWhat I'd like to discuss are the following points.\n\nFirst, since the unique identifier of replication slots is the name, the\nprocess sends slot statistics along with the slot name to the stats\ncollector. I'm concerned about the amount of data sent to the stats\ncollector and the cost of searching the statistics within the statistics\narray (it’s O(N) where N is max_replication_slots). Since the maximum\nlength of a slot name is NAMEDATALEN (64 bytes by default) and\nmax_replication_slots is unlikely to be a large number, I might be\nworrying too much, but it seems better to avoid that cost if we can do\nso easily. An idea I came up with is to use the index of slots (i.e.,\nthe index of ReplicationSlotCtl->replication_slots[]) as the index of\nthe slot's statistics in the stats collector. But since the index of a\nslot could change after a restart, we need to somehow synchronize the\nindexes between the two arrays. 
So I thought that we can determine the index of the\nstatistics of slots at ReplicationSlotAcquire() or\nReplicationSlotCreate(), but that will in turn need to read the stats\nfile while holding ReplicationSlotControlLock to prevent the same\nstatistics index from being used by a concurrent process that is\ncreating a slot. I might be missing something though.\n\nSecond, as long as the unique identifier is the slot name there is no\nconvenient way to distinguish between old and new replication slots\nthat have the same name, so the backend process or wal sender process\nsends a message to the stats collector to drop the replication slot at\nReplicationSlotDropPtr(). This strategy differs from what we do for\ntable, index, and function statistics. It might not be a problem but\nI’m thinking about a better way.\n\nThe new view name is also an open question. I prefer\npg_stat_replication_slots, and would rather add stats of physical\nreplication slots to the same view in the future than create a separate\nview.\n\nAside from the above, this patch will rework most of the changes\nintroduced by commit 9290ad198b1 and introduce a lot of new code. I’m\nconcerned whether such changes are acceptable at the time of beta 2.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Jul 2020 12:30:47 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jul 2, 2020 at 1:31 PM Masahiko Sawada <\nmasahiko.sawada@2ndquadrant.com> wrote:\n\n>\n>\n> Thanks! Yes, I'm working on this patch while considering to send the\n> stats to stats collector.\n>\n> I've attached PoC patch that implements a simple approach. 
I'd like to\n> discuss how we collect the replication slot statistics in the stats\n> collector before I bring the patch to completion.\n>\n>\nI understand the patch is only in the initial stage but I just tried\ntesting it. Using the patch, I enabled logical replication and created two\npub/subs (sub1,sub2) for two seperate tables (t1,t2). I inserted data into\nthe second table (t2) such that it spills into disk.\nThen when I checked the stats using the new function\npg_stat_get_replication_slots() , I see that the same stats are updated for\nboth the slots, when ideally it should have reflected in the second slot\nalone.\n\npostgres=# SELECT s.name, s.spill_txns,\n s.spill_count,\n s.spill_bytes\n FROM pg_stat_get_replication_slots() s(name, spill_txns, spill_count,\nspill_bytes);\n name | spill_txns | spill_count | spill_bytes\n------+------------+-------------+-------------\n sub1 | 1 | 20 | 1320000000\n sub2 | 1 | 20 | 1320000000\n(2 rows)\n\nI haven't debugged the issue yet, I can if you wish but just thought I'd\nlet you know what I found.\n\nthanks,\nAjin Cherian\nFujitsu Australia", "msg_date": "Sat, 4 Jul 2020 23:13:01 +1000", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, 4 Jul 2020 at 22:13, Ajin Cherian <itsajin@gmail.com> wrote:\n>\n>\n>\n> On Thu, Jul 2, 2020 at 1:31 PM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\n>>\n>>\n>>\n>> Thanks! Yes, I'm working on this patch while considering to send the\n>> stats to stats collector.\n>>\n>> I've attached PoC patch that implements a simple approach. I'd like to\n>> discuss how we collect the replication slot statistics in the stats\n>> collector before I bring the patch to completion.\n>>\n>\n> I understand the patch is only in the initial stage but I just tried testing it.\n\nThank you for testing the patch!\n\n> Using the patch, I enabled logical replication and created two pub/subs (sub1,sub2) for two seperate tables (t1,t2). 
I inserted data into the second table (t2) such that it spills into disk.\n> Then when I checked the stats using the new function pg_stat_get_replication_slots() , I see that the same stats are updated for both the slots, when ideally it should have reflected in the second slot alone.\n>\n> postgres=# SELECT s.name, s.spill_txns, s.spill_count, s.spill_bytes FROM pg_stat_get_replication_slots() s(name, spill_txns, spill_count, spill_bytes);\n> name | spill_txns | spill_count | spill_bytes\n> ------+------------+-------------+-------------\n> sub1 | 1 | 20 | 1320000000\n> sub2 | 1 | 20 | 1320000000\n> (2 rows)\n>\n\nI think this is because logical decodings behind those two logical\nreplications decode all WAL records *before* filtering the specified\ntables. In logical replication, we decode the whole WAL stream and\nthen pass it to a logical decoding output plugin such as pgoutput. And\nthen we filter tables according to the publication. Therefore, even if\nsubscription sub1 is not interested in changes related to table t2,\nthe replication slot sub1 needs to decode the whole WAL stream,\nresulting in spilling into disk.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 6 Jul 2020 16:36:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jul 2, 2020 at 9:01 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> I've attached PoC patch that implements a simple approach. I'd like to\n> discuss how we collect the replication slot statistics in the stats\n> collector before I bring the patch to completion.\n>\n> In this PoC patch, we have the array of PgStat_ReplSlotStats struct\n> which has max_replication_slots entries. 
The backend and wal sender\n> send the slot statistics to the stats collector when decoding a commit\n> WAL record.\n>\n> typedef struct PgStat_ReplSlotStats\n> {\n> char slotname[NAMEDATALEN];\n> PgStat_Counter spill_txns;\n> PgStat_Counter spill_count;\n> PgStat_Counter spill_bytes;\n> } PgStat_ReplSlotStats;\n>\n> What I'd like to discuss are:\n>\n> Since the unique identifier of replication slots is the name, the\n> process sends slot statistics along with slot name to stats collector.\n> I'm concerned about the amount of data sent to the stats collector and\n> the cost of searching the statistics within the statistics array (it’s\n> O(N) where N is max_replication_slots). Since the maximum length of\n> slot name is NAMEDATALEN (64 bytes default) and max_replication_slots\n> is unlikely a large number, I might be too worrying but it seems like\n> it’s better to avoid that if we can do that easily. An idea I came up\n> with is to use the index of slots (i.g., the index of\n> ReplicationSlotCtl->replication_slots[]) as the index of statistics of\n> slot in the stats collector. But since the index of slots could change\n> after the restart we need to synchronize the index of slots on both\n> array somehow. So I thought that we can determine the index of the\n> statistics of slots at ReplicationSlotAcquire() or\n> ReplicationSlotCreate(), but it will in turn need to read stats file\n> while holding ReplicationSlotControlLock to prevent the same index of\n> the statistics being used by the concurrent process who creating a\n> slot. I might be missing something though.\n>\n\nI don't think we should be bothered about the large values of\nmax_replication_slots. The default value is 10 and I am not sure if\nusers will be able to select values large enough that we should bother\nabout searching them by name. 
I think if it could turn out to be a\nproblem then we can try to invent something to mitigate it.\n\n> Second, as long as the unique identifier is the slot name there is no\n> convenient way to distinguish between the same name old and new\n> replication slots, so the backend process or wal sender process sends\n> a message to the stats collector to drop the replication slot at\n> ReplicationSlotDropPtr(). This strategy differs from what we do for\n> table, index, and function statistics. It might not be a problem but\n> I’m thinking a better way.\n>\n\nCan we rely on message ordering in the transmission mechanism (UDP)\nfor stats? The wiki suggests [1] we can't. If so, then this might\nnot work.\n\n> The new view name is also an open question. I prefer\n> pg_stat_replication_slots and to add stats of physical replication\n> slots to the same view in the future, rather than a separate view.\n>\n\nThis sounds okay to me.\n\n> Aside from the above, this patch will change the most of the changes\n> introduced by commit 9290ad198b1 and introduce new code much. I’m\n> concerned whether such changes are acceptable at the time of beta 2.\n>\n\nI think it depends on the final patch. My initial thought was that we\nshould do this for PG14 but if you are suggesting removing the changes\ndone by commit 9290ad198b1 then we need to think over it. I could\nthink of below options:\na. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\nwere anyway having spilling without any work in PG13 but didn’t have\nstats.\nb. Try to get your patch in PG13 if we can, otherwise, revert the\nfeature 9290ad198b1.\nc. Get whatever we have in commit 9290ad198b1 for PG13 and\nadditionally have what we are discussing here for PG14. This means\nthat spilled stats at slot level will be available in PG14 via\npg_stat_replication_slots and for individual WAL senders it will be\navailable via pg_stat_replication both in PG13 and PG14. 
Even if we\ncan get your patch in PG13, we can still keep those in\npg_stat_replication.\nd. Get whatever we have in commit 9290ad198b1 for PG13 and change it\nfor PG14. I don't think this will be a popular approach.\n\n[1] - https://en.wikipedia.org/wiki/User_Datagram_Protocol\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Jul 2020 17:15:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 2, 2020 at 9:01 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I've attached PoC patch that implements a simple approach. I'd like to\n> > discuss how we collect the replication slot statistics in the stats\n> > collector before I bring the patch to completion.\n> >\n> > In this PoC patch, we have the array of PgStat_ReplSlotStats struct\n> > which has max_replication_slots entries. The backend and wal sender\n> > send the slot statistics to the stats collector when decoding a commit\n> > WAL record.\n> >\n> > typedef struct PgStat_ReplSlotStats\n> > {\n> > char slotname[NAMEDATALEN];\n> > PgStat_Counter spill_txns;\n> > PgStat_Counter spill_count;\n> > PgStat_Counter spill_bytes;\n> > } PgStat_ReplSlotStats;\n> >\n> > What I'd like to discuss are:\n> >\n> > Since the unique identifier of replication slots is the name, the\n> > process sends slot statistics along with slot name to stats collector.\n> > I'm concerned about the amount of data sent to the stats collector and\n> > the cost of searching the statistics within the statistics array (it’s\n> > O(N) where N is max_replication_slots). 
Since the maximum length of\n> > slot name is NAMEDATALEN (64 bytes default) and max_replication_slots\n> > is unlikely a large number, I might be too worrying but it seems like\n> > it’s better to avoid that if we can do that easily. An idea I came up\n> > with is to use the index of slots (i.g., the index of\n> > ReplicationSlotCtl->replication_slots[]) as the index of statistics of\n> > slot in the stats collector. But since the index of slots could change\n> > after the restart we need to synchronize the index of slots on both\n> > array somehow. So I thought that we can determine the index of the\n> > statistics of slots at ReplicationSlotAcquire() or\n> > ReplicationSlotCreate(), but it will in turn need to read stats file\n> > while holding ReplicationSlotControlLock to prevent the same index of\n> > the statistics being used by the concurrent process who creating a\n> > slot. I might be missing something though.\n> >\n>\n> I don't think we should be bothered about the large values of\n> max_replication_slots. The default value is 10 and I am not sure if\n> users will be able to select values large enough that we should bother\n> about searching them by name. I think if it could turn out to be a\n> problem then we can try to invent something to mitigate it.\n\nAgreed.\n\n>\n> > Second, as long as the unique identifier is the slot name there is no\n> > convenient way to distinguish between the same name old and new\n> > replication slots, so the backend process or wal sender process sends\n> > a message to the stats collector to drop the replication slot at\n> > ReplicationSlotDropPtr(). This strategy differs from what we do for\n> > table, index, and function statistics. It might not be a problem but\n> > I’m thinking a better way.\n> >\n>\n> Can we rely on message ordering in the transmission mechanism (UDP)\n> for stats? The wiki suggests [1] we can't. If so, then this might\n> not work.\n\nYeah, I'm also concerned about this. 
Another idea would be to have\nanother unique identifier to distinguish old and new replication slots\nwith the same name. For example, creation timestamp. And then we\nreclaim the stats of unused slots later like table and function\nstatistics.\n\nOn the other hand, if the ordering were to be reversed, we would miss\nthat stats but the next stat reporting would create the new entry. If\nthe problem is unlikely to happen in common case we can live with\nthat.\n\n> > The new view name is also an open question. I prefer\n> > pg_stat_replication_slots and to add stats of physical replication\n> > slots to the same view in the future, rather than a separate view.\n> >\n>\n> This sounds okay to me.\n>\n> > Aside from the above, this patch will change the most of the changes\n> > introduced by commit 9290ad198b1 and introduce new code much. I’m\n> > concerned whether such changes are acceptable at the time of beta 2.\n> >\n>\n> I think it depends on the final patch. My initial thought was that we\n> should do this for PG14 but if you are suggesting removing the changes\n> done by commit 9290ad198b1 then we need to think over it. I could\n> think of below options:\n> a. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\n> were anyway having spilling without any work in PG13 but didn’t have\n> stats.\n> b. Try to get your patch in PG13 if we can, otherwise, revert the\n> feature 9290ad198b1.\n> c. Get whatever we have in commit 9290ad198b1 for PG13 and\n> additionally have what we are discussing here for PG14. This means\n> that spilled stats at slot level will be available in PG14 via\n> pg_stat_replication_slots and for individual WAL senders it will be\n> available via pg_stat_replication both in PG13 and PG14. Even if we\n> can get your patch in PG13, we can still keep those in\n> pg_stat_replication.\n> d. Get whatever we have in commit 9290ad198b1 for PG13 and change it\n> for PG14. 
I don't think this will be a popular approach.\n\nI was thinking option (a) or (b). I'm inclined to option (a) since the\nPoC patch added a certain amount of new codes. I agree with you that\nit depends on the final patch.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 7 Jul 2020 10:36:57 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jul 7, 2020 at 7:07 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Second, as long as the unique identifier is the slot name there is no\n> > > convenient way to distinguish between the same name old and new\n> > > replication slots, so the backend process or wal sender process sends\n> > > a message to the stats collector to drop the replication slot at\n> > > ReplicationSlotDropPtr(). This strategy differs from what we do for\n> > > table, index, and function statistics. It might not be a problem but\n> > > I’m thinking a better way.\n> > >\n> >\n> > Can we rely on message ordering in the transmission mechanism (UDP)\n> > for stats? The wiki suggests [1] we can't. If so, then this might\n> > not work.\n>\n> Yeah, I'm also concerned about this. Another idea would be to have\n> another unique identifier to distinguish old and new replication slots\n> with the same name. For example, creation timestamp. And then we\n> reclaim the stats of unused slots later like table and function\n> statistics.\n>\n\nSo, we need to have 'creation timestamp' as persistent data for slots\nto achieve this? 
I am not sure of adding creation_time as a parameter\nto identify for this case because users can change timings on systems\nso it might not be a bullet-proof method but I agree that it can work\nin general.\n\n> On the other hand, if the ordering were to be reversed, we would miss\n> that stats but the next stat reporting would create the new entry. If\n> the problem is unlikely to happen in common case we can live with\n> that.\n>\n\nYeah, that is a valid point and I think otherwise also some UDP\npackets can be lost so maybe we don't need to worry too much about\nthis. I guess we can add a comment in the code for such a case.\n\n> >\n> > > Aside from the above, this patch will change the most of the changes\n> > > introduced by commit 9290ad198b1 and introduce new code much. I’m\n> > > concerned whether such changes are acceptable at the time of beta 2.\n> > >\n> >\n> > I think it depends on the final patch. My initial thought was that we\n> > should do this for PG14 but if you are suggesting removing the changes\n> > done by commit 9290ad198b1 then we need to think over it. I could\n> > think of below options:\n> > a. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\n> > were anyway having spilling without any work in PG13 but didn’t have\n> > stats.\n> > b. Try to get your patch in PG13 if we can, otherwise, revert the\n> > feature 9290ad198b1.\n> > c. Get whatever we have in commit 9290ad198b1 for PG13 and\n> > additionally have what we are discussing here for PG14. This means\n> > that spilled stats at slot level will be available in PG14 via\n> > pg_stat_replication_slots and for individual WAL senders it will be\n> > available via pg_stat_replication both in PG13 and PG14. Even if we\n> > can get your patch in PG13, we can still keep those in\n> > pg_stat_replication.\n> > d. Get whatever we have in commit 9290ad198b1 for PG13 and change it\n> > for PG14. I don't think this will be a popular approach.\n>\n> I was thinking option (a) or (b). 
I'm inclined to option (a) since the\n> PoC patch added a certain amount of new codes. I agree with you that\n> it depends on the final patch.\n>\n\nMagnus, Tomas, others, do you have any suggestions on the above\noptions or let us know if you have any other option in mind?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Jul 2020 08:40:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Jul 7, 2020 at 7:07 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > > Second, as long as the unique identifier is the slot name there is no\n> > > > convenient way to distinguish between the same name old and new\n> > > > replication slots, so the backend process or wal sender process sends\n> > > > a message to the stats collector to drop the replication slot at\n> > > > ReplicationSlotDropPtr(). This strategy differs from what we do for\n> > > > table, index, and function statistics. It might not be a problem but\n> > > > I’m thinking a better way.\n> > > >\n> > >\n> > > Can we rely on message ordering in the transmission mechanism (UDP)\n> > > for stats? The wiki suggests [1] we can't. If so, then this might\n> > > not work.\n> >\n> > Yeah, I'm also concerned about this. Another idea would be to have\n> > another unique identifier to distinguish old and new replication slots\n> > with the same name. For example, creation timestamp. And then we\n> > reclaim the stats of unused slots later like table and function\n> > statistics.\n> >\n>\n> So, we need to have 'creation timestamp' as persistent data for slots\n> to achieve this? 
I am not sure of adding creation_time as a parameter\n> to identify for this case because users can change timings on systems\n> so it might not be a bullet-proof method but I agree that it can work\n> in general.\n>\n\nIf we need them to be persistent across time like that, perhaps we simply\nneed to assign oids to replication slots? That might simplify this problem\nquite a bit?\n\n\n> On the other hand, if the ordering were to be reversed, we would miss\n> > that stats but the next stat reporting would create the new entry. If\n> > the problem is unlikely to happen in common case we can live with\n> > that.\n> >\n>\n> Yeah, that is a valid point and I think otherwise also some UDP\n> packets can be lost so maybe we don't need to worry too much about\n> this. I guess we can add a comment in the code for such a case.\n>\n\nThe fact that we may in theory lose some packages over UDP is the main\nreason we're using UDP in the first place, I believe :) But it's highly\nunlikely to happen in the real world I believe (and I think on some\nplatforms impossible).\n\n\n> > > Aside from the above, this patch will change the most of the changes\n> > > > introduced by commit 9290ad198b1 and introduce new code much. I’m\n> > > > concerned whether such changes are acceptable at the time of beta 2.\n> > > >\n> > >\n> > > I think it depends on the final patch. My initial thought was that we\n> > > should do this for PG14 but if you are suggesting removing the changes\n> > > done by commit 9290ad198b1 then we need to think over it. I could\n> > > think of below options:\n> > > a. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\n> > > were anyway having spilling without any work in PG13 but didn’t have\n> > > stats.\n> > > b. Try to get your patch in PG13 if we can, otherwise, revert the\n> > > feature 9290ad198b1.\n> > > c. Get whatever we have in commit 9290ad198b1 for PG13 and\n> > > additionally have what we are discussing here for PG14. 
This means\n> > > that spilled stats at slot level will be available in PG14 via\n> > > pg_stat_replication_slots and for individual WAL senders it will be\n> > > available via pg_stat_replication both in PG13 and PG14. Even if we\n> > > can get your patch in PG13, we can still keep those in\n> > > pg_stat_replication.\n> > > d. Get whatever we have in commit 9290ad198b1 for PG13 and change it\n> > > for PG14. I don't think this will be a popular approach.\n> >\n> > I was thinking option (a) or (b). I'm inclined to option (a) since the\n> > PoC patch added a certain amount of new codes. I agree with you that\n> > it depends on the final patch.\n> >\n>\n> Magnus, Tomas, others, do you have any suggestions on the above\n> options or let us know if you have any other option in mind?\n>\n>\nI have a feeling it's far too late for (b) at this time. Regardless of the\nsize of the patch, it feels that this can end up being a rushed and not\nthought-through-all-the-way one, in which case we may end up in an even\nworse position.\n\nMuch as I would like to have these stats earlier, I'm also\nleaning towards (a).\n\n//Magnus", "msg_date": "Tue, 7 Jul 2020 10:50:26 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 7 Jul 2020 at 17:50, Magnus Hagander <magnus@hagander.net> wrote:\n>\n>\n>\n> On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jul 7, 2020 at 7:07 AM Masahiko Sawada\n>> <masahiko.sawada@2ndquadrant.com> wrote:\n>> >\n>> > On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > > Second, as long as the unique identifier is the slot name there is no\n>> > > > convenient way to distinguish between the same name old and new\n>> > > > replication slots, so the backend process or wal sender process sends\n>> > > > a message to the stats collector to drop the replication slot at\n>> > > > ReplicationSlotDropPtr(). This strategy differs from what we do for\n>> > > > table, index, and function statistics. It might not be a problem but\n>> > > > I’m thinking a better way.\n>> > > >\n>> > >\n>> > > Can we rely on message ordering in the transmission mechanism (UDP)\n>> > > for stats? The wiki suggests [1] we can't. If so, then this might\n>> > > not work.\n>> >\n>> > Yeah, I'm also concerned about this. Another idea would be to have\n>> > another unique identifier to distinguish old and new replication slots\n>> > with the same name. For example, creation timestamp. And then we\n>> > reclaim the stats of unused slots later like table and function\n>> > statistics.\n>> >\n>>\n>> So, we need to have 'creation timestamp' as persistent data for slots\n>> to achieve this? 
I am not sure of adding creation_time as a parameter\n>> to identify for this case because users can change timings on systems\n>> so it might not be a bullet-proof method but I agree that it can work\n>> in general.\n>\n>\n> If we need them to be persistent across time like that, perhaps we simply need to assign oids to replication slots? That might simplify this problem quite a bit?\n\nYeah, I guess assigning oids to replication slots in the same way of\noids in system catalogs might not work because physical replication\nslot can be created even during recovery. But using a\nmonotonically-increasing integer as id seems better and straight\nforward. This id is not necessarily displayed in pg_repliation_slots\nview because the user already can use slot name as a unique\nidentifier.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 14:58:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Jul 8, 2020 at 11:28 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Jul 2020 at 17:50, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> >\n> >\n> > On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Tue, Jul 7, 2020 at 7:07 AM Masahiko Sawada\n> >> <masahiko.sawada@2ndquadrant.com> wrote:\n> >> >\n> >> > On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > >\n> >> > > > Second, as long as the unique identifier is the slot name there is no\n> >> > > > convenient way to distinguish between the same name old and new\n> >> > > > replication slots, so the backend process or wal sender process sends\n> >> > > > a message to the stats collector to drop the replication slot at\n> >> > > > ReplicationSlotDropPtr(). 
This strategy differs from what we do for\n> >> > > > table, index, and function statistics. It might not be a problem but\n> >> > > > I’m thinking a better way.\n> >> > > >\n> >> > >\n> >> > > Can we rely on message ordering in the transmission mechanism (UDP)\n> >> > > for stats? The wiki suggests [1] we can't. If so, then this might\n> >> > > not work.\n> >> >\n> >> > Yeah, I'm also concerned about this. Another idea would be to have\n> >> > another unique identifier to distinguish old and new replication slots\n> >> > with the same name. For example, creation timestamp. And then we\n> >> > reclaim the stats of unused slots later like table and function\n> >> > statistics.\n> >> >\n> >>\n> >> So, we need to have 'creation timestamp' as persistent data for slots\n> >> to achieve this? I am not sure of adding creation_time as a parameter\n> >> to identify for this case because users can change timings on systems\n> >> so it might not be a bullet-proof method but I agree that it can work\n> >> in general.\n> >\n> >\n> > If we need them to be persistent across time like that, perhaps we simply need to assign oids to replication slots? That might simplify this problem quite a bit?\n>\n> Yeah, I guess assigning oids to replication slots in the same way of\n> oids in system catalogs might not work because physical replication\n> slot can be created even during recovery. But using a\n> monotonically-increasing integer as id seems better and straight\n> forward.\n>\n\nBut don't we need to make it WAL logged as well similar to what we do\nin GetNewObjectId? 
I am thinking do we really need Oids for slots or\nis it okay to have some approximate stats in boundary cases?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Jul 2020 12:34:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 8 Jul 2020 at 16:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 8, 2020 at 11:28 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Jul 2020 at 17:50, Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > >\n> > >\n> > > On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >>\n> > >> On Tue, Jul 7, 2020 at 7:07 AM Masahiko Sawada\n> > >> <masahiko.sawada@2ndquadrant.com> wrote:\n> > >> >\n> > >> > On Mon, 6 Jul 2020 at 20:45, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> > >\n> > >> > > > Second, as long as the unique identifier is the slot name there is no\n> > >> > > > convenient way to distinguish between the same name old and new\n> > >> > > > replication slots, so the backend process or wal sender process sends\n> > >> > > > a message to the stats collector to drop the replication slot at\n> > >> > > > ReplicationSlotDropPtr(). This strategy differs from what we do for\n> > >> > > > table, index, and function statistics. It might not be a problem but\n> > >> > > > I’m thinking a better way.\n> > >> > > >\n> > >> > >\n> > >> > > Can we rely on message ordering in the transmission mechanism (UDP)\n> > >> > > for stats? The wiki suggests [1] we can't. If so, then this might\n> > >> > > not work.\n> > >> >\n> > >> > Yeah, I'm also concerned about this. Another idea would be to have\n> > >> > another unique identifier to distinguish old and new replication slots\n> > >> > with the same name. For example, creation timestamp. 
And then we\n> > >> > reclaim the stats of unused slots later like table and function\n> > >> > statistics.\n> > >> >\n> > >>\n> > >> So, we need to have 'creation timestamp' as persistent data for slots\n> > >> to achieve this? I am not sure of adding creation_time as a parameter\n> > >> to identify for this case because users can change timings on systems\n> > >> so it might not be a bullet-proof method but I agree that it can work\n> > >> in general.\n> > >\n> > >\n> > > If we need them to be persistent across time like that, perhaps we simply need to assign oids to replication slots? That might simplify this problem quite a bit?\n> >\n> > Yeah, I guess assigning oids to replication slots in the same way of\n> > oids in system catalogs might not work because physical replication\n> > slot can be created even during recovery. But using a\n> > monotonically-increasing integer as id seems better and straight\n> > forward.\n> >\n>\n> But don't we need to make it WAL logged as well similar to what we do\n> in GetNewObjectId?\n\nYes. I was thinking that assigning (the maximum number of the existing\nslot id + 1) to a new slot without WAL logging.\n\n> I am thinking do we really need Oids for slots or\n> is it okay to have some approximate stats in boundary cases?\n\nI think that using oids has another benefit that we don't need to send\nslot name to the stats collector along with the stats. 
Since the\nmaximum size of slot name is NAMEDATALEN and we don't support the\npgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\nuser wants to increase NAMEDATALEN they might not be able to build.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Jul 2020 16:44:18 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 8 Jul 2020 at 16:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 8, 2020 at 11:28 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > >\n> > > > If we need them to be persistent across time like that, perhaps we simply need to assign oids to replication slots? That might simplify this problem quite a bit?\n> > >\n> > > Yeah, I guess assigning oids to replication slots in the same way of\n> > > oids in system catalogs might not work because physical replication\n> > > slot can be created even during recovery. But using a\n> > > monotonically-increasing integer as id seems better and straight\n> > > forward.\n> > >\n> >\n> > But don't we need to make it WAL logged as well similar to what we do\n> > in GetNewObjectId?\n>\n> Yes. I was thinking that assigning (the maximum number of the existing\n> slot id + 1) to a new slot without WAL logging.\n>\n> > I am thinking do we really need Oids for slots or\n> > is it okay to have some approximate stats in boundary cases?\n>\n> I think that using oids has another benefit that we don't need to send\n> slot name to the stats collector along with the stats. 
Since the\n> maximum size of slot name is NAMEDATALEN and we don't support the\n> pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> user wants to increase NAMEDATALEN they might not be able to build.\n>\n\nI think NAMEDATALEN is used for many other objects as well and I don't\nthink we want to change it in foreseeable future, so that doesn't\nsound to be a good reason to invent OIDs for slots. OTOH, I do\nunderstand it would be better to send OIDs than names for slots but I\nam just not sure if it is a good idea to invent a new way to generate\nOIDs (which is different from how we do it for other objects in the\nsystem) for this purpose.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 Jul 2020 08:41:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Jul 7, 2020 at 2:20 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> > > I think it depends on the final patch. My initial thought was that we\n>> > > should do this for PG14 but if you are suggesting removing the changes\n>> > > done by commit 9290ad198b1 then we need to think over it. I could\n>> > > think of below options:\n>> > > a. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\n>> > > were anyway having spilling without any work in PG13 but didn’t have\n>> > > stats.\n>> > > b. Try to get your patch in PG13 if we can, otherwise, revert the\n>> > > feature 9290ad198b1.\n>> > > c. Get whatever we have in commit 9290ad198b1 for PG13 and\n>> > > additionally have what we are discussing here for PG14. 
This means\n>> > > that spilled stats at slot level will be available in PG14 via\n>> > > pg_stat_replication_slots and for individual WAL senders it will be\n>> > > available via pg_stat_replication both in PG13 and PG14. Even if we\n>> > > can get your patch in PG13, we can still keep those in\n>> > > pg_stat_replication.\n>> > > d. Get whatever we have in commit 9290ad198b1 for PG13 and change it\n>> > > for PG14. I don't think this will be a popular approach.\n>> >\n>> > I was thinking option (a) or (b). I'm inclined to option (a) since the\n>> > PoC patch added a certain amount of new codes. I agree with you that\n>> > it depends on the final patch.\n>> >\n>>\n>> Magnus, Tomas, others, do you have any suggestions on the above\n>> options or let us know if you have any other option in mind?\n>>\n>\n> I have a feeling it's far too late for (b) at this time. Regardless of the size of the patch, it feels that this can end up being a rushed and not thought-through-all-the-way one, in which case we may end up in an even worse position.\n>\n> Much as I would like to have these stats earlier, I'm also leaning towards (a).\n>\n\nFair enough. The attached patch reverts the commits related to these\nstats. Sawada-San, can you please once see if I have missed anything\napart from catversion bump which I will do before commit?\n\n\n\n--\nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 9 Jul 2020 12:39:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 9 Jul 2020 at 16:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 7, 2020 at 2:20 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Tue, Jul 7, 2020 at 5:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> > > I think it depends on the final patch. 
My initial thought was that we\n> >> > > should do this for PG14 but if you are suggesting removing the changes\n> >> > > done by commit 9290ad198b1 then we need to think over it. I could\n> >> > > think of below options:\n> >> > > a. Revert 9290ad198b1 and introduce stats for spilling in PG14. We\n> >> > > were anyway having spilling without any work in PG13 but didn’t have\n> >> > > stats.\n> >> > > b. Try to get your patch in PG13 if we can, otherwise, revert the\n> >> > > feature 9290ad198b1.\n> >> > > c. Get whatever we have in commit 9290ad198b1 for PG13 and\n> >> > > additionally have what we are discussing here for PG14. This means\n> >> > > that spilled stats at slot level will be available in PG14 via\n> >> > > pg_stat_replication_slots and for individual WAL senders it will be\n> >> > > available via pg_stat_replication both in PG13 and PG14. Even if we\n> >> > > can get your patch in PG13, we can still keep those in\n> >> > > pg_stat_replication.\n> >> > > d. Get whatever we have in commit 9290ad198b1 for PG13 and change it\n> >> > > for PG14. I don't think this will be a popular approach.\n> >> >\n> >> > I was thinking option (a) or (b). I'm inclined to option (a) since the\n> >> > PoC patch added a certain amount of new codes. I agree with you that\n> >> > it depends on the final patch.\n> >> >\n> >>\n> >> Magnus, Tomas, others, do you have any suggestions on the above\n> >> options or let us know if you have any other option in mind?\n> >>\n> >\n> > I have a feeling it's far too late for (b) at this time. Regardless of the size of the patch, it feels that this can end up being a rushed and not thought-through-all-the-way one, in which case we may end up in an even worse position.\n> >\n> > Much as I would like to have these stats earlier, I'm also leaning towards (a).\n> >\n>\n> Fair enough. The attached patch reverts the commits related to these\n> stats. 
Sawada-San, can you please once see if I have missed anything\n> apart from catversion bump which I will do before commit?\n\nThank you for the patch!\n\nDo we remove the corresponding line in the release note by another\ncommit? For the rest, it looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:49:13 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 8 Jul 2020 at 16:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 8, 2020 at 11:28 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > >\n> > > > > If we need them to be persistent across time like that, perhaps we simply need to assign oids to replication slots? That might simplify this problem quite a bit?\n> > > >\n> > > > Yeah, I guess assigning oids to replication slots in the same way of\n> > > > oids in system catalogs might not work because physical replication\n> > > > slot can be created even during recovery. But using a\n> > > > monotonically-increasing integer as id seems better and straight\n> > > > forward.\n> > > >\n> > >\n> > > But don't we need to make it WAL logged as well similar to what we do\n> > > in GetNewObjectId?\n> >\n> > Yes. 
I was thinking that assigning (the maximum number of the existing\n> > slot id + 1) to a new slot without WAL logging.\n> >\n> > > I am thinking do we really need Oids for slots or\n> > > is it okay to have some approximate stats in boundary cases?\n> >\n> > I think that using oids has another benefit that we don't need to send\n> > slot name to the stats collector along with the stats. Since the\n> > maximum size of slot name is NAMEDATALEN and we don't support the\n> > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > user wants to increase NAMEDATALEN they might not be able to build.\n> >\n>\n> I think NAMEDATALEN is used for many other objects as well and I don't\n> think we want to change it in foreseeable future, so that doesn't\n> sound to be a good reason to invent OIDs for slots. OTOH, I do\n> understand it would be better to send OIDs than names for slots but I\n> am just not sure if it is a good idea to invent a new way to generate\n> OIDs (which is different from how we do it for other objects in the\n> system) for this purpose.\n\nI'm concerned that there might be users who are using custom\nPostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\nalso agree with your concerns. So perhaps we can go with the current\nPoC patch approach as the first version (i.e., sending slot drop\nmessage to stats collector). 
When we need such a unique identifier\nalso for other purposes, we will be able to change this feature so\nthat it uses that identifier for this statistics reporting purpose.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 10:53:00 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jul 10, 2020 at 7:19 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 9 Jul 2020 at 16:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Fair enough. The attached patch reverts the commits related to these\n> > stats. Sawada-San, can you please once see if I have missed anything\n> > apart from catversion bump which I will do before commit?\n>\n> Thank you for the patch!\n>\n> Do we remove the corresponding line in the release note by another\n> commit?\n>\n\nYes, I will do that as well.\n\n> For the rest, the looks good to me.\n>\n\nThanks, will wait for a day or so and then push it early next week.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 14:39:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jul 10, 2020 at 7:23 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > >\n> > > I think that using oids has another benefit that we don't need to send\n> > > slot name to the stats collector along with the stats. 
Since the\n> > > maximum size of slot name is NAMEDATALEN and we don't support the\n> > > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > > user wants to increase NAMEDATALEN they might not be able to build.\n> > >\n> >\n> > I think NAMEDATALEN is used for many other objects as well and I don't\n> > think we want to change it in foreseeable future, so that doesn't\n> > sound to be a good reason to invent OIDs for slots. OTOH, I do\n> > understand it would be better to send OIDs than names for slots but I\n> > am just not sure if it is a good idea to invent a new way to generate\n> > OIDs (which is different from how we do it for other objects in the\n> > system) for this purpose.\n>\n> I'm concerned that there might be users who are using custom\n> PostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\n> also agree with your concerns. So perhaps we can go with the current\n> PoC patch approach as the first version (i.g., sending slot drop\n> message to stats collector). When we need such a unique identifier\n> also for other purposes, we will be able to change this feature so\n> that it uses that identifier for this statistics reporting purpose.\n>\n\nOkay, feel free to submit the version atop my revert patch. I think you\nmight want to remove the indexing stuff you have added for faster\nsearch as discussed above.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 14:42:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jul 10, 2020 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 10, 2020 at 7:19 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 9 Jul 2020 at 16:09, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Fair enough. 
The attached patch reverts the commits related to these\n> > > stats. Sawada-San, can you please once see if I have missed anything\n> > > apart from catversion bump which I will do before commit?\n> >\n> > Thank you for the patch!\n> >\n> > Do we remove the corresponding line in the release note by another\n> > commit?\n> >\n>\n> Yes, I will do that as well.\n>\n> > For the rest, the looks good to me.\n> >\n>\n> Thanks, will wait for a day or so and then push it early next week.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Jul 2020 16:53:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Jul 10, 2020 at 2:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 10, 2020 at 7:23 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > >\n> > > > I think that using oids has another benefit that we don't need to send\n> > > > slot name to the stats collector along with the stats. Since the\n> > > > maximum size of slot name is NAMEDATALEN and we don't support the\n> > > > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > > > user wants to increase NAMEDATALEN they might not be able to build.\n> > > >\n> > >\n> > > I think NAMEDATALEN is used for many other objects as well and I don't\n> > > think we want to change it in foreseeable future, so that doesn't\n> > > sound to be a good reason to invent OIDs for slots. 
OTOH, I do\n> > > understand it would be better to send OIDs than names for slots but I\n> > > am just not sure if it is a good idea to invent a new way to generate\n> > > OIDs (which is different from how we do it for other objects in the\n> > > system) for this purpose.\n> >\n> > I'm concerned that there might be users who are using custom\n> > PostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\n> > also agree with your concerns. So perhaps we can go with the current\n> > PoC patch approach as the first version (i.g., sending slot drop\n> > message to stats collector). When we need such a unique identifier\n> > also for other purposes, we will be able to change this feature so\n> > that it uses that identifier for this statistics reporting purpose.\n> >\n>\n> Okay, feel to submit the version atop my revert patch.\n>\n\nAttached, please find the rebased version. I have kept prorows as 10\ninstead of 100 for pg_stat_get_replication_slots because I don't see\nmuch reason for keeping the value more than the default value of\nmax_replication_slots.\n\nAs we are targeting this patch for PG14, I think we can now add the\nfunctionality to reset the stats as well. 
What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 16 Jul 2020 12:23:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 16 Jul 2020 at 15:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 10, 2020 at 2:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 10, 2020 at 7:23 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > >\n> > > > > I think that using oids has another benefit that we don't need to send\n> > > > > slot name to the stats collector along with the stats. Since the\n> > > > > maximum size of slot name is NAMEDATALEN and we don't support the\n> > > > > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > > > > user wants to increase NAMEDATALEN they might not be able to build.\n> > > > >\n> > > >\n> > > > I think NAMEDATALEN is used for many other objects as well and I don't\n> > > > think we want to change it in foreseeable future, so that doesn't\n> > > > sound to be a good reason to invent OIDs for slots. OTOH, I do\n> > > > understand it would be better to send OIDs than names for slots but I\n> > > > am just not sure if it is a good idea to invent a new way to generate\n> > > > OIDs (which is different from how we do it for other objects in the\n> > > > system) for this purpose.\n> > >\n> > > I'm concerned that there might be users who are using custom\n> > > PostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\n> > > also agree with your concerns. 
So perhaps we can go with the current\n> > > PoC patch approach as the first version (i.g., sending slot drop\n> > > message to stats collector). When we need such a unique identifier\n> > > also for other purposes, we will be able to change this feature so\n> > > that it uses that identifier for this statistics reporting purpose.\n> > >\n> >\n> > Okay, feel to submit the version atop my revert patch.\n> >\n>\n> Attached, please find the rebased version. I have kept prorows as 10\n> instead of 100 for pg_stat_get_replication_slots because I don't see\n> much reason for keeping the value more than the default value of\n> max_replication_slots.\n>\n\nThank you for rebasing the patch! Agreed.\n\n> As we are targeting this patch for PG14, so I think we can now add the\n> functionality to reset the stats as well. What do you think?\n>\n\nYeah, I was also updating the patch while adding the reset functions.\n\nHowever, I'm concerned about the following part:\n\n+static int\n+pgstat_replslot_index(const char *name)\n+{\n+ int i;\n+\n+ Assert(nReplSlotStats <= max_replication_slots);\n+ for (i = 0; i < nReplSlotStats; i++)\n+ {\n+ if (strcmp(replSlotStats[i].slotname, name) == 0)\n+ return i; /* found */\n+ }\n+\n+ /* not found, register new slot */\n+ memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n+ memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n+ return nReplSlotStats++;\n+}\n\n+static void\n+pgstat_recv_replslot(PgStat_MsgReplSlot *msg, int len)\n+{\n+ int idx;\n+\n+ idx = pgstat_replslot_index(msg->m_slotname);\n+ Assert(idx >= 0 && idx < max_replication_slots);\n\nAs long as we cannot rely on message ordering, the above assertion\ncould be false. For example, suppose that there is no unused\nreplication slots and the user:\n\n1. drops the existing slot.\n2. 
creates a new slot.\n\nIf the stats messages arrive in order of 2 and 1, the above assertion\nis false or leads to memory corruption when assertions are not\nenabled.\n\nA possible solution would be to add an in-use flag to\nPgStat_ReplSlotStats indicating whether the stats for slot is used or\nnot. When receiving a drop message for a slot, the stats collector\njust marks the corresponding stats as unused. When receiving the stats\nreport for a new slot but there is no unused stats slot, ignore it.\nWhat do you think?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Jul 2020 17:15:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jul 16, 2020 at 1:45 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 16 Jul 2020 at 15:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 10, 2020 at 2:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 10, 2020 at 7:23 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > I think that using oids has another benefit that we don't need to send\n> > > > > > slot name to the stats collector along with the stats. 
Since the\n> > > > > > maximum size of slot name is NAMEDATALEN and we don't support the\n> > > > > > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > > > > > user wants to increase NAMEDATALEN they might not be able to build.\n> > > > > >\n> > > > >\n> > > > > I think NAMEDATALEN is used for many other objects as well and I don't\n> > > > > think we want to change it in foreseeable future, so that doesn't\n> > > > > sound to be a good reason to invent OIDs for slots. OTOH, I do\n> > > > > understand it would be better to send OIDs than names for slots but I\n> > > > > am just not sure if it is a good idea to invent a new way to generate\n> > > > > OIDs (which is different from how we do it for other objects in the\n> > > > > system) for this purpose.\n> > > >\n> > > > I'm concerned that there might be users who are using custom\n> > > > PostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\n> > > > also agree with your concerns. So perhaps we can go with the current\n> > > > PoC patch approach as the first version (i.g., sending slot drop\n> > > > message to stats collector). When we need such a unique identifier\n> > > > also for other purposes, we will be able to change this feature so\n> > > > that it uses that identifier for this statistics reporting purpose.\n> > > >\n> > >\n> > > Okay, feel to submit the version atop my revert patch.\n> > >\n> >\n> > Attached, please find the rebased version. I have kept prorows as 10\n> > instead of 100 for pg_stat_get_replication_slots because I don't see\n> > much reason for keeping the value more than the default value of\n> > max_replication_slots.\n> >\n>\n> Thank you for rebasing the patch! Agreed.\n>\n> > As we are targeting this patch for PG14, so I think we can now add the\n> > functionality to reset the stats as well. 
What do you think?\n> >\n>\n> Yeah, I was also updating the patch while adding the reset functions.\n>\n> However, I'm concerned about the following part:\n>\n> +static int\n> +pgstat_replslot_index(const char *name)\n> +{\n> + int i;\n> +\n> + Assert(nReplSlotStats <= max_replication_slots);\n> + for (i = 0; i < nReplSlotStats; i++)\n> + {\n> + if (strcmp(replSlotStats[i].slotname, name) == 0)\n> + return i; /* found */\n> + }\n> +\n> + /* not found, register new slot */\n> + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n> + return nReplSlotStats++;\n> +}\n>\n> +static void\n> +pgstat_recv_replslot(PgStat_MsgReplSlot *msg, int len)\n> +{\n> + int idx;\n> +\n> + idx = pgstat_replslot_index(msg->m_slotname);\n> + Assert(idx >= 0 && idx < max_replication_slots);\n>\n> As long as we cannot rely on message ordering, the above assertion\n> could be false. For example, suppose that there is no unused\n> replication slots and the user:\n>\n> 1. drops the existing slot.\n> 2. creates a new slot.\n>\n> If the stats messages arrive in order of 2 and 1, the above assertion\n> is false or leads to memory corruption when assertions are not\n> enabled.\n>\n> A possible solution would be to add an in-use flag to\n> PgStat_ReplSlotStats indicating whether the stats for slot is used or\n> not. When receiving a drop message for a slot, the stats collector\n> just marks the corresponding stats as unused. When receiving the stats\n> report for a new slot but there is no unused stats slot, ignore it.\n> What do you think?\n>\n\nAs of now, you have a boolean flag msg.m_drop to distinguish the drop\nmessage but we don't have a similar way to distinguish the 'create'\nmessage. 
What if we have a way to distinguish the 'create' message (we can\nprobably keep some sort of flag to indicate the type of message\n(create, drop, update)) and then if the slot with the same name\nalready exists, we ignore such a message. Now, we also need a way to\ncreate the entry for a slot for a normal stats update message as well\nto accommodate the lost 'create' message. Does that make sense?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 14:46:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 16 Jul 2020 at 18:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 16, 2020 at 1:45 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 16 Jul 2020 at 15:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 10, 2020 at 2:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jul 10, 2020 at 7:23 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > On Thu, 9 Jul 2020 at 12:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Jul 8, 2020 at 1:14 PM Masahiko Sawada\n> > > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > I think that using oids has another benefit that we don't need to send\n> > > > > > > slot name to the stats collector along with the stats. 
Since the\n> > > > > > > maximum size of slot name is NAMEDATALEN and we don't support the\n> > > > > > > pgstat message larger than PGSTAT_MAX_MSG_SIZE (1000 bytes), if the\n> > > > > > > user wants to increase NAMEDATALEN they might not be able to build.\n> > > > > > >\n> > > > > >\n> > > > > > I think NAMEDATALEN is used for many other objects as well and I don't\n> > > > > > think we want to change it in foreseeable future, so that doesn't\n> > > > > > sound to be a good reason to invent OIDs for slots. OTOH, I do\n> > > > > > understand it would be better to send OIDs than names for slots but I\n> > > > > > am just not sure if it is a good idea to invent a new way to generate\n> > > > > > OIDs (which is different from how we do it for other objects in the\n> > > > > > system) for this purpose.\n> > > > >\n> > > > > I'm concerned that there might be users who are using custom\n> > > > > PostgreSQL that increased NAMEDATALEN for some reason. But indeed, I\n> > > > > also agree with your concerns. So perhaps we can go with the current\n> > > > > PoC patch approach as the first version (i.g., sending slot drop\n> > > > > message to stats collector). When we need such a unique identifier\n> > > > > also for other purposes, we will be able to change this feature so\n> > > > > that it uses that identifier for this statistics reporting purpose.\n> > > > >\n> > > >\n> > > > Okay, feel to submit the version atop my revert patch.\n> > > >\n> > >\n> > > Attached, please find the rebased version. I have kept prorows as 10\n> > > instead of 100 for pg_stat_get_replication_slots because I don't see\n> > > much reason for keeping the value more than the default value of\n> > > max_replication_slots.\n> > >\n> >\n> > Thank you for rebasing the patch! Agreed.\n> >\n> > > As we are targeting this patch for PG14, so I think we can now add the\n> > > functionality to reset the stats as well. 
What do you think?\n> > >\n> >\n> > Yeah, I was also updating the patch while adding the reset functions.\n> >\n> > However, I'm concerned about the following part:\n> >\n> > +static int\n> > +pgstat_replslot_index(const char *name)\n> > +{\n> > + int i;\n> > +\n> > + Assert(nReplSlotStats <= max_replication_slots);\n> > + for (i = 0; i < nReplSlotStats; i++)\n> > + {\n> > + if (strcmp(replSlotStats[i].slotname, name) == 0)\n> > + return i; /* found */\n> > + }\n> > +\n> > + /* not found, register new slot */\n> > + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> > + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n> > + return nReplSlotStats++;\n> > +}\n> >\n> > +static void\n> > +pgstat_recv_replslot(PgStat_MsgReplSlot *msg, int len)\n> > +{\n> > + int idx;\n> > +\n> > + idx = pgstat_replslot_index(msg->m_slotname);\n> > + Assert(idx >= 0 && idx < max_replication_slots);\n> >\n> > As long as we cannot rely on message ordering, the above assertion\n> > could be false. For example, suppose that there is no unused\n> > replication slots and the user:\n> >\n> > 1. drops the existing slot.\n> > 2. creates a new slot.\n> >\n> > If the stats messages arrive in order of 2 and 1, the above assertion\n> > is false or leads to memory corruption when assertions are not\n> > enabled.\n> >\n> > A possible solution would be to add an in-use flag to\n> > PgStat_ReplSlotStats indicating whether the stats for slot is used or\n> > not. When receiving a drop message for a slot, the stats collector\n> > just marks the corresponding stats as unused. When receiving the stats\n> > report for a new slot but there is no unused stats slot, ignore it.\n> > What do you think?\n> >\n>\n> As of now, you have a boolean flag msg.m_drop to distinguish the drop\n> message but we don't have a similar way to distinguish the 'create'\n> message. 
What if we have a way to distinguish the 'create' message (we can\n> probably keep some sort of flag to indicate the type of message\n> (create, drop, update)) and then if the slot with the same name\n> already exists, we ignore such a message. Now, we also need a way to\n> create the entry for a slot for a normal stats update message as well\n> to accommodate the lost 'create' message. Does that make sense?\n\nI might be missing your point, but even if we have a 'create'\nmessage, the problem can happen when slots are full and the user drops\nslot ‘slot_a’, creates slot ‘slot_b’, and the messages arrive in the\nreverse order?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Jul 2020 19:33:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jul 16, 2020 at 4:04 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 16 Jul 2020 at 18:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 16, 2020 at 1:45 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > A possible solution would be to add an in-use flag to\n> > > PgStat_ReplSlotStats indicating whether the stats for slot is used or\n> > > not. When receiving a drop message for a slot, the stats collector\n> > > just marks the corresponding stats as unused. When receiving the stats\n> > > report for a new slot but there is no unused stats slot, ignore it.\n> > > What do you think?\n> > >\n> >\n> > As of now, you have a boolean flag msg.m_drop to distinguish the drop\n> > message but we don't have a similar way to distinguish the 'create'\n> > message. 
What if we have a way to distinguish the 'create' message (we can\n> > probably keep some sort of flag to indicate the type of message\n> > (create, drop, update)) and then if the slot with the same name\n> > already exists, we ignore such a message. Now, we also need a way to\n> > create the entry for a slot for a normal stats update message as well\n> > to accommodate the lost 'create' message. Does that make sense?\n>\n> I might be missing your point, but even if we have a 'create'\n> message, the problem can happen when slots are full and the user drops\n> slot ‘slot_a’, creates slot ‘slot_b’, and the messages arrive in the\n> reverse order?\n>\n\nIn that case, also, we should drop the 'create' message of 'slot_b' as\nwe don't have space but later when an 'update' message arrives with\nstats for the 'slot_b', we will create the entry. I am also thinking:\nwhat if we send only 'update' and 'drop' messages? The message ordering\nproblem can still happen, but will we then lose one 'update' message in\nthat case?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 Jul 2020 16:31:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 16 Jul 2020 at 20:01, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 16, 2020 at 4:04 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 16 Jul 2020 at 18:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 16, 2020 at 1:45 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > A possible solution would be to add an in-use flag to\n> > > > PgStat_ReplSlotStats indicating whether the stats for slot is used or\n> > > > not. When receiving a drop message for a slot, the stats collector\n> > > > just marks the corresponding stats as unused. 
When receiving the stats\n> > > > report for a new slot but there is no unused stats slot, ignore it.\n> > > > What do you think?\n> > > >\n> > >\n> > > As of now, you have a boolean flag msg.m_drop to distinguish the drop\n> > > message but we don't have a similar way to distinguish the 'create'\n> > > message. What if have a way to distinguish 'create' message (we can\n> > > probably keep some sort of flag to indicate the type of message\n> > > (create, drop, update)) and then if the slot with the same name\n> > > already exists, we ignore such a message. Now, we also need a way to\n> > > create the entry for a slot for a normal stats update message as well\n> > > to accommodate for the lost 'create' message. Does that make sense?\n> >\n> > I might be missing your point, but even if we have 'create' message,\n> > the problem can happen if when slots are full the user drops slot\n> > ‘slot_a’, creates slot ‘slot_b', and messages arrive in the reverse\n> > order?\n> >\n>\n> In that case, also, we should drop the 'create' message of 'slot_b' as\n> we don't have space but later when an 'update' message arrives with\n> stats for the 'slot_b', we will create the entry.\n\nAgreed.\n\n> I am also thinking\n> what if send only 'update' and 'drop' message, the message ordering\n> problem can still happen but we will lose one 'update' message in that\n> case?\n\nYes, I think so too. 
We will lose one 'update' message at a maximum.\n\nI've updated the patch so that the stats collector ignores the\n'update' message if the slot stats array is already full.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 23 Jul 2020 15:16:15 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Jul 23, 2020 at 11:46 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> I've updated the patch so that the stats collector ignores the\n> 'update' message if the slot stats array is already full.\n>\n\nThis patch needs a rebase. I don't see this patch in the CF app. I\nhope you are still interested in working on this.\n\nReview comments:\n===============\n1.\n+CREATE VIEW pg_stat_replication_slots AS\n+ SELECT\n+ s.name,\n+ s.spill_txns,\n+ s.spill_count,\n+ s.spill_bytes\n+ FROM pg_stat_get_replication_slots() AS s;\n\nThe view pg_stat_replication_slots should have a column 'stats_reset'\n(datatype: timestamp with time zone) as we provide a facility to reset\nthe slots. A similar column exists in pg_stat_slru as well, so is\nthere a reason for not providing it here?\n\n2.\n+ </para>\n+ </sect2>\n+\n+ <sect2 id=\"monitoring-pg-stat-wal-receiver-view\">\n <title><structname>pg_stat_wal_receiver</structname></title>\n\nIt is better to keep one empty line between </para> and </sect2> to\nkeep it consistent with the documentation of other views.\n\n3.\n <primary>pg_stat_reset_replication_slot</primary>\n+ </indexterm>\n+ <function>pg_stat_reset_replication_slot</function> (\n<type>text</type> )\n+ <returnvalue>void</returnvalue>\n+ </para>\n+ <para>\n+ Resets statistics to zero for a single replication slot, or for all\n+ replication slots in the cluster. 
If the argument is NULL,\nall counters\n+ shown in the\n<structname>pg_stat_replication_slots</structname> view for\n+ all replication slots are reset.\n+ </para>\n\nI think the information about the parameter for this function is not\ncompletely clear. It seems to me that it should be the name of the\nslot for which we want to reset the stats, if so, let's try to be\nclear.\n\n4.\n+pgstat_reset_replslot_counter(const char *name)\n+{\n+ PgStat_MsgResetreplslotcounter msg;\n+\n+ if (pgStatSock == PGINVALID_SOCKET)\n+ return;\n+\n+ pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETREPLSLOTCOUNTER);\n+ if (name)\n+ {\n+ memcpy(&msg.m_slotname, name, NAMEDATALEN);\n+ msg.clearall = false;\n+ }\n\nDon't we want to verify here or in the caller of this function whether\nthe passed slot_name is a valid slot or not? For ex. see\npgstat_reset_shared_counters where we return an error if the target is\nnot valid.\n\n5.\n+static void\n+pgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n+ int len)\n+{\n+ int i;\n+ int idx = -1;\n+ TimestampTz ts;\n+\n+ if (!msg->clearall)\n+ {\n+ /* Get the index of replication slot statistics to reset */\n+ idx = pgstat_replslot_index(msg->m_slotname, false);\n+\n+ if (idx < 0)\n+ return; /* not found */\n\nCan we add a comment to describe when we don't expect to find the slot\nhere unless there is no way that can happen?\n\n6.\n+pgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n+ int len)\n{\n..\n+ for (i = 0; i < SLRU_NUM_ELEMENTS; i++)\n..\n}\n\nI think here we need to traverse till nReplSlotStats, not SLRU_NUM_ELEMENTS.\n\n7. Don't we need to change PGSTAT_FILE_FORMAT_ID for this patch? 
We\ncan probably do at the end but better to change it now so that it\ndoesn't slip from our mind.\n\n8.\n@@ -5350,6 +5474,23 @@ pgstat_read_statsfiles(Oid onlydb, bool\npermanent, bool deep)\n\n break;\n\n+ /*\n+ * 'R' A PgStat_ReplSlotStats struct describing a replication slot\n+ * follows.\n+ */\n+ case 'R':\n+ if (fread(&replSlotStats[nReplSlotStats], 1,\nsizeof(PgStat_ReplSlotStats), fpin)\n+ != sizeof(PgStat_ReplSlotStats))\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"corrupted statistics file \\\"%s\\\"\",\n+ statfile)));\n+ memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n+ goto done;\n+ }\n+ nReplSlotStats++;\n+ break;\n\nBoth here and in pgstat_read_db_statsfile_timestamp(), the patch\nhandles 'R' message after 'D' whereas while writing the 'R' is written\nbefore 'D'. So, I think it is better if we keep the order during read\nthe same as during write.\n\n9. While reviewing this patch, I noticed that in\npgstat_read_db_statsfile_timestamp(), if we fail to read ArchiverStats\nor SLRUStats, we return 'false' from this function but OTOH, if we\nfail to read 'D' or 'R' message, we will return 'true'. I feel the\nhandling of 'D' and 'R' message is fine because once we read\nGlobalStats, we can return the stats_timestamp. So the other two\nstands corrected. 
I understand that this is not directly related to\nthis patch but if you agree we can do this as a separate patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Sep 2020 11:54:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 7 Sep 2020 at 15:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 23, 2020 at 11:46 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I've updated the patch so that the stats collector ignores the\n> > 'update' message if the slot stats array is already full.\n> >\n>\n> This patch needs a rebase. I don't see this patch in the CF app. I\n> hope you are still interested in working on this.\n\nThank you for reviewing this patch!\n\nI'm still going to work on this patch although I might be slow to\nrespond this month.\n\n>\n> Review comments:\n> ===============\n> 1.\n> +CREATE VIEW pg_stat_replication_slots AS\n> + SELECT\n> + s.name,\n> + s.spill_txns,\n> + s.spill_count,\n> + s.spill_bytes\n> + FROM pg_stat_get_replication_slots() AS s;\n>\n> The view pg_stat_replication_slots should have a column 'stats_reset'\n> (datatype: timestamp with time zone) as we provide a facility to reset\n> the slots. A similar column exists in pg_stat_slru as well, so is\n> there a reason for not providing it here?\n\nI had missed adding the column. 
Fixed.\n\n>\n> 2.\n> + </para>\n> + </sect2>\n> +\n> + <sect2 id=\"monitoring-pg-stat-wal-receiver-view\">\n> <title><structname>pg_stat_wal_receiver</structname></title>\n>\n> It is better to keep one empty line between </para> and </sect2> to\n> keep it consistent with the documentation of other views.\n\nFixed.\n\n>\n> 3.\n> <primary>pg_stat_reset_replication_slot</primary>\n> + </indexterm>\n> + <function>pg_stat_reset_replication_slot</function> (\n> <type>text</type> )\n> + <returnvalue>void</returnvalue>\n> + </para>\n> + <para>\n> + Resets statistics to zero for a single replication slot, or for all\n> + replication slots in the cluster. If the argument is NULL,\n> all counters\n> + shown in the\n> <structname>pg_stat_replication_slots</structname> view for\n> + all replication slots are reset.\n> + </para>\n>\n> I think the information about the parameter for this function is not\n> completely clear. It seems to me that it should be the name of the\n> slot for which we want to reset the stats, if so, let's try to be\n> clear.\n\nFixed.\n\n>\n> 4.\n> +pgstat_reset_replslot_counter(const char *name)\n> +{\n> + PgStat_MsgResetreplslotcounter msg;\n> +\n> + if (pgStatSock == PGINVALID_SOCKET)\n> + return;\n> +\n> + pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETREPLSLOTCOUNTER);\n> + if (name)\n> + {\n> + memcpy(&msg.m_slotname, name, NAMEDATALEN);\n> + msg.clearall = false;\n> + }\n>\n> Don't we want to verify here or in the caller of this function whether\n> the passed slot_name is a valid slot or not? For ex. see\n> pgstat_reset_shared_counters where we return an error if the target is\n> not valid.\n\nAgreed. 
Fixed.\n\n>\n> 5.\n> +static void\n> +pgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n> + int len)\n> +{\n> + int i;\n> + int idx = -1;\n> + TimestampTz ts;\n> +\n> + if (!msg->clearall)\n> + {\n> + /* Get the index of replication slot statistics to reset */\n> + idx = pgstat_replslot_index(msg->m_slotname, false);\n> +\n> + if (idx < 0)\n> + return; /* not found */\n>\n> Can we add a comment to describe when we don't expect to find the slot\n> here unless there is no way that can happen?\n\nAdded.\n\n>\n> 6.\n> +pgstat_recv_resetreplslotcounter(PgStat_MsgResetreplslotcounter *msg,\n> + int len)\n> {\n> ..\n> + for (i = 0; i < SLRU_NUM_ELEMENTS; i++)\n> ..\n> }\n>\n> I think here we need to traverse till nReplSlotStats, not SLRU_NUM_ELEMENTS.\n\nFixed.\n\n>\n> 7. Don't we need to change PGSTAT_FILE_FORMAT_ID for this patch? We\n> can probably do at the end but better to change it now so that it\n> doesn't slip from our mind.\n\nYes, changed.\n\n>\n> 8.\n> @@ -5350,6 +5474,23 @@ pgstat_read_statsfiles(Oid onlydb, bool\n> permanent, bool deep)\n>\n> break;\n>\n> + /*\n> + * 'R' A PgStat_ReplSlotStats struct describing a replication slot\n> + * follows.\n> + */\n> + case 'R':\n> + if (fread(&replSlotStats[nReplSlotStats], 1,\n> sizeof(PgStat_ReplSlotStats), fpin)\n> + != sizeof(PgStat_ReplSlotStats))\n> + {\n> + ereport(pgStatRunningInCollector ? LOG : WARNING,\n> + (errmsg(\"corrupted statistics file \\\"%s\\\"\",\n> + statfile)));\n> + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> + goto done;\n> + }\n> + nReplSlotStats++;\n> + break;\n>\n> Both here and in pgstat_read_db_statsfile_timestamp(), the patch\n> handles 'R' message after 'D' whereas while writing the 'R' is written\n> before 'D'. So, I think it is better if we keep the order during read\n> the same as during write.\n\nChanged the code so that it writes 'R' after 'D'.\n\n>\n> 9. 
While reviewing this patch, I noticed that in\n> pgstat_read_db_statsfile_timestamp(), if we fail to read ArchiverStats\n> or SLRUStats, we return 'false' from this function but OTOH, if we\n> fail to read 'D' or 'R' message, we will return 'true'. I feel the\n> handling of 'D' and 'R' message is fine because once we read\n> GlobalStats, we can return the stats_timestamp. So the other two\n> stands corrected. I understand that this is not directly related to\n> this patch but if you agree we can do this as a separate patch.\n\nIt seems to make sense to me. We can set *ts and then read both\nArchiverStats and SLRUStats so we can return a valid timestamp even if\nwe fail to read.\n\nI've attached both patches: 0001 patch fixes the issue you reported.\n0002 patch is the patch that incorporated all review comments.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 8 Sep 2020 11:23:10 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 7 Sep 2020 at 15:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I'm still going to work on this patch although I might be slow\n> response this month.\n>\n\nThis is a quite fast response. Thanks for staying on top of it.\n\n>\n> >\n> > 9. While reviewing this patch, I noticed that in\n> > pgstat_read_db_statsfile_timestamp(), if we fail to read ArchiverStats\n> > or SLRUStats, we return 'false' from this function but OTOH, if we\n> > fail to read 'D' or 'R' message, we will return 'true'. I feel the\n> > handling of 'D' and 'R' message is fine because once we read\n> > GlobalStats, we can return the stats_timestamp. So the other two\n> > stands corrected. 
I understand that this is not directly related to\n> > this patch but if you agree we can do this as a separate patch.\n>\n> It seems to make sense to me. We can set *ts and then read both\n> ArchiverStats and SLRUStats so we can return a valid timestamp even if\n> we fail to read.\n>\n\nI have started a separate thread for this bug-fix [1] and will\ncontinue reviewing this patch.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1J3oTJKyVq6v7K4d3jD%2BvtnruG9fHRib6UuWWsrwAR6Aw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Sep 2020 11:43:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 7 Sep 2020 at 15:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > This patch needs a rebase. I don't see this patch in the CF app. 
I\n> > hope you are still interested in working on this.\n>\n> Thank you for reviewing this patch!\n>\n> I'm still going to work on this patch although I might be slow\n> response this month.\n>\n\nComments on the latest patch:\n=============================\n1.\n+CREATE VIEW pg_stat_replication_slots AS\n+ SELECT\n+ s.name,\n+ s.spill_txns,\n+ s.spill_count,\n+ s.spill_bytes,\n+ s.stats_reset\n+ FROM pg_stat_get_replication_slots() AS s;\n\nYou forgot to update the docs for the new parameter.\n\n2.\n@@ -5187,6 +5305,12 @@ pgstat_read_statsfiles(Oid onlydb, bool\npermanent, bool deep)\n for (i = 0; i < SLRU_NUM_ELEMENTS; i++)\n slruStats[i].stat_reset_timestamp = globalStats.stat_reset_timestamp;\n\n+ /*\n+ * Set the same reset timestamp for all replication slots too.\n+ */\n+ for (i = 0; i < max_replication_slots; i++)\n+ replSlotStats[i].stat_reset_timestamp = globalStats.stat_reset_timestamp;\n+\n\nI don't understand why you have removed the above code from the new\nversion of the patch?\n\n3.\npgstat_recv_resetreplslotcounter()\n{\n..\n+ ts = GetCurrentTimestamp();\n+ for (i = 0; i < nReplSlotStats; i++)\n+ {\n+ /* reset entry with the given index, or all entries */\n+ if (msg->clearall || idx == i)\n+ {\n+ /* reset only counters. Don't clear slot name */\n+ replSlotStats[i].spill_txns = 0;\n+ replSlotStats[i].spill_count = 0;\n+ replSlotStats[i].spill_bytes = 0;\n+ replSlotStats[i].stat_reset_timestamp = ts;\n+ }\n+ }\n..\n\nI don't like this coding pattern as in the worst case we need to\ntraverse all the slots to reset a particular slot. This could be okay\nfor a fixed number of elements as we have in SLRU but here it appears\nquite inefficient. 
We can move the reset of stats part to a separate\nfunction and then invoke it from the place where we need to reset a\nparticular slot and the above place.\n\n4.\n+pgstat_replslot_index(const char *name, bool create_it)\n{\n..\n+ replSlotStats[nReplSlotStats].stat_reset_timestamp = GetCurrentTimestamp();\n..\n}\n\nWhy do we need to set the reset timestamp on the creation of slot entry?\n\n5.\n@@ -3170,6 +3175,13 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n spilled++;\n }\n\n+ /* update the statistics */\n+ rb->spillCount += 1;\n+ rb->spillBytes += size;\n+\n+ /* Don't consider already serialized transactions. */\n+ rb->spillTxns += rbtxn_is_serialized(txn) ? 0 : 1;\n\nWe can't increment the spillTxns in the above way because now\nsometimes we do serialize before streaming and in that case we clear\nthe serialized flag after streaming, see ReorderBufferTruncateTXN. So,\nthe count can go wrong. Another problem is currently the patch call\nUpdateSpillStats only from begin_cb_wrapper which means it won't\nconsider streaming transactions (streaming transactions that might\nhave spilled). If we consider the streamed case along with it, we can\nprobably keep this counter up-to-date because in the above place we\ncan check if the txn is once serialized or streamed, we shouldn't\nincrement the counter. I think we need to merge Ajin's patch for\nstreaming stats [1] and fix the issue. 
I have not checked his patch so\nit might need a rebase and or some changes.\n\n6.\n@@ -322,6 +321,9 @@ ReplicationSlotCreate(const char *name, bool db_specific,\n\n /* Let everybody know we've modified this slot */\n ConditionVariableBroadcast(&slot->active_cv);\n+\n+ /* Create statistics entry for the new slot */\n+ pgstat_report_replslot(NameStr(slot->data.name), 0, 0, 0);\n }\n..\n..\n@@ -683,6 +685,18 @@ ReplicationSlotDropPtr(ReplicationSlot *slot)\n ereport(WARNING,\n (errmsg(\"could not remove directory \\\"%s\\\"\", tmppath)));\n\n+ /*\n+ * Report to drop the replication slot to stats collector. Since there\n+ * is no guarantee the order of message arrival on an UDP connection,\n+ * it's possible that a message for creating a new slot arrives before a\n+ * message for removing the old slot. We send the drop message while\n+ * holding ReplicationSlotAllocationLock to reduce that possibility.\n+ * If the messages arrived in reverse, we would lose one statistics update\n+ * message. But the next update message will create the statistics for\n+ * the replication slot.\n+ */\n+ pgstat_report_replslot_drop(NameStr(slot->data.name));\n+\n\nSimilar to drop message, why don't we send the create message while\nholding the ReplicationSlotAllocationLock?\n\n[1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 8 Sep 2020 19:02:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Comments on the latest patch:\n> =============================\n>\n\nApart from the comments I gave yesterday, another thing I was\nwondering is how to write some tests for this patch. The two ideas I\ncould think of are as follows:\n\n1. 
One idea was to keep these stats for each WALSender as it was in\nthe commit that we reverted as b074813d48. If we had that then we can\nquery the stats for tests added in commit 58b5ae9d62. I am not sure\nwhether we want to display it in view pg_stat_replication but it would\nbe a really good way to test the streamed and serialized transactions\nin a predictable manner.\n\n2. Then the second way is to try doing something similar to what we do\nin src/test/regress/sql/stats.sql\n\nI think we should do both if possible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Sep 2020 15:20:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Sep 9, 2020 at 3:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Comments on the latest patch:\n> > =============================\n> >\n>\n> Apart from the comments I gave yesterday, another thing I was\n> wondering is how to write some tests for this patch. The two ideas I\n> could think of are as follows:\n>\n> 1. One idea was to keep these stats for each WALSender as it was in\n> the commit that we reverted as b074813d48. If we had that then we can\n> query the stats for tests added in commit 58b5ae9d62. I am not sure\n> whether we want to display it in view pg_stat_replication but it would\n> be a really good way to test the streamed and serialized transactions\n> in a predictable manner.\n>\n> 2. Then the second way is to try doing something similar to what we do\n> in src/test/regress/sql/stats.sql\n>\n> I think we should do both if possible.\n>\n\nI have made a few comment changes on top of your last version. 
If you\nare fine with these then include them with the next version of your\npatch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 10 Sep 2020 09:30:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n\nI have fixed these review comments in the attached patch.\n\n>\n> Comments on the latest patch:\n> =============================\n> 1.\n> +CREATE VIEW pg_stat_replication_slots AS\n> + SELECT\n> + s.name,\n> + s.spill_txns,\n> + s.spill_count,\n> + s.spill_bytes,\n> + s.stats_reset\n> + FROM pg_stat_get_replication_slots() AS s;\n>\n> You forgot to update the docs for the new parameter.\n>\n\nUpdated the docs for this.\n\n> 2.\n> @@ -5187,6 +5305,12 @@ pgstat_read_statsfiles(Oid onlydb, bool\n> permanent, bool deep)\n> for (i = 0; i < SLRU_NUM_ELEMENTS; i++)\n> slruStats[i].stat_reset_timestamp = globalStats.stat_reset_timestamp;\n>\n> + /*\n> + * Set the same reset timestamp for all replication slots too.\n> + */\n> + for (i = 0; i < max_replication_slots; i++)\n> + replSlotStats[i].stat_reset_timestamp = globalStats.stat_reset_timestamp;\n> +\n>\n> I don't understand why you have removed the above code from the new\n> version of the patch?\n>\n\nAdded back.\n\n> 3.\n> pgstat_recv_resetreplslotcounter()\n> {\n> ..\n> + ts = GetCurrentTimestamp();\n> + for (i = 0; i < nReplSlotStats; i++)\n> + {\n> + /* reset entry with the given index, or all entries */\n> + if (msg->clearall || idx == i)\n> + {\n> + /* reset only counters. 
Don't clear slot name */\n> + replSlotStats[i].spill_txns = 0;\n> + replSlotStats[i].spill_count = 0;\n> + replSlotStats[i].spill_bytes = 0;\n> + replSlotStats[i].stat_reset_timestamp = ts;\n> + }\n> + }\n> ..\n>\n> I don't like this coding pattern as in the worst case we need to\n> traverse all the slots to reset a particular slot. This could be okay\n> for a fixed number of elements as we have in SLRU but here it appears\n> quite inefficient. We can move the reset of stats part to a separate\n> function and then invoke it from the place where we need to reset a\n> particular slot and the above place.\n>\n\nChanged the code as per the above idea.\n\n> 4.\n> +pgstat_replslot_index(const char *name, bool create_it)\n> {\n> ..\n> + replSlotStats[nReplSlotStats].stat_reset_timestamp = GetCurrentTimestamp();\n> ..\n> }\n>\n> Why do we need to set the reset timestamp on the creation of slot entry?\n>\n\nI don't think we need to show any time if the slot is never reset. Let\nme know if there is any reason to show it.\n\n\n> 5.\n> @@ -3170,6 +3175,13 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> spilled++;\n> }\n>\n> + /* update the statistics */\n> + rb->spillCount += 1;\n> + rb->spillBytes += size;\n> +\n> + /* Don't consider already serialized transactions. */\n> + rb->spillTxns += rbtxn_is_serialized(txn) ? 0 : 1;\n>\n> We can't increment the spillTxns in the above way because now\n> sometimes we do serialize before streaming and in that case we clear\n> the serialized flag after streaming, see ReorderBufferTruncateTXN. So,\n> the count can go wrong.\n>\n\nTo fix, this I have added another flag which indicates if we have ever\nserialized the txn. 
I couldn't find a better way, do let me know if\nyou can think of a better way to address this comment.\n\n> Another problem is currently the patch call\n> UpdateSpillStats only from begin_cb_wrapper which means it won't\n> consider streaming transactions (streaming transactions that might\n> have spilled).\n>\n\nThe other problem I see with updating in begin_cb_wrapper is that it\nwill ignore the spilling done for transactions that get aborted. To\nfix both the issues, I have updated the stats in DecodeCommit and\nDecodeAbort.\n\n\n> 6.\n> @@ -322,6 +321,9 @@ ReplicationSlotCreate(const char *name, bool db_specific,\n>\n> /* Let everybody know we've modified this slot */\n> ConditionVariableBroadcast(&slot->active_cv);\n> +\n> + /* Create statistics entry for the new slot */\n> + pgstat_report_replslot(NameStr(slot->data.name), 0, 0, 0);\n> }\n> ..\n> ..\n> @@ -683,6 +685,18 @@ ReplicationSlotDropPtr(ReplicationSlot *slot)\n> ereport(WARNING,\n> (errmsg(\"could not remove directory \\\"%s\\\"\", tmppath)));\n>\n> + /*\n> + * Report to drop the replication slot to stats collector. Since there\n> + * is no guarantee the order of message arrival on an UDP connection,\n> + * it's possible that a message for creating a new slot arrives before a\n> + * message for removing the old slot. We send the drop message while\n> + * holding ReplicationSlotAllocationLock to reduce that possibility.\n> + * If the messages arrived in reverse, we would lose one statistics update\n> + * message. 
But the next update message will create the statistics for\n> + * the replication slot.\n> + */\n> + pgstat_report_replslot_drop(NameStr(slot->data.name));\n> +\n>\n> Similar to drop message, why don't we send the create message while\n> holding the ReplicationSlotAllocationLock?\n>\n\nUpdated code to address this comment, basically moved the create\nmessage under lock.\n\nApart from the above,\n(a) fixed one bug in ReorderBufferSerializeTXN() where we were\nupdating the stats even when we have not spilled anything.\n(b) made changes in pgstat_read_db_statsfile_timestamp to return false\nwhen the replication slot entry is corrupt.\n(c) move the declaration and definitions in pgstat.c to make them\nconsistent with existing code\n(d) made another couple of cosmetic fixes and changed a few comments\n(e) Tested the patch by using a guc which allows spilling all the\nchanges. See v4-0001-guc-always-spill\n\nLet me know what do you think about the changes?\n\n--\nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 19 Sep 2020 13:48:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, Sep 19, 2020 at 1:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n>\n> I have fixed these review comments in the attached patch.\n>\n>\n> Apart from the above,\n> (a) fixed one bug in ReorderBufferSerializeTXN() where we were\n> updating the stats even when we have not spilled anything.\n> (b) made changes in pgstat_read_db_statsfile_timestamp to return false\n> when the replication slot entry is corrupt.\n> (c) move the declaration and definitions in pgstat.c to make them\n> consistent with existing code\n> (d) made another couple of cosmetic fixes and changed a few 
comments\n> (e) Tested the patch by using a guc which allows spilling all the\n> changes. See v4-0001-guc-always-spill\n>\n\nI have found a way to write the test case for this patch. This is\nbased on the idea we used in stats.sql. As of now, I have kept the\ntest as a separate patch. We can decide to commit the test part\nseparately as it is slightly timing dependent, but OTOH, as it is based\non existing logic in stats.sql, there shouldn't be much problem. I\nhave not changed anything apart from the test patch in this version.\nNote that the first patch is just a debugging kind of tool to test the\npatch.\n\nThoughts?\n\n--\nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 24 Sep 2020 17:44:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Sep 19, 2020 at 1:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I have fixed these review comments in the attached patch.\n> >\n> >\n> > Apart from the above,\n> > (a) fixed one bug in ReorderBufferSerializeTXN() where we were\n> > updating the stats even when we have not spilled anything.\n> > (b) made changes in pgstat_read_db_statsfile_timestamp to return false\n> > when the replication slot entry is corrupt.\n> > (c) move the declaration and definitions in pgstat.c to make them\n> > consistent with existing code\n> > (d) made another couple of cosmetic fixes and changed a few comments\n> > (e) Tested the patch by using a guc which allows spilling all the\n> > changes. See v4-0001-guc-always-spill\n> >\n>\n> I have found a way to write the test case for this patch. 
This is\n> based on the idea we used in stats.sql. As of now, I have kept the\n> test as a separate patch. We can decide to commit the test part\n> separately as it is slightly timing dependent but OTOH as it is based\n> on existing logic in stats.sql so there shouldn't be much problem. I\n> have not changed anything apart from the test patch in this version.\n> Note that the first patch is just a debugging kind of tool to test the\n> patch.\n>\n\nI have done some more testing of this patch especially for the case\nwhere we spill before streaming the transaction and found everything\nis working as expected. Additionally, I have changed a few more\ncomments and ran pgindent. I am still not very sure whether we want to\ndisplay physical slots in this view as all the stats are for logical\nslots but anyway we can add stats w.r.t physical slots in the future.\nI am fine either way (don't show physical slots in this view or show\nthem but keep stats as 0). Let me know if you have any thoughts on\nthese points, other than that I am happy with the current state of the\npatch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 25 Sep 2020 16:33:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Sep 25, 2020 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Sep 19, 2020 at 1:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 8, 2020 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Sep 8, 2020 at 7:53 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > I have fixed these review comments in the attached patch.\n> > >\n> > >\n> > > Apart from the above,\n> > > (a) fixed one bug in ReorderBufferSerializeTXN() where we were\n> > > updating 
the stats even when we have not spilled anything.\n> > > (b) made changes in pgstat_read_db_statsfile_timestamp to return false\n> > > when the replication slot entry is corrupt.\n> > > (c) move the declaration and definitions in pgstat.c to make them\n> > > consistent with existing code\n> > > (d) made another couple of cosmetic fixes and changed a few comments\n> > > (e) Tested the patch by using a guc which allows spilling all the\n> > > changes. See v4-0001-guc-always-spill\n> > >\n> >\n> > I have found a way to write the test case for this patch. This is\n> > based on the idea we used in stats.sql. As of now, I have kept the\n> > test as a separate patch. We can decide to commit the test part\n> > separately as it is slightly timing dependent but OTOH as it is based\n> > on existing logic in stats.sql so there shouldn't be much problem. I\n> > have not changed anything apart from the test patch in this version.\n> > Note that the first patch is just a debugging kind of tool to test the\n> > patch.\n> >\n>\n> I have done some more testing of this patch especially for the case\n> where we spill before streaming the transaction and found everything\n> is working as expected. Additionally, I have changed a few more\n> comments and ran pgindent. I am still not very sure whether we want to\n> display physical slots in this view as all the stats are for logical\n> slots but anyway we can add stats w.r.t physical slots in the future.\n> I am fine either way (don't show physical slots in this view or show\n> them but keep stats as 0). 
Let me know if you have any thoughts on\n> these points, other than that I am happy with the current state of the\n> patch.\n>\n\nIMHO, it will make more sense to only show the logical replication\nslots in this view.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Sep 2020 13:12:32 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Sep 30, 2020 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Sep 25, 2020 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have done some more testing of this patch especially for the case\n> > where we spill before streaming the transaction and found everything\n> > is working as expected. Additionally, I have changed a few more\n> > comments and ran pgindent. I am still not very sure whether we want to\n> > display physical slots in this view as all the stats are for logical\n> > slots but anyway we can add stats w.r.t physical slots in the future.\n> > I am fine either way (don't show physical slots in this view or show\n> > them but keep stats as 0). 
Let me know if you have any thoughts on\n> > these points, other than that I am happy with the current state of the\n> > patch.\n>\n> IMHO, It will make more sense to only show the logical replication\n> slots in this view.\n>\n\nOkay, Sawada-San, others, do you have any opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 30 Sep 2020 14:40:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Sep 30, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 30, 2020 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Sep 25, 2020 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have done some more testing of this patch especially for the case\n> > > where we spill before streaming the transaction and found everything\n> > > is working as expected. Additionally, I have changed a few more\n> > > comments and ran pgindent. I am still not very sure whether we want to\n> > > display physical slots in this view as all the stats are for logical\n> > > slots but anyway we can add stats w.r.t physical slots in the future.\n> > > I am fine either way (don't show physical slots in this view or show\n> > > them but keep stats as 0). Let me know if you have any thoughts on\n> > > these points, other than that I am happy with the current state of the\n> > > patch.\n> >\n> > IMHO, It will make more sense to only show the logical replication\n> > slots in this view.\n> >\n>\n> Okay, Sawada-San, others, do you have any opinion on this matter?\n\nI have started looking into this patch, I have a few comments.\n\n+ Number of times transactions were spilled to disk. 
Transactions\n+ may get spilled repeatedly, and this counter gets incremented on every\n+ such invocation.\n+ </para></entry>\n+ </row>\n+\n+\n+ <para>\n+ Tracking of spilled transactions works only for logical replication. In\n\nThe number of spaces used after the full stop is not uniform.\n\n+ /* update the statistics iff we have spilled anything */\n+ if (spilled)\n+ {\n+ rb->spillCount += 1;\n+ rb->spillBytes += size;\n+\n+ /* Don't consider already serialized transactions. */\n\nSingle-line comments are not uniform: \"update the statistics\" starts\nwith a small letter and does not end with a full stop,\nwhereas 'Don't consider' starts with a capital and ends with a full stop.\n\n\n+\n+ /*\n+ * We set this flag to indicate if the transaction is ever serialized.\n+ * We need this to accurately update the stats.\n+ */\n+ txn->txn_flags |= RBTXN_IS_SERIALIZED_CLEAR;\n\nI feel we can explain the exact scenario in the comment, i.e., after a\nspill, if we stream, we still\nneed to know that it spilled in the past so that we don't count this again\nas a new spilled transaction.\n\nold slot. We send the drop\n+ * and create messages while holding ReplicationSlotAllocationLock to\n+ * reduce that possibility. If the messages reached in reverse, we would\n+ * lose one statistics update message. But\n\nSpacing after the full stop is not uniform.\n\n\n+ * Statistics about transactions spilled to disk.\n+ *\n+ * A single transaction may be spilled repeatedly, which is why we keep\n+ * two different counters. 
For spilling, the transaction counter includes\n+ * both toplevel transactions and subtransactions.\n+ */\n+ int64 spillCount; /* spill-to-disk invocation counter */\n+ int64 spillTxns; /* number of transactions spilled to disk */\n+ int64 spillBytes; /* amount of data spilled to disk */\n\nCan we keep the order as spillTxns, spillCount, spillBytes, because\nwe have kept it like that at every other place,\nso that way it will look more uniform?\n\nOther than that I did not see any problem.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 30 Sep 2020 16:34:18 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Sep 30, 2020 at 4:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Sep 30, 2020 at 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Sep 30, 2020 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 25, 2020 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I have done some more testing of this patch especially for the case\n> > > > where we spill before streaming the transaction and found everything\n> > > > is working as expected. Additionally, I have changed a few more\n> > > > comments and ran pgindent. I am still not very sure whether we want to\n> > > > display physical slots in this view as all the stats are for logical\n> > > > slots but anyway we can add stats w.r.t physical slots in the future.\n> > > > I am fine either way (don't show physical slots in this view or show\n> > > > them but keep stats as 0). 
Let me know if you have any thoughts on\n> > > > these points, other than that I am happy with the current state of the\n> > > > patch.\n> > >\n> > > IMHO, It will make more sense to only show the logical replication\n> > > slots in this view.\n> > >\n> >\n> > Okay, Sawada-San, others, do you have any opinion on this matter?\n>\n\nI have changed it so that the view will only show logical replication slots\nand adjusted the docs accordingly. I think we can easily extend it to\nshow physical replication slots if required in the future.\n\n> I have started looking into this patch, I have a few comments.\n>\n> + Number of times transactions were spilled to disk. Transactions\n> + may get spilled repeatedly, and this counter gets incremented on every\n> + such invocation.\n> + </para></entry>\n> + </row>\n> +\n> +\n> + <para>\n> + Tracking of spilled transactions works only for logical replication. In\n>\n> The number of spaces used after the full stop is not uniform.\n>\n\nI have removed this sentence as it is not required since we only want to\nshow logical slots in the view, and for that I have updated the other\nparts of the doc.\n\n> + /* update the statistics iff we have spilled anything */\n> + if (spilled)\n> + {\n> + rb->spillCount += 1;\n> + rb->spillBytes += size;\n> +\n> + /* Don't consider already serialized transactions. */\n>\n> Single line comments are not uniform, \"update the statistics\" is\n> starting is small letter and not ending with the full stop\n> whereas 'Don't consider' is starting with capital and ending with full stop.\n>\n\nActually, it doesn't matter in this case but I have changed it to keep it\nconsistent.\n\n>\n> +\n> + /*\n> + * We set this flag to indicate if the transaction is ever serialized.\n> + * We need this to accurately update the stats.\n> + */\n> + txn->txn_flags |= RBTXN_IS_SERIALIZED_CLEAR;\n>\n> I feel we can explain the exact scenario in the comment, i.e. 
after\n> spill if we stream then we still\n> need to know that it spilled in past so that we don't count this again\n> as a new spilled transaction.\n>\n\nOkay, I have expanded the comment a bit more to explain this.\n\n> old slot. We send the drop\n> + * and create messages while holding ReplicationSlotAllocationLock to\n> + * reduce that possibility. If the messages reached in reverse, we would\n> + * lose one statistics update message. But\n>\n> Spacing after the full stop is not uniform.\n>\n\nChanged.\n\n>\n> + * Statistics about transactions spilled to disk.\n> + *\n> + * A single transaction may be spilled repeatedly, which is why we keep\n> + * two different counters. For spilling, the transaction counter includes\n> + * both toplevel transactions and subtransactions.\n> + */\n> + int64 spillCount; /* spill-to-disk invocation counter */\n> + int64 spillTxns; /* number of transactions spilled to disk */\n> + int64 spillBytes; /* amount of data spilled to disk */\n>\n> Can we keep the order as spillTxns, spillTxns, spillBytes because\n> every other place we kept like that\n> so that way it will look more uniform.\n>\n\nChanged here and at one more place as per this suggestion.\n\n> Other than that I did not see any problem.\n>\n\nThanks for the review.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Oct 2020 12:06:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Oct 1, 2020 at 12:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for the review.\n>\n\noops, forgot to attach the updated patches, doing now.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 1 Oct 2020 12:09:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 30 Sep 2020 at 18:10, 
Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 30, 2020 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Sep 25, 2020 at 4:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 24, 2020 at 5:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have done some more testing of this patch especially for the case\n> > > where we spill before streaming the transaction and found everything\n> > > is working as expected. Additionally, I have changed a few more\n> > > comments and ran pgindent. I am still not very sure whether we want to\n> > > display physical slots in this view as all the stats are for logical\n> > > slots but anyway we can add stats w.r.t physical slots in the future.\n> > > I am fine either way (don't show physical slots in this view or show\n> > > them but keep stats as 0). Let me know if you have any thoughts on\n> > > these points, other than that I am happy with the current state of the\n> > > patch.\n> >\n> > IMHO, It will make more sense to only show the logical replication\n> > slots in this view.\n> >\n>\n\nThank you for updating the patch.\n\n> Okay, Sawada-San, others, do you have any opinion on this matter?\n>\n\nWhen we discussed this before, I was thinking that we could have other\nstatistics for physical slots in the same statistics view in the\nfuture. Having the view show only logical slots also makes sense to me,\nbut I’m a bit concerned that we could break backward compatibility:\nmonitoring tools etc. will be affected when the view starts to\nshow physical slots too. 
If the view shows only logical slots, it also\nmight be worth considering having separate views for logical slots\nand physical slots, and having this change add\npg_stat_logical_replication_slots.\n\nHere is my review comment on the v7 patch.\n\n+ /*\n+ * Set the same reset timestamp for all replication slots too.\n+ */\n+ for (i = 0; i < max_replication_slots; i++)\n+ replSlotStats[i].stat_reset_timestamp =\nglobalStats.stat_reset_timestamp;\n+\n\nYou added back the above code but since we clear the timestamps on\ncreation of a new slot they are not shown:\n\n+ /* Register new slot */\n+ memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n+ memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n\nLooking at other statistics views such as pg_stat_slru,\npg_stat_bgwriter, and pg_stat_archiver, they have a valid\nreset_timestamp value from the beginning. That's why I removed that\ncode and assigned the timestamp when registering a new slot.\n\n---\n+ if (OidIsValid(slot->data.database))\n+ pgstat_report_replslot(NameStr(slot->data.name), 0, 0, 0);\n\nI think we can use SlotIsLogical() for this purpose. The same is true\nwhen dropping a slot.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 3 Oct 2020 12:56:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, Oct 3, 2020 at 9:26 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> When we discussed this before, I was thinking that we could have other\n> statistics for physical slots in the same statistics view in the\n> future. 
Having the view show only logical slots also makes sense to me\n> but I’m concerned a bit that we could break backward compatibility\n> that monitoring tools etc will be affected when the view starts to\n> show physical slots too.\n>\n\nI think that would happen anyway as we need to add more columns in\nview for the physical slots.\n\n> If the view shows only logical slots, it also\n> might be worth considering to have separate views for logical slots\n> and physical slots and having pg_stat_logical_replication_slots by\n> this change.\n>\n\nI am not sure at this stage but I think we will add the additional\nstats for physical slots once we have any in this view itself. I would\nlike to avoid adding separate views if possible. The only reason to\nomit physical slots at this stage is that we don't have any stats for\nthe same.\n\n> Here is my review comment on the v7 patch.\n>\n> + /*\n> + * Set the same reset timestamp for all replication slots too.\n> + */\n> + for (i = 0; i < max_replication_slots; i++)\n> + replSlotStats[i].stat_reset_timestamp =\n> globalStats.stat_reset_timestamp;\n> +\n>\n> You added back the above code but since we clear the timestamps on\n> creation of a new slot they are not shown:\n>\n> + /* Register new slot */\n> + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n>\n> Looking at other statistics views such as pg_stat_slru,\n> pg_stat_bgwriter, and pg_stat_archiver, they have a valid\n> reset_timestamp value from the beginning. That's why I removed that\n> code and assigned the timestamp when registering a new slot.\n>\n\nHmm, I don't think it is shown intentionally in those views. 
I think\nwhat is happening in other views is that it has been initialized with\nsome value when we read the stats and then while updating and or\nwriting because we don't change the stat_reset_timestamp, it displays\nthe same value as initialized at the time of read. Now, because in\npgstat_replslot_index() we always initialize the replSlotStats it\nwould overwrite any previous value we have set during read and display\nthe stat_reset as empty for replication slots. If we stop initializing\nthe replSlotStats in pgstat_replslot_index() then we will see similar\nbehavior as other views have. So even if we want to change then\nprobably we should stop initialization in pgstat_replslot_index but I\ndon't think that is necessarily better behavior because the\ndescription of the parameter doesn't indicate any such thing.\n\n> ---\n> + if (OidIsValid(slot->data.database))\n> + pgstat_report_replslot(NameStr(slot->data.name), 0, 0, 0);\n>\n> I think we can use SlotIsLogical() for this purpose. The same is true\n> when dropping a slot.\n>\n\nmakes sense, so changed accordingly in the attached patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 3 Oct 2020 13:25:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Sat, 3 Oct 2020 at 16:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Oct 3, 2020 at 9:26 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > When we discussed this before, I was thinking that we could have other\n> > statistics for physical slots in the same statistics view in the\n> > future. 
Having the view show only logical slots also makes sense to me\n> > but I’m concerned a bit that we could break backward compatibility\n> > that monitoring tools etc will be affected when the view starts to\n> > show physical slots too.\n> >\n>\n> I think that would happen anyway as we need to add more columns in\n> view for the physical slots.\n\nI think it depends; adding more columns to the view might not break\ntools if the query used in the tool explicitly specifies columns. OTOH\nif the view starts to show more rows, the tool will need to have the\ncondition to get the same result as before.\n\n>\n> > If the view shows only logical slots, it also\n> > might be worth considering to have separate views for logical slots\n> > and physical slots and having pg_stat_logical_replication_slots by\n> > this change.\n> >\n>\n> I am not sure at this stage but I think we will add the additional\n> stats for physical slots once we have any in this view itself. I would\n> like to avoid adding separate views if possible. The only reason to\n> omit physical slots at this stage is that we don't have any stats for\n> the same.\n\nI also prefer not to have separate views. I'm concerned about the\ncompatibility I explained above but at the same time I agree that it\ndoesn't make sense to show the stats always having nothing. 
Given that\nyou and Dilip agreed on that, I also agree with that.\n\n>\n> > Here is my review comment on the v7 patch.\n> >\n> > + /*\n> > + * Set the same reset timestamp for all replication slots too.\n> > + */\n> > + for (i = 0; i < max_replication_slots; i++)\n> > + replSlotStats[i].stat_reset_timestamp =\n> > globalStats.stat_reset_timestamp;\n> > +\n> >\n> > You added back the above code but since we clear the timestamps on\n> > creation of a new slot they are not shown:\n> >\n> > + /* Register new slot */\n> > + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> > + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n> >\n> > Looking at other statistics views such as pg_stat_slru,\n> > pg_stat_bgwriter, and pg_stat_archiver, they have a valid\n> > reset_timestamp value from the beginning. That's why I removed that\n> > code and assigned the timestamp when registering a new slot.\n> >\n>\n> Hmm, I don't think it is shown intentionally in those views. I think\n> what is happening in other views is that it has been initialized with\n> some value when we read the stats and then while updating and or\n> writing because we don't change the stat_reset_timestamp, it displays\n> the same value as initialized at the time of read. Now, because in\n> pgstat_replslot_index() we always initialize the replSlotStats it\n> would overwrite any previous value we have set during read and display\n> the stat_reset as empty for replication slots. If we stop initializing\n> the replSlotStats in pgstat_replslot_index() then we will see similar\n> behavior as other views have. So even if we want to change then\n> probably we should stop initialization in pgstat_replslot_index but I\n> don't think that is necessarily better behavior because the\n> description of the parameter doesn't indicate any such thing.\n\nUnderstood. I agree that the newly created slot doesn't have\nreset_timestamp. 
Looking at pg_stat_database, a view whose rows are\nadded dynamically unlike other stat views, the newly created database\ndoesn't have reset_timestamp. But given we clear the stats for a slot\nat pgstat_replslot_index(), why do we need to initialize the\nreset_timestamp with globalStats.stat_reset_timestamp when reading the\nstats file? Even if we could not find any slot stats in the stats file\nthe view won’t show anything. And the reset_timestamp will be cleared\nwhen receiving a slot stats.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 5 Oct 2020 16:55:46 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Oct 5, 2020 at 1:26 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 3 Oct 2020 at 16:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Oct 3, 2020 at 9:26 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > When we discussed this before, I was thinking that we could have other\n> > > statistics for physical slots in the same statistics view in the\n> > > future. Having the view show only logical slots also makes sense to me\n> > > but I’m concerned a bit that we could break backward compatibility\n> > > that monitoring tools etc will be affected when the view starts to\n> > > show physical slots too.\n> > >\n> >\n> > I think that would happen anyway as we need to add more columns in\n> > view for the physical slots.\n>\n> I think it depends; adding more columns to the view might not break\n> tools if the query used in the tool explicitly specifies columns.\n>\n\nWhat if it uses Select * ...? 
It might not be advisable to assume how\nthe user might fetch data.\n\n> OTOH\n> if the view starts to show more rows, the tool will need to have the\n> condition to get the same result as before.\n>\n> >\n> > > If the view shows only logical slots, it also\n> > > might be worth considering to have separate views for logical slots\n> > > and physical slots and having pg_stat_logical_replication_slots by\n> > > this change.\n> > >\n> >\n> > I am not sure at this stage but I think we will add the additional\n> > stats for physical slots once we have any in this view itself. I would\n> > like to avoid adding separate views if possible. The only reason to\n> > omit physical slots at this stage is that we don't have any stats for\n> > the same.\n>\n> I also prefer not to have separate views. I'm concerned about the\n> compatibility I explained above but at the same time I agree that it\n> doesn't make sense to show the stats always having nothing. Since\n> given you and Dilip agreed on that, I also agree with that.\n>\n\nOkay.\n\n> >\n> > > Here is my review comment on the v7 patch.\n> > >\n> > > + /*\n> > > + * Set the same reset timestamp for all replication slots too.\n> > > + */\n> > > + for (i = 0; i < max_replication_slots; i++)\n> > > + replSlotStats[i].stat_reset_timestamp =\n> > > globalStats.stat_reset_timestamp;\n> > > +\n> > >\n> > > You added back the above code but since we clear the timestamps on\n> > > creation of a new slot they are not shown:\n> > >\n> > > + /* Register new slot */\n> > > + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> > > + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n> > >\n> > > Looking at other statistics views such as pg_stat_slru,\n> > > pg_stat_bgwriter, and pg_stat_archiver, they have a valid\n> > > reset_timestamp value from the beginning. 
That's why I removed that\n> > > code and assigned the timestamp when registering a new slot.\n> > >\n> >\n> > Hmm, I don't think it is shown intentionally in those views. I think\n> > what is happening in other views is that it has been initialized with\n> > some value when we read the stats and then while updating and or\n> > writing because we don't change the stat_reset_timestamp, it displays\n> > the same value as initialized at the time of read. Now, because in\n> > pgstat_replslot_index() we always initialize the replSlotStats it\n> > would overwrite any previous value we have set during read and display\n> > the stat_reset as empty for replication slots. If we stop initializing\n> > the replSlotStats in pgstat_replslot_index() then we will see similar\n> > behavior as other views have. So even if we want to change then\n> > probably we should stop initialization in pgstat_replslot_index but I\n> > don't think that is necessarily better behavior because the\n> > description of the parameter doesn't indicate any such thing.\n>\n> Understood. I agreed that the newly created slot doesn't have\n> reset_timestamp. Looking at pg_stat_database, a view whose rows are\n> added dynamically unlike other stat views, the newly created database\n> doesn't have reset_timestamp. But given we clear the stats for a slot\n> at pgstat_replslot_index(), why do we need to initialize the\n> reset_timestamp with globalStats.stat_reset_timestamp when reading the\n> stats file? Even if we could not find any slot stats in the stats file\n> the view won’t show anything.\n>\n\nIt was mainly for a code consistency point of view. Also, we will\nclear the data in pgstat_replslot_index only for new slots, not for\nexisting slots. 
It might be used when we can't load the statsfile as\nper comment in code (\"Set the current timestamp (will be kept only in\ncase we can't load an existing statsfile)).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 5 Oct 2020 14:20:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 5 Oct 2020 at 17:50, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 5, 2020 at 1:26 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sat, 3 Oct 2020 at 16:55, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Oct 3, 2020 at 9:26 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > When we discussed this before, I was thinking that we could have other\n> > > > statistics for physical slots in the same statistics view in the\n> > > > future. Having the view show only logical slots also makes sense to me\n> > > > but I’m concerned a bit that we could break backward compatibility\n> > > > that monitoring tools etc will be affected when the view starts to\n> > > > show physical slots too.\n> > > >\n> > >\n> > > I think that would happen anyway as we need to add more columns in\n> > > view for the physical slots.\n> >\n> > I think it depends; adding more columns to the view might not break\n> > tools if the query used in the tool explicitly specifies columns.\n> >\n>\n> What if it uses Select * ...? 
It might not be advisable to assume how\n> the user might fetch data.\n>\n> > OTOH\n> > if the view starts to show more rows, the tool will need to have the\n> > condition to get the same result as before.\n> >\n> > >\n> > > > If the view shows only logical slots, it also\n> > > > might be worth considering to have separate views for logical slots\n> > > > and physical slots and having pg_stat_logical_replication_slots by\n> > > > this change.\n> > > >\n> > >\n> > > I am not sure at this stage but I think we will add the additional\n> > > stats for physical slots once we have any in this view itself. I would\n> > > like to avoid adding separate views if possible. The only reason to\n> > > omit physical slots at this stage is that we don't have any stats for\n> > > the same.\n> >\n> > I also prefer not to have separate views. I'm concerned about the\n> > compatibility I explained above but at the same time I agree that it\n> > doesn't make sense to show the stats always having nothing. Since\n> > given you and Dilip agreed on that, I also agree with that.\n> >\n>\n> Okay.\n>\n> > >\n> > > > Here is my review comment on the v7 patch.\n> > > >\n> > > > + /*\n> > > > + * Set the same reset timestamp for all replication slots too.\n> > > > + */\n> > > > + for (i = 0; i < max_replication_slots; i++)\n> > > > + replSlotStats[i].stat_reset_timestamp =\n> > > > globalStats.stat_reset_timestamp;\n> > > > +\n> > > >\n> > > > You added back the above code but since we clear the timestamps on\n> > > > creation of a new slot they are not shown:\n> > > >\n> > > > + /* Register new slot */\n> > > > + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n> > > > + memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n> > > >\n> > > > Looking at other statistics views such as pg_stat_slru,\n> > > > pg_stat_bgwriter, and pg_stat_archiver, they have a valid\n> > > > reset_timestamp value from the beginning. 
That's why I removed that\n> > > > code and assigned the timestamp when registering a new slot.\n> > > >\n> > >\n> > > Hmm, I don't think it is shown intentionally in those views. I think\n> > > what is happening in other views is that it has been initialized with\n> > > some value when we read the stats and then while updating and or\n> > > writing because we don't change the stat_reset_timestamp, it displays\n> > > the same value as initialized at the time of read. Now, because in\n> > > pgstat_replslot_index() we always initialize the replSlotStats it\n> > > would overwrite any previous value we have set during read and display\n> > > the stat_reset as empty for replication slots. If we stop initializing\n> > > the replSlotStats in pgstat_replslot_index() then we will see similar\n> > > behavior as other views have. So even if we want to change then\n> > > probably we should stop initialization in pgstat_replslot_index but I\n> > > don't think that is necessarily better behavior because the\n> > > description of the parameter doesn't indicate any such thing.\n> >\n> > Understood. I agreed that the newly created slot doesn't have\n> > reset_timestamp. Looking at pg_stat_database, a view whose rows are\n> > added dynamically unlike other stat views, the newly created database\n> > doesn't have reset_timestamp. But given we clear the stats for a slot\n> > at pgstat_replslot_index(), why do we need to initialize the\n> > reset_timestamp with globalStats.stat_reset_timestamp when reading the\n> > stats file? Even if we could not find any slot stats in the stats file\n> > the view won’t show anything.\n> >\n>\n> It was mainly for a code consistency point of view. Also, we will\n> clear the data in pgstat_replslot_index only for new slots, not for\n> existing slots. 
It might be used when we can't load the statsfile as\n> per comment in code (\"Set the current timestamp (will be kept only in\n> case we can't load an existing statsfile)).\n>\n\nUnderstood.\n\nLooking at pgstat_reset_replslot_counter() in the v8 patch, even if we\npass a physical slot name to pg_stat_reset_replication_slot() a\nPgStat_MsgResetreplslotcounter is sent to the stats collector. I’m\nokay with not raising an error but maybe we can have it not to send\nthe message in that case.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 6 Oct 2020 13:03:32 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> pass a physical slot name to pg_stat_reset_replication_slot() a\n> PgStat_MsgResetreplslotcounter is sent to the stats collector. I’m\n> okay with not raising an error but maybe we can have it not to send\n> the message in that case.\n>\n\nmakes sense, so changed accordingly.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 6 Oct 2020 14:26:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 6 Oct 2020 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> > pass a physical slot name to pg_stat_reset_replication_slot() a\n> > PgStat_MsgResetreplslotcounter is sent to the stats collector. 
I’m\n> > okay with not raising an error but maybe we can have it not to send\n> > the message in that case.\n> >\n>\n> makes sense, so changed accordingly.\n>\n\nThank you for updating the patch!\n\nBoth patches look good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 7 Oct 2020 14:54:04 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Oct 7, 2020 at 11:24 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 6 Oct 2020 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> > > pass a physical slot name to pg_stat_reset_replication_slot() a\n> > > PgStat_MsgResetreplslotcounter is sent to the stats collector. I’m\n> > > okay with not raising an error but maybe we can have it not to send\n> > > the message in that case.\n> > >\n> >\n> > makes sense, so changed accordingly.\n> >\n>\n> Thank you for updating the patch!\n>\n\nThanks, I will push the first one tomorrow unless I see more comments\nand test-case one later. I think after we are done with this the next\nstep would be to finish the streaming stats work [1]. We probably need\nto review and add the test case in that patch. 
If nobody else shows up\nI will pick it up and complete it.\n\n[1] - https://www.postgresql.org/message-id/CAFPTHDZ8RnOovefzB%2BOMoRxLSD404WRLqWBUHe6bWqM5ew1bNA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Oct 2020 14:22:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 7, 2020 at 11:24 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 6 Oct 2020 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> > > > pass a physical slot name to pg_stat_reset_replication_slot() a\n> > > > PgStat_MsgResetreplslotcounter is sent to the stats collector. I’m\n> > > > okay with not raising an error but maybe we can have it not to send\n> > > > the message in that case.\n> > > >\n> > >\n> > > makes sense, so changed accordingly.\n> > >\n> >\n> > Thank you for updating the patch!\n> >\n>\n> Thanks, I will push the first one tomorrow unless I see more comments\n> and test-case one later.\n\nI thought we could have a test case for the reset function, what do you think?\n\n> I think after we are done with this the next\n> step would be to finish the streaming stats work [1]. We probably need\n> to review and add the test case in that patch. 
If nobody else shows up\n> I will pick it up and complete it.\n\n+1\nI can review that patch.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 8 Oct 2020 11:15:34 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 7, 2020 at 11:24 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 6 Oct 2020 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n> > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > >\n> > > > > Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> > > > > pass a physical slot name to pg_stat_reset_replication_slot() a\n> > > > > PgStat_MsgResetreplslotcounter is sent to the stats collector. I’m\n> > > > > okay with not raising an error but maybe we can have it not to send\n> > > > > the message in that case.\n> > > > >\n> > > >\n> > > > makes sense, so changed accordingly.\n> > > >\n> > >\n> > > Thank you for updating the patch!\n> > >\n> >\n> > Thanks, I will push the first one tomorrow unless I see more comments\n> > and test-case one later.\n>\n> I thought we could have a test case for the reset function, what do you think?\n>\n\nWe can write if we want but there are few things we need to do for\nthat like maybe a new function like wait_for_spill_stats which will\ncheck if the counters have become zero. 
Then probably call a reset\nfunction, call a new wait function, and then again check stats to\nensure they are reset to 0.\n\nWe can't write any advanced test which means reset the existing stats\nperform some tests and again check stats because *slot_get_changes()\nfunction can start from the previous WAL for which we have covered the\nstats. We might write that if we can somehow track the WAL positions\nfrom the previous test. I am not sure if we want to go there.\n\n> > I think after we are done with this the next\n> > step would be to finish the streaming stats work [1]. We probably need\n> > to review and add the test case in that patch. If nobody else shows up\n> > I will pick it up and complete it.\n>\n> +1\n> I can review that patch.\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Oct 2020 10:41:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 8 Oct 2020 at 14:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 7, 2020 at 11:24 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Tue, 6 Oct 2020 at 17:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Oct 6, 2020 at 9:34 AM Masahiko Sawada\n> > > > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > > > >\n> > > > > > Looking at pgstat_reset_replslot_counter() in the v8 patch, even if we\n> > > > > > pass a physical slot name to pg_stat_reset_replication_slot() a\n> > > > > > PgStat_MsgResetreplslotcounter is sent to the stats collector. 
I’m\n> > > > > > okay with not raising an error but maybe we can have it not to send\n> > > > > > the message in that case.\n> > > > > >\n> > > > >\n> > > > > makes sense, so changed accordingly.\n> > > > >\n> > > >\n> > > > Thank you for updating the patch!\n> > > >\n> > >\n> > > Thanks, I will push the first one tomorrow unless I see more comments\n> > > and test-case one later.\n> >\n> > I thought we could have a test case for the reset function, what do you think?\n> >\n>\n> We can write if we want but there are few things we need to do for\n> that like maybe a new function like wait_for_spill_stats which will\n> check if the counters have become zero. Then probably call a reset\n> function, call a new wait function, and then again check stats to\n> ensure they are reset to 0.\n\nYes.\n\n> We can't write any advanced test which means reset the existing stats\n> perform some tests and again check stats because *slot_get_changes()\n> function can start from the previous WAL for which we have covered the\n> stats. We might write that if we can somehow track the WAL positions\n> from the previous test. 
I am not sure if we want to go there.\n\nCan we use pg_logical_slot_peek_changes() instead to decode the same\ntransactions multiple times?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 8 Oct 2020 17:24:52 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Oct 8, 2020 at 1:55 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 8 Oct 2020 at 14:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > We can write if we want but there are few things we need to do for\n> > that like maybe a new function like wait_for_spill_stats which will\n> > check if the counters have become zero. Then probably call a reset\n> > function, call a new wait function, and then again check stats to\n> > ensure they are reset to 0.\n>\n> Yes.\n>\n\nI am not sure if it is worth but probably it is not a bad idea\nespecially if we extend the existing tests based on your below idea?\n\n> > We can't write any advanced test which means reset the existing stats\n> > perform some tests and again check stats because *slot_get_changes()\n> > function can start from the previous WAL for which we have covered the\n> > stats. We might write that if we can somehow track the WAL positions\n> > from the previous test. I am not sure if we want to go there.\n>\n> Can we use pg_logical_slot_peek_changes() instead to decode the same\n> transactions multiple times?\n>\n\nI think this will do the trick. If we want to go there then I suggest\nwe can have a separate regression test file in test_decoding with name\nas decoding_stats, stats, or something like that. 
We can later add the\ntests related to streaming stats in that file as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Oct 2020 14:29:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I think after we are done with this the next\n> > step would be to finish the streaming stats work [1]. We probably need\n> > to review and add the test case in that patch. If nobody else shows up\n> > I will pick it up and complete it.\n>\n> +1\n> I can review that patch.\n>\n\nI have rebased the stream stats patch and made minor modifications. I\nhaven't done a detailed review but one thing that I think is not\ncorrect is:\n@@ -3496,10 +3499,18 @@ ReorderBufferStreamTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n txn->snapshot_now = NULL;\n }\n\n+\n+ rb->streamCount += 1;\n+ rb->streamBytes += txn->total_size;\n+\n+ /* Don't consider already streamed transaction. */\n+ rb->streamTxns += (rbtxn_is_streamed(txn)) ? 0 : 1;\n+\n /* Process and send the changes to output plugin. 
*/\n ReorderBufferProcessTXN(rb, txn, InvalidXLogRecPtr, snapshot_now,\n command_id, true);\n\nI think we should update the stream stats after\nReorderBufferProcessTXN rather than before because any error in\nReorderBufferProcessTXN can lead to an unnecessary update of stats.\nBut OTOH, the txn flags, and other data can be changed after\nReorderBufferProcessTXN so we need to save them in a temporary\nvariable before calling the function.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 8 Oct 2020 19:27:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 8 Oct 2020 at 17:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 1:55 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 8 Oct 2020 at 14:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > We can write if we want but there are few things we need to do for\n> > > that like maybe a new function like wait_for_spill_stats which will\n> > > check if the counters have become zero. Then probably call a reset\n> > > function, call a new wait function, and then again check stats to\n> > > ensure they are reset to 0.\n> >\n> > Yes.\n> >\n>\n> I am not sure if it is worth but probably it is not a bad idea\n> especially if we extend the existing tests based on your below idea?\n>\n> > > We can't write any advanced test which means reset the existing stats\n> > > perform some tests and again check stats because *slot_get_changes()\n> > > function can start from the previous WAL for which we have covered the\n> > > stats. We might write that if we can somehow track the WAL positions\n> > > from the previous test. I am not sure if we want to go there.\n> >\n> > Can we use pg_logical_slot_peek_changes() instead to decode the same\n> > transactions multiple times?\n> >\n>\n> I think this will do the trick. 
If we want to go there then I suggest\n> we can have a separate regression test file in test_decoding with name\n> as decoding_stats, stats, or something like that. We can later add the\n> tests related to streaming stats in that file as well.\n>\n\nAgreed.\n\nI've updated the patch. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 12 Oct 2020 14:29:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 8 Oct 2020 at 22:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I think after we are done with this the next\n> > > step would be to finish the streaming stats work [1]. We probably need\n> > > to review and add the test case in that patch. If nobody else shows up\n> > > I will pick it up and complete it.\n> >\n> > +1\n> > I can review that patch.\n> >\n>\n> I have rebased the stream stats patch and made minor modifications. I\n> haven't done a detailed review but one thing that I think is not\n> correct is:\n> @@ -3496,10 +3499,18 @@ ReorderBufferStreamTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> txn->snapshot_now = NULL;\n> }\n>\n> +\n> + rb->streamCount += 1;\n> + rb->streamBytes += txn->total_size;\n> +\n> + /* Don't consider already streamed transaction. */\n> + rb->streamTxns += (rbtxn_is_streamed(txn)) ? 0 : 1;\n> +\n> /* Process and send the changes to output plugin. 
*/\n> ReorderBufferProcessTXN(rb, txn, InvalidXLogRecPtr, snapshot_now,\n> command_id, true);\n>\n> I think we should update the stream stats after\n> ReorderBufferProcessTXN rather than before because any error in\n> ReorderBufferProcessTXN can lead to an unnecessary update of stats.\n> But OTOH, the txn flags, and other data can be changed after\n> ReorderBufferProcessTXN so we need to save them in a temporary\n> variable before calling the function.\n\nThank you for updating the patch!\n\nI've not looked at the patch in-depth yet but RBTXN_IS_STREAMED could\nbe cleared after ReorderBUfferProcessTXN()?\n\nBTW maybe it's better to start a new thread for this patch as the\ntitle is no longer relevant.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Oct 2020 15:22:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Hi, thank you for the awesome feature.\r\n\r\nAs it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\r\nThe attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\r\nAlso, the macro name PG_STAT_GET_..._CLOS has been changed to PG_STAT_GET_..._COLS.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com] \r\nSent: Monday, October 12, 2020 3:22 PM\r\nTo: Amit Kapila <amit.kapila16@gmail.com>\r\nCc: Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\nSubject: Re: Resetting spilled txn 
statistics in pg_stat_replication\r\n\r\nOn Thu, 8 Oct 2020 at 22:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n> On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada \r\n> <masahiko.sawada@2ndquadrant.com> wrote:\r\n> >\r\n> > On Wed, 7 Oct 2020 at 17:52, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > > I think after we are done with this the next step would be to \r\n> > > finish the streaming stats work [1]. We probably need to review \r\n> > > and add the test case in that patch. If nobody else shows up I \r\n> > > will pick it up and complete it.\r\n> >\r\n> > +1\r\n> > I can review that patch.\r\n> >\r\n>\r\n> I have rebased the stream stats patch and made minor modifications. I \r\n> haven't done a detailed review but one thing that I think is not \r\n> correct is:\r\n> @@ -3496,10 +3499,18 @@ ReorderBufferStreamTXN(ReorderBuffer *rb, \r\n> ReorderBufferTXN *txn)\r\n> txn->snapshot_now = NULL;\r\n> }\r\n>\r\n> +\r\n> + rb->streamCount += 1;\r\n> + rb->streamBytes += txn->total_size;\r\n> +\r\n> + /* Don't consider already streamed transaction. */\r\n> + rb->streamTxns += (rbtxn_is_streamed(txn)) ? 0 : 1;\r\n> +\r\n> /* Process and send the changes to output plugin. 
*/\r\n> ReorderBufferProcessTXN(rb, txn, InvalidXLogRecPtr, snapshot_now,\r\n> command_id, true);\r\n>\r\n> I think we should update the stream stats after \r\n> ReorderBufferProcessTXN rather than before because any error in \r\n> ReorderBufferProcessTXN can lead to an unnecessary update of stats.\r\n> But OTOH, the txn flags, and other data can be changed after \r\n> ReorderBufferProcessTXN so we need to save them in a temporary \r\n> variable before calling the function.\r\n\r\nThank you for updating the patch!\r\n\r\nI've not looked at the patch in-depth yet but RBTXN_IS_STREAMED could be cleared after ReorderBUfferProcessTXN()?\r\n\r\nBTW maybe it's better to start a new thread for this patch as the title is no longer relevant.\r\n\r\nRegards,\r\n\r\n--\r\nMasahiko Sawada http://www.2ndQuadrant.com/ \r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 12 Oct 2020 09:29:05 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Oct 12, 2020 at 10:59 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 8 Oct 2020 at 17:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 8, 2020 at 1:55 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 8 Oct 2020 at 14:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > We can write if we want but there are few things we need to do for\n> > > > that like maybe a new function like wait_for_spill_stats which will\n> > > > check if the counters have become zero. 
Then probably call a reset\n> > > > function, call a new wait function, and then again check stats to\n> > > > ensure they are reset to 0.\n> > >\n> > > Yes.\n> > >\n> >\n> > I am not sure if it is worth but probably it is not a bad idea\n> > especially if we extend the existing tests based on your below idea?\n> >\n> > > > We can't write any advanced test which means reset the existing stats\n> > > > perform some tests and again check stats because *slot_get_changes()\n> > > > function can start from the previous WAL for which we have covered the\n> > > > stats. We might write that if we can somehow track the WAL positions\n> > > > from the previous test. I am not sure if we want to go there.\n> > >\n> > > Can we use pg_logical_slot_peek_changes() instead to decode the same\n> > > transactions multiple times?\n> > >\n> >\n> > I think this will do the trick. If we want to go there then I suggest\n> > we can have a separate regression test file in test_decoding with name\n> > as decoding_stats, stats, or something like that. We can later add the\n> > tests related to streaming stats in that file as well.\n> >\n>\n> Agreed.\n>\n> I've updated the patch. Please review it.\n>\n\nFew comments:\n=============\n1.\n+-- function to wait for counters to advance\n+CREATE FUNCTION wait_for_spill_stats(check_reset bool) RETURNS void AS $$\n\nCan we rename this function to wait_for_decode_stats? I am thinking we\ncan later reuse this function for streaming stats as well by passing\nthe additional parameter 'stream bool'.\n\n2. 
let's drop the table added by this test and regression_slot at the\nend of the test.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Oct 2020 15:55:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 12 Oct 2020 at 18:29, Shinoda, Noriyoshi (PN Japan A&PS\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\n>\n> Hi, thank you for the awesome feature.\n>\n\nThank you for reporting!\n\n> As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\n> The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\n\nIt seems a good idea to me. In other system views, we use the name\ndata type for object name. When I wrote the first patch, I borrowed\nthe code for pg_stat_slru which uses text data for the name but I\nthink it's an oversight.\n\n> Also, the macro name PG_STAT_GET_..._CLOS has been changed to PG_STAT_GET_..._COLS.\n\nGood catch!\n\nHere is my comments on the patch:\n\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -798,7 +798,7 @@ CREATE VIEW pg_stat_replication AS\n\n CREATE VIEW pg_stat_replication_slots AS\n SELECT\n- s.name,\n+ s.name AS slot_name,\n s.spill_txns,\n s.spill_count,\n s.spill_bytes,\n\nI think we should modify 'proargnames' of\npg_stat_get_replication_slots() in pg_proc.dat instead.\n\n---\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -7094,7 +7094,7 @@ pgstat_replslot_index(const char *name, bool create_it)\n Assert(nReplSlotStats <= max_replication_slots);\n for (i = 0; i < nReplSlotStats; i++)\n {\n- if (strcmp(replSlotStats[i].slotname, name) == 0)\n+ if (strcmp(replSlotStats[i].slotname.data, name) == 0)\n return i; /* found */\n 
}\n\n@@ -7107,7 +7107,7 @@ pgstat_replslot_index(const char *name, bool create_it)\n\n /* Register new slot */\n memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n- memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\n+ memcpy(&replSlotStats[nReplSlotStats].slotname.data, name, NAMEDATALEN);\n\n return nReplSlotStats++;\n }\n\nWe can use NameStr() instead.\n\n---\nPerhaps we need to update the regression test as well.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 12 Oct 2020 20:11:32 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Oct 12, 2020 at 11:52 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 8 Oct 2020 at 22:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 8, 2020 at 7:46 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I have rebased the stream stats patch and made minor modifications. I\n> > haven't done a detailed review but one thing that I think is not\n> > correct is:\n> > @@ -3496,10 +3499,18 @@ ReorderBufferStreamTXN(ReorderBuffer *rb,\n> > ReorderBufferTXN *txn)\n> > txn->snapshot_now = NULL;\n> > }\n> >\n> > +\n> > + rb->streamCount += 1;\n> > + rb->streamBytes += txn->total_size;\n> > +\n> > + /* Don't consider already streamed transaction. */\n> > + rb->streamTxns += (rbtxn_is_streamed(txn)) ? 0 : 1;\n> > +\n> > /* Process and send the changes to output plugin. 
*/\n> > ReorderBufferProcessTXN(rb, txn, InvalidXLogRecPtr, snapshot_now,\n> > command_id, true);\n> >\n> > I think we should update the stream stats after\n> > ReorderBufferProcessTXN rather than before because any error in\n> > ReorderBufferProcessTXN can lead to an unnecessary update of stats.\n> > But OTOH, the txn flags, and other data can be changed after\n> > ReorderBufferProcessTXN so we need to save them in a temporary\n> > variable before calling the function.\n>\n> Thank you for updating the patch!\n>\n> I've not looked at the patch in-depth yet but RBTXN_IS_STREAMED could\n> be cleared after ReorderBUfferProcessTXN()?\n>\n\nI think you mean to say RBTXN_IS_STREAMED could be *set* after\nReorderBUfferProcessTXN(). We need to set it for txn and subtxns and\ncurrently, it is being done with other things in\nReorderBufferTruncateTXN so not sure if it is a good idea to do this\nseparately.\n\n> BTW maybe it's better to start a new thread for this patch as the\n> title is no longer relevant.\n>\n\nYeah, that makes sense. 
I'll do that while posting a new version of the patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Oct 2020 16:51:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Sawada-san, Thank you for your comments.\r\n\r\nThe attached patch reflects the comment.\r\nI also made a fix for the regression test.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com] \r\nSent: Monday, October 12, 2020 8:12 PM\r\nTo: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>\r\nCc: Amit Kapila <amit.kapila16@gmail.com>; Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\nSubject: Re: Resetting spilled txn statistics in pg_stat_replication\r\n\r\nOn Mon, 12 Oct 2020 at 18:29, Shinoda, Noriyoshi (PN Japan A&PS\r\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\r\n>\r\n> Hi, thank you for the awesome feature.\r\n>\r\n\r\nThank you for reporting!\r\n\r\n> As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\r\n> The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\r\n\r\nIt seems a good idea to me. In other system views, we use the name data type for object name. 
When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\r\n\r\n> Also, the macro name PG_STAT_GET_..._CLOS has been changed to PG_STAT_GET_..._COLS.\r\n\r\nGood catch!\r\n\r\nHere is my comments on the patch:\r\n\r\n--- a/src/backend/catalog/system_views.sql\r\n+++ b/src/backend/catalog/system_views.sql\r\n@@ -798,7 +798,7 @@ CREATE VIEW pg_stat_replication AS\r\n\r\n CREATE VIEW pg_stat_replication_slots AS\r\n SELECT\r\n- s.name,\r\n+ s.name AS slot_name,\r\n s.spill_txns,\r\n s.spill_count,\r\n s.spill_bytes,\r\n\r\nI think we should modify 'proargnames' of\r\npg_stat_get_replication_slots() in pg_proc.dat instead.\r\n\r\n---\r\n--- a/src/backend/postmaster/pgstat.c\r\n+++ b/src/backend/postmaster/pgstat.c\r\n@@ -7094,7 +7094,7 @@ pgstat_replslot_index(const char *name, bool create_it)\r\n Assert(nReplSlotStats <= max_replication_slots);\r\n for (i = 0; i < nReplSlotStats; i++)\r\n {\r\n- if (strcmp(replSlotStats[i].slotname, name) == 0)\r\n+ if (strcmp(replSlotStats[i].slotname.data, name) == 0)\r\n return i; /* found */\r\n }\r\n\r\n@@ -7107,7 +7107,7 @@ pgstat_replslot_index(const char *name, bool create_it)\r\n\r\n /* Register new slot */\r\n memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\r\n- memcpy(&replSlotStats[nReplSlotStats].slotname, name, NAMEDATALEN);\r\n+ memcpy(&replSlotStats[nReplSlotStats].slotname.data, name, \r\n+ NAMEDATALEN);\r\n\r\n return nReplSlotStats++;\r\n }\r\n\r\nWe can use NameStr() instead.\r\n\r\n---\r\nPerhaps we need to update the regression test as well.\r\n\r\nRegards,\r\n\r\n--\r\nMasahiko Sawada http://www.2ndQuadrant.com/ \r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 12 Oct 2020 14:45:04 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Resetting spilled txn statistics in 
pg_stat_replication" }, { "msg_contents": "On Mon, 12 Oct 2020 at 19:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 12, 2020 at 10:59 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 8 Oct 2020 at 17:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 8, 2020 at 1:55 PM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Thu, 8 Oct 2020 at 14:10, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > We can write if we want but there are few things we need to do for\n> > > > > that like maybe a new function like wait_for_spill_stats which will\n> > > > > check if the counters have become zero. Then probably call a reset\n> > > > > function, call a new wait function, and then again check stats to\n> > > > > ensure they are reset to 0.\n> > > >\n> > > > Yes.\n> > > >\n> > >\n> > > I am not sure if it is worth but probably it is not a bad idea\n> > > especially if we extend the existing tests based on your below idea?\n> > >\n> > > > > We can't write any advanced test which means reset the existing stats\n> > > > > perform some tests and again check stats because *slot_get_changes()\n> > > > > function can start from the previous WAL for which we have covered the\n> > > > > stats. We might write that if we can somehow track the WAL positions\n> > > > > from the previous test. I am not sure if we want to go there.\n> > > >\n> > > > Can we use pg_logical_slot_peek_changes() instead to decode the same\n> > > > transactions multiple times?\n> > > >\n> > >\n> > > I think this will do the trick. If we want to go there then I suggest\n> > > we can have a separate regression test file in test_decoding with name\n> > > as decoding_stats, stats, or something like that. We can later add the\n> > > tests related to streaming stats in that file as well.\n> > >\n> >\n> > Agreed.\n> >\n> > I've updated the patch. 
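The "wait for counters" helper under discussion is essentially a poll-until-condition loop; a generic sketch of that pattern (hypothetical helper names, not the committed test code, which is written in SQL/plpgsql):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.01):
    """Poll `condition` until it returns True or `timeout` elapses,
    mirroring the retry loop a wait_for_decode_stats-style function
    performs against a statistics view."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: stand-in "stats" that only advance on the third poll.
polls = {"n": 0}
def stats_ready():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_for(stats_ready)
```

The timeout matters because statistics reach the collector asynchronously: a test that checks once can race, while a bounded poll either observes the expected counters or fails loudly.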
Please review it.\n> >\n>\n> Few comments:\n> =============\n\nThank you for your review.\n\n> 1.\n> +-- function to wait for counters to advance\n> +CREATE FUNCTION wait_for_spill_stats(check_reset bool) RETURNS void AS $$\n>\n> Can we rename this function to wait_for_decode_stats? I am thinking we\n> can later reuse this function for streaming stats as well by passing\n> the additional parameter 'stream bool'.\n\n+1. Fixed.\n\n>\n> 2. let's drop the table added by this test and regression_slot at the\n> end of the test.\n\nFixed.\n\nAttached the updated version patch. Please review it.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 13 Oct 2020 08:23:50 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "(Please avoid top-posting)\n\nOn Mon, 12 Oct 2020 at 23:45, Shinoda, Noriyoshi (PN Japan A&PS\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\n>\n> Sawada-san, Thank you your comments.\n>\n> The attached patch reflects the comment.\n> I also made a fix for the regression test.\n>\n> Regards,\n> Noriyoshi Shinoda\n>\n> -----Original Message-----\n> From: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com]\n> Sent: Monday, October 12, 2020 8:12 PM\n> To: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>\n> Cc: Amit Kapila <amit.kapila16@gmail.com>; Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\n> Subject: Re: Resetting spilled txn statistics in pg_stat_replication\n>\n> On Mon, 12 Oct 2020 at 18:29, Shinoda, Noriyoshi (PN Japan A&PS\n> Delivery) <noriyoshi.shinoda@hpe.com> wrote:\n> >\n> > Hi, thank you for the awesome feature.\n> 
>\n>\n> Thank you for reporting!\n>\n> > As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\n> > The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\n>\n> It seems a good idea to me. In other system views, we use the name data type for object name. When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\n\nHmm, my above observation is wrong. All other statistics use text data\ntype and internally use char[NAMEDATALEN]. So I think renaming to\n'slot_name' would be a good idea but probably we don’t need to change\nthe internally used data type. For the data type of slot_name of\npg_stat_replication_slots view, given that the doc says the\nfollowing[1], I think we can keep it too as this view is not a system\ncatalog. What do you think?\n\n8.3. Character Types:\nThe name type exists only for the storage of identifiers in the\ninternal system catalogs\n\n[1] https://www.postgresql.org/docs/devel/datatype-character.html\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Oct 2020 09:10:51 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 4:54 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Attached the updated version patch. Please review it.\n>\n\nI have pushed this but it failed in one of the BF. See\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2020-10-13%2003%3A07%3A25\n\nThe failure is shown below and I am analyzing it. 
See, if you can\nprovide any insights.\n\n@@ -58,7 +58,7 @@\n SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n name | spill_txns | spill_count\n -----------------+------------+-------------\n- regression_slot | 1 | 12\n+ regression_slot | 1 | 10\n (1 row)\n\n -- reset the slot stats, and wait for stats collector to reset\n@@ -96,7 +96,7 @@\n SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n name | spill_txns | spill_count\n -----------------+------------+-------------\n- regression_slot | 1 | 12\n+ regression_slot | 1 | 10\n (1 row)\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 09:02:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 4:54 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Attached the updated version patch. Please review it.\n> >\n>\n> I have pushed this but it failed in one of the BF. See\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2020-10-13%2003%3A07%3A25\n>\n> The failure is shown below and I am analyzing it. 
See, if you can\n> provide any insights.\n>\n> @@ -58,7 +58,7 @@\n> SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n> name | spill_txns | spill_count\n> -----------------+------------+-------------\n> - regression_slot | 1 | 12\n> + regression_slot | 1 | 10\n> (1 row)\n>\n> -- reset the slot stats, and wait for stats collector to reset\n> @@ -96,7 +96,7 @@\n> SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n> name | spill_txns | spill_count\n> -----------------+------------+-------------\n> - regression_slot | 1 | 12\n> + regression_slot | 1 | 10\n> (1 row)\n>\n\nThe reason for this problem could be that there is some transaction\n(say by autovacuum) which happened interleaved with this transaction\nand committed before this one. Now during DecodeCommit of this\nbackground transaction, we will send the stats accumulated by that\ntime which could lead to such a problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 09:11:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 9:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 13, 2020 at 4:54 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > Attached the updated version patch. Please review it.\n> > >\n> >\n> > I have pushed this but it failed in one of the BF. See\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2020-10-13%2003%3A07%3A25\n> >\n> > The failure is shown below and I am analyzing it. 
See, if you can\n> > provide any insights.\n> >\n> > @@ -58,7 +58,7 @@\n> > SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n> > name | spill_txns | spill_count\n> > -----------------+------------+-------------\n> > - regression_slot | 1 | 12\n> > + regression_slot | 1 | 10\n> > (1 row)\n> >\n> > -- reset the slot stats, and wait for stats collector to reset\n> > @@ -96,7 +96,7 @@\n> > SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;\n> > name | spill_txns | spill_count\n> > -----------------+------------+-------------\n> > - regression_slot | 1 | 12\n> > + regression_slot | 1 | 10\n> > (1 row)\n> >\n>\n> The reason for this problem could be that there is some transaction\n> (say by autovacuum) which happened interleaved with this transaction\n> and committed before this one. Now during DecodeCommit of this\n> background transaction, we will send the stats accumulated by that\n> time which could lead to such a problem.\n>\n\nIf this theory is correct then I think we can't rely on the\n'spill_count' value, what do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 09:22:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I have pushed this but it failed in one of the BF. See\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2020-10-13%2003%3A07%3A25\n> The failure is shown below and I am analyzing it. 
See, if you can\n> provide any insights.\n\nIt's not very clear what spill_count actually counts (and the\ndocumentation sure does nothing to clarify that), but if it has anything\nto do with WAL volume, the explanation might be that florican is 32-bit.\nAll the animals that have passed that test so far are 64-bit.\n\n> The reason for this problem could be that there is some transaction\n> (say by autovacuum) which happened interleaved with this transaction\n> and committed before this one.\n\nI can believe that idea too, but would it not have resulted in a\ndiff in spill_txns as well?\n\nIn short, I'm not real convinced that a stable result is possible in this\ntest. Maybe you should just test for spill_txns and spill_count being\npositive.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Oct 2020 23:55:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I have pushed this but it failed in one of the BF. See\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2020-10-13%2003%3A07%3A25\n> > The failure is shown below and I am analyzing it. See, if you can\n> > provide any insights.\n>\n> It's not very clear what spill_count actually counts (and the\n> documentation sure does nothing to clarify that), but if it has anything\n> to do with WAL volume, the explanation might be that florican is 32-bit.\n> All the animals that have passed that test so far are 64-bit.\n>\n\nIt is based on the size of the change. In this case, it is the size of\nthe tuples inserted. See ReorderBufferChangeSize() to know how we compute\nthe size of each change. Once the total_size for changes crosses\nlogical_decoding_work_mem (64kB in this case), we will spill. 
So\n'spill_count' is the number of times the size of changes in that\ntransaction crossed the threshold and which lead to a spill of the\ncorresponding changes.\n\n\n> > The reason for this problem could be that there is some transaction\n> > (say by autovacuum) which happened interleaved with this transaction\n> > and committed before this one.\n>\n> I can believe that idea too, but would it not have resulted in a\n> diff in spill_txns as well?\n>\n\nWe count that 'spill_txns' once for a transaction that is ever\nspilled. I think the 'spill_txns' wouldn't vary for this particular\ntest even if the autovacuum transaction happens-before the main\ntransaction of the test because in that case, wait_for_decode_stats\nwon't finish until it sees the main transaction ('spill_txns' won't be\npositive by that time)\n\n> In short, I'm not real convinced that a stable result is possible in this\n> test. Maybe you should just test for spill_txns and spill_count being\n> positive.\n>\n\nYeah, that seems like the best we can do here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 09:54:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Oct 13, 2020 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's not very clear what spill_count actually counts (and the\n>> documentation sure does nothing to clarify that), but if it has anything\n>> to do with WAL volume, the explanation might be that florican is 32-bit.\n>> All the animals that have passed that test so far are 64-bit.\n\nprairiedog just failed in not-quite-the-same way, which reinforces the\nidea that this test is dependent on MAXALIGN, which determines physical\ntuple size. 
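A toy model of the spill accounting described above (illustrative only; the real logic lives in the reorder buffer code): change sizes accumulate per transaction, each crossing of the memory limit spills the accumulated changes and bumps spill_count, while spill_txns counts a given transaction at most once however many times it spills.

```python
# Toy model of logical-decoding spill accounting -- not the real code.
WORK_MEM = 64 * 1024  # logical_decoding_work_mem = 64kB

def decode(change_sizes):
    spill_txns = spill_count = 0
    total = 0          # bytes accumulated since the last spill
    spilled = False
    for size in change_sizes:
        total += size
        if total >= WORK_MEM:   # memory limit crossed: spill to disk
            spill_count += 1
            if not spilled:     # a transaction is counted once, ever
                spill_txns += 1
                spilled = True
            total = 0
    return spill_txns, spill_count

# Many smallish changes in one transaction => several spills, one
# spilled transaction (the per-change size here is an assumption).
txns, count = decode([150] * 5000)
assert txns == 1 and count > 1
```

The model also shows why spill_count is sensitive to per-change size while spill_txns is not: a slightly larger or smaller change size shifts how often the threshold is crossed, but the transaction is still counted exactly once.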
(I just checked the buildfarm, and the four active members\nthat report MAXALIGN 4 during configure are florican, lapwing, locust,\nand prairiedog. Not sure about the MSVC critters though.) The\nspill_count number is different though, so it seems that that may not\nbe the whole story.\n\n> It is based on the size of the change. In this case, it is the size of\n> the tuples inserted. See ReorderBufferChangeSize() know how we compute\n> the size of each change.\n\nI know I can go read the source code, but most users will not want to.\nIs the documentation in monitoring.sgml really sufficient? If we can't\nexplain this with more precision, is it really a number we want to expose\nat all?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 00:51:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Tue, Oct 13, 2020 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It's not very clear what spill_count actually counts (and the\n> >> documentation sure does nothing to clarify that), but if it has anything\n> >> to do with WAL volume, the explanation might be that florican is 32-bit.\n> >> All the animals that have passed that test so far are 64-bit.\n>\n> prairiedog just failed in not-quite-the-same way, which reinforces the\n> idea that this test is dependent on MAXALIGN, which determines physical\n> tuple size. (I just checked the buildfarm, and the four active members\n> that report MAXALIGN 4 during configure are florican, lapwing, locust,\n> and prairiedog. Not sure about the MSVC critters though.) The\n> spill_count number is different though, so it seems that that may not\n> be the whole story.\n>\n\nIt is possible that MAXALIGN stuff is playing a role here and or the\nbackground transaction stuff. 
I think if we go with the idea of\ntesting spill_txns and spill_count being positive then the results\nwill be stable. I'll write a patch for that.\n\n> > It is based on the size of the change. In this case, it is the size of\n> > the tuples inserted. See ReorderBufferChangeSize() know how we compute\n> > the size of each change.\n>\n> I know I can go read the source code, but most users will not want to.\n> Is the documentation in monitoring.sgml really sufficient? If we can't\n> explain this with more precision, is it really a number we want to expose\n> at all?\n>\n\nThis counter is important to give users an idea about the amount of\nI/O we incur during decoding and to tune logical_decoding_work_mem\nGUC. So, I would prefer to improve the documentation for this\nvariable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 10:33:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "I wrote:\n> prairiedog just failed in not-quite-the-same way, which reinforces the\n> idea that this test is dependent on MAXALIGN, which determines physical\n> tuple size. (I just checked the buildfarm, and the four active members\n> that report MAXALIGN 4 during configure are florican, lapwing, locust,\n> and prairiedog. Not sure about the MSVC critters though.) The\n> spill_count number is different though, so it seems that that may not\n> be the whole story.\n\nOh, and here comes lapwing:\n\n- regression_slot | 1 | 12\n+ regression_slot | 1 | 10\n\nSo if it weren't that prairiedog showed 11 not 10, we'd have a nice\nneat it-depends-on-MAXALIGN theory. 
As is, I'm not sure what all\nis affecting it, though MAXALIGN sure seems to be a component.\n\n(locust seems to be AWOL at the moment, so I'm not holding my breath\nfor that one to report in.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 01:05:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > On Tue, Oct 13, 2020 at 9:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> It's not very clear what spill_count actually counts (and the\n> > >> documentation sure does nothing to clarify that), but if it has anything\n> > >> to do with WAL volume, the explanation might be that florican is 32-bit.\n> > >> All the animals that have passed that test so far are 64-bit.\n> >\n> > prairiedog just failed in not-quite-the-same way, which reinforces the\n> > idea that this test is dependent on MAXALIGN, which determines physical\n> > tuple size. (I just checked the buildfarm, and the four active members\n> > that report MAXALIGN 4 during configure are florican, lapwing, locust,\n> > and prairiedog. Not sure about the MSVC critters though.) The\n> > spill_count number is different though, so it seems that that may not\n> > be the whole story.\n> >\n>\n> It is possible that MAXALIGN stuff is playing a role here and or the\n> background transaction stuff. I think if we go with the idea of\n> testing spill_txns and spill_count being positive then the results\n> will be stable. I'll write a patch for that.\n>\n\nPlease find the attached patch for the same. Additionally, I have\nskipped empty xacts during decoding as background autovacuum\ntransactions can impact that count as well. I have done some minimal\ntesting with this. 
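For intuition on the MAXALIGN dependence raised above, here is a toy calculation (illustrative sizes only, not the exact on-disk tuple layout): per-change sizes get rounded up to the platform's MAXALIGN, so 4-byte and 8-byte platforms can accumulate toward the 64kB limit at different rates and spill a different number of times for the same logical data.

```python
# Illustrative only: how MAXALIGN padding can change spill counts.
def maxalign(n, align):
    """Round n up to the next multiple of align."""
    return (n + align - 1) // align * align

WORK_MEM = 64 * 1024

def spills(ntuples, raw_size, align):
    per_change = maxalign(raw_size, align)   # padded change size
    total = spill_count = 0
    for _ in range(ntuples):
        total += per_change
        if total >= WORK_MEM:
            spill_count += 1
            total = 0
    return spill_count

# Some raw sizes pad identically on both platforms, some do not.
assert maxalign(30, 4) == 32 and maxalign(30, 8) == 32
assert maxalign(34, 4) == 36 and maxalign(34, 8) == 40
# Same logical data, different padding => different spill counts.
assert spills(5000, 34, 4) != spills(5000, 34, 8)
```

The raw sizes are made up; the point is only that alignment padding alone is enough to move the spill count between machines, which is consistent with MAXALIGN being a component but not necessarily the whole story.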
I'll do some more.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 13 Oct 2020 10:59:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n>> It is possible that MAXALIGN stuff is playing a role here and or the\n>> background transaction stuff. I think if we go with the idea of\n>> testing spill_txns and spill_count being positive then the results\n>> will be stable. I'll write a patch for that.\n\nHere's our first failure on a MAXALIGN-8 machine:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-10-13%2005%3A00%3A08\n\nSo this is just plain not stable. It is odd though. I can\neasily think of mechanisms that would cause the WAL volume\nto occasionally be *more* than the \"typical\" case. What\nwould cause it to be *less*, if MAXALIGN is ruled out?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 01:35:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> >> It is possible that MAXALIGN stuff is playing a role here and or the\n> >> background transaction stuff. I think if we go with the idea of\n> >> testing spill_txns and spill_count being positive then the results\n> >> will be stable. I'll write a patch for that.\n>\n> Here's our first failure on a MAXALIGN-8 machine:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-10-13%2005%3A00%3A08\n>\n> So this is just plain not stable. It is odd though. I can\n> easily think of mechanisms that would cause the WAL volume\n> to occasionally be *more* than the \"typical\" case. 
What\n> would cause it to be *less*, if MAXALIGN is ruled out?\n>\n\nThe original theory I have given above [1] which is an interleaved\nautovacumm transaction. Let me try to explain in a bit more detail.\nSay when transaction T-1 is performing Insert ('INSERT INTO stats_test\nSELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\ng(i);') a parallel autovacuum transaction occurs. The problem as seen\nin buildfarm will happen when autovacuum transaction happens after 80%\nor more of the Insert is done.\n\nIn such a situation we will start decoding 'Insert' first and need to\nspill multiple times due to the amount of changes (more than threshold\nlogical_decoding_work_mem) and then before we encounter Commit of\ntransaction that performed Insert (and probably some more changes from\nthat transaction) we will encounter a small transaction (autovacuum\ntransaction). The decode of that small transaction will send the\nstats collected till now which will lead to the problem shown in\nbuildfarm.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jo0U1oSJyxrdA7i-bOOTh0Hue-NQqdG-CEqwGtDZPjyw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 11:24:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 13 Oct 2020 at 14:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > >> It is possible that MAXALIGN stuff is playing a role here and or the\n> > >> background transaction stuff. I think if we go with the idea of\n> > >> testing spill_txns and spill_count being positive then the results\n> > >> will be stable. 
I'll write a patch for that.\n> >\n> > Here's our first failure on a MAXALIGN-8 machine:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-10-13%2005%3A00%3A08\n> >\n> > So this is just plain not stable. It is odd though. I can\n> > easily think of mechanisms that would cause the WAL volume\n> > to occasionally be *more* than the \"typical\" case. What\n> > would cause it to be *less*, if MAXALIGN is ruled out?\n> >\n>\n> The original theory I have given above [1] which is an interleaved\n> autovacumm transaction. Let me try to explain in a bit more detail.\n> Say when transaction T-1 is performing Insert ('INSERT INTO stats_test\n> SELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\n> g(i);') a parallel autovacuum transaction occurs. The problem as seen\n> in buildfarm will happen when autovacuum transaction happens after 80%\n> or more of the Insert is done.\n>\n> In such a situation we will start decoding 'Insert' first and need to\n> spill multiple times due to the amount of changes (more than threshold\n> logical_decoding_work_mem) and then before we encounter Commit of\n> transaction that performed Insert (and probably some more changes from\n> that transaction) we will encounter a small transaction (autovacuum\n> transaction). The decode of that small transaction will send the\n> stats collected till now which will lead to the problem shown in\n> buildfarm.\n\nThat seems a possible scenario.\n\nI think probably this also explains the reason why spill_count\nslightly varied and spill_txns was still 1. The spill_count value\ndepends on how much the process spilled out transactions before\nencountering the commit of an autovacuum transaction. 
Since we have\nthe spill statistics per reorder buffer, not per transactions, it's\npossible.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Oct 2020 15:18:29 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 11:49 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 13 Oct 2020 at 14:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 13, 2020 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > >> It is possible that MAXALIGN stuff is playing a role here and or the\n> > > >> background transaction stuff. I think if we go with the idea of\n> > > >> testing spill_txns and spill_count being positive then the results\n> > > >> will be stable. I'll write a patch for that.\n> > >\n> > > Here's our first failure on a MAXALIGN-8 machine:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-10-13%2005%3A00%3A08\n> > >\n> > > So this is just plain not stable. It is odd though. I can\n> > > easily think of mechanisms that would cause the WAL volume\n> > > to occasionally be *more* than the \"typical\" case. What\n> > > would cause it to be *less*, if MAXALIGN is ruled out?\n> > >\n> >\n> > The original theory I have given above [1] which is an interleaved\n> > autovacumm transaction. Let me try to explain in a bit more detail.\n> > Say when transaction T-1 is performing Insert ('INSERT INTO stats_test\n> > SELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\n> > g(i);') a parallel autovacuum transaction occurs. 
The problem as seen\n> > in buildfarm will happen when autovacuum transaction happens after 80%\n> > or more of the Insert is done.\n> >\n> > In such a situation we will start decoding 'Insert' first and need to\n> > spill multiple times due to the amount of changes (more than threshold\n> > logical_decoding_work_mem) and then before we encounter Commit of\n> > transaction that performed Insert (and probably some more changes from\n> > that transaction) we will encounter a small transaction (autovacuum\n> > transaction). The decode of that small transaction will send the\n> > stats collected till now which will lead to the problem shown in\n> > buildfarm.\n>\n> That seems a possible scenario.\n>\n> I think probably this also explains the reason why spill_count\n> slightly varied and spill_txns was still 1. The spill_count value\n> depends on how much the process spilled out transactions before\n> encountering the commit of an autovacuum transaction. Since we have\n> the spill statistics per reorder buffer, not per transactions, it's\n> possible.\n>\n\nOkay, here is an updated version (changed some comments) of the patch\nI posted some time back. What do you think? I have tested this on both\nWindows and Linux environments. 
I think it is a bit tricky to\nreproduce the exact scenario so if you are fine we can push this and\ncheck or let me know if you any better idea?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 13 Oct 2020 11:57:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 13 Oct 2020 at 15:27, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 11:49 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 13 Oct 2020 at 14:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 13, 2020 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > >> It is possible that MAXALIGN stuff is playing a role here and or the\n> > > > >> background transaction stuff. I think if we go with the idea of\n> > > > >> testing spill_txns and spill_count being positive then the results\n> > > > >> will be stable. I'll write a patch for that.\n> > > >\n> > > > Here's our first failure on a MAXALIGN-8 machine:\n> > > >\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-10-13%2005%3A00%3A08\n> > > >\n> > > > So this is just plain not stable. It is odd though. I can\n> > > > easily think of mechanisms that would cause the WAL volume\n> > > > to occasionally be *more* than the \"typical\" case. What\n> > > > would cause it to be *less*, if MAXALIGN is ruled out?\n> > > >\n> > >\n> > > The original theory I have given above [1] which is an interleaved\n> > > autovacumm transaction. Let me try to explain in a bit more detail.\n> > > Say when transaction T-1 is performing Insert ('INSERT INTO stats_test\n> > > SELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\n> > > g(i);') a parallel autovacuum transaction occurs. 
The problem as seen\n> > > in buildfarm will happen when autovacuum transaction happens after 80%\n> > > or more of the Insert is done.\n> > >\n> > > In such a situation we will start decoding 'Insert' first and need to\n> > > spill multiple times due to the amount of changes (more than threshold\n> > > logical_decoding_work_mem) and then before we encounter Commit of\n> > > transaction that performed Insert (and probably some more changes from\n> > > that transaction) we will encounter a small transaction (autovacuum\n> > > transaction). The decode of that small transaction will send the\n> > > stats collected till now which will lead to the problem shown in\n> > > buildfarm.\n> >\n> > That seems a possible scenario.\n> >\n> > I think probably this also explains the reason why spill_count\n> > slightly varied and spill_txns was still 1. The spill_count value\n> > depends on how much the process spilled out transactions before\n> > encountering the commit of an autovacuum transaction. Since we have\n> > the spill statistics per reorder buffer, not per transactions, it's\n> > possible.\n> >\n>\n> Okay, here is an updated version (changed some comments) of the patch\n> I posted some time back. What do you think? I have tested this on both\n> Windows and Linux environments. I think it is a bit tricky to\n> reproduce the exact scenario so if you are fine we can push this and\n> check or let me know if you any better idea?\n\nI agree to check if the spill_counts and spill_txns are positive. I\nthought we can reduce the number of tuples to insert to the half. It\nwould help to reduce the likelihood of other transactions interfere\nand speed up the test (currently, the stats.sql test takes almost 1\nsec in my environment). 
But it might lead to another problem like the\nlogical decoding doesn't spill out the transaction on some\nenvironment.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Oct 2020 15:46:35 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 12:17 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 13 Oct 2020 at 15:27, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 13, 2020 at 11:49 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Tue, 13 Oct 2020 at 14:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > The original theory I have given above [1] which is an interleaved\n> > > > autovacumm transaction. Let me try to explain in a bit more detail.\n> > > > Say when transaction T-1 is performing Insert ('INSERT INTO stats_test\n> > > > SELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\n> > > > g(i);') a parallel autovacuum transaction occurs. The problem as seen\n> > > > in buildfarm will happen when autovacuum transaction happens after 80%\n> > > > or more of the Insert is done.\n> > > >\n> > > > In such a situation we will start decoding 'Insert' first and need to\n> > > > spill multiple times due to the amount of changes (more than threshold\n> > > > logical_decoding_work_mem) and then before we encounter Commit of\n> > > > transaction that performed Insert (and probably some more changes from\n> > > > that transaction) we will encounter a small transaction (autovacuum\n> > > > transaction). 
The decode of that small transaction will send the\n> > > > stats collected till now which will lead to the problem shown in\n> > > > buildfarm.\n> > >\n> > > That seems a possible scenario.\n> > >\n> > > I think probably this also explains the reason why spill_count\n> > > slightly varied and spill_txns was still 1. The spill_count value\n> > > depends on how much the process spilled out transactions before\n> > > encountering the commit of an autovacuum transaction. Since we have\n> > > the spill statistics per reorder buffer, not per transactions, it's\n> > > possible.\n> > >\n> >\n> > Okay, here is an updated version (changed some comments) of the patch\n> > I posted some time back. What do you think? I have tested this on both\n> > Windows and Linux environments. I think it is a bit tricky to\n> > reproduce the exact scenario so if you are fine we can push this and\n> > check or let me know if you have any better idea?\n>\n> I agree to check if the spill_counts and spill_txns are positive.\n>\n\nI am able to reproduce this problem via debugger. Basically, execute\nthe Insert mentioned above from one of the psql sessions and in\nExecInsert() stop the execution once 'estate->es_processed > 4000' and\nthen from another psql terminal execute some DDL which will be ignored\nbut will any try to decode commit. Then perform 'continue' in the\nfirst session. This will lead to inconsistent stats value depending\nupon at what time DDL is performed. I'll push the patch as I am more\nconfident now.\n\n> I\n> thought we can reduce the number of tuples to insert to the half. It\n> would help to reduce the likelihood of other transactions interfere\n> and speed up the test (currently, the stats.sql test takes almost 1\n> sec in my environment). 
But it might lead to another problem like the\n> logical decoding doesn't spill out the transaction on some\n> environment.\n>\n\nYeah, and in other cases, in spill.sql we are using the same amount of\ndata to test spilling.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 12:42:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 12:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 12:17 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I agree to check if the spill_counts and spill_txns are positive.\n> >\n>\n> I am able to reproduce this problem via debugger. Basically, execute\n> the Insert mentioned above from one the psql sessions and in\n> ExecInsert() stop the execution once 'estate->es_processed > 4000' and\n> then from another psql terminal execute some DDL which will be ignored\n> but will any try to decode commit.\n\n/will any try to decode commit./we will anyway try to decode commit\nfor this DDL transaction when decoding changes via\npg_logical_slot_peek_changes\n\n> Then perform 'continue' in the\n> first session. This will lead to inconsistent stats value depending\n> upon at what time DDL is performed. 
I'll push the patch as I am more\n> confident now.\n>\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 12:45:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 13 Oct 2020 at 16:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 12:17 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 13 Oct 2020 at 15:27, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 13, 2020 at 11:49 AM Masahiko Sawada\n> > > <masahiko.sawada@2ndquadrant.com> wrote:\n> > > >\n> > > > On Tue, 13 Oct 2020 at 14:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > The original theory I have given above [1] which is an interleaved\n> > > > > autovacumm transaction. Let me try to explain in a bit more detail.\n> > > > > Say when transaction T-1 is performing Insert ('INSERT INTO stats_test\n> > > > > SELECT 'serialize-topbig--1:'||g.i FROM generate_series(1, 5000)\n> > > > > g(i);') a parallel autovacuum transaction occurs. The problem as seen\n> > > > > in buildfarm will happen when autovacuum transaction happens after 80%\n> > > > > or more of the Insert is done.\n> > > > >\n> > > > > In such a situation we will start decoding 'Insert' first and need to\n> > > > > spill multiple times due to the amount of changes (more than threshold\n> > > > > logical_decoding_work_mem) and then before we encounter Commit of\n> > > > > transaction that performed Insert (and probably some more changes from\n> > > > > that transaction) we will encounter a small transaction (autovacuum\n> > > > > transaction). 
The decode of that small transaction will send the\n> > > > > stats collected till now which will lead to the problem shown in\n> > > > > buildfarm.\n> > > >\n> > > > That seems a possible scenario.\n> > > >\n> > > > I think probably this also explains the reason why spill_count\n> > > > slightly varied and spill_txns was still 1. The spill_count value\n> > > > depends on how much the process spilled out transactions before\n> > > > encountering the commit of an autovacuum transaction. Since we have\n> > > > the spill statistics per reorder buffer, not per transactions, it's\n> > > > possible.\n> > > >\n> > >\n> > > Okay, here is an updated version (changed some comments) of the patch\n> > > I posted some time back. What do you think? I have tested this on both\n> > > Windows and Linux environments. I think it is a bit tricky to\n> > > reproduce the exact scenario so if you are fine we can push this and\n> > > check or let me know if you any better idea?\n> >\n> > I agree to check if the spill_counts and spill_txns are positive.\n> >\n>\n> I am able to reproduce this problem via debugger. Basically, execute\n> the Insert mentioned above from one the psql sessions and in\n> ExecInsert() stop the execution once 'estate->es_processed > 4000' and\n> then from another psql terminal execute some DDL which will be ignored\n> but will any try to decode commit. Then perform 'continue' in the\n> first session. This will lead to inconsistent stats value depending\n> upon at what time DDL is performed.\n\nThanks!\nI'm also able to reproduce this in a similar way and have confirmed\nthe patch fixes it.\n\n> I'll push the patch as I am more\n> confident now.\n\n+1. 
Let's check how the tests are going to be.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 13 Oct 2020 16:20:42 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 12:51 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n>\n> > I'll push the patch as I am more\n> > confident now.\n>\n> +1. Let's check how the tests are going to be.\n>\n\nOkay, thanks. I have pushed it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 13:24:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I am able to reproduce this problem via debugger. Basically, execute\n> the Insert mentioned above from one the psql sessions and in\n> ExecInsert() stop the execution once 'estate->es_processed > 4000' and\n> then from another psql terminal execute some DDL which will be ignored\n> but will any try to decode commit. Then perform 'continue' in the\n> first session. This will lead to inconsistent stats value depending\n> upon at what time DDL is performed. I'll push the patch as I am more\n> confident now.\n\nSo ... doesn't this mean that if the concurrent transaction commits very\nshortly after our query starts, decoding might stop without having ever\nspilled at all? 
IOW, I'm afraid that the revised test can still fail,\njust at a frequency circa one-twelfth of before.\n\nI'm also somewhat suspicious of this explanation because it doesn't\nseem to account for the clear experimental evidence that 32-bit machines\nwere more prone to failure than 64-bit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Oct 2020 10:27:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 7:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I am able to reproduce this problem via debugger. Basically, execute\n> > the Insert mentioned above from one the psql sessions and in\n> > ExecInsert() stop the execution once 'estate->es_processed > 4000' and\n> > then from another psql terminal execute some DDL which will be ignored\n> > but will any try to decode commit. Then perform 'continue' in the\n> > first session. This will lead to inconsistent stats value depending\n> > upon at what time DDL is performed. I'll push the patch as I am more\n> > confident now.\n>\n> So ... doesn't this mean that if the concurrent transaction commits very\n> shortly after our query starts, decoding might stop without having ever\n> spilled at all?\n>\n\nI am assuming in \"our query starts\" you refer to Insert statement used\nin the test. No, the decoding (and required spilling) will still\nhappen. It is only that we will try to send the stats accumulated by\nthat time which will be zero. 
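The sequence described here can be observed with the functions used by the test_decoding regression tests (a sketch; the slot and option names follow those tests):

```sql
-- Stats are flushed to the collector at each decoded commit, so a
-- small early commit can report zeros that are later overwritten
-- once the large transaction's commit is decoded.
SELECT count(*) FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'skip-empty-xacts', '1');
SELECT name, spill_txns, spill_count FROM pg_stat_replication_slots;
```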
And later when the decoding of our\nInsert transaction is finished it will again send the updated stats.\nIt should work fine in this or similar scenarios.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 13 Oct 2020 20:24:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Thanks for your comment.\r\n\r\n> 8.3. Character Types:\r\n> The name type exists only for the storage of identifiers in the internal system catalogs\r\n\r\nI didn't know the policy about data types. Thank you.\r\nBut I think the column names should match pg_replication_slots.\r\nThe attached patch changes only the column names and macros.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com] \r\nSent: Tuesday, October 13, 2020 9:11 AM\r\nTo: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>\r\nCc: Amit Kapila <amit.kapila16@gmail.com>; Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\nSubject: Re: Resetting spilled txn statistics in pg_stat_replication\r\n\r\n(Please avoid top-posting)\r\n\r\nOn Mon, 12 Oct 2020 at 23:45, Shinoda, Noriyoshi (PN Japan A&PS\r\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\r\n>\r\n> Sawada-san, Thank you your comments.\r\n>\r\n> The attached patch reflects the comment.\r\n> I also made a fix for the regression test.\r\n>\r\n> Regards,\r\n> Noriyoshi Shinoda\r\n>\r\n> -----Original Message-----\r\n> From: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com]\r\n> Sent: Monday, October 12, 2020 8:12 PM\r\n> To: Shinoda, Noriyoshi (PN Japan A&PS Delivery) \r\n> <noriyoshi.shinoda@hpe.com>\r\n> Cc: Amit Kapila <amit.kapila16@gmail.com>; Dilip Kumar \r\n> 
<dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas \r\n> Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers \r\n> <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\n> Subject: Re: Resetting spilled txn statistics in pg_stat_replication\r\n>\r\n> On Mon, 12 Oct 2020 at 18:29, Shinoda, Noriyoshi (PN Japan A&PS\r\n> Delivery) <noriyoshi.shinoda@hpe.com> wrote:\r\n> >\r\n> > Hi, thank you for the awesome feature.\r\n> >\r\n>\r\n> Thank you for reporting!\r\n>\r\n> > As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\r\n> > The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\r\n>\r\n> It seems a good idea to me. In other system views, we use the name data type for object name. When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\r\n\r\nHmm, my above observation is wrong. All other statistics use text data type and internally use char[NAMEDATALEN]. So I think renaming to 'slot_name' would be a good idea but probably we don’t need to change the internally used data type. For the data type of slot_name of pg_stat_replication_slots view, given that the doc says the following[1], I think we can keep it too as this view is not a system catalog. What do you think?\r\n\r\n8.3. 
Character Types:\r\nThe name type exists only for the storage of identifiers in the internal system catalogs\r\n\r\n[1] https://www.postgresql.org/docs/devel/datatype-character.html \r\n\r\nRegards,\r\n\r\n--\r\nMasahiko Sawada http://www.2ndQuadrant.com/ \r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 14 Oct 2020 03:03:49 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 14 Oct 2020 at 12:03, Shinoda, Noriyoshi (PN Japan A&PS\nDelivery) <noriyoshi.shinoda@hpe.com> wrote:\n>\n> Thanks for your comment.\n>\n> > 8.3. Character Types:\n> > The name type exists only for the storage of identifiers in the internal system catalogs\n>\n> I didn't know the policy about data types. Thank you.\n> But I think the column names should match pg_replication_slots.\n> The attached patch changes only the column names and macros.\n>\n\nThank you for updating the patch!\n\nThe patch changes the column name from 'name' to 'slot_name' and fixes\na typo. That's reasonable to me. 
Amit, what do you think?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:36:46 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 5:41 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 12 Oct 2020 at 23:45, Shinoda, Noriyoshi (PN Japan A&PS\n> Delivery) <noriyoshi.shinoda@hpe.com> wrote:\n> >\n> >\n> > > As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\n> > > The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\n> >\n> > It seems a good idea to me. In other system views, we use the name data type for object name. When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\n>\n> Hmm, my above observation is wrong. All other statistics use text data\n> type and internally use char[NAMEDATALEN].\n>\n\nAFAICS, we use name data-type in many other similar stats views like\npg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions,\npg_stat_all_tables. 
So, shouldn't we consistent with those views?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Oct 2020 14:22:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "Amit-san, Sawada-san,\r\n\r\nThank you for your comment.\r\n\r\n> AFAICS, we use name data-type in many other similar stats views like pg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions, \r\n> pg_stat_all_tables. So, shouldn't we consistent with those views?\r\n\r\nI checked the data type used for the statistics view identity column. 'Name' type columns are used in many views. If there is no problem with PostgreSQL standard, I would like to change both the data type and the column name.\r\n\r\n- name type \r\npg_stat_activity.datname\r\npg_stat_replication.usename\r\npg_stat_subscription.subname\r\npg_stat_database.datname\r\npg_stat_database_conflicts.datname\r\npg_stat_all_tables.schemaname/.relname\r\npg_stat_all_indexes.schemaname/.relname/.indexrelname\r\npg_statio_all_tables.schemaname/.relname\r\npg_statio_all_indexes.schemaname/.relname/.indexname\r\npg_statio_all_sequences.schemaname/.relname\r\npg_stat_user_functions.schemaname/.funcname\r\n\r\n- text type\r\npg_stat_replication_slots.name\r\npg_stat_slru.name\r\npg_backend_memory_contexts.name\r\n\r\nThe attached patch makes the following changes.\r\n- column name: name to slot_name\r\n- data type: text to name\r\n- macro: ... CLOS to ... 
COLS\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Amit Kapila [mailto:amit.kapila16@gmail.com] \r\nSent: Thursday, October 15, 2020 5:52 PM\r\nTo: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\r\nCc: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>; Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\nSubject: Re: Resetting spilled txn statistics in pg_stat_replication\r\n\r\nOn Tue, Oct 13, 2020 at 5:41 AM Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote:\r\n>\r\n> On Mon, 12 Oct 2020 at 23:45, Shinoda, Noriyoshi (PN Japan A&PS\r\n> Delivery) <noriyoshi.shinoda@hpe.com> wrote:\r\n> >\r\n> >\r\n> > > As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\r\n> > > The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\r\n> >\r\n> > It seems a good idea to me. In other system views, we use the name data type for object name. When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\r\n>\r\n> Hmm, my above observation is wrong. All other statistics use text data \r\n> type and internally use char[NAMEDATALEN].\r\n>\r\n\r\nAFAICS, we use name data-type in many other similar stats views like pg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions, pg_stat_all_tables. 
So, shouldn't we consistent with those views?\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.", "msg_date": "Mon, 19 Oct 2020 00:29:05 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 15 Oct 2020 at 17:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 5:41 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 12 Oct 2020 at 23:45, Shinoda, Noriyoshi (PN Japan A&PS\n> > Delivery) <noriyoshi.shinoda@hpe.com> wrote:\n> > >\n> > >\n> > > > As it may have been discussed, I think the 'name' column in pg_stat_replication_slots is more consistent with the column name and data type matched to the pg_replication_slots catalog.\n> > > > The attached patch changes the name and data type of the 'name' column to slot_name and 'name' type, respectively.\n> > >\n> > > It seems a good idea to me. In other system views, we use the name data type for object name. When I wrote the first patch, I borrowed the code for pg_stat_slru which uses text data for the name but I think it's an oversight.\n> >\n> > Hmm, my above observation is wrong. All other statistics use text data\n> > type and internally use char[NAMEDATALEN].\n> >\n>\n> AFAICS, we use name data-type in many other similar stats views like\n> pg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions,\n> pg_stat_all_tables. So, shouldn't we consistent with those views?\n>\n\nYes, they has the name data type column but it actually comes from\nsystem catalogs. 
For instance, here is the view definition of\npg_stat_subscription:\n\n SELECT su.oid AS subid,\n su.subname,\n st.pid,\n st.relid,\n st.received_lsn,\n st.last_msg_send_time,\n st.last_msg_receipt_time,\n st.latest_end_lsn,\n st.latest_end_time\n FROM pg_subscription su\n LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\npid, received_lsn, last_msg_send_time, last_msg_receipt_time, latest\n_end_lsn, latest_end_time) ON st.subid = su.oid;\n\nThis view uses the subscription name from pg_subscription system\ncatalog. AFAICS no string data managed by the stats collector use name\ndata type. It uses char[NAMEDATALEN] instead. And since I found the\ndescription about name data type in the doc, I thought it's better to\nhave it as text. Probably since pg_stat_replication_slots\n(pg_stat_get_replication_slots()) is the our first view that doesn’t\nuse system catalog to get the object name I'm okay with changing to\nname data type but if we do that it's better to update the description\nin the doc as well.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 19 Oct 2020 12:34:01 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Oct 19, 2020 at 9:04 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 15 Oct 2020 at 17:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > AFAICS, we use name data-type in many other similar stats views like\n> > pg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions,\n> > pg_stat_all_tables. So, shouldn't we consistent with those views?\n> >\n>\n> Yes, they has the name data type column but it actually comes from\n> system catalogs. 
For instance, here is the view definition of\n> pg_stat_subscription:\n>\n> SELECT su.oid AS subid,\n> su.subname,\n> st.pid,\n> st.relid,\n> st.received_lsn,\n> st.last_msg_send_time,\n> st.last_msg_receipt_time,\n> st.latest_end_lsn,\n> st.latest_end_time\n> FROM pg_subscription su\n> LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\n> pid, received_lsn, last_msg_send_time, last_msg_receipt_time, latest\n> _end_lsn, latest_end_time) ON st.subid = su.oid;\n>\n> This view uses the subscription name from pg_subscription system\n> catalog. AFAICS no string data managed by the stats collector use name\n> data type. It uses char[NAMEDATALEN] instead. And since I found the\n> description about name data type in the doc, I thought it's better to\n> have it as text.\n>\n\nOkay, I see the merit of keeping slot_name as 'text' type. I was\ntrying to see if we can be consistent but I guess that is not so\nstraight-forward, if we go with all (stat) view definitions to\nconsistently display/use 'name' datatype for strings then ideally we\nneed to change at other places as well. Also, I see that\npg_stat_wal_receiver uses slot_name as 'text', so displaying the same\nas 'name' in pg_stat_replication_slots might not be ideal. 
So, let's\nstick with 'text' data type for slot_name which means we should\ngo-ahead with the v3 version of the patch [1] posted by Shinoda-San,\nright?\n\n[1] - https://www.postgresql.org/message-id/TU4PR8401MB115297EF936A7675A5644151EE050%40TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Oct 2020 10:55:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, 19 Oct 2020 at 14:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 19, 2020 at 9:04 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 15 Oct 2020 at 17:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > AFAICS, we use name data-type in many other similar stats views like\n> > > pg_stat_subscription, pg_statio_all_sequences, pg_stat_user_functions,\n> > > pg_stat_all_tables. So, shouldn't we consistent with those views?\n> > >\n> >\n> > Yes, they has the name data type column but it actually comes from\n> > system catalogs. For instance, here is the view definition of\n> > pg_stat_subscription:\n> >\n> > SELECT su.oid AS subid,\n> > su.subname,\n> > st.pid,\n> > st.relid,\n> > st.received_lsn,\n> > st.last_msg_send_time,\n> > st.last_msg_receipt_time,\n> > st.latest_end_lsn,\n> > st.latest_end_time\n> > FROM pg_subscription su\n> > LEFT JOIN pg_stat_get_subscription(NULL::oid) st(subid, relid,\n> > pid, received_lsn, last_msg_send_time, last_msg_receipt_time, latest\n> > _end_lsn, latest_end_time) ON st.subid = su.oid;\n> >\n> > This view uses the subscription name from pg_subscription system\n> > catalog. AFAICS no string data managed by the stats collector use name\n> > data type. It uses char[NAMEDATALEN] instead. 
And since I found the\n> > description about name data type in the doc, I thought it's better to\n> > have it as text.\n> >\n>\n> Okay, I see the merit of keeping slot_name as 'text' type. I was\n> trying to see if we can be consistent but I guess that is not so\n> straight-forward, if we go with all (stat) view definitions to\n> consistently display/use 'name' datatype for strings then ideally we\n> need to change at other places as well. Also, I see that\n> pg_stat_wal_receiver uses slot_name as 'text', so displaying the same\n> as 'name' in pg_stat_replication_slots might not be ideal.\n\nOh, I didn't realize pg_stat_wal_receiver uses text data type.\n\n> So, let's\n> stick with 'text' data type for slot_name which means we should\n> go-ahead with the v3 version of the patch [1] posted by Shinoda-San,\n> right?\n\nRight. +1\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 19 Oct 2020 16:50:13 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Mon, Oct 19, 2020 at 1:20 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 19 Oct 2020 at 14:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > So, let's\n> > stick with 'text' data type for slot_name which means we should\n> > go-ahead with the v3 version of the patch [1] posted by Shinoda-San,\n> > right?\n>\n> Right. 
+1\n>\n\nI have pushed the patch after updating the test_decoding/sql/stats.\nThe corresponding changes in the test were missing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Oct 2020 16:42:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, 20 Oct 2020 at 20:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 19, 2020 at 1:20 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 19 Oct 2020 at 14:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > So, let's\n> > > stick with 'text' data type for slot_name which means we should\n> > > go-ahead with the v3 version of the patch [1] posted by Shinoda-San,\n> > > right?\n> >\n> > Right. +1\n> >\n>\n> I have pushed the patch after updating the test_decoding/sql/stats.\n> The corresponding changes in the test were missing.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 20 Oct 2020 21:23:52 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "> I have pushed the patch after updating the test_decoding/sql/stats.\r\n> The corresponding changes in the test were missing.\r\n\r\nThank you very much for your help!\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Masahiko Sawada [mailto:masahiko.sawada@2ndquadrant.com] \r\nSent: Tuesday, October 20, 2020 9:24 PM\r\nTo: Amit Kapila <amit.kapila16@gmail.com>\r\nCc: Shinoda, Noriyoshi (PN Japan A&PS Delivery) <noriyoshi.shinoda@hpe.com>; Dilip Kumar <dilipbalaut@gmail.com>; Magnus Hagander <magnus@hagander.net>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; PostgreSQL 
Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian <itsajin@gmail.com>\r\nSubject: Re: Resetting spilled txn statistics in pg_stat_replication\r\n\r\nOn Tue, 20 Oct 2020 at 20:11, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n> On Mon, Oct 19, 2020 at 1:20 PM Masahiko Sawada \r\n> <masahiko.sawada@2ndquadrant.com> wrote:\r\n> >\r\n> > On Mon, 19 Oct 2020 at 14:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > > So, let's\r\n> > > stick with 'text' data type for slot_name which means we should \r\n> > > go-ahead with the v3 version of the patch [1] posted by \r\n> > > Shinoda-San, right?\r\n> >\r\n> > Right. +1\r\n> >\r\n>\r\n> I have pushed the patch after updating the test_decoding/sql/stats.\r\n> The corresponding changes in the test were missing.\r\n\r\nThank you!\r\n\r\nRegards,\r\n\r\n-- \r\nMasahiko Sawada http://www.2ndQuadrant.com/ \r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\r\n", "msg_date": "Wed, 21 Oct 2020 01:05:36 +0000", "msg_from": "\"Shinoda, Noriyoshi (PN Japan A&PS Delivery)\"\n\t<noriyoshi.shinoda@hpe.com>", "msg_from_op": false, "msg_subject": "RE: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 13, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> >\n> > I know I can go read the source code, but most users will not want to.\n> > Is the documentation in monitoring.sgml really sufficient? If we can't\n> > explain this with more precision, is it really a number we want to expose\n> > at all?\n> >\n>\n> This counter is important to give users an idea about the amount of\n> I/O we incur during decoding and to tune logical_decoding_work_mem\n> GUC. So, I would prefer to improve the documentation for this\n> variable.\n>\n\nI have modified the description of spill_count and spill_txns to make\nthings clear. 
Any suggestions?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 21 Oct 2020 09:26:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, 21 Oct 2020 at 12:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > >\n> > > I know I can go read the source code, but most users will not want to.\n> > > Is the documentation in monitoring.sgml really sufficient? If we can't\n> > > explain this with more precision, is it really a number we want to expose\n> > > at all?\n> > >\n> >\n> > This counter is important to give users an idea about the amount of\n> > I/O we incur during decoding and to tune logical_decoding_work_mem\n> > GUC. So, I would prefer to improve the documentation for this\n> > variable.\n> >\n>\n> I have modified the description of spill_count and spill_txns to make\n> things clear. Any suggestions?\n\nThank you for the patch.\n\n- logical decoding exceeds\n<literal>logical_decoding_work_mem</literal>. The\n- counter gets incremented both for toplevel transactions and\n- subtransactions.\n+ logical decoding of changes from WAL for this exceeds\n+ <literal>logical_decoding_work_mem</literal>. The counter gets\n+ incremented both for toplevel transactions and subtransactions.\n\nWhat is the word \"this\" in the above change referring to? How about\nsomething like:\n\nNumber of transactions spilled to disk after the memory used by\nlogical decoding of changes from WAL exceeding\nlogical_decoding_work_mem. 
The counter gets incremented both for\ntoplevel transactions and subtransactions.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 22 Oct 2020 19:38:30 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, Oct 22, 2020 at 4:09 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 21 Oct 2020 at 12:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 13, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > >\n> > > > I know I can go read the source code, but most users will not want to.\n> > > > Is the documentation in monitoring.sgml really sufficient? If we can't\n> > > > explain this with more precision, is it really a number we want to expose\n> > > > at all?\n> > > >\n> > >\n> > > This counter is important to give users an idea about the amount of\n> > > I/O we incur during decoding and to tune logical_decoding_work_mem\n> > > GUC. So, I would prefer to improve the documentation for this\n> > > variable.\n> > >\n> >\n> > I have modified the description of spill_count and spill_txns to make\n> > things clear. Any suggestions?\n>\n> Thank you for the patch.\n>\n> - logical decoding exceeds\n> <literal>logical_decoding_work_mem</literal>. The\n> - counter gets incremented both for toplevel transactions and\n> - subtransactions.\n> + logical decoding of changes from WAL for this exceeds\n> + <literal>logical_decoding_work_mem</literal>. The counter gets\n> + incremented both for toplevel transactions and subtransactions.\n>\n> What is the word \"this\" in the above change referring to?\n>\n\n'slot'. 
The word *slot* is missing in the sentence.\n\n> How about\n> something like:\n>\n\n> Number of transactions spilled to disk after the memory used by\n> logical decoding of changes from WAL exceeding\n> logical_decoding_work_mem. The counter gets incremented both for\n> toplevel transactions and subtransactions.\n>\n\n/exceeding/exceeds. I am fine with your proposed text as well but if\nyou like the above after correction that would be better because it\nwould be more close to spill_count description.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Oct 2020 17:05:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Thu, 22 Oct 2020 at 20:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 22, 2020 at 4:09 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Wed, 21 Oct 2020 at 12:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 13, 2020 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 13, 2020 at 10:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > >\n> > > > >\n> > > > > I know I can go read the source code, but most users will not want to.\n> > > > > Is the documentation in monitoring.sgml really sufficient? If we can't\n> > > > > explain this with more precision, is it really a number we want to expose\n> > > > > at all?\n> > > > >\n> > > >\n> > > > This counter is important to give users an idea about the amount of\n> > > > I/O we incur during decoding and to tune logical_decoding_work_mem\n> > > > GUC. So, I would prefer to improve the documentation for this\n> > > > variable.\n> > > >\n> > >\n> > > I have modified the description of spill_count and spill_txns to make\n> > > things clear. 
Any suggestions?\n> >\n> > Thank you for the patch.\n> >\n> > - logical decoding exceeds\n> > <literal>logical_decoding_work_mem</literal>. The\n> > - counter gets incremented both for toplevel transactions and\n> > - subtransactions.\n> > + logical decoding of changes from WAL for this exceeds\n> > + <literal>logical_decoding_work_mem</literal>. The counter gets\n> > + incremented both for toplevel transactions and subtransactions.\n> >\n> > What is the word \"this\" in the above change referring to?\n> >\n>\n> 'slot'. The word *slot* is missing in the sentence.\n>\n> > How about\n> > something like:\n> >\n>\n> > Number of transactions spilled to disk after the memory used by\n> > logical decoding of changes from WAL exceeding\n> > logical_decoding_work_mem. The counter gets incremented both for\n> > toplevel transactions and subtransactions.\n> >\n>\n> /exceeding/exceeds. I am fine with your proposed text as well but if\n> you like the above after correction that would be better because it\n> would be more close to spill_count description.\n\nyeah, I agree with the correction.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 23 Oct 2020 11:12:17 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Oct 23, 2020 at 7:42 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 22 Oct 2020 at 20:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > > I have modified the description of spill_count and spill_txns to make\n> > > > things clear. Any suggestions?\n> > >\n> > > Thank you for the patch.\n> > >\n> > > - logical decoding exceeds\n> > > <literal>logical_decoding_work_mem</literal>. 
The\n> > > - counter gets incremented both for toplevel transactions and\n> > > - subtransactions.\n> > > + logical decoding of changes from WAL for this exceeds\n> > > + <literal>logical_decoding_work_mem</literal>. The counter gets\n> > > + incremented both for toplevel transactions and subtransactions.\n> > >\n> > > What is the word \"this\" in the above change referring to?\n> > >\n> >\n> > 'slot'. The word *slot* is missing in the sentence.\n> >\n> > > How about\n> > > something like:\n> > >\n> >\n> > > Number of transactions spilled to disk after the memory used by\n> > > logical decoding of changes from WAL exceeding\n> > > logical_decoding_work_mem. The counter gets incremented both for\n> > > toplevel transactions and subtransactions.\n> > >\n> >\n> > /exceeding/exceeds. I am fine with your proposed text as well but if\n> > you like the above after correction that would be better because it\n> > would be more close to spill_count description.\n>\n> yeah, I agree with the correction.\n>\n\nOkay, thanks, attached is an updated patch. I'll push this early next\nweek unless you or someone else has any comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 23 Oct 2020 08:59:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 23, 2020 at 7:42 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 22 Oct 2020 at 20:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > > I have modified the description of spill_count and spill_txns to make\n> > > > > things clear. Any suggestions?\n> > > >\n> > > > Thank you for the patch.\n> > > >\n> > > > - logical decoding exceeds\n> > > > <literal>logical_decoding_work_mem</literal>. 
The\n> > > > - counter gets incremented both for toplevel transactions and\n> > > > - subtransactions.\n> > > > + logical decoding of changes from WAL for this exceeds\n> > > > + <literal>logical_decoding_work_mem</literal>. The counter gets\n> > > > + incremented both for toplevel transactions and subtransactions.\n> > > >\n> > > > What is the word \"this\" in the above change referring to?\n> > > >\n> > >\n> > > 'slot'. The word *slot* is missing in the sentence.\n> > >\n> > > > How about\n> > > > something like:\n> > > >\n> > >\n> > > > Number of transactions spilled to disk after the memory used by\n> > > > logical decoding of changes from WAL exceeding\n> > > > logical_decoding_work_mem. The counter gets incremented both for\n> > > > toplevel transactions and subtransactions.\n> > > >\n> > >\n> > > /exceeding/exceeds. I am fine with your proposed text as well but if\n> > > you like the above after correction that would be better because it\n> > > would be more close to spill_count description.\n> >\n> > yeah, I agree with the correction.\n> >\n>\n> Okay, thanks, attached is an updated patch.\n>\n\nWhile updating the streaming stats patch, it occurred to me that we\ncan write a better description spill_bytes as well. 
Attached contains\nthe update to spill_bytes description.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 23 Oct 2020 10:45:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Fri, Oct 23, 2020 at 10:45:34AM +0530, Amit Kapila wrote:\n> On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 23, 2020 at 7:42 AM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Thu, 22 Oct 2020 at 20:34, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > > I have modified the description of spill_count and spill_txns to make\n> > > > > > things clear. Any suggestions?\n> > > > >\n> > > > > Thank you for the patch.\n> > > > >\n> > > > > - logical decoding exceeds\n> > > > > <literal>logical_decoding_work_mem</literal>. The\n> > > > > - counter gets incremented both for toplevel transactions and\n> > > > > - subtransactions.\n> > > > > + logical decoding of changes from WAL for this exceeds\n> > > > > + <literal>logical_decoding_work_mem</literal>. The counter gets\n> > > > > + incremented both for toplevel transactions and subtransactions.\n> > > > >\n> > > > > What is the word \"this\" in the above change referring to?\n> > > > >\n> > > >\n> > > > 'slot'. The word *slot* is missing in the sentence.\n> > > >\n> > > > > How about\n> > > > > something like:\n> > > > >\n> > > >\n> > > > > Number of transactions spilled to disk after the memory used by\n> > > > > logical decoding of changes from WAL exceeding\n> > > > > logical_decoding_work_mem. The counter gets incremented both for\n> > > > > toplevel transactions and subtransactions.\n> > > > >\n> > > >\n> > > > /exceeding/exceeds. 
I am fine with your proposed text as well but if\n> > > > you like the above after correction that would be better because it\n> > > > would be more close to spill_count description.\n> > >\n> > > yeah, I agree with the correction.\n> > >\n> >\n> > Okay, thanks, attached is an updated patch.\n> >\n> \n> While updating the streaming stats patch, it occurred to me that we\n> can write a better description spill_bytes as well. Attached contains\n> the update to spill_bytes description.\n\n+ This and other spill\n+ counters can be used to gauge the I/O occurred during logical decoding\n+ and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n\n\"gauge the IO occurred\" is wrong.\nEither: I/O *which* occured, or I/O occurring, or occurs.\n\n\"can tune\" should say \"allow tuning\".\n\nLike:\n+ This and other spill\n+ counters can be used to gauge the I/O which occurs during logical decoding\n+ and accordingly allow tuning of <literal>logical_decoding_work_mem</literal>.\n\n- Number of times transactions were spilled to disk. Transactions\n+ Number of times transactions were spilled to disk while performing\n+ decoding of changes from WAL for this slot. Transactions\n\nWhat about: \"..while decoding changes..\" (remove \"performing\" and \"of\").\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 26 Oct 2020 22:21:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 27, 2020 at 8:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Oct 23, 2020 at 10:45:34AM +0530, Amit Kapila wrote:\n> > On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > While updating the streaming stats patch, it occurred to me that we\n> > can write a better description spill_bytes as well. 
Attached contains\n> > the update to spill_bytes description.\n>\n> + This and other spill\n> + counters can be used to gauge the I/O occurred during logical decoding\n> + and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n>\n> \"gauge the IO occurred\" is wrong.\n> Either: I/O *which* occured, or I/O occurring, or occurs.\n>\n> \"can tune\" should say \"allow tuning\".\n>\n> Like:\n> + This and other spill\n> + counters can be used to gauge the I/O which occurs during logical decoding\n> + and accordingly allow tuning of <literal>logical_decoding_work_mem</literal>.\n>\n> - Number of times transactions were spilled to disk. Transactions\n> + Number of times transactions were spilled to disk while performing\n> + decoding of changes from WAL for this slot. Transactions\n>\n> What about: \"..while decoding changes..\" (remove \"performing\" and \"of\").\n>\n\nAll of your suggestions sound good to me. Find the patch attached to\nupdate the docs accordingly.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 27 Oct 2020 09:17:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 27, 2020 at 09:17:43AM +0530, Amit Kapila wrote:\n> On Tue, Oct 27, 2020 at 8:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Oct 23, 2020 at 10:45:34AM +0530, Amit Kapila wrote:\n> > > On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > While updating the streaming stats patch, it occurred to me that we\n> > > can write a better description spill_bytes as well. 
Attached contains\n> > > the update to spill_bytes description.\n> >\n> > + This and other spill\n> > + counters can be used to gauge the I/O occurred during logical decoding\n> > + and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n> >\n> > \"gauge the IO occurred\" is wrong.\n> > Either: I/O *which* occured, or I/O occurring, or occurs.\n> >\n> > \"can tune\" should say \"allow tuning\".\n> >\n> > Like:\n> > + This and other spill\n> > + counters can be used to gauge the I/O which occurs during logical decoding\n> > + and accordingly allow tuning of <literal>logical_decoding_work_mem</literal>.\n> >\n> > - Number of times transactions were spilled to disk. Transactions\n> > + Number of times transactions were spilled to disk while performing\n> > + decoding of changes from WAL for this slot. Transactions\n> >\n> > What about: \"..while decoding changes..\" (remove \"performing\" and \"of\").\n> >\n> \n> All of your suggestions sound good to me. Find the patch attached to\n> update the docs accordingly.\n\n@@ -2628,8 +2627,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n <para>\n Amount of decoded transaction data spilled to disk while performing\n decoding of changes from WAL for this slot. 
This and other spill\n- counters can be used to gauge the I/O occurred during logical decoding\n- and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n+ counters can be used to gauge the I/O which occurred during logical\n+ decoding and accordingly allow tuning <literal>logical_decoding_work_mem</literal>.\n\nNow that I look again, maybe remove \"accordingly\" ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 27 Oct 2020 18:19:42 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Tue, Oct 27, 2020 at 06:19:42PM -0500, Justin Pryzby wrote:\n>On Tue, Oct 27, 2020 at 09:17:43AM +0530, Amit Kapila wrote:\n>> On Tue, Oct 27, 2020 at 8:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> >\n>> > On Fri, Oct 23, 2020 at 10:45:34AM +0530, Amit Kapila wrote:\n>> > > On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> > >\n>> > > While updating the streaming stats patch, it occurred to me that we\n>> > > can write a better description spill_bytes as well. Attached contains\n>> > > the update to spill_bytes description.\n>> >\n>> > + This and other spill\n>> > + counters can be used to gauge the I/O occurred during logical decoding\n>> > + and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n>> >\n>> > \"gauge the IO occurred\" is wrong.\n>> > Either: I/O *which* occured, or I/O occurring, or occurs.\n>> >\n>> > \"can tune\" should say \"allow tuning\".\n>> >\n>> > Like:\n>> > + This and other spill\n>> > + counters can be used to gauge the I/O which occurs during logical decoding\n>> > + and accordingly allow tuning of <literal>logical_decoding_work_mem</literal>.\n>> >\n>> > - Number of times transactions were spilled to disk. Transactions\n>> > + Number of times transactions were spilled to disk while performing\n>> > + decoding of changes from WAL for this slot. 
Transactions\n>> >\n>> > What about: \"..while decoding changes..\" (remove \"performing\" and \"of\").\n>> >\n>>\n>> All of your suggestions sound good to me. Find the patch attached to\n>> update the docs accordingly.\n>\n>@@ -2628,8 +2627,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> <para>\n> Amount of decoded transaction data spilled to disk while performing\n> decoding of changes from WAL for this slot. This and other spill\n>- counters can be used to gauge the I/O occurred during logical decoding\n>- and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n>+ counters can be used to gauge the I/O which occurred during logical\n>+ decoding and accordingly allow tuning <literal>logical_decoding_work_mem</literal>.\n>\n>Now that I look again, maybe remove \"accordingly\" ?\n>\n\nYeah, the 'accordingly' seems rather unnecessary here. Let's remove it.\n\n\nFWIW thanks to everyone working on this and getting the reworked version\nof the 9290ad198b patch in. 
As an author of that patch I should have\npaid more attention to this thread, and I appreciate the amount of work\nspent on fixing it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Wed, 28 Oct 2020 00:56:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" }, { "msg_contents": "On Wed, Oct 28, 2020 at 5:26 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Tue, Oct 27, 2020 at 06:19:42PM -0500, Justin Pryzby wrote:\n> >On Tue, Oct 27, 2020 at 09:17:43AM +0530, Amit Kapila wrote:\n> >> On Tue, Oct 27, 2020 at 8:51 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> >\n> >> > On Fri, Oct 23, 2020 at 10:45:34AM +0530, Amit Kapila wrote:\n> >> > > On Fri, Oct 23, 2020 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> > >\n> >> > > While updating the streaming stats patch, it occurred to me that we\n> >> > > can write a better description spill_bytes as well. Attached contains\n> >> > > the update to spill_bytes description.\n> >> >\n> >> > + This and other spill\n> >> > + counters can be used to gauge the I/O occurred during logical decoding\n> >> > + and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n> >> >\n> >> > \"gauge the IO occurred\" is wrong.\n> >> > Either: I/O *which* occured, or I/O occurring, or occurs.\n> >> >\n> >> > \"can tune\" should say \"allow tuning\".\n> >> >\n> >> > Like:\n> >> > + This and other spill\n> >> > + counters can be used to gauge the I/O which occurs during logical decoding\n> >> > + and accordingly allow tuning of <literal>logical_decoding_work_mem</literal>.\n> >> >\n> >> > - Number of times transactions were spilled to disk. Transactions\n> >> > + Number of times transactions were spilled to disk while performing\n> >> > + decoding of changes from WAL for this slot. 
Transactions\n> >> >\n> >> > What about: \"..while decoding changes..\" (remove \"performing\" and \"of\").\n> >> >\n> >>\n> >> All of your suggestions sound good to me. Find the patch attached to\n> >> update the docs accordingly.\n> >\n> >@@ -2628,8 +2627,8 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> > <para>\n> > Amount of decoded transaction data spilled to disk while performing\n> > decoding of changes from WAL for this slot. This and other spill\n> >- counters can be used to gauge the I/O occurred during logical decoding\n> >- and accordingly can tune <literal>logical_decoding_work_mem</literal>.\n> >+ counters can be used to gauge the I/O which occurred during logical\n> >+ decoding and accordingly allow tuning <literal>logical_decoding_work_mem</literal>.\n> >\n> >Now that I look again, maybe remove \"accordingly\" ?\n> >\n>\n> Yeah, the 'accordingly' seems rather unnecessary here. Let's remove it.\n>\n\nRemoved and pushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Oct 2020 08:04:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resetting spilled txn statistics in pg_stat_replication" } ]
[ { "msg_contents": "Hello.\n\nI noticed that UpdateSpillStats calls \"elog(DEBUG2\" within\nSpinLockAcquire section on MyWalSnd. The lock doesn't protect rb and\nin the first place the rb cannot be modified while the function is\nrunning.\n\nIt should be out of the lock section.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 02 Jun 2020 16:15:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "\n\nOn 2020/06/02 16:15, Kyotaro Horiguchi wrote:\n> Hello.\n> \n> I noticed that UpdateSpillStats calls \"elog(DEBUG2\" within\n> SpinLockAcquire section on MyWalSnd. The lock doesn't protect rb and\n> in the first place the rb cannot be modified while the function is\n> running.\n> \n> It should be out of the lock section.\n\nThanks for the patch! It looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Jun 2020 17:35:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Tue, Jun 2, 2020 at 2:05 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/06/02 16:15, Kyotaro Horiguchi wrote:\n> > Hello.\n> >\n> > I noticed that UpdateSpillStats calls \"elog(DEBUG2\" within\n> > SpinLockAcquire section on MyWalSnd. The lock doesn't protect rb and\n> > in the first place the rb cannot be modified while the function is\n> > running.\n> >\n> > It should be out of the lock section.\n\nRight.\n\n>\n> Thanks for the patch! It looks good to me.\n>\n\nThe patch looks good to me as well. 
I will push this unless Fujii-San\nwants to do it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jun 2020 14:12:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "\n\nOn 2020/06/02 17:42, Amit Kapila wrote:\n> On Tue, Jun 2, 2020 at 2:05 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/06/02 16:15, Kyotaro Horiguchi wrote:\n>>> Hello.\n>>>\n>>> I noticed that UpdateSpillStats calls \"elog(DEBUG2\" within\n>>> SpinLockAcquire section on MyWalSnd. The lock doesn't protect rb and\n>>> in the first place the rb cannot be modified while the function is\n>>> running.\n>>>\n>>> It should be out of the lock section.\n> \n> Right.\n> \n>>\n>> Thanks for the patch! It looks good to me.\n>>\n> \n> The patch looks good to me as well. I will push this unless Fujii-San\n> wants to do it.\n\nThanks! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 2 Jun 2020 19:24:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "At Tue, 2 Jun 2020 19:24:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Thanks! I pushed the patch.\n\nThanks to all!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Jun 2020 09:18:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 03, 2020 at 09:18:19AM +0900, Kyotaro Horiguchi wrote:\n> Thanks to all!\n\nIndeed, this was incorrect. 
And you may not have noticed, but we have\na second instance of that in LogicalIncreaseRestartDecodingForSlot()\nthat goes down to 9.4 and b89e151. I used a dirty-still-efficient\nhack to detect that, and that's the only instance I have spotted.\n\nI am not sure if that's worth worrying a back-patch, but we should\nreally address that at least on HEAD. Attached is an extra patch to\nclose the loop.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 12:05:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 3, 2020 at 8:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jun 03, 2020 at 09:18:19AM +0900, Kyotaro Horiguchi wrote:\n> > Thanks to all!\n>\n> Indeed, this was incorrect.\n>\n\nDo you mean to say correct?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Jun 2020 08:52:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Indeed, this was incorrect. And you may not have noticed, but we have\n> a second instance of that in LogicalIncreaseRestartDecodingForSlot()\n> that goes down to 9.4 and b89e151. I used a dirty-still-efficient\n> hack to detect that, and that's the only instance I have spotted.\n\nUgh, that is just horrid. I experimented with the attached patch\nbut it did not find any other problems. Still, that only proves\nsomething about code paths that are taken during check-world, and\nwe know that our test coverage is not very good :-(.\n\nShould we think about adding automated detection of this type of\nmistake? 
I don't like the attached as-is because of the #include\nfootprint expansion, but maybe we can find a better way.\n\n> I am not sure if that's worth worrying a back-patch, but we should\n> really address that at least on HEAD.\n\nIt's actually worse in the back branches, because elog() did not have\na good short-circuit path like ereport() does. +1 for back-patch.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 03 Jun 2020 00:36:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 03, 2020 at 08:52:08AM +0530, Amit Kapila wrote:\n> Do you mean to say correct?\n\nNope, I really meant that the code before caa3c42 is incorrect, and I\nam glad that it got fixed. Sorry if that sounded confusing.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 13:41:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 03, 2020 at 12:36:34AM -0400, Tom Lane wrote:\n> Ugh, that is just horrid. I experimented with the attached patch\n> but it did not find any other problems.\n\nOh. I can see the same \"ifndef FRONTEND\" logic all around the place\nas I did on my local branch :)\n\n> Still, that only proves something about code paths that are taken\n> during check-world, and we know that our test coverage is not very\n> good :-(.\n\nYeah. Not perfect, still we are getting better at it with the years.\nI am fine to take care of a backpatch, but I'll wait first a bit to\nsee if others have any comments.\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 13:48:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "I wrote:\n> Ugh, that is just horrid. 
I experimented with the attached patch\n> but it did not find any other problems.\n\nIt occurred to me to add NotHoldingSpinLock() into palloc and\nfriends, and look what I found in copy_replication_slot:\n\n SpinLockAcquire(&s->mutex);\n src_islogical = SlotIsLogical(s);\n src_restart_lsn = s->data.restart_lsn;\n temporary = s->data.persistency == RS_TEMPORARY;\n plugin = logical_slot ? pstrdup(NameStr(s->data.plugin)) : NULL;\n SpinLockRelease(&s->mutex);\n\nThat is not gonna do, of course. And there is another pstrdup\ninside another spinlock section a bit further down in the same\nfunction. Also, pg_get_replication_slots has a couple of\nnamecpy() calls inside a spinlock, which is maybe less dangerous\nthan palloc() but it's still willful disregard of the project coding\nrule about \"only straight-line code inside a spinlock\".\n\nI'm inclined to think that memcpy'ing the ReplicationSlot struct\ninto a local variable might be the best way, replacing all the\npiecemeal copying these stanzas are doing right now. memcpy() of\na fixed amount of data isn't quite straight-line code perhaps,\nbut it has a well-defined runtime and zero chance of throwing an\nerror, which are the two properties we should be most urgently\nconcerned about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jun 2020 01:27:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 03, 2020 at 01:27:51AM -0400, Tom Lane wrote:\n> I'm inclined to think that memcpy'ing the ReplicationSlot struct\n> into a local variable might be the best way, replacing all the\n> piecemeal copying these stanzas are doing right now. memcpy() of\n> a fixed amount of data isn't quite straight-line code perhaps,\n> but it has a well-defined runtime and zero chance of throwing an\n> error, which are the two properties we should be most urgently\n> concerned about.\n\n+1. 
And I guess that you are already on that? ;)\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 14:47:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 03, 2020 at 01:27:51AM -0400, Tom Lane wrote:\n>> I'm inclined to think that memcpy'ing the ReplicationSlot struct\n>> into a local variable might be the best way, replacing all the\n>> piecemeal copying these stanzas are doing right now.\n\n> +1. And I guess that you are already on that? ;)\n\nI'll work on it tomorrow ... it's getting late here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jun 2020 01:54:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "... and InvalidateObsoleteReplicationSlots(), too.\n\nI am detecting a pattern here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jun 2020 02:00:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "At Wed, 03 Jun 2020 02:00:53 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> ... 
and InvalidateObsoleteReplicationSlots(), too.\n> \n> I am detecting a pattern here.\n\nI looked through 224 locations where SpinLockAcquire is used and found some.\n\nLogicalIncreaseRestartDecodingForSlot is spotted by Michael.\npg_get_replication_slots has some namecpy as Tom pointed out.\ncopy_replication_slot has pstrdup as Tom pointed out.\nInvalidateObsoleteReplicationSlots has pstrdup as Tom pointed out.\n\nI found no other instance of pstrdup, but did find some string copy functions.\n\nCreateInitDecodingContext has StrNCpy (up to NAMEDATALEN = 64 bytes).\nRequestXLogStreaming has strlcpy (up to MAXCONNINFO = 1024 bytes).\nSaveSlotToPath has memcpy on ReplicationSlotOnDisk (176 bytes).\nWalReceiverMain has strlcpy(MAXCONNINFO + NAMEDATALEN) and memset of MAXCONNINFO.\npg_stat_get_wal_receiver has strlcpy (NAMEDATALEN + NI_MAXHOST(1025) + MAXCONNINFO).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Jun 2020 15:18:19 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I looked through 224 locations where SpinLockAcquire is used and found some.\n\nYeah, I made a similar scan and arrived at about the same conclusions.\nI think that the memcpy and strlcpy calls are fine; at least, we've got\nto transport data somehow and it's not apparent why those aren't OK ways\nto do it. The one use of StrNCpy is annoying from a cosmetic standpoint\n(mainly because it's Not Like Anywhere Else) but I'm not sure it's worth\nchanging.\n\nThe condition-variable code has a boatload of spinlocked calls of the\nproclist functions in proclist.h. 
All of those are straight-line code\nso they're okay performance wise, but I wonder if we shouldn't add a\ncomment to that header pointing out that its functions must not throw\nerrors.\n\nThe only other thing I remain concerned about is some instances of atomic\noperations inside spinlocks, which I started a separate thread about [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1141819.1591208385%40sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 03 Jun 2020 14:35:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 03, 2020 at 12:36:34AM -0400, Tom Lane wrote:\n> Should we think about adding automated detection of this type of\n> mistake? I don't like the attached as-is because of the #include\n> footprint expansion, but maybe we can find a better way.\n\nI think that this one first boils down to the FRONTEND dependency in\nthose headers. Or in short, spin.h may get loaded by the frontend but\nwe have a backend-only API, no?\n\n> It's actually worse in the back branches, because elog() did not have\n> a good short-circuit path like ereport() does. +1 for back-patch.\n\nThanks, got that fixed down to 9.5.\n--\nMichael", "msg_date": "Thu, 4 Jun 2020 10:43:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 03, 2020 at 12:36:34AM -0400, Tom Lane wrote:\n>> Should we think about adding automated detection of this type of\n>> mistake? I don't like the attached as-is because of the #include\n>> footprint expansion, but maybe we can find a better way.\n\n> I think that this one first boils down to the FRONTEND dependency in\n> those headers. 
Or in short, spin.h may get loaded by the frontend but\n> we have a backend-only API, no?\n\nI think the #include bloat comes from wanting to declare the global\nstate variable as \"slock_t *\". We could give up on that and write\nsomething like this in a central place like c.h:\n\n#if defined(USE_ASSERT_CHECKING) && !defined(FRONTEND)\nextern void *held_spinlock;\n#define NotHoldingSpinLock() Assert(held_spinlock == NULL)\n#else\n#define NotHoldingSpinLock() ((void) 0)\n#endif\n\nThen throwing NotHoldingSpinLock() into relevant places costs\nnothing new include-wise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jun 2020 21:57:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Wed, Jun 3, 2020 at 12:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Should we think about adding automated detection of this type of\n> mistake? I don't like the attached as-is because of the #include\n> footprint expansion, but maybe we can find a better way.\n\nI think it would be an excellent idea.\n\nRemoving some of these spinlocks and replacing them with LWLocks might\nalso be worth considering.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Jun 2020 13:46:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Removing some of these spinlocks and replacing them with LWLocks might\n> also be worth considering.\n\nWhen I went through the existing spinlock stanzas, the only thing that\nreally made me acutely uncomfortable was the chunk in pg_stat_statement's\npgss_store(), lines 1386..1438 in HEAD. In the first place, that's\npushing the notion of \"short straight-line code\" well beyond reasonable\nbounds. 
Other processes could waste a fair amount of time spinning while\nthe lock holder does all this arithmetic; not to mention the risk of\nexhausting one's CPU time-slice partway through. In the second place,\na chunk of code this large could well allow people to make modifications\nwithout noticing that they're inside a spinlock, allowing future coding\nviolations to sneak in.\n\nNot sure what we want to do about it though. An LWLock per pgss entry\nprobably isn't gonna do. Perhaps we could take a cue from your old\nhack with multiplexed spinlocks, and map the pgss entries onto some\nfixed-size pool of LWLocks, figuring that the odds of false conflicts\nare small as long as the pool is bigger than MaxBackends.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jun 2020 13:59:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Tue, Jun 9, 2020 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Removing some of these spinlocks and replacing them with LWLocks might\n> > also be worth considering.\n>\n> When I went through the existing spinlock stanzas, the only thing that\n> really made me acutely uncomfortable was the chunk in pg_stat_statement's\n> pgss_store(), lines 1386..1438 in HEAD. In the first place, that's\n> pushing the notion of \"short straight-line code\" well beyond reasonable\n> bounds. Other processes could waste a fair amount of time spinning while\n> the lock holder does all this arithmetic; not to mention the risk of\n> exhausting one's CPU time-slice partway through. In the second place,\n> a chunk of code this large could well allow people to make modifications\n> without noticing that they're inside a spinlock, allowing future coding\n> violations to sneak in.\n>\n> Not sure what we want to do about it though. An LWLock per pgss entry\n> probably isn't gonna do. 
Perhaps we could take a cue from your old\n> hack with multiplexed spinlocks, and map the pgss entries onto some\n> fixed-size pool of LWLocks, figuring that the odds of false conflicts\n> are small as long as the pool is bigger than MaxBackends.\n\nI mean, what would be wrong with having an LWLock per pgss entry? If\nyou're worried about efficiency, it's no longer the case that an\nLWLock uses a spinlock internally, so there's not the old problem of\ndoubling (plus contention) the number of atomic operations by using an\nLWLock. If you're worried about space, an LWLock is only 16 bytes, and\nthe slock_t that we'd be replacing is currently at the end of the\nstruct so presumably followed by some padding.\n\nI suspect that these days many of the places we're using spinlocks are\nbuying little of any value on the efficiency side, but making any\nhigh-contention scenarios way worse. Plus, unlike LWLocks, they're not\ninstrumented with wait events, so you can't even find out that you've\ngot contention there without breaking out 'perf', not exactly a great\nthing to have to do in a production environment.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Jun 2020 15:20:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jun 9, 2020 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> When I went through the existing spinlock stanzas, the only thing that\n>> really made me acutely uncomfortable was the chunk in pg_stat_statement's\n>> pgss_store(), lines 1386..1438 in HEAD.\n\n> I mean, what would be wrong with having an LWLock per pgss entry?\n\n
I'm accustomed to thinking of them as being\nsignificantly more expensive than spinlocks, but maybe we've narrowed\nthe gap enough that that's not such a problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jun 2020 19:24:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Hi,\n\nOn 2020-06-09 19:24:15 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Jun 9, 2020 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> When I went through the existing spinlock stanzas, the only thing that\n> >> really made me acutely uncomfortable was the chunk in pg_stat_statement's\n> >> pgss_store(), lines 1386..1438 in HEAD.\n> \n> > I mean, what would be wrong with having an LWLock per pgss entry?\n\n+1\n\n> Hmm, maybe nothing. I'm accustomed to thinking of them as being\n> significantly more expensive than spinlocks, but maybe we've narrowed\n> the gap enough that that's not such a problem.\n\nThey do add a few cycles (IIRC ~30 or so, last time I measured a\nspecific scenario) of latency to acquisition, but it's not a large\namount. The only case where acquisition is noticably slower, in my\nexperiments, is when there's \"just the right amount\" of\ncontention. There spinning instead of entering the kernel can be good.\n\nI've mused about adding a small amount of spinning to lwlock acquisition\nbefore. But so far working on reducing contention seemed the better\nroute.\n\n\nFunnily enough lwlock *release*, even when there are no waiters, has a\nsomewhat noticable performance difference on x86 (and other TSO\nplatforms) compared to spinlock release. 
For spinlock release we can\njust use a plain write and a compiler barrier, whereas lwlock release\nneeds to use an atomic operation.\n\nI think that's hard, but not impossible, to avoid for an userspace\nreader-writer lock.\n\n\nIt would be a nice experiment to make spinlocks a legacy wrapper around\nrwlocks. I think if we added 2-3 optimizations (optimize for\nexclusive-only locks, short amount of spinning, possibly inline\nfunctions for \"fast path\" acquisitions/release) that'd be better for\nnearly all situations. And in the situations where it's not, the loss\nwould be pretty darn small.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Jun 2020 16:54:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Hi,\n\nOn 2020-06-09 15:20:08 -0400, Robert Haas wrote:\n> If you're worried about space, an LWLock is only 16 bytes, and the\n> slock_t that we'd be replacing is currently at the end of the struct\n> so presumably followed by some padding.\n\nI don't think the size is worth of concern in this case, and I'm not\nsure there's any current case where it's really worth spending effort\nreducing size. But if there is: It seems possible to reduce the size.\n\nstruct LWLock {\n uint16 tranche; /* 0 2 */\n\n /* XXX 2 bytes hole, try to pack */\n\n pg_atomic_uint32 state; /* 4 4 */\n proclist_head waiters; /* 8 8 */\n\n /* size: 16, cachelines: 1, members: 3 */\n /* sum members: 14, holes: 1, sum holes: 2 */\n /* last cacheline: 16 bytes */\n};\n\nFirst, we could remove the tranche from the lwlock, and instead perform\nmore work when we need to know it. Which is only when we're going to\nsleep, so it'd be ok if it's not that much work. 
Perhaps we could even\ndefer determining the tranche to the the *read* side of the wait event\n(presumably that'd require making the pgstat side a bit more\ncomplicated).\n\nSecond, it seems like it should be doable to reduce the size of the\nwaiters list. We e.g. could have a separate 'array of wait lists' array\nin shared memory, which gets assigned to an lwlock whenever a backend\nwants to wait for an lwlock. The number of processes waiting for lwlocks\nis clearly limited by MAX_BACKENDS / 2^18-1 backends waiting, so one 4\nbyte integer pointing to a wait list obviously would suffice.\n\nBut again, I'm not sure the current size a real problem anywhere.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Jun 2020 17:12:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "On Tue, Jun 9, 2020 at 8:12 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't think the size is worth of concern in this case, and I'm not\n> sure there's any current case where it's really worth spending effort\n> reducing size. But if there is: It seems possible to reduce the size.\n\nYeah, I don't think it's very important.\n\n> First, we could remove the tranche from the lwlock, and instead perform\n> more work when we need to know it. Which is only when we're going to\n> sleep, so it'd be ok if it's not that much work. Perhaps we could even\n> defer determining the tranche to the the *read* side of the wait event\n> (presumably that'd require making the pgstat side a bit more\n> complicated).\n>\n> Second, it seems like it should be doable to reduce the size of the\n> waiters list. We e.g. could have a separate 'array of wait lists' array\n> in shared memory, which gets assigned to an lwlock whenever a backend\n> wants to wait for an lwlock. 
The number of processes waiting for lwlocks\n> is clearly limited by MAX_BACKENDS / 2^18-1 backends waiting, so one 4\n> byte integer pointing to a wait list obviously would suffice.\n>\n> But again, I'm not sure the current size a real problem anywhere.\n\nHonestly, both of these sound more painful than it's worth. We're not\nlikely to have enough LWLocks that using 16 bytes for each one rather\nthan 8 is a major problem. With regard to the first of these ideas,\nbear in mind that the LWLock might be in a DSM segment that the reader\ndoesn't have mapped.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 10 Jun 2020 10:45:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Hi,\n\nOn 2020-06-03 00:36:34 -0400, Tom Lane wrote:\n> Should we think about adding automated detection of this type of\n> mistake? I don't like the attached as-is because of the #include\n> footprint expansion, but maybe we can find a better way.\n\nI experimented with making the compiler warn about about some of these\nkinds of mistakes without needing full test coverage:\n\nI was able to get clang to warn about things like using palloc in signal\nhandlers, or using palloc while holding a spinlock. Which would be\ngreat, except that it doesn't warn when there's an un-annotated\nintermediary function. 
Even when that function is in the same TU.\n\nHere's my attempt: https://godbolt.org/z/xfa6Es\n\nIt does detect things like\n spinlock_lock();\n example_alloc(17);\n spinlock_unlock();\n\n<source>:49:2: warning: cannot call function 'example_alloc' while mutex 'holding_spinlock' is held [-Wthread-safety-analysis]\n\n example_alloc(17);\n\n ^\n\nwhich isn't too bad.\n\nDoes anybody think this would be useful even if it doesn't detect the\nmore complicated cases?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Jun 2020 16:31:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I experimented with making the compiler warn about about some of these\n> kinds of mistakes without needing full test coverage:\n\n> I was able to get clang to warn about things like using palloc in signal\n> handlers, or using palloc while holding a spinlock. Which would be\n> great, except that it doesn't warn when there's an un-annotated\n> intermediary function. Even when that function is in the same TU.\n\nHm. Couldn't we make \"calling an un-annotated function\" be a violation\nin itself? Certainly in the case of spinlocks, what we want is pretty\nnearly a total ban on calling anything at all. I wouldn't cry too hard\nabout having a similar policy for signal handlers. (The postmaster's\nhandlers would have to be an exception for now.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Jun 2020 19:46:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." 
}, { "msg_contents": "Hi,\n\nOn 2020-06-16 19:46:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I experimented with making the compiler warn about about some of these\n> > kinds of mistakes without needing full test coverage:\n>\n> > I was able to get clang to warn about things like using palloc in signal\n> > handlers, or using palloc while holding a spinlock. Which would be\n> > great, except that it doesn't warn when there's an un-annotated\n> > intermediary function. Even when that function is in the same TU.\n>\n> Hm. Couldn't we make \"calling an un-annotated function\" be a violation\n> in itself?\n\nI don't see a way to do that with these annotations, unfortunately.\n\nhttps://clang.llvm.org/docs/ThreadSafetyAnalysis.html\nhttps://clang.llvm.org/docs/AttributeReference.html#acquire-capability-acquire-shared-capability\n\n\n> Certainly in the case of spinlocks, what we want is pretty\n> nearly a total ban on calling anything at all. I wouldn't cry too hard\n> about having a similar policy for signal handlers.\n\nIt'd be interesting to try and see how invasive that'd be, if it were\npossible to enforce. But...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Jun 2020 17:27:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: elog(DEBUG2 in SpinLocked section." } ]
[ { "msg_contents": "Hi hackers,\n\n\nI've attached a patch to add blocker(s) information for LW Locks.\nThe motive behind is to be able to get some blocker(s) information (if \nany) in the context of LW Locks.\n\n_Motivation:_\n\nWe have seen some cases with heavy contention on some LW Locks (large \nnumber of backends waiting on the same LW Lock).\n\nAdding some blocker information would make the investigations easier, it \ncould help answering questions like:\n\n * how many PIDs are holding the LWLock (could be more than one in case\n of LW_SHARED)?\n * Is the blocking PID changing?\n * Is the number of blocking PIDs changing?\n * What is the blocking PID doing?\n * Is the blocking PID waiting?\n * In which mode request is the blocked PID?\n * in which mode is the blocker PID holding the lock?\n\n_Technical context and proposal:_\n\nThere is 2 points in this patch:\n\n * Add the instrumentation:\n\n * the patch adds into the LWLock struct:\n\n                     last_holding_pid: last pid owner of the lock\n                     last_mode: last holding mode of the last pid owner \nof the lock\n                     nholders: number of holders (could be >1 in case of \nLW_SHARED)\n\n * the patch adds into the PGPROC struct:\n\n//lwLastHoldingPid: last holder of the LW lock the PID is waiting for\n                     lwHolderMode;  LW lock mode of last holder of the \nLW lock the PID is waiting for\n                     lwNbHolders: number of holders of the LW lock the \nPID is waiting for\n\n             and what is necessary to update this new information.\n\n * Provide a way to display the information: the patch also adds a\n function /pg_lwlock_blocking_pid/ to display this new information.\n\n_Outcome Example:_\n\n# select * from pg_lwlock_blocking_pid(10259);\n\n requested_mode | last_holder_pid | last_holder_mode | nb_holders\n\n----------------+-----------------+------------------+------------\n\n LW_EXCLUSIVE   |           10232 | LW_EXCLUSIVE     |       
   1\n\n(1 row)\n\n \n\n # select query,pid,state,wait_event,wait_event_type,pg_lwlock_blocking_pid(pid),pg_blocking_pids(pid) from pg_stat_activity where state='active' and pid != pg_backend_pid();\n\n              query              |  pid  | state  |  wait_event   | wait_event_type |          pg_lwlock_blocking_pid           | pg_blocking_pids\n\n--------------------------------+-------+--------+---------------+-----------------+-------------------------------------------+------------------\n\n insert into bdtlwa values (1); | 10232 | active |               |                 | (,,,)                                     | {}\n\n insert into bdtlwb values (1); | 10254 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n\n create table bdtwt (a int);    | 10256 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n\n insert into bdtlwa values (2); | 10259 | active | BufferContent | LWLock          | (LW_EXCLUSIVE,10232,LW_EXCLUSIVE,1)       | {}\n\n drop table bdtlwd;             | 10261 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n\n(5 rows)\n\n\nSo, should a PID being blocked on a LWLock we could see:\n\n * in which mode request it is waiting\n * the last pid holding the lock\n * the mode of the last PID holding the lock\n * the number of PID(s) holding the lock\n\n_Remarks:_\n\nI did a few benchmarks so far and did not observe notable performance \ndegradation (can share more details if needed).\n\nI did some quick attempts to get an exhaustive list of blockers (in case \nof LW_SHARED holders), but I think that would be challenging as:\n\n * There is about 40 000 calls to LWLockInitialize and all my attempts\n to init a list here produced “ FATAL: out of shared memory” or similar.\n * One way to get rid of using a list in LWLock could be to use\n proc_list (with proclist_head in LWLock and proclist_node in\n PGPROC). 
This is the current implementation for the “waiters” list.\n But this would not work for the blockers as one PGPROC can hold\n multiples LW locks so it could mean having a list of about 40K\n proclist_node per PGPROC.\n * I also have concerns about possible performance impact by using such\n a huge list in this context.\n\nThose are the reasons why this patch does not provide an exhaustive list \nof blockers.\n\nWhile this patch does not provide an exhaustive list of blockers (in \ncase of LW_SHARED holders), the information it delivers could already be \nuseful to get insights during LWLock contention scenario.\n\nI will add this patch to the next commitfest. I look forward to your \nfeedback about the idea and/or implementation.\n\nRegards,\n\nBertrand", "msg_date": "Tue, 2 Jun 2020 14:24:53 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Add LWLock blocker(s) information" }, { "msg_contents": "Hi hackers,\n\nOn 6/2/20 2:24 PM, Drouvot, Bertrand wrote:\n>\n> Hi hackers,\n>\n>\n> I've attached a patch to add blocker(s) information for LW Locks.\n> The motive behind is to be able to get some blocker(s) information (if \n> any) in the context of LW Locks.\n>\n> _Motivation:_\n>\n> We have seen some cases with heavy contention on some LW Locks (large \n> number of backends waiting on the same LW Lock).\n>\n> Adding some blocker information would make the investigations easier, \n> it could help answering questions like:\n>\n> * how many PIDs are holding the LWLock (could be more than one in\n> case of LW_SHARED)?\n> * Is the blocking PID changing?\n> * Is the number of blocking PIDs changing?\n> * What is the blocking PID doing?\n> * Is the blocking PID waiting?\n> * In which mode request is the blocked PID?\n> * in which mode is the blocker PID holding the lock?\n>\n> _Technical context and proposal:_\n>\n> There is 2 points in this patch:\n>\n> * Add the instrumentation:\n>\n> * the patch adds into the LWLock 
struct:\n>\n>                     last_holding_pid: last pid owner of the lock\n>                     last_mode: last holding mode of the last pid owner \n> of the lock\n>                     nholders: number of holders (could be >1 in case \n> of LW_SHARED)\n>\n> * the patch adds into the PGPROC struct:\n>\n> //lwLastHoldingPid: last holder of the LW lock the PID is waiting for\n>                     lwHolderMode;  LW lock mode of last holder of the \n> LW lock the PID is waiting for\n>                     lwNbHolders: number of holders of the LW lock the \n> PID is waiting for\n>\n>             and what is necessary to update this new information.\n>\n> * Provide a way to display the information: the patch also adds a\n> function /pg_lwlock_blocking_pid/ to display this new information.\n>\n> _Outcome Example:_\n>\n> # select * from pg_lwlock_blocking_pid(10259);\n> requested_mode | last_holder_pid | last_holder_mode | nb_holders\n> ----------------+-----------------+------------------+------------\n> LW_EXCLUSIVE   |           10232 | LW_EXCLUSIVE     |          1\n> (1 row)\n> \n> # select query,pid,state,wait_event,wait_event_type,pg_lwlock_blocking_pid(pid),pg_blocking_pids(pid) from pg_stat_activity where state='active' and pid != pg_backend_pid();\n>              query              |  pid  | state  |  wait_event   | wait_event_type |          pg_lwlock_blocking_pid           | pg_blocking_pids\n> --------------------------------+-------+--------+---------------+-----------------+-------------------------------------------+------------------\n> insert into bdtlwa values (1); | 10232 | active |               |                 | (,,,)                                     | {}\n> insert into bdtlwb values (1); | 10254 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n> create table bdtwt (a int);    | 10256 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n> insert into bdtlwa 
values (2); | 10259 | active | BufferContent | LWLock          | (LW_EXCLUSIVE,10232,LW_EXCLUSIVE,1)       | {}\n> drop table bdtlwd;             | 10261 | active | WALInsert     | LWLock          | (LW_WAIT_UNTIL_FREE,10232,LW_EXCLUSIVE,1) | {}\n> (5 rows)\n>\n>\n> So, should a PID being blocked on a LWLock we could see:\n>\n> * in which mode request it is waiting\n> * the last pid holding the lock\n> * the mode of the last PID holding the lock\n> * the number of PID(s) holding the lock\n>\n> _Remarks:_\n>\n> I did a few benchmarks so far and did not observe notable performance \n> degradation (can share more details if needed).\n>\n> I did some quick attempts to get an exhaustive list of blockers (in \n> case of LW_SHARED holders), but I think that would be challenging as:\n>\n> * There is about 40 000 calls to LWLockInitialize and all my\n> attempts to init a list here produced “ FATAL: out of shared\n> memory” or similar.\n> * One way to get rid of using a list in LWLock could be to use\n> proc_list (with proclist_head in LWLock and proclist_node in\n> PGPROC). This is the current implementation for the “waiters”\n> list. But this would not work for the blockers as one PGPROC can\n> hold multiples LW locks so it could mean having a list of about\n> 40K proclist_node per PGPROC.\n> * I also have concerns about possible performance impact by using\n> such a huge list in this context.\n>\n> Those are the reasons why this patch does not provide an exhaustive \n> list of blockers.\n>\n> While this patch does not provide an exhaustive list of blockers (in \n> case of LW_SHARED holders), the information it delivers could already \n> be useful to get insights during LWLock contention scenario.\n>\n> I will add this patch to the next commitfest. 
I look forward to your \n> feedback about the idea and/or implementation.\n>\n> Regards,\n>\n> Bertrand \n\nAttaching a new version of the patch with a tiny change to make it pass \nthe regression tests (opr_sanity was failing due to the new function \nthat is part of the patch).\n\nRegards,\n\nBertrand", "msg_date": "Sun, 7 Jun 2020 08:12:59 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "Hi, \r\nThis is a very interesting topic. I did apply the 2nd patch to master branch and performed a quick test. I can observe below information,\r\npostgres=# select * from pg_lwlock_blocking_pid(26925);\r\n requested_mode | last_holder_pid | last_holder_mode | nb_holders \r\n----------------+-----------------+------------------+------------\r\n LW_EXCLUSIVE | 26844 | LW_EXCLUSIVE | 1\r\n(1 row)\r\n\r\npostgres=# select query,pid,state,wait_event,wait_event_type,pg_lwlock_blocking_pid(pid),pg_blocking_pids(pid) from pg_stat_activity where state='active' and pid != pg_backend_pid();\r\n query | pid | state | wait_event | wait_event_type | pg_lwlock_blocking_pid | pg_blocking_pids \r\n--------------------------------------------------------------+-------+--------+------------+-----------------+-------------------------------------+------------------\r\n INSERT INTO orders SELECT FROM generate_series(1, 10000000); | 26925 | active | WALWrite | LWLock | (LW_EXCLUSIVE,26844,LW_EXCLUSIVE,1) | {}\r\n(1 row)\r\n\r\nAt some points, I have to keep repeating the query in order to capture the \"lock info\". 
I think this is probably part of the design, but I was wondering,\r\nif a query is in deadlock expecting a developer to take a look using the methods above, will the process be killed before a developer gets the chance to execute one of the queries?\r\nIf some statistics information can be added, it may help the developers to get an overall idea about the lock status, and if the developers can specify some filters, such as the number of times a query entered into a deadlock, the queries that hold the lock for more than a given number of ms, etc, it might help to troubleshoot the \"lock\" issue even better. And moreover, if this feature can be an independent extension, similar to \"pg_buffercache\", it will be great.\r\nBest regards,\r\n\r\nDavid", "msg_date": "Fri, 07 Aug 2020 19:53:38 +0000", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "On Tue, Jun 2, 2020 at 8:25 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> the patch adds into the LWLock struct:\n>\n> last_holding_pid: last pid owner of the lock\n> last_mode: last holding mode of the last pid owner of the lock\n> nholders: number of holders (could be >1 in case of LW_SHARED)\n\nThere's been significant work done over the years to get the size of\nan LWLock down; I'm not very enthusiastic about making it bigger\nagain. See for example commit 6150a1b08a9fe7ead2b25240be46dddeae9d98e1\nwhich embeds one of the LWLocks associated with a BufferDesc into the\nstructure to reduce the number of cache lines associated with common\nbuffer operations. 
I'm not sure whether this patch would increase the\nspace usage of a BufferDesc to more than one cache line again, but at\nthe very least it would make it a lot tighter, since it looks like it\nadds 12 bytes to the size of each one.\n\nIt's also a little hard to believe that this doesn't hurt performance\non workloads with a lot of LWLock contention, although maybe not; it\ndoesn't seem crazy expensive, just possibly enough to matter.\n\nI thought a little bit about what this might buy as compared with just\nsampling wait events. That by itself is enough to tell you which\nLWLocks are heavily contended. It doesn't tell you what they are\ncontending against, so this would be superior in that regard. However,\nI wonder how much of a problem that actually is. Typically, LWLocks\naren't being taken for long periods, so all the things that are\naccessing the lock spend some time waiting (which you will see via\nwait events in pg_stat_activity) and some time holding the lock\n(making you see other things in pg_stat_activity). It's possible to\nhave cases where this isn't true; e.g. a relatively small number of\nbackends committing transactions could be slowing down a much larger\nnumber of backends taking snapshots, and you'd mostly only see the\nlatter waiting for ProcArrayLock. 
However, those kinds of cases don't\nseem super-common or super-difficult to figure out.\n\nWhat kinds of scenarios motivate you to propose this?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Aug 2020 18:27:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "Hi,\n\nOn 2020-08-10 18:27:17 -0400, Robert Haas wrote:\n> On Tue, Jun 2, 2020 at 8:25 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> > the patch adds into the LWLock struct:\n> >\n> > last_holding_pid: last pid owner of the lock\n> > last_mode: last holding mode of the last pid owner of the lock\n> > nholders: number of holders (could be >1 in case of LW_SHARED)\n> \n> There's been significant work done over the years to get the size of\n> an LWLock down; I'm not very enthusiastic about making it bigger\n> again. See for example commit 6150a1b08a9fe7ead2b25240be46dddeae9d98e1\n> which embeds one of the LWLocks associated with a BufferDesc into the\n> structure to reduce the number of cache lines associated with common\n> buffer operations. I'm not sure whether this patch would increase the\n> space usage of a BufferDesc to more than one cache line again, but at\n> the very least it would make it a lot tighter, since it looks like it\n> adds 12 bytes to the size of each one.\n\n+many. If anything I would like to make them *smaller*. We should strive\nto make locking more and more granular, and that requires the space\noverhead to be small. 
I'm unhappy enough about the tranche being in\nthere, and requiring padding etc.\n\nI spent a *LOT* of sweat getting where we are, I'd be unhappy to regress\non size or efficiency.\n\n\n> It's also a little hard to believe that this doesn't hurt performance\n> on workloads with a lot of LWLock contention, although maybe not; it\n> doesn't seem crazy expensive, just possibly enough to matter.\n\nYea.\n\n\n> I thought a little bit about what this might buy as compared with just\n> sampling wait events. That by itself is enough to tell you which\n> LWLocks are heavily contended. It doesn't tell you what they are\n> contending against, so this would be superior in that regard. However,\n> I wonder how much of a problem that actually is. Typically, LWLocks\n> aren't being taken for long periods, so all the things that are\n> accessing the lock spend some time waiting (which you will see via\n> wait events in pg_stat_activity) and some time holding the lock\n> (making you see other things in pg_stat_activity). It's possible to\n> have cases where this isn't true; e.g. a relatively small number of\n> backends committing transactions could be slowing down a much larger\n> number of backends taking snapshots, and you'd mostly only see the\n> latter waiting for ProcArrayLock. However, those kinds of cases don't\n> seem super-common or super-difficult to figure out.\n\nMost of the cases where this kind of information really is interesting\nseem to benefit a lot from having stack information available. That\nobviously has overhead, so we don't want the cost all the\ntime. 
The script at\nhttps://postgr.es/m/20170622210845.d2hsbqv6rxu2tiye%40alap3.anarazel.de\ncan give you results like e.g.\nhttps://anarazel.de/t/2017-06-22/pgsemwait_64_async.svg\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Aug 2020 17:41:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "On Mon, Aug 10, 2020 at 5:41 PM Andres Freund <andres@anarazel.de> wrote:\n> Most of the cases where this kind of information really is interesting\n> seem to benefit a lot from having stack information available. That\n> obviously has overhead, so we don't want the cost all the\n> time. The script at\n> https://postgr.es/m/20170622210845.d2hsbqv6rxu2tiye%40alap3.anarazel.de\n> can give you results like e.g.\n> https://anarazel.de/t/2017-06-22/pgsemwait_64_async.svg\n\nIt seems to have bitrot. Do you have a more recent version of the script?\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Aug 2020 16:47:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "Hi,\n\nOn 2020-08-12 16:47:13 -0700, Peter Geoghegan wrote:\n> On Mon, Aug 10, 2020 at 5:41 PM Andres Freund <andres@anarazel.de> wrote:\n> > Most of the cases where this kind of information really is interesting\n> > seem to benefit a lot from having stack information available. That\n> > obviously has overhead, so we don't want the cost all the\n> > time. The script at\n> > https://postgr.es/m/20170622210845.d2hsbqv6rxu2tiye%40alap3.anarazel.de\n> > can give you results like e.g.\n> > https://anarazel.de/t/2017-06-22/pgsemwait_64_async.svg\n> \n> It seems to have bitrot. Do you have a more recent version of the script?\n\nAttached. Needed one python3 fix, and to be adapted so it works with\nfutex based semaphores. 
Seems to work for both sysv and posix semaphores\nnow, based a very short test.\n\nsudo python3 ./pgsemwait.py -x /home/andres/build/postgres/dev-optimize/vpath/src/backend/postgres -f 3|~/src/flamegraph/flamegraph.pl\n\nWill add a note to the other thread.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 12 Aug 2020 17:39:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "On Wed, Aug 12, 2020 at 5:39 PM Andres Freund <andres@anarazel.de> wrote:\n> Attached. Needed one python3 fix, and to be adapted so it works with\n> futex based semaphores. Seems to work for both sysv and posix semaphores\n> now, based a very short test.\n\nGreat, thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 12 Aug 2020 18:03:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "On 11/08/2020 03:41, Andres Freund wrote:\n> On 2020-08-10 18:27:17 -0400, Robert Haas wrote:\n>> On Tue, Jun 2, 2020 at 8:25 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>>> the patch adds into the LWLock struct:\n>>>\n>>> last_holding_pid: last pid owner of the lock\n>>> last_mode: last holding mode of the last pid owner of the lock\n>>> nholders: number of holders (could be >1 in case of LW_SHARED)\n>>\n>> There's been significant work done over the years to get the size of\n>> an LWLock down; I'm not very enthusiastic about making it bigger\n>> again. See for example commit 6150a1b08a9fe7ead2b25240be46dddeae9d98e1\n>> which embeds one of the LWLocks associated with a BufferDesc into the\n>> structure to reduce the number of cache lines associated with common\n>> buffer operations. 
I'm not sure whether this patch would increase the\n>> space usage of a BufferDesc to more than one cache line again, but at\n>> the very least it would make it a lot tighter, since it looks like it\n>> adds 12 bytes to the size of each one.\n> \n> +many. If anything I would like to make them *smaller*. We should strive\n> to make locking more and more granular, and that requires the space\n> overhead to be small. I'm unhappy enough about the tranche being in\n> there, and requiring padding etc.\n> \n> I spent a *LOT* of sweat getting where we are, I'd be unhappy to regress\n> on size or efficiency.\n\nThat seems to be the consensus, so I'm marking this as Returned with \nFeeback in the commitfest.\n\n- Heikki\n\n\n", "msg_date": "Wed, 18 Nov 2020 11:25:26 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" }, { "msg_contents": "On Wed, Nov 18, 2020 at 5:25 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 11/08/2020 03:41, Andres Freund wrote:\n> > On 2020-08-10 18:27:17 -0400, Robert Haas wrote:\n> >> On Tue, Jun 2, 2020 at 8:25 AM Drouvot, Bertrand <bdrouvot@amazon.com>\n> wrote:\n> >>> the patch adds into the LWLock struct:\n> >>>\n> >>> last_holding_pid: last pid owner of the lock\n> >>> last_mode: last holding mode of the last pid\n> owner of the lock\n> >>> nholders: number of holders (could be >1 in case\n> of LW_SHARED)\n> >>\n> >> There's been significant work done over the years to get the size of\n> >> an LWLock down; I'm not very enthusiastic about making it bigger\n> >> again. See for example commit 6150a1b08a9fe7ead2b25240be46dddeae9d98e1\n> >> which embeds one of the LWLocks associated with a BufferDesc into the\n> >> structure to reduce the number of cache lines associated with common\n> >> buffer operations. 
I'm not sure whether this patch would increase the\n> >> space usage of a BufferDesc to more than one cache line again, but at\n> >> the very least it would make it a lot tighter, since it looks like it\n> >> adds 12 bytes to the size of each one.\n> >\n> > +many. If anything I would like to make them *smaller*. We should strive\n> > to make locking more and more granular, and that requires the space\n> > overhead to be small. I'm unhappy enough about the tranche being in\n> > there, and requiring padding etc.\n> >\n> > I spent a *LOT* of sweat getting where we are, I'd be unhappy to regress\n> > on size or efficiency.\n>\n> That seems to be the consensus, so I'm marking this as Returned with\n> Feeback in the commitfest.\n>\n\nFor what it's worth, I think that things like this are where we can really\nbenefit from external tracing and observation tools.\n\nInstead of tracking the information persistently in the LWLock struct, we\ncan emit TRACE_POSTGRESQL_LWLOCK_BLOCKED_ON(...) in a context where we have\nthe information available to us, then forget all about it. We don't spend\nanything unless someone's collecting the info.\n\nIf someone wants to track LWLock blocking relationships during debugging\nand performance work, they can use systemtap, dtrace, bpftrace, or a\nsimilar tool to observe the LWLock tracepoints and generate stats on LWLock\nblocking frequencies/durations. Doing so with systemtap should be rather\nsimple.\n\nI actually already had a look at this before. I found that the tracepoints\nthat're in the LWLock code right now don't supply enough information in\ntheir arguments so you have to use DWARF debuginfo based probes, which is a\npain. The tranche name alone doesn't let you identify which lock within a\ntranche is the current target.\n\nI've attached a patch that adds the actual LWLock* to each tracepoint in\nthe LWLock subsystem. 
That permits different locks to be tracked when\nhandling tracepoint events within a single process.\n\nAnother patch adds tracepoints that were missing from LWLockUpdateVar and\nLWLockWaitForVar. And another removes a stray\nTRACE_POSTGRESQL_LWLOCK_ACQUIRE() in LWLockWaitForVar() which should not\nhave been there, since the lock is not actually acquired by\nLWLockWaitForVar().\n\nI'd hoped to add some sort of \"index within the tranche\" to tracepoints,\nbut it looks like it's not feasible. It turns out to be basically\nimpossible to get a stable identifier for an individual LWLock that is\nvalid across different backends. A LWLock inside a DSM segment might have a\ndifferent address in different backends depending on where the DSM segment\ngot mapped. The LWLock subsystem doesn't keep track of them and doesn't\nhave any way to map a LWLock pointer to any sort of cross-process-valid\nidentifier. So that's a bit of a pain when tracing. To provide something\nstable I think it'd be necessary to add some kind of counter tracked\nper-tranche and set by LWLockInitialize in the LWLock struct itself, which\nwe sure don't want to do. If this ever becomes important for some reason we\ncan probably look up whether the address is within a DSM segment or static\nshmem and compute some kind of relative address to report. For now you\ncan't identify and compare individual locks within a tranche except for\nindividual locks and named tranches.\n\n\nBy the way, the LWLock tracepoints currently fire T_NAME(lock) which calls\nGetLWTrancheName() for each tracepoint hit, so long as Pg is built with\n--enable-dtrace, even when nothing is actually tracing them. We might want\nto consider guarding them in systemtap tracepoint semaphore tests so they\njust become a predicted-away branch when not active. Doing so requires a\nsmall change to how we compile probes.d and the related makefile, but\nshouldn't be too hard. 
I haven't done that in this patch set.", "msg_date": "Thu, 19 Nov 2020 18:33:49 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add LWLock blocker(s) information" } ]
[ { "msg_contents": "Hi all,\n\nI have been looking at the ODBC driver and the need for currtid() as\nwell as currtid2(), and as mentioned already in [1], matching with my\nlookup of things, these are actually not needed by the driver as long\nas we connect to a server newer than 8.2 able to support RETURNING. I\nam adding in CC of this thread Saito-san and Inoue-san who are the\ntwo main maintainers of the driver for comments. It is worth noting\nthat on its latest HEAD the ODBC driver requires libpq from at least\n9.2.\n\nI would like to remove those two functions and the surrounding code\nfor v14, leading to some cleanup:\n 6 files changed, 326 deletions(-)\n\nWhile on it, I have noticed that heap_get_latest_tid() is still\nlocated within heapam.c, but we can just move it within\nheapam_handler.c.\n\nAttached are two patches to address both points. Comments are\nwelcome.\n\nThanks,\n\n[1]: https://www.postgresql.org/message-id/20200529005559.jl2gsolomyro4l4n@alap3.anarazel.de\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 11:14:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I would like to remove those two functions and the surrounding code\n> for v14, leading to some cleanup:\n\n+1\n\n> While on it, I have noticed that heap_get_latest_tid() is still\n> located within heapam.c, but we can just move it within\n> heapam_handler.c.\n\nIt looks like table_beginscan_tid wouldn't need to be exported anymore\neither.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jun 2020 22:34:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "I wrote:\n> It looks like table_beginscan_tid wouldn't need to be exported anymore\n> either.\n\nAh, scratch that, I misread 
it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jun 2020 22:47:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi,\n\nOn 2020/06/03 11:14, Michael Paquier wrote:\n> Hi all,\n\n> I have been looking at the ODBC driver and the need for currtid() as\n> well as currtid2(), and as mentioned already in [1], matching with my\n> lookup of things, these are actually not needed by the driver as long\n> as we connect to a server newer than 8.2 able to support RETURNING.\n\nThough currtid2() is necessary even for servers which support RETURNING,\nI don't object to remove it.\n\nregards,\nHiroshi Inoue\n\n> I\n> am adding in CC of this thread Saito-san and Inoue-san who are the\n> two main maintainers of the driver for comments. It is worth noting\n> that on its latest HEAD the ODBC driver requires libpq from at least\n> 9.2.\n>\n> I would like to remove those two functions and the surrounding code\n> for v14, leading to some cleanup:\n> 6 files changed, 326 deletions(-)\n>\n> While on it, I have noticed that heap_get_latest_tid() is still\n> located within heapam.c, but we can just move it within\n> heapam_handler.c.\n>\n> Attached are two patches to address both points. 
Comments are\n> welcome.\n>\n> Thanks,\n>\n> [1]: https://www.postgresql.org/message-id/20200529005559.jl2gsolomyro4l4n@alap3.anarazel.de\n> --\n> Michael\n\n\n\n", "msg_date": "Wed, 3 Jun 2020 22:10:21 +0900", "msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi,\n\nOn 2020-06-03 11:14:48 +0900, Michael Paquier wrote:\n> I would like to remove those two functions and the surrounding code\n> for v14, leading to some cleanup:\n> 6 files changed, 326 deletions(-)\n\n+1\n\n\n> While on it, I have noticed that heap_get_latest_tid() is still\n> located within heapam.c, but we can just move it within\n> heapam_handler.c.\n\nWhat's the point of that change? I think the differentiation between\nheapam_handler.c and heapam.c could be clearer, but if anything, I'd\nargue that heap_get_latest_tid is sufficiently low-level that it'd\nbelong in heapam.c.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jun 2020 10:59:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Thu, Jun 04, 2020 at 10:59:05AM -0700, Andres Freund wrote:\n> What's the point of that change? I think the differentiation between\n> heapam_handler.c and heapam.c could be clearer, but if anything, I'd\n> argue that heap_get_latest_tid is sufficiently low-level that it'd\n> belong in heapam.c.\n\nWell, heap_get_latest_tid() is only called in heapam_handler.c if\nanything, as it is not used elsewhere and not publish it. And IMO we\nshould try to encourage using table_get_latest_tid() instead if some\nplugins need that. 
Anyway, if you are opposed to this change, I won't\npush hard for it either.\n--\nMichael", "msg_date": "Fri, 5 Jun 2020 15:07:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Wed, Jun 03, 2020 at 10:10:21PM +0900, Inoue, Hiroshi wrote:\n> On 2020/06/03 11:14, Michael Paquier wrote:\n>> I have been looking at the ODBC driver and the need for currtid() as\n>> well as currtid2(), and as mentioned already in [1], matching with my\n>> lookup of things, these are actually not needed by the driver as long\n>> as we connect to a server newer than 8.2 able to support RETURNING.\n> \n> Though currtid2() is necessary even for servers which support RETURNING,\n> I don't object to remove it.\n\nIn which cases is it getting used then? From what I can see there is\nzero coverage for that part in the tests. And based on a rough read\nof the code, this would get called with LATEST_TUPLE_LOAD being set,\nwhere there is some kind of bulk deletion involved. 
Couldn't that be\na problem?\n--\nMichael", "msg_date": "Fri, 5 Jun 2020 15:22:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On 2020/06/05 15:22, Michael Paquier wrote:\n> On Wed, Jun 03, 2020 at 10:10:21PM +0900, Inoue, Hiroshi wrote:\n>> On 2020/06/03 11:14, Michael Paquier wrote:\n>>> I have been looking at the ODBC driver and the need for currtid() as\n>>> well as currtid2(), and as mentioned already in [1], matching with my\n>>> lookup of things, these are actually not needed by the driver as long\n>>> as we connect to a server newer than 8.2 able to support RETURNING.\n>> Though currtid2() is necessary even for servers which support RETURNING,\n>> I don't object to remove it.\n> In which cases is it getting used then?\n\nKeyset-driven cursors always detect changes made by other applications\n(and themselves). currtid() is necessary to detect the changes.\nCTIDs are changed by updates unfortunately.\n\nregards,\nHiroshi Inoue\n\n> From what I can see there is\n> zero coverage for that part in the tests. And based on a rough read\n> of the code, this would get called with LATEST_TUPLE_LOAD being set,\n> where there is some kind of bulk deletion involved. 
Couldn't that be\n> a problem?\n> --\n> Michael", "msg_date": "Fri, 5 Jun 2020 22:25:00 +0900", "msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Fri, Jun 05, 2020 at 10:25:00PM +0900, Inoue, Hiroshi wrote:\n> Keyset-driven cursors always detect changes made by other applications\n> (and themselves). currtid() is necessary to detect the changes.\n> CTIDs are changed by updates unfortunately.\n\nYou mean currtid2() here and not currtid(), right? 
We have two\nproblems here then:\n1) We cannot actually really remove currtid2() from the backend yet\nwithout removing the dependency in the driver, or that may break some\nusers.\n2) The driver does not include tests for that stuff yet.\n--\nMichael", "msg_date": "Mon, 8 Jun 2020 15:52:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Sorry for the reply.\n\nOn 2020/06/08 15:52, Michael Paquier wrote:\n> On Fri, Jun 05, 2020 at 10:25:00PM +0900, Inoue, Hiroshi wrote:\n>> Keyset-driven cursors always detect changes made by other applications\n>> (and themselves). currtid() is necessary to detect the changes.\n>> CTIDs are changed by updates unfortunately.\n> You mean currtid2() here and not currtid(), right?\n\nYes.\n\n> We have two\n> problems here then:\n> 1) We cannot actually really remove currtid2() from the backend yet\n> without removing the dependency in the driver, or that may break some\n> users.\n\nI think only ODBC driver uses currtid2().\n\n> 2) The driver does not include tests for that stuff yet.\n\nSQLSetPos(.., .., SQL_REFRESH, ..) call in positioned-update-test passes \nthe stuff\n �when 'Use Declare/Fetch' option is turned off. In other words, \nkeyset-driven cursor\nis not supported when 'Use Declare/Fetch' option is turned on. 
Probably \nkeyset-driven\ncursor support would be lost regardless of 'Use Declare/Fetch' option \nafter the\nremoval of currtid2().\n\n> --\n> Michael\n\n\n\n", "msg_date": "Mon, 15 Jun 2020 20:50:23 +0900", "msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Mon, Jun 15, 2020 at 08:50:23PM +0900, Inoue, Hiroshi wrote:\n> Sorry for the reply.\n\nNo problem, thanks for taking the time.\n\n> On 2020/06/08 15:52, Michael Paquier wrote:\n>> On Fri, Jun 05, 2020 at 10:25:00PM +0900, Inoue, Hiroshi wrote:\n>> We have two\n>> problems here then:\n>> 1) We cannot actually really remove currtid2() from the backend yet\n>> without removing the dependency in the driver, or that may break some\n>> users.\n> \n> I think only ODBC driver uses currtid2().\n\nCheck. I think so too.\n\n>> 2) The driver does not include tests for that stuff yet.\n> \n> SQLSetPos(.., .., SQL_REFRESH, ..) call in positioned-update-test passes the\n> stuff\n>  when 'Use Declare/Fetch' option is turned off. In other words,\n> keyset-driven cursor\n> is not supported when 'Use Declare/Fetch' option is turned on. Probably\n> keyset-driven\n> cursor support would be lost regardless of 'Use Declare/Fetch' option after\n> the removal of currtid2().\n\nSorry, but I am not quite sure what is the relationship between\nUseDeclareFetch and currtid2()? Is that related to the use of\nSQL_CURSOR_KEYSET_DRIVEN? The only thing I can be sure of here is\nthat we never call currtid2() in any of the regression tests present\nin the ODBC code for any of the scenarios covered by installcheck-all,\nso that does not really bring any confidence that removing currtid2()\nis a wise thing to do, because we may silently break stuff. 
If the\nfunction is used, it would be good to close the gap with a test to\nstress that at least in the driver.\n\ncurrtid(), on the contrary, would be fine as far as I understand\nbecause the ODBC code relies on a RETURNING ctid instead, and that's\nsupported for ages in the Postgres backend.\n--\nMichael", "msg_date": "Tue, 23 Jun 2020 13:29:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Tue, Jun 23, 2020 at 01:29:06PM +0900, Michael Paquier wrote:\n> Sorry, but I am not quite sure what is the relationship between\n> UseDeclareFetch and currtid2()? Is that related to the use of\n> SQL_CURSOR_KEYSET_DRIVEN? The only thing I can be sure of here is\n> that we never call currtid2() in any of the regression tests present\n> in the ODBC code for any of the scenarios covered by installcheck-all,\n> so that does not really bring any confidence that removing currtid2()\n> is a wise thing to do, because we may silently break stuff. If the\n> function is used, it would be good to close the gap with a test to\n> stress that at least in the driver.\n\nActually, while reviewing the code, the only code path where we use\ncurrtid2() involves positioned_load() and LATEST_TUPLE_LOAD. And the\nonly location where this happens is in SC_pos_reload_with_key(), where\nI don't actually see how it would be possible to not have a keyset and\nstill use a CTID, which would led to LATEST_TUPLE_LOAD being used. 
So\ncould it be possible that the code paths of currtid2() are actually\njust dead code?\n--\nMichael", "msg_date": "Tue, 23 Jun 2020 14:02:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Tue, Jun 23, 2020 at 02:02:33PM +0900, Michael Paquier wrote:\n> Actually, while reviewing the code, the only code path where we use\n> currtid2() involves positioned_load() and LATEST_TUPLE_LOAD. And the\n> only location where this happens is in SC_pos_reload_with_key(), where\n> I don't actually see how it would be possible to not have a keyset and\n> still use a CTID, which would led to LATEST_TUPLE_LOAD being used. So\n> could it be possible that the code paths of currtid2() are actually\n> just dead code?\n\nI have dug more into this one, and we actually stressed this code path\nquite a lot up to commit d9cb23f in the ODBC driver, with tests\ncursor-block-delete, positioned-update and bulkoperations particularly\nwhen calling SQLSetPos(). However, 86e2e7a has reworked the code in\nsuch a way that we visibly don't use anymore CTIDs if we don't have a\nkeyset, and that combinations of various options like UseDeclareFetch\nor UpdatableCursors don't trigger this code path anymore. In short,\ncurrtid2() does not get used. Inoue-san, Saito-san, what do you\nthink? 
I am adding also Tsunakawa-san in CC who has some experience\nin this area.\n--\nMichael", "msg_date": "Wed, 24 Jun 2020 11:11:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi Michael,\n\nWhere do you test, on Windows or on *nix?\nHow do you test there?\n\nregards,\nHiroshi Inoue\n\nOn 2020/06/24 11:11, Michael Paquier wrote:\n> On Tue, Jun 23, 2020 at 02:02:33PM +0900, Michael Paquier wrote:\n>> Actually, while reviewing the code, the only code path where we use\n>> currtid2() involves positioned_load() and LATEST_TUPLE_LOAD. And the\n>> only location where this happens is in SC_pos_reload_with_key(), where\n>> I don't actually see how it would be possible to not have a keyset and\n>> still use a CTID, which would led to LATEST_TUPLE_LOAD being used. So\n>> could it be possible that the code paths of currtid2() are actually\n>> just dead code?\n> I have dug more into this one, and we actually stressed this code path\n> quite a lot up to commit d9cb23f in the ODBC driver, with tests\n> cursor-block-delete, positioned-update and bulkoperations particularly\n> when calling SQLSetPos(). However, 86e2e7a has reworked the code in\n> such a way that we visibly don't use anymore CTIDs if we don't have a\n> keyset, and that combinations of various options like UseDeclareFetch\n> or UpdatableCursors don't trigger this code path anymore. In short,\n> currtid2() does not get used. Inoue-san, Saito-san, what do you\n> think? 
I am adding also Tsunakawa-san in CC who has some experience\n> in this area.\n> --\n> Michael\n\n\n\n", "msg_date": "Wed, 24 Jun 2020 17:20:42 +0900", "msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi Inoue-san,\n\nOn Wed, Jun 24, 2020 at 05:20:42PM +0900, Inoue, Hiroshi wrote:\n> Where do you test, on Windows or on *nix?\n> How do you test there?\n\nI have been testing the driver on macos only, with various backend\nversions, from 11 to 14.\n\nThanks,\n--\nMichael", "msg_date": "Wed, 24 Jun 2020 18:00:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi,\n\nI seem to have invalidated KEYSET-DRIVEN cursors used in\npositioned-update test.\nIt was introduced by the commit 4a272fd but was invalidated by the\ncommit 2be35a6.\n\nI don't object to the removal of currtid(2) because keyset-driven\ncursors in psqlodbc are changed into static cursors in many cases and\nI've hardly ever heard a complaint about it.\n\nregards,\nHiroshi Inoue\n\nOn 2020/06/24 11:11, Michael Paquier wrote:\n> On Tue, Jun 23, 2020 at 02:02:33PM +0900, Michael Paquier wrote:\n>> Actually, while reviewing the code, the only code path where we use\n>> currtid2() involves positioned_load() and LATEST_TUPLE_LOAD. And the\n>> only location where this happens is in SC_pos_reload_with_key(), where\n>> I don't actually see how it would be possible to not have a keyset and\n>> still use a CTID, which would lead to LATEST_TUPLE_LOAD being used. 
So\n>> could it be possible that the code paths of currtid2() are actually\n>> just dead code?\n> I have dug more into this one, and we actually stressed this code path\n> quite a lot up to commit d9cb23f in the ODBC driver, with tests\n> cursor-block-delete, positioned-update and bulkoperations particularly\n> when calling SQLSetPos(). However, 86e2e7a has reworked the code in\n> such a way that we visibly don't use CTIDs anymore if we don't have a\n> keyset, and that combinations of various options like UseDeclareFetch\n> or UpdatableCursors don't trigger this code path anymore. In short,\n> currtid2() does not get used. Inoue-san, Saito-san, what do you\n> think? I am adding also Tsunakawa-san in CC who has some experience\n> in this area.\n> --\n> Michael\n\n\n\n", "msg_date": "Thu, 25 Jun 2020 22:14:00 +0900", "msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Thu, Jun 25, 2020 at 10:14:00PM +0900, Inoue, Hiroshi wrote:\n> I seem to have invalidated KEYSET-DRIVEN cursors used in positioned-update\n> test. It was introduced by the commit 4a272fd but was invalidated by the\n> commit 2be35a6.\n> \n> I don't object to the removal of currtid(2) because keyset-driven cursors in\n> psqlodbc are changed into static cursors in many cases and I've hardly ever\n> heard a complaint about it.\n\nHmm. I am not sure that this completely answers my original concern\nthough. In short, don't we still have corner cases where\nkeyset-driven cursors are not changed into static cursors, meaning\nthat currtid2() could get used? The removal of the in-core functions\nwould hurt applications using that, meaning that we should at least\nprovide an equivalent of currtid2() in the worst case as a contrib\nmodule, no? 
If the code paths of currtid2() are reachable, shouldn't\nwe also make sure that they are still reached in the regression tests\nof the driver, meaning that the driver code needs more coverage? I\nhave been looking at the tests and tried to tweak them using\nSQLSetPos() so that the code paths involving currtid2() get reached, but\nI am not really able to do so. It does not mean that currtid2()\nnever gets reached, it just means that I am not able to be sure that\nthis part can be safely removed from the Postgres backend code :(\n\nFrom what I can see on this thread, we could just remove currtid() per\nthe arguments of the RETURNING ctid clause supported since PG 8.2, but\nit would make more sense to me to just remove both currtid/currtid2()\nat once.\n--\nMichael", "msg_date": "Fri, 26 Jun 2020 13:11:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Fri, Jun 26, 2020 at 01:11:55PM +0900, Michael Paquier wrote:\n> From what I can see on this thread, we could just remove currtid() per\n> the arguments of the RETURNING ctid clause supported since PG 8.2, but\n> it would make more sense to me to just remove both currtid/currtid2()\n> at once.\n\nThe CF bot is complaining, so here is a rebase for the main patch.\nOpinions are welcome about the arguments upthread.\n--\nMichael", "msg_date": "Thu, 3 Sep 2020 19:14:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On 2020-09-03 12:14, Michael Paquier wrote:\n> On Fri, Jun 26, 2020 at 01:11:55PM +0900, Michael Paquier wrote:\n>> From what I can see on this thread, we could just remove currtid() per\n>> the arguments of the RETURNING ctid clause supported since PG 8.2, but\n>> it would make more sense to me to just remove both 
currtid/currtid2()\n>> at once.\n> \n> The CF bot is complaining, so here is a rebase for the main patch.\n> Opinions are welcome about the arguments upthread.\n\nIt appears that currtid2() is still used, so we ought to keep it.\n\n\n", "msg_date": "Fri, 20 Nov 2020 16:14:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 2020-09-03 12:14, Michael Paquier wrote:\n>> Opinions are welcome about the arguments upthread.\n\n> It appears that currtid2() is still used, so we ought to keep it.\n\nYeah, if pgODBC were not using it at all then I think it'd be fine\nto get rid of, but if it still contains calls then we cannot.\nThe suggestion upthread that those calls might be unreachable\nis interesting, but it seems unproven.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Nov 2020 11:53:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Fri, Nov 20, 2020 at 11:53:11AM -0500, Tom Lane wrote:\n> Yeah, if pgODBC were not using it at all then I think it'd be fine\n> to get rid of, but if it still contains calls then we cannot.\n> The suggestion upthread that those calls might be unreachable\n> is interesting, but it seems unproven.\n\nYeah, I am not 100% sure that there are no code paths calling\ncurrtid2(), and the ODBC code is too obscure to me to get to a clear\nconclusion. currtid(), though, is a different deal thanks to\nRETURNING. 
What about cutting the cake in two and just removing\ncurrtid() then?\n--\nMichael", "msg_date": "Sat, 21 Nov 2020 10:12:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> What about cutting the cake in two and just removing\n> currtid() then?\n\n+1. That'd still let us get rid of setLastTid() which is\nthe ugliest part of the thing, IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Nov 2020 21:50:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Fri, Nov 20, 2020 at 09:50:08PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> What about cutting the cake in two and just removing\n>> currtid() then?\n> \n> +1. That'd still let us get rid of setLastTid() which is\n> the ugliest part of the thing, IMO.\n\nIndeed, this could go. There is a recursive call for views, but in\norder to maintain compatibility with that we can just remove one\nfunction and move the second to use a regclass as argument, like the\nattached, while removing setLastTid(). Any thoughts?\n--\nMichael", "msg_date": "Sat, 21 Nov 2020 14:45:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Indeed, this could go. There is a recursive call for views, but in\n> order to maintain compatibility with that we can just remove one\n> function and move the second to use a regclass as argument, like the\n> attached, while removing setLastTid(). Any thoughts?\n\nConsidering that we're preserving this only for backwards compatibility,\nI doubt that changing the signature is a good idea. 
It maybe risks\nbreaking something, and the ODBC driver is hardly going to notice\nany improved ease-of-use.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Nov 2020 13:13:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Hi,\n\n+1 for getting rid of whatever we can without too much trouble.\n\nOn 2020-11-21 13:13:35 -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Indeed, this could go. There is a recursive call for views, but in\n> > order to maintain compatibility with that we can just remove one\n> > function and move the second to use a regclass as argument, like the\n> > attached, while removing setLastTid(). Any thoughts?\n> \n> Considering that we're preserving this only for backwards compatibility,\n> I doubt that changing the signature is a good idea. It maybe risks\n> breaking something, and the ODBC driver is hardly going to notice\n> any improved ease-of-use.\n\n+1.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Sat, 21 Nov 2020 10:33:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Sat, Nov 21, 2020 at 01:13:35PM -0500, Tom Lane wrote:\n> Considering that we're preserving this only for backwards compatibility,\n> I doubt that changing the signature is a good idea. It maybe risks\n> breaking something, and the ODBC driver is hardly going to notice\n> any improved ease-of-use.\n\nSo, what you are basically saying is to switch currtid_byreloid() to\nbecome a function local to tid.c. 
And then have just\ncurrtid_byrelname() and currtid_for_view() call that, right?\n--\nMichael", "msg_date": "Sun, 22 Nov 2020 11:09:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> So, what you are basically saying is to switch currtid_byreloid() to\n> become a function local to tid.c. And then have just\n> currtid_byrelname() and currtid_for_view() call that, right?\n\nYeah, that sounds about right.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Nov 2020 21:39:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Sat, Nov 21, 2020 at 09:39:28PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> So, what you are basically saying is to switch currtid_byreloid() to\n>> become a function local to tid.c. And then have just\n>> currtid_byrelname() and currtid_for_view() call that, right?\n> \n> Yeah, that sounds about right.\n\nOkay, here you go with the attached. If there are any other comments,\nplease feel free.\n--\nMichael", "msg_date": "Sun, 22 Nov 2020 20:11:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" }, { "msg_contents": "On Sun, Nov 22, 2020 at 08:11:21PM +0900, Michael Paquier wrote:\n> Okay, here you go with the attached. If there are any other comments,\n> please feel free.\n\nHearing nothing, applied this one after going through the ODBC driver\ncode again this morning. 
Compatibility is exactly the same for\ncurrtid2(), while currtid() is now gone.\n--\nMichael", "msg_date": "Wed, 25 Nov 2020 12:21:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Removal of currtid()/currtid2() and some table AM cleanup" } ]
[ { "msg_contents": "Hi all,\n\nI have bumped into $subject, causing a replica identity index to\nbe considered as dropped if running REINDEX CONCURRENTLY on it. This\nmeans that the old tuple information would get lost in this case, as\na REPLICA IDENTITY USING INDEX without a dropped index is the same as\nNOTHING.\n\nAttached is a fix for this issue, that needs a backpatch down to 12.\nThanks,\n--\nMichael", "msg_date": "Wed, 3 Jun 2020 15:53:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "REINDEX CONCURRENTLY and indisreplident" }, { "msg_contents": "On Wed, 3 Jun 2020 at 03:54, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n>\n> I have bumped into $subject, causing a replica identity index to\n> be considered as dropped if running REINDEX CONCURRENTLY on it. This\n> means that the old tuple information would get lost in this case, as\n> a REPLICA IDENTITY USING INDEX without a dropped index is the same as\n> NOTHING.\n>\nLGTM. I tested in both versions (12, master) and it works accordingly.\n\n\n-- \nEuler Taveira http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 3 Jun 2020 12:40:38 -0300", "msg_from": "Euler Taveira <euler.taveira@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: REINDEX CONCURRENTLY and indisreplident" }, { "msg_contents": "On Wed, Jun 03, 2020 at 12:40:38PM -0300, Euler Taveira wrote:\n> On Wed, 3 Jun 2020 at 03:54, Michael Paquier <michael@paquier.xyz> wrote:\n>> I have bumped into $subject, causing a replica identity index to\n>> be considered as dropped if running REINDEX CONCURRENTLY on it. This\n>> means that the old tuple information would get lost in this case, as\n>> a REPLICA IDENTITY USING INDEX without a dropped index is the same as\n>> NOTHING.\n>\n> LGTM. I tested in both versions (12, master) and it works accordingly.\n\nThanks for the review. I'll try to get that fixed soon.\n\nBy the way, your previous email was showing up as part of my own email\nwith the indentation that was used so I missed it first. That's the\ncase as well here:\nhttps://www.postgresql.org/message-id/CAH503wDaejzhP7+wA-hHS6c7NzE69oWqe5Zf_TYFu1epAwp6EQ@mail.gmail.com\n--\nMichael", "msg_date": "Thu, 4 Jun 2020 11:23:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: REINDEX CONCURRENTLY and indisreplident" }, { "msg_contents": "On Thu, Jun 04, 2020 at 11:23:36AM +0900, Michael Paquier wrote:\n> On Wed, Jun 03, 2020 at 12:40:38PM -0300, Euler Taveira wrote:\n> > On Wed, 3 Jun 2020 at 03:54, Michael Paquier <michael@paquier.xyz> wrote:\n> >> I have bumped into $subject, causing a replica identity index to\n> >> be considered as dropped if running REINDEX CONCURRENTLY on it. 
This\n> >> means that the old tuple information would get lost in this case, as\n> >> a REPLICA IDENTITY USING INDEX without a dropped index is the same as\n> >> NOTHING.\n> >\n> > LGTM. I tested in both versions (12, master) and it works accordingly.\n> \n> Thanks for the review. I'll try to get that fixed soon.\n\nApplied this one, just in time before the branching:\nhttps://www.postgresql.org/message-id/1931934b-09dc-e93e-fab9-78c5bc72743d@postgresql.org\n--\nMichael", "msg_date": "Fri, 5 Jun 2020 11:04:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: REINDEX CONCURRENTLY and indisreplident" } ]