[ { "msg_contents": "Hi.\n\nThere are assorted fixes to the head branch.\n\n1. Avoid useless reassigning var _logsegno\n(src/backend/access/transam/xlog.c)\nCommit 7d70809\n<https://github.com/postgres/postgres/commit/7d708093b7400327658a30d1aa1d5e284d37622c>\nleft a little oversight.\nXLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\nSo, the first assignment is lost and is useless.\n\n2. Avoid retesting log_min_duration (src/backend/commands/analyze.c)\nThe log_min_duration has already been tested before and the second test\ncan be safely removed.\n\n3. Avoid useless var declaration record (src/backend/utils/misc/guc.c)\nThe var record is never really used.\n\n4. Fix declaration volatile signal var (src/bin/pgbench/pgbench.c)\nLike how to commit 5ac9e86\n<https://github.com/postgres/postgres/commit/5ac9e869191148741539e626b84ba7e77dc71670>,\nthis is a similar case.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 29 Sep 2022 21:08:02 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Small miscellaneous fixes" }, { "msg_contents": "On Fri, Sep 30, 2022 at 9:08 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi.\n>\n> There are assorted fixes to the head branch.\n>\n> 1. Avoid useless reassigning var _logsegno (src/backend/access/transam/xlog.c)\n> Commit 7d70809 left a little oversight.\n> XLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\n> So, the first assignment is lost and is useless.\n>\n> 2. Avoid retesting log_min_duration (src/backend/commands/analyze.c)\n> The log_min_duration has already been tested before and the second test\n> can be safely removed.\n>\n> 3. Avoid useless var declaration record (src/backend/utils/misc/guc.c)\n> The var record is never really used.\n\nThree changes look good to me.\n\n>\n> 4. 
Fix declaration volatile signal var (src/bin/pgbench/pgbench.c)\n> Like how to commit 5ac9e86, this is a similar case.\n\nThe same is true also for alarm_triggered in pg_test_fsync.c?\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 3 Oct 2022 17:01:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "Em seg., 3 de out. de 2022 às 05:01, Masahiko Sawada <sawada.mshk@gmail.com>\nescreveu:\n\n> On Fri, Sep 30, 2022 at 9:08 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi.\n> >\n> > There are assorted fixes to the head branch.\n> >\n> > 1. Avoid useless reassigning var _logsegno\n> (src/backend/access/transam/xlog.c)\n> > Commit 7d70809 left a little oversight.\n> > XLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\n> > So, the first assignment is lost and is useless.\n> >\n> > 2. Avoid retesting log_min_duration (src/backend/commands/analyze.c)\n> > The log_min_duration has already been tested before and the second test\n> > can be safely removed.\n> >\n> > 3. Avoid useless var declaration record (src/backend/utils/misc/guc.c)\n> > The var record is never really used.\n>\n> Three changes look good to me.\n>\nHi, thanks for reviewing this.\n\n\n>\n> >\n> > 4. Fix declaration volatile signal var (src/bin/pgbench/pgbench.c)\n> > Like how to commit 5ac9e86, this is a similar case.\n>\n> The same is true also for alarm_triggered in pg_test_fsync.c?\n>\nI don't think so.\nIf I understand the problem correctly, the failure can occur with true\nsignals, provided by the OS\nIn the case at hand, it seems to me more like an internal form of signal,\nthat is, simulated.\nSo bool works fine.\n\nCF entry created:\nhttps://commitfest.postgresql.org/40/3925/\n\nregards,\nRanier Vilela", "msg_date": "Mon, 3 Oct 2022 08:05:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "On Mon, Oct 03, 2022 at 08:05:57AM -0300, Ranier Vilela wrote:\n> Em seg., 3 de out. de 2022 às 05:01, Masahiko Sawada <sawada.mshk@gmail.com>\n> escreveu:\n>> On Fri, Sep 30, 2022 at 9:08 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>> 1. 
Avoid useless reassigning var _logsegno\n>> (src/backend/access/transam/xlog.c)\n>>> Commit 7d70809 left a little oversight.\n>>> XLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\n>>> So, the first assignment is lost and is useless.\n\nRight, I have missed this one. We do that now in\nbuild_backup_content() when building the contents of the backup\nhistory file.\n\n>>> 4. Fix declaration volatile signal var (src/bin/pgbench/pgbench.c)\n>>> Like how to commit 5ac9e86, this is a similar case.\n>>\n>> The same is true also for alarm_triggered in pg_test_fsync.c?\n>>\n> I don't think so.\n> If I understand the problem correctly, the failure can occur with true\n> signals, provided by the OS\n> In the case at hand, it seems to me more like an internal form of signal,\n> that is, simulated.\n> So bool works fine.\n\nI am not following your reasoning here. Why does it matter to change\none but not the other? Both are used with SIGALRM, it seems.\n\nThe other three seem fine, so fixed.\n--\nMichael", "msg_date": "Tue, 4 Oct 2022 13:18:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "Em ter., 4 de out. de 2022 às 01:18, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Mon, Oct 03, 2022 at 08:05:57AM -0300, Ranier Vilela wrote:\n> > Em seg., 3 de out. de 2022 às 05:01, Masahiko Sawada <\n> sawada.mshk@gmail.com>\n> > escreveu:\n> >> On Fri, Sep 30, 2022 at 9:08 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >>> 1. Avoid useless reassigning var _logsegno\n> >> (src/backend/access/transam/xlog.c)\n> >>> Commit 7d70809 left a little oversight.\n> >>> XLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\n> >>> So, the first assignment is lost and is useless.\n>\n> Right, I have missed this one. We do that now in\n> build_backup_content() when building the contents of the backup\n> history file.\n>\n> >>> 4. 
Fix declaration volatile signal var (src/bin/pgbench/pgbench.c)\n> >>> Like how to commit 5ac9e86, this is a similar case.\n> >>\n> >> The same is true also for alarm_triggered in pg_test_fsync.c?\n> >>\n> > I don't think so.\n> > If I understand the problem correctly, the failure can occur with true\n> > signals, provided by the OS\n> > In the case at hand, it seems to me more like an internal form of signal,\n> > that is, simulated.\n> > So bool works fine.\n>\n> I am not following your reasoning here. Why does it matter to change\n> one but not the other? Both are used with SIGALRM, it seems.\n>\nBoth are correct, I missed the pqsignal calls.\n\nAttached patch to change this.\n\n\n> The other three seem fine, so fixed.\n>\nThanks Michael for the commit.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 4 Oct 2022 08:23:16 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "On Tue, Oct 04, 2022 at 08:23:16AM -0300, Ranier Vilela wrote:\n> Both are correct, I missed the pqsignal calls.\n> \n> Attached patch to change this.\n\nThe change for pgbench is missing and this is only changing\npg_test_fsync. Switching to sig_atomic_t would be fine on non-WIN32\nas these are used in signal handlers, but are we sure that this is\nfine on WIN32 for pg_test_fsync where we rely on a separate thread to\ncontrol the timing of the alarm?\n--\nMichael", "msg_date": "Wed, 16 Nov 2022 15:59:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "Em qua., 16 de nov. de 2022 às 03:59, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Tue, Oct 04, 2022 at 08:23:16AM -0300, Ranier Vilela wrote:\n> > Both are correct, I missed the pqsignal calls.\n> >\n> > Attached patch to change this.\n>\n> The change for pgbench is missing and this is only changing\n> pg_test_fsync. 
Switching to sig_atomic_t would be fine on non-WIN32\n> as these are used in signal handlers, but are we sure that this is\n> fine on WIN32 for pg_test_fsync where we rely on a separate thread to\n> control the timing of the alarm?\n>\nWell I tested here in Windows 10 64 bits with sig_atomic_t alarm_triggered\nand works fine.\nctrl + c breaks the exe.\n\nWindows 10 64bits\nSSD 256GB\n\nFor curiosity, this is the test results:\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync\n\nctrl + c\n\nC:\\postgres_debug\\bin>pg_test_fsync\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync\n\nctrl + c\n\nC:\\postgres_debug\\bin>pg_test_fsync\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 9495,720 ops/sec 105 usecs/op\n fdatasync 444,174 ops/sec 2251 usecs/op\n fsync 398,487 ops/sec 2509 usecs/op\n fsync_writethrough 342,018 ops/sec 2924 usecs/op\n open_sync n/a\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 4719,825 ops/sec 212 usecs/op\n fdatasync 442,138 ops/sec 2262 usecs/op\n fsync 401,163 ops/sec 2493 usecs/op\n fsync_writethrough 397,198 ops/sec 2518 usecs/op\n open_sync n/a\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write n/a\n 2 * 8kB open_sync writes n/a\n 4 * 4kB open_sync writes n/a\n 8 * 2kB open_sync writes n/a\n 16 
* 1kB open_sync writes n/a\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 77,808 ops/sec 12852 usecs/op\n write, close, fsync 77,469 ops/sec 12908 usecs/op\n\nNon-sync'ed 8kB writes:\n write 139789,685 ops/sec 7 usecs/op\n\nregards,\nRanier Vilela", "msg_date": "Wed, 16 Nov 2022 14:56:11 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "On 04.10.22 06:18, Michael Paquier wrote:\n> On Mon, Oct 03, 2022 at 08:05:57AM -0300, Ranier Vilela wrote:\n>> Em seg., 3 de out. 
de 2022 às 05:01, Masahiko Sawada <sawada.mshk@gmail.com>\n>> escreveu:\n>>> On Fri, Sep 30, 2022 at 9:08 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>>>> 1. Avoid useless reassigning var _logsegno\n>>> (src/backend/access/transam/xlog.c)\n>>>> Commit 7d70809 left a little oversight.\n>>>> XLByteToPrevSeg and XLByteToSeg are macros, and both assign _logsegno.\n>>>> So, the first assignment is lost and is useless.\n> \n> Right, I have missed this one. We do that now in\n> build_backup_content() when building the contents of the backup\n> history file.\n\nIs this something you want to follow up on, since you were involved in \nthat patch? Is the redundant assignment simply to be deleted, or do you \nwant to check the original patch again for context?\n\n\n\n", "msg_date": "Fri, 25 Nov 2022 13:15:40 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "On Fri, Nov 25, 2022 at 01:15:40PM +0100, Peter Eisentraut wrote:\n> Is this something you want to follow up on, since you were involved in that\n> patch? Is the redundant assignment simply to be deleted, or do you want to\n> check the original patch again for context?\n\nMost of the changes of this thread have been applied as of c42cd05c.\nRemains the SIGALRM business with sig_atomic_t, and I wanted to check\nthat by myself first.\n--\nMichael", "msg_date": "Sat, 26 Nov 2022 10:21:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "Em sex., 25 de nov. de 2022 às 22:21, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Nov 25, 2022 at 01:15:40PM +0100, Peter Eisentraut wrote:\n> > Is this something you want to follow up on, since you were involved in\n> that\n> > patch? 
Is the redundant assignment simply to be deleted, or do you want\n> to\n> check the original patch again for context?\n>\n> Most of the changes of this thread have been applied as of c42cd05c.\n> Remains the SIGALRM business with sig_atomic_t, and I wanted to check\n> that by myself first.\n>\nThank you Michael, for taking care of it.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 26 Nov 2022 11:30:07 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Small miscellaneous fixes" }, { "msg_contents": "On Sat, Nov 26, 2022 at 11:30:07AM -0300, Ranier Vilela wrote:\n> Thank you Michael, for taking care of it.\n\n(As of 1e31484, after finishing the tests I wanted.)\n--\nMichael", "msg_date": "Sun, 27 Nov 2022 14:13:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Small miscellaneous fixes" } ]
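The point settled in the thread above — that a flag written from a real OS signal handler should be `volatile sig_atomic_t`, not plain `bool` — can be sketched with a minimal standalone example. This is not the actual pgbench or pg_test_fsync code; the function names `handle_sigalrm` and `wait_for_alarm` are made up for illustration, and plain `signal()` is used only to keep the sketch short (PostgreSQL proper would use `pqsignal()`).

```c
#include <signal.h>
#include <unistd.h>

/*
 * Flag set from a signal handler.  C guarantees that sig_atomic_t is
 * read and written atomically with respect to signal delivery, and
 * "volatile" keeps the compiler from caching the value in a register
 * across the wait loop below.  A plain "bool" offers neither guarantee.
 */
static volatile sig_atomic_t alarm_triggered = 0;

static void
handle_sigalrm(int signo)
{
	(void) signo;
	alarm_triggered = 1;		/* only async-signal-safe work in here */
}

/* Arm a one-second OS alarm and block until the handler has run. */
static int
wait_for_alarm(void)
{
	alarm_triggered = 0;
	signal(SIGALRM, handle_sigalrm);
	alarm(1);
	while (!alarm_triggered)
		pause();				/* returns once a signal was handled */
	return 1;
}
```

With a plain `bool`, the compiler may legally hoist the load out of the `while` loop, leaving the process spinning or paused forever even after the handler has fired — which is the class of bug the 5ac9e86-style fixes discussed above address.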
[ { "msg_contents": "Hi,\n\nWhen tap tests are interrupted (e.g. with ctrl-c), we don't cancel running\npostgres instances etc. That doesn't strike me as a good thing.\n\nIn contrast, the postgres instances started by pg_regress do terminate. I\nassume this is because pg_regress starts postgres directly, whereas tap tests\nlargely start postgres via pg_ctl. pg_ctl will, as it should, start postgres\nwithout a controlling terminal. Thus a ctrl-c won't be delivered to it.\n\nISTM we should at least install a SIGINT/TERM handler in Cluster.pm that does\nthe stuff we already do in END.\n\nThat still leaves us with some other potential processes around that won't\nimmediately exec, but it'd be much better already.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 21:07:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "interrupted tap tests leave postgres instances around" }, { "msg_contents": "On Thu, Sep 29, 2022 at 09:07:34PM -0700, Andres Freund wrote:\n> ISTM we should at least install a SIGINT/TERM handler in Cluster.pm that does\n> the stuff we already do in END.\n\nHmm, indeed. And here I thought that END was actually taking care of\nthat on an interrupt..\n--\nMichael", "msg_date": "Fri, 30 Sep 2022 15:38:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "On 2022-Sep-30, Michael Paquier wrote:\n\n> On Thu, Sep 29, 2022 at 09:07:34PM -0700, Andres Freund wrote:\n> > ISTM we should at least install a SIGINT/TERM handler in Cluster.pm that does\n> > the stuff we already do in END.\n> \n> Hmm, indeed. And here I thought that END was actually taking care of\n> that on an interrupt..\n\nMe too. 
But the perlmod manpage says\n\n An \"END\" code block is executed as late as possible, that is, after perl has\n finished running the program and just before the interpreter is being exited,\n even if it is exiting as a result of a die() function. (But not if it's\n morphing into another program via \"exec\", or being blown out of the water by a\n signal--you have to trap that yourself (if you can).)\n\nSo clearly we need to fix it. I thought it should be as simple as the\nattached, since exit() calls END. (Would it be better to die() instead\nof exit()?)\n\nBut on testing, some nodes linger after being sent a shutdown signal.\nI'm not clear why this is -- I think it's due to the fact that we send\nthe signal just as the node is starting up, which means the signal\ndoesn't reach the process. (I added the 0002 patch --not for commit--\nto see which Clusters were being shut down and in the trace file I can\nclearly see that the nodes that linger were definitely subject to\n->teardown_node).\n\n\nAnother funny thing: C-C'ing one run, I got this lingering process:\n\nalvherre 800868 98.2 0.0 12144 5052 pts/9 R 11:03 0:26 /pgsql/install/master/bin/psql -X -c BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32); -c SELECT pg_backup_stop() -d port=54380 host=/tmp/O_2PPNj9Fg dbname='postgres' replication=database\n\nThis is probably a bug in psql. 
Backtrace is:\n\n#0 PQclear (res=<optimized out>) at /pgsql/source/master/src/interfaces/libpq/fe-exec.c:748\n#1 PQclear (res=res@entry=0x55ad308c6190) at /pgsql/source/master/src/interfaces/libpq/fe-exec.c:718\n#2 0x000055ad2f303323 in ClearOrSaveResult (result=0x55ad308c6190) at /pgsql/source/master/src/bin/psql/common.c:472\n#3 ClearOrSaveAllResults () at /pgsql/source/master/src/bin/psql/common.c:488\n#4 ExecQueryAndProcessResults (query=query@entry=0x55ad308bc7a0 \"BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\", \n elapsed_msec=elapsed_msec@entry=0x7fff9c9941d8, svpt_gone_p=svpt_gone_p@entry=0x7fff9c9941d7, is_watch=is_watch@entry=false, \n opt=opt@entry=0x0, printQueryFout=printQueryFout@entry=0x0) at /pgsql/source/master/src/bin/psql/common.c:1608\n#5 0x000055ad2f301b9d in SendQuery (query=0x55ad308bc7a0 \"BASE_BACKUP (CHECKPOINT 'fast', MAX_RATE 32);\")\n at /pgsql/source/master/src/bin/psql/common.c:1172\n#6 0x000055ad2f2f7bd9 in main (argc=<optimized out>, argv=<optimized out>) at /pgsql/source/master/src/bin/psql/startup.c:384\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php", "msg_date": "Fri, 30 Sep 2022 11:17:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 11:17:00 +0200, Alvaro Herrera wrote:\n> But on testing, some nodes linger after being sent a shutdown signal.\n> I'm not clear why this is -- I think it's due to the fact that we send\n> the signal just as the node is starting up, which means the signal\n> doesn't reach the process.\n\nI suspect it's when a test gets interrupt while pg_ctl is starting the\nbackend. 
The start() routine only does _update_pid() after pg_ctl finished,\nand terminate()->stop() returns before doing anything if pid isn't defined.\n\nPerhaps the END{} routine should call $node->_update_pid(-1); if $exit_code !=\n0 and _pid is undefined?\n\nThat does seem to reduce the incidence of \"leftover\" postgres\ninstances. 001_start_stop.pl leaves some behind, but that makes sense, because\nit's bypassing the whole node management. But I still occasionally see some\nremaining processes if I crank up test concurrency.\n\nAh! At least part of the problem is that sub stop() does BAIL_OUT, and of\ncourse it can fail as part of the shutdown.\n\nBut there's still some that survive, where your perl.trace doesn't contain the\nnode getting shut down...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 13:38:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "On 30.09.22 06:07, Andres Freund wrote:\n> When tap tests are interrupted (e.g. with ctrl-c), we don't cancel running\n> postgres instances etc. That doesn't strike me as a good thing.\n> \n> In contrast, the postgres instances started by pg_regress do terminate. I\n> assume this is because pg_regress starts postgres directly, whereas tap tests\n> largely start postgres via pg_ctl. pg_ctl will, as it should, start postgres\n> without a controlling terminal. Thus a ctrl-c won't be delivered to it.\n\nI ran into the problem recently that pg_upgrade starts the servers with \npg_ctl, and thus without terminal, and so you can't get any password \nprompts for SSL keys, for example. Taking out the setsid() call in \npg_ctl.c fixed that. I suspect this is ultimately the same problem.\n\nWe could make TAP tests and pg_upgrade not use pg_ctl and start \npostmaster directly. 
I'm not sure how much work that would be, but \nseeing that pg_regress does it, it doesn't seem unreasonable.\n\nAlternatively, perhaps we could make a mode for pg_ctl that it doesn't \ncall setsid(). This could be activated by an environment variable. \nThat might address all these problems, too.\n\n\n\n", "msg_date": "Tue, 4 Oct 2022 10:24:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "Hi,\n\nOn 2022-10-04 10:24:19 +0200, Peter Eisentraut wrote:\n> On 30.09.22 06:07, Andres Freund wrote:\n> > When tap tests are interrupted (e.g. with ctrl-c), we don't cancel running\n> > postgres instances etc. That doesn't strike me as a good thing.\n> > \n> > In contrast, the postgres instances started by pg_regress do terminate. I\n> > assume this is because pg_regress starts postgres directly, whereas tap tests\n> > largely start postgres via pg_ctl. pg_ctl will, as it should, start postgres\n> > without a controlling terminal. Thus a ctrl-c won't be delivered to it.\n> \n> I ran into the problem recently that pg_upgrade starts the servers with\n> pg_ctl, and thus without terminal, and so you can't get any password prompts\n> for SSL keys, for example.\n\nFor this specific case I wonder if pg_upgrade should disable ssl... That would\nrequire fixing pg_upgrade to use a unix socket on windows, but that'd be a\ngood idea anyway.\n\n\n> Taking out the setsid() call in pg_ctl.c fixed that. I suspect this is\n> ultimately the same problem.\n\n> We could make TAP tests and pg_upgrade not use pg_ctl and start postmaster\n> directly. I'm not sure how much work that would be, but seeing that\n> pg_regress does it, it doesn't seem unreasonable.\n\nIt's not trivial, particularly from perl. 
Check all the stuff pg_regress and\npg_ctl do around windows accounts and tokens.\n\n\n> Alternatively, perhaps we could make a mode for pg_ctl that it doesn't call\n> setsid(). This could be activated by an environment variable. That might\n> address all these problems, too.\n\nIt looks like that won't help. Because pg_ctl exits after forking postgres,\npostgres parent isn't the shell anymore...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Oct 2022 10:10:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "On 2022-Oct-01, Andres Freund wrote:\n\n> Perhaps the END{} routine should call $node->_update_pid(-1); if $exit_code !=\n> 0 and _pid is undefined?\n\nYeah, that sounds reasonable.\n\n> That does seem to reduce the incidence of \"leftover\" postgres\n> instances. 001_start_stop.pl leaves some behind, but that makes sense, because\n> it's bypassing the whole node management. But I still occasionally see some\n> remaining processes if I crank up test concurrency.\n> \n> Ah! At least part of the problem is that sub stop() does BAIL_OUT, and of\n> course it can fail as part of the shutdown.\n\nI made teardown_node pass down fail_ok=>1 to avoid this problem, so we\nno longer BAIL_OUT in that case.\n\n\n> But there's still some that survive, where your perl.trace doesn't contain the\n> node getting shut down...\n\nYeah, something's still unexplained. I'll get this pushed soon, which\nalready reduces the number of leftover instances a good amount.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Tue, 18 Oct 2022 12:15:11 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "Pushed this. 
It should improve things significantly.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Wed, 19 Oct 2022 17:38:50 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: interrupted tap tests leave postgres instances around" }, { "msg_contents": "On 2022-10-19 17:38:50 +0200, Alvaro Herrera wrote:\n> Pushed this. It should improve things significantly.\n\nThanks for working on this!\n\n\n", "msg_date": "Wed, 19 Oct 2022 10:08:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: interrupted tap tests leave postgres instances around" } ]
[ { "msg_contents": "Hi all,\n\nI have bumped a few days ago on the fact that COERCE_SQL_SYNTAX\n(introduced by 40c24bf) and SQLValueFunction are around to do the\nexact same thing, as known as enforcing single-function calls with\ndedicated SQL keywords. For example, keywords like SESSION_USER,\nCURRENT_DATE, etc. go through SQLValueFunction and rely on the parser\nto set a state that gets then used in execExprInterp.c. And it is\nrather easy to implement incorrect SQLValueFunctions, as these rely on\nmore hardcoded assumptions in the parser and executor than the\nequivalent FuncCalls (like collation to assign when using a text-like\nSQLValueFunctions).\n\nThere are two categories of single-value functions:\n- The ones returning names, where we enforce a C collation in two\nplaces of the code (current_role, role, current_catalog,\ncurrent_schema, current_database, current_user), even if\nget_typcollation() should do that for name types.\n- The ones working on time, date and timestamps (localtime[stamp],\ncurrent_date, current_time[stamp]), for 9 patterns as these accept an\noptional typmod.\n\nI have dug into the possibility to unify all that with a single\ninterface, and finish with the attached patch set which is a reduction\nof code, where all the SQLValueFunctions are replaced by a set of\nFuncCalls:\n 25 files changed, 338 insertions(+), 477 deletions(-)\n\n0001 is the move done for the name-related functions, cleaning up two\nplaces in the executor when a C collation is assigned to those\nfunction expressions. 0002 is the remaining cleanup for the\ntime-related ones, moving a set of parser-side checks to the execution\npath within each function, so as all this knowledge is now local to\neach file holding the date and timestamp types. Most of the gain is\nin 0002, obviously.\n\nThe pg_proc entries introduced for the sake of the move use the same\nname as the SQL keywords. These should perhaps be prefixed with a\n\"pg_\" at least. 
There would be an exception with pg_localtime[stamp],\nthough, where we could use a pg_localtime[stamp]_sql for the function\nname for prosrc. I am open to suggestions for these names.\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 30 Sep 2022 15:04:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Sep 30, 2022 at 2:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n>\n> I have bumped a few days ago on the fact that COERCE_SQL_SYNTAX\n> (introduced by 40c24bf) and SQLValueFunction are around to do the\n> exact same thing, as known as enforcing single-function calls with\n> dedicated SQL keywords. For example, keywords like SESSION_USER,\n> CURRENT_DATE, etc. go through SQLValueFunction and rely on the parser\n> to set a state that gets then used in execExprInterp.c. And it is\n> rather easy to implement incorrect SQLValueFunctions, as these rely on\n> more hardcoded assumptions in the parser and executor than the\n> equivalent FuncCalls (like collation to assign when using a text-like\n> SQLValueFunctions).\n>\n> There are two categories of single-value functions:\n> - The ones returning names, where we enforce a C collation in two\n> places of the code (current_role, role, current_catalog,\n> current_schema, current_database, current_user), even if\n> get_typcollation() should do that for name types.\n> - The ones working on time, date and timestamps (localtime[stamp],\n> current_date, current_time[stamp]), for 9 patterns as these accept an\n> optional typmod.\n>\n> I have dug into the possibility to unify all that with a single\n> interface, and finish with the attached patch set which is a reduction\n> of code, where all the SQLValueFunctions are replaced by a set of\n> FuncCalls:\n> 25 files changed, 338 insertions(+), 477 deletions(-)\n>\n> 0001 is the move done for the name-related functions, cleaning up two\n> places in the executor 
when a C collation is assigned to those\n> function expressions. 0002 is the remaining cleanup for the\n> time-related ones, moving a set of parser-side checks to the execution\n> path within each function, so as all this knowledge is now local to\n> each file holding the date and timestamp types. Most of the gain is\n> in 0002, obviously.\n>\n> The pg_proc entries introduced for the sake of the move use the same\n> name as the SQL keywords. These should perhaps be prefixed with a\n> \"pg_\" at least. There would be an exception with pg_localtime[stamp],\n> though, where we could use a pg_localtime[stamp]_sql for the function\n> name for prosrc. I am open to suggestions for these names.\n>\n> Thoughts?\n> --\n> Michael\n>\n\nI like this a lot. Deleted code is debugged code.\n\nPatch applies and passes make check-world.\n\nNo trace of SQLValueFunction is left in the codebase, at least according to\n`git grep -l`.\n\nI have only one non-nitpick question about the code:\n\n+ /*\n+ * we're not too tense about good error message here because grammar\n+ * shouldn't allow wrong number of modifiers for TIME\n+ */\n+ if (n != 1)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid type modifier\")));\n\n\nI agree that we shouldn't spend too much effort on a good error message\nhere, but perhaps we should have the message mention that it is\ndate/time-related? 
A person git-grepping for this error message will get 4\nhits in .c files (date.c, timestamp.c, varbit.c, varchar.c) so even a\nslight variation in the message could save them some time.\n\nThis is an extreme nitpick, but the patchset seems like it should have been\n1 file or 3 (remove name functions, remove time functions, remove\nSQLValueFunction infrastructure), but that will only matter in the unlikely\ncase that we find a need for SQLValueFunction but we want to leave the\ntimestamp function as COERCE_SQL_SYNTAX.", "msg_date": "Tue, 18 Oct 2022 16:35:33 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "(Adding Tom in CC, in case.)\n\nOn Tue, Oct 18, 2022 at 04:35:33PM -0400, Corey Huinker wrote:\n> I agree that we shouldn't spend too much effort on a good error message\n> here, but perhaps we should have the message mention that it is\n> date/time-related?  A person git-grepping for this error message will get 4\n> hits in .c files (date.c, timestamp.c, varbit.c, varchar.c) so even a\n> slight variation in the message could save them some time.\n\nThe message is the same between HEAD and the patch and these have been\naround for a long time, except that we would see it at parsing time on\nHEAD, and at executor time with the patch. 
I would not mind changing\nif there are better ideas than what's used now, of course ;)\n\n> This is an extreme nitpick, but the patchset seems like it should have been\n> 1 file or 3 (remove name functions, remove time functions, remove\n> SQLValueFunction infrastructure), but that will only matter in the unlikely\n> case that we find a need for SQLValueFunction but we want to leave the\n> timestamp function as COERCE_SQL_SYNTAX.\n\nOnce the timestamp functions are removed, SQLValueFunction is just\ndead code so including its removal in 0002 does not change much in my\nopinion.\n\nAn other thing I had on my list for this patch was to check its\nperformance impact. So I have spent some time having a look at the\nperf profiles produced on HEAD and with the patch using queries like\nSELECT current_role FROM generate_series(1,N) where N > 10M and I have\nnot noticed any major differences in runtime or in the profiles, at\nthe difference that we don't have anymore SQLValueFunction() and its\ninternal functions called, hence they are missing from the stacks, but\nthat's the whole point of the patch.\n\nWith this in mind, would somebody complain if I commit that? That's a\nnice reduction in code, while completing the work done in 40c24bf:\n 25 files changed, 338 insertions(+), 477 deletions(-)\n\nThanks,\n--\nMichael", "msg_date": "Wed, 19 Oct 2022 15:45:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Wed, Oct 19, 2022 at 03:45:48PM +0900, Michael Paquier wrote:\n> With this in mind, would somebody complain if I commit that? That's a\n> nice reduction in code, while completing the work done in 40c24bf:\n> 25 files changed, 338 insertions(+), 477 deletions(-)\n\nOn second look, there is something I have underestimated here with\nFigureColnameInternal(). This function would create an attribute name\nbased on the SQL keyword given in input. 
For example, on HEAD we\nwould get that:\n=# SELECT * FROM CURRENT_CATALOG;\n current_catalog \n-----------------\n postgres\n(1 row)\n\nBut the patch enforces the attribute name to be the underlying\nfunction name, switching the previous \"current_catalog\" to\n\"current_database\". For example:\n=# SELECT * FROM CURRENT_CATALOG;\n current_database \n------------------\n postgres\n(1 row)\n\nI am not sure how much it matters in practice, but this could break\nsome queries. One way to tackle that is to extend\nFigureColnameInternal() so as we use a compatible name when the node\nis a T_FuncCall, but that won't be entirely water-proof as long as\nthere is not a one-one mapping between the SQL keywords and the\nunderlying function names, aka we would need a current_catalog.\n\"user\" would be also too generic as a catalog function name, so we\nshould name its proc entry to a pg_user anyway, requiring a shortcut\nin FigureColnameInternal(). Or perhaps I am worrying too much and\nkeeping the code simpler is better? Does the SQL specification\nrequire that the attribute name has to match its SQL keyword when\nspecified in a FROM clause when there is no aliases?\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 21 Oct 2022 11:57:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> But the patch enforces the attribute name to be the underlying\n> function name, switching the previous \"current_catalog\" to\n> \"current_database\".\n\nThe entire point of SQLValueFunction IMO was to hide the underlying\nimplementation(s). 
Replacing it with something that leaks\nimplementation details does not seem like a step forward.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Oct 2022 23:10:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Thu, Oct 20, 2022 at 11:10:22PM -0400, Tom Lane wrote:\n> The entire point of SQLValueFunction IMO was to hide the underlying\n> implementation(s). Replacing it with something that leaks\n> implementation details does not seem like a step forward.\n\nHmm.. Okay, thanks. So this just comes down that I am going to need\none different pg_proc entry per SQL keyword, then, or this won't fly\nfar. For example, note that on HEAD or with the patch, a view with a\nSQL keyword in a FROM clause translates the same way with quotes\napplied in the same places, as of:\n=# create view test as select (SELECT * FROM CURRENT_USER) as cu;\nCREATE VIEW\n=# select pg_get_viewdef('test', true);\n pg_get_viewdef \n---------------------------------------------------------------------\n SELECT ( SELECT \"current_user\".\"current_user\" +\n FROM CURRENT_USER \"current_user\"(\"current_user\")) AS cu;\n(1 row)\n\nA sticky point is that this would need the creation of a pg_proc entry\nfor \"user\" which is a generic word, or a shortcut around\nFigureColnameInternal(). The code gain overall still looks appealing\nin the executor, even if we do all that and the resulting backend code\ngets kind of nicer and easier to maintain long-term IMO.\n--\nMichael", "msg_date": "Fri, 21 Oct 2022 12:34:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Oct 21, 2022 at 12:34:23PM +0900, Michael Paquier wrote:\n> A sticky point is that this would need the creation of a pg_proc entry\n> for \"user\" which is a generic word, or a shortcut around\n> FigureColnameInternal(). 
The code gain overall still looks appealing\n> in the executor, even if we do all that and the resulting backend code\n> gets kind of nicer and easier to maintain long-term IMO.\n\nI have looked at that, and the attribute mapping remains compatible\nwith past versions once the appropriate pg_proc entries are added.\nThe updated patch set attached does that (with a user() function as\nwell to keep the code a maximum simple), with more tests to cover the\nattribute case mentioned upthread.\n--\nMichael", "msg_date": "Fri, 21 Oct 2022 14:27:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Oct 21, 2022 at 02:27:07PM +0900, Michael Paquier wrote:\n> I have looked at that, and the attribute mapping remains compatible\n> with past versions once the appropriate pg_proc entries are added.\n> The updated patch set attached does that (with a user() function as\n> well to keep the code a maximum simple), with more tests to cover the\n> attribute case mentioned upthread.\n\nAttached is a rebased patch set, as of the conflicts from 2e0d80c.\n--\nMichael", "msg_date": "Tue, 25 Oct 2022 14:20:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Tue, Oct 25, 2022 at 02:20:12PM +0900, Michael Paquier wrote:\n> Attached is a rebased patch set, as of the conflicts from 2e0d80c.\n\nSo, this patch set has been sitting in the CF app for a few weeks now,\nand I would like to apply them to remove a bit of code from the\nexecutor.\n\nPlease note that in order to avoid tweaks when choosing the attribute\nname of function call, this needs a total of 8 new catalog functions\nmapping to the SQL keywords, which is what the test added by 2e0d80c\nis about:\n- current_role\n- user\n- current_catalog\n- current_date\n- current_time\n- current_timestamp\n- localtime\n- 
localtimestamp\n\nAny objections?\n--\nMichael", "msg_date": "Fri, 18 Nov 2022 10:23:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Nov 18, 2022 at 10:23:58AM +0900, Michael Paquier wrote:\n> Please note that in order to avoid tweaks when choosing the attribute\n> name of function call, this needs a total of 8 new catalog functions\n> mapping to the SQL keywords, which is what the test added by 2e0d80c\n> is about:\n> - current_role\n> - user\n> - current_catalog\n> - current_date\n> - current_time\n> - current_timestamp\n> - localtime\n> - localtimestamp\n> \n> Any objections?\n\nHearing nothing, I have gone through 0001 again and applied it as\nfb32748 to remove the dependency between names and SQLValueFunction.\nAttached is 0002, to bring back the CI to a green state.\n--\nMichael", "msg_date": "Sun, 20 Nov 2022 12:01:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Sat, Nov 19, 2022 at 7:01 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Nov 18, 2022 at 10:23:58AM +0900, Michael Paquier wrote:\n> > Please note that in order to avoid tweaks when choosing the attribute\n> > name of function call, this needs a total of 8 new catalog functions\n> > mapping to the SQL keywords, which is what the test added by 2e0d80c\n> > is about:\n> > - current_role\n> > - user\n> > - current_catalog\n> > - current_date\n> > - current_time\n> > - current_timestamp\n> > - localtime\n> > - localtimestamp\n> >\n> > Any objections?\n>\n> Hearing nothing, I have gone through 0001 again and applied it as\n> fb32748 to remove the dependency between names and SQLValueFunction.\n> Attached is 0002, to bring back the CI to a green state.\n> --\n> Michael\n>\n\nHi,\nFor get_func_sql_syntax(), the code for cases\nof F_CURRENT_TIME, 
F_CURRENT_TIMESTAMP, F_LOCALTIME and F_LOCALTIMESTAMP is\nmostly the same.\nMaybe we can introduce a helper so that code duplication is reduced.\n\nCheers", "msg_date": "Sun, 20 Nov 2022 08:21:10 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Sun, Nov 20, 2022 at 08:21:10AM -0800, Ted Yu wrote:\n> For get_func_sql_syntax(), the code for cases\n> of F_CURRENT_TIME, F_CURRENT_TIMESTAMP, F_LOCALTIME and F_LOCALTIMESTAMP is\n> mostly the same.\n> Maybe we can introduce a helper so that code duplication is reduced.\n\nIt would.  Thanks for the suggestion.\n\nDo you like something like the patch 0002 attached?  This reduces a\nbit the overall size of the patch. 
Both ought to be merged in the\nsame commit, still it is easier to see the simplification created.\n--\nMichael", "msg_date": "Mon, 21 Nov 2022 08:12:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Sun, Nov 20, 2022 at 3:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Nov 20, 2022 at 08:21:10AM -0800, Ted Yu wrote:\n> > For get_func_sql_syntax(), the code for cases\n> > of F_CURRENT_TIME, F_CURRENT_TIMESTAMP, F_LOCALTIME and F_LOCALTIMESTAMP\n> is\n> > mostly the same.\n> > Maybe we can introduce a helper so that code duplication is reduced.\n>\n> It would. Thanks for the suggestion.\n>\n> Do you like something like the patch 0002 attached? This reduces a\n> bit the overall size of the patch. Both ought to be merged in the\n> same commit, still it is easier to see the simplification created.\n> --\n> Michael\n>\nHi,\nThanks for the quick response.\n\n+ * timestamp. These require a specific handling with their typmod is given\n+ * by the function caller through their SQL keyword.\n\ntypo: typmod is given -> typmod given\n\nOther than the above, code looks good to me.\n\nCheers", "msg_date": "Sun, 20 Nov 2022 15:15:34 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Sun, Nov 20, 2022 at 03:15:34PM -0800, Ted Yu wrote:\n> + * timestamp.  These require a specific handling with their typmod is given\n> + * by the function caller through their SQL keyword.\n> \n> typo: typmod is given -> typmod given\n> \n> Other than the above, code looks good to me.\n\nThanks for double-checking.  I intended a different wording, actually,\nso fixed this one. 
And applied after an extra round of reviews.\n\nI noticed this commit (f193883f) introduces following regressions:\n\n postgres=# SELECT current_timestamp(7);\n WARNING: TIMESTAMP(7) WITH TIME ZONE precision reduced to maximum\nallowed, 6\n ERROR: timestamp(7) precision must be between 0 and 6\n\n postgres=# SELECT localtimestamp(7);\n WARNING: TIMESTAMP(7) precision reduced to maximum allowed, 6\n ERROR: timestamp(7) precision must be between 0 and 6\n\nSuggested fix attached.\n\nRegards\n\nIan Barwick", "msg_date": "Fri, 30 Dec 2022 10:57:52 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Dec 30, 2022 at 10:57:52AM +0900, Ian Lawrence Barwick wrote:\n> I noticed this commit (f193883f) introduces following regressions:\n> \n> postgres=# SELECT current_timestamp(7);\n> WARNING: TIMESTAMP(7) WITH TIME ZONE precision reduced to maximum\n> allowed, 6\n> ERROR: timestamp(7) precision must be between 0 and 6\n> \n> postgres=# SELECT localtimestamp(7);\n> WARNING: TIMESTAMP(7) precision reduced to maximum allowed, 6\n> ERROR: timestamp(7) precision must be between 0 and 6\n> \n> Suggested fix attached.\n\nThanks for the report, Ian. 
Will fix.\n--\nMichael", "msg_date": "Fri, 30 Dec 2022 14:21:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" }, { "msg_contents": "On Fri, Dec 30, 2022 at 10:57:52AM +0900, Ian Lawrence Barwick wrote:\n> I noticed this commit (f193883f) introduces following regressions:\n> \n> postgres=# SELECT current_timestamp(7);\n> WARNING: TIMESTAMP(7) WITH TIME ZONE precision reduced to maximum\n> allowed, 6\n> ERROR: timestamp(7) precision must be between 0 and 6\n> \n> postgres=# SELECT localtimestamp(7);\n> WARNING: TIMESTAMP(7) precision reduced to maximum allowed, 6\n> ERROR: timestamp(7) precision must be between 0 and 6\n> \n> Suggested fix attached.\n\nThe two changes in timestamp.c are fine. Now I can see that the same\nmistake was introduced in date.c.  The WARNINGs were issued and the\ncompilation went through the same way as the default, but they passed\ndown an incorrect precision, so I have fixed all that.  Coverage has\nbeen added for all four, while the patch proposed covered only two.\n--\nMichael", "msg_date": "Fri, 30 Dec 2022 20:51:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Getting rid of SQLValueFunction" } ]
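The regression that closes this thread came down to the datetime code issuing the precision WARNING but then passing the original, unclamped typmod further down, so the later range check still errored out. A minimal standalone C sketch of the intended clamp-and-return behaviour (hypothetical function name and simplified messages; in PostgreSQL itself the checks live in date.c and timestamp.c and report through ereport()):

```c
#include <stdio.h>
#include <assert.h>

/*
 * Hypothetical sketch, not the actual PostgreSQL code: clamp a
 * fractional-second precision (typmod) the way the datetime checks are
 * meant to.  MAX_TIMESTAMP_PRECISION is 6 in PostgreSQL; the bug was
 * that the WARNING fired but the clamped value returned here was not
 * the one the caller kept using, so the subsequent range check failed.
 */
#define MAX_TIMESTAMP_PRECISION 6

static int
clamp_timestamp_precision(int typmod)
{
    if (typmod > MAX_TIMESTAMP_PRECISION)
    {
        fprintf(stderr,
                "WARNING: TIMESTAMP(%d) precision reduced to maximum allowed, %d\n",
                typmod, MAX_TIMESTAMP_PRECISION);
        typmod = MAX_TIMESTAMP_PRECISION;   /* callers must use this value */
    }
    return typmod;
}
```

With the clamped value actually used, a call like current_timestamp(7) should only warn and proceed with precision 6 rather than failing afterwards.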
[ { "msg_contents": "Hi,\n\nWhile resuming the work on [1] I noticed that:\n\n- there is an unused parameter in log_heap_visible()\n- the comment associated to the function is not in \"sync\" with the \ncurrent implementation (referring a \"block\" that is not involved anymore)\n\nAttached a tiny patch as an attempt to address the above remarks.\n\n\n[1]: https://commitfest.postgresql.org/39/3740/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 30 Sep 2022 09:35:47 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "On Fri, Sep 30, 2022 at 1:07 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> While resuming the work on [1] I noticed that:\n>\n> - there is an unused parameter in log_heap_visible()\n> - the comment associated to the function is not in \"sync\" with the\n> current implementation (referring a \"block\" that is not involved anymore)\n>\n> Attached a tiny patch as an attempt to address the above remarks.\n>\n> [1]: https://commitfest.postgresql.org/39/3740/\n\nIt looks like that parameter was originally introduced and used in PG\n9.4 where xl_heap_visible structure was having RelFileNode, which was\nlater removed in PG 9.5, since then the RelFileNode rnode parameter is\nleft out. This parameter got renamed to RelFileLocator rlocator by the\ncommit b0a55e43299c4ea2a9a8c757f9c26352407d0ccc in HEAD.\n\nThe attached patch LGTM.\n\nWe recently committed another patch to remove an unused function\nparameter - 65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd.\n\nIt makes me think that why can't we remove the unused function\nparameters once and for all, say with a compiler flag such as\n-Wunused-parameter [1]? 
We might have to be careful in removing\ncertain parameters which are not being used right now, but might be\nused in the near future though.\n\n[1] https://man7.org/linux/man-pages/man1/gcc.1.html\n\n -Wunused-parameter\n Warn whenever a function parameter is unused aside from its\n declaration.\n\n To suppress this warning use the \"unused\" attribute.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 17:02:43 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "Hi,\n\nOn 9/30/22 1:32 PM, Bharath Rupireddy wrote:\n> On Fri, Sep 30, 2022 at 1:07 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While resuming the work on [1] I noticed that:\n>>\n>> - there is an unused parameter in log_heap_visible()\n>> - the comment associated to the function is not in \"sync\" with the\n>> current implementation (referring a \"block\" that is not involved anymore)\n>>\n>> Attached a tiny patch as an attempt to address the above remarks.\n>>\n>> [1]: https://commitfest.postgresql.org/39/3740/\n> \n> It looks like that parameter was originally introduced and used in PG\n> 9.4 where xl_heap_visible structure was having RelFileNode, which was\n> later removed in PG 9.5, since then the RelFileNode rnode parameter is\n> left out. 
This parameter got renamed to RelFileLocator rlocator by the\n> commit b0a55e43299c4ea2a9a8c757f9c26352407d0ccc in HEAD.\n> \n> The attached patch LGTM.\n\nThanks for looking at it!\n\n> \n> We recently committed another patch to remove an unused function\n> parameter - 65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd.\n> \n> It makes me think that why can't we remove the unused function\n> parameters once and for all, say with a compiler flag such as\n> -Wunused-parameter [1]? We might have to be careful in removing\n> certain parameters which are not being used right now, but might be\n> used in the near future though.\n> \n> [1] https://man7.org/linux/man-pages/man1/gcc.1.html\n> \n>      -Wunused-parameter\n>          Warn whenever a function parameter is unused aside from its\n>          declaration.\n> \n>          To suppress this warning use the \"unused\" attribute.\n> \n\nThat's right. I have the feeling this will be somewhat time-consuming and \nI'm not sure the added value is worth it (as compared to fixing them when we \n\"accidentally\" cross their paths).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 14:24:47 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "\nOn Fri, 30 Sep 2022 at 19:32, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Sep 30, 2022 at 1:07 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While resuming the work on [1] I noticed that:\n>>\n>> - there is an unused parameter in log_heap_visible()\n>> - the comment associated to the function is not in \"sync\" with the\n>> current implementation (referring a \"block\" that is not involved anymore)\n>>\n>> Attached a tiny patch as an attempt to address the above remarks.\n>>\n>> [1]: 
https://commitfest.postgresql.org/39/3740/\n>\n> It looks like that parameter was originally introduced and used in PG\n> 9.4 where xl_heap_visible structure was having RelFileNode, which was\n> later removed in PG 9.5, since then the RelFileNode rnode parameter is\n> left out. This parameter got renamed to RelFileLocator rlocator by the\n> commit b0a55e43299c4ea2a9a8c757f9c26352407d0ccc in HEAD.\n>\n> The attached patch LGTM.\n>\n> We recently committed another patch to remove an unused function\n> parameter - 65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd.\n>\n> It makes me think that why can't we remove the unused function\n> parameters once and for all, say with a compiler flag such as\n> -Wunused-parameter [1]? We might have to be careful in removing\n> certain parameters which are not being used right now, but might be\n> used in the near future though.\n>\n> [1] https://man7.org/linux/man-pages/man1/gcc.1.html\n>\n> -Wunused-parameter\n> Warn whenever a function parameter is unused aside from its\n> declaration.\n>\n> To suppress this warning use the \"unused\" attribute.\n\nWhen I try to use -Wunused-parameter, I find there are many warnings :-( .\n\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/geqo/geqo_pool.c: In function ‘free_chromo’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/geqo/geqo_pool.c:176:26: warning: unused parameter ‘root’ [-Wunused-parameter]\n 176 | free_chromo(PlannerInfo *root, Chromosome *chromo)\n | ~~~~~~~~~~~~~^~~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/equivclass.c: In function ‘eclass_useful_for_merging’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/equivclass.c:3091:40: warning: unused parameter ‘root’ [-Wunused-parameter]\n 3091 | eclass_useful_for_merging(PlannerInfo *root,\n | ~~~~~~~~~~~~~^~~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c: In function 
‘ec_member_matches_indexcol’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3453:41: warning: unused parameter ‘root’ [-Wunused-parameter]\n 3453 | ec_member_matches_indexcol(PlannerInfo *root, RelOptInfo *rel,\n | ~~~~~~~~~~~~~^~~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3453:59: warning: unused parameter ‘rel’ [-Wunused-parameter]\n 3453 | ec_member_matches_indexcol(PlannerInfo *root, RelOptInfo *rel,\n | ~~~~~~~~~~~~^~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c: In function ‘relation_has_unique_index_for’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3511:44: warning: unused parameter ‘root’ [-Wunused-parameter]\n 3511 | relation_has_unique_index_for(PlannerInfo *root, RelOptInfo *rel,\n | ~~~~~~~~~~~~~^~~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c: In function ‘allow_star_schema_join’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c:356:37: warning: unused parameter ‘root’ [-Wunused-parameter]\n 356 | allow_star_schema_join(PlannerInfo *root,\n | ~~~~~~~~~~~~~^~~~\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c: In function ‘paraminfo_get_equal_hashops’:\n/home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c:378:42: warning: unused parameter ‘root’ [-Wunused-parameter]\n 378 | paraminfo_get_equal_hashops(PlannerInfo *root, ParamPathInfo *param_info,\n | ~~~~~~~~~~~~~^~~~\n\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 30 Sep 2022 21:59:55 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "On Fri, Sep 30, 2022 at 7:30 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> When I try to use -Wunused-parameter, I find there are many warnings :-( 
.\n\nGreat!\n\nI think we can't just remove every unused parameter, for instance, it\nmakes sense to retain PlannerInfo *root parameter even though it's not\nused now, in future it may be. But if the parameter is of type\nunrelated to the context of the function, like the one committed\n65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd and like the proposed patch\nto some extent, it could be removed.\n\nOthers may have different thoughts here.\n\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/geqo/geqo_pool.c: In function ‘free_chromo’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/geqo/geqo_pool.c:176:26: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 176 | free_chromo(PlannerInfo *root, Chromosome *chromo)\n> | ~~~~~~~~~~~~~^~~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/equivclass.c: In function ‘eclass_useful_for_merging’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/equivclass.c:3091:40: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 3091 | eclass_useful_for_merging(PlannerInfo *root,\n> | ~~~~~~~~~~~~~^~~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c: In function ‘ec_member_matches_indexcol’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3453:41: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 3453 | ec_member_matches_indexcol(PlannerInfo *root, RelOptInfo *rel,\n> | ~~~~~~~~~~~~~^~~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3453:59: warning: unused parameter ‘rel’ [-Wunused-parameter]\n> 3453 | ec_member_matches_indexcol(PlannerInfo *root, RelOptInfo *rel,\n> | ~~~~~~~~~~~~^~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c: In function ‘relation_has_unique_index_for’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/indxpath.c:3511:44: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 3511 | 
relation_has_unique_index_for(PlannerInfo *root, RelOptInfo *rel,\n> | ~~~~~~~~~~~~~^~~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c: In function ‘allow_star_schema_join’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c:356:37: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 356 | allow_star_schema_join(PlannerInfo *root,\n> | ~~~~~~~~~~~~~^~~~\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c: In function ‘paraminfo_get_equal_hashops’:\n> /home/japin/Codes/postgres/Debug/../src/backend/optimizer/path/joinpath.c:378:42: warning: unused parameter ‘root’ [-Wunused-parameter]\n> 378 | paraminfo_get_equal_hashops(PlannerInfo *root, ParamPathInfo *param_info,\n> | ~~~~~~~~~~~~~^~~~\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 19:39:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "\nOn Fri, 30 Sep 2022 at 22:09, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Fri, Sep 30, 2022 at 7:30 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>> When I try to use -Wunused-parameter, I find there are many warnings :-( .\n>\n> Great!\n>\n> I think we can't just remove every unused parameter, for instance, it\n> makes sense to retain PlannerInfo *root parameter even though it's not\n> used now, in future it may be. 
But if the parameter is of type\n> unrelated to the context of the function, like the one committed\n> 65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd and like the proposed patch\n> to some extent, it could be removed.\n>\n> Others may have different thoughts here.\n\nMaybe we can define a macro like UNUSED(x) for those parameters, and\ncall this macro on the parameter that we might be useful, then\nwe can use -Wunused-parameter when compiling. Any thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 30 Sep 2022 22:18:17 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "On Fri, Sep 30, 2022 at 7:48 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> On Fri, 30 Sep 2022 at 22:09, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Fri, Sep 30, 2022 at 7:30 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >> When I try to use -Wunused-parameter, I find there are many warnings :-( .\n> >\n> > Great!\n> >\n> > I think we can't just remove every unused parameter, for instance, it\n> > makes sense to retain PlannerInfo *root parameter even though it's not\n> > used now, in future it may be. But if the parameter is of type\n> > unrelated to the context of the function, like the one committed\n> > 65b158ae4e892c2da7a5e31e2d2645e5e79a0bfd and like the proposed patch\n> > to some extent, it could be removed.\n> >\n> > Others may have different thoughts here.\n>\n> Maybe we can define a macro like UNUSED(x) for those parameters, and\n> call this macro on the parameter that we might be useful, then\n> we can use -Wunused-parameter when compiling. Any thoughts?\n\nWe have the pg_attribute_unused() macro already. I'm not sure if\nadding -Wunused-parameter for compilation plus using\npg_attribute_unused() for unused-yet-contextually-required variables\nis a great idea. 
But it has some merits as it avoids unused variables\nlying around in the code. However, we can even discuss this in a\nseparate thread IMO to hear more from other hackers.\n\nWhile on this, I noticed that pg_attribute_unused() is being used for\nnpages in AdvanceXLInsertBuffer(), but the npages variable is actually\nbeing used there. I think we can remove it.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 12:49:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" }, { "msg_contents": "On 04.10.22 09:19, Bharath Rupireddy wrote:\n> We have the pg_attribute_unused() macro already. I'm not sure if\n> adding -Wunused-parameter for compilation plus using\n> pg_attribute_unused() for unused-yet-contextually-required variables\n> is a great idea. But it has some merits as it avoids unused variables\n> lying around in the code. However, we can even discuss this in a\n> separate thread IMO to hear more from other hackers.\n\nI tried this once. The patch I have from a few years ago is\n\n 420 files changed, 1482 insertions(+), 1482 deletions(-)\n\nand it was a lot of work to maintain.\n\nI can send it in if there is interest. But I'm not sure if it's worth it.\n\n\n\n", "msg_date": "Tue, 4 Oct 2022 10:18:43 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: log_heap_visible(): remove unused parameter and update comment" } ]
[ { "msg_contents": "It seems like the issue discussed in [0] is back, but this time for XSL imports\nvia xsltproc. The http link now redirects with a 301 (since when I don't know,\nbut it worked recently):\n\n $ curl -I http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n HTTP/1.1 301 Moved Permanently\n Date: Fri, 30 Sep 2022 09:20:00 GMT\n Connection: keep-alive\n Cache-Control: max-age=3600\n Expires: Fri, 30 Sep 2022 10:20:00 GMT\n Location: https://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n Content-Security-Policy: upgrade-insecure-requests\n Server: cloudflare\n CF-RAY: 752be1544eea0d2e-ARN\n alt-svc: h3=\":443\"; ma=86400, h3-29=\":443\"; ma=86400\n\nChanging the links in the documents to be https, and avoid the redirect,\ndoesn't help unfortunately since xsltproc can't download assets over https.\nThe lack of https support in libxml2 has been reported, and patches submitted,\na long time ago [1] but there is still a lack of https support. Looking around\nfor other mirrors I only managed to find cdn.dookbook.org, which just like\nSourceforge does a 301 redirect.\n\nInstalling the stylesheets locally as we document solves the issue of course,\nbut maybe it's time to move to using --nonet as we discussed in [0] and require\nthe stylesheets locally? 
It's a shame that casual contributions require a big\ninvestment in installation, but it seems hard to get around.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://www.postgresql.org/message-id/flat/E2EE6B76-2D96-408A-B961-CAE47D1A86F0@yesql.se\n[1] https://mail.gnome.org/archives/xml/2007-March/msg00087.html\n\n", "msg_date": "Fri, 30 Sep 2022 11:35:36 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Documentation building fails on HTTPS redirect (again)" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 11:35:36 +0200, Daniel Gustafsson wrote:\n> Installing the stylesheets locally as we document solves the issue of course,\n> but maybe it's time to move to using --nonet as we discussed in [0] and require\n> the stylesheets locally? It's a shame that casual contributions require a big\n> investment in installation, but it seems hard to get around.\n\ndocbooks-xml and docbooks-xsl aren't that big (adding 8MB to a minimal debian\ninstall).\n\nHowever a) we document installing fop as well, even though it's not needed for\nthe html docs build b) the dependencies recommended by the debian packages\nincrease the size a lot. Just using our documented line ends up with 550MB.\n\nPerhaps separating out fop and using --no-install-recommends (and other\nsimilar flags) makes it less of an issue? 
We probably should work to deliver\na more usable error than what just using --nonet gives you...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 18:49:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Documentation building fails on HTTPS redirect (again)" }, { "msg_contents": "> On 2 Oct 2022, at 03:49, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-09-30 11:35:36 +0200, Daniel Gustafsson wrote:\n>> Installing the stylesheets locally as we document solves the issue of course,\n>> but maybe it's time to move to using --nonet as we discussed in [0] and require\n>> the stylesheets locally? It's a shame that casual contributions require a big\n>> investment in installation, but it seems hard to get around.\n> \n> docbooks-xml and docbooks-xsl aren't that big (adding 8MB to a minimal debian\n> install).\n\nThats true, size wise they are trivial, but it's a shame we seemingly need to\nmove away from \"you don't have to do anything\" which worked for years, to \"you\nneed to install X which you are unlikely to need for anything else\".\nEspecially when the failure stems from such a silly limitation. But, that's\nout of our hands, and we can only work on making it better for our\ncontributors.\n\n> However a) we document installing fop as well, even though it's not needed for\n> the html docs build b) the dependencies recommended by the debian packages\n> increase the size a lot. Just using our documented line ends up with 550MB.\n\nWe have this in the documentation today, but it's not especially visible and\nwell below where we list the packages:\n\n\t\"If xmllint or xsltproc is not found, you will not be able to build any\n\tof the documentation. fop is only needed to build the documentation in\n\tPDF format.\"\n\nI think we should make it a lot more visible.\n\n> Perhaps separating out fop and using --no-install-recommends (and other\n> similar flags) makes it less of an issue? 
We probably should work to deliver\n> a more usable error than what just using --nonet gives you...\n\nI agree with that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sun, 2 Oct 2022 23:00:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Documentation building fails on HTTPS redirect (again)" } ]
[ { "msg_contents": "autoconf set PREFIX to /usr/local/pgsql, so I think we should\ndo the same in meson build.\n\nThis will group all the targets generated by postgres in the same directory.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 30 Sep 2022 23:21:22 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 23:21:22 +0800, Junwang Zhao wrote:\n> autoconf set PREFIX to /usr/local/pgsql, so I think we should\n> do the same in meson build.\n\nThat makes sense.\n\nOne concern with that is that default would also apply to windows - autoconf\ndidn't have to care about that. I just tried it, and it \"just\" ends up\ninstalling it into c:/usr/local/pgsql (assuming the build dir is in\nc:/<something>). I think that's something we could live with, but it's worth\nthinking about.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 08:43:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-30 23:21:22 +0800, Junwang Zhao wrote:\n>> autoconf set PREFIX to /usr/local/pgsql, so I think we should\n>> do the same in meson build.\n\n> That makes sense.\n\n+1\n\n> One concern with that is that default would also apply to windows - autoconf\n> didn't have to care about that. I just tried it, and it \"just\" ends up\n> installing it into c:/usr/local/pgsql (assuming the build dir is in\n> c:/<something>). I think that's something we could live with, but it's worth\n> thinking about.\n\nCan we have a platform-dependent default? 
What was the default\nbehavior with the MSVC scripts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Sep 2022 11:45:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 11:45:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One concern with that is that default would also apply to windows - autoconf\n> > didn't have to care about that. I just tried it, and it \"just\" ends up\n> > installing it into c:/usr/local/pgsql (assuming the build dir is in\n> > c:/<something>). I think that's something we could live with, but it's worth\n> > thinking about.\n> \n> Can we have a platform-dependent default?\n\nNot easily in that spot, I think.\n\n\n> What was the default behavior with the MSVC scripts?\n\nThe install script always needs a target directory. And pg_config_paths is\nalways set to:\n\t\tprint $o <<EOF;\n#define PGBINDIR \"/bin\"\n#define PGSHAREDIR \"/share\"\n#define SYSCONFDIR \"/etc\"\n#define INCLUDEDIR \"/include\"\n#define PKGINCLUDEDIR \"/include\"\n#define INCLUDEDIRSERVER \"/include/server\"\n#define LIBDIR \"/lib\"\n#define PKGLIBDIR \"/lib\"\n#define LOCALEDIR \"/share/locale\"\n#define DOCDIR \"/doc\"\n#define HTMLDIR \"/doc\"\n#define MANDIR \"/man\"\nEOF\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 08:59:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 08:59:53 -0700, Andres Freund wrote:\n> On 2022-09-30 11:45:35 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > One concern with that is that default would also apply to windows - autoconf\n> > > didn't have to care about that. 
I just tried it, and it \"just\" ends up\n> > > installing it into c:/usr/local/pgsql (assuming the build dir is in\n> > > c:/<something>). I think that's something we could live with, but it's worth\n> > > thinking about.\n> > \n> > Can we have a platform-dependent default?\n> \n> Not easily in that spot, I think.\n\nFor background: The reason for that is that meson doesn't yet know what the\nhost/build environment is, because those can be influenced by\ndefault_options. We can run programs though, so if we really want to set some\nplatform dependent default, we can.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 10:01:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-30 08:59:53 -0700, Andres Freund wrote:\n>> On 2022-09-30 11:45:35 -0400, Tom Lane wrote:\n>>> Can we have a platform-dependent default?\n\n>> Not easily in that spot, I think.\n\n> For background: The reason for that is that meson doesn't yet know what the\n> host/build environment is, because those can be influenced by\n> default_options. We can run programs though, so if we really want to set some\n> platform dependent default, we can.\n\nMeh. 
It's not like the existing MSVC script behavior is so sane\nthat we should strive to retain it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:13:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 13:13:29 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-09-30 08:59:53 -0700, Andres Freund wrote:\n> >> On 2022-09-30 11:45:35 -0400, Tom Lane wrote:\n> >>> Can we have a platform-dependent default?\n> \n> >> Not easily in that spot, I think.\n> \n> > For background: The reason for that is that meson doesn't yet know what the\n> > host/build environment is, because those can be influenced by\n> > default_options. We can run programs though, so if we really want to set some\n> > platform dependent default, we can.\n> \n> Meh. It's not like the existing MSVC script behavior is so sane\n> that we should strive to retain it.\n\nAgreed - I was just trying to give background. I'm inclined to just go for\nJunwang Zhao's patch for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 10:17:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 10:17:37 -0700, Andres Freund wrote:\n> Agreed - I was just trying to give background. I'm inclined to just go for\n> Junwang Zhao's patch for now.\n\nThat turns out to break tests on windows right now - but it's not the fault of\nthe patch. Paths on windows are just evil:\n\nWe do the installation for tmp_install with DESTDIR (no surprise) just as in\nautoconf. To set PATH etc, we need a path to the bindir inside that. Trivial\non unixoid systems. Not so much on windows. 
The obvious problematic cases are\nthings like a prefix of c:/something: Can't just prepend tmp_install/.\n\nI'd hacked that up for c:/ style paths. But that doesn't work for paths like\n/usr/local, because they're neither absolute nor relative, but \"drive\nrelative\". And then there's like a gazillion other things. A prefix could be\n'//computer/share/path/to/' and all other sorts of nastiness.\n\nI see two potential ways of dealing with this reliably on windows: - error out\nif a prefix is not drive-local, that's easy enough to check, something like:\nnormalized_prefix.startswith('/') and not normalized_prefix.startswith('//')\nas the installation on windows is relocatable, that's not too bad a\nrestriction - if on windows call a small python helper to compute the path of\ntmp_install + prefix, using the code that meson uses for the purpose\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 18:09:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I see two potential ways of dealing with this reliably on windows: - error out\n> if a prefix is not drive-local, that's easy enough to check, something like:\n> normalized_prefix.startswith('/') and not normalized_prefix.startswith('//')\n> as the installation on windows is relocatable, that's not too bad a\n> restriction - if on windows call a small python helper to compute the path of\n> tmp_install + prefix, using the code that meson uses for the purpose\n\nI'd be inclined to keep it simple for now. 
This seems like something\nthat could be improved later in a pretty localized way, and it's not\nlike there's not tons of other things that need work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Sep 2022 21:19:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 21:19:03 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I see two potential ways of dealing with this reliably on windows: - error out\n> > if a prefix is not drive-local, that's easy enough to check, something like:\n> > normalized_prefix.startswith('/') and not normalized_prefix.startswith('//')\n> > as the installation on windows is relocatable, that's not too bad a\n> > restriction - if on windows call a small python helper to compute the path of\n> > tmp_install + prefix, using the code that meson uses for the purpose\n> \n> I'd be inclined to keep it simple for now. This seems like something\n> that could be improved later in a pretty localized way, and it's not\n> like there's not tons of other things that need work.\n\nJust not sure which of the two are simpler, particularly taking docs into\naccount...\n\nThe attached 0001 calls into a meson helper command to do this. 
Not\nparticularly pretty, but less code than before, and likely more reliable.\n\n\nAlternatively, the code meson uses for this is trivial, we could just stash it\nin a windows_tempinstall_helper.py as well:\n\nimport sys\nfrom pathlib import PureWindowsPath as PurePath\n\ndef destdir_join(d1: str, d2: str) -> str:\n if not d1:\n return d2\n # c:\\destdir + c:\\prefix must produce c:\\destdir\\prefix\n return str(PurePath(d1, *PurePath(d2).parts[1:]))\n\nprint(destdir_join(sys.argv[1], sys.argv[2]))\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 30 Sep 2022 19:07:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 19:07:21 -0700, Andres Freund wrote:\n> The attached 0001 calls into a meson helper command to do this. Not\n> particularly pretty, but less code than before, and likely more reliable.\n\nI pushed Junwang Zhao's patch together with this change, after adding a\ncomment to the setting of the default prefix explaining how it affects\nwindows.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 12:44:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] add a default option prefix=/usr/local/pgsql" } ]
[ { "msg_contents": "Hello,\n\nA bloom filter provides early filtering of rows that cannot be joined\nbefore they would reach the join operator, the optimization is also\ncalled a semi join filter (SJF) pushdown. Such a filter can be created\nwhen one child of the join operator must materialize its derived table\nbefore the other child is evaluated.\n\nFor example, a bloom filter can be created using the the join keys for\nthe build side/inner side of a hash join or the outer side of a merge\njoin, the bloom filter can then be used to pre-filter rows on the\nother side of the join operator during the scan of the base relation.\nThe thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\ndiscussion on using such optimization for hash join without going into\nthe pushdown of the filter where its performance gain could be further\nincreased.\n\nWe worked on prototyping bloom filter pushdown for both hash join and\nmerge join. Attached is a patch set for bloom filter pushdown for\nmerge join. We also plan to send the patch for hash join once we have\nit rebased.\n\nHere is a summary of the patch set:\n1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\nduring the table scan instead of later on.\n -The bloom filter is pushed down along the execution tree to\nthe target SeqScan nodes.\n -Experiments show that this optimization can speed up Merge\nJoin by up to 36%.\n\n2. The planner makes the decision to use the bloom filter based on the\nestimated filtering rate and the expected performance gain.\n -The planner accomplishes this by estimating four numbers per\nvariable - the total number of rows of the relation, the number of\ndistinct values for a given variable, and the minimum and maximum\nvalue of the variable (when applicable). 
Using these numbers, the\nplanner estimates a filtering rate of a potential filter.\n -Because actually creating and implementing the filter adds\nmore operations, there is a minimum threshold of filtering where the\nfilter would actually be useful. Based on testing, we query to see if\nthe estimated filtering rate is higher than 35%, and that informs our\ndecision to use a filter or not.\n\n3. If using a bloom filter, the planner also adjusts the expected cost\nof Merge Join based on expected performance gain.\n\n4. Capability to build the bloom filter in parallel in case of\nparallel SeqScan. This is done efficiently by populating a local bloom\nfilter for each parallel worker and then taking a bitwise OR over all\nthe local bloom filters to form a shared bloom filter at the end of\nthe parallel SeqScan.\n\n5. The optimization is GUC controlled, with settings of\nenable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n\nWe found in experiments that there is a significant improvement\nwhen using the bloom filter during Merge Join. One experiment involved\njoining two large tables while varying the theoretical filtering rate\n(TFR) between the two tables, the TFR is defined as the percentage\nthat the two datasets are disjoint. Both tables in the merge join were\nthe same size. We tested changing the TFR to see the change in\nfiltering optimization.\n\nFor example, let’s imagine t0 has 10 million rows, which contain the\nnumbers 1 through 10 million randomly shuffled. Also, t1 has the\nnumbers 4 million through 14 million randomly shuffled. 
Then the TFR\nfor a join of these two tables is 40%, since 40% of the tables are\ndisjoint from the other table (1 through 4 million for t0, 10 million\nthrough 14 million for t4).\n\nHere is the performance test result joining two tables:\nTFR: theoretical filtering rate\nEFR: estimated filtering rate\nAFR: actual filtering rate\nHJ: hash join\nMJ Default: default merge join\nMJ Filter: merge join with bloom filter optimization enabled\nMJ Filter Forced: merge join with bloom filter optimization forced\n\nTFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n-------------------------------------------------------------------------------------\n10 33.46 7.41 6529 22638 21949 23160\n20 37.27 14.85 6483 22290 21928 21930\n30 41.32 22.25 6395 22374 20718 20794\n40 45.67 29.7 6272 21969 19449 19410\n50 50.41 37.1 6210 21412 18222 18224\n60 55.64 44.51 6052 21108 17060 17018\n70 61.59 51.98 5947 21020 15682 15737\n80 68.64 59.36 5761 20812 14411 14437\n90 77.83 66.86 5701 20585 13171 13200\nTable. Execution Time (ms) vs Filtering Rate (%) for Joining Two\nTables of 10M Rows.\n\nAttached you can find figures of the same performance test and a SQL script\nto reproduce the performance test.\n\nThe first thing to notice is that Hash Join generally is the most\nefficient join strategy. This is because Hash Join is better at\ndealing with small tables, and our size of 10 million is still small\nenough where Hash Join outperforms the other join strategies. Future\nexperiments can investigate using much larger tables.\n\nHowever, comparing just within the different Merge Join variants, we\nsee that using the bloom filter greatly improves performance.\nIntuitively, all of these execution times follow linear paths.\nComparing forced filtering versus default, we can see that the default\nMerge Join outperforms Merge Join with filtering at low filter rates,\nbut after about 20% TFR, the Merge Join with filtering outperforms\ndefault Merge Join. 
This makes intuitive sense, as there are some\nfixed costs associated with building and checking with the bloom\nfilter. In the worst case, at only 10% TFR, the bloom filter makes\nMerge Join less than 5% slower. However, in the best case, at 90% TFR,\nthe bloom filter improves Merge Join by 36%.\n\nBased on the results of the above experiments, we came up with a\nlinear equation for the performance ratio for using the filter\npushdown from the actual filtering rate. Based on the numbers\npresented in the figure, this is the equation:\n\nT_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n\nFor example, this means that with an estimated filtering rate of 0.4,\nthe execution time of merge join is estimated to be improved by 16.3%.\nNote that the estimated filtering rate is used in the equation, not\nthe theoretical filtering rate or the actual filtering rate because it\nis what we have during planning. In practice the estimated filtering\nrate isn’t usually accurate. In fact, the estimated filtering rate can\ndiffer from the theoretical filtering rate by as much as 17% in our\nexperiments. One way to mitigate the power loss of bloom filter caused\nby inaccurate estimated filtering rate is to adaptively turn it off at\nexecution time, this is yet to be implemented.\n\nHere is a list of tasks we plan to work on in order to improve this patch:\n1. More regression testing to guarantee correctness.\n2. More performance testing involving larger tables and complicated query plans.\n3. Improve the cost model.\n4. Explore runtime tuning such as making the bloom filter checking adaptive.\n5. Currently, only the best single join key is used for building the\nBloom filter. However, if there are several keys and we know that\ntheir distributions are somewhat disjoint, we could leverage this fact\nand use multiple keys for the bloom filter.\n6. Currently, Bloom filter pushdown is only implemented for SeqScan\nnodes. 
However, it would be possible to allow push down to other types\nof scan nodes.\n7. Explore if the Bloom filter could be pushed down through a foreign\nscan when the foreign server is capable of handling it – which could\nbe made true for postgres_fdw.\n8. Better explain command on the usage of bloom filters.\n\nThis patch set is prepared by Marcus Ma, Lyu Pan and myself. Feedback\nis appreciated.\n\nWith Regards,\nZheng Li\nAmazon RDS/Aurora for PostgreSQL\n\n[1] https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com", "msg_date": "Fri, 30 Sep 2022 18:44:16 -0400", "msg_from": "Zheng Li <zhengli10@gmail.com>", "msg_from_op": true, "msg_subject": "Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:\n\n> Hello,\n>\n> A bloom filter provides early filtering of rows that cannot be joined\n> before they would reach the join operator, the optimization is also\n> called a semi join filter (SJF) pushdown. Such a filter can be created\n> when one child of the join operator must materialize its derived table\n> before the other child is evaluated.\n>\n> For example, a bloom filter can be created using the the join keys for\n> the build side/inner side of a hash join or the outer side of a merge\n> join, the bloom filter can then be used to pre-filter rows on the\n> other side of the join operator during the scan of the base relation.\n> The thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\n> discussion on using such optimization for hash join without going into\n> the pushdown of the filter where its performance gain could be further\n> increased.\n>\n> We worked on prototyping bloom filter pushdown for both hash join and\n> merge join. Attached is a patch set for bloom filter pushdown for\n> merge join. We also plan to send the patch for hash join once we have\n> it rebased.\n>\n> Here is a summary of the patch set:\n> 1. 
Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n> during the table scan instead of later on.\n> -The bloom filter is pushed down along the execution tree to\n> the target SeqScan nodes.\n> -Experiments show that this optimization can speed up Merge\n> Join by up to 36%.\n>\n> 2. The planner makes the decision to use the bloom filter based on the\n> estimated filtering rate and the expected performance gain.\n> -The planner accomplishes this by estimating four numbers per\n> variable - the total number of rows of the relation, the number of\n> distinct values for a given variable, and the minimum and maximum\n> value of the variable (when applicable). Using these numbers, the\n> planner estimates a filtering rate of a potential filter.\n> -Because actually creating and implementing the filter adds\n> more operations, there is a minimum threshold of filtering where the\n> filter would actually be useful. Based on testing, we query to see if\n> the estimated filtering rate is higher than 35%, and that informs our\n> decision to use a filter or not.\n>\n> 3. If using a bloom filter, the planner also adjusts the expected cost\n> of Merge Join based on expected performance gain.\n>\n> 4. Capability to build the bloom filter in parallel in case of\n> parallel SeqScan. This is done efficiently by populating a local bloom\n> filter for each parallel worker and then taking a bitwise OR over all\n> the local bloom filters to form a shared bloom filter at the end of\n> the parallel SeqScan.\n>\n> 5. The optimization is GUC controlled, with settings of\n> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n>\n> We found in experiments that there is a significant improvement\n> when using the bloom filter during Merge Join. One experiment involved\n> joining two large tables while varying the theoretical filtering rate\n> (TFR) between the two tables, the TFR is defined as the percentage\n> that the two datasets are disjoint. 
Both tables in the merge join were\n> the same size. We tested changing the TFR to see the change in\n> filtering optimization.\n>\n> For example, let’s imagine t0 has 10 million rows, which contain the\n> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n> numbers 4 million through 14 million randomly shuffled. Then the TFR\n> for a join of these two tables is 40%, since 40% of the tables are\n> disjoint from the other table (1 through 4 million for t0, 10 million\n> through 14 million for t4).\n>\n> Here is the performance test result joining two tables:\n> TFR: theoretical filtering rate\n> EFR: estimated filtering rate\n> AFR: actual filtering rate\n> HJ: hash join\n> MJ Default: default merge join\n> MJ Filter: merge join with bloom filter optimization enabled\n> MJ Filter Forced: merge join with bloom filter optimization forced\n>\n> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n>\n> -------------------------------------------------------------------------------------\n> 10 33.46 7.41 6529 22638 21949 23160\n> 20 37.27 14.85 6483 22290 21928 21930\n> 30 41.32 22.25 6395 22374 20718 20794\n> 40 45.67 29.7 6272 21969 19449 19410\n> 50 50.41 37.1 6210 21412 18222 18224\n> 60 55.64 44.51 6052 21108 17060 17018\n> 70 61.59 51.98 5947 21020 15682 15737\n> 80 68.64 59.36 5761 20812 14411 14437\n> 90 77.83 66.86 5701 20585 13171 13200\n> Table. Execution Time (ms) vs Filtering Rate (%) for Joining Two\n> Tables of 10M Rows.\n>\n> Attached you can find figures of the same performance test and a SQL script\n> to reproduce the performance test.\n>\n> The first thing to notice is that Hash Join generally is the most\n> efficient join strategy. This is because Hash Join is better at\n> dealing with small tables, and our size of 10 million is still small\n> enough where Hash Join outperforms the other join strategies. 
Future\n> experiments can investigate using much larger tables.\n>\n> However, comparing just within the different Merge Join variants, we\n> see that using the bloom filter greatly improves performance.\n> Intuitively, all of these execution times follow linear paths.\n> Comparing forced filtering versus default, we can see that the default\n> Merge Join outperforms Merge Join with filtering at low filter rates,\n> but after about 20% TFR, the Merge Join with filtering outperforms\n> default Merge Join. This makes intuitive sense, as there are some\n> fixed costs associated with building and checking with the bloom\n> filter. In the worst case, at only 10% TFR, the bloom filter makes\n> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n> the bloom filter improves Merge Join by 36%.\n>\n> Based on the results of the above experiments, we came up with a\n> linear equation for the performance ratio for using the filter\n> pushdown from the actual filtering rate. Based on the numbers\n> presented in the figure, this is the equation:\n>\n> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n>\n> For example, this means that with an estimated filtering rate of 0.4,\n> the execution time of merge join is estimated to be improved by 16.3%.\n> Note that the estimated filtering rate is used in the equation, not\n> the theoretical filtering rate or the actual filtering rate because it\n> is what we have during planning. In practice the estimated filtering\n> rate isn’t usually accurate. In fact, the estimated filtering rate can\n> differ from the theoretical filtering rate by as much as 17% in our\n> experiments. One way to mitigate the power loss of bloom filter caused\n> by inaccurate estimated filtering rate is to adaptively turn it off at\n> execution time, this is yet to be implemented.\n>\n> Here is a list of tasks we plan to work on in order to improve this patch:\n> 1. More regression testing to guarantee correctness.\n> 2. 
More performance testing involving larger tables and complicated query\n> plans.\n> 3. Improve the cost model.\n> 4. Explore runtime tuning such as making the bloom filter checking\n> adaptive.\n> 5. Currently, only the best single join key is used for building the\n> Bloom filter. However, if there are several keys and we know that\n> their distributions are somewhat disjoint, we could leverage this fact\n> and use multiple keys for the bloom filter.\n> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n> nodes. However, it would be possible to allow push down to other types\n> of scan nodes.\n> 7. Explore if the Bloom filter could be pushed down through a foreign\n> scan when the foreign server is capable of handling it – which could\n> be made true for postgres_fdw.\n> 8. Better explain command on the usage of bloom filters.\n>\n> This patch set is prepared by Marcus Ma, Lyu Pan and myself. Feedback\n> is appreciated.\n>\n> With Regards,\n> Zheng Li\n> Amazon RDS/Aurora for PostgreSQL\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n\n\nHi,\nIn the header of patch 1:\n\nIn this prototype, the cost model is based on an assumption that there is\na linear relationship between the performance gain from using a semijoin\nfilter and the estimated filtering rate:\n% improvement to Merge Join cost = 0.83 * estimated filtering rate - 0.137.\n\nHow were the coefficients (0.83 and 0.137) determined ?\nI guess they were based on the results of running certain workload.\n\nCheers", "msg_date": "Fri, 30 Sep 2022 20:40:36 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Fri, Sep 30, 2022 at 8:40 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:\n>\n>> Hello,\n>>\n>> A bloom filter provides early filtering of rows that cannot be joined\n>> before they would reach the join operator, 
the optimization is also\n>> called a semi join filter (SJF) pushdown. Such a filter can be created\n>> when one child of the join operator must materialize its derived table\n>> before the other child is evaluated.\n>>\n>> For example, a bloom filter can be created using the the join keys for\n>> the build side/inner side of a hash join or the outer side of a merge\n>> join, the bloom filter can then be used to pre-filter rows on the\n>> other side of the join operator during the scan of the base relation.\n>> The thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\n>> discussion on using such optimization for hash join without going into\n>> the pushdown of the filter where its performance gain could be further\n>> increased.\n>>\n>> We worked on prototyping bloom filter pushdown for both hash join and\n>> merge join. Attached is a patch set for bloom filter pushdown for\n>> merge join. We also plan to send the patch for hash join once we have\n>> it rebased.\n>>\n>> Here is a summary of the patch set:\n>> 1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n>> during the table scan instead of later on.\n>> -The bloom filter is pushed down along the execution tree to\n>> the target SeqScan nodes.\n>> -Experiments show that this optimization can speed up Merge\n>> Join by up to 36%.\n>>\n>> 2. The planner makes the decision to use the bloom filter based on the\n>> estimated filtering rate and the expected performance gain.\n>> -The planner accomplishes this by estimating four numbers per\n>> variable - the total number of rows of the relation, the number of\n>> distinct values for a given variable, and the minimum and maximum\n>> value of the variable (when applicable). Using these numbers, the\n>> planner estimates a filtering rate of a potential filter.\n>> -Because actually creating and implementing the filter adds\n>> more operations, there is a minimum threshold of filtering where the\n>> filter would actually be useful. 
Based on testing, we query to see if\n>> the estimated filtering rate is higher than 35%, and that informs our\n>> decision to use a filter or not.\n>>\n>> 3. If using a bloom filter, the planner also adjusts the expected cost\n>> of Merge Join based on expected performance gain.\n>>\n>> 4. Capability to build the bloom filter in parallel in case of\n>> parallel SeqScan. This is done efficiently by populating a local bloom\n>> filter for each parallel worker and then taking a bitwise OR over all\n>> the local bloom filters to form a shared bloom filter at the end of\n>> the parallel SeqScan.\n>>\n>> 5. The optimization is GUC controlled, with settings of\n>> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n>>\n>> We found in experiments that there is a significant improvement\n>> when using the bloom filter during Merge Join. One experiment involved\n>> joining two large tables while varying the theoretical filtering rate\n>> (TFR) between the two tables, the TFR is defined as the percentage\n>> that the two datasets are disjoint. Both tables in the merge join were\n>> the same size. We tested changing the TFR to see the change in\n>> filtering optimization.\n>>\n>> For example, let’s imagine t0 has 10 million rows, which contain the\n>> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n>> numbers 4 million through 14 million randomly shuffled. 
Then the TFR\n>> for a join of these two tables is 40%, since 40% of the tables are\n>> disjoint from the other table (1 through 4 million for t0, 10 million\n>> through 14 million for t4).\n>>\n>> Here is the performance test result joining two tables:\n>> TFR: theoretical filtering rate\n>> EFR: estimated filtering rate\n>> AFR: actual filtering rate\n>> HJ: hash join\n>> MJ Default: default merge join\n>> MJ Filter: merge join with bloom filter optimization enabled\n>> MJ Filter Forced: merge join with bloom filter optimization forced\n>>\n>> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n>>\n>> -------------------------------------------------------------------------------------\n>> 10 33.46 7.41 6529 22638 21949 23160\n>> 20 37.27 14.85 6483 22290 21928 21930\n>> 30 41.32 22.25 6395 22374 20718 20794\n>> 40 45.67 29.7 6272 21969 19449 19410\n>> 50 50.41 37.1 6210 21412 18222 18224\n>> 60 55.64 44.51 6052 21108 17060 17018\n>> 70 61.59 51.98 5947 21020 15682 15737\n>> 80 68.64 59.36 5761 20812 14411 14437\n>> 90 77.83 66.86 5701 20585 13171 13200\n>> Table. Execution Time (ms) vs Filtering Rate (%) for Joining Two\n>> Tables of 10M Rows.\n>>\n>> Attached you can find figures of the same performance test and a SQL\n>> script\n>> to reproduce the performance test.\n>>\n>> The first thing to notice is that Hash Join generally is the most\n>> efficient join strategy. This is because Hash Join is better at\n>> dealing with small tables, and our size of 10 million is still small\n>> enough where Hash Join outperforms the other join strategies. 
Future\n>> experiments can investigate using much larger tables.\n>>\n>> However, comparing just within the different Merge Join variants, we\n>> see that using the bloom filter greatly improves performance.\n>> Intuitively, all of these execution times follow linear paths.\n>> Comparing forced filtering versus default, we can see that the default\n>> Merge Join outperforms Merge Join with filtering at low filter rates,\n>> but after about 20% TFR, the Merge Join with filtering outperforms\n>> default Merge Join. This makes intuitive sense, as there are some\n>> fixed costs associated with building and checking with the bloom\n>> filter. In the worst case, at only 10% TFR, the bloom filter makes\n>> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n>> the bloom filter improves Merge Join by 36%.\n>>\n>> Based on the results of the above experiments, we came up with a\n>> linear equation for the performance ratio for using the filter\n>> pushdown from the actual filtering rate. Based on the numbers\n>> presented in the figure, this is the equation:\n>>\n>> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n>>\n>> For example, this means that with an estimated filtering rate of 0.4,\n>> the execution time of merge join is estimated to be improved by 16.3%.\n>> Note that the estimated filtering rate is used in the equation, not\n>> the theoretical filtering rate or the actual filtering rate because it\n>> is what we have during planning. In practice the estimated filtering\n>> rate isn’t usually accurate. In fact, the estimated filtering rate can\n>> differ from the theoretical filtering rate by as much as 17% in our\n>> experiments. One way to mitigate the power loss of bloom filter caused\n>> by inaccurate estimated filtering rate is to adaptively turn it off at\n>> execution time, this is yet to be implemented.\n>>\n>> Here is a list of tasks we plan to work on in order to improve this patch:\n>> 1. 
More regression testing to guarantee correctness.\n>> 2. More performance testing involving larger tables and complicated query\n>> plans.\n>> 3. Improve the cost model.\n>> 4. Explore runtime tuning such as making the bloom filter checking\n>> adaptive.\n>> 5. Currently, only the best single join key is used for building the\n>> Bloom filter. However, if there are several keys and we know that\n>> their distributions are somewhat disjoint, we could leverage this fact\n>> and use multiple keys for the bloom filter.\n>> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n>> nodes. However, it would be possible to allow push down to other types\n>> of scan nodes.\n>> 7. Explore if the Bloom filter could be pushed down through a foreign\n>> scan when the foreign server is capable of handling it – which could\n>> be made true for postgres_fdw.\n>> 8. Better explain command on the usage of bloom filters.\n>>\n>> This patch set is prepared by Marcus Ma, Lyu Pan and myself. Feedback\n>> is appreciated.\n>>\n>> With Regards,\n>> Zheng Li\n>> Amazon RDS/Aurora for PostgreSQL\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n>\n>\n> Hi,\n> In the header of patch 1:\n>\n> In this prototype, the cost model is based on an assumption that there is\n> a linear relationship between the performance gain from using a semijoin\n> filter and the estimated filtering rate:\n> % improvement to Merge Join cost = 0.83 * estimated filtering rate - 0.137.\n>\n> How were the coefficients (0.83 and 0.137) determined ?\n> I guess they were based on the results of running certain workload.\n>\n> Cheers\n>\nHi,\nFor patch 1:\n\n+bool enable_mergejoin_semijoin_filter;\n+bool force_mergejoin_semijoin_filter;\n\nHow would (enable_mergejoin_semijoin_filter = off,\nforce_mergejoin_semijoin_filter = on) be interpreted ?\nHave you considered using one GUC which has three values: off, enabled,\nforced ?\n\n+ 
mergeclauses_for_sjf = get_actual_clauses(path->path_mergeclauses);\n+ mergeclauses_for_sjf = get_switched_clauses(path->path_mergeclauses,\n+\npath->jpath.outerjoinpath->parent->relids);\n\nmergeclauses_for_sjf is assigned twice and I don't see mergeclauses_for_sjf\nbeing reference in the call to get_switched_clauses().\nIs this intentional ?\n\n+ /* want at least 1000 rows_filtered to avoid any nasty edge\ncases */\n+ if (force_mergejoin_semijoin_filter || (filteringRate >= 0.35\n&& rows_filtered > 1000))\n\nThe above condition is narrower compared to the enclosing condition.\nSince there is no else block for the second if block, please merge the two\nif statements.\n\n+ int best_filter_clause;\n\nNormally I would think `clause` is represented by List*. But\nbest_filter_clause is an int. Please use another variable name so that\nthere is less chance of confusion.\n\nFor evaluate_semijoin_filtering_rate():\n\n+ double best_sj_selectivity = 1.01;\n\nHow was 1.01 determined ?\n\n+ debug_sj1(\"SJPD: start evaluate_semijoin_filtering_rate\");\n\nThere are debug statements in the methods.\nIt would be better to remove them in the next patch set.\n\nCheers\n\nOn Fri, Sep 30, 2022 at 8:40 PM Zhihong Yu <zyu@yugabyte.com> wrote:On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:Hello,\n\nA bloom filter provides early filtering of rows that cannot be joined\nbefore they would reach the join operator, the optimization is also\ncalled a semi join filter (SJF) pushdown. Such a filter can be created\nwhen one child of the join operator must materialize its derived table\nbefore the other child is evaluated.\n\nFor example, a bloom filter can be created using the the join keys for\nthe build side/inner side of a hash join or the outer side of a merge\njoin, the bloom filter can then be used to pre-filter rows on the\nother side of the join operator during the scan of the base relation.\nThe thread about “Hash Joins vs. 
Bloom Filters / take 2” [1] is good\ndiscussion on using such optimization for hash join without going into\nthe pushdown of the filter where its performance gain could be further\nincreased.\n\nWe worked on prototyping bloom filter pushdown for both hash join and\nmerge join. Attached is a patch set for bloom filter pushdown for\nmerge join. We also plan to send the patch for hash join once we have\nit rebased.\n\nHere is a summary of the patch set:\n1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\nduring the table scan instead of later on.\n        -The bloom filter is pushed down along the execution tree to\nthe target SeqScan nodes.\n        -Experiments show that this optimization can speed up Merge\nJoin by up to 36%.\n\n2. The planner makes the decision to use the bloom filter based on the\nestimated filtering rate and the expected performance gain.\n        -The planner accomplishes this by estimating four numbers per\nvariable - the total number of rows of the relation, the number of\ndistinct values for a given variable, and the minimum and maximum\nvalue of the variable (when applicable). Using these numbers, the\nplanner estimates a filtering rate of a potential filter.\n        -Because actually creating and implementing the filter adds\nmore operations, there is a minimum threshold of filtering where the\nfilter would actually be useful. Based on testing, we query to see if\nthe estimated filtering rate is higher than 35%, and that informs our\ndecision to use a filter or not.\n\n3. If using a bloom filter, the planner also adjusts the expected cost\nof Merge Join based on expected performance gain.\n\n4. Capability to build the bloom filter in parallel in case of\nparallel SeqScan. This is done efficiently by populating a local bloom\nfilter for each parallel worker and then taking a bitwise OR over all\nthe local bloom filters to form a shared bloom filter at the end of\nthe parallel SeqScan.\n\n5. 
The optimization is GUC controlled, with settings of\nenable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n\nWe found in experiments that there is a significant improvement\nwhen using the bloom filter during Merge Join. One experiment involved\njoining two large tables while varying the theoretical filtering rate\n(TFR) between the two tables, the TFR is defined as the percentage\nthat the two datasets are disjoint. Both tables in the merge join were\nthe same size. We tested changing the TFR to see the change in\nfiltering optimization.\n\nFor example, let’s imagine t0 has 10 million rows, which contain the\nnumbers 1 through 10 million randomly shuffled. Also, t1 has the\nnumbers 4 million through 14 million randomly shuffled. Then the TFR\nfor a join of these two tables is 40%, since 40% of the tables are\ndisjoint from the other table (1 through 4 million for t0, 10 million\nthrough 14 million for t4).\n\nHere is the performance test result joining two tables:\nTFR: theoretical filtering rate\nEFR: estimated filtering rate\nAFR: actual filtering rate\nHJ: hash join\nMJ Default: default merge join\nMJ Filter: merge join with bloom filter optimization enabled\nMJ Filter Forced: merge join with bloom filter optimization forced\n\nTFR   EFR   AFR   HJ   MJ Default   MJ Filter   MJ Filter Forced\n-------------------------------------------------------------------------------------\n10     33.46   7.41    6529   22638        21949            23160\n20     37.27  14.85   6483   22290        21928            21930\n30     41.32   22.25  6395   22374        20718            20794\n40     45.67   29.7    6272   21969        19449            19410\n50     50.41   37.1    6210   21412        18222            18224\n60     55.64   44.51  6052   21108        17060            17018\n70     61.59   51.98  5947   21020        15682            15737\n80     68.64   59.36  5761   20812        14411            14437\n90     77.83   66.86  5701   20585        
13171            13200\nTable. Execution Time (ms) vs Filtering Rate (%) for Joining Two\nTables of 10M Rows.\n\nAttached you can find figures of the same performance test and a SQL script\nto reproduce the performance test.\n\nThe first thing to notice is that Hash Join generally is the most\nefficient join strategy. This is because Hash Join is better at\ndealing with small tables, and our size of 10 million is still small\nenough where Hash Join outperforms the other join strategies. Future\nexperiments can investigate using much larger tables.\n\nHowever, comparing just within the different Merge Join variants, we\nsee that using the bloom filter greatly improves performance.\nIntuitively, all of these execution times follow linear paths.\nComparing forced filtering versus default, we can see that the default\nMerge Join outperforms Merge Join with filtering at low filter rates,\nbut after about 20% TFR, the Merge Join with filtering outperforms\ndefault Merge Join. This makes intuitive sense, as there are some\nfixed costs associated with building and checking with the bloom\nfilter. In the worst case, at only 10% TFR, the bloom filter makes\nMerge Join less than 5% slower. However, in the best case, at 90% TFR,\nthe bloom filter improves Merge Join by 36%.\n\nBased on the results of the above experiments, we came up with a\nlinear equation for the performance ratio for using the filter\npushdown from the actual filtering rate. Based on the numbers\npresented in the figure, this is the equation:\n\nT_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n\nFor example, this means that with an estimated filtering rate of 0.4,\nthe execution time of merge join is estimated to be improved by 16.3%.\nNote that the estimated filtering rate is used in the equation, not\nthe theoretical filtering rate or the actual filtering rate because it\nis what we have during planning. In practice the estimated filtering\nrate isn’t usually accurate. 
In fact, the estimated filtering rate can\ndiffer from the theoretical filtering rate by as much as 17% in our\nexperiments. One way to mitigate the loss of bloom filter effectiveness\ncaused by an inaccurate estimated filtering rate is to adaptively turn\nthe filter off at execution time; this is yet to be implemented.\n\nHere is a list of tasks we plan to work on in order to improve this patch:\n1. More regression testing to guarantee correctness.\n2. More performance testing involving larger tables and complicated query plans.\n3. Improve the cost model.\n4. Explore runtime tuning such as making the bloom filter checking adaptive.\n5. Currently, only the best single join key is used for building the\nBloom filter. However, if there are several keys and we know that\ntheir distributions are somewhat disjoint, we could leverage this fact\nand use multiple keys for the bloom filter.\n6. Currently, Bloom filter pushdown is only implemented for SeqScan\nnodes. However, it would be possible to allow push down to other types\nof scan nodes.\n7. Explore if the Bloom filter could be pushed down through a foreign\nscan when the foreign server is capable of handling it – which could\nbe made true for postgres_fdw.\n8. Better explain command on the usage of bloom filters.\n\nThis patch set is prepared by Marcus Ma, Lyu Pan and myself. 
Feedback\nis appreciated.\n\nWith Regards,\nZheng Li\nAmazon RDS/Aurora for PostgreSQL\n\n[1] https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n\nHi,\nIn the header of patch 1:\n\nIn this prototype, the cost model is based on an assumption that there is a\nlinear relationship between the performance gain from using a semijoin\nfilter and the estimated filtering rate:\n% improvement to Merge Join cost = 0.83 * estimated filtering rate - 0.137.\n\nHow were the coefficients (0.83 and 0.137) determined ?\nI guess they were based on the results of running certain workload.\n\nCheers\n\nHi,\nFor patch 1:\n\n+bool       enable_mergejoin_semijoin_filter;\n+bool       force_mergejoin_semijoin_filter;\n\nHow would (enable_mergejoin_semijoin_filter = off, force_mergejoin_semijoin_filter = on) be interpreted ?\nHave you considered using one GUC which has three values: off, enabled, forced ?\n\n+       mergeclauses_for_sjf = get_actual_clauses(path->path_mergeclauses);\n+       mergeclauses_for_sjf = get_switched_clauses(path->path_mergeclauses,\n+                                                   path->jpath.outerjoinpath->parent->relids);\n\nmergeclauses_for_sjf is assigned twice and I don't see mergeclauses_for_sjf being referenced in the call to get_switched_clauses().\nIs this intentional ?\n\n+           /* want at least 1000 rows_filtered to avoid any nasty edge cases */\n+           if (force_mergejoin_semijoin_filter || (filteringRate >= 0.35 && rows_filtered > 1000))\n\nThe above condition is narrower compared to the enclosing condition.\nSince there is no else block for the second if block, please merge the two if statements.\n\n+   int         best_filter_clause;\n\nNormally I would think `clause` is represented by List*. But best_filter_clause is an int. 
Please use another variable name so that there is less chance of confusion.\n\nFor evaluate_semijoin_filtering_rate():\n\n+   double      best_sj_selectivity = 1.01;\n\nHow was 1.01 determined ?\n\n+   debug_sj1(\"SJPD:  start evaluate_semijoin_filtering_rate\");\n\nThere are debug statements in the methods.\nIt would be better to remove them in the next patch set.\n\nCheers", "msg_date": "Fri, 30 Sep 2022 21:20:54 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Fri, Sep 30, 2022 at 9:20 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Sep 30, 2022 at 8:40 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:\n>>\n>>> Hello,\n>>>\n>>> A bloom filter provides early filtering of rows that cannot be joined\n>>> before they would reach the join operator, the optimization is also\n>>> called a semi join filter (SJF) pushdown. Such a filter can be created\n>>> when one child of the join operator must materialize its derived table\n>>> before the other child is evaluated.\n>>>\n>>> For example, a bloom filter can be created using the the join keys for\n>>> the build side/inner side of a hash join or the outer side of a merge\n>>> join, the bloom filter can then be used to pre-filter rows on the\n>>> other side of the join operator during the scan of the base relation.\n>>> The thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\n>>> discussion on using such optimization for hash join without going into\n>>> the pushdown of the filter where its performance gain could be further\n>>> increased.\n>>>\n>>> We worked on prototyping bloom filter pushdown for both hash join and\n>>> merge join. Attached is a patch set for bloom filter pushdown for\n>>> merge join. We also plan to send the patch for hash join once we have\n>>> it rebased.\n>>>\n>>> Here is a summary of the patch set:\n>>> 1. 
Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n>>> during the table scan instead of later on.\n>>> -The bloom filter is pushed down along the execution tree to\n>>> the target SeqScan nodes.\n>>> -Experiments show that this optimization can speed up Merge\n>>> Join by up to 36%.\n>>>\n>>> 2. The planner makes the decision to use the bloom filter based on the\n>>> estimated filtering rate and the expected performance gain.\n>>> -The planner accomplishes this by estimating four numbers per\n>>> variable - the total number of rows of the relation, the number of\n>>> distinct values for a given variable, and the minimum and maximum\n>>> value of the variable (when applicable). Using these numbers, the\n>>> planner estimates a filtering rate of a potential filter.\n>>> -Because actually creating and implementing the filter adds\n>>> more operations, there is a minimum threshold of filtering where the\n>>> filter would actually be useful. Based on testing, we query to see if\n>>> the estimated filtering rate is higher than 35%, and that informs our\n>>> decision to use a filter or not.\n>>>\n>>> 3. If using a bloom filter, the planner also adjusts the expected cost\n>>> of Merge Join based on expected performance gain.\n>>>\n>>> 4. Capability to build the bloom filter in parallel in case of\n>>> parallel SeqScan. This is done efficiently by populating a local bloom\n>>> filter for each parallel worker and then taking a bitwise OR over all\n>>> the local bloom filters to form a shared bloom filter at the end of\n>>> the parallel SeqScan.\n>>>\n>>> 5. The optimization is GUC controlled, with settings of\n>>> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n>>>\n>>> We found in experiments that there is a significant improvement\n>>> when using the bloom filter during Merge Join. 
One experiment involved\n>>> joining two large tables while varying the theoretical filtering rate\n>>> (TFR) between the two tables, the TFR is defined as the percentage\n>>> that the two datasets are disjoint. Both tables in the merge join were\n>>> the same size. We tested changing the TFR to see the change in\n>>> filtering optimization.\n>>>\n>>> For example, let’s imagine t0 has 10 million rows, which contain the\n>>> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n>>> numbers 4 million through 14 million randomly shuffled. Then the TFR\n>>> for a join of these two tables is 40%, since 40% of the tables are\n>>> disjoint from the other table (1 through 4 million for t0, 10 million\n>>> through 14 million for t4).\n>>>\n>>> Here is the performance test result joining two tables:\n>>> TFR: theoretical filtering rate\n>>> EFR: estimated filtering rate\n>>> AFR: actual filtering rate\n>>> HJ: hash join\n>>> MJ Default: default merge join\n>>> MJ Filter: merge join with bloom filter optimization enabled\n>>> MJ Filter Forced: merge join with bloom filter optimization forced\n>>>\n>>> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n>>>\n>>> -------------------------------------------------------------------------------------\n>>> 10 33.46 7.41 6529 22638 21949 23160\n>>> 20 37.27 14.85 6483 22290 21928 21930\n>>> 30 41.32 22.25 6395 22374 20718 20794\n>>> 40 45.67 29.7 6272 21969 19449 19410\n>>> 50 50.41 37.1 6210 21412 18222 18224\n>>> 60 55.64 44.51 6052 21108 17060 17018\n>>> 70 61.59 51.98 5947 21020 15682 15737\n>>> 80 68.64 59.36 5761 20812 14411 14437\n>>> 90 77.83 66.86 5701 20585 13171 13200\n>>> Table. Execution Time (ms) vs Filtering Rate (%) for Joining Two\n>>> Tables of 10M Rows.\n>>>\n>>> Attached you can find figures of the same performance test and a SQL\n>>> script\n>>> to reproduce the performance test.\n>>>\n>>> The first thing to notice is that Hash Join generally is the most\n>>> efficient join strategy. 
This is because Hash Join is better at\n>>> dealing with small tables, and our size of 10 million is still small\n>>> enough where Hash Join outperforms the other join strategies. Future\n>>> experiments can investigate using much larger tables.\n>>>\n>>> However, comparing just within the different Merge Join variants, we\n>>> see that using the bloom filter greatly improves performance.\n>>> Intuitively, all of these execution times follow linear paths.\n>>> Comparing forced filtering versus default, we can see that the default\n>>> Merge Join outperforms Merge Join with filtering at low filter rates,\n>>> but after about 20% TFR, the Merge Join with filtering outperforms\n>>> default Merge Join. This makes intuitive sense, as there are some\n>>> fixed costs associated with building and checking with the bloom\n>>> filter. In the worst case, at only 10% TFR, the bloom filter makes\n>>> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n>>> the bloom filter improves Merge Join by 36%.\n>>>\n>>> Based on the results of the above experiments, we came up with a\n>>> linear equation for the performance ratio for using the filter\n>>> pushdown from the actual filtering rate. Based on the numbers\n>>> presented in the figure, this is the equation:\n>>>\n>>> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n>>>\n>>> For example, this means that with an estimated filtering rate of 0.4,\n>>> the execution time of merge join is estimated to be improved by 16.3%.\n>>> Note that the estimated filtering rate is used in the equation, not\n>>> the theoretical filtering rate or the actual filtering rate because it\n>>> is what we have during planning. In practice the estimated filtering\n>>> rate isn’t usually accurate. In fact, the estimated filtering rate can\n>>> differ from the theoretical filtering rate by as much as 17% in our\n>>> experiments. 
One way to mitigate the power loss of bloom filter caused\n>>> by inaccurate estimated filtering rate is to adaptively turn it off at\n>>> execution time, this is yet to be implemented.\n>>>\n>>> Here is a list of tasks we plan to work on in order to improve this\n>>> patch:\n>>> 1. More regression testing to guarantee correctness.\n>>> 2. More performance testing involving larger tables and complicated\n>>> query plans.\n>>> 3. Improve the cost model.\n>>> 4. Explore runtime tuning such as making the bloom filter checking\n>>> adaptive.\n>>> 5. Currently, only the best single join key is used for building the\n>>> Bloom filter. However, if there are several keys and we know that\n>>> their distributions are somewhat disjoint, we could leverage this fact\n>>> and use multiple keys for the bloom filter.\n>>> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n>>> nodes. However, it would be possible to allow push down to other types\n>>> of scan nodes.\n>>> 7. Explore if the Bloom filter could be pushed down through a foreign\n>>> scan when the foreign server is capable of handling it – which could\n>>> be made true for postgres_fdw.\n>>> 8. Better explain command on the usage of bloom filters.\n>>>\n>>> This patch set is prepared by Marcus Ma, Lyu Pan and myself. 
Feedback\n>>> is appreciated.\n>>>\n>>> With Regards,\n>>> Zheng Li\n>>> Amazon RDS/Aurora for PostgreSQL\n>>>\n>>> [1]\n>>> https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n>>\n>>\n>> Hi,\n>> In the header of patch 1:\n>>\n>> In this prototype, the cost model is based on an assumption that there is\n>> a linear relationship between the performance gain from using a semijoin\n>> filter and the estimated filtering rate:\n>> % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n>> 0.137.\n>>\n>> How were the coefficients (0.83 and 0.137) determined ?\n>> I guess they were based on the results of running certain workload.\n>>\n>> Cheers\n>>\n> Hi,\n> For patch 1:\n>\n> +bool enable_mergejoin_semijoin_filter;\n> +bool force_mergejoin_semijoin_filter;\n>\n> How would (enable_mergejoin_semijoin_filter = off,\n> force_mergejoin_semijoin_filter = on) be interpreted ?\n> Have you considered using one GUC which has three values: off, enabled,\n> forced ?\n>\n> + mergeclauses_for_sjf = get_actual_clauses(path->path_mergeclauses);\n> + mergeclauses_for_sjf =\n> get_switched_clauses(path->path_mergeclauses,\n> +\n> path->jpath.outerjoinpath->parent->relids);\n>\n> mergeclauses_for_sjf is assigned twice and I don't\n> see mergeclauses_for_sjf being reference in the call\n> to get_switched_clauses().\n> Is this intentional ?\n>\n> + /* want at least 1000 rows_filtered to avoid any nasty edge\n> cases */\n> + if (force_mergejoin_semijoin_filter || (filteringRate >= 0.35\n> && rows_filtered > 1000))\n>\n> The above condition is narrower compared to the enclosing condition.\n> Since there is no else block for the second if block, please merge the two\n> if statements.\n>\n> + int best_filter_clause;\n>\n> Normally I would think `clause` is represented by List*. But\n> best_filter_clause is an int. 
Please use another variable name so that\n> there is less chance of confusion.\n>\n> For evaluate_semijoin_filtering_rate():\n>\n> + double best_sj_selectivity = 1.01;\n>\n> How was 1.01 determined ?\n>\n> + debug_sj1(\"SJPD: start evaluate_semijoin_filtering_rate\");\n>\n> There are debug statements in the methods.\n> It would be better to remove them in the next patch set.\n>\n> Cheers\n>\nHi,\nStill patch 1.\n\n+ if (!outer_arg_md->is_or_maps_to_base_column\n+ && !inner_arg_md->is_or_maps_to_constant)\n+ {\n+ debug_sj2(\"SJPD: outer equijoin arg does not map %s\",\n+ \"to a base column nor a constant; semijoin is not\nvalid\");\n\nLooks like there is a typo: inner_arg_md->is_or_maps_to_constant should be\nouter_arg_md->is_or_maps_to_constant\n\n+ if (outer_arg_md->est_col_width > MAX_SEMIJOIN_SINGLE_KEY_WIDTH)\n+ {\n+ debug_sj2(\"SJPD: outer equijoin column's width %s\",\n+ \"was excessive; condition rejected\");\n\nHow is the value of MAX_SEMIJOIN_SINGLE_KEY_WIDTH determined ?\n\nFor verify_valid_pushdown():\n\n+ Assert(path);\n+ Assert(target_var_no > 0);\n+\n+ if (path == NULL)\n+ {\n+ return false;\n\nI don't understand the first assertion. Does it mean path would always be\nnon-NULL ? Then the if statement should be dropped.\n\n+ if (path->parent->relid == target_var_no)\n+ {\n+ /*\n+ * Found source of target var! 
We know that the pushdown\n+                    * is valid now.\n+                    */\n+                   return true;\n+               }\n+               return false;\n\nThe above can be simplified as: return path->parent->relid == target_var_no;\n\n+ *     True if the given con_exprs, ref_exprs and operators will exactlty\n\nTypo: exactlty -> exactly\n\n+   if (!bms_equal(all_vars, matched_vars))\n+       return false;\n+   return true;\n\nThe above can be simplified as: return bms_equal(all_vars, matched_vars);\n\nCheers", "msg_date": "Sat, 1 Oct 2022 00:45:11 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Sat, Oct 1, 2022 at 12:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Fri, Sep 30, 2022 at 9:20 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Fri, Sep 30, 2022 at 8:40 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>>>\n>>>\n>>> On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:\n>>>\n>>>> Hello,\n>>>>\n>>>> A bloom filter provides early filtering of rows that cannot be joined\n>>>> before they would reach the join operator, the optimization is also\n>>>> called a semi join filter (SJF) pushdown. Such a filter can be created\n>>>> when one child of the join operator must materialize its derived table\n>>>> before the other child is evaluated.\n>>>>\n>>>> For example, a bloom filter can be created using the the join keys for\n>>>> the build side/inner side of a hash join or the outer side of a merge\n>>>> join, the bloom filter can then be used to pre-filter rows on the\n>>>> other side of the join operator during the scan of the base relation.\n>>>> The thread about “Hash Joins vs. 
Bloom Filters / take 2” [1] is good\n>>>> discussion on using such optimization for hash join without going into\n>>>> the pushdown of the filter where its performance gain could be further\n>>>> increased.\n>>>>\n>>>> We worked on prototyping bloom filter pushdown for both hash join and\n>>>> merge join. Attached is a patch set for bloom filter pushdown for\n>>>> merge join. We also plan to send the patch for hash join once we have\n>>>> it rebased.\n>>>>\n>>>> Here is a summary of the patch set:\n>>>> 1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n>>>> during the table scan instead of later on.\n>>>> -The bloom filter is pushed down along the execution tree to\n>>>> the target SeqScan nodes.\n>>>> -Experiments show that this optimization can speed up Merge\n>>>> Join by up to 36%.\n>>>>\n>>>> 2. The planner makes the decision to use the bloom filter based on the\n>>>> estimated filtering rate and the expected performance gain.\n>>>> -The planner accomplishes this by estimating four numbers per\n>>>> variable - the total number of rows of the relation, the number of\n>>>> distinct values for a given variable, and the minimum and maximum\n>>>> value of the variable (when applicable). Using these numbers, the\n>>>> planner estimates a filtering rate of a potential filter.\n>>>> -Because actually creating and implementing the filter adds\n>>>> more operations, there is a minimum threshold of filtering where the\n>>>> filter would actually be useful. Based on testing, we query to see if\n>>>> the estimated filtering rate is higher than 35%, and that informs our\n>>>> decision to use a filter or not.\n>>>>\n>>>> 3. If using a bloom filter, the planner also adjusts the expected cost\n>>>> of Merge Join based on expected performance gain.\n>>>>\n>>>> 4. Capability to build the bloom filter in parallel in case of\n>>>> parallel SeqScan. 
This is done efficiently by populating a local bloom\n>>>> filter for each parallel worker and then taking a bitwise OR over all\n>>>> the local bloom filters to form a shared bloom filter at the end of\n>>>> the parallel SeqScan.\n>>>>\n>>>> 5. The optimization is GUC controlled, with settings of\n>>>> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n>>>>\n>>>> We found in experiments that there is a significant improvement\n>>>> when using the bloom filter during Merge Join. One experiment involved\n>>>> joining two large tables while varying the theoretical filtering rate\n>>>> (TFR) between the two tables, the TFR is defined as the percentage\n>>>> that the two datasets are disjoint. Both tables in the merge join were\n>>>> the same size. We tested changing the TFR to see the change in\n>>>> filtering optimization.\n>>>>\n>>>> For example, let’s imagine t0 has 10 million rows, which contain the\n>>>> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n>>>> numbers 4 million through 14 million randomly shuffled. 
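[As an aside on item 4's merge step: because a bloom filter is just a bit array, per-worker filters built over disjoint subsets of the scanned rows combine into the shared filter with a plain bitwise OR. A minimal sketch of that idea — ours, not the patch's code; the filter size and the seeded-hash scheme are illustrative assumptions:

```python
import hashlib

FILTER_BYTES = 1024  # illustrative filter size (8192 bits)

def _bit_positions(key: bytes, nhashes: int = 2):
    # Derive nhashes bit positions from independently seeded hashes.
    for seed in range(nhashes):
        digest = hashlib.sha256(bytes([seed]) + key).digest()
        yield int.from_bytes(digest[:8], "big") % (FILTER_BYTES * 8)

def bloom_add(filt: bytearray, key: bytes) -> None:
    # Each worker adds its own rows' join-key values to its local filter.
    for bit in _bit_positions(key):
        filt[bit // 8] |= 1 << (bit % 8)

def bloom_might_contain(filt: bytearray, key: bytes) -> bool:
    # False means the key is definitely absent; True means "possibly present".
    return all(filt[bit // 8] & (1 << (bit % 8)) for bit in _bit_positions(key))

def bloom_merge(local_filters) -> bytearray:
    # Shared filter = bitwise OR of every worker's local filter.
    shared = bytearray(FILTER_BYTES)
    for local in local_filters:
        for i, byte in enumerate(local):
            shared[i] |= byte
    return shared
```

Since a bloom filter has no false negatives, any key inserted by any worker is still reported as possibly present by the merged filter, which is what makes the OR-merge safe.]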
Then the TFR\n>>>> for a join of these two tables is 40%, since 40% of the tables are\n>>>> disjoint from the other table (1 through 4 million for t0, 10 million\n>>>> through 14 million for t4).\n>>>>\n>>>> Here is the performance test result joining two tables:\n>>>> TFR: theoretical filtering rate\n>>>> EFR: estimated filtering rate\n>>>> AFR: actual filtering rate\n>>>> HJ: hash join\n>>>> MJ Default: default merge join\n>>>> MJ Filter: merge join with bloom filter optimization enabled\n>>>> MJ Filter Forced: merge join with bloom filter optimization forced\n>>>>\n>>>> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n>>>>\n>>>> -------------------------------------------------------------------------------------\n>>>> 10 33.46 7.41 6529 22638 21949 23160\n>>>> 20 37.27 14.85 6483 22290 21928 21930\n>>>> 30 41.32 22.25 6395 22374 20718 20794\n>>>> 40 45.67 29.7 6272 21969 19449 19410\n>>>> 50 50.41 37.1 6210 21412 18222 18224\n>>>> 60 55.64 44.51 6052 21108 17060 17018\n>>>> 70 61.59 51.98 5947 21020 15682 15737\n>>>> 80 68.64 59.36 5761 20812 14411 14437\n>>>> 90 77.83 66.86 5701 20585 13171 13200\n>>>> Table. Execution Time (ms) vs Filtering Rate (%) for Joining Two\n>>>> Tables of 10M Rows.\n>>>>\n>>>> Attached you can find figures of the same performance test and a SQL\n>>>> script\n>>>> to reproduce the performance test.\n>>>>\n>>>> The first thing to notice is that Hash Join generally is the most\n>>>> efficient join strategy. This is because Hash Join is better at\n>>>> dealing with small tables, and our size of 10 million is still small\n>>>> enough where Hash Join outperforms the other join strategies. 
Future\n>>>> experiments can investigate using much larger tables.\n>>>>\n>>>> However, comparing just within the different Merge Join variants, we\n>>>> see that using the bloom filter greatly improves performance.\n>>>> Intuitively, all of these execution times follow linear paths.\n>>>> Comparing forced filtering versus default, we can see that the default\n>>>> Merge Join outperforms Merge Join with filtering at low filter rates,\n>>>> but after about 20% TFR, the Merge Join with filtering outperforms\n>>>> default Merge Join. This makes intuitive sense, as there are some\n>>>> fixed costs associated with building and checking with the bloom\n>>>> filter. In the worst case, at only 10% TFR, the bloom filter makes\n>>>> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n>>>> the bloom filter improves Merge Join by 36%.\n>>>>\n>>>> Based on the results of the above experiments, we came up with a\n>>>> linear equation for the performance ratio for using the filter\n>>>> pushdown from the actual filtering rate. Based on the numbers\n>>>> presented in the figure, this is the equation:\n>>>>\n>>>> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n>>>>\n>>>> For example, this means that with an estimated filtering rate of 0.4,\n>>>> the execution time of merge join is estimated to be improved by 16.3%.\n>>>> Note that the estimated filtering rate is used in the equation, not\n>>>> the theoretical filtering rate or the actual filtering rate because it\n>>>> is what we have during planning. In practice the estimated filtering\n>>>> rate isn’t usually accurate. In fact, the estimated filtering rate can\n>>>> differ from the theoretical filtering rate by as much as 17% in our\n>>>> experiments. 
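[The cost-model ratio quoted above can be restated as a tiny helper — a sketch using the posted coefficients; the function names are ours, not part of the patch:

```python
def filtered_cost_ratio(est_filtering_rate: float) -> float:
    # T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)
    return 1.0 / (0.83 * est_filtering_rate + 0.863)

def estimated_improvement(est_filtering_rate: float) -> float:
    # Fraction by which merge join execution time is expected to drop.
    return 1.0 - filtered_cost_ratio(est_filtering_rate)
```

At an estimated filtering rate of 0.4 this yields roughly 16.3%, matching the worked example above; at a rate of 0 the ratio exceeds 1, reflecting the fixed cost of building and probing a filter that removes nothing.]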
One way to mitigate the power loss of bloom filter caused\n>>>> by inaccurate estimated filtering rate is to adaptively turn it off at\n>>>> execution time, this is yet to be implemented.\n>>>>\n>>>> Here is a list of tasks we plan to work on in order to improve this\n>>>> patch:\n>>>> 1. More regression testing to guarantee correctness.\n>>>> 2. More performance testing involving larger tables and complicated\n>>>> query plans.\n>>>> 3. Improve the cost model.\n>>>> 4. Explore runtime tuning such as making the bloom filter checking\n>>>> adaptive.\n>>>> 5. Currently, only the best single join key is used for building the\n>>>> Bloom filter. However, if there are several keys and we know that\n>>>> their distributions are somewhat disjoint, we could leverage this fact\n>>>> and use multiple keys for the bloom filter.\n>>>> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n>>>> nodes. However, it would be possible to allow push down to other types\n>>>> of scan nodes.\n>>>> 7. Explore if the Bloom filter could be pushed down through a foreign\n>>>> scan when the foreign server is capable of handling it – which could\n>>>> be made true for postgres_fdw.\n>>>> 8. Better explain command on the usage of bloom filters.\n>>>>\n>>>> This patch set is prepared by Marcus Ma, Lyu Pan and myself. 
Feedback\n>>>> is appreciated.\n>>>>\n>>>> With Regards,\n>>>> Zheng Li\n>>>> Amazon RDS/Aurora for PostgreSQL\n>>>>\n>>>> [1]\n>>>> https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n>>>\n>>>\n>>> Hi,\n>>> In the header of patch 1:\n>>>\n>>> In this prototype, the cost model is based on an assumption that there\n>>> is a linear relationship between the performance gain from using a semijoin\n>>> filter and the estimated filtering rate:\n>>> % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n>>> 0.137.\n>>>\n>>> How were the coefficients (0.83 and 0.137) determined ?\n>>> I guess they were based on the results of running certain workload.\n>>>\n>>> Cheers\n>>>\n>> Hi,\n>> For patch 1:\n>>\n>> +bool enable_mergejoin_semijoin_filter;\n>> +bool force_mergejoin_semijoin_filter;\n>>\n>> How would (enable_mergejoin_semijoin_filter = off,\n>> force_mergejoin_semijoin_filter = on) be interpreted ?\n>> Have you considered using one GUC which has three values: off, enabled,\n>> forced ?\n>>\n>> + mergeclauses_for_sjf =\n>> get_actual_clauses(path->path_mergeclauses);\n>> + mergeclauses_for_sjf =\n>> get_switched_clauses(path->path_mergeclauses,\n>> +\n>> path->jpath.outerjoinpath->parent->relids);\n>>\n>> mergeclauses_for_sjf is assigned twice and I don't\n>> see mergeclauses_for_sjf being reference in the call\n>> to get_switched_clauses().\n>> Is this intentional ?\n>>\n>> + /* want at least 1000 rows_filtered to avoid any nasty edge\n>> cases */\n>> + if (force_mergejoin_semijoin_filter || (filteringRate >= 0.35\n>> && rows_filtered > 1000))\n>>\n>> The above condition is narrower compared to the enclosing condition.\n>> Since there is no else block for the second if block, please merge the\n>> two if statements.\n>>\n>> + int best_filter_clause;\n>>\n>> Normally I would think `clause` is represented by List*. But\n>> best_filter_clause is an int. 
Please use another variable name so that\n>> there is less chance of confusion.\n>>\n>> For evaluate_semijoin_filtering_rate():\n>>\n>> + double best_sj_selectivity = 1.01;\n>>\n>> How was 1.01 determined ?\n>>\n>> + debug_sj1(\"SJPD: start evaluate_semijoin_filtering_rate\");\n>>\n>> There are debug statements in the methods.\n>> It would be better to remove them in the next patch set.\n>>\n>> Cheers\n>>\n> Hi,\n> Still patch 1.\n>\n> + if (!outer_arg_md->is_or_maps_to_base_column\n> + && !inner_arg_md->is_or_maps_to_constant)\n> + {\n> + debug_sj2(\"SJPD: outer equijoin arg does not map %s\",\n> + \"to a base column nor a constant; semijoin is not\n> valid\");\n>\n> Looks like there is a typo: inner_arg_md->is_or_maps_to_constant should\n> be outer_arg_md->is_or_maps_to_constant\n>\n> + if (outer_arg_md->est_col_width > MAX_SEMIJOIN_SINGLE_KEY_WIDTH)\n> + {\n> + debug_sj2(\"SJPD: outer equijoin column's width %s\",\n> + \"was excessive; condition rejected\");\n>\n> How is the value of MAX_SEMIJOIN_SINGLE_KEY_WIDTH determined ?\n>\n> For verify_valid_pushdown():\n>\n> + Assert(path);\n> + Assert(target_var_no > 0);\n> +\n> + if (path == NULL)\n> + {\n> + return false;\n>\n> I don't understand the first assertion. Does it mean path would always be\n> non-NULL ? Then the if statement should be dropped.\n>\n> + if (path->parent->relid == target_var_no)\n> + {\n> + /*\n> + * Found source of target var! 
We know that the\n> pushdown\n> + * is valid now.\n> + */\n> + return true;\n> + }\n> + return false;\n>\n> The above can be simplified as: return path->parent->relid ==\n> target_var_no;\n>\n> + * True if the given con_exprs, ref_exprs and operators will exactlty\n>\n> Typo: exactlty -> exactly\n>\n> + if (!bms_equal(all_vars, matched_vars))\n> + return false;\n> + return true;\n>\n> The above can be simplified as: return bms_equal(all_vars, matched_vars);\n>\n> Cheers\n>\nHi,\nStill in patch 1 :-)\n\n+ if (best_path->use_semijoinfilter)\n+ {\n+ if (best_path->best_mergeclause != -1)\n\nSince there is no else block, the two conditions can be combined.\n\n+ ListCell *clause_cell = list_nth_cell(mergeclauses,\nbest_path->best_mergeclause);\n\nAs shown in the above code, best_mergeclause is the position of the best\nmerge clause in mergeclauses.\nI think best_mergeclause_pos (or similar name) is more appropriate for the\nfieldname.\n\nFor depth_of_semijoin_target():\n\n+ * Parameters:\n+ * node: plan node to be considered for semijoin push down.\n\nThe name of the parameter is pn - please align the comment with code.\n\nFor T_SubqueryScan case in depth_of_semijoin_target():\n\n+ Assert(rte->subquery->targetList);\n...\n+ if (rel && rel->subroot\n+ && rte && rte->subquery && rte->subquery->targetList)\n\nIt seems the condition can be simplified since rte->subquery->targetList\nhas passed the assertion.\n\nFor is_table_scan_node_source_of_relids_or_var(), the else block can be\nsimplified to returning scan_node_varno == target_var->varno directly.\n\nFor get_appendrel_occluded_references():\n\n+ * Given a virtual column from an Union ALL subquery,\n+ * return the expression it immediately occludes that satisfy\n\nSince the index is returned from the func, it would be better to clarify\nthe comment by saying `return the last index of expression ...`\n\n+ /* Subquery without append and partitioned tables */\n\nappend and partitioned tables -> append or partitioned 
tables\n\nMore reviews for subsequent patches to follow.", "msg_date": "Sun, 2 Oct 2022 06:40:30 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Sun, Oct 2, 2022 at 6:40 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Sat, Oct 1, 2022 at 12:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Fri, Sep 30, 2022 at 9:20 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>>>\n>>>\n>>> On Fri, Sep 30, 2022 at 8:40 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>>\n>>>>\n>>>>\n>>>> On Fri, Sep 30, 2022 at 3:44 PM Zheng Li <zhengli10@gmail.com> wrote:\n>>>>\n>>>>> Hello,\n>>>>>\n>>>>> A bloom filter provides early filtering of rows that cannot be joined\n>>>>> before they would reach the join operator, the optimization is also\n>>>>> called a semi join filter (SJF) pushdown. Such a filter can be created\n>>>>> when one child of the join operator must materialize its derived table\n>>>>> before the other child is evaluated.\n>>>>>\n>>>>> For example, a bloom filter can be created using the the join keys for\n>>>>> the build side/inner side of a hash join or the outer side of a merge\n>>>>> join, the bloom filter can then be used to pre-filter rows on the\n>>>>> other side of the join operator during the scan of the base relation.\n>>>>> The thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\n>>>>> discussion on using such optimization for hash join without going into\n>>>>> the pushdown of the filter where its performance gain could be further\n>>>>> increased.\n>>>>>\n>>>>> We worked on prototyping bloom filter pushdown for both hash join and\n>>>>> merge join. 
We also plan to send the patch for hash join once we have\n>>>>> it rebased.\n>>>>>\n>>>>> Here is a summary of the patch set:\n>>>>> 1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n>>>>> during the table scan instead of later on.\n>>>>> -The bloom filter is pushed down along the execution tree to\n>>>>> the target SeqScan nodes.\n>>>>> -Experiments show that this optimization can speed up Merge\n>>>>> Join by up to 36%.\n>>>>>\n>>>>> 2. The planner makes the decision to use the bloom filter based on the\n>>>>> estimated filtering rate and the expected performance gain.\n>>>>> -The planner accomplishes this by estimating four numbers per\n>>>>> variable - the total number of rows of the relation, the number of\n>>>>> distinct values for a given variable, and the minimum and maximum\n>>>>> value of the variable (when applicable). Using these numbers, the\n>>>>> planner estimates a filtering rate of a potential filter.\n>>>>> -Because actually creating and implementing the filter adds\n>>>>> more operations, there is a minimum threshold of filtering where the\n>>>>> filter would actually be useful. Based on testing, we query to see if\n>>>>> the estimated filtering rate is higher than 35%, and that informs our\n>>>>> decision to use a filter or not.\n>>>>>\n>>>>> 3. If using a bloom filter, the planner also adjusts the expected cost\n>>>>> of Merge Join based on expected performance gain.\n>>>>>\n>>>>> 4. Capability to build the bloom filter in parallel in case of\n>>>>> parallel SeqScan. This is done efficiently by populating a local bloom\n>>>>> filter for each parallel worker and then taking a bitwise OR over all\n>>>>> the local bloom filters to form a shared bloom filter at the end of\n>>>>> the parallel SeqScan.\n>>>>>\n>>>>> 5. 
The optimization is GUC controlled, with settings of\n>>>>> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n>>>>>\n>>>>> We found in experiments that there is a significant improvement\n>>>>> when using the bloom filter during Merge Join. One experiment involved\n>>>>> joining two large tables while varying the theoretical filtering rate\n>>>>> (TFR) between the two tables, the TFR is defined as the percentage\n>>>>> that the two datasets are disjoint. Both tables in the merge join were\n>>>>> the same size. We tested changing the TFR to see the change in\n>>>>> filtering optimization.\n>>>>>\n>>>>> For example, let’s imagine t0 has 10 million rows, which contain the\n>>>>> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n>>>>> numbers 4 million through 14 million randomly shuffled. Then the TFR\n>>>>> for a join of these two tables is 40%, since 40% of the tables are\n>>>>> disjoint from the other table (1 through 4 million for t0, 10 million\n>>>>> through 14 million for t4).\n>>>>>\n>>>>> Here is the performance test result joining two tables:\n>>>>> TFR: theoretical filtering rate\n>>>>> EFR: estimated filtering rate\n>>>>> AFR: actual filtering rate\n>>>>> HJ: hash join\n>>>>> MJ Default: default merge join\n>>>>> MJ Filter: merge join with bloom filter optimization enabled\n>>>>> MJ Filter Forced: merge join with bloom filter optimization forced\n>>>>>\n>>>>> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n>>>>>\n>>>>> -------------------------------------------------------------------------------------\n>>>>> 10 33.46 7.41 6529 22638 21949 23160\n>>>>> 20 37.27 14.85 6483 22290 21928 21930\n>>>>> 30 41.32 22.25 6395 22374 20718 20794\n>>>>> 40 45.67 29.7 6272 21969 19449 19410\n>>>>> 50 50.41 37.1 6210 21412 18222 18224\n>>>>> 60 55.64 44.51 6052 21108 17060 17018\n>>>>> 70 61.59 51.98 5947 21020 15682 15737\n>>>>> 80 68.64 59.36 5761 20812 14411 14437\n>>>>> 90 77.83 66.86 5701 20585 13171 13200\n>>>>> Table. 
Execution Time (ms) vs Filtering Rate (%) for Joining Two\n>>>>> Tables of 10M Rows.\n>>>>>\n>>>>> Attached you can find figures of the same performance test and a SQL\n>>>>> script\n>>>>> to reproduce the performance test.\n>>>>>\n>>>>> The first thing to notice is that Hash Join generally is the most\n>>>>> efficient join strategy. This is because Hash Join is better at\n>>>>> dealing with small tables, and our size of 10 million is still small\n>>>>> enough where Hash Join outperforms the other join strategies. Future\n>>>>> experiments can investigate using much larger tables.\n>>>>>\n>>>>> However, comparing just within the different Merge Join variants, we\n>>>>> see that using the bloom filter greatly improves performance.\n>>>>> Intuitively, all of these execution times follow linear paths.\n>>>>> Comparing forced filtering versus default, we can see that the default\n>>>>> Merge Join outperforms Merge Join with filtering at low filter rates,\n>>>>> but after about 20% TFR, the Merge Join with filtering outperforms\n>>>>> default Merge Join. This makes intuitive sense, as there are some\n>>>>> fixed costs associated with building and checking with the bloom\n>>>>> filter. In the worst case, at only 10% TFR, the bloom filter makes\n>>>>> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n>>>>> the bloom filter improves Merge Join by 36%.\n>>>>>\n>>>>> Based on the results of the above experiments, we came up with a\n>>>>> linear equation for the performance ratio for using the filter\n>>>>> pushdown from the actual filtering rate. 
Based on the numbers\n>>>>> presented in the figure, this is the equation:\n>>>>>\n>>>>> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n>>>>>\n>>>>> For example, this means that with an estimated filtering rate of 0.4,\n>>>>> the execution time of merge join is estimated to be improved by 16.3%.\n>>>>> Note that the estimated filtering rate is used in the equation, not\n>>>>> the theoretical filtering rate or the actual filtering rate because it\n>>>>> is what we have during planning. In practice the estimated filtering\n>>>>> rate isn’t usually accurate. In fact, the estimated filtering rate can\n>>>>> differ from the theoretical filtering rate by as much as 17% in our\n>>>>> experiments. One way to mitigate the power loss of bloom filter caused\n>>>>> by inaccurate estimated filtering rate is to adaptively turn it off at\n>>>>> execution time, this is yet to be implemented.\n>>>>>\n>>>>> Here is a list of tasks we plan to work on in order to improve this\n>>>>> patch:\n>>>>> 1. More regression testing to guarantee correctness.\n>>>>> 2. More performance testing involving larger tables and complicated\n>>>>> query plans.\n>>>>> 3. Improve the cost model.\n>>>>> 4. Explore runtime tuning such as making the bloom filter checking\n>>>>> adaptive.\n>>>>> 5. Currently, only the best single join key is used for building the\n>>>>> Bloom filter. However, if there are several keys and we know that\n>>>>> their distributions are somewhat disjoint, we could leverage this fact\n>>>>> and use multiple keys for the bloom filter.\n>>>>> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n>>>>> nodes. However, it would be possible to allow push down to other types\n>>>>> of scan nodes.\n>>>>> 7. Explore if the Bloom filter could be pushed down through a foreign\n>>>>> scan when the foreign server is capable of handling it – which could\n>>>>> be made true for postgres_fdw.\n>>>>> 8. 
Better explain command on the usage of bloom filters.\n>>>>>\n>>>>> This patch set is prepared by Marcus Ma, Lyu Pan and myself. Feedback\n>>>>> is appreciated.\n>>>>>\n>>>>> With Regards,\n>>>>> Zheng Li\n>>>>> Amazon RDS/Aurora for PostgreSQL\n>>>>>\n>>>>> [1]\n>>>>> https://www.postgresql.org/message-id/flat/c902844d-837f-5f63-ced3-9f7fd222f175%402ndquadrant.com\n>>>>\n>>>>\n>>>> Hi,\n>>>> In the header of patch 1:\n>>>>\n>>>> In this prototype, the cost model is based on an assumption that there\n>>>> is a linear relationship between the performance gain from using a semijoin\n>>>> filter and the estimated filtering rate:\n>>>> % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n>>>> 0.137.\n>>>>\n>>>> How were the coefficients (0.83 and 0.137) determined ?\n>>>> I guess they were based on the results of running certain workload.\n>>>>\n>>>> Cheers\n>>>>\n>>> Hi,\n>>> For patch 1:\n>>>\n>>> +bool enable_mergejoin_semijoin_filter;\n>>> +bool force_mergejoin_semijoin_filter;\n>>>\n>>> How would (enable_mergejoin_semijoin_filter = off,\n>>> force_mergejoin_semijoin_filter = on) be interpreted ?\n>>> Have you considered using one GUC which has three values: off, enabled,\n>>> forced ?\n>>>\n>>> + mergeclauses_for_sjf =\n>>> get_actual_clauses(path->path_mergeclauses);\n>>> + mergeclauses_for_sjf =\n>>> get_switched_clauses(path->path_mergeclauses,\n>>> +\n>>> path->jpath.outerjoinpath->parent->relids);\n>>>\n>>> mergeclauses_for_sjf is assigned twice and I don't\n>>> see mergeclauses_for_sjf being reference in the call\n>>> to get_switched_clauses().\n>>> Is this intentional ?\n>>>\n>>> + /* want at least 1000 rows_filtered to avoid any nasty edge\n>>> cases */\n>>> + if (force_mergejoin_semijoin_filter || (filteringRate >=\n>>> 0.35 && rows_filtered > 1000))\n>>>\n>>> The above condition is narrower compared to the enclosing condition.\n>>> Since there is no else block for the second if block, please merge the\n>>> two if statements.\n>>>\n>>> + 
int best_filter_clause;\n>>>\n>>> Normally I would think `clause` is represented by List*. But\n>>> best_filter_clause is an int. Please use another variable name so that\n>>> there is less chance of confusion.\n>>>\n>>> For evaluate_semijoin_filtering_rate():\n>>>\n>>> + double best_sj_selectivity = 1.01;\n>>>\n>>> How was 1.01 determined ?\n>>>\n>>> + debug_sj1(\"SJPD: start evaluate_semijoin_filtering_rate\");\n>>>\n>>> There are debug statements in the methods.\n>>> It would be better to remove them in the next patch set.\n>>>\n>>> Cheers\n>>>\n>> Hi,\n>> Still patch 1.\n>>\n>> + if (!outer_arg_md->is_or_maps_to_base_column\n>> + && !inner_arg_md->is_or_maps_to_constant)\n>> + {\n>> + debug_sj2(\"SJPD: outer equijoin arg does not map %s\",\n>> + \"to a base column nor a constant; semijoin is not\n>> valid\");\n>>\n>> Looks like there is a typo: inner_arg_md->is_or_maps_to_constant should\n>> be outer_arg_md->is_or_maps_to_constant\n>>\n>> + if (outer_arg_md->est_col_width > MAX_SEMIJOIN_SINGLE_KEY_WIDTH)\n>> + {\n>> + debug_sj2(\"SJPD: outer equijoin column's width %s\",\n>> + \"was excessive; condition rejected\");\n>>\n>> How is the value of MAX_SEMIJOIN_SINGLE_KEY_WIDTH determined ?\n>>\n>> For verify_valid_pushdown():\n>>\n>> + Assert(path);\n>> + Assert(target_var_no > 0);\n>> +\n>> + if (path == NULL)\n>> + {\n>> + return false;\n>>\n>> I don't understand the first assertion. Does it mean path would always be\n>> non-NULL ? Then the if statement should be dropped.\n>>\n>> + if (path->parent->relid == target_var_no)\n>> + {\n>> + /*\n>> + * Found source of target var! 
We know that the\n>> pushdown\n>> + * is valid now.\n>> + */\n>> + return true;\n>> + }\n>> + return false;\n>>\n>> The above can be simplified as: return path->parent->relid ==\n>> target_var_no;\n>>\n>> + * True if the given con_exprs, ref_exprs and operators will exactlty\n>>\n>> Typo: exactlty -> exactly\n>>\n>> + if (!bms_equal(all_vars, matched_vars))\n>> + return false;\n>> + return true;\n>>\n>> The above can be simplified as: return bms_equal(all_vars, matched_vars);\n>>\n>> Cheers\n>>\n> Hi,\n> Still in patch 1 :-)\n>\n> + if (best_path->use_semijoinfilter)\n> + {\n> + if (best_path->best_mergeclause != -1)\n>\n> Since there is no else block, the two conditions can be combined.\n>\n> + ListCell *clause_cell = list_nth_cell(mergeclauses,\n> best_path->best_mergeclause);\n>\n> As shown in the above code, best_mergeclause is the position of the best\n> merge clause in mergeclauses.\n> I think best_mergeclause_pos (or similar name) is more appropriate for the\n> fieldname.\n>\n> For depth_of_semijoin_target():\n>\n> + * Parameters:\n> + * node: plan node to be considered for semijoin push down.\n>\n> The name of the parameter is pn - please align the comment with code.\n>\n> For T_SubqueryScan case in depth_of_semijoin_target():\n>\n> + Assert(rte->subquery->targetList);\n> ...\n> + if (rel && rel->subroot\n> + && rte && rte->subquery && rte->subquery->targetList)\n>\n> It seems the condition can be simplified since rte->subquery->targetList\n> has passed the assertion.\n>\n> For is_table_scan_node_source_of_relids_or_var(), the else block can be\n> simplified to returning scan_node_varno == target_var->varno directly.\n>\n> For get_appendrel_occluded_references():\n>\n> + * Given a virtual column from an Union ALL subquery,\n> + * return the expression it immediately occludes that satisfy\n>\n> Since the index is returned from the func, it would be better to clarify\n> the comment by saying `return the last index of expression ...`\n>\n> + /* Subquery 
without append and partitioned tables */\n>\n> append and partitioned tables -> append or partitioned tables\n>\n> More reviews for subsequent patches to follow.\n>\nHi,\nFor 0002-Support-semijoin-filter-in-the-executor-for-non-para.patch ,\n\n+ if (!qual && !projInfo && !IsA(node, SeqScanState) &&\n+ !((SeqScanState *) node)->applySemiJoinFilter)\n\nI am confused by the last two clauses in the condition. If !IsA(node,\nSeqScanState) is true, why does the last clause cast node to SeqScanState *\n?\nI think you forgot to put the last two clauses in a pair of parentheses.\n\n+ /* slot did not pass SemiJoinFilter, so skipping\nit. */\n\nskipping it -> skip it\n\n+ /* double row estimate to reduce error rate for Bloom filter */\n+ *nodeRows = Max(*nodeRows, scan->ss.ps.plan->plan_rows * 2);\n\nProbably add more comment above about why the row count is doubled and how\nthe error rate is reduced.\n\n+SemiJoinFilterExamineSlot(List *semiJoinFilters, TupleTableSlot *slot, Oid\ntableId)\n\nSemiJoinFilterExamineSlot -> ExamineSlotUsingSemiJoinFilter\n\nCheers\n
", "msg_date": "Sun, 2 Oct 2022 07:42:47 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "Hello Zheng Li,\n\nGreat to see someone is working on this! Some initial comments/review:\n\nOn 10/1/22 00:44, Zheng Li wrote:\n> Hello,\n> \n> A bloom filter provides early filtering of rows that cannot be joined\n> before they would reach the join operator, the optimization is also\n> called a semi join filter (SJF) pushdown. 
Such a filter can be created\n> when one child of the join operator must materialize its derived table\n> before the other child is evaluated.\n> \n> For example, a bloom filter can be created using the the join keys for\n> the build side/inner side of a hash join or the outer side of a merge\n> join, the bloom filter can then be used to pre-filter rows on the\n> other side of the join operator during the scan of the base relation.\n> The thread about “Hash Joins vs. Bloom Filters / take 2” [1] is good\n> discussion on using such optimization for hash join without going into\n> the pushdown of the filter where its performance gain could be further\n> increased.\n> \n\nAgreed. That patch was beneficial for hashjoins with batching, but I\nthink the pushdown makes this much more interesting.\n\n> We worked on prototyping bloom filter pushdown for both hash join and\n> merge join. Attached is a patch set for bloom filter pushdown for\n> merge join. We also plan to send the patch for hash join once we have\n> it rebased.\n> \n> Here is a summary of the patch set:\n> 1. Bloom Filter Pushdown optimizes Merge Join by filtering rows early\n> during the table scan instead of later on.\n> -The bloom filter is pushed down along the execution tree to\n> the target SeqScan nodes.\n> -Experiments show that this optimization can speed up Merge\n> Join by up to 36%.\n> \n\nRight, although I think the speedup very much depends on the data sets\nused for the tests, and can be made arbitrarily large with \"appropriate\"\ndata set.\n\n> 2. The planner makes the decision to use the bloom filter based on the\n> estimated filtering rate and the expected performance gain.\n> -The planner accomplishes this by estimating four numbers per\n> variable - the total number of rows of the relation, the number of\n> distinct values for a given variable, and the minimum and maximum\n> value of the variable (when applicable). 
Using these numbers, the\n> planner estimates a filtering rate of a potential filter.\n> -Because actually creating and implementing the filter adds\n> more operations, there is a minimum threshold of filtering where the\n> filter would actually be useful. Based on testing, we query to see if\n> the estimated filtering rate is higher than 35%, and that informs our\n> decision to use a filter or not.\n> \n\nI agree, in principle, although I think the current logic / formula is a\nbit too crude and fitted to the simple data used in the test. I think\nthis needs to be formulated as a regular costing issue, considering\nstuff like cost of the hash functions, and so on.\n\nI think this needs to do two things:\n\n1) estimate the cost of building the bloom filter - This shall depend on\nthe number of rows in the inner relation, number/cost of the hash\nfunctions (which may be higher for some data types), etc.\n\n2) estimate improvement for the probing branch - Essentially, we need to\nestimate how much we save by filtering some of the rows, but this also\nneeds to include the cost of probing the bloom filter.\n\nThis will probably require some improvements to the lib/bloomfilter, in\norder to estimate the false positive rate - this may matter a lot for\nlarge data sets and small work_mem values. The bloomfilter library\nsimply reduces the size of the bloom filter, which increases the false\npositive rate. At some point it'll start reducing the benefit.\n\n> 3. 
If using a bloom filter, the planner also adjusts the expected cost\n> of Merge Join based on expected performance gain.\n> \n\nI think this is going to be a weak point of the costing, because we're\nadjusting the cost of the whole subtree after it was costed.\n\nWe're doing something similar when costing LIMIT, and that can already\ncauses a lot of strange stuff with non-uniform data distributions, etc.\n\nAnd in this case it's probably worse, because we're eliminating rows at\nthe scan level, without changing the cost of any of the intermediate\nnodes. It's certainly going to be confusing in EXPLAIN, because of the\ndiscrepancy between estimated and actual row counts ...\n\n> 4. Capability to build the bloom filter in parallel in case of\n> parallel SeqScan. This is done efficiently by populating a local bloom\n> filter for each parallel worker and then taking a bitwise OR over all\n> the local bloom filters to form a shared bloom filter at the end of\n> the parallel SeqScan.\n> \n\nOK. Could also build the bloom filter in shared memory?\n\n> 5. The optimization is GUC controlled, with settings of\n> enable_mergejoin_semijoin_filter and force_mergejoin_semijoin_filter.\n> \n> We found in experiments that there is a significant improvement\n> when using the bloom filter during Merge Join. One experiment involved\n> joining two large tables while varying the theoretical filtering rate\n> (TFR) between the two tables, the TFR is defined as the percentage\n> that the two datasets are disjoint. Both tables in the merge join were\n> the same size. We tested changing the TFR to see the change in\n> filtering optimization.\n> \n> For example, let’s imagine t0 has 10 million rows, which contain the\n> numbers 1 through 10 million randomly shuffled. Also, t1 has the\n> numbers 4 million through 14 million randomly shuffled. 
Then the TFR\n> for a join of these two tables is 40%, since 40% of the tables are\n> disjoint from the other table (1 through 4 million for t0, 10 million\n> through 14 million for t4).\n> \n> Here is the performance test result joining two tables:\n> TFR: theoretical filtering rate\n> EFR: estimated filtering rate\n> AFR: actual filtering rate\n> HJ: hash join\n> MJ Default: default merge join\n> MJ Filter: merge join with bloom filter optimization enabled\n> MJ Filter Forced: merge join with bloom filter optimization forced\n> \n> TFR EFR AFR HJ MJ Default MJ Filter MJ Filter Forced\n> -------------------------------------------------------------------------------------\n> 10 33.46 7.41 6529 22638 21949 23160\n> 20 37.27 14.85 6483 22290 21928 21930\n> 30 41.32 22.25 6395 22374 20718 20794\n> 40 45.67 29.7 6272 21969 19449 19410\n> 50 50.41 37.1 6210 21412 18222 18224\n> 60 55.64 44.51 6052 21108 17060 17018\n> 70 61.59 51.98 5947 21020 15682 15737\n> 80 68.64 59.36 5761 20812 14411 14437\n> 90 77.83 66.86 5701 20585 13171 13200\n> Table. Execution Time (ms) vs Filtering Rate (%) for Joining Two\n> Tables of 10M Rows.\n> \n> Attached you can find figures of the same performance test and a SQL script\n> to reproduce the performance test.\n> \n> The first thing to notice is that Hash Join generally is the most\n> efficient join strategy. This is because Hash Join is better at\n> dealing with small tables, and our size of 10 million is still small\n> enough where Hash Join outperforms the other join strategies. 
Future\n> experiments can investigate using much larger tables.\n> \n> However, comparing just within the different Merge Join variants, we\n> see that using the bloom filter greatly improves performance.\n> Intuitively, all of these execution times follow linear paths.\n> Comparing forced filtering versus default, we can see that the default\n> Merge Join outperforms Merge Join with filtering at low filter rates,\n> but after about 20% TFR, the Merge Join with filtering outperforms\n> default Merge Join. This makes intuitive sense, as there are some\n> fixed costs associated with building and checking with the bloom\n> filter. In the worst case, at only 10% TFR, the bloom filter makes\n> Merge Join less than 5% slower. However, in the best case, at 90% TFR,\n> the bloom filter improves Merge Join by 36%.\n> \n> Based on the results of the above experiments, we came up with a\n> linear equation for the performance ratio for using the filter\n> pushdown from the actual filtering rate. Based on the numbers\n> presented in the figure, this is the equation:\n> \n> T_filter / T_no_filter = 1 / (0.83 * estimated filtering rate + 0.863)\n> \n> For example, this means that with an estimated filtering rate of 0.4,\n> the execution time of merge join is estimated to be improved by 16.3%.\n> Note that the estimated filtering rate is used in the equation, not\n> the theoretical filtering rate or the actual filtering rate because it\n> is what we have during planning. In practice the estimated filtering\n> rate isn’t usually accurate. In fact, the estimated filtering rate can\n> differ from the theoretical filtering rate by as much as 17% in our\n> experiments. One way to mitigate the power loss of bloom filter caused\n> by inaccurate estimated filtering rate is to adaptively turn it off at\n> execution time, this is yet to be implemented.\n> \n\nIMHO we shouldn't make too many conclusions from these examples. 
Yes, it\nshows merge join can be improved, but for cases where a hashjoin works\nbetter, so we wouldn't use merge join anyway.\n\nI think we should try constructing examples where either merge join wins\nalready (and gets further improved by the bloom filter), or would lose\nto hash join and the bloom filter improves it enough to win.\n\nAFAICS that requires a join of two large tables - large enough that hash\njoin would need to be batched, or pre-sorted inputs (which eliminates\nthe explicit Sort, which is the main cost in most cases).\n\nThe current patch only works with sequential scans, which eliminates the\nsecond (pre-sorted) option. So let's try the first one - can we invent\nan example with a join of two large tables where a merge join would win?\n\nCan we find such an example in existing benchmarks like TPC-H/TPC-DS?\n\n> Here is a list of tasks we plan to work on in order to improve this patch:\n> 1. More regression testing to guarantee correctness.\n> 2. More performance testing involving larger tables and complicated query plans.\n> 3. Improve the cost model.\n\n+1\n\n> 4. Explore runtime tuning such as making the bloom filter checking adaptive.\n\nI think this is tricky, I'd leave it out from the patch for now until\nthe other bits are polished. It can be added later.\n\n> 5. Currently, only the best single join key is used for building the\n> Bloom filter. However, if there are several keys and we know that\n> their distributions are somewhat disjoint, we could leverage this fact\n> and use multiple keys for the bloom filter.\n\nTrue, and I guess it wouldn't be hard.\n\n> 6. Currently, Bloom filter pushdown is only implemented for SeqScan\n> nodes. However, it would be possible to allow push down to other types\n> of scan nodes.\n\nI think pushing down the bloom filter to other types of scans is not the\nhard part, really. 
It's populating the bloom filter early enough.\n\nInvariably, all the examples end up with plans like this:\n\n -> Merge Join\n Merge Cond: (t0.c1 = t1.c1)\n SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n SemiJoin Estimated Filtering Rate: 1.0000\n -> Sort\n Sort Key: t0.c1\n -> Seq Scan on t0\n -> Materialize\n -> Sort\n Sort Key: t1.c1\n -> Seq Scan on t1\n\nThe bloom filter is built by the first seqscan (on t0), and then used by\nthe second seqscan (on t1). But this only works because we always run\nthe t0 scan to completion (because we're feeding it into Sort) before we\nstart scanning t1.\n\nBut when the scan on t1 switches to an index scan, it's over - we'd be\nbuilding the filter without being able to probe it, and when we finish\nbuilding it we no longer need it. So this seems pretty futile.\n\nIt might still improve plans like\n\n -> Merge Join\n Merge Cond: (t0.c1 = t1.c1)\n SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n SemiJoin Estimated Filtering Rate: 1.0000\n -> Sort\n Sort Key: t0.c1\n -> Seq Scan on t0\n -> Index Scan on t1\n\nBut I don't know how common/likely that actually is. I'd expect to have\nan index on both sides, but perhaps I'm wrong.\n\nThis is why hashjoin seems like a more natural fit for the bloom filter,\nBTW, because there we have a guarantee the inner relation is processed\nfirst (so we know the bloom filter is fine and can be probed).\n\n> 7. Explore if the Bloom filter could be pushed down through a foreign\n> scan when the foreign server is capable of handling it – which could\n> be made true for postgres_fdw.\n\nNeat idea, but I suggest to leave this out of scope of this patch.\n\n> 8. Better explain command on the usage of bloom filters.\n> \n\nI don't know what improvements you have in mind exactly, but I think\nit'd be good to show which node is building/using a bloom filter, and\nthen also some basic stats (size, number of hash functions, FPR, number\nof probes, ...). 
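On the FPR point: the number itself is cheap to derive from the filter geometry using the standard approximation. A quick illustrative sketch (the sizes below are made up for illustration; this is not necessarily how lib/bloomfilter sizes things internally):

```python
import math

def bloom_fpr(m_bits, n_items, k_hashes):
    # Standard approximation of a Bloom filter's false-positive rate:
    #   p ~= (1 - e^(-k*n/m))^k
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# Shrinking the filter (fewer bits per element, as happens when the
# filter size is capped by work_mem) drives the FPR up quickly:
n = 1_000_000
for bits_per_elem in (16, 8, 4, 2):
    m = bits_per_elem * n
    k = max(1, round(bits_per_elem * math.log(2)))  # optimal k = (m/n) * ln(2)
    print(f"{bits_per_elem} bits/elem, k={k}: FPR ~= {bloom_fpr(m, n, k):.4f}")
```

So once memory pressure pushes the filter below a few bits per element, the false-positive rate gets large enough to start eroding the benefit, which is why the costing should account for it.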
This may require improvements to lib/bloomfilter, which\ncurrently does not expose some of the details.\n\n> This patch set is prepared by Marcus Ma, Lyu Pan and myself. Feedback\n> is appreciated.\n> \n\nAttached is a patch series with two \"review\" parts (0002 and 0004). I\nalready mentioned some of the stuff above, but a couple more points:\n\n1) Don't allocate memory directly through alloca() etc. Use palloc, i.e.\nrely on our memory context.\n\n2) It's customary to have \"PlannerInfo *root\" as the first parameter.\n\n3) For the \"debug\" logging, I'd suggest to do it the way TRACE_SORT\n(instead of inventing a bunch of dbg routines).\n\n4) I find the naming inconsistent, e.g. with respect to the surrounding\ncode (say, when everything around starts with Exec, maybe the new\nfunctions should too?). Also, various functions/variables say \"semijoin\"\nbut then we apply that to \"inner joins\" too.\n\n5) Do we really need estimate_distincts_remaining() to implement yet\nanother formula for estimating number of distinct groups, different from\nestimate_num_groups() does? Why?\n\n6) A number of new functions miss comments explaining the purpose, and\nit's not quite clear what the \"contract\" is. Also, some functions have\nnew parameters but the comment was not updated to reflect it.\n\n7) SemiJoinFilterExamineSlot is matching the relations by OID, but\nthat's wrong - if you do a self-join, both sides have the same OID. It\nneeds to match RT index (I believe scanrelid in Scan node is what this\nshould be looking at).\n\nThere's a couple more review comments in the patches, but those are\nminor and not worth discussing here - feel free to ask, if anything is\nnot clear enough (or if you disagree).\n\n\nI did a bunch of testing, after tweaking your SQL script.\n\nI changed the data generation a bit not to be so slow (instead of\nrelying on unnest of multiple large sets, I use one sequence and random\nto generate data). 
And I run the tests with different parameters (step,\nwork_mem, ...) driven by the attached shell script.\n\nAnd it quickly fails (on assert-enabled-build). I see two backtraces:\n\n1) bogus overlapping estimate (ratio > 1.0)\n...\n#4 0x0000000000c9d56b in ExceptionalCondition (conditionName=0xe43724\n\"inner_overlapping_ratio >= 0 && inner_overlapping_ratio <= 1\",\nerrorType=0xd33069 \"FailedAssertion\", fileName=0xe42bdb \"costsize.c\",\nlineNumber=7442) at assert.c:69\n#5 0x00000000008ed767 in evaluate_semijoin_filtering_rate\n(join_path=0x2fe79f0, equijoin_list=0x2fea6c0, root=0x2fe6b68,\nworkspace=0x7ffd87588a78, best_clause=0x7ffd875888cc,\nrows_filtered=0x7ffd875888c8) at costsize.c:7442\n\nSeems it's doing the math wrong, or does not expect some corner case.\n\n\n2) stuck spinlock in SemiJoinFilterFinishScan\n...\n#5 0x0000000000a85cb0 in s_lock_stuck (file=0xe68c8c \"lwlock.c\",\nline=907, func=0xe690a1 \"LWLockWaitListLock\") at s_lock.c:83\n#6 0x0000000000a85a8d in perform_spin_delay (status=0x7ffd8758b8e8) at\ns_lock.c:134\n#7 0x0000000000a771c3 in LWLockWaitListLock (lock=0x7e40a597c060) at\nlwlock.c:911\n#8 0x0000000000a76e93 in LWLockConflictsWithVar (lock=0x7e40a597c060,\nvalptr=0x7e40a597c048, oldval=1, newval=0x7e40a597c048,\nresult=0x7ffd8758b983) at lwlock.c:1580\n#9 0x0000000000a76ce9 in LWLockWaitForVar (lock=0x7e40a597c060,\nvalptr=0x7e40a597c048, oldval=1, newval=0x7e40a597c048) at lwlock.c:1638\n#10 0x000000000080aa55 in SemiJoinFilterFinishScan\n(semiJoinFilters=0x2e349b0, tableId=1253696, parallel_area=0x2e17388) at\nnodeMergejoin.c:2035\n\nThis only happens in parallel plans, I haven't looked at the details.\n\nI do recall parallel hash join was quite tricky exactly because there\nare issues with coordinating building the hash table (e.g. workers might\nget stuck due to waiting on shmem queues etc.), I wonder if this might\nbe something similar due to building the filter. 
But maybe it's\nsomething trivial.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 3 Oct 2022 18:14:13 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "Hello Zhihong Yu & Tomas Vondra,\n\nThank you so much for your review and feedback!\n\nWe made some updates based on previous feedback and attached the new\npatch set. Due to time constraints, we didn't get to resolve all the\ncomments, and we'll continue to improve this patch.\n\n> In this prototype, the cost model is based on an assumption that there is\n> a linear relationship between the performance gain from using a semijoin\n> filter and the estimated filtering rate:\n> % improvement to Merge Join cost = 0.83 * estimated filtering rate - 0.137.\n>\n> How were the coefficients (0.83 and 0.137) determined ?\n> I guess they were based on the results of running certain workload.\n\nRight, the coefficients (0.83 and 0.137) were determined based on some\npreliminary testing. The current costing model is pretty naive and\nwe'll work on a more robust costing model in future work.\n\n\n> I agree, in principle, although I think the current logic / formula is a\n> bit too crude and fitted to the simple data used in the test. 
I think\n> this needs to be formulated as a regular costing issue, considering\n> stuff like cost of the hash functions, and so on.\n>\n> I think this needs to do two things:\n>\n> 1) estimate the cost of building the bloom filter - This shall depend on\n> the number of rows in the inner relation, number/cost of the hash\n> functions (which may be higher for some data types), etc.\n>\n> 2) estimate improvement for the probing branch - Essentially, we need to\n> estimate how much we save by filtering some of the rows, but this also\n> neeeds to include the cost of probing the bloom filter.\n>\n> This will probably require some improvements to the lib/bloomfilter, in\n> order to estimate the false positive rate - this may matter a lot for\n> large data sets and small work_mem values. The bloomfilter library\n> simply reduces the size of the bloom filter, which increases the false\n> positive rate. At some point it'll start reducing the benefit.\n>\n\nThese suggestions make a lot of sense. The current costing model is\ndefinitely not good enough, and we plan to work on a more robust\ncosting model as we continue to improve the patch.\n\n\n> OK. Could also build the bloom filter in shared memory?\n>\n\nWe thought about this approach but didn't prefer this one because if\nall worker processes share the same bloom filter in shared memory, we\nneed to frequently lock and unlock the bloom filter to avoid race\nconditions. So we decided to have each worker process create its own\nbloom filter.\n\n\n> IMHO we shouldn't make too many conclusions from these examples. 
Yes, it\n> shows merge join can be improved, but for cases where a hashjoin works\n> better so we wouldn't use merge join anyway.\n>\n> I think we should try constructing examples where either merge join wins\n> already (and gets further improved by the bloom filter), or would lose\n> to hash join and the bloom filter improves it enough to win.\n>\n> AFAICS that requires a join of two large tables - large enough that hash\n> join would need to be batched, or pre-sorted inputs (which eliminates\n> the explicit Sort, which is the main cost in most cases).\n>\n> The current patch only works with sequential scans, which eliminates the\n> second (pre-sorted) option. So let's try the first one - can we invent\n> an example with a join of two large tables where a merge join would win?\n>\n> Can we find such example in existing benchmarks like TPC-H/TPC-DS.\n>\n\nAgreed. The current examples are only intended to show us that using\nbloom filters in merge join could improve the merge join performance\nin some cases. We are working on testing more examples where merge join\nwith a bloom filter could outperform hash join, which should be more\npersuasive.\n\n\n> The bloom filter is built by the first seqscan (on t0), and then used by\n> the second seqscan (on t1). But this only works because we always run\n> the t0 scan to completion (because we're feeding it into Sort) before we\n> start scanning t1.\n>\n> But when the scan on t1 switches to an index scan, it's over - we'd be\n> building the filter without being able to probe it, and when we finish\n> building it we no longer need it. So this seems pretty futile.\n>\n> It might still improve plans like\n>\n> -> Merge Join\n> Merge Cond: (t0.c1 = t1.c1)\n> SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n> SemiJoin Estimated Filtering Rate: 1.0000\n> -> Sort\n> Sort Key: t0.c1\n> -> Seq Scan on t0\n> -> Index Scan on t1\n>\n> But I don't know how common/likely that actually is. 
I'd expect to have\n> an index on both sides, but perhaps I'm wrong.\n>\n> This is why hashjoin seems like a more natural fit for the bloom filter,\n> BTW, because there we have a guarantee the inner relation is processed\n> first (so we know the bloom filter is fine and can be probed).\n>\n\nGreat observation. The bloom filter only works if the first SeqScan\nalways runs to completion before the second SeqScan starts.\nI guess one possible way to avoid futile bloom filter might be\nenforcing that the bloom filter only be used if both the outer/inner\nplans of the MergeJoin are Sort nodes, to guarantee the bloom filter\nis ready to use after processing one side of the join, but this may be\ntoo restrictive.\n\n\n> I don't know what improvements you have in mind exactly, but I think\n> it'd be good to show which node is building/using a bloom filter, and\n> then also some basic stats (size, number of hash functions, FPR, number\n> of probes, ...). This may require improvements to lib/bloomfilter, which\n> currently does not expose some of the details.\n>\n\nAlong with the new patch set, we have added information to display\nwhich node is building/using a bloom filter (as well as the\ncorresponding expressions), and some bloom filter basic stats. We'll\nadd more related information (e.g. FPR) as we modify lib/bloomfilter\nimplementation in future work.\n\n\nThanks again for the great reviews and they are really useful! More\nfeedback is always welcome and appreciated!\n\nRegards,\nLyu Pan\nAmazon RDS/Aurora for PostgreSQL", "msg_date": "Wed, 12 Oct 2022 15:36:00 -0700", "msg_from": "Lyu Pan <lyu.steve.pan@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Wed, Oct 12, 2022 at 3:36 PM Lyu Pan <lyu.steve.pan@gmail.com> wrote:\n\n> Hello Zhihong Yu & Tomas Vondra,\n>\n> Thank you so much for your review and feedback!\n>\n> We made some updates based on previous feedback and attached the new\n> patch set. Due to time constraints, we didn't get to resolve all the\n> comments, and we'll continue to improve this patch.\n>\n> > In this prototype, the cost model is based on an assumption that there is\n> > a linear relationship between the performance gain from using a semijoin\n> > filter and the estimated filtering rate:\n> > % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n> 0.137.\n> >\n> > How were the coefficients (0.83 and 0.137) determined ?\n> > I guess they were based on the results of running certain workload.\n>\n> Right, the coefficients (0.83 and 0.137) determined are based on some\n> preliminary testings. 
The current costing model is pretty naive and\n> we'll work on a more robust costing model in future work.\n>\n>\n> > I agree, in principle, although I think the current logic / formula is a\n> > bit too crude and fitted to the simple data used in the test. I think\n> > this needs to be formulated as a regular costing issue, considering\n> > stuff like cost of the hash functions, and so on.\n> >\n> > I think this needs to do two things:\n> >\n> > 1) estimate the cost of building the bloom filter - This shall depend on\n> > the number of rows in the inner relation, number/cost of the hash\n> > functions (which may be higher for some data types), etc.\n> >\n> > 2) estimate improvement for the probing branch - Essentially, we need to\n> > estimate how much we save by filtering some of the rows, but this also\n> > neeeds to include the cost of probing the bloom filter.\n> >\n> > This will probably require some improvements to the lib/bloomfilter, in\n> > order to estimate the false positive rate - this may matter a lot for\n> > large data sets and small work_mem values. The bloomfilter library\n> > simply reduces the size of the bloom filter, which increases the false\n> > positive rate. At some point it'll start reducing the benefit.\n> >\n>\n> These suggestions make a lot of sense. The current costing model is\n> definitely not good enough, and we plan to work on a more robust\n> costing model as we continue to improve the patch.\n>\n>\n> > OK. Could also build the bloom filter in shared memory?\n> >\n>\n> We thought about this approach but didn't prefer this one because if\n> all worker processes share the same bloom filter in shared memory, we\n> need to frequently lock and unlock the bloom filter to avoid race\n> conditions. So we decided to have each worker process create its own\n> bloom filter.\n>\n>\n> > IMHO we shouldn't make too many conclusions from these examples. 
Yes, it\n> > shows merge join can be improved, but for cases where a hashjoin works\n> > better so we wouldn't use merge join anyway.\n> >\n> > I think we should try constructing examples where either merge join wins\n> > already (and gets further improved by the bloom filter), or would lose\n> > to hash join and the bloom filter improves it enough to win.\n> >\n> > AFAICS that requires a join of two large tables - large enough that hash\n> > join would need to be batched, or pre-sorted inputs (which eliminates\n> > the explicit Sort, which is the main cost in most cases).\n> >\n> > The current patch only works with sequential scans, which eliminates the\n> > second (pre-sorted) option. So let's try the first one - can we invent\n> > an example with a join of two large tables where a merge join would win?\n> >\n> > Can we find such example in existing benchmarks like TPC-H/TPC-DS.\n> >\n>\n> Agreed. The current examples are only intended to show us that using\n> bloom filters in merge join could improve the merge join performance\n> in some cases. We are working on testing more examples that merge join\n> with bloom filter could out-perform hash join, which should be more\n> persuasive.\n>\n>\n> > The bloom filter is built by the first seqscan (on t0), and then used by\n> > the second seqscan (on t1). But this only works because we always run\n> > the t0 scan to completion (because we're feeding it into Sort) before we\n> > start scanning t1.\n> >\n> > But when the scan on t1 switches to an index scan, it's over - we'd be\n> > building the filter without being able to probe it, and when we finish\n> > building it we no longer need it. 
So this seems pretty futile.\n> >\n> > It might still improve plans like\n> >\n> > -> Merge Join\n> > Merge Cond: (t0.c1 = t1.c1)\n> > SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n> > SemiJoin Estimated Filtering Rate: 1.0000\n> > -> Sort\n> > Sort Key: t0.c1\n> > -> Seq Scan on t0\n> > -> Index Scan on t1\n> >\n> > But I don't know how common/likely that actually is. I'd expect to have\n> > an index on both sides, but perhaps I'm wrong.\n> >\n> > This is why hashjoin seems like a more natural fit for the bloom filter,\n> > BTW, because there we have a guarantee the inner relation is processed\n> > first (so we know the bloom filter is fine and can be probed).\n> >\n>\n> Great observation. The bloom filter only works if the first SeqScan\n> always runs to completion before the second SeqScan starts.\n> I guess one possible way to avoid futile bloom filter might be\n> enforcing that the bloom filter only be used if both the outer/inner\n> plans of the MergeJoin are Sort nodes, to guarantee the bloom filter\n> is ready to use after processing one side of the join, but this may be\n> too restrictive.\n>\n>\n> > I don't know what improvements you have in mind exactly, but I think\n> > it'd be good to show which node is building/using a bloom filter, and\n> > then also some basic stats (size, number of hash functions, FPR, number\n> > of probes, ...). This may require improvements to lib/bloomfilter, which\n> > currently does not expose some of the details.\n> >\n>\n> Along with the new patch set, we have added information to display\n> which node is building/using a bloom filter (as well as the\n> corresponding expressions), and some bloom filter basic stats. We'll\n> add more related information (e.g. FPR) as we modify lib/bloomfilter\n> implementation in future work.\n>\n>\n> Thanks again for the great reviews and they are really useful! 
More\n> feedback is always welcome and appreciated!\n>\n> Regards,\n> Lyu Pan\n> Amazon RDS/Aurora for PostgreSQL\n>\n> Hi,\nFor v1-0001-Support-semijoin-filter-in-the-planner-optimizer.patch :\n\n+\n+ /* want at least 1000 rows_filtered to avoid any nasty edge cases */\n+ if (force_mergejoin_semijoin_filter ||\n+ (filtering_rate >= 0.35 && rows_filtered > 1000 &&\nbest_filter_clause >= 0))\n\nCurrently rows_filtered is compared with a constant, should the constant be\nmade configurable ?\n\n+ * improvement of 19.5%. This equation also concludes thata a\n17%\n\nTypo: thata\n\n+ inner_md_array = palloc(sizeof(SemiJoinFilterExprMetadata) * num_md);\n+ if (!outer_md_array || !inner_md_array)\n+ {\n+ return 0; /* a stack array allocation failed */\n\nShould the allocated array be freed before returning ?\n\nFor verify_valid_pushdown(), the parameters in comment don't match the\nactual parameters. Please modify the comment.\n\nFor is_fk_pk(), since the outer_key_list is fixed across the iterations, I\nthink all_vars can be computed outside of expressions_match_foreign_key().\n\nCheers", "msg_date": "Wed, 12 Oct 2022 16:35:14 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Wed, Oct 12, 2022 at 4:35 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Oct 12, 2022 at 3:36 PM Lyu Pan <lyu.steve.pan@gmail.com> wrote:\n>\n>> Hello Zhihong Yu & Tomas Vondra,\n>>\n>> Thank you so much for your review and feedback!\n>>\n>> We made some updates based on previous feedback and attached the new\n>> patch set. 
Due to time constraints, we didn't get to resolve all the\n>> comments, and we'll continue to improve this patch.\n>>\n>> > In this prototype, the cost model is based on an assumption that there\n>> is\n>> > a linear relationship between the performance gain from using a semijoin\n>> > filter and the estimated filtering rate:\n>> > % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n>> 0.137.\n>> >\n>> > How were the coefficients (0.83 and 0.137) determined ?\n>> > I guess they were based on the results of running certain workload.\n>>\n>> Right, the coefficients (0.83 and 0.137) determined are based on some\n>> preliminary testings. The current costing model is pretty naive and\n>> we'll work on a more robust costing model in future work.\n>>\n>>\n>> > I agree, in principle, although I think the current logic / formula is a\n>> > bit too crude and fitted to the simple data used in the test. I think\n>> > this needs to be formulated as a regular costing issue, considering\n>> > stuff like cost of the hash functions, and so on.\n>> >\n>> > I think this needs to do two things:\n>> >\n>> > 1) estimate the cost of building the bloom filter - This shall depend on\n>> > the number of rows in the inner relation, number/cost of the hash\n>> > functions (which may be higher for some data types), etc.\n>> >\n>> > 2) estimate improvement for the probing branch - Essentially, we need to\n>> > estimate how much we save by filtering some of the rows, but this also\n>> > neeeds to include the cost of probing the bloom filter.\n>> >\n>> > This will probably require some improvements to the lib/bloomfilter, in\n>> > order to estimate the false positive rate - this may matter a lot for\n>> > large data sets and small work_mem values. The bloomfilter library\n>> > simply reduces the size of the bloom filter, which increases the false\n>> > positive rate. At some point it'll start reducing the benefit.\n>> >\n>>\n>> These suggestions make a lot of sense. 
The current costing model is\n>> definitely not good enough, and we plan to work on a more robust\n>> costing model as we continue to improve the patch.\n>>\n>>\n>> > OK. Could also build the bloom filter in shared memory?\n>> >\n>>\n>> We thought about this approach but didn't prefer this one because if\n>> all worker processes share the same bloom filter in shared memory, we\n>> need to frequently lock and unlock the bloom filter to avoid race\n>> conditions. So we decided to have each worker process create its own\n>> bloom filter.\n>>\n>>\n>> > IMHO we shouldn't make too many conclusions from these examples. Yes, it\n>> > shows merge join can be improved, but for cases where a hashjoin works\n>> > better so we wouldn't use merge join anyway.\n>> >\n>> > I think we should try constructing examples where either merge join wins\n>> > already (and gets further improved by the bloom filter), or would lose\n>> > to hash join and the bloom filter improves it enough to win.\n>> >\n>> > AFAICS that requires a join of two large tables - large enough that hash\n>> > join would need to be batched, or pre-sorted inputs (which eliminates\n>> > the explicit Sort, which is the main cost in most cases).\n>> >\n>> > The current patch only works with sequential scans, which eliminates the\n>> > second (pre-sorted) option. So let's try the first one - can we invent\n>> > an example with a join of two large tables where a merge join would win?\n>> >\n>> > Can we find such example in existing benchmarks like TPC-H/TPC-DS.\n>> >\n>>\n>> Agreed. The current examples are only intended to show us that using\n>> bloom filters in merge join could improve the merge join performance\n>> in some cases. We are working on testing more examples that merge join\n>> with bloom filter could out-perform hash join, which should be more\n>> persuasive.\n>>\n>>\n>> > The bloom filter is built by the first seqscan (on t0), and then used by\n>> > the second seqscan (on t1). 
But this only works because we always run\n>> > the t0 scan to completion (because we're feeding it into Sort) before we\n>> > start scanning t1.\n>> >\n>> > But when the scan on t1 switches to an index scan, it's over - we'd be\n>> > building the filter without being able to probe it, and when we finish\n>> > building it we no longer need it. So this seems pretty futile.\n>> >\n>> > It might still improve plans like\n>> >\n>> > -> Merge Join\n>> > Merge Cond: (t0.c1 = t1.c1)\n>> > SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n>> > SemiJoin Estimated Filtering Rate: 1.0000\n>> > -> Sort\n>> > Sort Key: t0.c1\n>> > -> Seq Scan on t0\n>> > -> Index Scan on t1\n>> >\n>> > But I don't know how common/likely that actually is. I'd expect to have\n>> > an index on both sides, but perhaps I'm wrong.\n>> >\n>> > This is why hashjoin seems like a more natural fit for the bloom filter,\n>> > BTW, because there we have a guarantee the inner relation is processed\n>> > first (so we know the bloom filter is fine and can be probed).\n>> >\n>>\n>> Great observation. The bloom filter only works if the first SeqScan\n>> always runs to completion before the second SeqScan starts.\n>> I guess one possible way to avoid futile bloom filter might be\n>> enforcing that the bloom filter only be used if both the outer/inner\n>> plans of the MergeJoin are Sort nodes, to guarantee the bloom filter\n>> is ready to use after processing one side of the join, but this may be\n>> too restrictive.\n>>\n>>\n>> > I don't know what improvements you have in mind exactly, but I think\n>> > it'd be good to show which node is building/using a bloom filter, and\n>> > then also some basic stats (size, number of hash functions, FPR, number\n>> > of probes, ...). 
This may require improvements to lib/bloomfilter, which\n>> > currently does not expose some of the details.\n>> >\n>>\n>> Along with the new patch set, we have added information to display\n>> which node is building/using a bloom filter (as well as the\n>> corresponding expressions), and some bloom filter basic stats. We'll\n>> add more related information (e.g. FPR) as we modify lib/bloomfilter\n>> implementation in future work.\n>>\n>>\n>> Thanks again for the great reviews and they are really useful! More\n>> feedback is always welcome and appreciated!\n>>\n>> Regards,\n>> Lyu Pan\n>> Amazon RDS/Aurora for PostgreSQL\n>>\n>> Hi,\n> For v1-0001-Support-semijoin-filter-in-the-planner-optimizer.patch :\n>\n> +\n> + /* want at least 1000 rows_filtered to avoid any nasty edge cases\n> */\n> + if (force_mergejoin_semijoin_filter ||\n> + (filtering_rate >= 0.35 && rows_filtered > 1000 &&\n> best_filter_clause >= 0))\n>\n> Currently rows_filtered is compared with a constant, should the constant\n> be made configurable ?\n>\n> + * improvement of 19.5%. This equation also concludes thata a\n> 17%\n>\n> Typo: thata\n>\n> + inner_md_array = palloc(sizeof(SemiJoinFilterExprMetadata) * num_md);\n> + if (!outer_md_array || !inner_md_array)\n> + {\n> + return 0; /* a stack array allocation failed */\n>\n> Should the allocated array be freed before returning ?\n>\n> For verify_valid_pushdown(), the parameters in comment don't match the\n> actual parameters. Please modify the comment.\n>\n> For is_fk_pk(), since the outer_key_list is fixed across the iterations, I\n> think all_vars can be computed outside of expressions_match_foreign_key().\n>\n> Cheers\n>\n\nContinuing\nwith v1-0001-Support-semijoin-filter-in-the-planner-optimizer.patch\n\nFor get_switched_clauses(), the returned List contains all the clauses. 
Yet\nthe name suggests that only switched clauses are returned.\nPlease rename the method to adjust_XX or rearrange_YY so that it is less\nlikely to cause confusion.\n\nFor depth_of_semijoin_target(), idx is only used inside the `if (num_exprs\n== get_appendrel_occluded_references(` block, you can move its declaration\ninside that if block.\n\n+ outside_subq_rte = root->simple_rte_array[outside_subq_var->varno];\n+\n+ /* System Vars have varattno < 0, don't bother */\n+ if (outside_subq_var->varattno <= 0)\n+ return 0;\n\nSince the check on outside_subq_var->varattno doesn't depend on\noutside_subq_rte, you can move the assignment to outside_subq_rte after the\nif check.\n\nCheers", "msg_date": "Thu, 13 Oct 2022 07:30:11 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" }, { "msg_contents": "On Thu, Oct 13, 2022 at 7:30 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Oct 12, 2022 at 4:35 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>>\n>>\n>> On Wed, Oct 12, 2022 at 3:36 PM Lyu Pan <lyu.steve.pan@gmail.com> wrote:\n>>\n>>> Hello Zhihong Yu & Tomas Vondra,\n>>>\n>>> Thank you so much for your review and feedback!\n>>>\n>>> We made some updates based on previous feedback and attached the new\n>>> patch set. Due to time constraints, we didn't get to resolve all the\n>>> comments, and we'll continue to improve this patch.\n>>>\n>>> > In this prototype, the cost model is based on an assumption that there\n>>> is\n>>> > a linear relationship between the performance gain from using a\n>>> semijoin\n>>> > filter and the estimated filtering rate:\n>>> > % improvement to Merge Join cost = 0.83 * estimated filtering rate -\n>>> 0.137.\n>>> >\n>>> > How were the coefficients (0.83 and 0.137) determined ?\n>>> > I guess they were based on the results of running certain workload.\n>>>\n>>> Right, the coefficients (0.83 and 0.137) determined are based on some\n>>> preliminary testings. 
The current costing model is pretty naive and\n>>> we'll work on a more robust costing model in future work.\n>>>\n>>>\n>>> > I agree, in principle, although I think the current logic / formula is\n>>> a\n>>> > bit too crude and fitted to the simple data used in the test. I think\n>>> > this needs to be formulated as a regular costing issue, considering\n>>> > stuff like cost of the hash functions, and so on.\n>>> >\n>>> > I think this needs to do two things:\n>>> >\n>>> > 1) estimate the cost of building the bloom filter - This shall depend\n>>> on\n>>> > the number of rows in the inner relation, number/cost of the hash\n>>> > functions (which may be higher for some data types), etc.\n>>> >\n>>> > 2) estimate improvement for the probing branch - Essentially, we need\n>>> to\n>>> > estimate how much we save by filtering some of the rows, but this also\n>>> > neeeds to include the cost of probing the bloom filter.\n>>> >\n>>> > This will probably require some improvements to the lib/bloomfilter, in\n>>> > order to estimate the false positive rate - this may matter a lot for\n>>> > large data sets and small work_mem values. The bloomfilter library\n>>> > simply reduces the size of the bloom filter, which increases the false\n>>> > positive rate. At some point it'll start reducing the benefit.\n>>> >\n>>>\n>>> These suggestions make a lot of sense. The current costing model is\n>>> definitely not good enough, and we plan to work on a more robust\n>>> costing model as we continue to improve the patch.\n>>>\n>>>\n>>> > OK. Could also build the bloom filter in shared memory?\n>>> >\n>>>\n>>> We thought about this approach but didn't prefer this one because if\n>>> all worker processes share the same bloom filter in shared memory, we\n>>> need to frequently lock and unlock the bloom filter to avoid race\n>>> conditions. 
So we decided to have each worker process create its own\n>>> bloom filter.\n>>>\n>>>\n>>> > IMHO we shouldn't make too many conclusions from these examples. Yes,\n>>> it\n>>> > shows merge join can be improved, but for cases where a hashjoin works\n>>> > better so we wouldn't use merge join anyway.\n>>> >\n>>> > I think we should try constructing examples where either merge join\n>>> wins\n>>> > already (and gets further improved by the bloom filter), or would lose\n>>> > to hash join and the bloom filter improves it enough to win.\n>>> >\n>>> > AFAICS that requires a join of two large tables - large enough that\n>>> hash\n>>> > join would need to be batched, or pre-sorted inputs (which eliminates\n>>> > the explicit Sort, which is the main cost in most cases).\n>>> >\n>>> > The current patch only works with sequential scans, which eliminates\n>>> the\n>>> > second (pre-sorted) option. So let's try the first one - can we invent\n>>> > an example with a join of two large tables where a merge join would\n>>> win?\n>>> >\n>>> > Can we find such example in existing benchmarks like TPC-H/TPC-DS.\n>>> >\n>>>\n>>> Agreed. The current examples are only intended to show us that using\n>>> bloom filters in merge join could improve the merge join performance\n>>> in some cases. We are working on testing more examples that merge join\n>>> with bloom filter could out-perform hash join, which should be more\n>>> persuasive.\n>>>\n>>>\n>>> > The bloom filter is built by the first seqscan (on t0), and then used\n>>> by\n>>> > the second seqscan (on t1). But this only works because we always run\n>>> > the t0 scan to completion (because we're feeding it into Sort) before\n>>> we\n>>> > start scanning t1.\n>>> >\n>>> > But when the scan on t1 switches to an index scan, it's over - we'd be\n>>> > building the filter without being able to probe it, and when we finish\n>>> > building it we no longer need it. 
So this seems pretty futile.\n>>> >\n>>> > It might still improve plans like\n>>> >\n>>> > -> Merge Join\n>>> > Merge Cond: (t0.c1 = t1.c1)\n>>> > SemiJoin Filter Created Based on: (t0.c1 = t1.c1)\n>>> > SemiJoin Estimated Filtering Rate: 1.0000\n>>> > -> Sort\n>>> > Sort Key: t0.c1\n>>> > -> Seq Scan on t0\n>>> > -> Index Scan on t1\n>>> >\n>>> > But I don't know how common/likely that actually is. I'd expect to have\n>>> > an index on both sides, but perhaps I'm wrong.\n>>> >\n>>> > This is why hashjoin seems like a more natural fit for the bloom\n>>> filter,\n>>> > BTW, because there we have a guarantee the inner relation is processed\n>>> > first (so we know the bloom filter is fine and can be probed).\n>>> >\n>>>\n>>> Great observation. The bloom filter only works if the first SeqScan\n>>> always runs to completion before the second SeqScan starts.\n>>> I guess one possible way to avoid futile bloom filter might be\n>>> enforcing that the bloom filter only be used if both the outer/inner\n>>> plans of the MergeJoin are Sort nodes, to guarantee the bloom filter\n>>> is ready to use after processing one side of the join, but this may be\n>>> too restrictive.\n>>>\n>>>\n>>> > I don't know what improvements you have in mind exactly, but I think\n>>> > it'd be good to show which node is building/using a bloom filter, and\n>>> > then also some basic stats (size, number of hash functions, FPR, number\n>>> > of probes, ...). This may require improvements to lib/bloomfilter,\n>>> which\n>>> > currently does not expose some of the details.\n>>> >\n>>>\n>>> Along with the new patch set, we have added information to display\n>>> which node is building/using a bloom filter (as well as the\n>>> corresponding expressions), and some bloom filter basic stats. We'll\n>>> add more related information (e.g. FPR) as we modify lib/bloomfilter\n>>> implementation in future work.\n>>>\n>>>\n>>> Thanks again for the great reviews and they are really useful! 
More\n>>> feedback is always welcome and appreciated!\n>>>\n>>> Regards,\n>>> Lyu Pan\n>>> Amazon RDS/Aurora for PostgreSQL\n>>>\n>>> Hi,\n>> For v1-0001-Support-semijoin-filter-in-the-planner-optimizer.patch :\n>>\n>> +\n>> + /* want at least 1000 rows_filtered to avoid any nasty edge cases\n>> */\n>> + if (force_mergejoin_semijoin_filter ||\n>> + (filtering_rate >= 0.35 && rows_filtered > 1000 &&\n>> best_filter_clause >= 0))\n>>\n>> Currently rows_filtered is compared with a constant, should the constant\n>> be made configurable ?\n>>\n>> + * improvement of 19.5%. This equation also concludes thata a\n>> 17%\n>>\n>> Typo: thata\n>>\n>> + inner_md_array = palloc(sizeof(SemiJoinFilterExprMetadata) * num_md);\n>> + if (!outer_md_array || !inner_md_array)\n>> + {\n>> + return 0; /* a stack array allocation failed */\n>>\n>> Should the allocated array be freed before returning ?\n>>\n>> For verify_valid_pushdown(), the parameters in comment don't match the\n>> actual parameters. Please modify the comment.\n>>\n>> For is_fk_pk(), since the outer_key_list is fixed across the iterations,\n>> I think all_vars can be computed outside of expressions_match_foreign_key().\n>>\n>> Cheers\n>>\n>\n> Continuing\n> with v1-0001-Support-semijoin-filter-in-the-planner-optimizer.patch\n>\n> For get_switched_clauses(), the returned List contains all the clauses.\n> Yet the name suggests that only switched clauses are returned.\n> Please rename the method to adjust_XX or rearrange_YY so that it is less\n> likely to cause confusion.\n>\n> For depth_of_semijoin_target(), idx is only used inside the `if (num_exprs\n> == get_appendrel_occluded_references(` block, you can move its declaration\n> inside that if block.\n>\n> + outside_subq_rte = root->simple_rte_array[outside_subq_var->varno];\n> +\n> + /* System Vars have varattno < 0, don't bother */\n> + if (outside_subq_var->varattno <= 0)\n> + return 0;\n>\n> Since the check on outside_subq_var->varattno doesn't depend on\n> 
outside_subq_rte, you can move the assignment to outside_subq_rte after the\n> if check.\n>\n> Cheers\n>\n\nFor v1-0002-Support-semijoin-filter-in-the-executor-for-non-para.patch, I\nhave a few comments.\n\n+\n+ if (IsA(node, SeqScanState) && ((SeqScanState *)\nnode)->apply_semijoin_filter\n+ && !ExecScanUsingSemiJoinFilter((SeqScanState *) node,\necontext))\n+ {\n+ /* slot did not pass SemiJoinFilter, so skipping it. */\n+ ResetExprContext(econtext);\n+ continue;\n+ }\n+ return projectedSlot;\n\nSince projectedSlot is only returned if slot passes SemiJoinFilter, you can\nmove the call to `ExecProject` after the above if block.\n\n+ List *semijoin_filters; /* SemiJoinFilterJoinNodeState */\n+ List *sj_scan_data; /* SemiJoinFilterScanNodeState */\n\nBetter use plural form in the comments.\n\nCheers", "msg_date": "Thu, 13 Oct 2022 09:33:01 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Bloom filter Pushdown Optimization for Merge Join" } ]
[ { "msg_contents": "Hi,\n\nI created a postgers_fdw server lookback as the test does. Then run the following SQLs\n\n\ncreate table t1(c0 int);\n\ninsert into t1 values(1);\n\ncreate foreign table ft1(\nc0 int\n) SERVER loopback OPTIONS (schema_name 'public', table_name 't1');\n\n\nThen started a transaction that runs queries on both t1 and ft1 tables:\n\nbegin;\n\nselect * from ft1;\nc0\n----\n 1\n(1 row)\n\nselect * from t1;\n c0\n----\n 1\n(1 row)\n\ninsert into ft1 values(2);\n\nselect * from ft1;\n c0\n----\n 1\n 2\n(2 rows)\n\n\nselect * from t1;\n c0\n----\n 1\n(1 row)\n\n\nThough t1 and ft1 actually share the same data, and in the same transaction, they have different transaction ids and different snapshots, and queries are run in different processes, they see different data.\n\nThen I attempted to run the update on them\n\nupdate t1 set c0=8;\n\nupdate ft1 set c0=9;\n\n\nThen the transaction got stuck. Should the \"lookback\" server be disabled in the postgres_fdw?\n\nThoughts?\n\nthanks,\nXiaoran", "msg_date": "Sat, 1 Oct 2022 04:02:09 +0000", "msg_from": "Xiaoran Wang <wxiaoran@vmware.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: dead lock in a same transaction when postgres_fdw\n server is lookback" }, { "msg_contents": "On Sat, 2022-10-01 at 04:02 +0000, Xiaoran Wang wrote:\n> I created a postgers_fdw server lookback as the test does. Then run the following SQLs\n> \n> [create a foreign server via loopback and manipulate the same data locally and via foreign table]\n> \n> Then the transaction got stuck. 
Should the \"lookback\" server be disabled in the postgres_fdw?\n\nIt shouldn't; there are good use cases for that (\"autonomous transactions\").\nAt most, some cautioning documentation could be added, but I am not convinced\nthat even that is necessary.\n\nI'd say that this is a pretty obvious case of pilot error.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 01 Oct 2022 16:38:52 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: dead lock in a same transaction when postgres_fdw\n server is lookback" } ]
[ { "msg_contents": "Hi,\n\nSee e.g. https://cirrus-ci.com/task/4682373060100096\n2022-10-01 15:15:21.849 UTC [41962][postmaster] LOG: could not bind IPv4 address \"127.0.0.1\": Address already in use\n2022-10-01 15:15:21.849 UTC [41962][postmaster] HINT: Is another postmaster already running on port 57003? If not, wait a few seconds and retry.\n2022-10-01 15:15:21.849 UTC [41962][postmaster] WARNING: could not create listen socket for \"127.0.0.1\"\n\nI downloaded all test logs and grepping for 57003 shows the problem:\n\nbuild/testrun/ldap/001_auth/log/regress_log_001_auth\n3:# Checking port 57003\n4:# Found port 57003\n22:# Running: /usr/sbin/slapd -f /tmp/cirrus-ci-build/build/testrun/ldap/001_auth/data/slapd.conf -h ldap://localhost:57002 ldaps://localhost:57003\n\nbuild/testrun/ldap/001_auth/log/001_auth_node.log\n253:2022-10-01 15:15:25.103 UTC [42574][client backend] [[unknown]][3/1:0] DETAIL: Connection matched pg_hba.conf line 1: \"local all all ldap ldapurl=\"ldaps://localhost:57003/dc=example,dc=net??sub?(uid=$username)\" ldaptls=1\"\n\nbuild/testrun/ssl/001_ssltests/log/regress_log_001_ssltests\n2:# Checking port 57003\n3:# Found port 57003\n8:Connection string: port=57003 host=/tmp/1k5yhaWLQ1\n\nbuild/testrun/ssl/001_ssltests/log/001_ssltests_primary.log\n2:2022-10-01 15:15:20.668 UTC [41740][postmaster] LOG: listening on Unix socket \"/tmp/1k5yhaWLQ1/.s.PGSQL.57003\"\n58:2022-10-01 15:15:21.849 UTC [41962][postmaster] HINT: Is another postmaster already running on port 57003? If not, wait a few seconds and retry.\n\n\nI.e. we chose the same port for slapd as part of ldap's 001_auth.pl as for the\npostgres instance of ssl's 001_ssltests.pl.\n\n\nI don't think get_free_port() has any chance of being reliable as is. 
It's\nfundamentally racy just among concurrently running tests, without even\nconsidering things external to the tests (given it's using the range of ports\nauto-assigned for client tcp ports...).\n\n\nThe current code is from 803466b6ffa, which said:\n This isn't 100% bulletproof, since\n conceivably something else on the machine could grab the port between\n the time we check and the time we actually start the server. But that's\n a pretty short window, so in practice this should be good enough.\n\nbut I've seen this fail a couple times, so I suspect it's unfortunately not\ngood enough.\n\n\nI can see a few potential ways of improving the situation:\n\n1) Improve the port we start the search for a unique port from. We use the\n following bit right now:\n\n\t# Tracking of last port value assigned to accelerate free port lookup.\n\t$last_port_assigned = int(rand() * 16384) + 49152;\n\n It *might* be less likely that we hit conflicts if we start the search on a\n port based on the pid, rather than rand(). We e.g., could start at\n something like (($$ * 16) % 16384 + 49152), giving a decent likelihood that\n each test has 16 free ports.\n\n Perhaps also worth to increase the range of ports searched?\n\n\n2) Use a lockfile containing a pid to protect the choice of a port within a\n build directory. Before accepting a port get_free_port() would check if the\n a lockfile exists for the port and if so, if the test using it is still\n alive. That will protect against racyness between multiple tests inside a\n build directory, but won't protect against races between multiple builds\n running concurrently on a machine (e.g. 
on a buildfarm host)\n\n\n3) We could generate unique port ranges for each test and thus remove the\n chance of conflicts within a builddir and substantially reduce the\n likelihood of conflicts between build directories (as the conflict would be\n between two tests in different build directories, rather than all tests).\n\n This would be easy to do in the meson build, but nontrivial in autoconf\n afaics.\n\n\n4) Add retries to the tests that need get_free_port(). That seems hard to get\n right, because every single restart of postgres (or some other service that\n we used get_free_port()) provides the potential for a new conflict.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Oct 2022 09:49:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-10-02 Su 12:49, Andres Freund wrote:\n>\n> 2) Use a lockfile containing a pid to protect the choice of a port within a\n> build directory. Before accepting a port get_free_port() would check if the\n> a lockfile exists for the port and if so, if the test using it is still\n> alive. That will protect against racyness between multiple tests inside a\n> build directory, but won't protect against races between multiple builds\n> running concurrently on a machine (e.g. on a buildfarm host)\n>\n>\n\nI think this is the right solution. To deal with the last issue, the\nlockdir should be overrideable, like this:\n\n\n  my $port_lockdir = $ENV{PG_PORT_LOCKDIR} || $build_dir;\n\n\nBuildfarm animals could set this, probably to the global lockdir (see\nrun_branches.pl). 
Prior to that, buildfarm owners could do that manually.\n\n\nThere are numerous examples of lockfile code in the buildfarm sources.\nI'll try to hack something up.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 4 Oct 2022 01:39:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-10-04 Tu 01:39, Andrew Dunstan wrote:\n> On 2022-10-02 Su 12:49, Andres Freund wrote:\n>> 2) Use a lockfile containing a pid to protect the choice of a port within a\n>> build directory. Before accepting a port get_free_port() would check if the\n>> a lockfile exists for the port and if so, if the test using it is still\n>> alive. That will protect against racyness between multiple tests inside a\n>> build directory, but won't protect against races between multiple builds\n>> running concurrently on a machine (e.g. on a buildfarm host)\n>>\n>>\n> I think this is the right solution. To deal with the last issue, the\n> lockdir should be overrideable, like this:\n>\n>\n>   my $port_lockdir = $ENV{PG_PORT_LOCKDIR} || $build_dir;\n>\n>\n> Buildfarm animals could set this, probably to the global lockdir (see\n> run_branches.pl). Prior to that, buildfarm owners could do that manually.\n>\n>\n\nThe problem here is that Cluster.pm doesn't have any idea where the\nbuild directory is, or even if there is one present at all.\n\nmeson does appear to let us know that, however, with MESON_BUILD_ROOT,\nso probably the best thing would be to use PG_PORT_LOCKDIR if it's set,\notherwise MESON_BUILD_ROOT if it's set, otherwise the tmp_check\ndirectory. 
If we want to backport to the make system we could export\ntop_builddir somewhere.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 17 Oct 2022 10:59:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 2022-10-17 Mo 10:59, Andrew Dunstan wrote:\n> On 2022-10-04 Tu 01:39, Andrew Dunstan wrote:\n>> On 2022-10-02 Su 12:49, Andres Freund wrote:\n>>> 2) Use a lockfile containing a pid to protect the choice of a port within a\n>>> build directory. Before accepting a port get_free_port() would check if the\n>>> a lockfile exists for the port and if so, if the test using it is still\n>>> alive. That will protect against racyness between multiple tests inside a\n>>> build directory, but won't protect against races between multiple builds\n>>> running concurrently on a machine (e.g. on a buildfarm host)\n>>>\n>>>\n>> I think this is the right solution. To deal with the last issue, the\n>> lockdir should be overrideable, like this:\n>>\n>>\n>>   my $port_lockdir = $ENV{PG_PORT_LOCKDIR} || $build_dir;\n>>\n>>\n>> Buildfarm animals could set this, probably to the global lockdir (see\n>> run_branches.pl). Prior to that, buildfarm owners could do that manually.\n>>\n>>\n> The problem here is that Cluster.pm doesn't have any idea where the\n> build directory is, or even if there is one present at all.\n>\n> meson does appear to let us know that, however, with MESON_BUILD_ROOT,\n> so probably the best thing would be to use PG_PORT_LOCKDIR if it's set,\n> otherwise MESON_BUILD_ROOT if it's set, otherwise the tmp_check\n> directory. 
If we want to backport to the make system we could export\n> top_builddir somewhere.\n>\n>\n\nHere's a patch which I think does the right thing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 2 Nov 2022 15:09:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 2022-11-02 We 15:09, Andrew Dunstan wrote:\n> On 2022-10-17 Mo 10:59, Andrew Dunstan wrote:\n>> On 2022-10-04 Tu 01:39, Andrew Dunstan wrote:\n>>> On 2022-10-02 Su 12:49, Andres Freund wrote:\n>>>> 2) Use a lockfile containing a pid to protect the choice of a port within a\n>>>> build directory. Before accepting a port get_free_port() would check if the\n>>>> a lockfile exists for the port and if so, if the test using it is still\n>>>> alive. That will protect against racyness between multiple tests inside a\n>>>> build directory, but won't protect against races between multiple builds\n>>>> running concurrently on a machine (e.g. on a buildfarm host)\n>>>>\n>>>>\n>>> I think this is the right solution. To deal with the last issue, the\n>>> lockdir should be overrideable, like this:\n>>>\n>>>\n>>>   my $port_lockdir = $ENV{PG_PORT_LOCKDIR} || $build_dir;\n>>>\n>>>\n>>> Buildfarm animals could set this, probably to the global lockdir (see\n>>> run_branches.pl). Prior to that, buildfarm owners could do that manually.\n>>>\n>>>\n>> The problem here is that Cluster.pm doesn't have any idea where the\n>> build directory is, or even if there is one present at all.\n>>\n>> meson does appear to let us know that, however, with MESON_BUILD_ROOT,\n>> so probably the best thing would be to use PG_PORT_LOCKDIR if it's set,\n>> otherwise MESON_BUILD_ROOT if it's set, otherwise the tmp_check\n>> directory. 
If we want to backport to the make system we could export\n>> top_builddir somewhere.\n>>\n>>\n> Here's a patch which I think does the right thing.\n>\n>\n>\n\nUpdated with a couple of thinkos fixed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 3 Nov 2022 14:16:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "Hi,\n\nOn 2022-11-03 14:16:51 -0400, Andrew Dunstan wrote:\n> > Here's a patch which I think does the right thing.\n> Updated with a couple of thinkos fixed.\n\nThanks!\n\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index d80134b26f..aceca353d3 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -93,9 +93,9 @@ use warnings;\n> \n> use Carp;\n> use Config;\n> -use Fcntl qw(:mode);\n> +use Fcntl qw(:mode :flock :seek O_CREAT O_RDWR);\n\nDoes this do anything useful on windows?\n\n\n\n> # the minimum version we believe to be compatible with this package without\n> # subclassing.\n> @@ -140,6 +140,27 @@ INIT\n> \n> \t# Tracking of last port value assigned to accelerate free port lookup.\n> \t$last_port_assigned = int(rand() * 16384) + 49152;\n> +\n> +\t# Set the port lock directory\n> +\n> +\t# If we're told to use a directory (e.g. from a buildfarm client)\n> +\t# explicitly, use that\n> +\t$portdir = $ENV{PG_TEST_PORT_DIR};\n> +\t# Otherwise, try to use a directory at the top of the build tree\n> +\tif (! $portdir && $ENV{MESON_BUILD_ROOT})\n> +\t{\n> +\t\t$portdir = $ENV{MESON_BUILD_ROOT} . '/portlock'\n> +\t}\n> +\telsif (! 
$portdir && ($ENV{TESTDATADIR} || \"\") =~ /\\W(src|contrib)\\W/p)\n> +\t{\n> +\t\tmy $dir = ${^PREMATCH};\n> +\t\t$portdir = \"$dir/portlock\" if $dir;\n> +\t}\n> +\t# As a last resort use a directory under tmp_check\n> +\t$portdir ||= $PostgreSQL::Test::Utils::tmp_check . '/portlock';\n> +\t$portdir =~ s!\\\\!/!g;\n> +\t# Make sure the directory exists\n> +\tmkpath($portdir) unless -d $portdir;\n> }\n> \n> =pod\n> @@ -1505,6 +1526,7 @@ sub get_free_port\n> \t\t\t\t\tlast;\n> \t\t\t\t}\n> \t\t\t}\n> +\t\t\t$found = _reserve_port($port) if $found;\n> \t\t}\n> \t}\n> \n> @@ -1535,6 +1557,38 @@ sub can_bind\n> \treturn $ret;\n> }\n> \n> +# Internal routine to reserve a port number\n> +# Returns 1 if successful, 0 if port is already reserved.\n> +sub _reserve_port\n> +{\n> +\tmy $port = shift;\n> +\t# open in rw mode so we don't have to reopen it and lose the lock\n> +\tsysopen(my $portfile, \"$portdir/$port.rsv\", O_RDWR|O_CREAT)\n> +\t || die \"opening port file\";\n> +\t# take an exclusive lock to avoid concurrent access\n> +\tflock($portfile, LOCK_EX) || die \"locking port file\";\n> +\t# see if someone else has or had a reservation of this port\n> +\tmy $pid = <$portfile>;\n> +\tchomp $pid;\n> +\tif ($pid +0 > 0)\n\nGotta love perl.\n\n\n> +\t{\n> +\t\tif (kill 0, $pid)\n\nDoes this work on windows?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 5 Nov 2022 11:36:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-11-05 Sa 14:36, Andres Freund wrote:\n>> \n>> use Carp;\n>> use Config;\n>> -use Fcntl qw(:mode);\n>> +use Fcntl qw(:mode :flock :seek O_CREAT O_RDWR);\n> Does this do anything useful on windows?\n\n\nAll we're doing here on Windows and elsewhere is getting access to some\nconstants used in calls to flock(), seek() and sysopen(). 
It's not\nactually doing anything else anywhere.\n\n\n>\n>> +\tif ($pid +0 > 0)\n> Gotta love perl.\n\n\nThink of it as a typecast.\n\n\n>\n>\n>> +\t{\n>> +\t\tif (kill 0, $pid)\n> Does this work on windows?\n>\nYes, it's supposed to. It doesn't actually send a signal, it checks if\nthe process exists. There's some suggestion it might give false\npositives on Windows, but that won't really hurt us here, we'll just\nlook for a different port.\n\nOne possible addition would be to add removing the reservation files in\nan END handler. That would be pretty simple.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 6 Nov 2022 11:30:31 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 2022-11-06 Su 11:30, Andrew Dunstan wrote:\n>\n> One possible addition would be to add removing the reservation files in\n> an END handler. That would be pretty simple.\n>\n>\n\n\nHere's a version with that. I suggest we try it out and see if anything\nbreaks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 15 Nov 2022 15:56:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "Hi,\n\nOn 2022-11-15 15:56:37 -0500, Andrew Dunstan wrote:\n> On 2022-11-06 Su 11:30, Andrew Dunstan wrote:\n> >\n> > One possible addition would be to add removing the reservation files in\n> > an END handler. That would be pretty simple.\n> >\n> >\n> \n> \n> Here's a version with that. I suggest we try it out and see if anything\n> breaks.\n\nThanks! 
I agree it makes sense to go ahead with this.\n\nI'd guess we should test drive this a bit in HEAD but eventually backpatch?\n\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index d80134b26f..85fae32c14 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n\nI think this should also update the comment for get_free_port(), since the\nrace mentioned there is largely addressed by this patch?\n\n\n> @@ -140,6 +143,27 @@ INIT\n> \n> \t# Tracking of last port value assigned to accelerate free port lookup.\n> \t$last_port_assigned = int(rand() * 16384) + 49152;\n> +\n> +\t# Set the port lock directory\n> +\n> +\t# If we're told to use a directory (e.g. from a buildfarm client)\n> +\t# explicitly, use that\n> +\t$portdir = $ENV{PG_TEST_PORT_DIR};\n> +\t# Otherwise, try to use a directory at the top of the build tree\n> +\tif (! $portdir && $ENV{MESON_BUILD_ROOT})\n> +\t{\n> +\t\t$portdir = $ENV{MESON_BUILD_ROOT} . '/portlock'\n> +\t}\n> +\telsif (! $portdir && ($ENV{TESTDATADIR} || \"\") =~ /\\W(src|contrib)\\W/p)\n> +\t{\n> +\t\tmy $dir = ${^PREMATCH};\n> +\t\t$portdir = \"$dir/portlock\" if $dir;\n> +\t}\n> +\t# As a last resort use a directory under tmp_check\n> +\t$portdir ||= $PostgreSQL::Test::Utils::tmp_check . 
'/portlock';\n> +\t$portdir =~ s!\\\\!/!g;\n> +\t# Make sure the directory exists\n> +\tmkpath($portdir) unless -d $portdir;\n> }\n\nPerhaps we should just export a directory in configure instead of this\nguessing game?\n\n\n> =pod\n> @@ -1505,6 +1529,7 @@ sub get_free_port\n> \t\t\t\t\tlast;\n> \t\t\t\t}\n> \t\t\t}\n> +\t\t\t$found = _reserve_port($port) if $found;\n> \t\t}\n> \t}\n> \n> @@ -1535,6 +1560,40 @@ sub can_bind\n> \treturn $ret;\n> }\n> \n> +# Internal routine to reserve a port number\n> +# Returns 1 if successful, 0 if port is already reserved.\n> +sub _reserve_port\n> +{\n> +\tmy $port = shift;\n> +\t# open in rw mode so we don't have to reopen it and lose the lock\n> +\tmy $filename = \"$portdir/$port.rsv\";\n> +\tsysopen(my $portfile, $filename, O_RDWR|O_CREAT)\n> +\t || die \"opening port file $filename\";\n\nPerhaps add $! to the message so e.g. permission denied errors or such are\neasier to debug?\n\n\n> +\t# take an exclusive lock to avoid concurrent access\n> +\tflock($portfile, LOCK_EX) || die \"locking port file $filename\";\n\ndito\n\n\n> +\t# see if someone else has or had a reservation of this port\n> +\tmy $pid = <$portfile>;\n> +\tchomp $pid;\n> +\tif ($pid +0 > 0)\n> +\t{\n> +\t\tif (kill 0, $pid)\n> +\t\t{\n> +\t\t\t# process exists and is owned by us, so we can't reserve this port\n> +\t\t\tclose($portfile);\n> +\t\t\treturn 0;\n> +\t\t}\n> +\t}\n> +\t# All good, go ahead and reserve the port, first rewind and truncate.\n> +\t# If truncation fails it's not a tragedy, it just might leave some\n> +\t# trailing junk in the file that won't affect us.\n> +\tseek($portfile, 0, SEEK_SET);\n> +\ttruncate($portfile, 0);\n\nPerhaps check truncate's return value?\n\n\n> +\tprint $portfile \"$$\\n\";\n> +\tclose($portfile);\n> +\tpush(@port_reservation_files, $filename);\n> +\treturn 1;\n> +}\n\nPerhaps it'd be better to release the file lock explicitly? 
flock() has this\nannoying behaviour of only releasing the lock when the last file descriptor\nfor a file is closed. We shouldn't end up with dup'd FDs or forks here, but it\nseems like it might be more robust to just explicitly release the lock?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Nov 2022 17:51:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-11-15 Tu 20:51, Andres Freund wrote:\n>> @@ -140,6 +143,27 @@ INIT\n>> \n>> \t# Tracking of last port value assigned to accelerate free port lookup.\n>> \t$last_port_assigned = int(rand() * 16384) + 49152;\n>> +\n>> +\t# Set the port lock directory\n>> +\n>> +\t# If we're told to use a directory (e.g. from a buildfarm client)\n>> +\t# explicitly, use that\n>> +\t$portdir = $ENV{PG_TEST_PORT_DIR};\n>> +\t# Otherwise, try to use a directory at the top of the build tree\n>> +\tif (! $portdir && $ENV{MESON_BUILD_ROOT})\n>> +\t{\n>> +\t\t$portdir = $ENV{MESON_BUILD_ROOT} . '/portlock'\n>> +\t}\n>> +\telsif (! $portdir && ($ENV{TESTDATADIR} || \"\") =~ /\\W(src|contrib)\\W/p)\n>> +\t{\n>> +\t\tmy $dir = ${^PREMATCH};\n>> +\t\t$portdir = \"$dir/portlock\" if $dir;\n>> +\t}\n>> +\t# As a last resort use a directory under tmp_check\n>> +\t$portdir ||= $PostgreSQL::Test::Utils::tmp_check . '/portlock';\n>> +\t$portdir =~ s!\\\\!/!g;\n>> +\t# Make sure the directory exists\n>> +\tmkpath($portdir) unless -d $portdir;\n>> }\n> Perhaps we should just export a directory in configure instead of this\n> guessing game?\n>\n>\n>\n\nI think the obvious candidate would be to export top_builddir from\nsrc/Makefile.global. 
That would remove the need to infer it from\nTESTDATADIR.\n\n\nAny objections?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 19 Nov 2022 10:56:33 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "Hi,\n\nOn 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n> > Perhaps we should just export a directory in configure instead of this\n> > guessing game?\n> \n> I think the obvious candidate would be to export top_builddir from\n> src/Makefile.global. That would remove the need to infer it from\n> TESTDATADIR.\n\nI think that'd be good. I'd perhaps rename it in the process so it's\nexported uppercase, but whatever...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Nov 2022 12:16:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-11-19 Sa 15:16, Andres Freund wrote:\n> Hi,\n>\n> On 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n>>> Perhaps we should just export a directory in configure instead of this\n>>> guessing game?\n>> I think the obvious candidate would be to export top_builddir from\n>> src/Makefile.global. That would remove the need to infer it from\n>> TESTDATADIR.\n> I think that'd be good. I'd perhaps rename it in the process so it's\n> exported uppercase, but whatever...\n>\n\nOK, pushed with a little more tweaking. 
I didn't upcase top_builddir\nbecause the existing prove_installcheck recipes already export it and I\nwanted to stay consistent with those.\n\nIf it works ok I will backpatch in couple of days.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 20 Nov 2022 10:10:38 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 2022-11-20 10:10:38 -0500, Andrew Dunstan wrote:\n> OK, pushed with a little more tweaking.\n\nThanks!\n\n\n> I didn't upcase top_builddir\n> because the existing prove_installcheck recipes already export it and I\n> wanted to stay consistent with those.\n\nMakes sense.\n\n\n> If it works ok I will backpatch in couple of days.\n\n+1\n\n\n", "msg_date": "Sun, 20 Nov 2022 11:05:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-11-20 Su 14:05, Andres Freund wrote:\n>> If it works ok I will backpatch in couple of days.\n> +1\n\n\n\nDone.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 22 Nov 2022 10:57:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "Hi,\n\nOn 2022-11-22 10:57:41 -0500, Andrew Dunstan wrote:\n> On 2022-11-20 Su 14:05, Andres Freund wrote:\n> >> If it works ok I will backpatch in couple of days.\n> > +1\n> Done.\n\nWhile looking into a weird buildfarm failure ([1]), I noticed this:\n\n# Checking port 62707\nUse of uninitialized value $pid in scalar chomp at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1247.\nUse of uninitialized value $pid in addition (+) at 
/mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1248.\n\nThis isn't related the failure afaics. I think it's happening for all runs on\nall branches on my host. And also a few other animals [2].\n\nNot quite sure how $pid ends up uninitialized, given the code:\n\t# see if someone else has or had a reservation of this port\n\tmy $pid = <$portfile>;\n\tchomp $pid;\n\tif ($pid +0 > 0)\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2022-11-22%2016%3A33%3A57\n\nThe main symptom is\n# Running: pg_ctl -D /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/src/bin/pg_ctl/tmp_check/t_003_promote_standby2_data/pgdata promote\nwaiting for server to promote....\npg_ctl: control file appears to be corrupt\n\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=peripatus&dt=2022-11-23%2000%3A20%3A13&stg=pg_ctl-check\n\n\n", "msg_date": "Tue, 22 Nov 2022 17:26:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> While looking into a weird buildfarm failure ([1]), I noticed this:\n\n> # Checking port 62707\n> Use of uninitialized value $pid in scalar chomp at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1247.\n> Use of uninitialized value $pid in addition (+) at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1248.\n\nYeah, my animals are showing that too.\n\n> Not quite sure how $pid ends up uninitialized, given the code:\n> \t# see if someone else has or had a reservation of this port\n> \tmy $pid = <$portfile>;\n> \tchomp $pid;\n> \tif ($pid +0 > 0)\n\nI guess the <$portfile> might return undef if the file is empty?\n\n\t\t\tregards, tom lane\n\n\n", 
"msg_date": "Tue, 22 Nov 2022 20:36:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\n> On Nov 22, 2022, at 8:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> While looking into a weird buildfarm failure ([1]), I noticed this:\n> \n>> # Checking port 62707\n>> Use of uninitialized value $pid in scalar chomp at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1247.\n>> Use of uninitialized value $pid in addition (+) at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1248.\n> \n> Yeah, my animals are showing that too.\n> \n>> Not quite sure how $pid ends up uninitialized, given the code:\n>> # see if someone else has or had a reservation of this port\n>> my $pid = <$portfile>;\n>> chomp $pid;\n>> if ($pid +0 > 0)\n> \n> I guess the <$portfile> might return undef if the file is empty?\n> \n\nProbably, will fix in the morning \n\nCheers\n\nAndrew\n\n", "msg_date": "Tue, 22 Nov 2022 21:52:22 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "\nOn 2022-11-22 Tu 20:36, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> While looking into a weird buildfarm failure ([1]), I noticed this:\n>> # Checking port 62707\n>> Use of uninitialized value $pid in scalar chomp at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1247.\n>> Use of uninitialized value $pid in addition (+) at /mnt/resource/bf/build/grassquit/REL_11_STABLE/pgsql.build/../pgsql/src/test/perl/PostgresNode.pm line 1248.\n> Yeah, my animals are showing that too.\n>\n>> Not quite sure how $pid ends up uninitialized, given the code:\n>> 
\t# see if someone else has or had a reservation of this port\n>> \tmy $pid = <$portfile>;\n>> \tchomp $pid;\n>> \tif ($pid +0 > 0)\n> I guess the <$portfile> might return undef if the file is empty?\n>\n> \t\t\n\n\nYeah, should be fixed now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 23 Nov 2022 20:15:42 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 20.11.22 16:10, Andrew Dunstan wrote:\n> \n> On 2022-11-19 Sa 15:16, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n>>>> Perhaps we should just export a directory in configure instead of this\n>>>> guessing game?\n>>> I think the obvious candidate would be to export top_builddir from\n>>> src/Makefile.global. That would remove the need to infer it from\n>>> TESTDATADIR.\n>> I think that'd be good. I'd perhaps rename it in the process so it's\n>> exported uppercase, but whatever...\n>>\n> \n> OK, pushed with a little more tweaking. 
I didn't upcase top_builddir\n> because the existing prove_installcheck recipes already export it and I\n> wanted to stay consistent with those.\n> \n> If it works ok I will backpatch in couple of days.\n\nThese patches have affected pgxs-using extensions that have their own \nTAP tests.\n\nThe portlock directory is created at\n\n my $build_dir = $ENV{top_builddir}\n || $PostgreSQL::Test::Utils::tmp_check ;\n $portdir ||= \"$build_dir/portlock\";\n\nbut for a pgxs user, top_builddir points into the installation tree, \nspecifically at $prefix/lib/pgxs/.\n\nSo when running \"make installcheck\" for an extension, we either won't \nhave write access to that directory, or if we do, then it's still not \ngood to write into the installation tree during a test suite.\n\nA possible fix is\n\ndiff --git a/src/Makefile.global.in b/src/Makefile.global.in\nindex 5dacc4d838..c493d1a60c 100644\n--- a/src/Makefile.global.in\n+++ b/src/Makefile.global.in\n@@ -464,7 +464,7 @@ rm -rf '$(CURDIR)'/tmp_check && \\\n $(MKDIR_P) '$(CURDIR)'/tmp_check && \\\n cd $(srcdir) && \\\n TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n- PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n+ PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)' \\\n PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if \n$(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n endef\n\n\n\n", "msg_date": "Tue, 25 Apr 2023 12:27:52 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 25.04.23 12:27, Peter Eisentraut wrote:\n> These patches have affected pgxs-using extensions that have their own \n> TAP tests.\n> \n> The portlock directory is created at\n> \n>     my $build_dir = $ENV{top_builddir}\n>       || $PostgreSQL::Test::Utils::tmp_check ;\n>     $portdir ||= \"$build_dir/portlock\";\n> \n> but for a pgxs user, 
top_builddir points into the installation tree, \n> specifically at $prefix/lib/pgxs/.\n> \n> So when running \"make installcheck\" for an extension, we either won't \n> have write access to that directory, or if we do, then it's still not \n> good to write into the installation tree during a test suite.\n> \n> A possible fix is\n> \n> diff --git a/src/Makefile.global.in b/src/Makefile.global.in\n> index 5dacc4d838..c493d1a60c 100644\n> --- a/src/Makefile.global.in\n> +++ b/src/Makefile.global.in\n> @@ -464,7 +464,7 @@ rm -rf '$(CURDIR)'/tmp_check && \\\n>  $(MKDIR_P) '$(CURDIR)'/tmp_check && \\\n>  cd $(srcdir) && \\\n>     TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n> -   PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n> +   PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)' \\\n>     PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n>     $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if \n> $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n>  endef\n\nAny thoughts on this? I would like to get this into the upcoming minor \nreleases.\n\nNote that the piece of code shown here is only applicable to PGXS, so \ngiven that that is currently broken, this \"can't make it worse\".\n\n\n\n", "msg_date": "Tue, 2 May 2023 07:57:05 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 25.04.23 12:27, Peter Eisentraut wrote:\n> On 20.11.22 16:10, Andrew Dunstan wrote:\n>>\n>> On 2022-11-19 Sa 15:16, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n>>>>> Perhaps we should just export a directory in configure instead of this\n>>>>> guessing game?\n>>>> I think the obvious candidate would be to export top_builddir from\n>>>> src/Makefile.global. That would remove the need to infer it from\n>>>> TESTDATADIR.\n>>> I think that'd be good. 
I'd perhaps rename it in the process so it's\n>>> exported uppercase, but whatever...\n>>>\n>>\n>> OK, pushed with a little more tweaking. I didn't upcase top_builddir\n>> because the existing prove_installcheck recipes already export it and I\n>> wanted to stay consistent with those.\n>>\n>> If it works ok I will backpatch in couple of days.\n> \n> These patches have affected pgxs-using extensions that have their own \n> TAP tests.\n> \n> The portlock directory is created at\n> \n>     my $build_dir = $ENV{top_builddir}\n>       || $PostgreSQL::Test::Utils::tmp_check ;\n>     $portdir ||= \"$build_dir/portlock\";\n> \n> but for a pgxs user, top_builddir points into the installation tree, \n> specifically at $prefix/lib/pgxs/.\n> \n> So when running \"make installcheck\" for an extension, we either won't \n> have write access to that directory, or if we do, then it's still not \n> good to write into the installation tree during a test suite.\n> \n> A possible fix is\n> \n> diff --git a/src/Makefile.global.in b/src/Makefile.global.in\n> index 5dacc4d838..c493d1a60c 100644\n> --- a/src/Makefile.global.in\n> +++ b/src/Makefile.global.in\n> @@ -464,7 +464,7 @@ rm -rf '$(CURDIR)'/tmp_check && \\\n>  $(MKDIR_P) '$(CURDIR)'/tmp_check && \\\n>  cd $(srcdir) && \\\n>     TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n> -   PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n> +   PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)' \\\n>     PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n>     $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if \n> $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n>  endef\n\nBetter idea: Remove the top_builddir assignment altogether. I traced \nthe history of this, and it seems like it was just dragged around with \nvarious other changes and doesn't have a purpose of its own.\n\nThe only effect of the current code (top_builddir='$(top_builddir)') is \nto export top_builddir as an environment variable. 
And the only Perl \ntest code that reads that environment variable is the code that makes \nthe portlock directory, which is exactly what we don't want. So just \nremoving that seems to be the right solution.", "msg_date": "Thu, 4 May 2023 08:40:02 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 2023-05-04 Th 02:40, Peter Eisentraut wrote:\n> On 25.04.23 12:27, Peter Eisentraut wrote:\n>> On 20.11.22 16:10, Andrew Dunstan wrote:\n>>>\n>>> On 2022-11-19 Sa 15:16, Andres Freund wrote:\n>>>> Hi,\n>>>>\n>>>> On 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n>>>>>> Perhaps we should just export a directory in configure instead of \n>>>>>> this\n>>>>>> guessing game?\n>>>>> I think the obvious candidate would be to export top_builddir from\n>>>>> src/Makefile.global. That would remove the need to infer it from\n>>>>> TESTDATADIR.\n>>>> I think that'd be good. I'd perhaps rename it in the process so it's\n>>>> exported uppercase, but whatever...\n>>>>\n>>>\n>>> OK, pushed with a little more tweaking. 
I didn't upcase top_builddir\n>>> because the existing prove_installcheck recipes already export it and I\n>>> wanted to stay consistent with those.\n>>>\n>>> If it works ok I will backpatch in couple of days.\n>>\n>> These patches have affected pgxs-using extensions that have their own \n>> TAP tests.\n>>\n>> The portlock directory is created at\n>>\n>>      my $build_dir = $ENV{top_builddir}\n>>        || $PostgreSQL::Test::Utils::tmp_check ;\n>>      $portdir ||= \"$build_dir/portlock\";\n>>\n>> but for a pgxs user, top_builddir points into the installation tree, \n>> specifically at $prefix/lib/pgxs/.\n>>\n>> So when running \"make installcheck\" for an extension, we either won't \n>> have write access to that directory, or if we do, then it's still not \n>> good to write into the installation tree during a test suite.\n>>\n>> A possible fix is\n>>\n>> diff --git a/src/Makefile.global.in b/src/Makefile.global.in\n>> index 5dacc4d838..c493d1a60c 100644\n>> --- a/src/Makefile.global.in\n>> +++ b/src/Makefile.global.in\n>> @@ -464,7 +464,7 @@ rm -rf '$(CURDIR)'/tmp_check && \\\n>>   $(MKDIR_P) '$(CURDIR)'/tmp_check && \\\n>>   cd $(srcdir) && \\\n>>      TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n>> -   PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n>> +   PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)' \\\n>>      PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n>>      $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if \n>> $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n>>   endef\n>\n> Better idea: Remove the top_builddir assignment altogether.  I traced \n> the history of this, and it seems like it was just dragged around with \n> various other changes and doesn't have a purpose of its own.\n>\n> The only effect of the current code (top_builddir='$(top_builddir)') \n> is to export top_builddir as an environment variable.  
And the only \n> Perl test code that reads that environment variable is the code that \n> makes the portlock directory, which is exactly what we don't want.  So \n> just removing that seems to be the right solution.\n\n\n\nYeah, that should be OK in the pgxs case.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Thu, 4 May 2023 18:33:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" }, { "msg_contents": "On 05.05.23 00:33, Andrew Dunstan wrote:\n> \n> On 2023-05-04 Th 02:40, Peter Eisentraut wrote:\n>> On 25.04.23 12:27, Peter Eisentraut wrote:\n>>> On 20.11.22 16:10, Andrew Dunstan wrote:\n>>>>\n>>>> On 2022-11-19 Sa 15:16, Andres Freund wrote:\n>>>>> Hi,\n>>>>>\n>>>>> On 2022-11-19 10:56:33 -0500, Andrew Dunstan wrote:\n>>>>>>> Perhaps we should just export a directory in configure instead of \n>>>>>>> this\n>>>>>>> guessing game?\n>>>>>> I think the obvious candidate would be to export top_builddir from\n>>>>>> src/Makefile.global. That would remove the need to infer it from\n>>>>>> TESTDATADIR.\n>>>>> I think that'd be good. 
I didn't upcase top_builddir\n>>>> because the existing prove_installcheck recipes already export it and I\n>>>> wanted to stay consistent with those.\n>>>>\n>>>> If it works ok I will backpatch in couple of days.\n>>>\n>>> These patches have affected pgxs-using extensions that have their own \n>>> TAP tests.\n>>>\n>>> The portlock directory is created at\n>>>\n>>>      my $build_dir = $ENV{top_builddir}\n>>>        || $PostgreSQL::Test::Utils::tmp_check ;\n>>>      $portdir ||= \"$build_dir/portlock\";\n>>>\n>>> but for a pgxs user, top_builddir points into the installation tree, \n>>> specifically at $prefix/lib/pgxs/.\n>>>\n>>> So when running \"make installcheck\" for an extension, we either won't \n>>> have write access to that directory, or if we do, then it's still not \n>>> good to write into the installation tree during a test suite.\n>>>\n>>> A possible fix is\n>>>\n>>> diff --git a/src/Makefile.global.in b/src/Makefile.global.in\n>>> index 5dacc4d838..c493d1a60c 100644\n>>> --- a/src/Makefile.global.in\n>>> +++ b/src/Makefile.global.in\n>>> @@ -464,7 +464,7 @@ rm -rf '$(CURDIR)'/tmp_check && \\\n>>>   $(MKDIR_P) '$(CURDIR)'/tmp_check && \\\n>>>   cd $(srcdir) && \\\n>>>      TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n>>> -   PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n>>> +   PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)' \\\n>>>      PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n>>>      $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if \n>>> $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n>>>   endef\n>>\n>> Better idea: Remove the top_builddir assignment altogether.  I traced \n>> the history of this, and it seems like it was just dragged around with \n>> various other changes and doesn't have a purpose of its own.\n>>\n>> The only effect of the current code (top_builddir='$(top_builddir)') \n>> is to export top_builddir as an environment variable.  
And the only \n>> Perl test code that reads that environment variable is the code that \n>> makes the portlock directory, which is exactly what we don't want.  So \n>> just removing that seems to be the right solution.\n> \n> Yeah, that should be OK in the pgxs case.\n\nI have committed this to all active branches.\n\n\n\n", "msg_date": "Fri, 5 May 2023 07:39:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ssl tests aren't concurrency safe due to get_free_port()" } ]
[ { "msg_contents": "Hello,\n\nWhile building PostgreSQL 15 RC 1 with LLVM 15, I got a build failure as\nfollows:\n\ncc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement\n-Werror=vla -Werror=unguarded-availability-new -Wendif-labels\n-Wmissing-format-attribute -Wcast-function-type -Wformat-security\n-fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument\n-Wno-compound-token-split-by-macro -O2 -pipe -O3 -funroll-loops\n-fstack-protector-strong -fno-strict-aliasing -Wno-deprecated-declarations\n-fPIC -DPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS\n-D__STDC_CONSTANT_MACROS -I/usr/local/llvm15/include\n -I../../../../src/include -I/usr/local/include -I/usr/local/include\n-I/usr/local/include/libxml2 -I/usr/local/include -I/usr/local/include\n-I/usr/local/include -I/usr/local/include -c -o llvmjit.o llvmjit.c\nllvmjit.c:1115:50: error: use of undeclared identifier\n'LLVMJITCSymbolMapPair'\n LLVMOrcCSymbolMapPairs symbols =\npalloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n ^\nllvmjit.c:1233:81: error: too few arguments to function call, expected 3,\nhave 2\n ref_gen =\nLLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n ^\n/usr/local/llvm15/include/llvm-c/Orc.h:997:31: note:\n'LLVMOrcCreateCustomCAPIDefinitionGenerator' declared here\nLLVMOrcDefinitionGeneratorRef LLVMOrcCreateCustomCAPIDefinitionGenerator(\n ^\n2 errors generated.\ngmake: *** [<builtin>: llvmjit.o] Error 1\n*** Error code 2\n\nI've prepared a patch (attached) to fix the build issue with LLVM 15 or\nabove. 
It is also available at\nhttps://people.FreeBSD.org/~sunpoet/patch/postgres/0001-Fix-build-with-LLVM-15-or-above.patch\nThanks.\n\nRegards,\nsunpoet", "msg_date": "Mon, 3 Oct 2022 11:55:32 +0800", "msg_from": "Po-Chuan Hsieh <sunpoet@sunpoet.net>", "msg_from_op": true, "msg_subject": "[PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "On Mon, Oct 3, 2022 at 4:56 PM Po-Chuan Hsieh <sunpoet@sunpoet.net> wrote:\n> While building PostgreSQL 15 RC 1 with LLVM 15, I got a build failure as follows:\n>\n> cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -O2 -pipe -O3 -funroll-loops -fstack-protector-strong -fno-strict-aliasing -Wno-deprecated-declarations -fPIC -DPIC -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -I/usr/local/llvm15/include -I../../../../src/include -I/usr/local/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -I/usr/local/include -I/usr/local/include -I/usr/local/include -c -o llvmjit.o llvmjit.c\n> llvmjit.c:1115:50: error: use of undeclared identifier 'LLVMJITCSymbolMapPair'\n> LLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n> ^\n> llvmjit.c:1233:81: error: too few arguments to function call, expected 3, have 2\n> ref_gen = LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^\n> /usr/local/llvm15/include/llvm-c/Orc.h:997:31: note: 'LLVMOrcCreateCustomCAPIDefinitionGenerator' declared here\n> LLVMOrcDefinitionGeneratorRef LLVMOrcCreateCustomCAPIDefinitionGenerator(\n> ^\n> 2 errors generated.\n> gmake: *** [<builtin>: llvmjit.o] Error 1\n> *** Error code 2\n>\n> I've prepared a patch (attached) to fix the build issue with LLVM 15 or 
above. It is also available at https://people.FreeBSD.org/~sunpoet/patch/postgres/0001-Fix-build-with-LLVM-15-or-above.patch\n\nHi,\n\nUnfortunately that is only the tip of a mini iceberg. While that\nchange makes it compile, there are other API changes that are required\nto make our use of LLVM ORC actually work. We can't get through 'make\ncheck', because various code paths in LLVM 15 abort, because we're\nusing a bunch of APIs from before the big change to \"opaque pointers\"\nhttps://llvm.org/docs/OpaquePointers.html. I've been trying to get to\na patch to fix that -- basically a few simple-looking changes like\nLLVMBuildLoad() to LLVMBuildLoad2() as described there -- that gain an\nargument where you have to tell it the type of the pointer (whereas\nbefore it knew the type of pointers automatically). Unfortunately I\nhad to work on other problems that came up recently and it's probably\ngoing to be at least a week before I can get back to this and post a\npatch.\n\nOne option I thought about as a stopgap measure is to use\nLLVMContextSetOpaquePointers(context, false) to turn the new code\npaths off, but it doesn't seem to work for me and I couldn't figure\nout why yet (it still aborts -- probably there are more 'contexts'\naround that I didn't handle, something like that). That option is\navailable for LLVM 15 but will be taken out in LLVM 16, so that's\nsupposed to be the last chance to stop using pre-opaque pointers; see\nthe bottom of the page I linked above for that, where they call it\nsetOpaquePointers(false) (the C++ version of\nLLVMContextSetOpaquePointers()). I don't really want to go with that\nif we can avoid it, though, because it says \"Opaque pointers are\nenabled by default. 
Typed pointers are still available, but only\nsupported on a best-effort basis and may be untested\" so I expect it\nto be blighted with problems.\n\nHere's my attempt at that minimal change, which is apparently still\nmissing something (if you can get this to build and pass all tests\nagainst LLVM 15 then it might still be interesting to know about):\n\nhttps://github.com/macdice/postgres/tree/llvm15-min\n\nHere's my WIP unfinished branch where I'm trying to get the real code\nchange done. It needs more work on function pointer types, which are\na bit tedious to deal with and I haven't got it all right in here yet\nas you can see from failures if you build against 15:\n\nhttps://github.com/macdice/postgres/tree/llvm15\n\nHopefully more next week...\n\n\n", "msg_date": "Mon, 3 Oct 2022 18:34:18 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "Hi,\n\nOn 2022-10-03 18:34:18 +1300, Thomas Munro wrote:\n> One option I thought about as a stopgap measure is to use\n> LLVMContextSetOpaquePointers(context, false) to turn the new code\n> paths off, but it doesn't seem to work for me and I couldn't figure\n> out why yet (it still aborts -- probably there are more 'contexts'\n> around that I didn't handle, something like that).\n\nI think that's just because of this hunk:\n\n@@ -992,7 +1000,12 @@ llvm_create_types(void)\n }\n\n /* eagerly load contents, going to need it all */\n+#if LLVM_VERSION_MAJOR > 14\n+ if (LLVMParseBitcodeInContext2(LLVMOrcThreadSafeContextGetContext(llvm_ts_context),\n+ buf, &llvm_types_module))\n+#else\n if (LLVMParseBitcode2(buf, &llvm_types_module))\n+#endif\n {\n elog(ERROR, \"LLVMParseBitcode2 of %s failed\", path);\n }\n\nThis is the wrong context to use here. 
Because of that we end up with types\nfrom two different contexts being used, which leads to this assertion to fail:\n\n#5 0x00007f945a036ab2 in __GI___assert_fail (\n assertion=0x7f93cf5a4a1b \"getOperand(0)->getType() == getOperand(1)->getType() && \\\"Both operands to ICmp instruction are not of the same type!\\\"\",\n file=0x7f93cf66062a \"/home/andres/src/llvm-project/llvm/include/llvm/IR/Instructions.h\", line=1191,\n function=0x7f93cf5f2db6 \"void llvm::ICmpInst::AssertOK()\") at ./assert/assert.c:101\n#6 0x00007f93cf9e3a3c in llvm::ICmpInst::AssertOK (this=0x56482c3b4b50) at /home/andres/src/llvm-project/llvm/include/llvm/IR/Instructions.h:1190\n#7 0x00007f93cf9e38ca in llvm::ICmpInst::ICmpInst (this=0x56482c3b4b50, pred=llvm::CmpInst::ICMP_UGE, LHS=0x56482c3b98d0, RHS=0x56482c3b9920, NameStr=\"\")\n at /home/andres/src/llvm-project/llvm/include/llvm/IR/Instructions.h:1245\n#8 0x00007f93cf9dc6f9 in llvm::IRBuilderBase::CreateICmp (this=0x56482c3b4650, P=llvm::CmpInst::ICMP_UGE, LHS=0x56482c3b98d0, RHS=0x56482c3b9920, Name=\"\")\n at /home/andres/src/llvm-project/llvm/include/llvm/IR/IRBuilder.h:2212\n#9 0x00007f93cfa650cd in LLVMBuildICmp (B=0x56482c3b4650, Op=LLVMIntUGE, LHS=0x56482c3b98d0, RHS=0x56482c3b9920, Name=0x7f9459722cf2 \"\")\n at /home/andres/src/llvm-project/llvm/lib/IR/Core.cpp:3883\n#10 0x00007f945971b4d7 in llvm_compile_expr (state=0x56482c31f878) at /home/andres/src/postgresql/src/backend/jit/llvm/llvmjit_expr.c:302\n#11 0x000056482a28f76b in jit_compile_expr (state=state@entry=0x56482c31f878) at /home/andres/src/postgresql/src/backend/jit/jit.c:177\n#12 0x0000564829f44e62 in ExecReadyExpr (state=state@entry=0x56482c31f878) at /home/andres/src/postgresql/src/backend/executor/execExpr.c:885\n\nbecause types (compared by pointer value) are only unique within a context.\n\nI think all that is needed for this aspect would be:\n\n#if LLVM_VERSION_MAJOR > 14\nLLVMContextSetOpaquePointers(LLVMGetGlobalContext(), false);\n#endif\n\n\nI haven't yet 
run through the whole regression test with an assert enabled\nllvm because an assert-enabled llvm is *SLOW*, but it got through the first\nfew parallel groups ok. Using an optimized llvm 15, all tests pass with\nPGOPTIONS=-cjit_above_cost=0.\n\n\n> That option is available for LLVM 15 but will be taken out in LLVM 16, so\n> that's supposed to be the last chance to stop using pre-opaque pointers; see\n> the bottom of the page I linked above for that, where they call it\n> setOpaquePointers(false) (the C++ version of\n> LLVMContextSetOpaquePointers()). I don't really want to go with that if we\n> can avoid it, though, because it says \"Opaque pointers are enabled by\n> default. Typed pointers are still available, but only supported on a\n> best-effort basis and may be untested\" so I expect it to be blighted with\n> problems.\n\nI think it'd be ok for the back branches, while we figure out the opaque stuff\nin HEAD.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 3 Oct 2022 12:16:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "Hi,\n\nOn 2022-10-03 12:16:12 -0700, Andres Freund wrote:\n> I haven't yet run through the whole regression test with an assert enabled\n> llvm because an assert-enabled llvm is *SLOW*, but it got through the first\n> few parallel groups ok. Using an optimized llvm 15, all tests pass with\n> PGOPTIONS=-cjit_above_cost=0.\n\nThat did pass. But to be able to use clang >= 15 one more piece is\nneeded. 
Updated patch attached.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 3 Oct 2022 14:41:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "On Mon, Oct 3, 2022 at 2:41 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-03 12:16:12 -0700, Andres Freund wrote:\n> > I haven't yet run through the whole regression test with an assert\n> enabled\n> > llvm because an assert-enabled llvm is *SLOW*, but it got through the\n> first\n> > few parallel groups ok. Using an optimized llvm 15, all tests pass with\n> > PGOPTIONS=-cjit_above_cost=0.\n>\n> That did pass. But to be able to use clang >= 15 one more piece is\n> needed. Updated patch attached.\n>\n> Greetings,\n>\n> Andres Freund\n>\nHi,\n\n+ * When targetting an llvm version with opaque pointers enabled by\n\nI think `targetting` should be spelled as targeting\n\nCheers", "msg_date": "Mon, 3 Oct 2022 14:45:05 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" },
{ "msg_contents": "On Tue, Oct 4, 2022 at 10:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Mon, Oct 3, 2022 at 2:41 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-10-03 12:16:12 -0700, Andres Freund wrote:\n>> > I haven't yet run through the whole regression test with an assert enabled\n>> > llvm because an assert-enabled llvm is *SLOW*, but it got through the first\n>> > few parallel groups ok. Using an optimized llvm 15, all tests pass with\n>> > PGOPTIONS=-cjit_above_cost=0.\n\n+ /*\n+ * When targetting an llvm version with opaque pointers enabled by\n+ * default, turn them off for the context we build our code in. Don't need\n+ * to do so for other contexts (e.g. llvm_ts_context) - once the IR is\n+ * generated, it carries the necessary information.\n+ */\n+#if LLVM_VERSION_MAJOR > 14\n+ LLVMContextSetOpaquePointers(LLVMGetGlobalContext(), false);\n+#endif\n\nAhh, right, thanks!\n\n>> That did pass. But to be able to use clang >= 15 one more piece is\n>> needed. Updated patch attached.\n\n+ bitcode_cflags += ['-Xclang', '-no-opaque-pointers']\n\nOh, right. That makes sense.\n\n> I think `targetting` should be spelled as targeting\n\nYeah.\n\nOK, I'll wait for the dust to settle on our 15 release and then\nback-patch this. 
Then I'll keep working on the opaque pointer support\nfor master, which LLVM 16 will need (I expect we'll eventually want to\nback-patch that eventually, but first things first...).\n\n\n", "msg_date": "Tue, 11 Oct 2022 11:07:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "Hi Thomas,\n\nOn Tue, 2022-10-11 at 11:07 +1300, Thomas Munro wrote:\n> OK, I'll wait for the dust to settle on our 15 release and then\n> back-patch this.  Then I'll keep working on the opaque pointer\n> support for master, which LLVM 16 will need (I expect we'll\n> eventually want to back-patch that eventually, but first things\n> first...).\n\nFedora 37 is out very very soon, and ships with CLANG/LLVM 15. What is\nthe timeline for backpatching llvm15 support?\n\nThanks!\n\nCheers,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Tue, 18 Oct 2022 10:01:52 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "On Tue, Oct 18, 2022 at 10:01 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n> On Tue, 2022-10-11 at 11:07 +1300, Thomas Munro wrote:\n> > OK, I'll wait for the dust to settle on our 15 release and then\n> > back-patch this. Then I'll keep working on the opaque pointer\n> > support for master, which LLVM 16 will need (I expect we'll\n> > eventually want to back-patch that eventually, but first things\n> > first...).\n>\n> Fedora 37 is out very very soon, and ships with CLANG/LLVM 15. 
What is\n> the timeline for backpatching llvm15 support?\n\nHi Devrim,\n\nWill do first thing tomorrow.\n\n\n", "msg_date": "Tue, 18 Oct 2022 22:06:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "Hi,\n\nOn Tue, 2022-10-18 at 22:06 +1300, Thomas Munro wrote:\n> Will do first thing tomorrow.\n\nJust wanted to confirm that I pushed Fedora RPMs built against LLVM 15\nby adding these patches.\n\nThanks Thomas.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Tue, 25 Oct 2022 16:28:03 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" }, { "msg_contents": "On Wed, Oct 26, 2022 at 4:28 AM Devrim Gündüz <devrim@gunduz.org> wrote:\n> On Tue, 2022-10-18 at 22:06 +1300, Thomas Munro wrote:\n> > Will do first thing tomorrow.\n>\n> Just wanted to confirm that I pushed Fedora RPMs built against LLVM 15\n> by adding these patches.\n>\n> Thanks Thomas.\n\nCool.\n\nFTR I still have to finish the 'real' fixes for LLVM 16. Their\ncadence is one major release every 6 months, putting it at about April\n'23, but I'll try to get it ready quite soon on our master branch. BF\nanimal seawasp is green again for now, but I expect it will turn back\nto red pretty soon when they start ripping out the deprecated stuff on\ntheir master branch...\n\n\n", "msg_date": "Wed, 26 Oct 2022 12:28:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix build with LLVM 15 or above" } ]
[ { "msg_contents": ">Please let us know if you have any questions. We're excited that we are\n>very close to officially releasing PostgreSQL 15.\nHi, forgive my ignorance.\nWhat are the rules for a commit to be included in the release notes?\nDoes it need to be explicitly requested?\n\nWhy is the commit 8cb2a22\n<https://github.com/postgres/postgres/commit/8cb2a22bbb2cf4212482ac15021ceaa2e9c52209>\n(bug fix), not included?\nI think users of this function should be better informed.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 3 Oct 2022 09:05:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "re: PostgreSQL 15 GA release date" }, { "msg_contents": "On Mon, Oct 03, 2022 at 09:05:34AM -0300, Ranier Vilela wrote:\n> >Please let us know if you have any questions. 
We're excited that we are\n> >very close to officially releasing PostgreSQL 15.\n> Hi, forgive my ignorance.\n> What are the rules for a commit to be included in the release notes?\n> Does it need to be explicitly requested?\n> \n> Why is the commit 8cb2a22 (bug fix), not included?\n\nIt's not included in the release notes for major version 15, since it\nwas backpatched to v12, so it's not a \"change in v15\" (for example,\nsomeone upgrading from v14.6 will already have that change).\n\nIt'll be included in the minor release notes, instead.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 3 Oct 2022 07:39:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 GA release date" }, { "msg_contents": "On Mon, Oct 03, 2022 at 09:05:34AM -0300, Ranier Vilela wrote:\n> >Please let us know if you have any questions. We're excited that we are\n> >very close to officially releasing PostgreSQL 15.\n> Hi, forgive my ignorance.\n> What are the rules for a commit to be included in the release notes?\n> Does it need to be explicitly requested?\n> \n> Why is the commit 8cb2a22\n> <https://github.com/postgres/postgres/commit/8cb2a22bbb2cf4212482ac15021ceaa2e9c52209>\n> (bug fix), not included?\n> I think users of this function should be better informed.\n\nThat commit has been backpatched, so it's not new in pg15 and will be in the\nrelease notes of the next minor versions.\n\n\n", "msg_date": "Mon, 3 Oct 2022 20:40:13 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 15 GA release date" }, { "msg_contents": "Em seg., 3 de out. de 2022 às 09:39, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Mon, Oct 03, 2022 at 09:05:34AM -0300, Ranier Vilela wrote:\n> > >Please let us know if you have any questions. 
We're excited that we are\n> > >very close to officially releasing PostgreSQL 15.\n> > Hi, forgive my ignorance.\n> > What are the rules for a commit to be included in the release notes?\n> > Does it need to be explicitly requested?\n> >\n> > Why is the commit 8cb2a22 (bug fix), not included?\n>\n> It's not included in the release notes for major version 15, since it\n> was backpatched to v12, so it's not a \"change in v15\" (for example,\n> someone upgrading from v14.6 will already have that change).\n>\n> It'll be included in the minor release notes, instead.\n>\nThanks Justin for the clarification.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 3 Oct 2022 09:49:55 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 15 GA release date" }, { "msg_contents": "Em seg., 3 de out. de 2022 às 09:40, Julien Rouhaud <rjuju123@gmail.com>\nescreveu:\n\n> On Mon, Oct 03, 2022 at 09:05:34AM -0300, Ranier Vilela wrote:\n> > >Please let us know if you have any questions. 
We're excited that we are\n> > >very close to officially releasing PostgreSQL 15.\n> > Hi, forgive my ignorance.\n> > What are the rules for a commit to be included in the release notes?\n> > Does it need to be explicitly requested?\n> >\n> > Why is the commit 8cb2a22\n> > <\n> https://github.com/postgres/postgres/commit/8cb2a22bbb2cf4212482ac15021ceaa2e9c52209\n> >\n> > (bug fix), not included?\n> > I think users of this function should be better informed.\n>\n> That commit has been backpatched, so it's not new in pg15 and will be in\n> the\n> release notes of the next minor versions.\n>\nThanks Julien.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 3 Oct 2022 09:50:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 15 GA release date" } ]
[ { "msg_contents": "The date of the current commitfest is over, here is the current status of\nthe \"September 2022 commitfest.\"\nThere were 296 patches in the commitfest and 58 were get committed.\n\nTotal: 296.\nNeeds review: 155.\nWaiting on Author: 41.\nReady for Committer: 19.\nCommitted: 58.\nMoved to next CF: 8.\n Returned with Feedback: 5.\nRejected: 2. Withdrawn: 8.\n\n-- \nIbrar Ahmed.", "msg_date": "Mon, 3 Oct 2022 18:11:19 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>", "msg_from_op": true, "msg_subject": "[Commitfest 2022-09] Date is Over." }, { "msg_contents": "On 2022-Oct-03, Ibrar Ahmed wrote:\n\n> The date of the current commitfest is over, here is the current status of\n> the \"September 2022 commitfest.\"\n> There were 296 patches in the commitfest and 58 were get committed.\n\nAre you moving the open patches to the next commitfest, closing some as\nRwF, etc? I'm not clear what the status is, for the November commitfest.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. Evans)\n\n\n", "msg_date": "Wed, 5 Oct 2022 10:42:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." 
}, { "msg_contents": "On Wed, 5 Oct 2022 at 1:43 PM, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Oct-03, Ibrar Ahmed wrote:\n>\n> > The date of the current commitfest is over, here is the current status of\n> > the \"September 2022 commitfest.\"\n> > There were 296 patches in the commitfest and 58 were get committed.\n>\n> Are you moving the open patches to the next commitfest, closing some as\n> RwF, etc? I'm not clear what the status is, for the November commitfest.\n\n\nI am also not clear about that should I move that or wait till November.\nAnybody guide me\n\n>\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. Evans)\n>\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.", "msg_date": "Wed, 5 Oct 2022 14:50:58 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." 
}, { "msg_contents": "Hi,\n\nOn Wed, Oct 05, 2022 at 02:50:58PM +0500, Ibrar Ahmed wrote:\n> On Wed, 5 Oct 2022 at 1:43 PM, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> \n> > On 2022-Oct-03, Ibrar Ahmed wrote:\n> >\n> > > The date of the current commitfest is over, here is the current status of\n> > > the \"September 2022 commitfest.\"\n> > > There were 296 patches in the commitfest and 58 were get committed.\n> >\n> > Are you moving the open patches to the next commitfest, closing some as\n> > RwF, etc? I'm not clear what the status is, for the November commitfest.\n> \n> \n> I am also not clear about that should I move that or wait till November.\n> Anybody guide me\n\nThe CF should be marked as closed, and its entries fully processed within a few\ndays after its final day.\n\nThe general rule is that patches that have been waiting on authors for 2 weeks\nor more without any answer from the author(s) should get returned with\nfeedback with some message to the author, and the rest is moved to the next\ncommitfest. If you have the time to look at some patches and see if they need\nsomething else, that's always better.\n\n\n", "msg_date": "Wed, 5 Oct 2022 18:01:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." }, { "msg_contents": "On Wed, Oct 5, 2022 at 3:01 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Wed, Oct 05, 2022 at 02:50:58PM +0500, Ibrar Ahmed wrote:\n> > On Wed, 5 Oct 2022 at 1:43 PM, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n> >\n> > > On 2022-Oct-03, Ibrar Ahmed wrote:\n> > >\n> > > > The date of the current commitfest is over, here is the current\n> status of\n> > > > the \"September 2022 commitfest.\"\n> > > > There were 296 patches in the commitfest and 58 were get committed.\n> > >\n> > > Are you moving the open patches to the next commitfest, closing some as\n> > > RwF, etc? 
I'm not clear what the status is, for the November\n> commitfest.\n> >\n> >\n> > I am also not clear about that should I move that or wait till November.\n> > Anybody guide me\n>\n> The CF should be marked as closed, and its entries fully processed within\n> a few\n> days after its final day.\n>\n> The general rule is that patches that have been waiting on authors for 2\n> weeks\n> or more without any answer from the author(s) should get returned with\n> feedback with some message to the author, and the rest is moved to the next\n> commitfest. If you have the time to look at some patches and see if they\n> need\n> something else, that's always better.\n>\n> Thanks for your response; I will do that in two days.\n\n\n-- \n\nIbrar Ahmed.\nSenior Software Engineer, PostgreSQL Consultant.", "msg_date": "Thu, 6 Oct 2022 21:00:02 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>", "msg_from_op": true, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." },
{ "msg_contents": "Fwiw I'm going through some patches looking for patches to review.... And\nI'm finding that the patches I'm seeing actually did get reviews, some of\nthem months ago.\n\nIf there was any substantial feedback since the last patch was posted I\nwould say you should change the status to Waiting on Author when moving it\nforward rather than leaving it as Needs Review.\n\nIdeally there should be very few patches moved to the next commitfest as\nNeeds Review. Only patches that have been not getting attention and the\nauthor is blocked waiting on feedback.", "msg_date": "Thu, 6 Oct 2022 17:30:32 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." 
}, { "msg_contents": "On Wed, Oct 05, 2022 at 06:01:01PM +0800, Julien Rouhaud wrote:\n> The CF should be marked as closed, and its entries fully processed within a few\n> days after its final day.\n> \n> The general rule is that patches that have been waiting on authors for 2 weeks\n> or more without any answer from the author(s) should get returned with\n> feedback with some message to the author, and the rest is moved to the next\n> commitfest. If you have the time to look at some patches and see if they need\n> something else, that's always better.\n\nOne week after this message, there was a total of 170-ish entries\nstill in the commit fest, so I have gone through each one of them and\nupdated the ones in need of a refresh. Please note that there was\nsomething like 50~60 entries where the CF bot was failing. In some\ncases, Ibrar has mentioned that on the thread near the beginning of\nSeptember, and based on the lack of updates such entries have been\nswitched as RwF. Other entries where the CF bot is failing have been\nmarked as waiting on author for now. A couple of entries have been\ncommitted but not marked as such, and there was the usual set of\nentries with an incorrect status.\n\nOf course, I may have done some mistakes while classifying all that,\nso feel free to scream back at me if you feel that something has been\nhandled incorrectly.\n\nHere is the final score:\nCommitted: 65.\nMoved to next CF: 177.\nWithdrawn: 11.\nRejected: 3.\nReturned with Feedback: 40.\nTotal: 296. \n\nThanks,\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 17:54:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [Commitfest 2022-09] Date is Over." } ]
[ { "msg_contents": "Hi,\n\nWe are planning a PostgreSQL 15 RC2 release for October 6, 2022. We are \nreleasing a second release candidate due to the revert of an \noptimization around the GROUP BY clause.\n\nWe are still planning the PostgreSQL 15 GA release for October 13, but \nwe may push this to October 20 based on reports from the RCs.\n\nThanks,\n\nJonathan", "msg_date": "Mon, 3 Oct 2022 10:07:33 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 RC2 + GA release dates" } ]
[ { "msg_contents": "Hi,\n\nThere were a couple of tab completion issues present:\na) \\dRp and \\dRs tab completion displays tables instead of displaying\npublications and subscriptions.\nb) \"ALTER ... OWNER TO\" does not include displaying of CURRENT_ROLE,\nCURRENT_USER and SESSION_USER.\n\nThe attached patch has the changes to handle the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 3 Oct 2022 20:41:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Miscellaneous tab completion issue fixes" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n\n> Hi,\n>\n> There were a couple of tab completion issues present:\n> a) \\dRp and \\dRs tab completion displays tables instead of displaying\n> publications and subscriptions.\n> b) \"ALTER ... OWNER TO\" does not include displaying of CURRENT_ROLE,\n> CURRENT_USER and SESSION_USER.\n>\n> The attached patch has the changes to handle the same.\n\nGood catches! Just a few comments:\n\n> +\telse if (TailMatchesCS(\"\\\\dRp*\"))\n> +\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_publications[0].query);\n> +\telse if (TailMatchesCS(\"\\\\dRs*\"))\n> +\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_subscriptions[0].query);\n\nThese are version-specific queries, so should be passed in their\nentirety to COMPLETE_WITH_VERSIONED_QUERY() so that psql can pick the\nright version, and avoid sending the query at all if the server is too\nold.\n\n> +/* add these to Query_for_list_of_roles in OWNER TO contexts */\n> +#define Keywords_for_list_of_owner_to_roles \\\n> +\"CURRENT_ROLE\", \"CURRENT_USER\", \"SESSION_USER\"\n\nI think this would read better without the TO, both in the comment and\nthe constant name, similar to the below only having GRANT without TO:\n\n> /* add these to Query_for_list_of_roles in GRANT contexts */\n> #define Keywords_for_list_of_grant_roles \\\n> \"PUBLIC\", \"CURRENT_ROLE\", \"CURRENT_USER\", \"SESSION_USER\"\n\n- ilmari\n\n\n", "msg_date": "Mon, 03 Oct 2022 
18:29:32 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Miscellaneous tab completion issue fixes" }, { "msg_contents": "On Mon, Oct 03, 2022 at 06:29:32PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> vignesh C <vignesh21@gmail.com> writes:\n>> +\telse if (TailMatchesCS(\"\\\\dRp*\"))\n>> +\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_publications[0].query);\n>> +\telse if (TailMatchesCS(\"\\\\dRs*\"))\n>> +\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_subscriptions[0].query);\n> \n> These are version-specific queries, so should be passed in their\n> entirety to COMPLETE_WITH_VERSIONED_QUERY() so that psql can pick the\n> right version, and avoid sending the query at all if the server is too\n> old.\n\n+1.\n\n>> +/* add these to Query_for_list_of_roles in OWNER TO contexts */\n>> +#define Keywords_for_list_of_owner_to_roles \\\n>> +\"CURRENT_ROLE\", \"CURRENT_USER\", \"SESSION_USER\"\n> \n> I think this would read better without the TO, both in the comment and\n> the constant name, similar to the below only having GRANT without TO:\n\nKeywords_for_list_of_grant_roles is used in six code paths, so it\nseems to me that there is little gain in having a separate #define\nhere. 
Let's just specify the list of allowed roles (RoleSpec) where\nOWNER TO is parsed.\n--\nMichael", "msg_date": "Tue, 4 Oct 2022 12:43:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Miscellaneous tab completion issue fixes" }, { "msg_contents": "On Tue, 4 Oct 2022 at 09:13, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 03, 2022 at 06:29:32PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > vignesh C <vignesh21@gmail.com> writes:\n> >> + else if (TailMatchesCS(\"\\\\dRp*\"))\n> >> + COMPLETE_WITH_QUERY(Query_for_list_of_publications[0].query);\n> >> + else if (TailMatchesCS(\"\\\\dRs*\"))\n> >> + COMPLETE_WITH_QUERY(Query_for_list_of_subscriptions[0].query);\n> >\n> > These are version-specific queries, so should be passed in their\n> > entirety to COMPLETE_WITH_VERSIONED_QUERY() so that psql can pick the\n> > right version, and avoid sending the query at all if the server is too\n> > old.\n>\n> +1.\n>\n\nModified\n\n >> +/* add these to Query_for_list_of_roles in OWNER TO contexts */\n> >> +#define Keywords_for_list_of_owner_to_roles \\\n> >> +\"CURRENT_ROLE\", \"CURRENT_USER\", \"SESSION_USER\"\n> >\n> > I think this would read better without the TO, both in the comment and\n> > the constant name, similar to the below only having GRANT without TO:\n>\n> Keywords_for_list_of_grant_roles is used in six code paths, so it\n> seems to me that there is little gain in having a separate #define\n> here. 
Let's just specify the list of allowed roles (RoleSpec) where\n> OWNER TO is parsed.\n\nI have removed the macro and specified the allowed roles.\n\nThanks for the comments, the attached v2 patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Tue, 4 Oct 2022 15:00:10 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Miscellaneous tab completion issue fixes" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n\n> On Tue, 4 Oct 2022 at 09:13, Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Mon, Oct 03, 2022 at 06:29:32PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> > vignesh C <vignesh21@gmail.com> writes:\n>> >> + else if (TailMatchesCS(\"\\\\dRp*\"))\n>> >> + COMPLETE_WITH_QUERY(Query_for_list_of_publications[0].query);\n>> >> + else if (TailMatchesCS(\"\\\\dRs*\"))\n>> >> + COMPLETE_WITH_QUERY(Query_for_list_of_subscriptions[0].query);\n>> >\n>> > These are version-specific queries, so should be passed in their\n>> > entirety to COMPLETE_WITH_VERSIONED_QUERY() so that psql can pick the\n>> > right version, and avoid sending the query at all if the server is too\n>> > old.\n>>\n>> +1.\n>\n> Modified\n>\n> >> +/* add these to Query_for_list_of_roles in OWNER TO contexts */\n>> >> +#define Keywords_for_list_of_owner_to_roles \\\n>> >> +\"CURRENT_ROLE\", \"CURRENT_USER\", \"SESSION_USER\"\n>> >\n>> > I think this would read better without the TO, both in the comment and\n>> > the constant name, similar to the below only having GRANT without TO:\n>>\n>> Keywords_for_list_of_grant_roles is used in six code paths, so it\n>> seems to me that there is little gain in having a separate #define\n>> here. 
Let's just specify the list of allowed roles (RoleSpec) where\n>> OWNER TO is parsed.\n>\n> I have removed the macro and specified the allowed roles.\n>\n> Thanks for the comments, the attached v2 patch has the changes for the same.\n\nLGTM, +1 to commit.\n\n- ilmari\n\n\n", "msg_date": "Tue, 04 Oct 2022 10:35:25 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Miscellaneous tab completion issue fixes" }, { "msg_contents": "On Tue, Oct 04, 2022 at 10:35:25AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> LGTM, +1 to commit.\n\nFine by me, so done.\n--\nMichael", "msg_date": "Wed, 5 Oct 2022 11:47:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Miscellaneous tab completion issue fixes" }, { "msg_contents": "On Wed, 5 Oct 2022 at 08:17, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 04, 2022 at 10:35:25AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> > LGTM, +1 to commit.\n>\n> Fine by me, so done.\n\nThanks for pushing this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 5 Oct 2022 17:35:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Miscellaneous tab completion issue fixes" } ]
[ { "msg_contents": "Hello, here's my take on masking data when using pg_dump\n \nThe main idea is using PostgreSQL functions to replace data during a SELECT.\nWhen table data is dumped SELECT a,b,c,d ... from ... query is generated, the columns that are marked for masking are replaced with result of functions on those columns\nExample: columns name, count are to be masked, so the query will look as such: SELECT id, mask_text(name), mask_int(count), date from ...\n \nSo about the interface: I added 2 more command-line options: \n \n--mask-columns, which specifies what columns from what tables will be masked \n    usage example:\n            --mask-columns \"t1.name, t2.description\" - both columns will be masked with the same corresponding function\n            or --mask-columns name - ALL columns with name \"name\" from all dumped tables will be masked with corresponding function\n \n--mask-function, which specifies what functions will mask data\n    usage example:\n            --mask-function mask_int - corresponding columns will be masked with function named \"mask_int\" from default schema (public)\n            or --mask-function my_schema.mask_varchar - same as above but with specified schema where the function is stored\n            or --mask-function somedir/filename - the function is \"defined\" here - more on the structure below\n \nStructure of the file with function description:\n \nFirst row - function name (with or without schema name)\nSecond row - type of in and out value (the design is to only work with same input/output type so no int-to-text shenanigans)\nThird row - language of function\nFourth and later rows - body of a function\n \nExample of such file:\n \nmask_text\ntext\nplpgsql\nres := '***';\n \nFirst iteration of using file-described functions used just plain SQL query, but since it executed during read-write connection, some things such as writing \"DROP TABLE t1;\" after the CREATE FUNCTION ...; were possible.\nNow even if something 
harmful is written in function body, it will be executed during dump-read-only connection, where it will just throw an error\n \nAbout \"corresponding columns and functions\" - masking functions and columns are paired with each other based on the input order, but --mask-columns and --mask-function don't have to be consecutive.\nExample: pg_dump -t table_name --mask-columns name --mask-columns count --mask-function mask_text --mask-function mask_int - here 'name' will be paired with function 'mask_text' and 'count' with 'mask_int' \n \nPatch includes regression tests\n \nI'm open to discussion of this patch\n \nBest regards,\n \nOleg Tselebrovskiy", "msg_date": "Mon, 03 Oct 2022 18:30:17 +0300", "msg_from": "=?UTF-8?B?0J7Qu9C10LMg0KbQtdC70LXQsdGA0L7QstGB0LrQuNC5?=\n <oleg_tselebrovskiy@mail.ru>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UG9zc2libGUgc29sdXRpb24gZm9yIG1hc2tpbmcgY2hvc2VuIGNvbHVtbnMg?=\n =?UTF-8?B?d2hlbiB1c2luZyBwZ19kdW1w?=" }, { "msg_contents": "Hi,\n\nOn Mon, Oct 03, 2022 at 06:30:17PM +0300, Олег Целебровский wrote:\n>\n> Hello, here's my take on masking data when using pg_dump\n>  \n> The main idea is using PostgreSQL functions to replace data during a SELECT.\n> When table data is dumped SELECT a,b,c,d ... from ... 
query is generated, the columns that are marked for masking are replaced with result of functions on those columns\n> Example: columns name, count are to be masked, so the query will look as such: SELECT id, mask_text(name), mask_int(count), date from ...\n>  \n> So about the interface: I added 2 more command-line options: \n>  \n> --mask-columns, which specifies what columns from what tables will be masked \n>     usage example:\n>             --mask-columns \"t1.name, t2.description\" - both columns will be masked with the same corresponding function\n>             or --mask-columns name - ALL columns with name \"name\" from all dumped tables will be masked with correspoding function\n>  \n> --mask-function, which specifies what functions will mask data\n>     usage example:\n>             --mask-function mask_int - corresponding columns will be masked with function named \"mask_int\" from default schema (public)\n>             or --mask-function my_schema.mask_varchar - same as above but with specified schema where the function is stored\n>             or --mask-function somedir/filename - the function is \"defined\" here - more on the structure below\n\nFTR I wrote an extension POC [1] last weekend that does that but on the backend\nside. 
The main advantage is that it's working with any existing versions of\npg_dump (or any client relying on COPY or even plain interactive SQL\nstatements), and that the DBA can force a dedicated role to only get a masked\ndump, even if they forgot to ask for it.\n\nI only had a quick look at your patch but it seems that you left some todo in\nrussian, which isn't helpful at least to me.\n\n[1] https://github.com/rjuju/pg_anonymize\n\n\n", "msg_date": "Mon, 3 Oct 2022 23:44:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible solution for masking chosen columns when using pg_dump" }, { "msg_contents": "Hi,\nI took a look, here are several suggestions for improvement:\n\n- Masking is not a main functionality of pg_dump and it is better to write\nmost of the connected things in a separate file like parallel.c or\ndumputils.c. This will help slow down the growth of an already huge pg_dump\nfile.\n\n- Also it can be hard to use a lot of different functions for different\nfields, maybe it would be better to set up functions in a file.\n\n- How will it work for the same field and tables in the different schemas?\nCan we set up the exact schema for the field?\n\n- misspelling in a word\n>/*\n>* Add all columns and funcions to list of MaskColumnInfo structures,\n>*/\n\n- Why did you use 256 here?\n> char* table = (char*) pg_malloc(256 * sizeof(char));\nAlso for malloc you need malloc on 1 symbol more because you have to store\n'\\0' symbol.\n\n- Instead of addFuncToDatabase you can run your query using something\nalready defined from fe_utils/query_utils.c. And It will be better to set\nup a connection only once and create all functions. Establishing a\nconnection is a resource-intensive procedure. 
There are a lot of magic\nnumbers, better to leave some comments explaining why there are 64 or 512.\n\n- It seems that you are not using temp_string\n> char *temp_string = (char*)malloc(256 * sizeof(char));\n\n- Grammar issues\n>/*\n>* mask_column_info_list contains info about every to-be-masked column:\n>* its name, a name its table (if nothing is specified - mask all columns\nwith this name),\n>* name of masking function and name of schema containing this function\n(public if not specified)\n>*/\nthe name of its table\n\n\nпн, 3 окт. 2022 г. в 20:45, Julien Rouhaud <rjuju123@gmail.com>:\n\n> Hi,\n>\n> On Mon, Oct 03, 2022 at 06:30:17PM +0300, Олег Целебровский wrote:\n> >\n> > Hello, here's my take on masking data when using pg_dump\n> >\n> > The main idea is using PostgreSQL functions to replace data during a\n> SELECT.\n> > When table data is dumped SELECT a,b,c,d ... from ... query is\n> generated, the columns that are marked for masking are replaced with result\n> of functions on those columns\n> > Example: columns name, count are to be masked, so the query will look as\n> such: SELECT id, mask_text(name), mask_int(count), date from ...\n> >\n> > So about the interface: I added 2 more command-line options:\n> >\n> > --mask-columns, which specifies what columns from what tables will be\n> masked\n> > usage example:\n> > --mask-columns \"t1.name, t2.description\" - both columns\n> will be masked with the same corresponding function\n> > or --mask-columns name - ALL columns with name \"name\" from\n> all dumped tables will be masked with correspoding function\n> >\n> > --mask-function, which specifies what functions will mask data\n> > usage example:\n> > --mask-function mask_int - corresponding columns will be\n> masked with function named \"mask_int\" from default schema (public)\n> > or --mask-function my_schema.mask_varchar - same as above\n> but with specified schema where the function is stored\n> > or --mask-function somedir/filename - the function is\n> 
\"defined\" here - more on the structure below\n>\n> FTR I wrote an extension POC [1] last weekend that does that but on the\n> backend\n> side. The main advantage is that it's working with any existing versions\n> of\n> pg_dump (or any client relying on COPY or even plain interactive SQL\n> statements), and that the DBA can force a dedicated role to only get a\n> masked\n> dump, even if they forgot to ask for it.\n>\n> I only had a quick look at your patch but it seems that you left some todo\n> in\n> russian, which isn't helpful at least to me.\n>\n> [1] https://github.com/rjuju/pg_anonymize\n>\n>\n>", "msg_date": "Fri, 7 Oct 2022 06:01:24 +0500", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGA0LjRjyDQqNC10L/QsNGA0LQ=?=\n <we.viktory@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible solution for masking chosen columns when using pg_dump" }, { "msg_contents": "Hi,\n\nI applied most of suggestions: used separate files for most of added code, fixed typos/mistakes, got rid of that pesky TODO that was already implemented, just not deleted.\n\nAdded tests (and functionality) for cases when you need to mask columns in tables with the same name in different schemas. If schema is not specified, then columns in all tables with specified name are masked (Example - pg_dump -t ‘t0’ --mask-columns id --mask-function mask_int will mask all ids in all tables with names ‘t0’ in all existing schemas).\n\nWrote comments for all ‘magic numbers’\n\nAbout that\n\n>- Also it can be hard to use a lot of different functions for different fields, maybe it would be better to set up functions in a file.\n\nI agree with that, but I know about at least 2 other patches (both are WIP, but still) that are interacting with reading command-line options from file. 
And if everyone will write their own version of reading command-line options from file, it will quickly get confusing.\n\nA solution to that problem is another patch that will put all options from file (one file for any possible options, from existing to future ones) into **argv in main, so that pg_dump can process them as if they came form command line.\n  \n>Пятница, 7 октября 2022, 8:01 +07:00 от Виктория Шепард <we.viktory@gmail.com>:\n> \n>Hi,\n>I took a look, here are several suggestions for improvement:\n> \n>- Masking is not a main functionality of pg_dump and it is better to write most of the connected things in a separate file like parallel.c or dumputils.c. This will help slow down the growth of an already huge pg_dump file.\n> \n>- Also it can be hard to use a lot of different functions for different fields, maybe it would be better to set up functions in a file.\n> \n>- How will it work for the same field and tables in the different schemas? Can we set up the exact schema for the field?\n> \n>- misspelling in a word\n>>/*\n>>* Add all columns and funcions to list of MaskColumnInfo structures,\n>>*/\n> \n>- Why did you use 256 here?\n>> char* table = (char*) pg_malloc(256 * sizeof(char));\n>Also for malloc you need malloc on 1 symbol more because you have to store '\\0' symbol.\n> \n>- Instead of addFuncToDatabase you can run your query using something already defined from fe_utils/query_utils.c. And It will be better to set up a connection only once and create all functions. Establishing a connection is a resource-intensive procedure. 
There are a lot of magic numbers, better to leave some comments explaining why there are 64 or 512.\n> \n>- It seems that you are not using temp_string\n>> char   *temp_string = (char*)malloc(256 * sizeof(char));\n> \n>- Grammar issues\n>>/*\n>>* mask_column_info_list contains info about every to-be-masked column:\n>>* its name, a name its table (if nothing is specified - mask all columns with this name),\n>>* name of masking function and name of schema containing this function (public if not specified)\n>>*/\n>the name of its table\n>   \n>пн, 3 окт. 2022 г. в 20:45, Julien Rouhaud < rjuju123@gmail.com >:\n>>Hi,\n>>\n>>On Mon, Oct 03, 2022 at 06:30:17PM +0300, Олег Целебровский wrote:\n>>>\n>>> Hello, here's my take on masking data when using pg_dump\n>>>  \n>>> The main idea is using PostgreSQL functions to replace data during a SELECT.\n>>> When table data is dumped SELECT a,b,c,d ... from ... query is generated, the columns that are marked for masking are replaced with result of functions on those columns\n>>> Example: columns name, count are to be masked, so the query will look as such: SELECT id, mask_text(name), mask_int(count), date from ...\n>>>  \n>>> So about the interface: I added 2 more command-line options: \n>>>  \n>>> --mask-columns, which specifies what columns from what tables will be masked \n>>>     usage example:\n>>>             --mask-columns \" t1.name , t2.description\" - both columns will be masked with the same corresponding function\n>>>             or --mask-columns name - ALL columns with name \"name\" from all dumped tables will be masked with correspoding function\n>>>  \n>>> --mask-function, which specifies what functions will mask data\n>>>     usage example:\n>>>             --mask-function mask_int - corresponding columns will be masked with function named \"mask_int\" from default schema (public)\n>>>             or --mask-function my_schema.mask_varchar - same as above but with specified schema where the function is stored\n>>> 
            or --mask-function somedir/filename - the function is \"defined\" here - more on the structure below\n>>\n>>FTR I wrote an extension POC [1] last weekend that does that but on the backend\n>>side.  The main advantage is that it's working with any existing versions of\n>>pg_dump (or any client relying on COPY or even plain interactive SQL\n>>statements), and that the DBA can force a dedicated role to only get a masked\n>>dump, even if they forgot to ask for it.\n>>\n>>I only had a quick look at your patch but it seems that you left some todo in\n>>russian, which isn't helpful at least to me.\n>>\n>>[1] https://github.com/rjuju/pg_anonymize\n>>\n>>", "msg_date": "Mon, 10 Oct 2022 12:53:43 +0300", "msg_from": "=?UTF-8?B?0J7Qu9C10LMg0KbQtdC70LXQsdGA0L7QstGB0LrQuNC5?=\n <oleg_tselebrovskiy@mail.ru>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmVbMl06IFBvc3NpYmxlIHNvbHV0aW9uIGZvciBtYXNraW5nIGNob3NlbiBj?=\n =?UTF-8?B?b2x1bW5zIHdoZW4gdXNpbmcgcGdfZHVtcA==?=" }, { "msg_contents": "Hi,\n\nHere is an idea of how to read masking options from a file. Please, take a\nlook.\n\nпн, 10 окт. 2022 г. в 14:54, Олег Целебровский <oleg_tselebrovskiy@mail.ru>:\n\n> Hi,\n>\n> I applied most of suggestions: used separate files for most of added code,\n> fixed typos/mistakes, got rid of that pesky TODO that was already\n> implemented, just not deleted.\n>\n> Added tests (and functionality) for cases when you need to mask columns in\n> tables with the same name in different schemas. 
If schema is not specified,\n> then columns in all tables with specified name are masked (Example -\n> pg_dump -t ‘t0’ --mask-columns id --mask-function mask_int will mask all\n> ids in all tables with names ‘t0’ in all existing schemas).\n>\n> Wrote comments for all ‘magic numbers’\n>\n> About that\n>\n> >- Also it can be hard to use a lot of different functions for different\n> fields, maybe it would be better to set up functions in a file.\n>\n> I agree with that, but I know about at least 2 other patches (both are\n> WIP, but still) that are interacting with reading command-line options from\n> file. And if everyone will write their own version of reading command-line\n> options from file, it will quickly get confusing.\n>\n> A solution to that problem is another patch that will put all options from\n> file (one file for any possible options, from existing to future ones) into\n> **argv in main, so that pg_dump can process them as if they came form\n> command line.\n>\n>\n> Пятница, 7 октября 2022, 8:01 +07:00 от Виктория Шепард <\n> we.viktory@gmail.com>:\n>\n> Hi,\n> I took a look, here are several suggestions for improvement:\n>\n> - Masking is not a main functionality of pg_dump and it is better to write\n> most of the connected things in a separate file like parallel.c or\n> dumputils.c. 
This will help slow down the growth of an already huge pg_dump\n> file.\n>\n> - Also it can be hard to use a lot of different functions for different\n> fields, maybe it would be better to set up functions in a file.\n>\n> - How will it work for the same field and tables in the different schemas?\n> Can we set up the exact schema for the field?\n>\n> - misspelling in a word\n> >/*\n> >* Add all columns and funcions to list of MaskColumnInfo structures,\n> >*/\n>\n> - Why did you use 256 here?\n> > char* table = (char*) pg_malloc(256 * sizeof(char));\n> Also for malloc you need malloc on 1 symbol more because you have to store\n> '\\0' symbol.\n>\n> - Instead of addFuncToDatabase you can run your query using something\n> already defined from fe_utils/query_utils.c. And It will be better to set\n> up a connection only once and create all functions. Establishing a\n> connection is a resource-intensive procedure. There are a lot of magic\n> numbers, better to leave some comments explaining why there are 64 or 512.\n>\n> - It seems that you are not using temp_string\n> > char *temp_string = (char*)malloc(256 * sizeof(char));\n>\n> - Grammar issues\n> >/*\n> >* mask_column_info_list contains info about every to-be-masked column:\n> >* its name, a name its table (if nothing is specified - mask all columns\n> with this name),\n> >* name of masking function and name of schema containing this function\n> (public if not specified)\n> >*/\n> the name of its table\n>\n>\n> пн, 3 окт. 2022 г. в 20:45, Julien Rouhaud <rjuju123@gmail.com\n> <//e.mail.ru/compose/?mailto=mailto%3arjuju123@gmail.com>>:\n>\n> Hi,\n>\n> On Mon, Oct 03, 2022 at 06:30:17PM +0300, Олег Целебровский wrote:\n> >\n> > Hello, here's my take on masking data when using pg_dump\n> >\n> > The main idea is using PostgreSQL functions to replace data during a\n> SELECT.\n> > When table data is dumped SELECT a,b,c,d ... from ... 
query is\n> generated, the columns that are marked for masking are replaced with result\n> of functions on those columns\n> > Example: columns name, count are to be masked, so the query will look as\n> such: SELECT id, mask_text(name), mask_int(count), date from ...\n> >\n> > So about the interface: I added 2 more command-line options:\n> >\n> > --mask-columns, which specifies what columns from what tables will be\n> masked\n> > usage example:\n> > --mask-columns \"t1.name, t2.description\" - both columns\n> will be masked with the same corresponding function\n> > or --mask-columns name - ALL columns with name \"name\" from\n> all dumped tables will be masked with correspoding function\n> >\n> > --mask-function, which specifies what functions will mask data\n> > usage example:\n> > --mask-function mask_int - corresponding columns will be\n> masked with function named \"mask_int\" from default schema (public)\n> > or --mask-function my_schema.mask_varchar - same as above\n> but with specified schema where the function is stored\n> > or --mask-function somedir/filename - the function is\n> \"defined\" here - more on the structure below\n>\n> FTR I wrote an extension POC [1] last weekend that does that but on the\n> backend\n> side. 
The main advantage is that it's working with any existing versions\n> of\n> pg_dump (or any client relying on COPY or even plain interactive SQL\n> statements), and that the DBA can force a dedicated role to only get a\n> masked\n> dump, even if they forgot to ask for it.\n>\n> I only had a quick look at your patch but it seems that you left some todo\n> in\n> russian, which isn't helpful at least to me.\n>\n> [1] https://github.com/rjuju/pg_anonymize\n>\n>\n>\n>\n>", "msg_date": "Wed, 12 Oct 2022 12:19:34 +0500", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGA0LjRjyDQqNC10L/QsNGA0LQ=?=\n <we.viktory@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Possible solution for masking chosen columns when using\n pg_dump" }, { "msg_contents": "Hi,\n\nThank you, Oleg Tselebrovskiy, for your valuable review, here are the fixes\n\nBest regards,\nViktoria Shepard\n\nср, 12 окт. 2022 г. в 12:19, Виктория Шепард <we.viktory@gmail.com>:\n\n> Hi,\n>\n> Here is an idea of how to read masking options from a file. Please, take a\n> look.\n>\n> пн, 10 окт. 2022 г. в 14:54, Олег Целебровский <oleg_tselebrovskiy@mail.ru\n> >:\n>\n>> Hi,\n>>\n>> I applied most of suggestions: used separate files for most of added\n>> code, fixed typos/mistakes, got rid of that pesky TODO that was already\n>> implemented, just not deleted.\n>>\n>> Added tests (and functionality) for cases when you need to mask columns\n>> in tables with the same name in different schemas. 
If schema is not\n>> specified, then columns in all tables with specified name are masked\n>> (Example - pg_dump -t ‘t0’ --mask-columns id --mask-function mask_int will\n>> mask all ids in all tables with names ‘t0’ in all existing schemas).\n>>\n>> Wrote comments for all ‘magic numbers’\n>>\n>> About that\n>>\n>> >- Also it can be hard to use a lot of different functions for different\n>> fields, maybe it would be better to set up functions in a file.\n>>\n>> I agree with that, but I know about at least 2 other patches (both are\n>> WIP, but still) that are interacting with reading command-line options from\n>> file. And if everyone will write their own version of reading command-line\n>> options from file, it will quickly get confusing.\n>>\n>> A solution to that problem is another patch that will put all options\n>> from file (one file for any possible options, from existing to future ones)\n>> into **argv in main, so that pg_dump can process them as if they came form\n>> command line.\n>>\n>>\n>> Пятница, 7 октября 2022, 8:01 +07:00 от Виктория Шепард <\n>> we.viktory@gmail.com>:\n>>\n>> Hi,\n>> I took a look, here are several suggestions for improvement:\n>>\n>> - Masking is not a main functionality of pg_dump and it is better to\n>> write most of the connected things in a separate file like parallel.c or\n>> dumputils.c. This will help slow down the growth of an already huge pg_dump\n>> file.\n>>\n>> - Also it can be hard to use a lot of different functions for different\n>> fields, maybe it would be better to set up functions in a file.\n>>\n>> - How will it work for the same field and tables in the different\n>> schemas? 
Can we set up the exact schema for the field?\n>>\n>> - misspelling in a word\n>> >/*\n>> >* Add all columns and funcions to list of MaskColumnInfo structures,\n>> >*/\n>>\n>> - Why did you use 256 here?\n>> > char* table = (char*) pg_malloc(256 * sizeof(char));\n>> Also for malloc you need malloc on 1 symbol more because you have to\n>> store '\\0' symbol.\n>>\n>> - Instead of addFuncToDatabase you can run your query using something\n>> already defined from fe_utils/query_utils.c. And It will be better to set\n>> up a connection only once and create all functions. Establishing a\n>> connection is a resource-intensive procedure. There are a lot of magic\n>> numbers, better to leave some comments explaining why there are 64 or 512.\n>>\n>> - It seems that you are not using temp_string\n>> > char *temp_string = (char*)malloc(256 * sizeof(char));\n>>\n>> - Grammar issues\n>> >/*\n>> >* mask_column_info_list contains info about every to-be-masked column:\n>> >* its name, a name its table (if nothing is specified - mask all columns\n>> with this name),\n>> >* name of masking function and name of schema containing this function\n>> (public if not specified)\n>> >*/\n>> the name of its table\n>>\n>>\n>> пн, 3 окт. 2022 г. в 20:45, Julien Rouhaud <rjuju123@gmail.com\n>> <//e.mail.ru/compose/?mailto=mailto%3arjuju123@gmail.com>>:\n>>\n>> Hi,\n>>\n>> On Mon, Oct 03, 2022 at 06:30:17PM +0300, Олег Целебровский wrote:\n>> >\n>> > Hello, here's my take on masking data when using pg_dump\n>> >\n>> > The main idea is using PostgreSQL functions to replace data during a\n>> SELECT.\n>> > When table data is dumped SELECT a,b,c,d ... from ... 
query is\n>> generated, the columns that are marked for masking are replaced with result\n>> of functions on those columns\n>> > Example: columns name, count are to be masked, so the query will look\n>> as such: SELECT id, mask_text(name), mask_int(count), date from ...\n>> >\n>> > So about the interface: I added 2 more command-line options:\n>> >\n>> > --mask-columns, which specifies what columns from what tables will be\n>> masked\n>> > usage example:\n>> > --mask-columns \"t1.name, t2.description\" - both columns\n>> will be masked with the same corresponding function\n>> > or --mask-columns name - ALL columns with name \"name\" from\n>> all dumped tables will be masked with correspoding function\n>> >\n>> > --mask-function, which specifies what functions will mask data\n>> > usage example:\n>> > --mask-function mask_int - corresponding columns will be\n>> masked with function named \"mask_int\" from default schema (public)\n>> > or --mask-function my_schema.mask_varchar - same as above\n>> but with specified schema where the function is stored\n>> > or --mask-function somedir/filename - the function is\n>> \"defined\" here - more on the structure below\n>>\n>> FTR I wrote an extension POC [1] last weekend that does that but on the\n>> backend\n>> side. 
The main advantage is that it's working with any existing versions\n>> of\n>> pg_dump (or any client relying on COPY or even plain interactive SQL\n>> statements), and that the DBA can force a dedicated role to only get a\n>> masked\n>> dump, even if they forgot to ask for it.\n>>\n>> I only had a quick look at your patch but it seems that you left some\n>> todo in\n>> russian, which isn't helpful at least to me.\n>>\n>> [1] https://github.com/rjuju/pg_anonymize\n>>\n>>\n>>\n>>\n>>\n>", "msg_date": "Mon, 24 Oct 2022 04:24:38 +0500", "msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGA0LjRjyDQqNC10L/QsNGA0LQ=?=\n <we.viktory@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Possible solution for masking chosen columns when using\n pg_dump" } ]
[ { "msg_contents": "Hi, hackers!\n\n\nTrying to implement error handling behavior required by SQL/JSON, we\ncame to agreement that we need special infrastructure for catching\nerrors in the input and type conversion functions without heavy-weight\nthings like subtransactions. See the whole thread \"SQL/JSON features\nfor v15\" [1], or last ~5 messages in the branch starting from [2].\n\nThe idea is simple -- introduce new \"error-safe\" calling mode of user\nfunctions by passing special node through FunctCallInfo.context, in\nwhich function should write error info and return instead of throwing\nit. Also such functions should manually free resources before\nreturning an error. This gives ability to avoid PG_TRY/PG_CATCH and\nsubtransactions.\n\n\nI have submitted two patch sets to the old thread: the first [3] POC\nexample for NULL_ON_ERROR option for COPY, and the second [4] with the\nset of error-safe functions needed for SQL/JSON.\n\nNow I'm starting this separate thread with the new version of the\npatch set, which includes error-safe functions for the subset of\ndata types (unfinished domains were removed), NULL_ON_ERROR option\nfor COPY (may need one more thread).\n\n\n\nIn the previous version of the patch error-safe functions were marked\nin the catalog using new column pg_proc.proissafe, but it is not the\nbest solution:\n\nOn 30.09.2022, Tom Lane wrote:\n> I strongly recommend against having a new pg_proc column at all.\n> I doubt that you really need it, and having one will create\n> enormous mechanical burdens to making the conversion. (For example,\n> needing a catversion bump every time we convert one more function,\n> or an extension version bump to convert extensions.)\n\nI think the only way to avoid catalog modification (adding new columns\nto pg_proc or pg_type, introducing new function signatures etc.) and\nto avoid adding some additional code to the entry of error-safe\nfunctions is to bump version of our calling convention. 
I simply added\nflag Pg_finfo_record.errorsafe which is set to true when the new\nPG_FUNCTION_INFO_V2_ERRORSAFE() macro is used. We could avoid adding\nthis flag by treating every V2 as error-safe, but I'm not sure if\nit is acceptable.\n\nBuilt-in error-safe function are marked in pg_proc.dat using the\nspecial flag \"errorsafe\" which is stored only in FmgrBuiltin, not in\nthe catalog like previous \"proissafe\" was.\n\n\n> On 2022-09-3 Andrew Dunstan wrote:\n>> I suggest just submitting the Input function stuff on its own, I\n>> think that means not patches 3,4,15 at this stage. Maybe we would\n>> also need a small test module to call the functions, or at least\n>> some of them. The earlier we can get this in the earlier SQL/JSON\n>> patches based on it can be considered.\n\n> +1\n\nI have added test module in patch #14.\n\n> On 2022-09-3 Andrew Dunstan wrote:\n> proissafe isn't really a very informative name. Safe for what? maybe\n> proerrorsafe or something would be better?\n\nI have renamed \"safe\" to \"errorsafe\".\n\n\nOn 2022-09-3 Andrew Dunstan wrote:\n> I don't think we need the if test or else clause here:\n>\n> + if (edata)\n> + return InputFunctionCallInternal(flinfo, str, typioparam,\n> typmod, edata);\n> + else\n> + return InputFunctionCall(flinfo, str, typioparam, typmod);\n\"If\" statement removed.\n\n\nOn 2022-09-3 Andrew Dunstan wrote:\n> I think we should probably cover float8 as well as float4, and there\n> might be some other odd gaps.\n\nI have added error-safe function for float8 too.\n\n\n[1]https://www.postgresql.org/message-id/flat/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org\n[2]https://www.postgresql.org/message-id/flat/13351.1661965592%40sss.pgh.pa.us#3d23aa20c808d0267ac1f7ef2825f0dd\n[3]https://www.postgresql.org/message-id/raw/379e5365-9670-e0de-ee08-57ba61cbc976%40postgrespro.ru\n[4]https://www.postgresql.org/message-id/raw/0574201c-bd35-01af-1557-8936f99ce5aa%40postgrespro.ru\n\n-- \nNikita Glukhov\nPostgres 
Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 3 Oct 2022 22:44:27 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Error-safe user functions" }, { "msg_contents": "Sorry, I didn't try building using meson.\nOne line was fixed in the new test module's meson.build.\n\n--\nNikita Glukhov\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 4 Oct 2022 02:13:52 +0300", "msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> On 30.09.2022, Tom Lane wrote:\n>> I strongly recommend against having a new pg_proc column at all.\n\n> I think the only way to avoid catalog modification (adding new columns\n> to pg_proc or pg_type, introducing new function signatures etc.) and\n> to avoid adding some additional code to the entry of error-safe\n> functions is to bump version of our calling convention. I simply added\n> flag Pg_finfo_record.errorsafe which is set to true when the new\n> PG_FUNCTION_INFO_V2_ERRORSAFE() macro is used.\n\nI don't think you got my point at all.\n\nI do not think we need a new pg_proc column (btw, touching pg_proc.dat\nis morally equivalent to a pg_proc column), and I do not think we need\na new call-convention version either, because I think that this sort\nof thing:\n\n+ /* check whether input function supports returning errors */\n+ if (cstate->opts.null_on_error_flags[attnum - 1] &&\n+ !func_is_error_safe(in_func_oid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"input function for datatype \\"%s\\" does not support error handling\",\n+ format_type_be(att->atttypid))));\n\nis useless. 
It does not benefit anybody to pre-emptively throw an error\nbecause you are afraid that some other code might throw an error later.\nThat just converts \"might fail\" to \"guaranteed to fail\" --- how is that\nbetter?\n\nI think what we want is to document options like NULL_ON_ERROR along the\nlines of\n\n If the input data in one of the specified columns is invalid,\n set the column's value to NULL instead of reporting an error.\n This feature will only work if the column datatype's input\n function has been upgraded to support it; otherwise, an invalid\n input value will result in an error anyway.\n\nand just leave it on the heads of extension authors to get their\ncode moved forward. (If we fail to get all the core types converted\nby the time v16 reaches feature freeze, we'd have to add some docs\nabout which ones support this; but maybe we will get all that done\nand not need documentation effort.)\n\nSome other recommendations:\n\n* The primary work-product from an initial patch of this sort is an\nAPI specification. Therefore, your 0001 patch ought to be introducing\nsome prose documentation somewhere (and I don't mean comments in elog.h,\nrather a README file or even the SGML docs --- utils/fmgr/README might\nbe a good spot). Getting that text right so that people understand\nwhat to do is more important than any single code detail. You are not\nwinning any fans by not bothering with code comments such as per-function\nheader comments, either.\n\n* Submitting 16-part patch series is a good way to discourage people\nfrom reviewing your work. I'd toss most of the datatype conversions\noverboard for the moment, planning to address them later once the core\npatch is committed. The initial patchset only needs to have one or two\ndata types done as proof-of-concept.\n\n* I'd toss the test module overboard too. Once you've got COPY using\nthe feature, that's a perfectly good testbed. 
The effort spent on\nthe test module would have been better spent on making the COPY support\nmore complete (ie, get rid of the silly restriction to CSV).\n\n* The 0015 and 0016 patches don't seem to belong here either. It's\nimpossible to review these when the code is neither commented nor\nconnected to any use-case.\n\n* I think that the ereturn macro is the right idea, but I don't understand\nthe rationale for also inventing PG_RETURN_ERROR. Also, ereturn's\nimplementation isn't great --- I don't like duplicating the __VA_ARGS__\ntext, because that will lead to substantial code bloat. It'd likely\nwork better to make ereturn very much like ereport, except with a\ndifferent finishing function that contains the throw-or-not logic.\nAs a small nitpick, I think I'd make ereturn's argument order be return\nvalue then edata then ...; it just seems more sensible that way.\n\n* execnodes.h seems like an *extremely* random place to put struct\nFuncCallError; that will force inclusion of execnodes.h in many places\nthat did not need it before. Possibly fmgr.h is the right place for it?\nIn general you need to think about avoiding major inclusion bloat\n(and I wonder whether the patchset passes cpluspluscheck). It might\nhelp to treat ereturn's edata argument as just \"void *\" and confine\nreferences to the FuncCallError struct to the errfinish-replacement\nsubroutine, ie drop the tests in PG_GET_ERROR_PTR and do that check\ninside elog.c.\n\n* I wonder if there's a way to avoid the CopyErrorData and FreeErrorData\nsteps in use-cases like this --- that's pure overhead, really, for\nCOPY's purposes, and it's probably not the only use-case that will\nthink so. Maybe we could complicate FuncCallError a little and pass\na flag that indicates that we only want to know whether an error\noccurred, not what it was exactly. 
On the other hand, if you assume\nthat errors should be rare, maybe that's useless micro-optimization.\n\nBasically, this patch set should be a lot smaller and not have ambitions\nbeyond \"get the API right\" and \"make one or two datatypes support COPY\nNULL_ON_ERROR\". Add more code once that core functionality gets reviewed\nand committed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 13:37:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": ">\n>\n> The idea is simple -- introduce new \"error-safe\" calling mode of user\n> functions by passing special node through FunctCallInfo.context, in\n> which function should write error info and return instead of throwing\n> it. Also such functions should manually free resources before\n> returning an error. This gives ability to avoid PG_TRY/PG_CATCH and\n> subtransactions.\n>\n> I tried something similar when trying to implement TRY_CAST (\nhttps://learn.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql?view=sql-server-ver16)\nlate last year. I also considered having a default datum rather than just\nreturning NULL.\n\nI had not considered a new node type. I had considered having every\nfunction have a \"safe\" version, which would be a big duplication of logic\nrequiring a lot of regression tests and possibly fuzzing tests.\n\nInstead, I extended every core input function to have an extra boolean\nparameter to indicate if failures were allowed, and then an extra Datum\nparameter for the default value. 
The Input function wouldn't need to check\nthe value of the new parameters until it was already in a situation where\nit found invalid data, but the extra overhead still remained, and it meant\nthat basically every third party type extension would need to be changed.\n\nThen I considered whether the cast failure should be completely silent, or\nif the previous error message should instead be omitted as a LOG/INFO/WARN,\nand if we'd want that to be configurable, so then the boolean parameter\nbecame an integer enum:\n\n* regular fail (0)\n* use default silently (1)\n* use default emit LOG/NOTICE/WARNING (2,3,4)\n\nAt the time, all of this seemed like too big of a change for a function\nthat isn't even in the SQL Standard, but maybe SQL/JSON changes that.\n\nIf so, it would allow for a can-cast-to test that users would find very\nuseful. Something like:\n\nSELECT CASE WHEN 'abc' CAN BE integer THEN 'Integer' ELSE 'Nope' END\n\nThere's obviously no standard syntax to support that, but the data\ncleansing possibilities would be great.\n", "msg_date": "Mon, 10 Oct 2022 12:54:28 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-10-07 Fr 13:37, Tom Lane wrote:\n\n\n[ lots of detailed review ]\n\n> Basically, this patch set should be a lot smaller and not have ambitions\n> beyond \"get the API right\" and \"make one or two datatypes support COPY\n> NULL_ON_ERROR\". Add more code once that core functionality gets reviewed\n> and committed.\n>\n> \t\t\t\n\n\nNikita,\n\njust checking in, are you making progress on this? 
I think we really\nneed to get this reviewed and committed ASAP if we are to have a chance\nto get the SQL/JSON stuff reworked to use it in time for release 16.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:35:55 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Tue, Nov 15, 2022 at 11:36 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-10-07 Fr 13:37, Tom Lane wrote:\n>\n>\n> [ lots of detailed review ]\n>\n> > Basically, this patch set should be a lot smaller and not have ambitions\n> > beyond \"get the API right\" and \"make one or two datatypes support COPY\n> > NULL_ON_ERROR\". Add more code once that core functionality gets reviewed\n> > and committed.\n> >\n> >\n>\n>\n> Nikita,\n>\n> just checking in, are you making progress on this? I think we really\n> need to get this reviewed and committed ASAP if we are to have a chance\n> to get the SQL/JSON stuff reworked to use it in time for release 16.\n>\n>\nI'm making an attempt at this or something very similar to it. I don't yet\nhave a patch ready.\n", "msg_date": "Mon, 21 Nov 2022 00:17:12 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> On Tue, Nov 15, 2022 at 11:36 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> Nikita,\n>> just checking in, are you making progress on this? I think we really\n>> need to get this reviewed and committed ASAP if we are to have a chance\n>> to get the SQL/JSON stuff reworked to use it in time for release 16.\n\n> I'm making an attempt at this or something very similar to it. I don't yet\n> have a patch ready.\n\nCool. We can't delay too much longer on this if we want to have\na credible feature in v16. Although I want a minimal initial\npatch, there will still be a ton of incremental work to do after\nthe core capability is reviewed and committed, so there's no\ntime to lose.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Nov 2022 00:26:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Nov 21, 2022 at 12:26:45AM -0500, Tom Lane wrote:\n> Corey Huinker <corey.huinker@gmail.com> writes:\n>> I'm making an attempt at this or something very similar to it. I don't yet\n>> have a patch ready.\n\nNice to hear that. If a WIP or a proof of concept takes more than a\nfew hours, how about beginning a new thread with the ideas you have in\nmind so as we could agree about how to shape this error-to-default\nconversion facility on data input?\n\n> Cool. We can't delay too much longer on this if we want to have\n> a credible feature in v16. 
Although I want a minimal initial\n> patch, there will still be a ton of incremental work to do after\n> the core capability is reviewed and committed, so there's no\n> time to lose.\n\nI was glancing at the patch of upthread and the introduction of a v2\nscared me, so I've stopped at this point.\n--\nMichael", "msg_date": "Tue, 22 Nov 2022 08:59:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-11-21 Mo 00:26, Tom Lane wrote:\n> Corey Huinker <corey.huinker@gmail.com> writes:\n>> On Tue, Nov 15, 2022 at 11:36 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> Nikita,\n>>> just checking in, are you making progress on this? I think we really\n>>> need to get this reviewed and committed ASAP if we are to have a chance\n>>> to get the SQL/JSON stuff reworked to use it in time for release 16.\n>> I'm making an attempt at this or something very similar to it. I don't yet\n>> have a patch ready.\n> Cool. We can't delay too much longer on this if we want to have\n> a credible feature in v16. Although I want a minimal initial\n> patch, there will still be a ton of incremental work to do after\n> the core capability is reviewed and committed, so there's no\n> time to lose.\n>\n> \t\t\t\n\n\nOK, here's a set of minimal patches based on Nikita's earlier work and\nalso some work by my colleague Amul Sul. It tries to follow Tom's\noriginal outline at [1], and do as little else as possible.\n\nPatch 1 introduces the IOCallContext node. The caller should set the\nno_error_throw flag and clear the error_found flag, which will be set by\na conforming IO function if an error is found. It also includes a string\nfield for an error message. I haven't used that, it's more there to\nstimulate discussion. 
Robert suggested to me that maybe it should be an\nErrorData, but I'm not sure how we would use it.\n\nPatch 2 introduces InputFunctionCallContext(), which is similar to\nInputFunctionCall() but with an extra context parameter. Note that it's\nok for an input function to return a NULL to this function if an error\nis found.\n\nPatches 3 and 4 modify the bool_in() and int4in() functions respectively\nto handle an IOContextCall appropriately if provided one in their\nfcinfo.context.\n\nPatch 5 introduces COPY FROM ... NULL_ON_ERROR which, in addition to\nbeing useful in itself, provides a test of the previous patches.\n\nI hope we can get a fairly quick agreement so that work can begin on\nadjusting at least those things needed for the SQL/JSON patches ASAP.\nOur goal should be to adjust all the core input functions, but that's\nnot quite so urgent, and can be completed in parallel with the SQL/JSON\nwork.\n\n\ncheers\n\n\nandrew\n\n[1] https://www.postgresql.org/message-id/13351.1661965592%40sss.pgh.pa.us\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 1 Dec 2022 12:46:48 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, here's a set of minimal patches based on Nikita's earlier work and\n> also some work by my colleague Amul Sul. It tries to follow Tom's\n> original outline at [1], and do as little else as possible.\n\nThis is not really close at all to what I had in mind.\n\nThe main objection is that we shouldn't be passing back a \"char *\"\nerror string (though I observe that your 0003 and 0004 patches aren't\neven bothering to do that much). 
I liked Nikita's\noverall idea of introducing an \"ereturn\" macro, with the idea that\nwhere we have, say,\n\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n s, \"integer\")));\n\nwe would write\n\n ereturn(context, ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n s, \"integer\")));\n return NULL; // or whatever is appropriate\n\nand the involvement with the contents of the context node would\nall be confined to some new code in elog.c. That would help\nprevent the #include-footprint-bloat that is otherwise going to\nensue.\n\n(Maybe we could assume that ereturn's elevel must be ERROR, and\nsave a little notation. I'm not very wedded to \"ereturn\" as the\nnew macro name either, though it's not awful.)\n\nAlso, as I said before, the absolute first priority has to be\ndocumentation explaining what function authors are supposed to\ndo differently than before.\n\nI'd be willing to have a go at this myself, except that Corey\nsaid he was working on it, so I don't want to step on his toes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Dec 2022 13:14:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 1, 2022 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The main objection is that we shouldn't be passing back a \"char *\"\n> error string (though I observe that your 0003 and 0004 patches aren't\n> even bothering to do that much). 
I think we want to pass back a\n> fully populated ErrorData struct so that we can report everything\n> the actual error would have done (notably, the SQLSTATE).\n\n+1.\n\n> That means that elog.h/.c has to be intimately involved in this.\n> I liked Nikita's\n> overall idea of introducing an \"ereturn\" macro, with the idea that\n> where we have, say,\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n> s, \"integer\")));\n>\n> we would write\n>\n> ereturn(context, ERROR,\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> errmsg(\"value \\\"%s\\\" is out of range for type %s\",\n> s, \"integer\")));\n> return NULL; // or whatever is appropriate\n\nIt sounds like you're imagining that ereturn doesn't return, which\nseems confusing. But I don't know that I'd like it better if it did.\nMagic return statements hidden inside macros seem not too fun. What\nI'd like to see is a macro that takes a pointer to an ErrorData and\nthe rest of the arguments like ereport() and stuffs everything in\nthere. And then you can pass that to ThrowErrorData() later if you\nlike. That way it's visible when you're using the macro where you're\nputting the error. I think that would make the code more readable.\n\n> Also, as I said before, the absolute first priority has to be\n> documentation explaining what function authors are supposed to\n> do differently than before.\n\n+1.\n\n> I'd be willing to have a go at this myself, except that Corey\n> said he was working on it, so I don't want to step on his toes.\n\nTime is short, and I do not think Corey will be too sad if you decide\nto have a go at it. The chances of person A being able to write the\ncode person B is imagining as well as person B could write it are not\ngreat, regardless of who A and B are. 
And I think the general\nconsensus around here is that you're a better coder than most.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Dec 2022 14:04:00 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It sounds like you're imagining that ereturn doesn't return, which\n> seems confusing. But I don't know that I'd like it better if it did.\n\nThe spec I had in mind was that it would behave as ereport(ERROR)\nunless a suitable FuncErrorContext node is passed, in which case\nit'd store the error data into that node and return. This leaves\nthe invoker with only the job of passing control back afterwards,\nif it gets control back. I'd be the first to agree that \"ereturn\"\ndoesn't capture that detail very well, but I don't have a better name.\n(And I do like the fact that this name is the same length as \"ereport\",\nso that we won't end up with lots of reindentation to do.)\n \t\t \n> Magic return statements hidden inside macros seem not too fun. What\n> I'd like to see is a macro that takes a pointer to an ErrorData and\n> the rest of the arguments like ereport() and stuffs everything in\n> there. And then you can pass that to ThrowErrorData() later if you\n> like. That way it's visible when you're using the macro where you're\n> putting the error. I think that would make the code more readable.\n\nI think that'd just complicate the places that are having to report\nsuch errors --- of which there are likely to be hundreds by the time\nwe are done. 
I will not accept a solution that requires more than\nthe absolute minimum of additions to the error-reporting spots.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Dec 2022 14:41:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 1, 2022 at 2:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It sounds like you're imagining that ereturn doesn't return, which\n> > seems confusing. But I don't know that I'd like it better if it did.\n>\n> The spec I had in mind was that it would behave as ereport(ERROR)\n> unless a suitable FuncErrorContext node is passed, in which case\n> it'd store the error data into that node and return. This leaves\n> the invoker with only the job of passing control back afterwards,\n> if it gets control back. I'd be the first to agree that \"ereturn\"\n> doesn't capture that detail very well, but I don't have a better name.\n> (And I do like the fact that this name is the same length as \"ereport\",\n> so that we won't end up with lots of reindentation to do.)\n\nI don't think it's sensible to make decisions about important syntax\non the basis of byte-length. Reindenting is a one-time effort; code\nclarity will be with us forever.\n\n> > Magic return statements hidden inside macros seem not too fun. What\n> > I'd like to see is a macro that takes a pointer to an ErrorData and\n> > the rest of the arguments like ereport() and stuffs everything in\n> > there. And then you can pass that to ThrowErrorData() later if you\n> > like. That way it's visible when you're using the macro where you're\n> > putting the error. 
I think that would make the code more readable.\n>\n> I think that'd just complicate the places that are having to report\n> such errors --- of which there are likely to be hundreds by the time\n> we are done.\n\nOK, that's a fair point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Dec 2022 15:12:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Dec 1, 2022 at 2:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd be the first to agree that \"ereturn\"\n>> doesn't capture that detail very well, but I don't have a better name.\n>> (And I do like the fact that this name is the same length as \"ereport\",\n>> so that we won't end up with lots of reindentation to do.)\n\n> I don't think it's sensible to make decisions about important syntax\n> on the basis of byte-length. Reindenting is a one-time effort; code\n> clarity will be with us forever.\n\nSure, but without a proposal for a better name, that's irrelevant.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Dec 2022 15:49:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 1, 2022 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I don't think it's sensible to make decisions about important syntax\n> > on the basis of byte-length. Reindenting is a one-time effort; code\n> > clarity will be with us forever.\n>\n> Sure, but without a proposal for a better name, that's irrelevant.\n\nSure, but you're far too clever not to be able to come up with\nsomething good without any help from me. io_error_return_or_throw()?\nstore_or_report_io_error()? 
Or just store_io_error()?\n\nIt sounds to me like we're crafting something that is specific to and\ncan only be used with type input and output functions, so the name\nprobably should reflect that rather than being something totally\ngeneric like ereturn() or error_stash() or whatever. If we were making\nthis into a general-purpose way of sticking an error someplace, then a\nname like that would make sense and this would be an extension of the\nelog.c interface. But what you're proposing is a great deal more\nspecialized than that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Dec 2022 16:15:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It sounds to me like we're crafting something that is specific to and\n> can only be used with type input and output functions, so the name\n> probably should reflect that rather than being something totally\n> generic like ereturn() or error_stash() or whatever.\n\nMy opinion is exactly the opposite. Don't we already have a need\nfor error-safe type conversions, too, in the JSON stuff? Even if\nI couldn't point to a need-it-now requirement, I think we will\neventually find a use for this with some other classes of functions.\n\n> If we were making\n> this into a general-purpose way of sticking an error someplace, then a\n> name like that would make sense and this would be an extension of the\n> elog.c interface. But what you're proposing is a great deal more\n> specialized than that.\n\nI'm proposing *exactly* an extension of the elog.c interface;\nso were you, a couple messages back. 
It's only specialized to I/O\nin the sense that our current need is for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Dec 2022 17:33:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 1, 2022 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It sounds to me like we're crafting something that is specific to and\n> > can only be used with type input and output functions, so the name\n> > probably should reflect that rather than being something totally\n> > generic like ereturn() or error_stash() or whatever.\n>\n> My opinion is exactly the opposite. Don't we already have a need\n> for error-safe type conversions, too, in the JSON stuff? Even if\n> I couldn't point to a need-it-now requirement, I think we will\n> eventually find a use for this with some other classes of functions.\n\n<sputters>\n\nBut you yourself proposed a new node called IOCallContext. It can't be\nright to have the names be specific to I/O functions in one part of\nthe patch and totally generic in another part.\n\nHmm, but yesterday I see that you were now calling it FuncCallContext.\n\nI think the design is evolving in your head as you think about this\nmore, which is totally understandable and actually very good. 
However,\nthis is also why I think that you should produce the patch you\nactually want instead of letting other people repeatedly submit\npatches and then complain that they weren't what you had in mind.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 08:27:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think the design is evolving in your head as you think about this\n> more, which is totally understandable and actually very good. However,\n> this is also why I think that you should produce the patch you\n> actually want instead of letting other people repeatedly submit\n> patches and then complain that they weren't what you had in mind.\n\nOK, Corey hasn't said anything, so I will have a look at this over\nthe weekend.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Dec 2022 09:12:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-02 Fr 09:12, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I think the design is evolving in your head as you think about this\n>> more, which is totally understandable and actually very good. However,\n>> this is also why I think that you should produce the patch you\n>> actually want instead of letting other people repeatedly submit\n>> patches and then complain that they weren't what you had in mind.\n> OK, Corey hasn't said anything, so I will have a look at this over\n> the weekend.\n>\n> \t\t\t\n\n\nGreat. 
Let's hope we can get this settled early next week and then we\ncan get to work on the next tranche of functions, those that will let\nthe SQL/JSON work restart.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 2 Dec 2022 09:34:24 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 2, 2022 at 9:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think the design is evolving in your head as you think about this\n> > more, which is totally understandable and actually very good. However,\n> > this is also why I think that you should produce the patch you\n> > actually want instead of letting other people repeatedly submit\n> > patches and then complain that they weren't what you had in mind.\n>\n> OK, Corey hasn't said anything, so I will have a look at this over\n> the weekend.\n>\n> regards, tom lane\n>\n\nSorry, had several life issues intervene. Glancing over what was discussed\nbecause it seems pretty similar to what I had in mind. Will respond back in\ndetail shortly.", "msg_date": "Fri, 2 Dec 2022 12:53:20 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 2, 2022 at 9:34 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-02 Fr 09:12, Tom Lane wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> >> I think the design is evolving in your head as you think about this\n> >> more, which is totally understandable and actually very good. However,\n> >> this is also why I think that you should produce the patch you\n> >> actually want instead of letting other people repeatedly submit\n> >> patches and then complain that they weren't what you had in mind.\n> > OK, Corey hasn't said anything, so I will have a look at this over\n> > the weekend.\n> >\n> >\n>\n>\n> Great. Let's hope we can get this settled early next week and then we\n> can get to work on the next tranche of functions, those that will let\n> the SQL/JSON work restart.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\nI'm still working on organizing my patch, but it grew out of a desire to do\nthis:\n\nCAST(value AS TypeName DEFAULT expr)\n\nThis is a thing that exists in other forms in other databases and while it\nmay look unrelated, it is essentially the SQL/JSON casts within a nested\ndata structure issue, just a lot simpler.\n\nMy original plan had been two new params to all _in() functions: a boolean\nerror_mode and a default expression Datum.\n\nAfter consulting with Jeff Davis and Michael Paquier, the notion of\nmodifying fcinfo itself two booleans:\n allow_error (this call is allowed to return if there was an error with\nINPUT) and\n has_error (this function has the concept of a purely-input-based error,\nand found one)\n\nThe nice part about this is that unaware functions can ignore these values,\nand custom data types that did not check
these values would continue to\nwork as before. It wouldn't respect the CAST default, but that's up to the\nextension writer to fix, and that's a pretty acceptable failure mode.\n\nWhere this gets tricky is arrays and complex types: the default expression\napplies only to the object explicitly casted, so if somebody tried CAST\n('{\"123\",\"abc\"}'::text[] AS integer[] DEFAULT '{0}') the inner casts need\nto know that they _can_ allow input errors, but have no default to offer,\nthey need merely report their failure upstream...and that's where the\nissues align with the SQL/JSON issue.\n\nIn pursuing this, I see that my method was simultaneously too little and\ntoo much. Too much in the sense that it alters the structure for all fmgr\nfunctions, though in a very minor and back-compatible way, and too little\nin the sense that we could actually return the ereport info in a structure\nand leave it to the node to decide whether to raise it or not. Though I\nshould add that there are many situations where we don't care about the\nspecifics of the error, we just want to know that one existed and move on,\nso time spent forming that return structure would be time wasted.\n\nThe one gap I see so far in the patch presented is that it returns a null\nvalue on bad input, which might be ok if the node has the default, but that\nthen presents the node with having to understand whether it was a null\nbecause of bad input vs a null that was expected.", "msg_date": "Fri, 2 Dec 2022 13:15:06 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> I'm still working on organizing my patch, but it grew out of a desire to do\n> this:\n> CAST(value AS TypeName DEFAULT expr)\n> This is a thing that exists in other forms in other databases and while it\n> may look unrelated, it is essentially the SQL/JSON casts within a nested\n> data structure issue, just a lot simpler.\n\nOkay, maybe that's why I was thinking we had a
requirement for\nfailure-free casts. Sure, you can transform it to the other thing\nby always implementing this as a cast-via-IO, but you could run into\nsemantics issues that way. (If memory serves, we already have cases\nwhere casting X to Y gives a different result from casting X to text\nto Y.)\n\n> My original plan had been two new params to all _in() functions: a boolean\n> error_mode and a default expression Datum.\n> After consulting with Jeff Davis and Michael Paquier, the notion of\n> modifying fcinfo itself two booleans:\n> allow_error (this call is allowed to return if there was an error with\n> INPUT) and\n> has_error (this function has the concept of a purely-input-based error,\n> and found one)\n\nHmm ... my main complaint about that is the lack of any way to report\nthe details of the error. I realize that a plain boolean failure flag\nmight be enough for our immediate use-cases, but I don't want to expend\nthe amount of effort this is going to involve and then later find we\nhave a use-case where we need the error details.\n\nThe sketch that's in my head at the moment is to make use of the existing\n\"context\" field of FunctionCallInfo: if that contains a node of some\nto-be-named type, then we are requesting that errors not be thrown\nbut instead be reported by passing back an ErrorData using a field of\nthat node. The issue about not constructing an ErrorData if the outer\ncaller doesn't need it could perhaps be addressed by adding some boolean\nflag fields in that node, but the details of that need not be known to\nthe functions reporting errors this way; it'd be a side channel from the\nouter caller to elog.c.\n\nThe main objection I can see to this approach is that we only support\none context value per call, so you could not easily combine this\nfunctionality with existing use-cases for the context field. 
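A toy mock-up of that sketch — the callee testing the context field for a
to-be-named node type and stashing the error there instead of throwing — might
look like the following. Again, every name here is hypothetical, not real fmgr
code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical node-style tagging, mimicking an IsA()-type test. */
typedef enum ToyNodeTag { T_ToyErrorSaveContext = 1 } ToyNodeTag;

typedef struct ToyErrorSaveContext
{
	ToyNodeTag	type;			/* first field, like a real Node */
	bool		error_occurred;
	char		message[128];
} ToyErrorSaveContext;

typedef struct ToyFunctionCallInfo
{
	void	   *context;		/* may point at a ToyErrorSaveContext */
	const char *arg;			/* toy: single string argument */
	bool		isnull;
} ToyFunctionCallInfo;

/* Report an error "softly" only if the caller passed a save-context node. */
static int
toy_bool_in(ToyFunctionCallInfo *fcinfo)
{
	ToyErrorSaveContext *escontext = NULL;

	if (fcinfo->context &&
		*(ToyNodeTag *) fcinfo->context == T_ToyErrorSaveContext)
		escontext = (ToyErrorSaveContext *) fcinfo->context;

	if (strcmp(fcinfo->arg, "true") == 0)
		return 1;
	if (strcmp(fcinfo->arg, "false") == 0)
		return 0;

	/* bad input: stash the complaint instead of throwing */
	if (escontext)
	{
		escontext->error_occurred = true;
		strcpy(escontext->message, "invalid input syntax for type boolean");
		fcinfo->isnull = true;
		return 0;				/* dummy result; caller must check the node */
	}
	/* with no context node a real implementation would ereport(ERROR) */
	fcinfo->isnull = true;
	return -1;
}
```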
A quick\ncensus of InitFunctionCallInfoData calls finds aggregates, window\nfunctions, triggers, and procedures, none of which seem like plausible\ncandidates for wanting no-error behavior, so I'm not too concerned\nabout that. (Maybe the error-reporting node could be made a sub-node\nof the context node in any future cases where we do need it?)\n\n> The one gap I see so far in the patch presented is that it returns a null\n> value on bad input, which might be ok if the node has the default, but that\n> then presents the node with having to understand whether it was a null\n> because of bad input vs a null that was expected.\n\nYeah. That's something we could probably get away with for the case of\ninput functions only, but I think explicit out-of-band signaling that\nthere was an error is a more future-proof solution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Dec 2022 13:46:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > I'm still working on organizing my patch, but it grew out of a desire to\n> do\n> > this:\n> > CAST(value AS TypeName DEFAULT expr)\n> > This is a thing that exists in other forms in other databases and while\n> it\n> > may look unrelated, it is essentially the SQL/JSON casts within a nested\n> > data structure issue, just a lot simpler.\n>\n> Okay, maybe that's why I was thinking we had a requirement for\n> failure-free casts. Sure, you can transform it to the other thing\n> by always implementing this as a cast-via-IO, but you could run into\n> semantics issues that way. 
(If memory serves, we already have cases\n> where casting X to Y gives a different result from casting X to text\n> to Y.)\n>\n\nYes, I was setting aside the issue of direct cast functions for my v0.1\n\n\n>\n> > My original plan had been two new params to all _in() functions: a\n> boolean\n> > error_mode and a default expression Datum.\n> > After consulting with Jeff Davis and Michael Paquier, the notion of\n> > modifying fcinfo itself two booleans:\n> > allow_error (this call is allowed to return if there was an error with\n> > INPUT) and\n> > has_error (this function has the concept of a purely-input-based error,\n> > and found one)\n>\n> Hmm ... my main complaint about that is the lack of any way to report\n> the details of the error. I realize that a plain boolean failure flag\n> might be enough for our immediate use-cases, but I don't want to expend\n> the amount of effort this is going to involve and then later find we\n> have a use-case where we need the error details.\n>\n\nI agree, but then we're past a boolean for allow_error, and we probably get\ninto a list of modes like this:\n\nCAST_ERROR_ERROR /* default ereport(), what we do now */\nCAST_ERROR_REPORT_FULL /* report that the cast failed, everything that you\nwould have put in the ereport() instead put in a struct that gets returned\nto caller */\nCAST_ERROR_REPORT_SILENT /* report that the cast failed, but nobody cares\nwhy so don't even form the ereport strings, good for bulk operations */\nCAST_ERROR_WARNING /* report that the cast failed, but emit ereport() as a\nwarning */\nCAST_ERROR_[NOTICE,LOG,DEBUG1,..DEBUG5] /* same, but some other loglevel */\n\n\n>\n> The sketch that's in my head at the moment is to make use of the existing\n> \"context\" field of FunctionCallInfo: if that contains a node of some\n> to-be-named type, then we are requesting that errors not be thrown\n> but instead be reported by passing back an ErrorData using a field of\n> that node. 
The issue about not constructing an ErrorData if the outer\n> caller doesn't need it could perhaps be addressed by adding some boolean\n> flag fields in that node, but the details of that need not be known to\n> the functions reporting errors this way; it'd be a side channel from the\n> outer caller to elog.c.\n>\n\nThat should be a good place for it, assuming it's not already used like\nfn_extra is. It would also squash those cases above into just three: ERROR,\nREPORT_FULL, and REPORT_SILENT, leaving it up to the node what type of\nerroring/logging is appropriate.\n\n\n>\n> The main objection I can see to this approach is that we only support\n> one context value per call, so you could not easily combine this\n> functionality with existing use-cases for the context field. A quick\n> census of InitFunctionCallInfoData calls finds aggregates, window\n> functions, triggers, and procedures, none of which seem like plausible\n> candidates for wanting no-error behavior, so I'm not too concerned\n> about that. (Maybe the error-reporting node could be made a sub-node\n> of the context node in any future cases where we do need it?)\n>\n\nA subnode had occurred to me when fiddling about with fn_extra, so that\napplies here, but if we're doing a sub-node, then maybe it's worth its own\nparameter. I struggled with that because of how few of these functions will\nuse it vs how often they're executed.\n\n\n>\n> > The one gap I see so far in the patch presented is that it returns a null\n> > value on bad input, which might be ok if the node has the default, but\n> that\n> > then presents the node with having to understand whether it was a null\n> > because of bad input vs a null that was expected.\n>\n> Yeah.
That's something we could probably get away with for the case of\n> input functions only, but I think explicit out-of-band signaling that\n> there was an error is a more future-proof solution.\n>\n\nI think we'll run into it fairly soon, because if I recall correctly,\nSQL/JSON has a formatting spec that essentially means that we're not\ncalling input functions, we're calling TO_CHAR() and TO_DATE(), but they're\nvery similar to input functions.\n\nOne positive side effect of all this is we can get an isa(value, typname)\nconstruct like this \"for free\", we just try the cast and return the value.\n\nI'm still working on my patch even though it will probably be sidelined in\nthe hopes that it informs us of any subsequent issues.", "msg_date": "Fri, 2 Dec 2022 14:06:09 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The main objection I can see to this approach is that we only support\n> one context value per call, so you could not easily combine this\n> functionality with existing use-cases for the context field. A quick\n> census of InitFunctionCallInfoData calls finds aggregates, window\n> functions, triggers, and procedures, none of which seem like plausible\n> candidates for wanting no-error behavior, so I'm not too concerned\n> about that. (Maybe the error-reporting node could be made a sub-node\n> of the context node in any future cases where we do need it?)\n\nI kind of wonder why we don't just add another member to FmgrInfo.\nIt's 48 bytes right now and this would increase the size to 56 bytes,\nso it's not as if we're increasing the number of cache lines or even\nusing up all of the remaining byte space.
It's an API break, but\npeople have to recompile for new major versions anyway, so I guess I\ndon't really see the downside.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Dec 2022 15:49:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The main objection I can see to this approach is that we only support\n>> one context value per call, so you could not easily combine this\n>> functionality with existing use-cases for the context field.\n\n> I kind of wonder why we don't just add another member to FmgrInfo.\n> It's 48 bytes right now and this would increase the size to 56 bytes,\n\nThis'd be FunctionCallInfoData not FmgrInfo.\n\nI'm not terribly concerned about the size of FunctionCallInfoData,\nbut I am concerned about the number of cycles spent to initialize it,\nbecause we do that pretty durn often. So I don't really want to add\nfields to it without compelling use-cases, and I don't see one here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 02 Dec 2022 16:19:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Great. Let's hope we can get this settled early next week and then we\n> can get to work on the next tranche of functions, those that will let\n> the SQL/JSON work restart.\n\nOK, here's a draft proposal. I should start out by acknowledging that\nthis steals a great deal from Nikita's original patch as well as yours,\nthough I editorialized heavily.\n\n0001 is the core infrastructure and documentation for the feature.\n(I didn't bother breaking it down further than that.)\n\n0002 fixes boolin and int4in. 
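To give a rough, self-contained flavor of the pattern involved — not the actual
patch code; names and details are invented — a soft-failing parse routine plus
a COPY-style caller that warns and skips bad rows could look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical soft-error plumbing (names invented for this sketch). */
typedef struct ErrorSave
{
	bool	failed;
	char	msg[64];
} ErrorSave;

/* Parse a boolean literal, reporting failure via the save struct. */
static bool
soft_parse_bool(const char *in, bool *out, ErrorSave *es)
{
	es->failed = false;
	if (strcmp(in, "t") == 0 || strcmp(in, "true") == 0)
	{
		*out = true;
		return true;
	}
	if (strcmp(in, "f") == 0 || strcmp(in, "false") == 0)
	{
		*out = false;
		return true;
	}
	es->failed = true;
	snprintf(es->msg, sizeof(es->msg), "bad boolean: \"%s\"", in);
	return false;
}

/* COPY-like loop: keep good rows, warn-and-skip bad ones. */
static int
load_rows(const char **rows, int nrows, bool *dest, int *nerrors)
{
	int			ngood = 0;
	ErrorSave	es;

	*nerrors = 0;
	for (int i = 0; i < nrows; i++)
	{
		bool		val;

		if (soft_parse_bool(rows[i], &val, &es))
			dest[ngood++] = val;
		else
		{
			fprintf(stderr, "WARNING: %s\n", es.msg);	/* skip the row */
			(*nerrors)++;
		}
	}
	return ngood;
}
```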
That is the work that we're going to\nhave to replicate in an awful lot of places, and I am pleased by how\nshort-and-sweet it is. Of course, stuff like the datetime functions\nmight be more complex to adapt.\n\nThen 0003 is a quick-hack version of COPY that is able to exercise\nall this. I did not bother with the per-column flags as you had\nthem, because I'm not sure if they're worth the trouble compared\nto a simple boolean; in any case we can add that refinement later.\nWhat I did add was a WARN_ON_ERROR option that exercises the ability\nto extract the error message after a soft error. I'm not proposing\nthat as a shippable feature, it's just something for testing.\n\nI think there are just a couple of loose ends here:\n\n1. Bikeshedding on my name choices is welcome. I know Robert is\ndissatisfied with \"ereturn\", but I'm content with that so I didn't\nchange it here.\n\n2. Everybody has struggled with just where to put the declaration\nof the error context structure. The most natural home for it\nprobably would be elog.h, but that's out because it cannot depend\non nodes.h, and the struct has to be a Node type to conform to\nthe fmgr safety guidelines. What I've done here is to drop it\nin nodes.h, as we've done with a couple of other hard-to-classify\nnode types; but I can't say I'm satisfied with that.\n\nOther plausible answers seem to be:\n\n* Drop it in fmgr.h. The only real problem is that historically\nwe've not wanted fmgr.h to depend on nodes.h either. But I'm not\nsure how strong the argument for that really is/was. If we did\ndo it like that we could clean up a few kluges, both in this patch\nand pre-existing (fmNodePtr at least could go away).\n\n* Invent a whole new header just for this struct. But then we're\nback to the question of what to call it. 
Maybe something along the\nlines of utils/elog_extras.h ?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 03 Dec 2022 16:46:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": ">\n> I think there are just a couple of loose ends here:\n>\n> 1. Bikeshedding on my name choices is welcome. I know Robert is\n> dissatisfied with \"ereturn\", but I'm content with that so I didn't\n> change it here.\n>\n\n1. details_please => include_error_data\n\nas this hints the reader directly to the struct to be filled out\n\n2. ereturn_* => errfeedback / error_feedback / efeedback\n\nIt is returned, but it's not taking control and the caller could ignore it.\nI arrived at this after checking https://www.thesaurus.com/browse/report and\nhttps://www.thesaurus.com/browse/hint.\n\n\n> 2. Everybody has struggled with just where to put the declaration\n> of the error context structure. The most natural home for it\n> probably would be elog.h, but that's out because it cannot depend\n> on nodes.h, and the struct has to be a Node type to conform to\n> the fmgr safety guidelines. What I've done here is to drop it\n> in nodes.h, as we've done with a couple of other hard-to-classify\n> node types; but I can't say I'm satisfied with that.\n>\n> Other plausible answers seem to be:\n>\n> * Drop it in fmgr.h. The only real problem is that historically\n> we've not wanted fmgr.h to depend on nodes.h either. But I'm not\n> sure how strong the argument for that really is/was. If we did\n> do it like that we could clean up a few kluges, both in this patch\n> and pre-existing (fmNodePtr at least could go away).\n>\n> * Invent a whole new header just for this struct. But then we're\n> back to the question of what to call it. 
Maybe something along the\nlines of utils/elog_extras.h ?\n>\n\nWould moving ErrorReturnContext and the ErrorData struct to their own\nutil/errordata.h allow us to avoid the void pointer for ercontext? If so,\nthat'd be a win because typed pointers give the reader some idea of what is\nexpected there, as well as aiding doxygen-like tools.\n\nOverall this looks like a good foundation.\n\nMy own effort was getting bogged down in the number of changes I needed to\nmake in code paths that would never want a failover case for their\ntypecasts, so I'm going to refactor my work on top of this and see where I\nget stuck.", "msg_date": "Sat, 3 Dec 2022 22:56:49 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> My own effort was getting bogged down in the number of changes I needed to\n> make in code paths that would never want a failover case for their\n> typecasts, so I'm going to refactor my work on top of this and see where I\n> get stuck.\n\n+1, that would be a good way to see if I missed anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 03 Dec 2022 23:17:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-03 Sa 16:46, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Great. Let's hope we can get this settled early next week and then we\n>> can get to work on the next tranche of functions, those that will let\n>> the SQL/JSON work restart.\n> OK, here's a draft proposal. 
I should start out by acknowledging that\n> this steals a great deal from Nikita's original patch as well as yours,\n> though I editorialized heavily.\n>\n> 0001 is the core infrastructure and documentation for the feature.\n> (I didn't bother breaking it down further than that.)\n>\n> 0002 fixes boolin and int4in. That is the work that we're going to\n> have to replicate in an awful lot of places, and I am pleased by how\n> short-and-sweet it is. Of course, stuff like the datetime functions\n> might be more complex to adapt.\n>\n> Then 0003 is a quick-hack version of COPY that is able to exercise\n> all this. I did not bother with the per-column flags as you had\n> them, because I'm not sure if they're worth the trouble compared\n> to a simple boolean; in any case we can add that refinement later.\n> What I did add was a WARN_ON_ERROR option that exercises the ability\n> to extract the error message after a soft error. I'm not proposing\n> that as a shippable feature, it's just something for testing.\n\n\nOverall I think this is pretty good, and I hope we can settle on it quickly.\n\n\n>\n> I think there are just a couple of loose ends here:\n>\n> 1. Bikeshedding on my name choices is welcome. I know Robert is\n> dissatisfied with \"ereturn\", but I'm content with that so I didn't\n> change it here.\n\n\nI haven't got anything better than ereturn.\n\ndetails_please seems more informal than our usual style. details_wanted\nmaybe?\n\n\n>\n> 2. Everybody has struggled with just where to put the declaration\n> of the error context structure. The most natural home for it\n> probably would be elog.h, but that's out because it cannot depend\n> on nodes.h, and the struct has to be a Node type to conform to\n> the fmgr safety guidelines. What I've done here is to drop it\n> in nodes.h, as we've done with a couple of other hard-to-classify\n> node types; but I can't say I'm satisfied with that.\n>\n> Other plausible answers seem to be:\n>\n> * Drop it in fmgr.h. 
The only real problem is that historically\n> we've not wanted fmgr.h to depend on nodes.h either. But I'm not\n> sure how strong the argument for that really is/was. If we did\n> do it like that we could clean up a few kluges, both in this patch\n> and pre-existing (fmNodePtr at least could go away).\n>\n> * Invent a whole new header just for this struct. But then we're\n> back to the question of what to call it. Maybe something along the\n> lines of utils/elog_extras.h ?\n>\n> \t\t\t\n\n\nMaybe a new header misc_nodes.h?\n\nSoon after we get this done I think we'll find we need to extend this to\nnon-input functions. But that can wait a short while.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 07:53:15 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-03 Sa 16:46, Tom Lane wrote:\n>> 1. Bikeshedding on my name choices is welcome. I know Robert is\n>> dissatisfied with \"ereturn\", but I'm content with that so I didn't\n>> change it here.\n\n> details_please seems more informal than our usual style. details_wanted\n> maybe?\n\nYeah, Corey didn't like that either. \"details_wanted\" works for me.\n\n> Soon after we get this done I think we'll find we need to extend this to\n> non-input functions. 
But that can wait a short while.\n\nI'm curious to know exactly which other use-cases you foresee.\nIt wouldn't be a bad idea to write some draft code to verify\nthat this mechanism will work conveniently for them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 04 Dec 2022 10:25:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-04 Su 10:25, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-12-03 Sa 16:46, Tom Lane wrote:\n>>> 1. Bikeshedding on my name choices is welcome. I know Robert is\n>>> dissatisfied with \"ereturn\", but I'm content with that so I didn't\n>>> change it here.\n>> details_please seems more informal than our usual style. details_wanted\n>> maybe?\n> Yeah, Corey didn't like that either. \"details_wanted\" works for me.\n>\n>> Soon after we get this done I think we'll find we need to extend this to\n>> non-input functions. But that can wait a short while.\n> I'm curious to know exactly which other use-cases you foresee.\n> It wouldn't be a bad idea to write some draft code to verify\n> that this mechanism will work conveniently for them.\n\n\nThe SQL/JSON patches at [1] included fixes for some numeric and datetime\nconversion functions as well as various input functions, so that's a\nfairly immediate need. 
More generally, I can see uses for error free\ncasts, something like, say CAST(foo AS bar ON ERROR blurfl)\n\n\ncheers\n\n\nandrew\n\n\n[1]\nhttps://www.postgresql.org/message-id/f54ebd2b-0e67-d1c6-4ff7-5d732492d1a0%40postgrespro.ru\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 11:21:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 12/4/22 17:21, Andrew Dunstan wrote:\n> \n> More generally, I can see uses for error free\n> casts, something like, say CAST(foo AS bar ON ERROR blurfl)\n\nWhat I am proposing for inclusion in the standard is basically the same \nas what JSON does:\n\n<cast specification> ::=\nCAST <left paren>\n <cast operand> AS <cast target>\n [ FORMAT <cast template> ]\n [ <cast error behavior> ON ERROR ]\n <right paren>\n\n<cast error behavior> ::=\n ERROR\n | NULL\n | DEFAULT <value expression>\n\nOnce/If I get that in, I will be pushing to get that syntax in postgres \nas well.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Sun, 4 Dec 2022 18:01:33 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Sun, Dec 04, 2022 at 06:01:33PM +0100, Vik Fearing wrote:\n> Once/If I get that in, I will be pushing to get that syntax in postgres as\n> well.\n\nIf I may ask, how long would it take to know if this grammar would be\nintegrated in the standard or not?\n--\nMichael", "msg_date": "Mon, 5 Dec 2022 10:17:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> 2. 
ereturn_* => errfeedback / error_feedback / feedback\n\nOh, I like that, especially errfeedback.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 10:47:49 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>> 2. ereturn_* => errfeedback / error_feedback / feedback\n\n> Oh, I like that, especially errfeedback.\n\nefeedback? But TBH I do not think any of these are better than ereturn.\n\nWhether or not you agree with my position that it'd be best if the new\nmacro name is the same length as \"ereport\", I hope we can all agree\nthat it had better be short. ereport call nests already tend to contain\nquite long lines. We don't need to add another couple tab-stops worth\nof indentation there.\n\nAs for it being the same length: if you take a close look at my 0002\npatch, you will realize that replacing ereport with a different-length\nname will double or triple the number of lines that need to be touched\nin many input functions. Yeah, we could sweep that under the rug to\nsome extent by submitting non-pgindent'd patches and running a separate\npgindent commit later, but that is not without pain. I don't want to\ngo there for the sake of a name that isn't really compellingly The\nRight Word, and \"feedback\" just isn't very compelling IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 11:09:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> >> 2. 
ereturn_* => errfeedback / error_feedback / feedback\n>\n> > Oh, I like that, especially errfeedback.\n>\n> efeedback? But TBH I do not think any of these are better than ereturn.\n\nI do. Having a macro name that is \"return\" plus one character is going\nto make people think that it returns. I predict that if you insist on\nusing that name people are still going to be making mistakes based on\nthat confusion 10 years from now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 11:20:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-05 Mo 11:20, Robert Haas wrote:\n> On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>>>> 2. ereturn_* => errfeedback / error_feedback / feedback\n>>> Oh, I like that, especially errfeedback.\n>> efeedback? But TBH I do not think any of these are better than ereturn.\n> I do. Having a macro name that is \"return\" plus one character is going\n> to make people think that it returns. I predict that if you insist on\n> using that name people are still going to be making mistakes based on\n> that confusion 10 years from now.\n>\n\nOK, I take both this point and Tom's about trying to keep it the same\nlength. So we need something that's  7 letters, doesn't say 'return' and\npreferably begins with 'e'. 
I modestly suggest 'eseterr', or if we like\nthe 'feedback' idea 'efeedbk'.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 5 Dec 2022 11:36:19 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 12/5/22 11:36, Andrew Dunstan wrote:\n> \n> On 2022-12-05 Mo 11:20, Robert Haas wrote:\n>> On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Robert Haas <robertmhaas@gmail.com> writes:\n>>>> On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n>>>>> 2. ereturn_* => errfeedback / error_feedback / feedback\n>>>> Oh, I like that, especially errfeedback.\n>>> efeedback? But TBH I do not think any of these are better than ereturn.\n>> I do. Having a macro name that is \"return\" plus one character is going\n>> to make people think that it returns. I predict that if you insist on\n>> using that name people are still going to be making mistakes based on\n>> that confusion 10 years from now.\n>>\n> \n> OK, I take both this point and Tom's about trying to keep it the same\n> length. So we need something that's  7 letters, doesn't say 'return' and\n> preferably begins with 'e'. I modestly suggest 'eseterr', or if we like\n> the 'feedback' idea 'efeedbk'.\n\nMaybe eretort?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 5 Dec 2022 11:53:17 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> efeedback? But TBH I do not think any of these are better than ereturn.\n\n> I do. 
Having a macro name that is \"return\" plus one character is going\n> to make people think that it returns.\n\nBut it does return, or at least you need to code on the assumption\nthat it will. (The cases where it doesn't aren't much different\nfrom any situation where a called subroutine unexpectedly throws\nan error. Callers typically don't have to consider that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 12:09:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 12/5/22 11:36, Andrew Dunstan wrote:\n>> OK, I take both this point and Tom's about trying to keep it the same\n>> length. So we need something that's 7 letters, doesn't say 'return' and\n>> preferably begins with 'e'. I modestly suggest 'eseterr', or if we like\n>> the 'feedback' idea 'efeedbk'.\n\n> Maybe eretort?\n\nNah, it's so close to ereport that it looks like a typo. eseterr isn't\nawful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 12:14:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 12:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But it does return, or at least you need to code on the assumption\n> that it will. (The cases where it doesn't aren't much different\n> from any situation where a called subroutine unexpectedly throws\n> an error. Callers typically don't have to consider that.)\n\nAre you just trolling me here?\n\nAIUI, the macro never returns in the sense of using the return\nstatement, unlike PG_RETURN_WHATEVER(), which do. It possibly\ntransfers control by throwing an error. But that is also true of just\nabout everything you do in PostgreSQL code, because errors can get\nthrown from almost anywhere. 
So clearly the possibility of a non-local\ntransfer of control is not the issue here. The issue is the\npossibility that there will be NO transfer of control. That is, you\nare compelled to write ereturn() and then afterwards you still need a\nreturn statement.\n\nI do not understand how it is possible to sensibly argue that someone\nwon't see a macro called ereturn() and perhaps come to the false\nconclusion that it will always return.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 12:27:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I wrote:\n> Nah, it's so close to ereport that it looks like a typo. eseterr isn't\n> awful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n\n... \"errsave\", maybe?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 12:27:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Nah, it's so close to ereport that it looks like a typo. eseterr isn't\n> > awful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n>\n> ... 
\"errsave\", maybe?\n\neseterr or errsave seem totally fine to me, FWIW.\n\nI would probably choose a more verbose name if I were doing it, but I\ndo get the point that keeping line lengths reasonable is important,\nand if someone were to accuse me of excessive prolixity, I would be\nunable to mount much of a defense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 12:35:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 12/5/22 12:35, Robert Haas wrote:\n> On Mon, Dec 5, 2022 at 12:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wrote:\n>> > Nah, it's so close to ereport that it looks like a typo. eseterr isn't\n>> > awful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n>>\n>> ... \"errsave\", maybe?\n> \n> eseterr or errsave seem totally fine to me, FWIW.\n\n+1\n\n> I would probably choose a more verbose name if I were doing it, but I\n> do get the point that keeping line lengths reasonable is important,\n> and if someone were to accuse me of excessive prolixity, I would be\n> unable to mount much of a defense.\n\nprolixity -- nice word! I won't comment on its applicability to you in \nparticular ;-P\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Mon, 5 Dec 2022 12:38:45 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-Dec-05, Tom Lane wrote:\n\n> I wrote:\n> > Nah, it's so close to ereport that it looks like a typo. eseterr isn't\n> > awful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n> \n> ... 
\"errsave\", maybe?\n\nIMO eseterr is quite awful while errsave is not, so here goes my vote\nfor the latter.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://archives.postgresql.org/message-id/482D1632.8010507@sigaev.ru\n\n\n", "msg_date": "Mon, 5 Dec 2022 18:42:31 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> AIUI, the macro never returns in the sense of using the return\n> statement, unlike PG_RETURN_WHATEVER(), which do.\n\nOh! Now I see what you don't like about it. I thought you\nmeant \"return to the call site\", not \"return to the call site's\ncaller\". Agreed that that could be confusing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 12:44:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 12:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > AIUI, the macro never returns in the sense of using the return\n> > statement, unlike PG_RETURN_WHATEVER(), which do.\n>\n> Oh! Now I see what you don't like about it. I thought you\n> meant \"return to the call site\", not \"return to the call site's\n> caller\". Agreed that that could be confusing.\n\nOK, good. I couldn't figure out what in the world we were arguing\nabout... 
apparently I wasn't being as clear as I thought I was.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 12:50:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-05 Mo 12:42, Alvaro Herrera wrote:\n> On 2022-Dec-05, Tom Lane wrote:\n>\n>> I wrote:\n>>> Nah, it's so close to ereport that it looks like a typo. eseterr isn't\n>>> awful, perhaps. Or maybe errXXXX, but I've not thought of suitable XXXX.\n>> ... \"errsave\", maybe?\n> IMO eseterr is quite awful while errsave is not, so here goes my vote\n> for the latter.\n\n\nWait a minute!  Oh, no, sorry, as you were, 'errsave' is fine.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 5 Dec 2022 12:51:49 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Wait a minute! Oh, no, sorry, as you were, 'errsave' is fine.\n\nSeems like everybody's okay with errsave. I'll make a v2 in a\nlittle bit. I'd like to try updating array_in and/or record_in\njust to verify that indirection cases work okay, before we consider\nthe design to be set.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 13:00:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 11:36 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-05 Mo 11:20, Robert Haas wrote:\n> > On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n> >>>> 2. 
ereturn_* => errfeedback / error_feedback / feedback\n> >>> Oh, I like that, especially errfeedback.\n> >> efeedback? But TBH I do not think any of these are better than ereturn.\n> > I do. Having a macro name that is \"return\" plus one character is going\n> > to make people think that it returns. I predict that if you insist on\n> > using that name people are still going to be making mistakes based on\n> > that confusion 10 years from now.\n> >\n>\n> OK, I take both this point and Tom's about trying to keep it the same\n> length. So we need something that's 7 letters, doesn't say 'return' and\n> preferably begins with 'e'. I modestly suggest 'eseterr', or if we like\n> the 'feedback' idea 'efeedbk'.\n>\n>\n>\nConsulting https://www.thesaurus.com/browse/feedback again:\nereply clocks in at 7 characters.", "msg_date": "Mon, 5 Dec 2022 14:22:27 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 5, 2022 at 1:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Wait a minute! Oh, no, sorry, as you were, 'errsave' is fine.\n>\n> Seems like everybody's okay with errsave. I'll make a v2 in a\n> little bit. I'd like to try updating array_in and/or record_in\n> just to verify that indirection cases work okay, before we consider\n> the design to be set.\n>\n\n+1 to errsave.", "msg_date": "Mon, 5 Dec 2022 14:23:38 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-05 Mo 14:22, Corey Huinker wrote:\n>\n> On Mon, Dec 5, 2022 at 11:36 AM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n>\n>\n> On 2022-12-05 Mo 11:20, Robert Haas wrote:\n> > On Mon, Dec 5, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Sat, Dec 3, 2022 at 10:57 PM Corey Huinker\n> <corey.huinker@gmail.com> wrote:\n> >>>> 2. ereturn_* => errfeedback / error_feedback / feedback\n> >>> Oh, I like that, especially errfeedback.\n> >> efeedback?  
But TBH I do not think any of these are better than\n> ereturn.\n> > I do. Having a macro name that is \"return\" plus one character is\n> going\n> > to make people think that it returns. I predict that if you\n> insist on\n> > using that name people are still going to be making mistakes\n> based on\n> > that confusion 10 years from now.\n> >\n>\n> OK, I take both this point and Tom's about trying to keep it the same\n> length. So we need something that's  7 letters, doesn't say\n> 'return' and\n> preferably begins with 'e'. I modestly suggest 'eseterr', or if we\n> like\n> the 'feedback' idea 'efeedbk'.\n>\n>\n>\n> Consulting https://www.thesaurus.com/browse/feedback again:\n> ereply clocks in at 7 characters.\n\n\nIt does?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 5 Dec 2022 14:55:19 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I wrote:\n> Seems like everybody's okay with errsave. I'll make a v2 in a\n> little bit. I'd like to try updating array_in and/or record_in\n> just to verify that indirection cases work okay, before we consider\n> the design to be set.\n\nv2 as promised, incorporating the discussed renamings as well as some\nfollow-on ones (ErrorReturnContext -> ErrorSaveContext, notably).\n\nI also tried moving the struct into a new header file, miscnodes.h\nafter Andrew's suggestion upthread. That seems at least marginally\ncleaner than putting it in nodes.h, although I'm not wedded to this\nchoice.\n\nI was really glad that I took the trouble to update some less-trivial\ninput functions, because I learned two things:\n\n* It's better if InputFunctionCallSafe will tolerate the case of not\nbeing passed an ErrorSaveContext. 
In the COPY hack it felt worthwhile\nto have completely separate code paths calling InputFunctionCallSafe\nor InputFunctionCall, but that's less appetizing elsewhere.\n\n* There's a crying need for a macro that wraps up errsave() with an\nimmediate return. Hence, ereturn() is reborn from the ashes. I hope\nRobert won't object to that name if it *does* do a return.\n\nI feel pretty good about this version; it seems committable if there\nare not objections. Not sure if we should commit 0003 like this,\nthough.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 05 Dec 2022 16:40:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-05 16:40:06 -0500, Tom Lane wrote:\n> +/*\n> + * errsave_start --- begin a \"safe\" error-reporting cycle\n> + *\n> + * If \"context\" isn't an ErrorSaveContext node, this behaves as\n> + * errstart(ERROR, domain), and the errsave() macro ends up acting\n> + * exactly like ereport(ERROR, ...).\n> + *\n> + * If \"context\" is an ErrorSaveContext node, but the node creator only wants\n> + * notification of the fact of a safe error without any details, just set\n> + * the error_occurred flag in the ErrorSaveContext node and return false,\n> + * which will cause us to skip the remaining error processing steps.\n> + *\n> + * Otherwise, create and initialize error stack entry and return true.\n> + * Subsequently, errmsg() and perhaps other routines will be called to further\n> + * populate the stack entry. Finally, errsave_finish() will be called to\n> + * tidy up.\n> + */\n> +bool\n> +errsave_start(void *context, const char *domain)\n\nWhy is context a void *?\n\n\n> +{\n> +\tErrorSaveContext *escontext;\n> +\tErrorData *edata;\n> +\n> +\t/*\n> +\t * Do we have a context for safe error reporting? 
If not, just punt to\n> +\t * errstart().\n> +\t */\n> +\tif (context == NULL || !IsA(context, ErrorSaveContext))\n> +\t\treturn errstart(ERROR, domain);\n\nI don't think we should \"accept\" !IsA(context, ErrorSaveContext) - that\nseems likely to hide things like use-after-free.\n\n\n> +\tif (++errordata_stack_depth >= ERRORDATA_STACK_SIZE)\n> +\t{\n> +\t\t/*\n> +\t\t * Wups, stack not big enough. We treat this as a PANIC condition\n> +\t\t * because it suggests an infinite loop of errors during error\n> +\t\t * recovery.\n> +\t\t */\n> +\t\terrordata_stack_depth = -1; /* make room on stack */\n> +\t\tereport(PANIC, (errmsg_internal(\"ERRORDATA_STACK_SIZE exceeded\")));\n> +\t}\n\nThis is the fourth copy of this code...\n\n\n\n> +/*\n> + * errsave_finish --- end a \"safe\" error-reporting cycle\n> + *\n> + * If errsave_start() decided this was a regular error, behave as\n> + * errfinish(). Otherwise, package up the error details and save\n> + * them in the ErrorSaveContext node.\n> + */\n> +void\n> +errsave_finish(void *context, const char *filename, int lineno,\n> +\t\t\t const char *funcname)\n> +{\n> +\tErrorSaveContext *escontext = (ErrorSaveContext *) context;\n> +\tErrorData *edata = &errordata[errordata_stack_depth];\n> +\n> +\t/* verify stack depth before accessing *edata */\n> +\tCHECK_STACK_DEPTH();\n> +\n> +\t/*\n> +\t * If errsave_start punted to errstart, then elevel will be ERROR or\n> +\t * perhaps even PANIC. 
Punt likewise to errfinish.\n> +\t */\n> +\tif (edata->elevel >= ERROR)\n> +\t\terrfinish(filename, lineno, funcname);\n\nI'd put a pg_unreachable() or such after the errfinish() call.\n\n\n> +\t/*\n> +\t * Else, we should package up the stack entry contents and deliver them to\n> +\t * the caller.\n> +\t */\n> +\trecursion_depth++;\n> +\n> +\t/* Save the last few bits of error state into the stack entry */\n> +\tif (filename)\n> +\t{\n> +\t\tconst char *slash;\n> +\n> +\t\t/* keep only base name, useful especially for vpath builds */\n> +\t\tslash = strrchr(filename, '/');\n> +\t\tif (slash)\n> +\t\t\tfilename = slash + 1;\n> +\t\t/* Some Windows compilers use backslashes in __FILE__ strings */\n> +\t\tslash = strrchr(filename, '\\\\');\n> +\t\tif (slash)\n> +\t\t\tfilename = slash + 1;\n> +\t}\n> +\n> +\tedata->filename = filename;\n> +\tedata->lineno = lineno;\n> +\tedata->funcname = funcname;\n> +\tedata->elevel = ERROR;\t\t/* hide the LOG value used above */\n> +\n> +\t/*\n> +\t * We skip calling backtrace and context functions, which are more likely\n> +\t * to cause trouble than provide useful context; they might act on the\n> +\t * assumption that a transaction abort is about to occur.\n> +\t */\n\nThis seems like a fair bit of duplicated code.\n\n\n> + * This is the same as InputFunctionCall, but the caller may also pass a\n> + * previously-initialized ErrorSaveContext node. (We declare that as\n> + * \"void *\" to avoid including miscnodes.h in fmgr.h.)\n\nIt seems way cleaner to forward declare ErrorSaveContext instead of\nusing void *.\n\n\n> If escontext points\n> + * to an ErrorSaveContext, any \"safe\" errors detected by the input function\n> + * will be reported by filling the escontext struct. The caller must\n> + * check escontext->error_occurred before assuming that the function result\n> + * is meaningful.\n\nI wonder if we shouldn't instead make InputFunctionCallSafe() return a\nboolean and return the Datum via a pointer. 
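To make the two calling conventions under discussion concrete, here is a toy contrast between a value-returning call that relies on the caller checking a flag afterwards, and a bool-returning call that hands the result back through a pointer. Everything here is invented for illustration and is not the real fmgr API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy soft-error context (illustration only). */
typedef struct
{
    bool error_occurred;
} ToyEsc;

/*
 * Style A: the input routine returns a (possibly meaningless) value and
 * flags failure in the context; forgetting to test the flag compiles fine.
 */
int
toy_parse_bool(const char *str, ToyEsc *esc)
{
    if (str[0] == 't' && str[1] == '\0')
        return 1;
    if (str[0] == 'f' && str[1] == '\0')
        return 0;
    esc->error_occurred = true;
    return 0;                   /* dummy result, not meaningful */
}

/*
 * Style B: success is the return value and the parsed value travels
 * through an out-parameter, so a caller that ignores failure has to
 * discard the function's return value explicitly.
 */
bool
toy_parse_bool_safe(const char *str, int *result)
{
    ToyEsc esc = {false};
    int v = toy_parse_bool(str, &esc);

    if (esc.error_occurred)
        return false;
    *result = v;
    return true;
}
```

In style B the out-parameter is only written on success, so a failed parse cannot silently masquerade as a valid result.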
As callers are otherwise\ngoing to need to do SAFE_ERROR_OCCURRED(escontext) themselves, I think\nit should also lead to more concise (and slightly more efficient) code.\n\n\n> +Datum\n> +InputFunctionCallSafe(FmgrInfo *flinfo, char *str,\n> +\t\t\t\t\t Oid typioparam, int32 typmod,\n> +\t\t\t\t\t void *escontext)\n\nIs there a reason not to provide this infrastructure for\nReceiveFunctionCall() as well?\n\n\nNot that I have a suggestion for a better name, but I don't particularly\nlike \"Safe\" denoting non-erroring input function calls. There's too many\ninterpretations of safe - e.g. safe against privilege escalation issues\nor such.\n\n\n\n> @@ -252,10 +254,13 @@ record_in(PG_FUNCTION_ARGS)\n> \t\t\tcolumn_info->column_type = column_type;\n> \t\t}\n> \n> -\t\tvalues[i] = InputFunctionCall(&column_info->proc,\n> -\t\t\t\t\t\t\t\t\t column_data,\n> -\t\t\t\t\t\t\t\t\t column_info->typioparam,\n> -\t\t\t\t\t\t\t\t\t att->atttypmod);\n> +\t\tvalues[i] = InputFunctionCallSafe(&column_info->proc,\n> +\t\t\t\t\t\t\t\t\t\t column_data,\n> +\t\t\t\t\t\t\t\t\t\t column_info->typioparam,\n> +\t\t\t\t\t\t\t\t\t\t att->atttypmod,\n> +\t\t\t\t\t\t\t\t\t\t escontext);\n> +\t\tif (SAFE_ERROR_OCCURRED(escontext))\n> +\t\t\tPG_RETURN_NULL();\n\nIt doesn't *quite* seem right to set ->isnull in case of an error. Not\nthat it has an obvious harm.\n\nWonder if it's perhaps worth to add VALGRIND_MAKE_MEM_UNDEFINED() calls\nto InputFunctionCallSafe() to more easily detect cases where a caller\nignores that an error occured.\n\n\n> +\t\t\tif (safe_mode)\n> +\t\t\t{\n> +\t\t\t\tErrorSaveContext *es_context = cstate->es_context;\n> +\n> +\t\t\t\t/* Must reset the error_occurred flag each time */\n> +\t\t\t\tes_context->error_occurred = false;\n\nI'd put that into the if (es_context->error_occurred) path. 
Likely the\nwindow for store-forwarding issues is smaller than\nInputFunctionCallSafe(), but it's trivial to write it differently...\n\n\n> diff --git a/src/test/regress/sql/copy.sql b/src/test/regress/sql/copy.sql\n> index 285022e07c..ff77d27cfc 100644\n> --- a/src/test/regress/sql/copy.sql\n> +++ b/src/test/regress/sql/copy.sql\n> @@ -268,3 +268,23 @@ a\tc\tb\n> \n> SELECT * FROM header_copytest ORDER BY a;\n> drop table header_copytest;\n> +\n> +-- \"safe\" error handling\n> +create table on_error_copytest(i int, b bool, ai int[]);\n> +\n> +copy on_error_copytest from stdin with (null_on_error);\n> +1\ta\t{1,}\n> +err\t1\t{x}\n> +2\tf\t{3,4}\n> +bad\tx\t{,\n> +\\.\n> +\n> +copy on_error_copytest from stdin with (warn_on_error);\n> +3\t0\t[3:4]={3,4}\n> +4\tb\t[0:1000]={3,4}\n> +err\tt\t{}\n> +bad\tz\t{\"zed\"}\n> +\\.\n> +\n> +select * from on_error_copytest;\n> +drop table on_error_copytest;\n\nThink it'd be good to have a test for a composite type where one of the\ncolumns safely errors out and the other doesn't.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Dec 2022 15:47:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Why is context a void *?\n\nelog.h can't depend on nodes.h, at least not without some rather\nfundamental rethinking of our #include relationships. We could\npossibly use the same kind of hack that fmgr.h does:\n\ntypedef struct Node *fmNodePtr;\n\nbut I'm not sure that's much of an improvement. Note that it'd\n*not* be correct to declare it as anything more specific than Node*,\nsince the fmgr context pointer is Node* and we're not expecting\ncallers to do their own IsA checks to see what they were passed.\n\n> I don't think we should \"accept\" !IsA(context, ErrorSaveContext) - that\n> seems likely to hide things like use-after-free.\n\nNo, see above. 
Moving the IsA checks out to the callers would\nnot improve the garbage-pointer risk one bit, it would just\nadd code bloat.\n\n> I'd put a pg_unreachable() or such after the errfinish() call.\n\n[ shrug... ] Kinda pointless IMO, but OK.\n\n> This seems like a fair bit of duplicated code.\n\nI don't think refactoring to remove the duplication would improve it.\n\n>> + * This is the same as InputFunctionCall, but the caller may also pass a\n>> + * previously-initialized ErrorSaveContext node. (We declare that as\n>> + * \"void *\" to avoid including miscnodes.h in fmgr.h.)\n\n> It seems way cleaner to forward declare ErrorSaveContext instead of\n> using void *.\n\nAgain, it cannot be any more specific than Node*. But you're right\nthat we could use fmNodePtr here, and that would be at least a little\nnicer.\n\n> I wonder if we shouldn't instead make InputFunctionCallSafe() return a\n> boolean and return the Datum via a pointer. As callers are otherwise\n> going to need to do SAFE_ERROR_OCCURRED(escontext) themselves, I think\n> it should also lead to more concise (and slightly more efficient) code.\n\nHmm, maybe. It would be a bigger change from existing code, but\nI don't think very many call sites would be impacted. (But by\nthe same token, we'd not save much code this way.) Personally\nI put more value on keeping similar APIs between InputFunctionCall\nand InputFunctionCallSafe, but I won't argue hard if you're insistent.\n\n> Is there a reason not to provide this infrastructure for\n> ReceiveFunctionCall() as well?\n\nThere's a comment in 0003 about that: I doubt that it makes sense\nto have no-error semantics for binary input. That would require\nfar more trust in the receive functions' ability to detect garbage\ninput than I think they have in reality. 
Perhaps more to the\npoint, even if we ultimately do that I don't want to do it now.\nIncluding the receive functions in the first-pass conversion would\nroughly double the amount of work needed per datatype, and we are\nalready going to be hard put to it to finish what needs to be done\nfor v16.\n\n> Not that I have a suggestion for a better name, but I don't particularly\n> like \"Safe\" denoting non-erroring input function calls. There's too many\n> interpretations of safe - e.g. safe against privilege escalation issues\n> or such.\n\nYeah, I'm not that thrilled with it either --- but it's a reasonably\non-point modifier, and short.\n\n> It doesn't *quite* seem right to set ->isnull in case of an error. Not\n> that it has an obvious harm.\n\nDoesn't matter: if the caller pays attention to either the Datum\nvalue or the isnull flag, it's broken.\n\n> Wonder if it's perhaps worth to add VALGRIND_MAKE_MEM_UNDEFINED() calls\n> to InputFunctionCallSafe() to more easily detect cases where a caller\n> ignores that an error occured.\n\nI do not think there are going to be enough callers of\nInputFunctionCallSafe that we need such tactics to validate them.\n\n> I'd put that into the if (es_context->error_occurred) path. Likely the\n> window for store-forwarding issues is smaller than\n> InputFunctionCallSafe(), but it's trivial to write it differently...\n\nDoes not seem better to me, and your argument for it seems like the\nworst sort of premature micro-optimization.\n\n> Think it'd be good to have a test for a composite type where one of the\n> columns safely errors out and the other doesn't.\n\nI wasn't trying all that hard on the error tests, because I think\n0003 is just throwaway code at this point. If we want to seriously\ncheck the input functions' behavior then we need to factorize the\ntests so it can be done per-datatype, not in one central place in\nthe COPY tests. 
For the core types it could make sense to provide\nsome function in pg_regress.c that allows access to the non-exception\ncode path independently of COPY; but I'm not sure how contrib\ndatatypes could use that.\n\nIn any case, I'm unconvinced that testing each error exit both ways is\nlikely to be a profitable use of test cycles. The far more likely source\nof problems with this patch series is going to be that we miss converting\nsome ereport call that is reachable with bad input. No amount of\ntesting is going to prove that that didn't happen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 19:18:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-05 19:18:11 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Why is context a void *?\n>\n> elog.h can't depend on nodes.h, at least not without some rather\n> fundamental rethinking of our #include relationships. We could\n> possibly use the same kind of hack that fmgr.h does:\n>\n> typedef struct Node *fmNodePtr;\n>\n> but I'm not sure that's much of an improvement. Note that it'd\n> *not* be correct to declare it as anything more specific than Node*,\n> since the fmgr context pointer is Node* and we're not expecting\n> callers to do their own IsA checks to see what they were passed.\n\nAh - I hadn't actually grokked that that's the reason for the\nvoid*. Unless I missed a comment to that regard, entirely possible, it\nseems worth explaining that above errsave_start().\n\n\n> > This seems like a fair bit of duplicated code.\n>\n> I don't think refactoring to remove the duplication would improve it.\n\nWhy? I think a populate_edata() or such seems to make sense. 
And the\nrequired argument to skip ->backtrace and error_context_stack processing\nseem like things that'd be good to document anyway.\n\n\n> > I wonder if we shouldn't instead make InputFunctionCallSafe() return a\n> > boolean and return the Datum via a pointer. As callers are otherwise\n> > going to need to do SAFE_ERROR_OCCURRED(escontext) themselves, I think\n> > it should also lead to more concise (and slightly more efficient) code.\n>\n> Hmm, maybe. It would be a bigger change from existing code, but\n> I don't think very many call sites would be impacted. (But by\n> the same token, we'd not save much code this way.) Personally\n> I put more value on keeping similar APIs between InputFunctionCall\n> and InputFunctionCallSafe, but I won't argue hard if you're insistent.\n\nI think it's good to diverge from the existing code, because imo the\nbehaviour is quite different and omitting the SAFE_ERROR_OCCURRED()\ncheck will lead to brokenness.\n\n\n> > Is there a reason not to provide this infrastructure for\n> > ReceiveFunctionCall() as well?\n>\n> There's a comment in 0003 about that: I doubt that it makes sense\n> to have no-error semantics for binary input. That would require\n> far more trust in the receive functions' ability to detect garbage\n> input than I think they have in reality. 
Perhaps more to the\n> point, even if we ultimately do that I don't want to do it now.\n> Including the receive functions in the first-pass conversion would\n> roughly double the amount of work needed per datatype, and we are\n> already going to be hard put to it to finish what needs to be done\n> for v16.\n\nFair enough.\n\n\n> > Wonder if it's perhaps worth to add VALGRIND_MAKE_MEM_UNDEFINED() calls\n> > to InputFunctionCallSafe() to more easily detect cases where a caller\n> > ignores that an error occured.\n>\n> I do not think there are going to be enough callers of\n> InputFunctionCallSafe that we need such tactics to validate them.\n\nI predict that we'll have quite a few bugs due to converting some parts\nof the system, but not other parts. But we can add them later, so I'll\nnot insist on it.\n\n\n> > I'd put that into the if (es_context->error_occurred) path. Likely the\n> > window for store-forwarding issues is smaller than\n> > InputFunctionCallSafe(), but it's trivial to write it differently...\n>\n> Does not seem better to me, and your argument for it seems like the\n> worst sort of premature micro-optimization.\n\nShrug. The copy code is quite slow today, but not by a single source,\nbut by death by a thousand cuts.\n\n\n> > Think it'd be good to have a test for a composite type where one of the\n> > columns safely errors out and the other doesn't.\n>\n> I wasn't trying all that hard on the error tests, because I think\n> 0003 is just throwaway code at this point.\n\nI am mainly interested in having *something* test erroring out hard when\nusing the \"Safe\" mechanism, which afaict we don't have with the patches\nas they stand. You're right that it'd be better to do that without COPY\nin the way, but it doesn't seem all that crucial.\n\n\n> If we want to seriously check the input functions' behavior then we\n> need to factorize the tests so it can be done per-datatype, not in one\n> central place in the COPY tests. 
For the core types it could make\n> sense to provide some function in pg_regress.c that allows access to\n> the non-exception code path independently of COPY; but I'm not sure\n> how contrib datatypes could use that.\n\nIt might be worth adding a function for testing safe input functions\ninto core PG - it's not like we don't have other such functions.\n\nBut perhaps it's even worth having such a function properly exposed:\nIt's not at all rare to parse text data during ETL and quite often\nerroring out fatally is undesirable. As savepoints are undesirable\noverhead-wise, there's a lot of SQL out there that tries to do a\npre-check about whether some text could be cast to some other data\ntype. A function that'd try to cast input to a certain type without\nerroring out hard would be quite useful for that.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Dec 2022 16:56:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-05 19:18:11 -0500, Tom Lane wrote:\n>> but I'm not sure that's much of an improvement. Note that it'd\n>> *not* be correct to declare it as anything more specific than Node*,\n>> since the fmgr context pointer is Node* and we're not expecting\n>> callers to do their own IsA checks to see what they were passed.\n\n> Ah - I hadn't actually grokked that that's the reason for the\n> void*. Unless I missed a comment to that regard, entirely possible, it\n> seems worth explaining that above errsave_start().\n\nThere's a comment about that in elog.h IIRC, but no harm in saying\nit in elog.c as well.\n\nHaving said that, I am warming a little bit to making these pointers\nbe Node* or an alias spelling of that rather than void*.\n\n>> I don't think refactoring to remove the duplication would improve it.\n\n> Why? I think a populate_edata() or such seems to make sense. 
And the\n> required argument to skip ->backtrace and error_context_stack processing\n> seem like things that'd be good to document anyway.\n\nMeh. Well, I'll have a look, but it seems kind of orthogonal to the\nmain point of the patch.\n\n>> Hmm, maybe. It would be a bigger change from existing code, but\n>> I don't think very many call sites would be impacted. (But by\n>> the same token, we'd not save much code this way.) Personally\n>> I put more value on keeping similar APIs between InputFunctionCall\n>> and InputFunctionCallSafe, but I won't argue hard if you're insistent.\n\n> I think it's good to diverge from the existing code, because imo the\n> behaviour is quite different and omitting the SAFE_ERROR_OCCURRED()\n> check will lead to brokenness.\n\nTrue, but it only helps for the immediate caller of InputFunctionCallSafe,\nnot for call levels further out. Still, I'll give that a look.\n\n>> I wasn't trying all that hard on the error tests, because I think\n>> 0003 is just throwaway code at this point.\n\n> I am mainly interested in having *something* test erroring out hard when\n> using the \"Safe\" mechanism, which afaict we don't have with the patches\n> as they stand. You're right that it'd be better to do that without COPY\n> in the way, but it doesn't seem all that crucial.\n\nHmm, either I'm confused or you're stating that backwards --- aren't\nthe hard-error code paths already tested by our existing tests?\n\n> But perhaps it's even worth having such a function properly exposed:\n> It's not at all rare to parse text data during ETL and quite often\n> erroring out fatally is undesirable. As savepoints are undesirable\n> overhead-wise, there's a lot of SQL out there that tries to do a\n> pre-check about whether some text could be cast to some other data\n> type. 
A function that'd try to cast input to a certain type without\n> erroring out hard would be quite useful for that.\n\nCorey and Vik are already talking about a non-error CAST variant.\nMaybe we should leave this in abeyance until something shows up\nfor that? Otherwise we'll be making a nonstandard API for what\nwill probably ultimately be SQL-spec functionality. I don't mind\nthat as regression-test infrastructure, but I'm a bit less excited\nabout exposing it as a user feature.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 20:06:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-05 20:06:55 -0500, Tom Lane wrote:\n> >> I wasn't trying all that hard on the error tests, because I think\n> >> 0003 is just throwaway code at this point.\n> \n> > I am mainly interested in having *something* test erroring out hard when\n> > using the \"Safe\" mechanism, which afaict we don't have with the patches\n> > as they stand. You're right that it'd be better to do that without COPY\n> > in the way, but it doesn't seem all that crucial.\n> \n> Hmm, either I'm confused or you're stating that backwards --- aren't\n> the hard-error code paths already tested by our existing tests?\n\nWhat I'd like to test is a hard error, either due to an input function\nthat wasn't converted or because it's a type of error that can't be\nhandled \"softly\", but when using the \"safe\" interface.\n\n\n> > But perhaps it's even worth having such a function properly exposed:\n> > It's not at all rare to parse text data during ETL and quite often\n> > erroring out fatally is undesirable. As savepoints are undesirable\n> > overhead-wise, there's a lot of SQL out there that tries to do a\n> > pre-check about whether some text could be cast to some other data\n> > type. 
A function that'd try to cast input to a certain type without\n> > erroring out hard would be quite useful for that.\n> \n> Corey and Vik are already talking about a non-error CAST variant.\n> Maybe we should leave this in abeyance until something shows up\n> for that? Otherwise we'll be making a nonstandard API for what\n> will probably ultimately be SQL-spec functionality. I don't mind\n> that as regression-test infrastructure, but I'm a bit less excited\n> about exposing it as a user feature.\n\nYea, I'm fine with that. I was just thinking out loud on this aspect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Dec 2022 17:14:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-05 20:06:55 -0500, Tom Lane wrote:\n>> Hmm, either I'm confused or you're stating that backwards --- aren't\n>> the hard-error code paths already tested by our existing tests?\n\n> What I'd like to test is a hard error, either due to an input function\n> that wasn't converted or because it's a type of error that can't be\n> handled \"softly\", but when using the \"safe\" interface.\n\nOh, I see. That seems like kind of a problematic requirement,\nunless we leave some datatype around that's intentionally not\never going to be converted. For datatypes that we do convert,\nthere shouldn't be any easy way to get to a hard error.\n\nI don't really quite understand why you're worried about that\nthough. 
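The hard-versus-soft distinction at issue here can also be sketched in miniature: even when a context is supplied, conditions classified as internal still escalate, while ordinary bad input is reported softly. Again, every name below is invented; abort() merely stands in for a thrown error:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy soft-error context (illustration only). */
typedef struct
{
    bool error_occurred;
} MiniEsc;

/*
 * Toy parser for a single lowercase hex digit.  Bad user input goes
 * through the soft path when a context is available; a NULL input
 * pointer is treated as an internal error and is always raised hard,
 * context or no context, modeling the rule that only "safe" conditions
 * may take the soft path.
 */
int
toy_hexdigit_in(const char *str, MiniEsc *esc)
{
    if (str == NULL)
    {
        fprintf(stderr, "internal error: null input pointer\n");
        abort();                /* hard error even under the safe interface */
    }
    if (str[0] >= '0' && str[0] <= '9' && str[1] == '\0')
        return str[0] - '0';
    if (str[0] >= 'a' && str[0] <= 'f' && str[1] == '\0')
        return str[0] - 'a' + 10;
    if (esc != NULL)
    {
        esc->error_occurred = true; /* soft: report and return a dummy */
        return 0;
    }
    fprintf(stderr, "invalid hex digit\n");
    abort();                    /* no context: behaves like a plain throw */
}
```

This is why a hard error remains reachable through the safe interface: the soft path only covers the error classes the function author deliberately routed there.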
The hard-error code paths are well tested already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 20:19:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-05 20:19:26 -0500, Tom Lane wrote:\n> That seems like kind of a problematic requirement, unless we leave some\n> datatype around that's intentionally not ever going to be converted. For\n> datatypes that we do convert, there shouldn't be any easy way to get to a\n> hard error.\n\nI suspect there are going to be types we can't convert. But even if not - that\nactually makes a *stronger* case for ensuring the path is tested, because\ncertainly some out of core types aren't going to be converted.\n\n\nThis made me look at fmgr/README again:\n\n> +Considering datatype input functions as examples, typical \"safe\" error\n> +conditions include input syntax errors and out-of-range values. An input\n> +function typically detects such cases with simple if-tests and can easily\n> +change the following ereport call to errsave. Error conditions that\n> +should NOT be handled this way include out-of-memory, internal errors, and\n> +anything where there is any question about our ability to continue normal\n> +processing of the transaction. Those should still be thrown with ereport.\n\nI wonder if we should provide more guidance around what kind of catalogs\naccess are acceptable before avoiding throwing an error.\n\nThis in turn make me look at record_in() in 0002 - I think we might be leaking\na tupledesc refcount in case of errors. 
Yup:\n\nDROP TABLE IF EXISTS tbl_as_record, tbl_with_record;\n\nCREATE TABLE tbl_as_record(a int, b int);\nCREATE TABLE tbl_with_record(composite_col tbl_as_record, non_composite_col int);\n\nCOPY tbl_with_record FROM stdin WITH (warn_on_error);\nkdjkdf\t212\n\\.\n\nWARNING: 22P02: invalid input for column composite_col: malformed record literal: \"kdjkdf\"\nWARNING: 01000: TupleDesc reference leak: TupleDesc 0x7fb1c5fd0c58 (159584,-1) still referenced\n\n\n\n> I don't really quite understand why you're worried about that\n> though. The hard-error code paths are well tested already.\n\nAfaict they're not tested when going through InputFunctionCallSafe() / with an\nErrorSaveContext. To me that does seem worth testing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Dec 2022 18:23:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> This in turn make me look at record_in() in 0002 - I think we might be leaking\n> a tupledesc refcount in case of errors. Yup:\n\nDoh :-( ... I did that function a little too hastily, obviously.\nThanks for catching that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 05 Dec 2022 21:32:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-05 Mo 20:06, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>\n>> But perhaps it's even worth having such a function properly exposed:\n>> It's not at all rare to parse text data during ETL and quite often\n>> erroring out fatally is undesirable. As savepoints are undesirable\n>> overhead-wise, there's a lot of SQL out there that tries to do a\n>> pre-check about whether some text could be cast to some other data\n>> type. 
A function that'd try to cast input to a certain type without\n>> erroring out hard would be quite useful for that.\n> Corey and Vik are already talking about a non-error CAST variant.\n\n\n/metoo! :-)\n\n\n> Maybe we should leave this in abeyance until something shows up\n> for that? Otherwise we'll be making a nonstandard API for what\n> will probably ultimately be SQL-spec functionality. I don't mind\n> that as regression-test infrastructure, but I'm a bit less excited\n> about exposing it as a user feature.\n> \t\t\t\n\n\nI think a functional mechanism could be very useful. Who knows when the\nstandard might specify something in this area?\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 6 Dec 2022 06:46:10 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "[ continuing the naming quagmire... ]\n\nI wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Not that I have a suggestion for a better name, but I don't particularly\n>> like \"Safe\" denoting non-erroring input function calls. There's too many\n>> interpretations of safe - e.g. safe against privilege escalation issues\n>> or such.\n\n> Yeah, I'm not that thrilled with it either --- but it's a reasonably\n> on-point modifier, and short.\n\nIt occurs to me that another spelling could be NoError (or _noerror\nwhere not using camel case). There's some precedent for that already;\nand where we have it, it has the same implication of reporting rather\nthan throwing certain errors, without making a guarantee about all\nerrors. For instance lookup_rowtype_tupdesc_noerror won't prevent\nthrowing errors if catalog corruption is detected inside the catcaches.\n\nI'm not sure this is any *better* than Safe ... it's longer, less\nmellifluous, and still subject to misinterpretation. 
But it's\na possible alternative.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Dec 2022 09:42:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-06 Tu 09:42, Tom Lane wrote:\n> [ continuing the naming quagmire... ]\n>\n> I wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Not that I have a suggestion for a better name, but I don't particularly\n>>> like \"Safe\" denoting non-erroring input function calls. There's too many\n>>> interpretations of safe - e.g. safe against privilege escalation issues\n>>> or such.\n>> Yeah, I'm not that thrilled with it either --- but it's a reasonably\n>> on-point modifier, and short.\n> It occurs to me that another spelling could be NoError (or _noerror\n> where not using camel case). There's some precedent for that already;\n> and where we have it, it has the same implication of reporting rather\n> than throwing certain errors, without making a guarantee about all\n> errors. For instance lookup_rowtype_tupdesc_noerror won't prevent\n> throwing errors if catalog corruption is detected inside the catcaches.\n>\n> I'm not sure this is any *better* than Safe ... it's longer, less\n> mellifluous, and still subject to misinterpretation. 
But it's\n> a possible alternative.\n>\n> \t\t\t\n\n\nYeah, I don't think there's terribly much to choose between 'safe' and\n'noerror' in terms of meaning.\n\nI originally chose InputFunctionCallContext as a more neutral name in\ncase we wanted to be able to pass some other sort of node for the\ncontext in future.\n\nMaybe that was a little too forward looking.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 6 Dec 2022 10:43:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-06 Tu 09:42, Tom Lane wrote:\n>> I'm not sure this is any *better* than Safe ... it's longer, less\n>> mellifluous, and still subject to misinterpretation. But it's\n>> a possible alternative.\n\n> Yeah, I don't think there's terribly much to choose between 'safe' and\n> 'noerror' in terms of meaning.\n\nYeah, I just wanted to throw it out there and see if anyone thought\nit was a better idea.\n\n> I originally chose InputFunctionCallContext as a more neutral name in\n> case we wanted to be able to pass some other sort of node for the\n> context in future.\n> Maybe that was a little too forward looking.\n\nI didn't like that because it seemed to convey nothing at all about\nthe expected behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Dec 2022 11:07:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Tue, Dec 6, 2022 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I originally chose InputFunctionCallContext as a more neutral name in\n> > case we wanted to be able to pass some other sort of node for the\n> > context in future.\n> > Maybe that was a little too forward looking.\n>\n> I didn't like that because it seemed to convey nothing at all about\n> the 
expected behavior.\n\nI feel like this can go either way. If we pick a name that conveys a\nspecific intended behavior now, and then later we want to pass some\nother sort of node for some purpose other than ignoring errors, it's\nunpleasant to have a name that sounds like it can only ignore errors.\nBut if we never use it for anything other than ignoring errors, a\nspecific name is clearer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 12:10:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Tue, Dec 6, 2022 at 6:46 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-05 Mo 20:06, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >\n> >> But perhaps it's even worth having such a function properly exposed:\n> >> It's not at all rare to parse text data during ETL and quite often\n> >> erroring out fatally is undesirable. As savepoints are undesirable\n> >> overhead-wise, there's a lot of SQL out there that tries to do a\n> >> pre-check about whether some text could be cast to some other data\n> >> type. A function that'd try to cast input to a certain type without\n> >> erroring out hard would be quite useful for that.\n> > Corey and Vik are already talking about a non-error CAST variant.\n>\n>\n> /metoo! :-)\n>\n>\n> > Maybe we should leave this in abeyance until something shows up\n> > for that? Otherwise we'll be making a nonstandard API for what\n> > will probably ultimately be SQL-spec functionality. I don't mind\n> > that as regression-test infrastructure, but I'm a bit less excited\n> > about exposing it as a user feature.\n> >\n>\n>\n> I think a functional mechanism could be very useful. 
Who knows when the\n> standard might specify something in this area?\n>\n>\n>\nVik's working on the standard (he put the spec in earlier in this thread).\nI'm working on implementing it on top of Tom's work, but I'm one patchset\nbehind at the moment.\n\nOnce completed, it should be leverage-able in several places, COPY being\nthe most obvious.\n\nWhat started all this was me noticing that if I implemented TRY_CAST in\npl/pgsql with an exception block, then I wasn't able to use parallel query.", "msg_date": "Tue, 6 Dec 2022 13:16:59 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "OK, here's a v3 responding to the comments from Andres.\n\n0000 is preliminary refactoring of elog.c, with (I trust) no\nfunctional effect. It gets rid of some pre-existing code duplication\nas well as setting up to let 0001's additions be less duplicative.\n\n0001 adopts use of Node pointers in place of \"void *\". To do this\nI needed an alias type in elog.h equivalent to fmgr.h's fmNodePtr.\nI decided that having two different aliases would be too confusing,\nso what I did here was to converge both elog.h and fmgr.h on using\nthe same alias \"typedef struct Node *NodePtr\". That has to be in\nelog.h since it's included first, from postgres.h. (I thought of\ndefining NodePtr in postgres.h, but postgres.h includes elog.h\nimmediately so that wouldn't have looked very nice.)\n\nI also adopted Andres' recommendation that InputFunctionCallSafe\nreturn boolean. I'm still not totally sold on that ... but it does\nend with array_in and record_in never using SAFE_ERROR_OCCURRED at\nall, so maybe the idea's OK.\n\n0002 adjusts the I/O functions for these API changes, and fixes\nmy silly oversight about error cleanup in record_in.\n\nGiven the discussion about testing requirements, I threw away the\nCOPY hack entirely. This 0003 provides a couple of SQL-callable\nfunctions that can be used to invoke a specific datatype's input\nfunction. I haven't documented them, pending bikeshedding on\nnames etc. 
I also arranged to test array_in and record_in with\na datatype that still throws errors, reserving the existing test\ntype \"widget\" for that purpose.\n\n(I'm not intending to foreclose development of new COPY features\nin this area, just abandoning the idea that that's our initial\ntest mechanism.)\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 06 Dec 2022 15:21:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I feel like this can go either way. If we pick a name that conveys a\n> specific intended behavior now, and then later we want to pass some\n> other sort of node for some purpose other than ignoring errors, it's\n> unpleasant to have a name that sounds like it can only ignore errors.\n> But if we never use it for anything other than ignoring errors, a\n> specific name is clearer.\n\nWith Andres' proposal to make the function return boolean succeed/fail,\nI think it's pretty clear that the only useful case is to pass an\nErrorSaveContext. There may well be future APIs that pass some other\nkind of context object to input functions, but they'll presumably\nhave different goals and want a different sort of wrapper function.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Dec 2022 15:29:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-06 Tu 15:21, Tom Lane wrote:\n> OK, here's a v3 responding to the comments from Andres.\n\n\nLooks pretty good to me.\n\n\n>\n> 0000 is preliminary refactoring of elog.c, with (I trust) no\n> functional effect. It gets rid of some pre-existing code duplication\n> as well as setting up to let 0001's additions be less duplicative.\n>\n> 0001 adopts use of Node pointers in place of \"void *\". 
To do this\n> I needed an alias type in elog.h equivalent to fmgr.h's fmNodePtr.\n> I decided that having two different aliases would be too confusing,\n> so what I did here was to converge both elog.h and fmgr.h on using\n> the same alias \"typedef struct Node *NodePtr\". That has to be in\n> elog.h since it's included first, from postgres.h. (I thought of\n> defining NodePtr in postgres.h, but postgres.h includes elog.h\n> immediately so that wouldn't have looked very nice.)\n>\n> I also adopted Andres' recommendation that InputFunctionCallSafe\n> return boolean. I'm still not totally sold on that ... but it does\n> end with array_in and record_in never using SAFE_ERROR_OCCURRED at\n> all, so maybe the idea's OK.\n\n\nOriginally I wanted to make the new function look as much like the\noriginal as possible, but I'm not wedded to that either. I can live with\nit like this.\n\n\n>\n> 0002 adjusts the I/O functions for these API changes, and fixes\n> my silly oversight about error cleanup in record_in.\n>\n> Given the discussion about testing requirements, I threw away the\n> COPY hack entirely. This 0003 provides a couple of SQL-callable\n> functions that can be used to invoke a specific datatype's input\n> function. I haven't documented them, pending bikeshedding on\n> names etc. 
I also arranged to test array_in and record_in with\n> a datatype that still throws errors, reserving the existing test\n> type \"widget\" for that purpose.\n>\n> (I'm not intending to foreclose development of new COPY features\n> in this area, just abandoning the idea that that's our initial\n> test mechanism.)\n>\n\nThe new functions on their own are likely to make plenty of people quite\nhappy once we've adjusted all the input functions.\n\nPerhaps we should add a type in the regress library that will never have\na safe input function, so we can test that the mechanism works as\nexpected in that case even after we adjust all the core data types'\ninput functions.\n\nOtherwise I think we're good to go.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Dec 2022 08:47:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Perhaps we should add a type in the regress library that will never have\n> a safe input function, so we can test that the mechanism works as\n> expected in that case even after we adjust all the core data types'\n> input functions.\n\nI was intending that the existing \"widget\" type be that. 0003 already\nadds a comment to widget_in saying not to \"fix\" its one ereport call.\n\nReturning to the naming quagmire -- it occurred to me just now that\nit might be helpful to call this style of error reporting \"soft\"\nerrors rather than \"safe\" errors, which'd provide a nice contrast\nwith \"hard\" errors thrown by longjmp'ing. That would lead to naming\nall the variant functions XXXSoft not XXXSafe. There would still\nbe commentary to the effect that \"soft errors must be safe, in the\nsense that there's no question whether it's safe to continue\nprocessing the transaction\". 
Anybody think that'd be an\nimprovement?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 09:20:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 7:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> Returning to the naming quagmire -- it occurred to me just now that\n> it might be helpful to call this style of error reporting \"soft\"\n> errors rather than \"safe\" errors, which'd provide a nice contrast\n> with \"hard\" errors thrown by longjmp'ing. That would lead to naming\n> all the variant functions XXXSoft not XXXSafe. There would still\n> be commentary to the effect that \"soft errors must be safe, in the\n> sense that there's no question whether it's safe to continue\n> processing the transaction\". Anybody think that'd be an\n> improvement?\n>\n>\n+1\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 07:51:12 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-07 We 09:20, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Perhaps we should add a type in the regress library that will never have\n>> a safe input function, so we can test that the mechanism works as\n>> expected in that case even after we adjust all the core data types'\n>> input functions.\n> I was intending that the existing \"widget\" type be that. 0003 already\n> adds a comment to widget_in saying not to \"fix\" its one ereport call.\n\n\nYeah, I see that, I must have been insufficiently caffeinated.\n\n\n>\n> Returning to the naming quagmire -- it occurred to me just now that\n> it might be helpful to call this style of error reporting \"soft\"\n> errors rather than \"safe\" errors, which'd provide a nice contrast\n> with \"hard\" errors thrown by longjmp'ing. That would lead to naming\n> all the variant functions XXXSoft not XXXSafe. There would still\n> be commentary to the effect that \"soft errors must be safe, in the\n> sense that there's no question whether it's safe to continue\n> processing the transaction\". Anybody think that'd be an\n> improvement?\n>\n> \t\t\t\n\n\nI'm not sure InputFunctionCallSoft would be an improvement. Maybe\nInputFunctionCallSoftError would be clearer, but I don't know that it's\nmuch of an improvement either. 
The same goes for the other visible changes.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Dec 2022 10:04:01 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-07 We 09:20, Tom Lane wrote:\n>> Returning to the naming quagmire -- it occurred to me just now that\n>> it might be helpful to call this style of error reporting \"soft\"\n>> errors rather than \"safe\" errors, which'd provide a nice contrast\n>> with \"hard\" errors thrown by longjmp'ing. That would lead to naming\n>> all the variant functions XXXSoft not XXXSafe.\n\n> I'm not sure InputFunctionCallSoft would be an improvement.\n\nYeah, after reflecting on it a bit more I'm not that impressed with\nthat as a function name either.\n\n(I think that \"soft error\" could be useful as informal terminology.\nAFAIR we don't use \"hard error\" in any formal way either, but there\nare certainly comments using that phrase.)\n\nMore questions:\n\n* Anyone want to bikeshed about the new SQL-level function names?\nI'm reasonably satisfied with \"pg_input_is_valid\" for the bool-returning\nvariant, but not so much with \"pg_input_invalid_message\" for the\nerror-message-returning variant. Thinking about \"pg_input_error_message\"\ninstead, but that's not stellar either.\n\n* Where in the world shall we document these, if we document them?\nThe only section of chapter 9 that seems even a little bit appropriate\nis \"9.26. System Information Functions and Operators\", and even there,\nthey would need their own new table because they don't fit well in any\nexisting table.\n\nBTW, does anyone else agree that 9.26 is desperately in need of some\n<sect2> subdivisions? 
It seems to have gotten a lot longer since\nI looked at it last.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 10:23:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 8:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-07 We 09:20, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> Perhaps we should add a type in the regress library that will never have\n> >> a safe input function, so we can test that the mechanism works as\n> >> expected in that case even after we adjust all the core data types'\n> >> input functions.\n> > I was intending that the existing \"widget\" type be that. 0003 already\n> > adds a comment to widget_in saying not to \"fix\" its one ereport call.\n>\n>\n> Yeah, I see that, I must have been insufficiently caffeinated.\n>\n>\n> >\n> > Returning to the naming quagmire -- it occurred to me just now that\n> > it might be helpful to call this style of error reporting \"soft\"\n> > errors rather than \"safe\" errors, which'd provide a nice contrast\n> > with \"hard\" errors thrown by longjmp'ing. That would lead to naming\n> > all the variant functions XXXSoft not XXXSafe. There would still\n> > be commentary to the effect that \"soft errors must be safe, in the\n> > sense that there's no question whether it's safe to continue\n> > processing the transaction\". Anybody think that'd be an\n> > improvement?\n> >\n> >\n>\n>\n> I'm not sure InputFunctionCallSoft would be an improvement. Maybe\n> InputFunctionCallSoftError would be clearer, but I don't know that it's\n> much of an improvement either. 
The same goes for the other visible changes.\n>\n>\nInputFunctionCallSafe -> TryInputFunctionCall\n\nI think in create type saying \"input functions to handle errors softly\" is\nan improvement over \"input functions to return safe errors\".\n\nstart->save->finish describes a soft error handling procedure quite well.\nsafe has baggage, all code should be \"safe\".\n\nfmgr/README: \"Handling Non-Exception Errors\" -> \"Soft Error Handling\"\n\n\"typical safe error conditions include\" -> \"error conditions that can be\nhandled softly include\"\n\n(pg_input_is_valid) \"input function has been updated to return \"safe'\nerrors\" -> \"input function has been updated to soft error handling\"\n\n\nUnrelated observation: \"Although the error stack is not large, we don't\nexpect to run out of space.\" -> \"Because the error stack is not large,\nassume that we will not run out of space and panic if we are wrong.\"?\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 08:33:11 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 8:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-12-07 We 09:20, Tom Lane wrote:\n> >> Returning to the naming quagmire -- it occurred to me just now that\n> >> it might be helpful to call this style of error reporting \"soft\"\n> >> errors rather than \"safe\" errors, which'd provide a nice contrast\n> >> with \"hard\" errors thrown by longjmp'ing. 
That would lead to naming\n> >> all the variant functions XXXSoft not XXXSafe.\n>\n> > I'm not sure InputFunctionCallSoft would be an improvement.\n>\n> Yeah, after reflecting on it a bit more I'm not that impressed with\n> that as a function name either.\n>\n> (I think that \"soft error\" could be useful as informal terminology.\n> AFAIR we don't use \"hard error\" in any formal way either, but there\n> are certainly comments using that phrase.)\n>\n> More questions:\n>\n> * Anyone want to bikeshed about the new SQL-level function names?\n> I'm reasonably satisfied with \"pg_input_is_valid\" for the bool-returning\n> variant, but not so much with \"pg_input_invalid_message\" for the\n> error-message-returning variant. Thinking about \"pg_input_error_message\"\n> instead, but that's not stellar either.\n>\n\nWhy not do away with two separate functions and define a composite type\n(boolean, text) for is_valid to return?\n\n\n> * Where in the world shall we document these, if we document them?\n> The only section of chapter 9 that seems even a little bit appropriate\n> is \"9.26. System Information Functions and Operators\", and even there,\n> they would need their own new table because they don't fit well in any\n> existing table.\n>\n\nI would indeed just add a table there.\n\n\n>\n> BTW, does anyone else agree that 9.26 is desperately in need of some\n> <sect2> subdivisions? 
It seems to have gotten a lot longer since\n> I looked at it last.\n>\n>\nI'd be inclined to do something like what we are attempting for Chapter 28\nMonitoring Database Activity; introduce pagination through refentry and\nbuild our own table of contents into it.\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 08:49:18 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Why not do away with two separate functions and define a composite type\n> (boolean, text) for is_valid to return?\n\nI don't see any advantage to that. It would be harder to use in both\nuse-cases.\n\n>> BTW, does anyone else agree that 9.26 is desperately in need of some\n>> <sect2> subdivisions? It seems to have gotten a lot longer since\n>> I looked at it last.\n\n> I'd be inclined to do something like what we are attempting for Chapter 28\n> Monitoring Database Activity; introduce pagination through refentry and\n> build our own table of contents into it.\n\nI'd prefer to follow the model that already exists in 9.27,\nie break it up with <sect2>'s, which provide a handy\nsub-table-of-contents.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 11:06:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 9:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Why not do away with two separate functions and define a composite type\n> > (boolean, text) for is_valid to return?\n>\n> I don't see any advantage to that. It would be harder to use in both\n> use-cases.\n>\n\nI don't really see a use case for either of them individually. 
If all you\nare doing is printing them out in a test and checking the result in what\nsituation wouldn't you want to check that both the true/false and message\nare as expected? Plus, you don't have to figure out a name for the second\nfunction.\n\n\n>\n> >> BTW, does anyone else agree that 9.26 is desperately in need of some\n> >> <sect2> subdivisions? It seems to have gotten a lot longer since\n> >> I looked at it last.\n>\n> > I'd be inclined to do something like what we are attempting for Chapter\n> 28\n> > Monitoring Database Activity; introduce pagination through refentry and\n> > build our own table of contents into it.\n>\n> I'd prefer to follow the model that already exists in 9.27,\n> ie break it up with <sect2>'s, which provide a handy\n> sub-table-of-contents.\n>\n>\nI have a bigger issue with the non-pagination myself; the extra bit of\neffort to manually create a tabular ToC (where we can add descriptions)\nseems like a worthy price to pay.\n\nAre you suggesting we should not go down the path that v8-0003 does in the\nmonitoring section cleanup thread? I find the usability of Chapter 54\nSystem Views to be superior to these two run-on chapters and would rather\nwe emulate it in both these places - for what is in the end very little\nadditional effort, all mechanical in nature.\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 09:15:31 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, Dec 7, 2022 at 9:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>>> Why not do away with two separate functions and define a composite type\n>>> (boolean, text) for is_valid to return?\n\n>> I don't see any advantage to that. It would be harder to use in both\n>> use-cases.\n\n> I don't really see a use case for either of them individually.\n\nUh, several people opined that pg_input_is_valid would be of field\ninterest. 
If I thought these were only for testing purposes I wouldn't\nbe especially concerned about documenting them at all.\n\n> Are you suggesting we should not go down the path that v8-0003 does in the\n> monitoring section cleanup thread? I find the usability of Chapter 54\n> System Views to be superior to these two run-on chapters and would rather\n> we emulate it in both these places - for what is in the end very little\n> additional effort, all mechanical in nature.\n\nI have not been following that thread, and am not really excited about\nputting in a huge amount of documentation work here. I'd just like 9.26\nto have a mini-TOC at the page head, which <sect2>'s would be enough for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 11:59:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Perhaps we should add a type in the regress library that will never have\n> > a safe input function, so we can test that the mechanism works as\n> > expected in that case even after we adjust all the core data types'\n> > input functions.\n>\n> I was intending that the existing \"widget\" type be that. 0003 already\n> adds a comment to widget_in saying not to \"fix\" its one ereport call.\n>\n> Returning to the naming quagmire -- it occurred to me just now that\n> it might be helpful to call this style of error reporting \"soft\"\n> errors rather than \"safe\" errors, which'd provide a nice contrast\n> with \"hard\" errors thrown by longjmp'ing. That would lead to naming\n> all the variant functions XXXSoft not XXXSafe. There would still\n> be commentary to the effect that \"soft errors must be safe, in the\n> sense that there's no question whether it's safe to continue\n> processing the transaction\". 
Anybody think that'd be an\n> improvement?\n\n\nIn my attempt to implement CAST...DEFAULT, I noticed that I immediately\nneeded an\nOidInputFunctionCallSafe, which was trivial but maybe something we want to\nadd to the infra patch, but the comments around that function also somewhat\nindicate that we might want to just do the work in-place and call\nInputFunctionCallSafe directly. Open to both ideas.\n\nLooking forward cascades up into coerce_type and its brethren, and\nreimplementing those from a Node returner to a boolean returner with a Node\nparameter seems a bit of a stretch, so I have to pick a point where the\ncode pivots from passing down a safe-mode indicator and passing back a\nfound_error indicator (which may be combine-able, as safe is always true\nwhen the found_error pointer is not null, and always false when it isn't),\nbut for the most part things look do-able.", "msg_date": "Wed, 7 Dec 2022 12:01:11 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>\n> > Are you suggesting we should not go down the path that v8-0003 does in\n> the\n> > monitoring section cleanup thread? I find the usability of Chapter 54\n> > System Views to be superior to these two run-on chapters and would rather\n> > we emulate it in both these places - for what is in the end very little\n> > additional effort, all mechanical in nature.\n>\n> I have not been following that thread, and am not really excited about\n> putting in a huge amount of documentation work here. 
I'd just like 9.26\n> to have a mini-TOC at the page head, which <sect2>'s would be enough for.\n>\n>\nSo long as you aren't opposed to the idea if someone else does the work,\nadding sect2 is better than nothing even if it is just a stop-gap measure.\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 10:02:47 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-07 09:20:33 -0500, Tom Lane wrote:\n> Returning to the naming quagmire -- it occurred to me just now that\n> it might be helpful to call this style of error reporting \"soft\"\n> errors rather than \"safe\" errors, which'd provide a nice contrast\n> with \"hard\" errors thrown by longjmp'ing.\n\n+1\n\n\n", "msg_date": "Wed, 7 Dec 2022 09:17:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> In my attempt to implement CAST...DEFAULT, I noticed that I immediately\n> needed an\n> OidInputFunctionCallSafe, which was trivial but maybe something we want to\n> add to the infra patch, but the comments around that function also somewhat\n> indicate that we might want to just do the work in-place and call\n> InputFunctionCallSafe directly. Open to both ideas.\n\nI'm a bit skeptical of that. IMO using OidInputFunctionCall is only\nappropriate in places that will be executed just once per query.\nOtherwise, unless you have zero concern for performance, you should\nbe caching the function lookup. (The test functions in my 0003 patch\nillustrate the standard way to do that within SQL-callable functions.\nIf you're implementing CAST as a new kind of executable expression,\nthe lookup would likely happen in expression compilation.)\n\nI don't say that OidInputFunctionCallSafe won't ever be useful, but\nI doubt it's what we want in CAST.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 12:17:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> So long as you aren't opposed to the idea if someone else does the work,\n> adding sect2 is better than nothing even if it is just a stop-gap measure.\n\nOK, we can agree on that.\n\nAs for the other point --- not sure why I didn't remember this right off,\nbut the point of two test functions is that one exercises the code path\nwith details_wanted = true while the other exercises details_wanted =\nfalse. A combined function would only test the first case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 12:20:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-06 15:21:09 -0500, Tom Lane wrote:\n> +{ oid => '8050', descr => 'test whether string is valid input for data type',\n> + proname => 'pg_input_is_valid', provolatile => 's', prorettype => 'bool',\n> + proargtypes => 'text regtype', prosrc => 'pg_input_is_valid' },\n> +{ oid => '8051', descr => 'test whether string is valid input for data type',\n> + proname => 'pg_input_is_valid', provolatile => 's', prorettype => 'bool',\n> + proargtypes => 'text regtype int4', prosrc => 'pg_input_is_valid_mod' },\n> +{ oid => '8052',\n> + descr => 'get error message if string is not valid input for data type',\n> + proname => 'pg_input_invalid_message', provolatile => 's',\n> + prorettype => 'text', proargtypes => 'text regtype',\n> + prosrc => 'pg_input_invalid_message' },\n> +{ oid => '8053',\n> + descr => 'get error message if string is not valid input for data type',\n> + proname => 'pg_input_invalid_message', provolatile => 's',\n> + prorettype => 'text', proargtypes => 'text regtype int4',\n> + prosrc => 'pg_input_invalid_message_mod' },\n> +\n\nIs there a guarantee that input functions are stable or immutable? 
We don't\nhave any volatile input functions in core PG:\n\nSELECT provolatile, count(*) FROM pg_proc WHERE oid IN (SELECT typinput FROM pg_type) GROUP BY provolatile;\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 09:34:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 10:34 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > +{ oid => '8053',\n> > +  descr => 'get error message if string is not valid input for data\n> type',\n> > +  proname => 'pg_input_invalid_message', provolatile => 's',\n> > +  prorettype => 'text', proargtypes => 'text regtype int4',\n> > +  prosrc => 'pg_input_invalid_message_mod' },\n> > +\n>\n> Is there a guarantee that input functions are stable or immutable? We don't\n> have any volatile input functions in core PG:\n>\n> SELECT provolatile, count(*) FROM pg_proc WHERE oid IN (SELECT typinput\n> FROM pg_type) GROUP BY provolatile;\n>\n>\nEffectively yes, though I'm not sure if it is formally documented or\notherwise enforced by the system.\n\nThe fact we allow stable is a bit of a sore spot, volatile would be a\nterrible property for an I/O function.\n\nDavid J.", "msg_date": "Wed, 7 Dec 2022 10:46:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is there a guarantee that input functions are stable or immutable?\n\nThere's a project policy that that should be true. That justifies\nmarking things like record_in as stable --- if the per-column input\nfunctions could be volatile, record_in would need to be as well.\nThere are other dependencies on it; see e.g. 
aab353a60, 3db6524fe.\n\nI dug in the archives and found the thread leading up to aab353a60:\n\nhttps://www.postgresql.org/message-id/flat/AANLkTik8v7O9QR9jjHNVh62h-COC1B0FDUNmEYMdtKjR%40mail.gmail.com\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 13:00:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, Dec 7, 2022 at 8:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I'm not sure InputFunctionCallSoft would be an improvement. Maybe\n>> InputFunctionCallSoftError would be clearer, but I don't know that it's\n>> much of an improvement either. The same goes for the other visible changes.\n\n> InputFunctionCallSafe -> TryInputFunctionCall\n\nI think we are already using \"TryXXX\" for code that involves catching\nereport errors. Since the whole point here is that we are NOT doing\nthat, I think this naming would be more confusing than helpful.\n\n> Unrelated observation: \"Although the error stack is not large, we don't\n> expect to run out of space.\" -> \"Because the error stack is not large,\n> assume that we will not run out of space and panic if we are wrong.\"?\n\nThat doesn't seem to make the point I wanted to make.\n\nI've adopted your other suggestions in the v4 I'm preparing now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 15:16:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "OK, here's a v4 that I think is possibly committable.\n\nI've changed all the comments and docs to use the \"soft error\"\nterminology, but since using \"soft\" in the actual function names\ndidn't seem that appealing, they still use \"safe\".\n\nI already pushed the 0000 elog-refactoring patch, since that seemed\nuncontroversial. 
0001 attached covers the same territory as before,\nbut I regrouped the rest so that 0002 installs the new test support\nfunctions, then 0003 adds both the per-datatype changes and\ncorresponding test cases for bool, int4, arrays, and records.\nThe idea here is that 0003 can be pointed to as a sample of what\nhas to be done to datatype input functions, while the preceding\npatches can be cited as relevant documentation. (I've not decided\nwhether to squash 0001 and 0002 together or commit them separately.\nDoes it make sense to break 0003 into 4 separate commits, or is\nthat overkill?)\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 07 Dec 2022 17:32:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-07 We 17:32, Tom Lane wrote:\n> OK, here's a v4 that I think is possibly committable.\n>\n> I've changed all the comments and docs to use the \"soft error\"\n> terminology, but since using \"soft\" in the actual function names\n> didn't seem that appealing, they still use \"safe\".\n>\n> I already pushed the 0000 elog-refactoring patch, since that seemed\n> uncontroversial. 0001 attached covers the same territory as before,\n> but I regrouped the rest so that 0002 installs the new test support\n> functions, then 0003 adds both the per-datatype changes and\n> corresponding test cases for bool, int4, arrays, and records.\n> The idea here is that 0003 can be pointed to as a sample of what\n> has to be done to datatype input functions, while the preceding\n> patches can be cited as relevant documentation. (I've not decided\n> whether to squash 0001 and 0002 together or commit them separately.\n> Does it make sense to break 0003 into 4 separate commits, or is\n> that overkill?)\n>\n\nNo strong opinion about 0001 and 0002. I'm happy enough with them as\nthey are, but if you want to squash them that's ok. I wouldn't break up\n0003. 
I think we're going to end up committing the remaining work in\nbatches, although they would probably be a bit more thematically linked\nthan these.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Dec 2022 17:50:34 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-07 We 17:32, Tom Lane wrote:\n>> Does it make sense to break 0003 into 4 separate commits, or is\n>> that overkill?)\n\n> No strong opinion about 0001 and 0002. I'm happy enough with them as\n> they are, but if you want to squash them that's ok. I wouldn't break up\n> 0003. I think we're going to end up committing the remaining work in\n> batches, although they would probably be a bit more thematically linked\n> than these.\n\nYeah, we certainly aren't likely to do this work as\none-commit-per-datatype going forward. I'm just wondering\nhow to do these initial commits so that they provide\ngood reference material.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 17:56:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-07 17:32:21 -0500, Tom Lane wrote:\n> I already pushed the 0000 elog-refactoring patch, since that seemed\n> uncontroversial. 0001 attached covers the same territory as before,\n> but I regrouped the rest so that 0002 installs the new test support\n> functions, then 0003 adds both the per-datatype changes and\n> corresponding test cases for bool, int4, arrays, and records.\n> The idea here is that 0003 can be pointed to as a sample of what\n> has to be done to datatype input functions, while the preceding\n> patches can be cited as relevant documentation. 
(I've not decided\n> whether to squash 0001 and 0002 together or commit them separately.\n\nI think they make sense as is.\n\n\n> Does it make sense to break 0003 into 4 separate commits, or is\n> that overkill?)\n\nI think it'd be fine either way.\n\n\n> + * If \"context\" is an ErrorSaveContext node, but the node creator only wants\n> + * notification of the fact of a soft error without any details, just set\n> + * the error_occurred flag in the ErrorSaveContext node and return false,\n> + * which will cause us to skip the remaining error processing steps.\n> + *\n> + * Otherwise, create and initialize error stack entry and return true.\n> + * Subsequently, errmsg() and perhaps other routines will be called to further\n> + * populate the stack entry. Finally, errsave_finish() will be called to\n> + * tidy up.\n> + */\n> +bool\n> +errsave_start(NodePtr context, const char *domain)\n\nI wonder if there are potential use-cases for levels other than ERROR. I can\npotentially see us wanting to defer some FATALs, e.g. when they occur in\nprocess exit hooks.\n\n\n> +{\n> +\tErrorSaveContext *escontext;\n> +\tErrorData *edata;\n> +\n> +\t/*\n> +\t * Do we have a context for soft error reporting? 
If not, just punt to\n> +\t * errstart().\n> +\t */\n> +\tif (context == NULL || !IsA(context, ErrorSaveContext))\n> +\t\treturn errstart(ERROR, domain);\n> +\n> +\t/* Report that a soft error was detected */\n> +\tescontext = (ErrorSaveContext *) context;\n> +\tescontext->error_occurred = true;\n> +\n> +\t/* Nothing else to do if caller wants no further details */\n> +\tif (!escontext->details_wanted)\n> +\t\treturn false;\n> +\n> +\t/*\n> +\t * Okay, crank up a stack entry to store the info in.\n> +\t */\n> +\n> +\trecursion_depth++;\n> +\n> +\t/* Initialize data for this error frame */\n> +\tedata = get_error_stack_entry();\n\nFor a moment I was worried that it could lead to odd behaviour that we don't\ndo get_error_stack_entry() when !details_wanted, due to not raising an error\nwe'd otherwise raise. But that's a should-never-be-reached case, so ...\n\n\n> +/*\n> + * errsave_finish --- end a \"soft\" error-reporting cycle\n> + *\n> + * If errsave_start() decided this was a regular error, behave as\n> + * errfinish(). Otherwise, package up the error details and save\n> + * them in the ErrorSaveContext node.\n> + */\n> +void\n> +errsave_finish(NodePtr context, const char *filename, int lineno,\n> +\t\t\t const char *funcname)\n> +{\n> +\tErrorSaveContext *escontext = (ErrorSaveContext *) context;\n> +\tErrorData *edata = &errordata[errordata_stack_depth];\n> +\n> +\t/* verify stack depth before accessing *edata */\n> +\tCHECK_STACK_DEPTH();\n> +\n> +\t/*\n> +\t * If errsave_start punted to errstart, then elevel will be ERROR or\n> +\t * perhaps even PANIC. Punt likewise to errfinish.\n> +\t */\n> +\tif (edata->elevel >= ERROR)\n> +\t{\n> +\t\terrfinish(filename, lineno, funcname);\n> +\t\tpg_unreachable();\n> +\t}\n\nIt seems somewhat ugly transport this knowledge via edata->elevel, but it's\nnot too bad.\n\n\n\n> +/*\n> + * We cannot include nodes.h yet, so make a stub reference. 
(This is also\n> + * used by fmgr.h, which doesn't want to depend on nodes.h either.)\n> + */\n> +typedef struct Node *NodePtr;\n\nSeems like it'd be easier to just forward declare the struct, and use the\nnon-typedef'ed name in the header than to have to deal with these\ninterdependencies and the differing typenames.\n\n\n> +/*----------\n> + * Support for reporting \"soft\" errors that don't require a full transaction\n> + * abort to clean up. This is to be used in this way:\n> + *\t\terrsave(context,\n> + *\t\t\t\terrcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n> + *\t\t\t\terrmsg(\"invalid input syntax for type %s: \\\"%s\\\"\",\n> + *\t\t\t\t\t \"boolean\", in_str),\n> + *\t\t\t\t... other errxxx() fields as needed ...);\n> + *\n> + * \"context\" is a node pointer or NULL, and the remaining auxiliary calls\n> + * provide the same error details as in ereport(). If context is not a\n> + * pointer to an ErrorSaveContext node, then errsave(context, ...)\n> + * behaves identically to ereport(ERROR, ...). If context is a pointer\n> + * to an ErrorSaveContext node, then the information provided by the\n> + * auxiliary calls is stored in the context node and control returns\n> + * normally. The caller of errsave() must then do any required cleanup\n> + * and return control back to its caller. 
That caller must check the\n> + * ErrorSaveContext node to see whether an error occurred before\n> + * it can trust the function's result to be meaningful.\n> + *\n> + * errsave_domain() allows a message domain to be specified; it is\n> + * precisely analogous to ereport_domain().\n> + *----------\n> + */\n> +#define errsave_domain(context, domain, ...)\t\\\n> +\tdo { \\\n> +\t\tNodePtr context_ = (context); \\\n> +\t\tpg_prevent_errno_in_scope(); \\\n> +\t\tif (errsave_start(context_, domain)) \\\n> +\t\t\t__VA_ARGS__, errsave_finish(context_, __FILE__, __LINE__, __func__); \\\n> +\t} while(0)\n\nPerhaps worth noting here that the reason why the errsave_start/errsave_finish\nsplit exist differs a bit from the reason in ereport_domain()? \"Over there\"\nit's just about not wanting to incur overhead when the message isn't logged,\nbut here we'll always have >= ERROR, but ->details_wanted can still lead to\nnot wanting to incur the overhead.\n\n\n> /*\n> diff --git a/src/backend/utils/adt/rowtypes.c b/src/backend/utils/adt/rowtypes.c\n> index db843a0fbf..bdafcff02d 100644\n> --- a/src/backend/utils/adt/rowtypes.c\n> +++ b/src/backend/utils/adt/rowtypes.c\n> @@ -77,6 +77,7 @@ record_in(PG_FUNCTION_ARGS)\n> \tchar\t *string = PG_GETARG_CSTRING(0);\n> \tOid\t\t\ttupType = PG_GETARG_OID(1);\n> \tint32\t\ttupTypmod = PG_GETARG_INT32(2);\n> +\tNode\t *escontext = fcinfo->context;\n> \tHeapTupleHeader result;\n> \tTupleDesc\ttupdesc;\n> \tHeapTuple\ttuple;\n> @@ -100,7 +101,7 @@ record_in(PG_FUNCTION_ARGS)\n> \t * supply a valid typmod, and then we can do something useful for RECORD.\n> \t */\n> \tif (tupType == RECORDOID && tupTypmod < 0)\n> -\t\tereport(ERROR,\n> +\t\tereturn(escontext, (Datum) 0,\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t errmsg(\"input of anonymous composite types is not implemented\")));\n> \n\nIs it ok that we throw an error in lookup_rowtype_tupdesc()? Normally those\nshould not be reachable by users, I think? 
The new testing functions might\nreach it, but that seems fine, they're test functions.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Dec 2022 15:35:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if there are potential use-cases for levels other than ERROR. I can\n> potentially see us wanting to defer some FATALs, e.g. when they occur in\n> process exit hooks.\n\nI thought about that early on, and concluded not. The whole thing is\nmoot for levels less than ERROR, of course, and I'm having a hard\ntime seeing how it could be useful for FATAL or PANIC. Maybe I just\nlack imagination, but if a call is specifying FATAL rather than just\nERROR then it seems to me it's already a special snowflake rather\nthan something we could fold into a generic non-error behavior.\n\n> For a moment I was worried that it could lead to odd behaviour that we don't\n> do get_error_stack_entry() when !details_wanted, due to not raising an error\n> we'd otherwise raise. But that's a should-never-be-reached case, so ...\n\nI don't see how. Returning false out of errsave_start causes the\nerrsave macro to immediately give control back to the caller, which\nwill go on about its business.\n\n> It seems somewhat ugly transport this knowledge via edata->elevel, but it's\n> not too bad.\n\nThe LOG-vs-ERROR business, you mean? Yeah. I considered adding another\nbool flag to ErrorData, but couldn't convince myself it was worth the\ntrouble. If we find a problem we can do that sometime in future.\n\n>> +/*\n>> + * We cannot include nodes.h yet, so make a stub reference. 
(This is also\n>> + * used by fmgr.h, which doesn't want to depend on nodes.h either.)\n>> + */\n>> +typedef struct Node *NodePtr;\n\n> Seems like it'd be easier to just forward declare the struct, and use the\n> non-typedef'ed name in the header than to have to deal with these\n> interdependencies and the differing typenames.\n\nMeh. I'm a little allergic to writing \"struct foo *\" in function argument\nlists, because I so often see gcc pointing out that if struct foo isn't\nyet known then that can silently mean something different than you\nintended. With the typedef, it either works or is an error, no halfway\nabout it. And the struct way isn't really much better in terms of\nhaving two different notations to use rather than only one.\n\n> Perhaps worth noting here that the reason why the errsave_start/errsave_finish\n> split exist differs a bit from the reason in ereport_domain()? \"Over there\"\n> it's just about not wanting to incur overhead when the message isn't logged,\n> but here we'll always have >= ERROR, but ->details_wanted can still lead to\n> not wanting to incur the overhead.\n\nHmmm ... 
it seems like the same reason to me, we don't want to incur the\noverhead if the \"start\" function says not to.\n\n> Is it ok that we throw an error in lookup_rowtype_tupdesc()?\n\nYeah, that should fall in the category of internal errors I think.\nI don't see how you'd reach that from a bad input string.\n\n(Or to be more precise, the point of pg_input_is_valid is to tell\nyou whether the input string is valid, not to tell you whether the\ntype name is valid; if you're worried about the latter you need\na separate and earlier test.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Dec 2022 18:52:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 7, 2022 at 12:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > In my attempt to implement CAST...DEFAULT, I noticed that I immediately\n> > needed an\n> > OidInputFunctionCallSafe, which was trivial but maybe something we want\n> to\n> > add to the infra patch, but the comments around that function also\n> somewhat\n> > indicate that we might want to just do the work in-place and call\n> > InputFunctionCallSafe directly. Open to both ideas.\n>\n> I'm a bit skeptical of that. 
IMO using OidInputFunctionCall is only\n> appropriate in places that will be executed just once per query.\n>\n\nThat is what's happening when the expr of the existing CAST ( expr AS\ntypename ) is a constant and we want to just resolve the constant at parse\ntime.", "msg_date": "Wed, 7 Dec 2022 22:37:28 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-Dec-07, David G. Johnston wrote:\n\n> Are you suggesting we should not go down the path that v8-0003 does in the\n> monitoring section cleanup thread? I find the usability of Chapter 54\n> System Views to be superior to these two run-on chapters and would rather\n> we emulate it in both these places - for what is in the end very little\n> additional effort, all mechanical in nature.\n\nI think the new 9.26 is much better now than what we had there two days\nago. 
Maybe it would be even better with your proposed changes, but\nlet's see what you come up with.\n\nAs for Chapter 54, while it's a lot better than what we had previously,\nI have a complaint about the new presentation: the overview table\nappears (at least in the HTML presentation) in a separate page from the\ninitial page of the chapter. So to get the intended table of contents I\nhave to move forward from the unintended table of contents (i.e. from\nhttps://www.postgresql.org/docs/devel/views.html forward to\nhttps://www.postgresql.org/docs/devel/views-overview.html ). This seems\npointless. I think it would be better if we just removed the line\n<sect1 id=\"overview\">, which would put that table in the \"front page\".\n\nI also have an issue with Chapter 28, more precisely 28.2.2, where we\nhave similar TOC-style tables (Tables 28.1 and 28.2), but these ones\nseem inferior to the new table in Chapter 54 in that the outgoing links\nare in random positions in the text of the table. It would be better to\nput those in a column of their own, so that they are all vertically\naligned and easier to spot/click. Not sure if you've been here already.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"In the depths of our unconscious there is an obsessive need\nfor a logical and coherent universe. But the real universe always lies\none step beyond logic\" (Irulan)\n\n\n", "msg_date": "Thu, 8 Dec 2022 13:00:19 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-07 17:32:21 -0500, Tom Lane wrote:\n>> +typedef struct Node *NodePtr;\n\n> Seems like it'd be easier to just forward declare the struct, and use the\n> non-typedef'ed name in the header than to have to deal with these\n> interdependencies and the differing typenames.\n\nI've been having second thoughts about how to handle this issue.\nAs we convert more and more datatypes, references to \"Node *\" are\ngoing to be needed in assorted headers that don't currently have\nany reason to #include nodes.h. Rather than bloating their include\nfootprints, we'll want to use the alternate spelling, whichever\nit is. (I already had to do this in array.h.) Some of these headers\nmight be things that are also read by frontend compiles, in which\ncase they won't have access to elog.h either, so that NodePtr in\nthis formulation won't work for them. (I ran into a variant of that\nwith an early draft of this patch series.)\n\nIf we stick with NodePtr we'll probably end by putting that typedef\ninto c.h so that it's accessible in frontend as well as backend.\nI don't have a huge problem with that, but I concede it's a little ugly.\n\nIf we go with \"struct Node *\" then we can solve such problems by\njust repeating \"struct Node;\" forward-declarations in as many\nheaders as we have to. This is a bit ugly too, but maybe less so,\nand it's a method we use elsewhere. The main downside I can see\nto it is that we will probably not find out all the places where\nwe need such declarations until we get field complaints that\n\"header X doesn't compile for me\". elog.h will have a struct Node
elog.h will have a struct Node\ndeclaration, and that will be visible in every backend compilation\nwe do as well as every cpluspluscheck/headerscheck test.\n\nAnother notational point I'm wondering about is whether we want\nto create hundreds of direct references to fcinfo->context.\nIs it time to invent\n\n#define PG_GET_CONTEXT()\t(fcinfo->context)\n\nand write that instead in all these input functions?\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Dec 2022 11:31:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 8, 2022 at 11:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we go with \"struct Node *\" then we can solve such problems by\n> just repeating \"struct Node;\" forward-declarations in as many\n> headers as we have to.\n\nYes, I think just putting \"struct Node;\" in as many places as\nnecessary is the way to go. Or even:\n\nstruct Node;\ntypedef struct Node Node;\n\n....which I think then allows for Node * to be used later.\n\nA small problem with typedef struct Something *SomethingElse is that\nit can get hard to keep track of whether some identifier is a pointer\nto a struct or just a struct. This doesn't bother me as much as it\ndoes some other hackers, from what I gather anyway, but I think we\nshould be pretty judicious in using typedef that way. \"SomethingPtr\"\nreally has no advantage over \"Something *\". 
It is neither shorter nor\nclearer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Dec 2022 16:00:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-08 16:00:10 -0500, Robert Haas wrote:\n> On Thu, Dec 8, 2022 at 11:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > If we go with \"struct Node *\" then we can solve such problems by\n> > just repeating \"struct Node;\" forward-declarations in as many\n> > headers as we have to.\n> \n> Yes, I think just putting \"struct Node;\" in as many places as\n> necessary is the way to go. Or even:\n\n+1\n\n\n> struct Node;\n> typedef struct Node Node;\n\nThat doesn't work well, because C99 doesn't allow typedefs to be redeclared in\nthe same scope. IIRC C11 added support for it, and a lot of compilers already\nsupported it before.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 8 Dec 2022 13:58:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-08 16:00:10 -0500, Robert Haas wrote:\n>> Yes, I think just putting \"struct Node;\" in as many places as\n>> necessary is the way to go. Or even:\n\n> +1\n\nOK, here's a v5 that does it like that.\n\nI've spent a little time pushing ahead on other input functions,\nand realized that my original plan to require a pre-encoded typmod\nfor these test functions was not very user-friendly. So in v5\nyou can write something like\n\npg_input_is_valid('1234.567', 'numeric(7,4)')\n\n0004 attached finishes up the remaining core numeric datatypes\n(int*, float*, numeric). 
I ripped out float8in_internal_opt_error\nin favor of a function that uses the new APIs.\n\n0005 converts contrib/cube, which I chose to tackle partly because\nI'd already touched it in 0004, partly because it seemed like a\ngood idea to verify that extension modules wouldn't have any\nproblems with this approach, and partly because I wondered whether\nan input function that uses a Bison/Flex parser would have big\nproblems getting converted. This one didn't, anyway.\n\nGiven that this additional experimentation didn't find any holes\nin the API design, I think this is pretty much ready to go.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 08 Dec 2022 17:57:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-08 Th 17:57, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-12-08 16:00:10 -0500, Robert Haas wrote:\n>>> Yes, I think just putting \"struct Node;\" in as many places as\n>>> necessary is the way to go. Or even:\n>> +1\n> OK, here's a v5 that does it like that.\n>\n> I've spent a little time pushing ahead on other input functions,\n> and realized that my original plan to require a pre-encoded typmod\n> for these test functions was not very user-friendly. So in v5\n> you can write something like\n>\n> pg_input_is_valid('1234.567', 'numeric(7,4)')\n>\n> 0004 attached finishes up the remaining core numeric datatypes\n> (int*, float*, numeric). 
I ripped out float8in_internal_opt_error\n> in favor of a function that uses the new APIs.\n\n\nGreat, that takes care of some of the relatively urgent work.\n\n\n>\n> 0005 converts contrib/cube, which I chose to tackle partly because\n> I'd already touched it in 0004, partly because it seemed like a\n> good idea to verify that extension modules wouldn't have any\n> problems with this apprach, and partly because I wondered whether\n> an input function that uses a Bison/Flex parser would have big\n> problems getting converted. This one didn't, anyway.\n\n\nCool\n\n\n>\n> Given that this additional experimentation didn't find any holes\n> in the API design, I think this is pretty much ready to go.\n>\n> \t\t\t\n\n\nI will look in more detail tomorrow, but it LGTM on a quick look.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 8 Dec 2022 21:15:42 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-08 17:57:09 -0500, Tom Lane wrote:\n> Given that this additional experimentation didn't find any holes\n> in the API design, I think this is pretty much ready to go.\n\nOne interesting area is timestamp / datetime related code. There's been some\npast efforts in the area, mostly in 5bc450629b3. See the RETURN_ERROR macro in\nformatting.c.\n\nThis is not directly about type input functions, but it looks to me that the\nfunctionality in the patchset should work.\n\nI certainly have the hope that it'll make the code look a bit less ugly...\n\n\nIt looks like a fair bit of work to convert this code, so I don't think we\nshould tie converting formatting.c to the patchset. 
But it might be a good\nidea for Tom to skim the code to see whether there's any things impacting the\ndesign.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 8 Dec 2022 18:33:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-08 17:57:09 -0500, Tom Lane wrote:\n>> Given that this additional experimentation didn't find any holes\n>> in the API design, I think this is pretty much ready to go.\n\n> One interesting area is timestamp / datetime related code. There's been some\n> past efforts in the area, mostly in 5bc450629b3. See the RETURN_ERROR macro in\n> formatting.c.\n> This is not directly about type input functions, but it looks to me that the\n> functionality in the patchset should work.\n\nYeah, I was planning to take a look at that before walking away from\nthis stuff. (I'm sure not volunteering to convert ALL the input\nfunctions, but I'll do the datetime code.)\n\nYou're right that formatting.c is doing stuff that's not exactly\nan input function, but I don't see why we can't apply the same\nAPI concepts to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 08 Dec 2022 21:59:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-08 Th 21:59, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-12-08 17:57:09 -0500, Tom Lane wrote:\n>>> Given that this additional experimentation didn't find any holes\n>>> in the API design, I think this is pretty much ready to go.\n>> One interesting area is timestamp / datetime related code. There's been some\n>> past efforts in the area, mostly in 5bc450629b3. 
See the RETURN_ERROR macro in\n>> formatting.c.\n>> This is not directly about type input functions, but it looks to me that the\n>> functionality in the patchset should work.\n> Yeah, I was planning to take a look at that before walking away from\n> this stuff. (I'm sure not volunteering to convert ALL the input\n> functions, but I'll do the datetime code.)\n>\n\nAwesome. Perhaps if there are no more comments you can commit what you\ncurrently have so people can start work on other input functions.\n\n\nThanks for your work on this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 9 Dec 2022 08:06:58 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-08 Th 21:59, Tom Lane wrote:\n>> Yeah, I was planning to take a look at that before walking away from\n>> this stuff. (I'm sure not volunteering to convert ALL the input\n>> functions, but I'll do the datetime code.)\n\n> Awesome. Perhaps if there are no more comments you can commit what you\n> currently have so people can start work on other input functions.\n\nPushed. As I said, I'll take a look at the datetime area. Do we\nhave any volunteers for other input functions?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Dec 2022 10:16:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-09 Fr 10:16, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-12-08 Th 21:59, Tom Lane wrote:\n>>> Yeah, I was planning to take a look at that before walking away from\n>>> this stuff. (I'm sure not volunteering to convert ALL the input\n>>> functions, but I'll do the datetime code.)\n>> Awesome. 
Perhaps if there are no more comments you can commit what you\n>> currently have so people can start work on other input functions.\n> Pushed. \n\n\nGreat!\n\n\n> As I said, I'll take a look at the datetime area. Do we\n> have any volunteers for other input functions?\n>\n> \t\t\t\n\n\nI am currently looking at the json types. I think that will be enough to\nlet us rework the sql/json patches as discussed a couple of months ago.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 9 Dec 2022 10:37:56 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 9, 2022 at 9:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2022-12-09 Fr 10:16, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> On 2022-12-08 Th 21:59, Tom Lane wrote:\n> >>> Yeah, I was planning to take a look at that before walking away from\n> >>> this stuff. (I'm sure not volunteering to convert ALL the input\n> >>> functions, but I'll do the datetime code.)\n> >> Awesome. Perhaps if there are no more comments you can commit what you\n> >> currently have so people can start work on other input functions.\n> > Pushed.\n>\n>\n> Great!\n>\n>\n> > As I said, I'll take a look at the datetime area. Do we\n> > have any volunteers for other input functions?\n> >\n> >\n>\n>\n> I am currently looking at the json types. 
I think that will be enough to\n> let us rework the sql/json patches as discussed a couple of months ago.\n>\n\nI will pick a few other input functions, thanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 9 Dec 2022 21:46:34 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 9, 2022 at 11:17 AM Amul Sul <sulamul@gmail.com> wrote:\n\n> On Fri, Dec 9, 2022 at 9:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> >\n> > On 2022-12-09 Fr 10:16, Tom Lane wrote:\n> > > Andrew Dunstan <andrew@dunslane.net> writes:\n> > >> On 2022-12-08 Th 21:59, Tom Lane wrote:\n> > >>> Yeah, I was planning to take a look at that before walking away from\n> > >>> this stuff. (I'm sure not volunteering to convert ALL the input\n> > >>> functions, but I'll do the datetime code.)\n> > >> Awesome. Perhaps if there are no more comments you can commit what you\n> > >> currently have so people can start work on other input functions.\n> > > Pushed.\n> >\n> >\n> > Great!\n> >\n> >\n> > > As I said, I'll take a look at the datetime area. Do we\n> > > have any volunteers for other input functions?\n> > >\n> > >\n> >\n> >\n> > I am currently looking at the json types. I think that will be enough to\n> > let us rework the sql/json patches as discussed a couple of months ago.\n> >\n>\n> I will pick a few other input functions, thanks.\n>\n> Regards,\n> Amul\n>\n\nI can do a few as well, as I need them done for the CAST With Default\neffort.\n\nAmul, please let me know which ones you pick so we don't duplicate work.\n\nOn Fri, Dec 9, 2022 at 11:17 AM Amul Sul <sulamul@gmail.com> wrote:On Fri, Dec 9, 2022 at 9:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2022-12-09 Fr 10:16, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> On 2022-12-08 Th 21:59, Tom Lane wrote:\n> >>> Yeah, I was planning to take a look at that before walking away from\n> >>> this stuff.  
(I'm sure not volunteering to convert ALL the input\n> >>> functions, but I'll do the datetime code.)\n> >> Awesome. Perhaps if there are no more comments you can commit what you\n> >> currently have so people can start work on other input functions.\n> > Pushed.\n>\n>\n> Great!\n>\n>\n> > As I said, I'll take a look at the datetime area.  Do we\n> > have any volunteers for other input functions?\n> >\n> >\n>\n>\n> I am currently looking at the json types. I think that will be enough to\n> let us rework the sql/json patches as discussed a couple of months ago.\n>\n\nI will pick a few other input functions, thanks.\n\nRegards,\nAmulI can do a few as well, as I need them done for the CAST With Default effort.Amul, please let me know which ones you pick so we don't duplicate work.", "msg_date": "Fri, 9 Dec 2022 17:54:24 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-09 Fr 10:16, Tom Lane wrote:\n>> As I said, I'll take a look at the datetime area. Do we\n>> have any volunteers for other input functions?\n\n> I am currently looking at the json types. I think that will be enough to\n> let us rework the sql/json patches as discussed a couple of months ago.\n\nCool. I've finished up what I wanted to do with the datetime code.\n\nIt occurred to me that we're going to have a bit of a problem\nwith domain_in. We can certainly make it pass back any soft\nerrors from the underlying type's input function, and we can\nmake it return a soft error if a domain constraint evaluates\nto false. However, what happens if some function in a check\nconstraint throws an error? Our only hope of trapping that,\ngiven that it's a general user-defined expression, would be\na subtransaction. Which is exactly what we don't want here.\n\nI think though that it might be okay to just define this as\nNot Our Problem. 
Although we don't seem to try to enforce it,\nnon-immutable domain check constraints are strongly deprecated\n(the CREATE DOMAIN man page says that we assume immutability).\nAnd not throwing errors is something that we usually consider\nshould ride along with immutability. So I think it might be\nokay to say \"if you want soft error treatment for a domain,\nmake sure its check constraints don't throw errors\".\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Dec 2022 20:28:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-Dec-09, Tom Lane wrote:\n\n> I think though that it might be okay to just define this as\n> Not Our Problem. Although we don't seem to try to enforce it,\n> non-immutable domain check constraints are strongly deprecated\n> (the CREATE DOMAIN man page says that we assume immutability).\n> And not throwing errors is something that we usually consider\n> should ride along with immutability. So I think it might be\n> okay to say \"if you want soft error treatment for a domain,\n> make sure its check constraints don't throw errors\".\n\nI think that's fine. If the user does, say \"CHECK (value > 0)\" and that\nresults in a soft error, that seems to me enough support for now. If\nthey want to do something more elaborate, they can write C functions.\nMaybe eventually we'll want to offer some other mechanism that doesn't\nrequire C, but let's figure out what the requirements are. I don't\nthink we know that, at this point.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. 
Lama)\n\n\n", "msg_date": "Sat, 10 Dec 2022 13:20:13 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Dec-09, Tom Lane wrote:\n>> ... So I think it might be\n>> okay to say \"if you want soft error treatment for a domain,\n>> make sure its check constraints don't throw errors\".\n\n> I think that's fine. If the user does, say \"CHECK (value > 0)\" and that\n> results in a soft error, that seems to me enough support for now. If\n> they want to do something more elaborate, they can write C functions.\n> Maybe eventually we'll want to offer some other mechanism that doesn't\n> require C, but let's figure out what the requirements are. I don't\n> think we know that, at this point.\n\nA fallback we can offer to anyone with such a problem is \"write a\nplpgsql function and wrap the potentially-failing bit in an exception\nblock\". Then they get to pay the cost of the subtransaction, while\nwe're not imposing one on everybody else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Dec 2022 09:20:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-09 Fr 10:37, Andrew Dunstan wrote:\n> I am currently looking at the json types. I think that will be enough to\n> let us rework the sql/json patches as discussed a couple of months ago.\n>\n\nOK, json is a fairly easy case, see attached. But jsonb is a different\nkettle of fish. Both the semantic routines called by the parser and the\nsubsequent call to JsonbValueToJsonb() can raise errors. These are\npretty much all about breaking various limits (for strings, objects,\narrays). There's also a call to numeric_in, but I assume that a string\nthat's already parsed as a valid json numeric literal won't upset\nnumeric_in. 
Many of these occur several calls down the stack, so\nadjusting everything to deal with them would be fairly invasive. Perhaps\nwe could instead document that this class of input error won't be\ntrapped, at least for jsonb. We could still test for well-formed jsonb\ninput, just as I propose for json. That means that we would not be able\nto trap one of these errors in the ON ERROR clause of JSON_TABLE. I\nthink we can probably live with that.\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 10 Dec 2022 09:35:12 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "so 10. 12. 2022 v 15:35 odesílatel Andrew Dunstan <andrew@dunslane.net>\nnapsal:\n\n>\n> On 2022-12-09 Fr 10:37, Andrew Dunstan wrote:\n> > I am currently looking at the json types. I think that will be enough to\n> > let us rework the sql/json patches as discussed a couple of months ago.\n> >\n>\n> OK, json is a fairly easy case, see attached. But jsonb is a different\n> kettle of fish. Both the semantic routines called by the parser and the\n> subsequent call to JsonbValueToJsonb() can raise errors. These are\n> pretty much all about breaking various limits (for strings, objects,\n> arrays). There's also a call to numeric_in, but I assume that a string\n> that's already parsed as a valid json numeric literal won't upset\n> numeric_in. Many of these occur several calls down the stack, so\n> adjusting everything to deal with them would be fairly invasive. Perhaps\n> we could instead document that this class of input error won't be\n> trapped, at least for jsonb. We could still test for well-formed jsonb\n> input, just as I propose for json. That means that we would not be able\n> to trap one of these errors in the ON ERROR clause of JSON_TABLE. 
I\n> think we can probably live with that.\n>\n> Thoughts?\n>\n\n+1\n\nPavel\n\n\n\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\nso 10. 12. 2022 v 15:35 odesílatel Andrew Dunstan <andrew@dunslane.net> napsal:\nOn 2022-12-09 Fr 10:37, Andrew Dunstan wrote:\n> I am currently looking at the json types. I think that will be enough to\n> let us rework the sql/json patches as discussed a couple of months ago.\n>\n\nOK, json is a fairly easy case, see attached. But jsonb is a different\nkettle of fish. Both the semantic routines called by the parser and the\nsubsequent call to JsonbValueToJsonb() can raise errors. These are\npretty much all about breaking various limits (for strings, objects,\narrays). There's also a call to numeric_in, but I assume that a string\nthat's already parsed as a valid json numeric literal won't upset\nnumeric_in. Many of these occur several calls down the stack, so\nadjusting everything to deal with them would be fairly invasive. Perhaps\nwe could instead document that this class of input error won't be\ntrapped, at least for jsonb. We could still test for well-formed jsonb\ninput, just as I propose for json. That means that we would not be able\nto trap one of these errors in the ON ERROR clause of JSON_TABLE. I\nthink we can probably live with that.\n\nThoughts?+1Pavel \n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 10 Dec 2022 16:43:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Sat, Dec 10, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Dec-09, Tom Lane wrote:\n> >> ... So I think it might be\n> >> okay to say \"if you want soft error treatment for a domain,\n> >> make sure its check constraints don't throw errors\".\n>\n> > I think that's fine. 
If the user does, say \"CHECK (value > 0)\" and that\n> > results in a soft error, that seems to me enough support for now. If\n> > they want to do something more elaborate, they can write C functions.\n> > Maybe eventually we'll want to offer some other mechanism that doesn't\n> > require C, but let's figure out what the requirements are. I don't\n> > think we know that, at this point.\n>\n> A fallback we can offer to anyone with such a problem is \"write a\n> plpgsql function and wrap the potentially-failing bit in an exception\n> block\". Then they get to pay the cost of the subtransaction, while\n> we're not imposing one on everybody else.\n>\n> regards, tom lane\n>\n\nThat exception block will prevent parallel plans. I'm not saying it isn't\nthe best way forward for us, but wanted to make that side effect clear.\n\nOn Sat, Dec 10, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Dec-09, Tom Lane wrote:\n>> ...  So I think it might be\n>> okay to say \"if you want soft error treatment for a domain,\n>> make sure its check constraints don't throw errors\".\n\n> I think that's fine.  If the user does, say \"CHECK (value > 0)\" and that\n> results in a soft error, that seems to me enough support for now.  If\n> they want to do something more elaborate, they can write C functions.\n> Maybe eventually we'll want to offer some other mechanism that doesn't\n> require C, but let's figure out what the requirements are.  I don't\n> think we know that, at this point.\n\nA fallback we can offer to anyone with such a problem is \"write a\nplpgsql function and wrap the potentially-failing bit in an exception\nblock\".  Then they get to pay the cost of the subtransaction, while\nwe're not imposing one on everybody else.\n\n                        regards, tom laneThat exception block will prevent parallel plans. 
I'm not saying it isn't the best way forward for us, but wanted to make that side effect clear.", "msg_date": "Sat, 10 Dec 2022 12:19:59 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, json is a fairly easy case, see attached. But jsonb is a different\n> kettle of fish. Both the semantic routines called by the parser and the\n> subsequent call to JsonbValueToJsonb() can raise errors. These are\n> pretty much all about breaking various limits (for strings, objects,\n> arrays). There's also a call to numeric_in, but I assume that a string\n> that's already parsed as a valid json numeric literal won't upset\n> numeric_in.\n\nUm, nope ...\n\nregression=# select '1e1000000'::jsonb;\nERROR: value overflows numeric format\nLINE 1: select '1e1000000'::jsonb;\n ^\n\n> Many of these occur several calls down the stack, so\n> adjusting everything to deal with them would be fairly invasive. Perhaps\n> we could instead document that this class of input error won't be\n> trapped, at least for jsonb.\n\nSeeing that SQL/JSON is one of the major drivers of this whole project,\nit seemed a little sad to me that jsonb couldn't manage to implement\nwhat is required. So I spent a bit of time poking at it. Attached\nis an extended version of your patch that also covers jsonb.\n\nThe main thing I soon realized is that the JsonSemAction API is based\non the assumption that semantic actions will report errors by throwing\nthem. This is a bit schizophrenic considering the parser itself carefully\nhands back error codes instead of throwing anything (excluding palloc\nfailures of course). What I propose in the attached is that we change\nthat API so that action functions return JsonParseErrorType, and add\nan enum value denoting \"I already logged a suitable error, so you don't\nhave to\". 
It was a little tedious to modify all the existing functions\nthat way, but not hard. Only the ones used by jsonb_in need to do\nanything except \"return JSON_SUCCESS\", at least for now.\n\n(I wonder if pg_verifybackup's parse_manifest.c could use a second\nlook at how it's handling errors, given this API. I didn't study it\nclosely.)\n\nI have not done anything here about errors within JsonbValueToJsonb.\nThere would need to be another round of API-extension in that area\nif we want to be able to trap its errors. As you say, those are mostly\nabout exceeding implementation size limits, so I suppose one could argue\nthat they are not so different from palloc failure. It's still annoying.\nIf people are good with the changes attached, I might take a look at\nthat.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 10 Dec 2022 14:38:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> On Sat, Dec 10, 2022 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A fallback we can offer to anyone with such a problem is \"write a\n>> plpgsql function and wrap the potentially-failing bit in an exception\n>> block\". Then they get to pay the cost of the subtransaction, while\n>> we're not imposing one on everybody else.\n\n> That exception block will prevent parallel plans. I'm not saying it isn't\n> the best way forward for us, but wanted to make that side effect clear.\n\nHmm. Apropos of that, I notice that domain_in is marked PARALLEL SAFE,\nwhich seems like a bad idea if it could invoke not-so-parallel-safe\nexpressions. Do we need to mark it less safe, and if so how much less?\n\nAnyway, assuming that people are okay with the Not Our Problem approach,\nthe patch is pretty trivial, as attached. 
I started to write an addition\nto the CREATE DOMAIN man page recommending that domain CHECK constraints\nnot throw errors, but couldn't get past the bare recommendation. Normally\nI'd want to explain such a thing along the lines of \"For example, X won't\nwork\" ... but we don't yet have any committed features that depend on\nthis. I'm inclined to leave it like that for now. If we don't remember\nto fix it once we do have some features, I'm sure somebody will ask a\nquestion about it eventually.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 10 Dec 2022 16:01:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-10 Sa 14:38, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> OK, json is a fairly easy case, see attached. But jsonb is a different\n>> kettle of fish. Both the semantic routines called by the parser and the\n>> subsequent call to JsonbValueToJsonb() can raise errors. These are\n>> pretty much all about breaking various limits (for strings, objects,\n>> arrays). There's also a call to numeric_in, but I assume that a string\n>> that's already parsed as a valid json numeric literal won't upset\n>> numeric_in.\n> Um, nope ...\n>\n> regression=# select '1e1000000'::jsonb;\n> ERROR: value overflows numeric format\n> LINE 1: select '1e1000000'::jsonb;\n> ^\n\n\nOops, yeah.\n\n\n>> Many of these occur several calls down the stack, so\n>> adjusting everything to deal with them would be fairly invasive. Perhaps\n>> we could instead document that this class of input error won't be\n>> trapped, at least for jsonb.\n> Seeing that SQL/JSON is one of the major drivers of this whole project,\n> it seemed a little sad to me that jsonb couldn't manage to implement\n> what is required. So I spent a bit of time poking at it. 
Attached\n> is an extended version of your patch that also covers jsonb.\n>\n> The main thing I soon realized is that the JsonSemAction API is based\n> on the assumption that semantic actions will report errors by throwing\n> them. This is a bit schizophrenic considering the parser itself carefully\n> hands back error codes instead of throwing anything (excluding palloc\n> failures of course). What I propose in the attached is that we change\n> that API so that action functions return JsonParseErrorType, and add\n> an enum value denoting \"I already logged a suitable error, so you don't\n> have to\". It was a little tedious to modify all the existing functions\n> that way, but not hard. Only the ones used by jsonb_in need to do\n> anything except \"return JSON_SUCCESS\", at least for now.\n\n\nMany thanks for doing this, it looks good.\n\n> I have not done anything here about errors within JsonbValueToJsonb.\n> There would need to be another round of API-extension in that area\n> if we want to be able to trap its errors. As you say, those are mostly\n> about exceeding implementation size limits, so I suppose one could argue\n> that they are not so different from palloc failure. It's still annoying.\n> If people are good with the changes attached, I might take a look at\n> that.\n>\n> \t\t\t\n\n\nAwesome.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 10 Dec 2022 18:11:35 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-10 Sa 14:38, Tom Lane wrote:\n>> Seeing that SQL/JSON is one of the major drivers of this whole project,\n>> it seemed a little sad to me that jsonb couldn't manage to implement\n>> what is required. So I spent a bit of time poking at it. 
Attached\n>> is an extended version of your patch that also covers jsonb.\n\n> Many thanks for doing this, it looks good.\n\nCool, thanks. Looking at my notes, there's one other loose end\nI forgot to mention:\n\n * Note: pg_unicode_to_server() will throw an error for a\n * conversion failure, rather than returning a failure\n * indication. That seems OK.\n\nWe ought to do something about that, but I'm not sure how hard we\nought to work at it. Perhaps it's sufficient to make a variant of\npg_unicode_to_server that just returns true/false instead of failing,\nand add a JsonParseErrorType for \"untranslatable character\" to let\njson_errdetail return a reasonably on-point message. We could imagine\nextending the ErrorSaveContext infrastructure into the encoding\nconversion modules, and maybe at some point that'll be worth doing,\nbut in this particular context it doesn't seem like we'd be getting\na very much better error message. The main thing that we would get\nfrom such an extension is a chance to capture the report from\nreport_untranslatable_char. But what that adds is the ability to\nidentify exactly which character couldn't be translated --- and in\nthis use-case there's always just one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 10 Dec 2022 19:00:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-10 Sa 19:00, Tom Lane wrote:\n> Looking at my notes, there's one other loose end\n> I forgot to mention:\n>\n> * Note: pg_unicode_to_server() will throw an error for a\n> * conversion failure, rather than returning a failure\n> * indication. That seems OK.\n>\n> We ought to do something about that, but I'm not sure how hard we\n> ought to work at it. 
Perhaps it's sufficient to make a variant of\n> pg_unicode_to_server that just returns true/false instead of failing,\n> and add a JsonParseErrorType for \"untranslatable character\" to let\n> json_errdetail return a reasonably on-point message. \n\n\nSeems reasonable.\n\n\n> We could imagine\n> extending the ErrorSaveContext infrastructure into the encoding\n> conversion modules, and maybe at some point that'll be worth doing,\n> but in this particular context it doesn't seem like we'd be getting\n> a very much better error message. The main thing that we would get\n> from such an extension is a chance to capture the report from\n> report_untranslatable_char. But what that adds is the ability to\n> identify exactly which character couldn't be translated --- and in\n> this use-case there's always just one.\n>\n> \t\t\t\n\n\nYeah, probably overkill for now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 11 Dec 2022 09:35:40 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-10 Sa 14:38, Tom Lane wrote:\n>> I have not done anything here about errors within JsonbValueToJsonb.\n>> There would need to be another round of API-extension in that area\n>> if we want to be able to trap its errors. As you say, those are mostly\n>> about exceeding implementation size limits, so I suppose one could argue\n>> that they are not so different from palloc failure. It's still annoying.\n>> If people are good with the changes attached, I might take a look at\n>> that.\n\n> Awesome.\n\nI spent some time looking at this, and was discouraged to conclude\nthat the notational mess would probably be substantially out of\nproportion to the value. 
The main problem is that we'd have to change\nthe API of pushJsonbValue, which has more than 150 call sites, most\nof which would need to grow a new test for failure return. Maybe\nsomebody will feel like tackling that at some point, but not me today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Dec 2022 12:24:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-11 Su 12:24, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-12-10 Sa 14:38, Tom Lane wrote:\n>>> I have not done anything here about errors within JsonbValueToJsonb.\n>>> There would need to be another round of API-extension in that area\n>>> if we want to be able to trap its errors. As you say, those are mostly\n>>> about exceeding implementation size limits, so I suppose one could argue\n>>> that they are not so different from palloc failure. It's still annoying.\n>>> If people are good with the changes attached, I might take a look at\n>>> that.\n>> Awesome.\n> I spent some time looking at this, and was discouraged to conclude\n> that the notational mess would probably be substantially out of\n> proportion to the value. The main problem is that we'd have to change\n> the API of pushJsonbValue, which has more than 150 call sites, most\n> of which would need to grow a new test for failure return. Maybe\n> somebody will feel like tackling that at some point, but not me today.\n>\n> \t\t\t\n\n\nYes, I had similar feelings when I looked at it. 
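To make the notational burden concrete, here is a minimal C sketch of what each converted call site would have to start doing — the names (push_item, build_object) are invented for illustration and are not the actual pushJsonbValue API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a value-pushing routine that reports failure softly. */
static bool
push_item(int *stack, size_t cap, size_t *top, int item)
{
    if (item < 0)
        return false;           /* "bad input" reported as a return code */
    if (*top >= cap)
        return false;           /* implementation limit exceeded */
    stack[(*top)++] = item;
    return true;
}

/*
 * Every caller now has to test each call and bail out early; multiply
 * this pattern by ~150 call sites to see the scale of the change.
 */
static bool
build_object(int *stack, size_t cap, size_t *top,
             const int *items, size_t n)
{
    for (size_t i = 0; i < n; i++)
    {
        if (!push_item(stack, cap, top, items[i]))
            return false;       /* stop at the first failure */
    }
    return true;
}
```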
I don't think this\nneeds to hold up proceeding with the SQL/JSON rework, which I think can\nreasonably restart now.\n\nMaybe as we work through the remaining input functions (there are about\n60 core candidates left on my list) we should mark them with a comment\nif no adjustment is needed.\n\nI'm going to look at jsonpath and the text types next, I somewhat tied\nup this week but might get to relook at pushJsonbValue later in the month.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 11 Dec 2022 13:01:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Maybe as we work through the remaining input functions (there are about\n> 60 core candidates left on my list) we should mark them with a comment\n> if no adjustment is needed.\n\nI did a quick pass through them last night. Assuming that we don't\nneed to touch the unimplemented input functions (eg for pseudotypes),\nI count these core functions as still needing work:\n\naclitemin\nbit_in\nbox_in\nbpcharin\nbyteain\ncash_in\ncidin\ncidr_in\ncircle_in\ninet_in\nint2vectorin\njsonpath_in\nline_in\nlseg_in\nmacaddr8_in\nmacaddr_in\nmultirange_in\nnamein\noidin\noidvectorin\npath_in\npg_lsn_in\npg_snapshot_in\npoint_in\npoly_in\nrange_in\nregclassin\nregcollationin\nregconfigin\nregdictionaryin\nregnamespacein\nregoperatorin\nregoperin\nregprocedurein\nregprocin\nregrolein\nregtypein\ntidin\ntsqueryin\ntsvectorin\nuuid_in\nvarbit_in\nvarcharin\nxid8in\nxidin\nxml_in\n\nand these contrib functions:\n\nhstore:\nhstore_in\nintarray:\nbqarr_in\nisn:\nean13_in\nisbn_in\nismn_in\nissn_in\nupc_in\nltree:\nltree_in\nlquery_in\nltxtq_in\nseg:\nseg_in\n\nMaybe we should have a conversation about which of these are\nhighest priority to get to a credible feature. 
We clearly need\nto fix the remaining SQL-spec types (varchar and bpchar, mainly).\nAt the other extreme, likely nobody would weep if we never fixed\nint2vectorin, for instance.\n\nI'm a little concerned about the cost-benefit of fixing the reg* types.\nThe ones that accept type names actually use the core grammar to parse\nthose. Now, we probably could fix the grammar to be non-throwing, but\nit'd be very invasive and I'm not sure about the performance impact.\nIt might be best to content ourselves with soft reporting of lookup\nfailures, as opposed to syntax problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Dec 2022 13:29:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Hi,\n\nOn 2022-12-11 12:24:11 -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-12-10 Sa 14:38, Tom Lane wrote:\n> >> I have not done anything here about errors within JsonbValueToJsonb.\n> >> There would need to be another round of API-extension in that area\n> >> if we want to be able to trap its errors. As you say, those are mostly\n> >> about exceeding implementation size limits, so I suppose one could argue\n> >> that they are not so different from palloc failure. It's still annoying.\n> >> If people are good with the changes attached, I might take a look at\n> >> that.\n> \n> > Awesome.\n> \n> I spent some time looking at this, and was discouraged to conclude\n> that the notational mess would probably be substantially out of\n> proportion to the value. The main problem is that we'd have to change\n> the API of pushJsonbValue, which has more than 150 call sites, most\n> of which would need to grow a new test for failure return. 
Maybe\n> somebody will feel like tackling that at some point, but not me today.\n\nCould we address this more minimally by putting the error state into the\nJsonbParseState and add a check for that error state to convertToJsonb() or\nsuch (by passing in the JsonbParseState)? We'd need to return immediately in\npushJsonbValue() if there's already an error, but that that's not too bad.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 11 Dec 2022 12:41:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-11 12:24:11 -0500, Tom Lane wrote:\n>> I spent some time looking at this, and was discouraged to conclude\n>> that the notational mess would probably be substantially out of\n>> proportion to the value. The main problem is that we'd have to change\n>> the API of pushJsonbValue, which has more than 150 call sites, most\n>> of which would need to grow a new test for failure return. Maybe\n>> somebody will feel like tackling that at some point, but not me today.\n\n> Could we address this more minimally by putting the error state into the\n> JsonbParseState and add a check for that error state to convertToJsonb() or\n> such (by passing in the JsonbParseState)? We'd need to return immediately in\n> pushJsonbValue() if there's already an error, but that that's not too bad.\n\nWe could shoehorn error state into the JsonbParseState, although the\nfact that that stack normally starts out empty is a bit of a problem.\nI think you'd have to push a dummy entry if you want soft errors,\nstore the error state pointer into that, and have pushState() copy\ndown the parent's error pointer. Kind of ugly, but do-able. 
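In miniature, the shoehorning under discussion — error state carried inside the parse state itself, with pushes becoming no-ops once it is set — might look like this; SoftParseState and its fields are illustrative stand-ins, not the real JsonbParseState:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct SoftParseState
{
    int         depth;      /* stand-in for the real stack bookkeeping */
    bool        failed;     /* set once; later pushes become no-ops */
    const char *first_err;  /* only the first error is recorded */
} SoftParseState;

static bool
push_value(SoftParseState *state, int value)
{
    if (state->failed)
        return false;           /* already failed: return immediately */
    if (value < 0)
    {
        state->failed = true;
        state->first_err = "negative value not allowed";
        return false;
    }
    state->depth++;
    return true;
}

/* The final conversion step checks the accumulated error state once. */
static bool
finish_conversion(const SoftParseState *state)
{
    return !state->failed;
}
```

The attraction is that callers can keep their existing call sequences unchanged; only the final conversion step has to inspect the error state.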
Whether\nit's better than replacing that argument with a pointer-to-struct-\nthat-includes-the-stack-and-the-error-pointer wasn't real clear to me.\n\nWhat seemed like a mess was getting the calling code to quit early.\nI'm not convinced that just putting an immediate exit into pushJsonbValue\nwould be enough, because the callers tend to assume a series of calls\nwill behave as they expect. Probably some of the call sites could\nignore the issue, but you'd still end with a lot of messy changes\nI fear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 11 Dec 2022 16:23:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 12, 2022 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Maybe as we work through the remaining input functions (there are about\n> > 60 core candidates left on my list) we should mark them with a comment\n> > if no adjustment is needed.\n>\n> I did a quick pass through them last night. Assuming that we don't\n> need to touch the unimplemented input functions (eg for pseudotypes),\n> I count these core functions as still needing work:\n>\n> aclitemin\n> bit_in\n> box_in\n> bpcharin\n> byteain\n> cash_in\n> cidin\n> cidr_in\n> circle_in\n> inet_in\n> int2vectorin\n> jsonpath_in\n> line_in\n> lseg_in\n> macaddr8_in\n> macaddr_in\n\nAttaching patches changing these functions except bpcharin,\nbyteain, jsonpath_in, and cidin. 
I am continuing work on the next\nitems below:\n\n> multirange_in\n> namein\n> oidin\n> oidvectorin\n> path_in\n> pg_lsn_in\n> pg_snapshot_in\n> point_in\n> poly_in\n> range_in\n> regclassin\n> regcollationin\n> regconfigin\n> regdictionaryin\n> regnamespacein\n> regoperatorin\n> regoperin\n> regprocedurein\n> regprocin\n> regrolein\n> regtypein\n> tidin\n> tsqueryin\n> tsvectorin\n> uuid_in\n> varbit_in\n> varcharin\n> xid8in\n> xidin\n> xml_in\n>\n> and these contrib functions:\n>\n> hstore:\n> hstore_in\n> intarray:\n> bqarr_in\n> isn:\n> ean13_in\n> isbn_in\n> ismn_in\n> issn_in\n> upc_in\n> ltree:\n> ltree_in\n> lquery_in\n> ltxtq_in\n> seg:\n> seg_in\n>\n> Maybe we should have a conversation about which of these are\n> highest priority to get to a credible feature. We clearly need\n> to fix the remaining SQL-spec types (varchar and bpchar, mainly).\n> At the other extreme, likely nobody would weep if we never fixed\n> int2vectorin, for instance.\n>\n> I'm a little concerned about the cost-benefit of fixing the reg* types.\n> The ones that accept type names actually use the core grammar to parse\n> those. 
Now, we probably could fix the grammar to be non-throwing, but\n> it'd be very invasive and I'm not sure about the performance impact.\n> It might be best to content ourselves with soft reporting of lookup\n> failures, as opposed to syntax problems.\n>\n\nRegards,\nAmul", "msg_date": "Tue, 13 Dec 2022 18:03:19 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Tue, Dec 13, 2022 at 6:03 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Mon, Dec 12, 2022 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> > > Maybe as we work through the remaining input functions (there are about\n> > > 60 core candidates left on my list) we should mark them with a comment\n> > > if no adjustment is needed.\n> >\n> > I did a quick pass through them last night. Assuming that we don't\n> > need to touch the unimplemented input functions (eg for pseudotypes),\n> > I count these core functions as still needing work:\n> >\n> > aclitemin\n> > bit_in\n> > box_in\n> > bpcharin\n> > byteain\n> > cash_in\n> > cidin\n> > cidr_in\n> > circle_in\n> > inet_in\n> > int2vectorin\n> > jsonpath_in\n> > line_in\n> > lseg_in\n> > macaddr8_in\n> > macaddr_in\n>\n> Attaching patches changing these functions except bpcharin,\n> byteain, jsonpath_in, and cidin. 
I am continuing work on the next\n> items below:\n>\n> > multirange_in\n> > namein\n> > oidin\n> > oidvectorin\n> > path_in\n> > pg_lsn_in\n> > pg_snapshot_in\n> > point_in\n> > poly_in\n> > range_in\n> > regclassin\n> > regcollationin\n> > regconfigin\n> > regdictionaryin\n> > regnamespacein\n> > regoperatorin\n> > regoperin\n> > regprocedurein\n> > regprocin\n> > regrolein\n> > regtypein\n> > tidin\n> > tsqueryin\n> > tsvectorin\n> > uuid_in\n> > varbit_in\n> > varcharin\n> > xid8in\n> > xidin\n\nAttaching a complete set of the patches changing function till this\nexcept bpcharin, byteain jsonpath_in that Andrew is planning to look\nin. I have skipped reg* functions.\nmultirange_in and range_in changes are a bit complicated and big --\nplanning to resume work on that and the rest of the items in the list\nin the last week of this month, thanks.\n\n\n> > xml_in\n> >\n> > and these contrib functions:\n> >\n> > hstore:\n> > hstore_in\n> > intarray:\n> > bqarr_in\n> > isn:\n> > ean13_in\n> > isbn_in\n> > ismn_in\n> > issn_in\n> > upc_in\n> > ltree:\n> > ltree_in\n> > lquery_in\n> > ltxtq_in\n> > seg:\n> > seg_in\n> >\n> > Maybe we should have a conversation about which of these are\n> > highest priority to get to a credible feature. We clearly need\n> > to fix the remaining SQL-spec types (varchar and bpchar, mainly).\n> > At the other extreme, likely nobody would weep if we never fixed\n> > int2vectorin, for instance.\n> >\n> > I'm a little concerned about the cost-benefit of fixing the reg* types.\n> > The ones that accept type names actually use the core grammar to parse\n> > those. 
Now, we probably could fix the grammar to be non-throwing, but\n> > it'd be very invasive and I'm not sure about the performance impact.\n> > It might be best to content ourselves with soft reporting of lookup\n> > failures, as opposed to syntax problems.\n> >\n>\n\nRegards,\nAmul", "msg_date": "Wed, 14 Dec 2022 18:05:02 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> Attaching a complete set of the patches changing function till this\n> except bpcharin, byteain jsonpath_in that Andrew is planning to look\n> in. I have skipped reg* functions.\n\nI'll take a look at these shortly, unless Andrew is already on it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 11:00:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-14 We 11:00, Tom Lane wrote:\n> Amul Sul <sulamul@gmail.com> writes:\n>> Attaching a complete set of the patches changing function till this\n>> except bpcharin, byteain jsonpath_in that Andrew is planning to look\n>> in. I have skipped reg* functions.\n> I'll take a look at these shortly, unless Andrew is already on it.\n>\n> \t\t\t\n\n\nThanks, I have been looking at jsonpath, but I'm not quite sure how to\nget the escontext argument to the yyerror calls in jsonpath_scan.l. Maybe\nI need to specify a lex-param setting?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 14 Dec 2022 16:45:45 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Thanks, I have been looking at jsonpath, but I'm not quite sure how to\n> get the escontext argument to the yyerror calls in jsonpath_scan.l. 
Maybe\n> I need to specify a lex-param setting?\n\nYou want a parse-param option in jsonpath_gram.y, I think; adding that\nwill persuade Bison to change the signatures of relevant functions.\nCompare the mods I made in contrib/cube in ccff2d20e.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 17:37:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I wrote:\n> Amul Sul <sulamul@gmail.com> writes:\n>> Attaching a complete set of the patches changing function till this\n>> except bpcharin, byteain jsonpath_in that Andrew is planning to look\n>> in. I have skipped reg* functions.\n\n> I'll take a look at these shortly, unless Andrew is already on it.\n\nI've gone through these now and revised/pushed most of them.\n\nI do not think that we need to touch any unimplemented I/O functions.\nThey can just as well be unimplemented for soft-error cases too;\nI can't see any use-case where it could be useful to have them not\ncomplain. So I discarded\n\nv1-0004-Change-brin_bloom_summary_in-to-allow-non-throw-e.patch\nv1-0005-Change-brin_minmax_multi_summary_in-to-allow-non-.patch\nv1-0009-Change-gtsvectorin-to-allow-non-throw-error-repor.patch\nv1-0018-Change-pg_mcv_list_in-to-allow-non-throw-error-re.patch\nv1-0019-Change-pg_ndistinct_in-to-allow-non-throw-error-r.patch\n\nAs for the rest, some were closer to being committable than others.\nYou need to be more careful about handling error cases in subroutines:\nyou can't just ereturn from a subroutine and figure you're done,\nbecause the caller will keep plugging along if you don't do something\nto teach it not to. What that would often lead to is the caller\nfinding what it thinks is a new error condition, and overwriting the\noriginal message with something that's much less on-point. 
This is\ncomparable to cascading errors from a compiler: anybody who's dealt\nwith those knows that errors after the first one are often just noise.\nSo we have to be careful to quit after we log the first error.\n\nAlso, I ended up ripping out the changes in line_construct, because\nas soon as I tried to test them I tripped over the fact that lseg_sl\nwas still throwing hard errors, before we ever get to line_construct.\nPerhaps it is worth fixing all that but I have to deem it very low\npriority, because the two-input-points formulation isn't the mainstream\ncode path. (I kind of wonder too if there isn't a better, more\nnumerically robust conversion method ...) In any case I'm pretty sure\nthose changes in float.h would have drawn Andres' ire. We don't want\nto be adding arguments to float_overflow_error/float_underflow_error;\nif that were acceptable they'd not have looked like that to begin with.\n\nAnyway, thanks for the work! That moved us a good ways.\n\nI think I'm going to go fix bpcharin and varcharin, because those\nare the last of the SQL-spec-defined types.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Dec 2022 18:24:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Here are some proposed patches for converting range_in and multirange_in.\n\n0001 tackles the straightforward part, which is trapping syntax errors\nand called-input-function errors. The only thing that I think might\nbe controversial here is that I chose to change the signatures of\nthe exposed functions range_serialize and make_range rather than\ninventing xxx_safe variants. 
I think this is all right, because\nAFAIK the only likely reason for extensions to call either of those\nis that custom types' canonical functions would need to call\nrange_serialize --- and those will need to be touched anyway,\nsee 0002.\n\nWhat 0001 does not cover is trapping errors occurring in range\ncanonicalize functions. I'd first thought maybe doing that wasn't\nworth the trouble, but it's not really very hard to fix the built-in\ncanonicalize functions, as shown in 0002. Probably extensions would\nnot find it much harder, and in any case they're not really required\nto make their errors soft.\n\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 14 Dec 2022 22:33:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 15, 2022 at 9:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Here are some proposed patches for converting range_in and multirange_in.\n>\n> 0001 tackles the straightforward part, which is trapping syntax errors\n> and called-input-function errors. The only thing that I think might\n> be controversial here is that I chose to change the signatures of\n> the exposed functions range_serialize and make_range rather than\n> inventing xxx_safe variants. I think this is all right, because\n> AFAIK the only likely reason for extensions to call either of those\n> is that custom types' canonical functions would need to call\n> range_serialize --- and those will need to be touched anyway,\n> see 0002.\n>\n> What 0001 does not cover is trapping errors occurring in range\n> canonicalize functions. I'd first thought maybe doing that wasn't\n> worth the trouble, but it's not really very hard to fix the built-in\n> canonicalize functions, as shown in 0002. 
Probably extensions would\n> not find it much harder, and in any case they're not really required\n> to make their errors soft.\n>\n> Any objections?\n>\n\nThere are other a bunch of hard errors from get_multirange_io_data(),\nget_range_io_data() and its subroutine can hit, shouldn't we care\nabout those?\n\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 15 Dec 2022 10:49:45 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> There are other a bunch of hard errors from get_multirange_io_data(),\n> get_range_io_data() and its subroutine can hit, shouldn't we care\n> about those?\n\nI think those are all \"internal\" errors, ie not reachable as a\nconsequence of bad input data. Do you see a reason to think\ndifferently?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Dec 2022 00:45:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Thu, Dec 15, 2022 at 11:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amul Sul <sulamul@gmail.com> writes:\n> > There are other a bunch of hard errors from get_multirange_io_data(),\n> > get_range_io_data() and its subroutine can hit, shouldn't we care\n> > about those?\n>\n> I think those are all \"internal\" errors, ie not reachable as a\n> consequence of bad input data. Do you see a reason to think\n> differently?\n\nMake sense, I was worried about the internal errors as well as an\nerror that the user can cause while declaring multi-range e.g. 
shell\ntype, but realized that case gets checked at creating that multi-range\ntype.\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 15 Dec 2022 11:49:24 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Wed, Dec 14, 2022 at 6:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've gone through these now and revised/pushed most of them.\n\nTom, I just want to extend huge thanks to you for working on this\ninfrastructure. jsonpath aside, I think this is going to pay dividends\nin many ways for many years to come. It's something that we've needed\nfor a really long time, and I'm very happy that we're moving forward\nwith it.\n\nThanks so much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 15 Dec 2022 09:12:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Tom, I just want to extend huge thanks to you for working on this\n> infrastructure.\n\nThanks. I agree it's an important bit of work.\n\nI'm going to step back from this for now and get on with other work,\nbut before that I thought there was one more input function I should\nlook at: xml_in, because xml.c is such a hairy can of worms. It\nturns out to be not too bad, given our design principle that only\n\"bad input\" errors should be reported softly. 
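That dividing line — malformed input is reported softly, while internal or resource failures still throw — can be illustrated with a generic sketch; SoftErrCtx, soft_error, and parse_nonneg are invented names, not PostgreSQL's actual errsave/ereturn machinery:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct SoftErrCtx
{
    bool hit;
    char msg[64];
} SoftErrCtx;

/* Soft report: record the problem and let the caller decide what to do. */
static bool
soft_error(SoftErrCtx *ctx, const char *msg)
{
    ctx->hit = true;
    snprintf(ctx->msg, sizeof(ctx->msg), "%s", msg);
    return false;
}

/*
 * Parse a non-negative decimal value.  Every "bad input" case goes
 * through soft_error(); an out-of-memory or other internal failure in a
 * real input function would still be raised as a hard error.
 */
static bool
parse_nonneg(const char *s, long *out, SoftErrCtx *ctx)
{
    char *end;

    if (*s == '\0')
        return soft_error(ctx, "empty input");
    *out = strtol(s, &end, 10);
    if (*end != '\0')
        return soft_error(ctx, "trailing junk");
    if (*out < 0)
        return soft_error(ctx, "value must be non-negative");
    return true;
}
```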
xml_parse() now has\ntwo different ways of reporting errors depending on whether they're\nhard or soft, but it didn't take an undue amount of refactoring to\nmake that work.\n\nWhile fixing that, my attention was drawn to wellformed_xml(),\nwhose error handling is unbelievably horrid: it traps any longjmp\nwhatsoever (query cancel, for instance) and reports it as ill-formed XML.\n0002 attached makes use of this new code to get rid of the need for any\nPG_TRY there at all; instead, soft errors result in a \"false\" return\nbut hard errors are allowed to propagate. xml_is_document was much more\ncareful, but we can change it the same way to save code and cycles.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 15 Dec 2022 17:18:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I wrote:\n> I'm going to step back from this for now and get on with other work,\n> but before that I thought there was one more input function I should\n> look at: xml_in, because xml.c is such a hairy can of worms.\n\nPushed that. For the record, my list of input functions still needing\nattention stands at\n\nCore:\n\njsonpath_in\nregclassin\nregcollationin\nregconfigin\nregdictionaryin\nregnamespacein\nregoperatorin\nregoperin\nregprocedurein\nregprocin\nregrolein\nregtypein\ntsqueryin\ntsvectorin\n\nContrib:\n\nhstore:\nhstore_in\nintarray:\nbqarr_in\nisn:\nean13_in\nisbn_in\nismn_in\nissn_in\nupc_in\nltree:\nltree_in\nlquery_in\nltxtq_in\nseg:\nseg_in\n\nThe reg* functions probably need a unified plan as to how far\ndown we want to push non-error behavior. 
The rest of these\nI think just require turning the crank along the same lines\nas in functions already dealt with.\n\nWhile it'd be good to get all of these done before v16 feature\nfreeze, I can't see that any of them represent blockers for\nbuilding features based on soft input error handling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Dec 2022 13:31:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-15 Th 09:12, Robert Haas wrote:\n> On Wed, Dec 14, 2022 at 6:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've gone through these now and revised/pushed most of them.\n> Tom, I just want to extend huge thanks to you for working on this\n> infrastructure. jsonpath aside, I think this is going to pay dividends\n> in many ways for many years to come. It's something that we've needed\n> for a really long time, and I'm very happy that we're moving forward\n> with it.\n>\n> Thanks so much.\n>\n\nRobert beat me to it, but I will heartily second this. Many thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 17 Dec 2022 16:59:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-14 We 17:37, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Thanks, I have been looking at jsonpath, but I'm not quite sure how to\n>> get the escontext argument to the yyerror calls in jsonpath_scan.l. 
Maybe\n>> I need to specify a lex-param setting?\n> You want a parse-param option in jsonpath_gram.y, I think; adding that\n> will persuade Bison to change the signatures of relevant functions.\n> Compare the mods I made in contrib/cube in ccff2d20e.\n>\n> \t\t\t\n\n\nYeah, I started there, but it's substantially more complex - unlike cube\nthe jsonpath scanner calls the error routines as well as the parser.\n\n\nAnyway, here's a patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 18 Dec 2022 09:42:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 16, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The reg* functions probably need a unified plan as to how far\n> down we want to push non-error behavior. The rest of these\n> I think just require turning the crank along the same lines\n> as in functions already dealt with.\n\nI would be in favor of an aggressive approach. For example, let's look\nat regclassin(). It calls oidin(), stringToQualifiedNameList(),\nmakeRangeVarFromNameList(), and RangeVarGetRelidExtended(). Basically,\noidin() could fail if the input, known to be all digits, is out of\nrange; stringToQualifiedNameList() could fail due to mismatched\ndelimiters or improperly-separated names; makeRangeVarFromNameList()\ndoesn't want to have more than three name components\n(db.schema.relation); and RangeVarGetRelidExtended() doesn't like\ncross-database references or non-existent relations.\n\nNow, one option here would be to distinguish between something that\ncould be valid in some database but isn't in this one, like a\nnon-existent relation name, and one that couldn't ever work anywhere,\nlike a relation name with four parts or bad quoting. You could decide\nthat the former kind of error will be reported softly but the latter\nis hard error. 
But I think that is presuming that we know how users\nwill want to use this functionality, and I don't think we do. I also\nthink that it will be confusing to users. Finally, I think it's\ndifferent from what we do for other data types. You could equally well\nargue that, for int4in, we ought to treat '9999999999' and 'potato'\ndifferently, one a hard error and the other soft. I think it's hard to\npuzzle out a decision that makes any sense there, and I don't think\nthis case is much different. I don't think it's too hard to mentally\nseparate errors about the validity of the input from, say, out of\nmemory errors -- but one distinguishing between one kind of input\nvalidity check and another seems like a muddle.\n\nIt also doesn't seem too bad from an implementation point of view to\ntry to cover all the cases. The stickiest case looks to be\nRangeVarGetRelidExtended() and we might need to give a bit of thought\nto how to handle that one. The others don't seem like a big issue, and\noidin() is already done.\n\n> While it'd be good to get all of these done before v16 feature\n> freeze, I can't see that any of them represent blockers for\n> building features based on soft input error handling.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 11:34:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 16, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The reg* functions probably need a unified plan as to how far\n>> down we want to push non-error behavior. The rest of these\n>> I think just require turning the crank along the same lines\n>> as in functions already dealt with.\n\n> I would be in favor of an aggressive approach.\n\nI agree that anything based on implementation concerns is going\nto look pretty unprincipled to end users. 
However ...\n\n> It also doesn't seem too bad from an implementation point of view to\n> try to cover all the cases.\n\n... I guess you didn't read my remarks upthread about regtypein.\nI do not want to try to make gram.y+scan.l non-error-throwing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Dec 2022 11:44:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 19, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It also doesn't seem too bad from an implementation point of view to\n> > try to cover all the cases.\n>\n> ... I guess you didn't read my remarks upthread about regtypein.\n> I do not want to try to make gram.y+scan.l non-error-throwing.\n\nHuh, for some reason I'm not seeing an email about that. Do you have a link?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 13:25:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 19, 2022 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... I guess you didn't read my remarks upthread about regtypein.\n>> I do not want to try to make gram.y+scan.l non-error-throwing.\n\n> Huh, for some reason I'm not seeing an email about that. Do you have a link?\n\nIn [1] I wrote\n\n>>> I'm a little concerned about the cost-benefit of fixing the reg* types.\n>>> The ones that accept type names actually use the core grammar to parse\n>>> those. 
Now, we probably could fix the grammar to be non-throwing, but\n>>> it'd be very invasive and I'm not sure about the performance impact.\n>>> It might be best to content ourselves with soft reporting of lookup\n>>> failures, as opposed to syntax problems.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/1863335.1670783397%40sss.pgh.pa.us\n\n\n", "msg_date": "Mon, 19 Dec 2022 16:27:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Mon, Dec 19, 2022 at 4:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In [1] I wrote\n>\n> >>> I'm a little concerned about the cost-benefit of fixing the reg* types.\n> >>> The ones that accept type names actually use the core grammar to parse\n> >>> those. Now, we probably could fix the grammar to be non-throwing, but\n> >>> it'd be very invasive and I'm not sure about the performance impact.\n> >>> It might be best to content ourselves with soft reporting of lookup\n> >>> failures, as opposed to syntax problems.\n\nAh right. I agree that invading the main grammar doesn't seem\nterribly appealing. Setting regtypein aside could be a sensible\nchoice, then. Another option might be to have some way of parsing type\nnames outside of the main grammar, which would be more work and would\nrequire keeping things in sync, but perhaps it would end up being less\nugly....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 17:48:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-18 Su 09:42, Andrew Dunstan wrote:\n> On 2022-12-14 We 17:37, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Thanks, I have been looking at jsonpath, but I'm not quite sure how to\n>>> get the escontext argument to the yyerror calls in jsonath_scan.l. 
Maybe\n>>> I need to specify a lex-param setting?\n>> You want a parse-param option in jsonpath_gram.y, I think; adding that\n>> will persuade Bison to change the signatures of relevant functions.\n>> Compare the mods I made in contrib/cube in ccff2d20e.\n>>\n>> \t\t\t\n\n\nYeah, I started there, but it's substantially more complex - unlike cube\nthe jsonpath scanner calls the error routines as well as the parser.\n\n\nAnyway, here's a patch.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 18 Dec 2022 09:42:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 16, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The reg* functions probably need a unified plan as to how far\n> down we want to push non-error behavior. The rest of these\n> I think just require turning the crank along the same lines\n> as in functions already dealt with.\n\nI would be in favor of an aggressive approach. For example, let's look\nat regclassin(). It calls oidin(), stringToQualifiedNameList(),\nmakeRangeVarFromNameList(), and RangeVarGetRelidExtended(). Basically,\noidin() could fail if the input, known to be all digits, is out of\nrange; stringToQualifiedNameList() could fail due to mismatched\ndelimiters or improperly-separated names; makeRangeVarFromNameList()\ndoesn't want to have more than three name components\n(db.schema.relation); and RangeVarGetRelidExtended() doesn't like\ncross-database references or non-existent relations.\n\nNow, one option here would be to distinguish between something that\ncould be valid in some database but isn't in this one, like a\nnon-existent relation name, and one that couldn't ever work anywhere,\nlike a relation name with four parts or bad quoting. You could decide\nthat the former kind of error will be reported softly but the latter\nis hard error. 
The seg patch passes\nan eyeball check, with one minor nit: in seg_atof,\n\n+\t*result = float4in_internal(value, NULL, \"real\", value, escontext);\n\ndon't we want to use \"seg\" as the type_name?\n\nEven more nitpicky, in\n\n+seg_yyerror(SEG *result, struct Node *escontext, const char *message)\n {\n+\tif (SOFT_ERROR_OCCURRED(escontext))\n+\t\treturn;\n\nI'd be inclined to add some explanation, say\n\n+seg_yyerror(SEG *result, struct Node *escontext, const char *message)\n {\n+\t/* if we already reported an error, don't overwrite it */\n+\tif (SOFT_ERROR_OCCURRED(escontext))\n+\t\treturn;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Dec 2022 01:10:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Yeah, I started there, but it's substantially more complex - unlike cube\n> the jsonpath scanner calls the error routines as well as the parser.\n> Anyway, here's a patch.\n\nI looked through this and it seems generally OK. A minor nitpick is\nthat we usually write \"(Datum) 0\" not \"(Datum) NULL\" for dont-care Datum\nvalues. A slightly bigger issue is that makeItemLikeRegex still allows\nan error to be thrown from RE_compile_and_cache if a bogus regex is\npresented. But that could be dealt with later.\n\n(I wonder why this is using RE_compile_and_cache at all, really,\nrather than some other API. 
There doesn't seem to be value in\nforcing the regex into the cache at this point.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Dec 2022 11:44:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-22 Th 01:10, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> And here's another for contrib/seg\n>> I'm planning to commit these two in the next day or so.\n> I didn't look at the jsonpath one yet. The seg patch passes\n> an eyeball check, with one minor nit: in seg_atof,\n>\n> +\t*result = float4in_internal(value, NULL, \"real\", value, escontext);\n>\n> don't we want to use \"seg\" as the type_name?\n>\n> Even more nitpicky, in\n>\n> +seg_yyerror(SEG *result, struct Node *escontext, const char *message)\n> {\n> +\tif (SOFT_ERROR_OCCURRED(escontext))\n> +\t\treturn;\n>\n> I'd be inclined to add some explanation, say\n>\n> +seg_yyerror(SEG *result, struct Node *escontext, const char *message)\n> {\n> +\t/* if we already reported an error, don't overwrite it */\n> +\tif (SOFT_ERROR_OCCURRED(escontext))\n> +\t\treturn;\n>\n> \t\t\t\n\n\nThanks for the review.\n\n\nFixed both of these and pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 23 Dec 2022 09:52:12 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-22 Th 11:44, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Yeah, I started there, but it's substantially more complex - unlike cube\n>> the jsonpath scanner calls the error routines as well as the parser.\n>> Anyway, here's a patch.\n> I looked through this and it seems generally OK. A minor nitpick is\n> that we usually write \"(Datum) 0\" not \"(Datum) NULL\" for dont-care Datum\n> values. 
\n\n\nFixed in the new version attached.\n\n\n> A slightly bigger issue is that makeItemLikeRegex still allows\n> an error to be thrown from RE_compile_and_cache if a bogus regex is\n> presented. But that could be dealt with later.\n\n\nI'd rather fix it now while we're paying attention.\n\n\n>\n> (I wonder why this is using RE_compile_and_cache at all, really,\n> rather than some other API. There doesn't seem to be value in\n> forcing the regex into the cache at this point.)\n>\n> \t\t\t\n\n\nI agree. The attached uses pg_regcomp instead. I had a lift a couple of\nlines from regexp.c, but not too many.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 23 Dec 2022 12:19:44 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-22 Th 11:44, Tom Lane wrote:\n>> (I wonder why this is using RE_compile_and_cache at all, really,\n>> rather than some other API. There doesn't seem to be value in\n>> forcing the regex into the cache at this point.)\n\n> I agree. The attached uses pg_regcomp instead. I had a lift a couple of\n> lines from regexp.c, but not too many.\n\nLGTM. No further comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Dec 2022 13:53:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 23, 2022 at 9:20 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-22 Th 11:44, Tom Lane wrote:\n> > Andrew Dunstan <andrew@dunslane.net> writes:\n> >> Yeah, I started there, but it's substantially more complex - unlike cube\n> >> the jsonpath scanner calls the error routines as well as the parser.\n> >> Anyway, here's a patch.\n> > I looked through this and it seems generally OK. 
A minor nitpick is\n> > that we usually write \"(Datum) 0\" not \"(Datum) NULL\" for dont-care Datum\n> > values.\n>\n>\n> Fixed in the new version attached.\n>\n>\n> > A slightly bigger issue is that makeItemLikeRegex still allows\n> > an error to be thrown from RE_compile_and_cache if a bogus regex is\n> > presented. But that could be dealt with later.\n>\n>\n> I'd rather fix it now while we're paying attention.\n>\n>\n> >\n> > (I wonder why this is using RE_compile_and_cache at all, really,\n> > rather than some other API. There doesn't seem to be value in\n> > forcing the regex into the cache at this point.)\n> >\n> >\n>\n>\n> I agree. The attached uses pg_regcomp instead. I had a lift a couple of\n> lines from regexp.c, but not too many.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nHi,\nIn makeItemLikeRegex :\n\n+                       /* See regexp.c for explanation */\n+                       CHECK_FOR_INTERRUPTS();\n+                       pg_regerror(re_result, &re_tmp, errMsg,\nsizeof(errMsg));\n+                       ereturn(escontext, false,\n\nSince an error is returned, I wonder if the `CHECK_FOR_INTERRUPTS` call is\nstill necessary.\n\n Cheers", "msg_date": "Fri, 23 Dec 2022 13:19:07 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Ted Yu <yuzhihong@gmail.com> writes:\n> In makeItemLikeRegex :\n\n> +                       /* See regexp.c for explanation */\n> +                       CHECK_FOR_INTERRUPTS();\n> +                       pg_regerror(re_result, &re_tmp, errMsg,\n> sizeof(errMsg));\n> +                       ereturn(escontext, false,\n\n> Since an error is returned, I wonder if the `CHECK_FOR_INTERRUPTS` call is\n> still necessary.\n\nYes, it is. 
We don't want a query-cancel transformed into a soft error.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Dec 2022 16:22:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ted Yu <yuzhihong@gmail.com> writes:\n> > In makeItemLikeRegex :\n>\n> > +                       /* See regexp.c for explanation */\n> > +                       CHECK_FOR_INTERRUPTS();\n> > +                       pg_regerror(re_result, &re_tmp, errMsg,\n> > sizeof(errMsg));\n> > +                       ereturn(escontext, false,\n>\n> > Since an error is returned, I wonder if the `CHECK_FOR_INTERRUPTS` call\n> is\n> > still necessary.\n>\n> Yes, it is. We don't want a query-cancel transformed into a soft error.\n>\n> regards, tom lane\n>\nHi,\n`ereturn(escontext` calls appear in multiple places in the patch.\nWhat about other callsites (w.r.t. checking interrupt) ?\n\nCheers", "msg_date": "Fri, 23 Dec 2022 13:25:42 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Ted Yu <yuzhihong@gmail.com> writes:\n> On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ted Yu <yuzhihong@gmail.com> writes:\n>>> +                       /* See regexp.c for explanation */\n>>> +                       CHECK_FOR_INTERRUPTS();\n\n>> Yes, it is. We don't want a query-cancel transformed into a soft error.\n\n> `ereturn(escontext` calls appear in multiple places in the patch.\n> What about other callsites (w.r.t. checking interrupt) ?\n\nWhat about them? The reason this one is special is that backend/regexp\nmight return a failure code that's specifically \"I gave up because\nthere's a query cancel pending\". We don't want to report that as a soft\nerror. It's true that we might cancel the query for real a bit later on\neven if this check weren't here, but that doesn't mean it's okay to go\ndown the soft error path and hope that there'll be a CHECK_FOR_INTERRUPTS\nsometime before there's any visible evidence that we did the wrong thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Dec 2022 16:38:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ted Yu <yuzhihong@gmail.com> writes:\n> > In makeItemLikeRegex :\n>\n> > +                       /* See regexp.c for explanation */\n> > +                       CHECK_FOR_INTERRUPTS();\n> > +                       pg_regerror(re_result, &re_tmp, errMsg,\n> > sizeof(errMsg));\n> > +                       ereturn(escontext, false,\n>\n> > Since an error is returned, I wonder if the `CHECK_FOR_INTERRUPTS` call\n> is\n> > still necessary.\n>\n> Yes, it is. 
We don't want a query-cancel transformed into a soft error.\n>\n> regards, tom lane\n>\nHi,\nFor this case (`invalid regular expression`), the potential user\ninterruption is one reason for stopping execution.\nI feel surfacing user interruption somehow masks the underlying error.\n\nThe same regex, without user interruption, would exhibit an `invalid\nregular expression` error.\nI think it would be better to surface the error.\n\nCheers", "msg_date": "Sat, 24 Dec 2022 01:51:10 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-24 Sa 04:51, Ted Yu wrote:\n>\n>\n> On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ted Yu <yuzhihong@gmail.com> writes:\n> > In makeItemLikeRegex :\n>\n> > +                       /* See regexp.c for explanation */\n> > +                       CHECK_FOR_INTERRUPTS();\n> > +                       pg_regerror(re_result, &re_tmp, errMsg,\n> > sizeof(errMsg));\n> > +                       
ereturn(escontext, false,\n>\n> > Since an error is returned, I wonder if the\n> `CHECK_FOR_INTERRUPTS` call is\n> > still necessary.\n>\n> Yes, it is.  We don't want a query-cancel transformed into a soft\n> error.\n>\n>                         regards, tom lane\n>\n> Hi,\n> For this case (`invalid regular expression`), the potential user\n> interruption is one reason for stopping execution.\n> I feel surfacing user interruption somehow masks the underlying error.\n>\n> The same regex, without user interruption, would exhibit an `invalid\n> regular expression` error.\n> I think it would be better to surface the error.\n>\n>\n\nAll that this patch is doing is replacing a call to\nRE_compile_and_cache, which calls CHECK_FOR_INTERRUPTS, with similar\ncode, which gives us the opportunity to call ereturn instead of ereport.\nNote that where escontext is NULL (the common case), ereturn functions\nidentically to ereport. So unless you want to argue that the logic in\nRE_compile_and_cache is wrong I don't see what we're arguing about. If\ninstead I had altered the API of RE_compile_and_cache to include an\nescontext parameter we wouldn't be having this argument at all. The only\nreason I didn't do that was the point Tom quite properly raised about\nwhy we're doing any caching here anyway.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Dec 2022 07:38:44 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-23 Fr 13:53, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-12-22 Th 11:44, Tom Lane wrote:\n>>> (I wonder why this is using RE_compile_and_cache at all, really,\n>>> rather than some other API. There doesn't seem to be value in\n>>> forcing the regex into the cache at this point.)\n>> I agree. The attached uses pg_regcomp instead. 
I had a lift a couple of\n>> lines from regexp.c, but not too many.\n> LGTM. No further comments.\n>\n> \t\t\t\n\n\nAs I was giving this a final polish I noticed this in jspConvertRegexFlags:\n\n\n    /*\n     * We'll never need sub-match details at execution.  While\n     * RE_compile_and_execute would set this flag anyway, force it on\nhere to\n     * ensure that the regex cache entries created by makeItemLikeRegex are\n     * useful.\n     */\n    cflags |= REG_NOSUB;\n\n\nClearly the comment would no longer be true. I guess I should just\nremove this?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Dec 2022 07:51:25 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Sat, Dec 24, 2022 at 4:38 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-12-24 Sa 04:51, Ted Yu wrote:\n> >\n> >\n> > On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Ted Yu <yuzhihong@gmail.com> writes:\n> > > In makeItemLikeRegex :\n> >\n> > > + /* See regexp.c for explanation */\n> > > + CHECK_FOR_INTERRUPTS();\n> > > + pg_regerror(re_result, &re_tmp, errMsg,\n> > > sizeof(errMsg));\n> > > + ereturn(escontext, false,\n> >\n> > > Since an error is returned, I wonder if the\n> > `CHECK_FOR_INTERRUPTS` call is\n> > > still necessary.\n> >\n> > Yes, it is. 
We don't want a query-cancel transformed into a soft\n> > error.\n> >\n> > regards, tom lane\n> >\n> > Hi,\n> > For this case (`invalid regular expression`), the potential user\n> > interruption is one reason for stopping execution.\n> > I feel surfacing user interruption somehow masks the underlying error.\n> >\n> > The same regex, without user interruption, would exhibit an `invalid\n> > regular expression` error.\n> > I think it would be better to surface the error.\n> >\n> >\n>\n> All that this patch is doing is replacing a call to\n> RE_compile_and_cache, which calls CHECK_FOR_INTERRUPTS, with similar\n> code, which gives us the opportunity to call ereturn instead of ereport.\n> Note that where escontext is NULL (the common case), ereturn functions\n> identically to ereport. So unless you want to argue that the logic in\n> RE_compile_and_cache is wrong I don't see what we're arguing about. If\n> instead I had altered the API of RE_compile_and_cache to include an\n> escontext parameter we wouldn't be having this argument at all. 
The only\n> reason I didn't do that was the point Tom quite properly raised about\n> why we're doing any caching here anyway.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\nAndrew:\n\nThanks for the response.", "msg_date": "Sat, 24 Dec 2022 06:28:45 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> As I was giving this a final polish I noticed this in jspConvertRegexFlags:\n\n>     /*\n>      * We'll never need sub-match details at execution.  While\n>      * RE_compile_and_execute would set this flag anyway, force it on here to\n>      * ensure that the regex cache entries created by makeItemLikeRegex are\n>      * useful.\n>      */\n>     cflags |= REG_NOSUB;\n\n> Clearly the comment would no longer be true. I guess I should just\n> remove this?\n\nYeah, we can just drop that I guess. I'm slightly worried that we might\nneed it again after some future refactoring; but it's not really worth\ndevising a re-worded comment to justify keeping it.\n\nAlso, I realized that I failed in my reviewerly duty by not noticing\nthat you'd forgotten to pg_regfree the regex after successful\ncompilation. Running something like this exposes the memory leak\nvery quickly:\n\nselect pg_input_is_valid('$ ? (@ like_regex \"pattern\" flag \"smixq\")', 'jsonpath')\n from generate_series(1,10000000);\n\nThe attached delta patch takes care of it. (Per comment at pg_regcomp,\nwe don't need this after a failure return.)\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 24 Dec 2022 10:42:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Ted Yu <yuzhihong@gmail.com> writes:\n> On Fri, Dec 23, 2022 at 1:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yes, it is. 
We don't want a query-cancel transformed into a soft error.\n\n> The same regex, without user interruption, would exhibit an `invalid\n> regular expression` error.\n\nOn what grounds do you claim that? The timing of arrival of the SIGINT\nis basically chance --- it might happen while we're inside backend/regex,\nor not. I mean, sure you could claim that a bad regex might run a long\ntime and thereby be more likely to cause the user to issue a query\ncancel, but that's a stretched line of reasoning.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 24 Dec 2022 10:48:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-24 Sa 10:42, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> As I was giving this a final polish I noticed this in jspConvertRegexFlags:\n>>     /*\n>>      * We'll never need sub-match details at execution.  While\n>>      * RE_compile_and_execute would set this flag anyway, force it on here to\n>>      * ensure that the regex cache entries created by makeItemLikeRegex are\n>>      * useful.\n>>      */\n>>     cflags |= REG_NOSUB;\n>> Clearly the comment would no longer be true. I guess I should just\n>> remove this?\n> Yeah, we can just drop that I guess. I'm slightly worried that we might\n> need it again after some future refactoring; but it's not really worth\n> devising a re-worded comment to justify keeping it.\n>\n> Also, I realized that I failed in my reviewerly duty by not noticing\n> that you'd forgotten to pg_regfree the regex after successful\n> compilation. Running something like this exposes the memory leak\n> very quickly:\n>\n> select pg_input_is_valid('$ ? (@ like_regex \"pattern\" flag \"smixq\")', 'jsonpath')\n> from generate_series(1,10000000);\n>\n> The attached delta patch takes care of it. 
(Per comment at pg_regcomp,\n> we don't need this after a failure return.)\n>\n> \t\t\t\n\n\nThanks, pushed with those changes.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Dec 2022 15:23:38 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 16, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The reg* functions probably need a unified plan as to how far\n>> down we want to push non-error behavior.\n\n> I would be in favor of an aggressive approach.\n\nHere's a proposed patch for converting regprocin and friends\nto soft error reporting. I'll say at the outset that it's an\nengineering compromise, and it may be worth going further in\nfuture. But I doubt it's worth doing more than this for v16,\nbecause the next steps would be pretty invasive.\n\nI've converted all the errors thrown directly within regproc.c,\nand also converted parseTypeString, typeStringToTypeName, and\nstringToQualifiedNameList to report their own errors softly.\nThis affected some outside callers, but not so many of them\nthat I think it's worth inventing compatibility wrappers.\n\nI dealt with lookup failures by just changing the input functions\nto call the respective lookup functions with missing_ok = true,\nand then throw their own error softly on failure.\n\nAlso, I've changed to_regproc() and friends to return NULL\nin exactly the same cases that are now soft errors for the\ninput functions. Previously they were a bit inconsistent\nabout what triggered hard errors vs. returning NULL.\n(Perhaps we should go further than this, and convert all these\nfunctions to just be DirectInputFunctionCallSafe wrappers\naround the corresponding input functions? That would save\nsome duplicative code, but I've not done it here.)\n\nWhat's not fixed here:\n\n1. 
As previously discussed, parse errors in type names are\nthrown by the main grammar, so getting those to not be\nhard errors seems like too big a lift for today.\n\n2. Errors about invalid type modifiers (reported by\ntypenameTypeMod or type-specific typmodin routines) are not\ntrapped either. Fixing this would require extending the\nsoft-error conventions to typmodin routines, which maybe will\nbe worth doing someday but it seems pretty far down the\npriority list. Specifying a typmod is surely not main-line\nusage for regtypein.\n\n3. Throwing our own error has the demerit that it might be\ndifferent from what the underlying lookup function would have\nreported. This is visible in some changes in existing\nregression test cases, such as\n\n-ERROR: schema \"ng_catalog\" does not exist\n+ERROR: relation \"ng_catalog.pg_class\" does not exist\n\nThis isn't wrong, exactly, but the loss of specificity is\na bit annoying.\n\n4. This still fails to trap errors about \"too many dotted names\"\nand \"cross-database references are not implemented\", which are\nthrown in DeconstructQualifiedName, LookupTypeName,\nRangeVarGetRelid, and maybe some other places.\n\n5. We also don't trap errors about \"the schema exists but\nyou don't have USAGE permission to do a lookup in it\",\nbecause LookupExplicitNamespace still throws that even\nwhen passed missing_ok = true.\n\nThe obvious way to fix #3,#4,#5 is to change pretty much all\nof the catalog lookup infrastructure to deal in escontext\narguments instead of \"missing_ok\" booleans. That might be\nworth doing --- it'd have benefits beyond the immediate\nproblem, I think --- but I feel it's a bigger lift than we\nwant to undertake for v16. 
It'd be better to spend the time\nwe have left for v16 on building features that use soft error\nreporting than on refining corner cases in the reg* functions.\n\nSo I think we should stop more or less here, possibly after\nchanging the to_regfoo functions to be simple wrappers\naround the soft input functions.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 25 Dec 2022 12:13:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I got annoyed by the fact that types cid, xid, xid8 don't throw\nerror even for obvious garbage, because they just believe the\nresult of strtoul or strtoull without any checking. That was\nprobably up to project standards when cidin and xidin were\nwritten; but surely it's not anymore, especially when we can\npiggyback on work already done for type oid.\n\nAnybody have an objection to the following? One note is that\nbecause we already had test cases checking that xid would\naccept hex input, I made the common subroutines use \"0\" not\n\"10\" for strtoul's last argument, meaning that oid will accept\nhex now too.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 25 Dec 2022 15:38:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-25 Su 12:13, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Fri, Dec 16, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> The reg* functions probably need a unified plan as to how far\n>>> down we want to push non-error behavior.\n>> I would be in favor of an aggressive approach.\n> Here's a proposed patch for converting regprocin and friends\n> to soft error reporting. I'll say at the outset that it's an\n> engineering compromise, and it may be worth going further in\n> future. 
But I doubt it's worth doing more than this for v16,\n> because the next steps would be pretty invasive.\n\n\nIt's a judgement call, but I'm not too fussed about stopping here for\nv16. I see the reg* items as probably the lowest priority to fix.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 26 Dec 2022 08:59:08 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Here's a proposed patch for making tsvectorin and tsqueryin\nreport errors softly. We have to take the changes down a\ncouple of levels of subroutines, but it's not hugely difficult.\n\nA couple of points worthy of comment:\n\n* To reduce API changes, I made the functions in\ntsvector_parser.c and tsquery.c pass around the escontext pointer\nin TSVectorParseState and TSQueryParserState respectively.\nThis is a little duplicative, but since those structs are private\nwithin those files, there's no easy way to share the same\npointer except by adding it as a new parameter to all those\nfunctions. This also means that if any of the outside callers\nof parse_tsquery (in to_tsany.c) wanted to do soft error handling\nand wanted their custom PushFunctions to be able to report such\nerrors, they'd need to pass the escontext via their \"opaque\"\npassthrough structs, making for yet a third copy. Still,\nI judged adding an extra parameter to dozens of functions wasn't\na better way.\n\n* There are two places in tsquery parsing that emit nuisance\nNOTICEs about empty queries. I chose to suppress those when\nsoft error handling has been requested. Maybe we should rethink\nwhether we want them at all?\n\nWith the other patches I've posted recently, this covers all\nof the core datatype input functions. 
There are still half\na dozen to tackle in contrib.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 26 Dec 2022 12:47:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-26 Mo 12:47, Tom Lane wrote:\n> Here's a proposed patch for making tsvectorin and tsqueryin\n> report errors softly. We have to take the changes down a\n> couple of levels of subroutines, but it's not hugely difficult.\n\n\nGreat!\n\n\n>\n> With the other patches I've posted recently, this covers all\n> of the core datatype input functions. There are still half\n> a dozen to tackle in contrib.\n>\n> \t\t\t\n\n\nYeah, I'm currently looking at those in ltree.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 26 Dec 2022 14:12:06 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "I wrote:\n> (Perhaps we should go further than this, and convert all these\n> functions to just be DirectInputFunctionCallSafe wrappers\n> around the corresponding input functions? That would save\n> some duplicative code, but I've not done it here.)\n\nI looked closer at that idea, and realized that it would do more than\njust save some code: it'd cause the to_regfoo functions to accept\nnumeric OIDs, as they did not before (and are documented not to).\nIt is unclear to me whether that inconsistency with the input\nfunctions is really desirable or not --- but I don't offhand see a\ngood argument for it. If we change this though, it should probably\nhappen in a separate commit. 
Accordingly, here's a delta patch\ndoing that.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 26 Dec 2022 18:00:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On 2022-12-26 Mo 14:12, Andrew Dunstan wrote:\n> On 2022-12-26 Mo 12:47, Tom Lane wrote:\n>> Here's a proposed patch for making tsvectorin and tsqueryin\n>> report errors softly. We have to take the changes down a\n>> couple of levels of subroutines, but it's not hugely difficult.\n>\n> Great!\n>\n>\n>> With the other patches I've posted recently, this covers all\n>> of the core datatype input functions. There are still half\n>> a dozen to tackle in contrib.\n>>\n>> \t\t\t\n>\n> Yeah, I'm currently looking at those in ltree.\n>\n>\n\nHere's a patch that covers the ltree and intarray contrib modules. I\nthink that would leave just hstore to be done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 27 Dec 2022 08:31:01 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-26 Mo 18:00, Tom Lane wrote:\n> I wrote:\n>> (Perhaps we should go further than this, and convert all these\n>> functions to just be DirectInputFunctionCallSafe wrappers\n>> around the corresponding input functions? That would save\n>> some duplicative code, but I've not done it here.)\n> I looked closer at that idea, and realized that it would do more than\n> just save some code: it'd cause the to_regfoo functions to accept\n> numeric OIDs, as they did not before (and are documented not to).\n> It is unclear to me whether that inconsistency with the input\n> functions is really desirable or not --- but I don't offhand see a\n> good argument for it. If we change this though, it should probably\n> happen in a separate commit. 
Accordingly, here's a delta patch\n> doing that.\n>\n> \t\t\t\n\n\n+1 for doing this. The code simplification is nice too.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 27 Dec 2022 08:36:15 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Here's a patch that covers the ltree and intarray contrib modules.\n\nI would probably have done this a little differently --- I think\nthe added \"res\" parameters aren't really necessary for most of\nthese. But it's not worth arguing over.\n\n> I think that would leave just hstore to be done.\n\nYeah, that matches my scoreboard. Are you going to look at\nhstore, or do you want me to?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Dec 2022 12:47:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\n\n> On Dec 27, 2022, at 12:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Here's a patch that covers the ltree and intarray contrib modules.\n> \n> I would probably have done this a little differently --- I think\n> the added \"res\" parameters aren't really necessary for most of\n> these. But it's not worth arguing over.\n\nI’ll take another look \n\n\n> \n>> I think that would leave just hstore to be done.\n> \n> Yeah, that matches my scoreboard. Are you going to look at\n> hstore, or do you want me to?\n> \n> \n\nGo for it. 
\n\nCheers\n\nAndrew \n\n", "msg_date": "Tue, 27 Dec 2022 13:05:06 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On Dec 27, 2022, at 12:47 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I think that would leave just hstore to be done.\n\n>> Yeah, that matches my scoreboard. Are you going to look at\n>> hstore, or do you want me to?\n\n> Go for it. \n\nDone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Dec 2022 14:51:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Tue, Dec 27, 2022 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Here's a patch that covers the ltree and intarray contrib modules.\n>\n> I would probably have done this a little differently --- I think\n> the added \"res\" parameters aren't really necessary for most of\n> these. But it's not worth arguing over.\n>\n\nAlso, it would be good if we can pass \"escontext\" through the \"state\"\nargument of makepool() like commit 78212f210114 done for makepol() of\ntsquery.c. Attached patch is the updated version that does the same.\n\nRegards,\nAmul", "msg_date": "Wed, 28 Dec 2022 11:30:34 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "\nOn 2022-12-28 We 01:00, Amul Sul wrote:\n> On Tue, Dec 27, 2022 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Here's a patch that covers the ltree and intarray contrib modules.\n>> I would probably have done this a little differently --- I think\n>> the added \"res\" parameters aren't really necessary for most of\n>> these. 
But it's not worth arguing over.\n>>\n> Also, it would be good if we can pass \"escontext\" through the \"state\"\n> argument of makepool() like commit 78212f210114 done for makepol() of\n> tsquery.c. Attached patch is the updated version that does the same.\n>\n\n\nThanks, I have done both of these things. Looks like we're now done with\nthis task, thanks everybody.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 28 Dec 2022 10:04:07 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" }, { "msg_contents": "On Sun, Dec 25, 2022 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a proposed patch for converting regprocin and friends\n> to soft error reporting. I'll say at the outset that it's an\n> engineering compromise, and it may be worth going further in\n> future. But I doubt it's worth doing more than this for v16,\n> because the next steps would be pretty invasive.\n\nI don't know that I feel particularly good about converting some\nerrors to be reported softly and others not, especially since the\ndividing line around which things fall into which category is pretty\nmuch \"well, whatever seemed hard we didn't convert\". We could consider\nhanging it to report everything as a hard error until we can convert\neverything, but I'm not sure that's better.\n\nOn another note, I can't help noticing that all of these patches seem\nto have been committed without any documentation changes. Maybe that's\nbecause there's nothing user-visible that makes any use of these\nfeatures yet, but if that's true, then we probably ought to add\nsomething so that the changes are testable. 
And having done that we\nneed to explain to users what the behavior actually is: that input\nvalidation errors are trapped but other kinds of failures like out of\nmemory are not; that most core data types report all input validation\nerrors softly, and the exceptions; and that for non-core data types\nthe behavior depends on how the extension is coded. I think it's\nreally a mistake to suppose that users won't care about or don't need\nto know these kinds of details. In my experience, that's just not\ntrue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Jan 2023 13:16:40 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error-safe user functions" } ]
[ { "msg_contents": "Hi,\n\nSplitting out to a new thread. Started at\nhttps://www.postgresql.org/message-id/20220915051754.ccx35szetbvnjv7g%40awork3.anarazel.de\n\nOn 2022-09-14 22:17:54 -0700, Andres Freund wrote:\n> On 2022-09-15 01:10:16 -0400, Tom Lane wrote:\n> > I realize that there are people for whom other considerations outweigh\n> > that, but I don't think that we should install static libraries by\n> > default. Long ago it was pretty common for configure scripts to\n> > offer --enable-shared and --enable-static options ... should we\n> > resurrect that?\n>\n> It'd be easy enough. I don't really have an opinion on whether it's worth\n> having the options. I think most packaging systems have ways of not including\n> files even if $software installs them.\n\nA few questions, in case we want to do this:\n\n1) should this affect libraries we build only as static libraries, like\n pgport, pgcommon, pgfeutils?\n\n I assume there's some extensions that build binaries with pgxs, which then\n presumably need pgport, pgcommon.\n\n2) Would we want the option add it to autoconf and meson, or just meson?\n\n3) For meson, I'd be inclined to leave the static libraries in as build\n targets, but just not build and install them by default.\n\n4) Why are we installing the static libraries into libdir? Given that they're\n not versioned at all, it somehow seems pkglibdir would be more appropriate?\n\n Debian apparently patches postgres in an attempt to do so:\n https://salsa.debian.org/postgresql/postgresql/-/blob/15/debian/patches/libpgport-pkglibdir\n but I think it's not quite complete, because the libpq.pc doesn't know\n about the new location for the static libraries\n\n5) currently we build the constituents of libpq.a with -fPIC, but then propose\n to link with -lpgport -lpgcommon, rather than linking with the_shlib\n versions. 
That seems a bit bogus to me - either we care about providing an\n efficient non-PIC library (in which case libpq.a elements would need to be\n build separately), or we don't (in which case we should just link to\n *_shlib).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Oct 2022 15:42:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "installing static libraries (was building postgres with meson)" }, { "msg_contents": "On 04.10.22 00:42, Andres Freund wrote:\n>>> I realize that there are people for whom other considerations outweigh\n>>> that, but I don't think that we should install static libraries by\n>>> default. Long ago it was pretty common for configure scripts to\n>>> offer --enable-shared and --enable-static options ... should we\n>>> resurrect that?\n>>\n>> It'd be easy enough. I don't really have an opinion on whether it's worth\n>> having the options. I think most packaging systems have ways of not including\n>> files even if $software installs them.\n\nRight. I think there is enough work to stabilize and synchronize the \nnew build system. I don't really see a need to prioritize this.\n\n> A few questions, in case we want to do this:\n> \n> 1) should this affect libraries we build only as static libraries, like\n> pgport, pgcommon, pgfeutils?\n> \n> I assume there's some extensions that build binaries with pgxs, which then\n> presumably need pgport, pgcommon.\n\nI'm not familiar with cases like this and what their expectations would be.\n\n> 2) Would we want the option add it to autoconf and meson, or just meson?\n\nif at all, then both\n\n> 3) For meson, I'd be inclined to leave the static libraries in as build\n> targets, but just not build and install them by default.\n\nnot sure why\n\n> 4) Why are we installing the static libraries into libdir? Given that they're\n> not versioned at all, it somehow seems pkglibdir would be more appropriate?\n\nThat's the standard file system layout. 
I don't think we need to \neditorialize that.\n\n\n\n", "msg_date": "Wed, 5 Oct 2022 15:24:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: installing static libraries (was building postgres with meson)" } ]
[ { "msg_contents": "Hi,\n\nwhile working on installcheck support with meson, that currently running\ninstallcheck-world fails regularly with meson and occasionally with make.\n\nA way to quite reliably reproduce this with make is\n\nmake -s -j48 -C contrib/ USE_MODULE_DB=1 installcheck-adminpack-recurse installcheck-passwordcheck-recurse\n\nthat will fail with diffs like:\n\ndiff -du10 /home/andres/src/postgresql/contrib/passwordcheck/expected/passwordcheck.out /home/andres/build/postgres/dev-assert/vpath/contrib/passwordcheck/res>\n--- /home/andres/src/postgresql/contrib/passwordcheck/expected/passwordcheck.out 2022-10-03 15:56:57.900326662 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/contrib/passwordcheck/results/passwordcheck.out 2022-10-03 15:56:59.930329973 -0700\n@@ -1,19 +1,22 @@\n LOAD 'passwordcheck';\n CREATE USER regress_user1;\n -- ok\n ALTER USER regress_user1 PASSWORD 'a_nice_long_password';\n+ERROR: tuple concurrently deleted\n -- error: too short\n ALTER USER regress_user1 PASSWORD 'tooshrt';\n-ERROR: password is too short\n+ERROR: role \"regress_user1\" does not exist\n -- error: contains user name\n ALTER USER regress_user1 PASSWORD 'xyzregress_user1';\n-ERROR: password must not contain user name\n+ERROR: role \"regress_user1\" does not exist\n -- error: contains only letters\n\n LOAD 'passwordcheck';\n CREATE USER regress_user1;\n -- ok\n ALTER USER regress_user1 PASSWORD 'a_nice_long_password';\n+ERROR: tuple concurrently deleted\n -- error: too short\n ALTER USER regress_user1 PASSWORD 'tooshrt';\n-ERROR: password is too short\n+ERROR: role \"regress_user1\" does not exist\n -- error: contains user name\n\n\nThat's not surprising, given the common name of \"regress_user1\".\n\nThe attached patch fixes a number of instances of this issue. 
With it I got\nthrough ~5 iterations of installcheck-world on ac, and >30 iterations with\nmeson.\n\nThere's a few further roles that seem to pose some danger goign forward:\n\n./contrib/file_fdw/sql/file_fdw.sql:CREATE ROLE regress_no_priv_user LOGIN; -- has priv but no user mapping\n./contrib/postgres_fdw/sql/postgres_fdw.sql:CREATE ROLE regress_view_owner SUPERUSER;\n./contrib/postgres_fdw/sql/postgres_fdw.sql:CREATE ROLE regress_nosuper NOSUPERUSER;\n./contrib/passwordcheck/sql/passwordcheck.sql:CREATE USER regress_passwordcheck_user1;\n./contrib/citext/sql/create_index_acl.sql:CREATE ROLE regress_minimal;\n./src/test/modules/test_rls_hooks/sql/test_rls_hooks.sql:CREATE ROLE regress_r1;\n./src/test/modules/test_rls_hooks/sql/test_rls_hooks.sql:CREATE ROLE regress_s1;\n./src/test/modules/test_oat_hooks/sql/test_oat_hooks.sql:CREATE ROLE regress_role_joe;\n./src/test/modules/test_oat_hooks/sql/test_oat_hooks.sql:CREATE USER regress_test_user;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_testrol0 SUPERUSER LOGIN;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_testrolx SUPERUSER LOGIN;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_testrol2 SUPERUSER;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_testrol1 SUPERUSER LOGIN IN ROLE regress_testrol2;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_role_haspriv;\n./src/test/modules/unsafe_tests/sql/rolenames.sql:CREATE ROLE regress_role_nopriv;\n./src/test/modules/unsafe_tests/sql/guc_privs.sql:CREATE ROLE regress_admin SUPERUSER;\n./src/test/modules/test_ddl_deparse/sql/alter_function.sql:CREATE ROLE regress_alter_function_role;\n\n\nBTW, shouldn't src/test/modules/unsafe_tests use the PG_TEST_EXTRA mechanism\nsomehow? 
Seems not great to run it as part of installcheck-world, if we don't\nwant to run it as part of installcheck.\n\n\nA second issue I noticed is that advisory_lock.sql often fails, because the\npg_locks queries don't restrict to the current database. Patch attached.\n\nI haven't seen that with autoconf installcheck-world, presumably because of\nthis:\n\n# There are too many interdependencies between the subdirectories, so\n# don't attempt parallel make here.\n.NOTPARALLEL:\n\n\nWith those two patches applied, I got through 10 iterations of running all\nregress / isolation tests concurrently with meson without failures.\n\nI attached the meson patch as well, but just because I used it to to get to\nthese patches.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 3 Oct 2022 16:41:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "installcheck-world concurrency issues" }, { "msg_contents": "On Mon, Oct 03, 2022 at 04:41:11PM -0700, Andres Freund wrote:\n> There's a few further roles that seem to pose some danger goign forward:\n\nI have never seen that myself, but 0001 is a nice cleanup.\ngenerated.sql includes a user named \"regress_user11\". Perhaps that's\nworth renaming while on it?\n\n> BTW, shouldn't src/test/modules/unsafe_tests use the PG_TEST_EXTRA mechanism\n> somehow? Seems not great to run it as part of installcheck-world, if we don't\n> want to run it as part of installcheck.c\n\nIndeed.\n\n> A second issue I noticed is that advisory_lock.sql often fails, because the\n> pg_locks queries don't restrict to the current database. Patch attached.\n\nAs in prepared_xacts.sql or just advisory locks taken in an installed\ncluster? Or both?\n\n> I attached the meson patch as well, but just because I used it to to get to\n> these patches.\n\nI am still studying a lot of this area, but it seems like all the\nspots requiring a custom configuration (aka NO_INSTALLCHECK) are\ncovered. 
--setup running is working here with 0003.\n--\nMichael", "msg_date": "Tue, 4 Oct 2022 17:05:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "Hi,\n\nOn 2022-10-04 17:05:40 +0900, Michael Paquier wrote:\n> On Mon, Oct 03, 2022 at 04:41:11PM -0700, Andres Freund wrote:\n> > There's a few further roles that seem to pose some danger goign forward:\n> \n> I have never seen that myself, but 0001 is a nice cleanup.\n> generated.sql includes a user named \"regress_user11\". Perhaps that's\n> worth renaming while on it?\n\nI think regress_* without a \"namespace\" is what's src/test/regress uses, so\nthere's not really a need?\n\n\n> > A second issue I noticed is that advisory_lock.sql often fails, because the\n> > pg_locks queries don't restrict to the current database. Patch attached.\n> \n> As in prepared_xacts.sql or just advisory locks taken in an installed\n> cluster? Or both?\n\nThere's various isolation tests, including several in src/test/isolation, that\nuse advisory locks.\n\nprepared_xacts.sql shouldn't be an issue, because it's scheduled in a separate\ngroup.\n\n\n> > I attached the meson patch as well, but just because I used it to to get to\n> > these patches.\n> \n> I am still studying a lot of this area, but it seems like all the\n> spots requiring a custom configuration (aka NO_INSTALLCHECK) are\n> covered. --setup running is working here with 0003.\n\nThanks for checking.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Oct 2022 11:35:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "On 04.10.22 01:41, Andres Freund wrote:\n> BTW, shouldn't src/test/modules/unsafe_tests use the PG_TEST_EXTRA mechanism\n> somehow? 
Seems not great to run it as part of installcheck-world, if we don't\n> want to run it as part of installcheck.\n\nI think there are different levels and kinds of unsafeness. The ssl and \nkerberos tests start open server processes on your machine. The \nmodules/unsafe_tests just make a mess of your postgres instance. The \nlatter isn't a problem when run against a temp instance.\n\n\n\n", "msg_date": "Wed, 5 Oct 2022 08:16:37 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 08:16:37 +0200, Peter Eisentraut wrote:\n> On 04.10.22 01:41, Andres Freund wrote:\n> > BTW, shouldn't src/test/modules/unsafe_tests use the PG_TEST_EXTRA mechanism\n> > somehow? Seems not great to run it as part of installcheck-world, if we don't\n> > want to run it as part of installcheck.\n> \n> I think there are different levels and kinds of unsafeness. The ssl and\n> kerberos tests start open server processes on your machine. The\n> modules/unsafe_tests just make a mess of your postgres instance. The latter\n> isn't a problem when run against a temp instance.\n\nI agree - but I suspect our definition of danger is reversed. For me breaking\nan existing cluster is a lot more likely to incur \"real world\" danger than\nstarting a throway instance listening to tcp on localhost...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 10:20:27 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "On Tue, Oct 04, 2022 at 11:35:53AM -0700, Andres Freund wrote:\n> On 2022-10-04 17:05:40 +0900, Michael Paquier wrote:\n>> I am still studying a lot of this area, but it seems like all the\n>> spots requiring a custom configuration (aka NO_INSTALLCHECK) are\n>> covered. 
--setup running is working here with 0003.\n> \n> Thanks for checking.\n\nFor the archives' sake: this has been applied as of 6a20b04 and\nc3315a7.\n--\nMichael", "msg_date": "Thu, 6 Oct 2022 13:52:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "On Thu, Oct 06, 2022 at 01:52:46PM +0900, Michael Paquier wrote:\n> For the archives' sake: this has been applied as of 6a20b04 and\n> c3315a7.\n\nCorey (added in CC.), has noticed that the issue fixed by c3315a7 in\n16~ for advisory locks is not complicated to reach, leading to\nfailures in some of our automated internal stuff. A cherry-pick of\nc3315a7 works cleanly across 12~15. Would there be any objections if\nI were to backpatch this part down to 12?\n\nThe problems fixed by 6a20b04 have not really been an issue here,\nhence I'd rather let things be as they are for the conflicting role\nnames.\n--\nMichael", "msg_date": "Tue, 24 Sep 2024 11:24:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: installcheck-world concurrency issues" }, { "msg_contents": "On Tue, Sep 24, 2024 at 11:24:46AM +0900, Michael Paquier wrote:\n> Corey (added in CC.), has noticed that the issue fixed by c3315a7 in\n> 16~ for advisory locks is not complicated to reach, leading to\n> failures in some of our automated internal stuff. A cherry-pick of\n> c3315a7 works cleanly across 12~15. Would there be any objections if\n> I were to backpatch this part down to 12?\n> \n> The problems fixed by 6a20b04 have not really been an issue here,\n> hence I'd rather let things be as they are for the conflicting role\n> names.\n\nHearing nothing, done.\n--\nMichael", "msg_date": "Thu, 26 Sep 2024 13:47:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: installcheck-world concurrency issues" } ]
[ { "msg_contents": "On Thu, 29 Sept 2022 at 18:30, David Rowley <dgrowleyml(at)gmail(dot)com>\nwrote:\n> Does anyone have any opinions on this?\nHi,\n\nRevisiting my work on reducing memory consumption, I found this patch left\nout.\nI'm not sure I can help.\nBut basically I was able to write and read the block size, in the chunk.\nCould it be the case of writing and reading the context pointer in the same\nway?\nSure this leads to some performance loss, but would it make it possible to\nget the context pointer from the chunk?\nIn other words, would it be possible to save the context pointer and\ncompare it later in MemoryContextContains?\n\nregards,\nRanier Vilela", "msg_date": "Mon, 3 Oct 2022 21:35:03 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing the chunk header sizes on all memory context types" }, { "msg_contents": "On Tue, 4 Oct 2022 at 13:35, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Revisiting my work on reducing memory consumption, I found this patch left out.\n> I'm not sure I can help.\n> But basically I was able to write and read the block size, in the chunk.\n> Could it be the case of writing and reading the context pointer in the same way?\n> Sure this leads to some performance loss, but would it make it possible to get the context pointer from the chunk?\n> In other words, would it be possible to save the context pointer and compare it later in MemoryContextContains?\n\nI'm not really sure I understand the intention of the patch. The\nheader size for all our memory contexts was already reduced in\nc6e0fe1f2. The patch you sent seems to pre-date that commit.\n\nDavid\n\n\n", "msg_date": "Tue, 4 Oct 2022 21:35:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Reducing the chunk header sizes on all memory context types" }, { "msg_contents": "Em ter., 4 de out. 
de 2022 às 05:36, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 4 Oct 2022 at 13:35, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Revisiting my work on reducing memory consumption, I found this patch\n> left out.\n> > I'm not sure I can help.\n> > But basically I was able to write and read the block size, in the chunk.\n> > Could it be the case of writing and reading the context pointer in the\n> same way?\n> > Sure this leads to some performance loss, but would it make it possible\n> to get the context pointer from the chunk?\n> > In other words, would it be possible to save the context pointer and\n> compare it later in MemoryContextContains?\n>\n> I'm not really sure I understand the intention of the patch. The\n> header size for all our memory contexts was already reduced in\n> c6e0fe1f2. The patch you sent seems to pre-date that commit.\n>\nThere is zero intention to commit this. It's just an experiment I did.\n\nAs it is in the patch, it is possible to save the context pointer outside\nthe header chunk.\nMaking it possible to retrieve it in MemoryContextContains.\n\nIt's just an idea.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 4 Oct 2022 08:29:23 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing the chunk header sizes on all memory context types" }, { "msg_contents": "Em ter., 4 de out. de 2022 às 08:29, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em ter., 4 de out. de 2022 às 05:36, David Rowley <dgrowleyml@gmail.com>\n> escreveu:\n>\n>> On Tue, 4 Oct 2022 at 13:35, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > Revisiting my work on reducing memory consumption, I found this patch\n>> left out.\n>> > I'm not sure I can help.\n>> > But basically I was able to write and read the block size, in the chunk.\n>> > Could it be the case of writing and reading the context pointer in the\n>> same way?\n>> > Sure this leads to some performance loss, but would it make it possible\n>> to get the context pointer from the chunk?\n>> > In other words, would it be possible to save the context pointer and\n>> compare it later in MemoryContextContains?\n>>\n>> I'm not really sure I understand the intention of the patch. The\n>> header size for all our memory contexts was already reduced in\n>> c6e0fe1f2. The patch you sent seems to pre-date that commit.\n>>\n> There is zero intention to commit this. It's just an experiment I did.\n>\n> As it is in the patch, it is possible to save the context pointer outside\n> the header chunk.\n> Making it possible to retrieve it in MemoryContextContains.\n>\nNever mind, it's a waste of time.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 4 Oct 2022 09:25:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Reducing the chunk header sizes on all memory context types" } ]
[ { "msg_contents": "Hi,\n\nIt looks like there's an opportunity to replace explicit WAL file\nparsing code with XLogFromFileName() in pg_resetwal.c. This was not\ndone then (in PG 10) because the XLogFromFileName() wasn't accepting\nfile size as an input parameter (see [1]) and pg_resetwal needed to\nuse WAL file size from the controlfile. Thanks to the commit\nfc49e24fa69a15efacd5b8958115ed9c43c48f9a which added the\nwal_segsz_bytes parameter to XLogFromFileName().\n\nI'm attaching a small patch herewith. This removes using extra\nvariables in pg_resetwal.c and a bit of duplicate code too (5 LOC).\n\nThoughts?\n\n[1] https://github.com/postgres/postgres/blob/REL_10_STABLE/src/include/access/xlog_internal.h\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 4 Oct 2022 11:06:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Use XLogFromFileName() in pg_resetwal to parse position from WAL file" }, { "msg_contents": "At Tue, 4 Oct 2022 11:06:15 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> It looks like there's an opportunity to replace explicit WAL file\n> parsing code with XLogFromFileName() in pg_resetwal.c. This was not\n> done then (in PG 10) because the XLogFromFileName() wasn't accepting\n> file size as an input parameter (see [1]) and pg_resetwal needed to\n> use WAL file size from the controlfile. Thanks to the commit\n> fc49e24fa69a15efacd5b8958115ed9c43c48f9a which added the\n> wal_segsz_bytes parameter to XLogFromFileName().\n\nNice finding. I found a few '%08X%08X's but they don't seem to fit\nsimilar fix.\n\n> I'm attaching a small patch herewith. 
This removes using extra\n> variables in pg_resetwal.c and a bit of duplicate code too (5 LOC).\n> \n> Thoughts?\n\n> -\tsegs_per_xlogid = (UINT64CONST(0x0000000100000000) / ControlFile.xlog_seg_size);\n> \tnewXlogSegNo = ControlFile.checkPointCopy.redo / ControlFile.xlog_seg_size;\n\nCouldn't we use XLByteToSeg() here?\n\nOther than that, it looks good to me.\n\n> [1] https://github.com/postgres/postgres/blob/REL_10_STABLE/src/include/access/xlog_internal.h\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Oct 2022 15:17:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use XLogFromFileName() in pg_resetwal to parse position from\n WAL file" }, { "msg_contents": "On Tue, Oct 4, 2022 at 11:47 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > - segs_per_xlogid = (UINT64CONST(0x0000000100000000) / ControlFile.xlog_seg_size);\n> > newXlogSegNo = ControlFile.checkPointCopy.redo / ControlFile.xlog_seg_size;\n>\n> Couldn't we use XLByteToSeg() here?\n\nYes, we could.\n\n> Other than that, it looks good to me.\n\nThanks. 
There are a few more assorted WAL file related things I found,\nI will be sending all of them in this thread itself in a while.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 11:52:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use XLogFromFileName() in pg_resetwal to parse position from WAL\n file" }, { "msg_contents": "At Tue, 04 Oct 2022 15:17:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Other than that, it looks good to me.\n\nSorry I have another comment.\n\n> -\t\t\tunsigned int tli,\n> -\t\t\t\t\t\tlog,\n> -\t\t\t\t\t\tseg;\n> +\t\t\tunsigned int tli;\n> \t\t\tXLogSegNo\tsegno;\n\nTLI should be of type TimeLineID.\n\nThis is not directly related to this patch, pg_resetwal.c has the\nfollowing line..\n\n> static uint32 minXlogTli = 0;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Oct 2022 15:23:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use XLogFromFileName() in pg_resetwal to parse position from\n WAL file" }, { "msg_contents": "On Tue, Oct 04, 2022 at 03:17:06PM +0900, Kyotaro Horiguchi wrote:\n> Nice finding. I found a few '%08X%08X's but they don't seem to fit\n> similar fix.\n\nNice cleanup.\n\n> Couldn't we use XLByteToSeg() here?\n> \n> Other than that, it looks good to me.\n\nYep. 
It looks that you're right here.\n--\nMichael", "msg_date": "Tue, 4 Oct 2022 15:28:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Use XLogFromFileName() in pg_resetwal to parse position from WAL\n file" }, { "msg_contents": "At Tue, 04 Oct 2022 15:23:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This is not directly related to this patch, pg_resetwal.c has the\n> following line..\n> \n> > static uint32 minXlogTli = 0;\n\nI have found other three instances of this in xlog.c and\npg_receivewal.c. Are they worth fixing?\n\n(pg_upgrade.c has \"uint32 tli/logid/segno but I'm not sure they need\n to be \"fixed\". At least the segno is not a XLogSegNo.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Oct 2022 15:41:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use XLogFromFileName() in pg_resetwal to parse position from\n WAL file" }, { "msg_contents": "On Tue, Oct 4, 2022 at 12:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> >\n> > > static uint32 minXlogTli = 0;\n>\n> I have found other three instances of this in xlog.c and\n> pg_receivewal.c. Are they worth fixing?\n>\n> (pg_upgrade.c has \"uint32 tli/logid/segno but I'm not sure they need\n> to be \"fixed\". At least the segno is not a XLogSegNo.)\n\nThere are quite a number of places where data types need to be fixed,\nsee XLogFileNameById() callers. They are all being parsed as uint32\nand then used. I'm not sure if we want to fix all of them.\n\nI think I found that we can fix/refactor a few WAL file related things:\n\n1. 0001 replaces explicit WAL file parsing code with\nXLogFromFileName() and uses XLByteToSeg() in pg_resetwal.c. 
This was\nnot done then (in PG 10) because the XLogFromFileName() wasn't\naccepting file size as an input parameter and pg_resetwal needed to\nuse WAL file size from the controlfile. Thanks to the commit\nfc49e24fa69a15efacd5b8958115ed9c43c48f9a which added the\nwal_segsz_bytes parameter to XLogFromFileName(). This removes using\nextra variables in pg_resetwal.c and a bit of duplicate code too. It\nalso replaces the explicit code with the XLByteToSeg() macro.\n\n2. 0002 replaces MAXPGPATH with MAXFNAMELEN for WAL file names.\nMAXFNAMELEN (64 bytes) is typically meant to be used for all WAL file\nnames across the code base. Because the WAL file names in postgres\ncan't be bigger than 64 bytes, in fact, not more than XLOG_FNAME_LEN\n(24 bytes) but there are suffixes, timeline history files etc. To\naccommodate all of that MAXFNAMELEN is introduced. There are some\nplaces in the code base that still use MAXPGPATH (1024 bytes) for WAL\nfile names which is an unnecessary wastage of stack memory. This makes\ncode consistent across and saves a bit of space.\n\n3. 0003 replaces WAL file name calculation with XLogFileNameById() in\npg_upgrade/controldata.c to be consistent across the code base. 
Note\nthat this requires us to change the nextxlogfile size from hard-coded\n25 bytes to MAXFNAMELEN (64 bytes).\n\nI'm attaching the v2 patch set.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 4 Oct 2022 13:20:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Assorted fixes related to WAL files (was: Use XLogFromFileName() in\n pg_resetwal to parse position from WAL file)" }, { "msg_contents": "At Tue, 4 Oct 2022 13:20:54 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Oct 4, 2022 at 12:11 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > >\n> > > > static uint32 minXlogTli = 0;\n> >\n> > I have found other three instances of this in xlog.c and\n> > pg_receivewal.c. Do they worth fixing?\n> >\n> > (pg_upgarade.c has \"uint32 tli/logid/segno but I'm not sure they need\n> > to be \"fixed\". At least the segno is not a XLogSegNo.)\n> \n> There are quite a number of places where data types need to be fixed,\n> see XLogFileNameById() callers. They are all being parsed as uint32\n> and then used. I'm not sure if we want to fix all of them.\n> \n> I think I found that we can fix/refactor few WAL file related things:\n> \n> 1. 0001 replaces explicit WAL file parsing code with\n> XLogFromFileName() and uses XLByteToSeg() in pg_resetwal.c. This was\n> not done then (in PG 10) because the XLogFromFileName() wasn't\n> accepting file size as an input parameter and pg_resetwal needed to\n> use WAL file size from the controlfile. Thanks to the commit\n> fc49e24fa69a15efacd5b8958115ed9c43c48f9a which added the\n> wal_segsz_bytes parameter to XLogFromFileName(). This removes using\n> extra variables in pg_resetwal.c and a bit of duplicate code too. It\n> also replaces the explicit code with the XLByteToSeg() macro.\n\nLooks good to me.\n\n> 2. 
0002 replaces MAXPGPATH with MAXFNAMELEN for WAL file names.\n> MAXFNAMELEN (64 bytes) is typically meant to be used for all WAL file\n> names across the code base. Because the WAL file names in postgres\n> can't be bigger than 64 bytes, in fact, not more than XLOG_FNAME_LEN\n> (24 bytes) but there are suffixes, timeline history files etc. To\n> accommodate all of that MAXFNAMELEN is introduced. There are some\n> places in the code base that still use MAXPGPATH (1024 bytes) for WAL\n> file names which is an unnecessary wastage of stack memory. This makes\n> code consistent across and saves a bit of space.\n\nLooks reasonable, too. I don't find other instances of the same mistake.\n\n> 3. 0003 replaces WAL file name calculation with XLogFileNameById() in\n> pg_upgrade/controldata.c to be consistent across the code base. Note\n> that this requires us to change the nextxlogfile size from hard-coded\n> 25 bytes to MAXFNAMELEN (64 bytes).\n\nI'm not sure I like this. In other places where XLogFileNameById() is\nused, the buffer is known to store longer strings so MAXFNAMELEN is\nreasonable. But we don't need to add useless 39 bytes here.\nTherefore, even if I wanted to change it, I would replace it with\n\"XLOG_FNAME_LEN + 1\".\n\n> I'm attaching the v2 patch set.\n> \n> Thoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 04 Oct 2022 17:31:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assorted fixes related to WAL files (was: Use\n XLogFromFileName() in pg_resetwal to parse position from WAL file)" }, { "msg_contents": "On Tue, Oct 4, 2022 at 2:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > 1. 0001 replaces explicit WAL file parsing code with\n>\n> Looks good to me.\n>\n> > 2. 0002 replaces MAXPGPATH with MAXFNAMELEN for WAL file names.\n>\n> Looks reasonable, too. 
I don't find other instances of the same mistake.\n\nThanks for reviewing.\n\n> > 3. 0003 replaces WAL file name calculation with XLogFileNameById() in\n> > pg_upgrade/controldata.c to be consistent across the code base. Note\n> > that this requires us to change the nextxlogfile size from hard-coded\n> > 25 bytes to MAXFNAMELEN (64 bytes).\n>\n> I'm not sure I like this. In other places where XLogFileNameById() is\n> used, the buffer is known to store longer strings so MAXFNAMELEN is\n> reasonable. But we don't need to add useless 39 bytes here.\n> Therefore, even if I wanted to change it, I would replace it with\n> \"XLOG_FNAME_LEN + 1\".\n\nI'm fine with doing either of these things. Let's hear from others.\n\nI've added a CF entry - https://commitfest.postgresql.org/40/3927/\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 18:24:18 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assorted fixes related to WAL files (was: Use XLogFromFileName()\n in pg_resetwal to parse position from WAL file)" }, { "msg_contents": "On Tue, Oct 04, 2022 at 06:24:18PM +0530, Bharath Rupireddy wrote:\n> I'm fine with doing either of these things. Let's hear from others.\n> \n> I've added a CF entry - https://commitfest.postgresql.org/40/3927/\n\nAbout 0002, I am not sure that it is worth bothering. 
Sure, this\nwastes a few bytes, but I recall that there are quite a few places in\nthe code where we imply a WAL segment but append a full path to it,\nand this creates a few bumps with back-patches.\n\n--- a/src/bin/pg_upgrade/pg_upgrade.h\n+++ b/src/bin/pg_upgrade/pg_upgrade.h\n@@ -10,6 +10,7 @@\n #include <sys/stat.h>\n #include <sys/time.h>\n \n+#include \"access/xlog_internal.h\"\n #include \"common/relpath.h\"\n #include \"libpq-fe.h\"\nWell, xlog_internal.h includes a few backend-only definitions. I'd\nrather have us untangle more the backend/frontend dependencies before\nadding more includes of this type, even if I agree that nextxlogfile\nwould be better with the implied name size limit.\n\nSaying that, 0001 is a nice catch, so applied it. I have switched the\ntwo TLI variables to use TimeLineID, while touching the area.\n--\nMichael", "msg_date": "Wed, 5 Oct 2022 14:16:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Assorted fixes related to WAL files (was: Use XLogFromFileName()\n in pg_resetwal to parse position from WAL file)" } ]
[ { "msg_contents": "Hello,\nwith a view to meeting with postgres code and to get some practice with it,\nI am making a small patch that adds the possibility of partial tables dump.\nA rule of filtering is specified with standard SQL where clause (without\n\"where\" keyword)\nThere are three ways to send data filters over command line:\n\n1) using table pattern in \"where\" parameter with divider '@'\n... --where \"table_pattern@where_condition\" ...\n\n\"Where\" condition will be used for all tables that match the search pattern.\n\n2) using table parameter before any table inclusion\n... --where \"where condition\" ...\n\nAll tables in databases will be filtered with input condition.\n\n3) using \"where\" parameter after table pattern\n... -t table_pattern --where where_condition ...\n\nOnly tables matching to last pattern before --where will be filtered. Third\nway is necessary to shorten the command\nline and to avoid duplicating tables pattern when specific tables are\ndumped.\n\nAlso filters may be input from files.\nA file consists of lines, and every line is a table pattern or a where\ncondition for data.\nFor example, file\n\"\"\"\nwhere column_name_1 == 1\ntable_pattern\ntable_pattern where column_name_2 == 1\n\"\"\"\ncorresponds to parameters\n\n --where \"column_name_1 == 1\" -t table_pattern --where \"column_name_2 == 1\"\n\nThe file format is not very good, because it doesn't provide sending\npatterns of other components such as schemas for example.\nAnd I am ready to change it, if functionality is actually needed.\n\nAll use cases are provided with tests.\n\nI will be grateful if patch will get a discussion.", "msg_date": "Tue, 4 Oct 2022 12:57:31 +0700", "msg_from": "Никита Старовойтов <nikstarall@gmail.com>", "msg_from_op": true, "msg_subject": "possibility of partial data dumps with pg_dump" }, { "msg_contents": "Hi\n\nút 4. 10. 
2022 v 12:48 odesílatel Никита Старовойтов <nikstarall@gmail.com>\nnapsal:\n\n> Hello,\n> with a view to meeting with postgres code and to get some practice with\n> it, I am making a small patch that adds the possibility of partial tables\n> dump.\n> A rule of filtering is specified with standard SQL where clause (without\n> \"where\" keyword)\n> There are three ways to send data filters over command line:\n>\n> 1) using table pattern in \"where\" parameter with divider '@'\n> ... --where \"table_pattern@where_condition\" ...\n>\n> \"Where\" condition will be used for all tables that match the search\n> pattern.\n>\n> 2) using table parameter before any table inclusion\n> ... --where \"where condition\" ...\n>\n> All tables in databases will be filtered with input condition.\n>\n> 3) using \"where\" parameter after table pattern\n> ... -t table_pattern --where where_condition ...\n>\n> Only tables matching to last pattern before --where will be filtered.\n> Third way is necessary to shorten the command\n> line and to avoid duplicating tables pattern when specific tables are\n> dumped.\n>\n> Also filters may be input from files.\n> A file consists of lines, and every line is a table pattern or a where\n> condition for data.\n> For example, file\n> \"\"\"\n> where column_name_1 == 1\n> table_pattern\n> table_pattern where column_name_2 == 1\n> \"\"\"\n> corresponds to parameters\n>\n> --where \"column_name_1 == 1\" -t table_pattern --where \"column_name_2 == 1\"\n>\n> The file format is not very good, because it doesn't provide sending\n> patterns of other components such as schemas for example.\n> And I am ready to change it, if functionality is actually needed.\n>\n> All use cases are provided with tests.\n>\n> I will be grateful if patch will get a discussion.\n>\n\nWhat is benefit and use case? For this case I don't see any benefit against\nsimple\n\n\\copy (select * from xx where ...) 
to file CSV\n\nor how hard is it to write trivial application that does export of what you\nwant in the format that you want?\n\nRegards\n\nPavel", "msg_date": "Tue, 4 Oct 2022 14:15:16 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: possibility of partial data dumps with pg_dump" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 04, 2022 at 02:15:16PM +0200, Pavel Stehule wrote:\n>\n> út 4. 10. 2022 v 12:48 odesílatel Никита Старовойтов <nikstarall@gmail.com>\n> napsal:\n>\n> > Hello,\n> > with a view to meeting with postgres code and to get some practice with\n> > it, I am making a small patch that adds the possibility of partial tables\n> > dump.\n> > A rule of filtering is specified with standard SQL where clause (without\n> > \"where\" keyword)\n>\n> What is benefit and use case? For this case I don't see any benefit against\n> simple\n>\n> \\copy (select * from xx where ...) to file CSV\n>\n> or how hard is it to write trivial application that does export of what you\n> want in the format that you want?\n\nAlso, such approach probably requires a lot of effort to get a valid backup\n(with regards to foreign keys and such).\n\nThere's already a project dedicated to generate such partial (and consistent)\nbackups: https://github.com/mla/pg_sample. Maybe that would address your\nneeds?\n\n\n", "msg_date": "Tue, 4 Oct 2022 20:24:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: possibility of partial data dumps with pg_dump" }, { "msg_contents": "Good afternoon, Indeed, the functionality that I started to implement in\nthe patch is very similar to what is included in the program you proposed.\nMany of the use cases are the same. Thanks for giving me a hint about it. I\nhave been working on implementing referential integrity, but have not been\nable to find simple solutions for a complex structure. And I am not sure if\nit can be done in the dump process. 
Although it is obvious that without\nthis functionality, the usefulness of the function is insignificant. When I\nworked with another database management system, the partial offer feature\nwas available from the dump program. It was useful for me. But I understand\nwhy it might not be worth extending pg_dump with a non-essential feature.\nHowever, I will try to work again to solve the problem with the guaranteed\nrecovery of the database. Thanks for the comments, they were really helpful\nto me.\n\nвт, 4 окт. 2022 г. в 19:24, Julien Rouhaud <rjuju123@gmail.com>:\n\n> Hi,\n>\n> On Tue, Oct 04, 2022 at 02:15:16PM +0200, Pavel Stehule wrote:\n> >\n> > út 4. 10. 2022 v 12:48 odesílatel Никита Старовойтов <\n> nikstarall@gmail.com>\n> > napsal:\n> >\n> > > Hello,\n> > > with a view to meeting with postgres code and to get some practice with\n> > > it, I am making a small patch that adds the possibility of partial\n> tables\n> > > dump.\n> > > A rule of filtering is specified with standard SQL where clause\n> (without\n> > > \"where\" keyword)\n> >\n> > What is benefit and use case? For this case I don't see any benefit\n> against\n> > simple\n> >\n> > \\copy (select * from xx where ...) to file CSV\n> >\n> > or how hard is it to write trivial application that does export of what\n> you\n> > want in the format that you want?\n>\n> Also, such approach probably requires a lot of effort to get a valid backup\n> (with regards to foreign keys and such).\n>\n> There's already a project dedicated to generate such partial (and\n> consistent)\n> backups: https://github.com/mla/pg_sample. Maybe that would address your\n> needs?\n>\n", "msg_date": "Wed, 12 Oct 2022 21:46:08 +0700", "msg_from": "Никита Старовойтов <nikstarall@gmail.com>", "msg_from_op": true, "msg_subject": "Re: possibility of partial data dumps with pg_dump" } ]
[ { "msg_contents": "I was wondering why we have a definition of Abs() in c.h when there are \nmore standard functions such as abs() and fabs() in widespread use. I \nthink this one is left over from pre-ANSI-C days. The attached patches \nreplace all uses of Abs() with more standard functions.\n\nThe first patch installs uses of abs() and fabs(). These are already in \nuse in the tree and should be straightforward.\n\nThe next two patches install uses of llabs() and fabsf(), which are not \nin use yet. But they are in C99.\n\nThe last patch removes the definition of Abs().\n\n\nFun fact: The current definition\n\n #define Abs(x) ((x) >= 0 ? (x) : -(x))\n\nis slightly wrong for floating-point values. Abs(-0.0) returns -0.0, \nbut fabs(-0.0) returns +0.0.", "msg_date": "Tue, 4 Oct 2022 09:07:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "get rid of Abs()" }, { "msg_contents": "Hi,\n\nOn Oct 4, 2022, 15:07 +0800, Peter Eisentraut <peter.eisentraut@enterprisedb.com>, wrote:\n> I was wondering why we have a definition of Abs() in c.h when there are\n> more standard functions such as abs() and fabs() in widespread use. I\n> think this one is left over from pre-ANSI-C days. The attached patches\n> replace all uses of Abs() with more standard functions.\n>\n> The first patch installs uses of abs() and fabs(). These are already in\n> use in the tree and should be straightforward.\n>\n> The next two patches install uses of llabs() and fabsf(), which are not\n> in use yet. But they are in C99.\n>\n> The last patch removes the definition of Abs().\n>\n>\n> Fun fact: The current definition\n>\n> #define Abs(x) ((x) >= 0 ? (x) : -(x))\n>\n> is slightly wrong for floating-point values. 
Abs(-0.0) returns -0.0,\n> but fabs(-0.0) returns +0.0.\n+1,\n\nLike patch3, I also found some places that could use fabsf instead of fabs if possible, and added a patch to replace them.\n\nRegards,\nZhang Mingli", "msg_date": "Tue, 4 Oct 2022 17:04:08 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: get rid of Abs()" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I was wondering why we have a definition of Abs() in c.h when there are \n> more standard functions such as abs() and fabs() in widespread use. I \n> think this one is left over from pre-ANSI-C days. The attached patches \n> replace all uses of Abs() with more standard functions.\n\nI'm not in favor of the llabs() changes. I think what we really want\nin those places, or at least most of them, is \"abs() for int64\".\nThat could be had by #define'ing \"iabs64\" (or some name along that\nline) as labs or llabs depending on which type we are using for int64.\n\nSeems OK beyond that nitpick.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Oct 2022 09:29:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: get rid of Abs()" } ]
[ { "msg_contents": "In PostgreSQL 10, we added identity columns, as an alternative to serial \ncolumns (since 6.something). They mostly work the same. Identity \ncolumns are SQL-conforming, have some more features (e.g., overriding \nclause), and are a bit more robust in schema management. Some of that \nwas described in [0]. AFAICT, there have been no complaints since that \nidentity columns lack features or are somehow a regression over serial \ncolumns.\n\nBut clearly, the syntax \"serial\" is more handy, and most casual examples \nuse that syntax. So it seems like we are stuck with maintaining these \ntwo variants in parallel forever. I was thinking we could nudge this a \nlittle by remapping \"serial\" internally to create an identity column \ninstead. At least then over time, the use of the older serial \nmechanisms would go away.\n\nNote that pg_dump dumps a serial column in pieces (CREATE SEQUENCE + \nALTER SEQUENCE ... OWNED BY + ALTER TABLE ... SET DEFAULT). So if we \ndid this, any existing databases would keep their old semantics, and \nthose who really need it can manually create the old semantics as well.\n\nAttached is a demo patch how the implementation of this change would \nlook like. This creates a bunch of regression test failures, but \nAFAICT, those are mainly display differences and some very peculiar test \nsetups that are intentionally examining some edge cases. These would \nneed to be investigated in more detail, of course.\n\n\n[0]: \nhttps://www.enterprisedb.com/blog/postgresql-10-identity-columns-explained", "msg_date": "Tue, 4 Oct 2022 09:41:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "future of serial and identity columns" }, { "msg_contents": "On Tue, 2022-10-04 at 09:41 +0200, Peter Eisentraut wrote:\n> In PostgreSQL 10, we added identity columns, as an alternative to serial \n> columns (since 6.something).  They mostly work the same.  
Identity \n> columns are SQL-conforming, have some more features (e.g., overriding \n> clause), and are a bit more robust in schema management.  Some of that \n> was described in [0].  AFAICT, there have been no complaints since that \n> identity columns lack features or are somehow a regression over serial \n> columns.\n> \n> But clearly, the syntax \"serial\" is more handy, and most casual examples \n> use that syntax.  So it seems like we are stuck with maintaining these \n> two variants in parallel forever.  I was thinking we could nudge this a \n> little by remapping \"serial\" internally to create an identity column \n> instead.  At least then over time, the use of the older serial \n> mechanisms would go away.\n\nI think that would be great.\nThat might generate some confusion among users who follow old tutorials\nand are surprised that the eventual table definition differs, but I'd say\nthat is a good thing.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 04 Oct 2022 11:33:15 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On Tue, Oct 4, 2022 at 09:41:19AM +0200, Peter Eisentraut wrote:\n> In PostgreSQL 10, we added identity columns, as an alternative to serial\n> columns (since 6.something). They mostly work the same. Identity columns\n> are SQL-conforming, have some more features (e.g., overriding clause), and\n> are a bit more robust in schema management. Some of that was described in\n> [0]. AFAICT, there have been no complaints since that identity columns lack\n> features or are somehow a regression over serial columns.\n\nFYI, SERIAL came from Informix syntax, and it was already a macro, so\nmaking it a different macro seems fine. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Wed, 5 Oct 2022 17:26:52 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On 10/4/22 09:41, Peter Eisentraut wrote:\n> In PostgreSQL 10, we added identity columns, as an alternative to serial \n> columns (since 6.something).  They mostly work the same.  Identity \n> columns are SQL-conforming, have some more features (e.g., overriding \n> clause), and are a bit more robust in schema management.  Some of that \n> was described in [0].  AFAICT, there have been no complaints since that \n> identity columns lack features or are somehow a regression over serial \n> columns.\n> \n> But clearly, the syntax \"serial\" is more handy, and most casual examples \n> use that syntax.  So it seems like we are stuck with maintaining these \n> two variants in parallel forever.  I was thinking we could nudge this a \n> little by remapping \"serial\" internally to create an identity column \n> instead.  At least then over time, the use of the older serial \n> mechanisms would go away.\n> \n> Note that pg_dump dumps a serial column in pieces (CREATE SEQUENCE + \n> ALTER SEQUENCE ... OWNED BY + ALTER TABLE ... SET DEFAULT).  So if we \n> did this, any existing databases would keep their old semantics, and \n> those who really need it can manually create the old semantics as well.\n> \n> Attached is a demo patch how the implementation of this change would \n> look like.  This creates a bunch of regression test failures, but \n> AFAICT, those are mainly display differences and some very peculiar test \n> setups that are intentionally examining some edge cases.  These would \n> need to be investigated in more detail, of course.\n\nI haven't tested the patch yet, just read it.\n\nIs there any reason to use BY DEFAULT over ALWAYS? 
I tend to prefer the \nlatter.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 7 Oct 2022 14:02:52 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On Fri, Oct 7, 2022 at 2:03 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 10/4/22 09:41, Peter Eisentraut wrote:\n> > In PostgreSQL 10, we added identity columns, as an alternative to serial\n> > columns (since 6.something). They mostly work the same. Identity\n> > columns are SQL-conforming, have some more features (e.g., overriding\n> > clause), and are a bit more robust in schema management. Some of that\n> > was described in [0]. AFAICT, there have been no complaints since that\n> > identity columns lack features or are somehow a regression over serial\n> > columns.\n> >\n> > But clearly, the syntax \"serial\" is more handy, and most casual examples\n> > use that syntax. So it seems like we are stuck with maintaining these\n> > two variants in parallel forever. I was thinking we could nudge this a\n> > little by remapping \"serial\" internally to create an identity column\n> > instead. At least then over time, the use of the older serial\n> > mechanisms would go away.\n> >\n> > Note that pg_dump dumps a serial column in pieces (CREATE SEQUENCE +\n> > ALTER SEQUENCE ... OWNED BY + ALTER TABLE ... SET DEFAULT). So if we\n> > did this, any existing databases would keep their old semantics, and\n> > those who really need it can manually create the old semantics as well.\n> >\n> > Attached is a demo patch how the implementation of this change would\n> > look like. This creates a bunch of regression test failures, but\n> > AFAICT, those are mainly display differences and some very peculiar test\n> > setups that are intentionally examining some edge cases. 
These would\n> > need to be investigated in more detail, of course.\n>\n> I haven't tested the patch yet, just read it.\n>\n> Is there any reason to use BY DEFAULT over ALWAYS? I tend to prefer the\n> latter.\n>\n\nI would assume to maintain backwards compatibility with the semantics of\nSERIAL today?\n\nI do also prefer ALWAYS, but that would make it a compatibility break.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Fri, 7 Oct 2022 14:07:34 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On 04.10.22 09:41, Peter Eisentraut wrote:\n> Attached is a demo patch how the implementation of this change would \n> look like.  This creates a bunch of regression test failures, but \n> AFAICT, those are mainly display differences and some very peculiar test \n> setups that are intentionally examining some edge cases.  These would \n> need to be investigated in more detail, of course.\n\nThe feedback was pretty positive, so I dug through all the tests to at 
least get to the point where I could see the end of it.  The attached \npatch 0001 is the actual code and documentation changes.  The 0002 patch \nis just tests randomly updated or disabled to make the whole suite pass. \n This reveals that there are a few things that would warrant further \ninvestigation, in particular around extensions and partitioning. 
To be \ncontinued.", "msg_date": "Tue, 11 Oct 2022 09:59:58 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": ">\n> The feedback was pretty positive, so I dug through all the tests to at\n> least get to the point where I could see the end of it. The attached\n> patch 0001 is the actual code and documentation changes. The 0002 patch\n> is just tests randomly updated or disabled to make the whole suite pass.\n> This reveals that there are a few things that would warrant further\n> investigation, in particular around extensions and partitioning. To be\n> continued.\n>\n\nI like what I see so far!\n\nQuestion: the xref refers the reader to sql-createtable, which is a pretty\nbig page, which could leave the reader lost. Would it make sense to create\na SQL-CREATETABLE-IDENTITY anchor and link to that instead?", "msg_date": "Wed, 12 Oct 2022 02:22:31 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On 12.10.22 08:22, Corey Huinker wrote:\n> Question: the xref  refers the reader to sql-createtable, which is a \n> pretty big page, which could leave the reader lost. Would it make sense \n> to create a SQL-CREATETABLE-IDENTITY anchor and link to that instead?\n\nYes, I think that would be good.\n\n\n", "msg_date": "Wed, 12 Oct 2022 09:13:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: future of serial and identity columns" }, { "msg_contents": "On 2022-Oct-11, Peter Eisentraut wrote:
\n\n> diff --git a/src/test/modules/test_ddl_deparse/expected/alter_table.out b/src/test/modules/test_ddl_deparse/expected/alter_table.out\n> index 87a1ab7aabce..30e3dbb8d08a 100644\n> --- a/src/test/modules/test_ddl_deparse/expected/alter_table.out\n> +++ b/src/test/modules/test_ddl_deparse/expected/alter_table.out\n> @@ -25,12 +25,9 @@ NOTICE: DDL test: type simple, tag CREATE TABLE\n> CREATE TABLE grandchild () INHERITS (child);\n> NOTICE: DDL test: type simple, tag CREATE TABLE\n> ALTER TABLE parent ADD COLUMN b serial;\n> -NOTICE: DDL test: type simple, tag CREATE SEQUENCE\n> -NOTICE: DDL test: type alter table, tag ALTER TABLE\n> -NOTICE: subcommand: type ADD COLUMN (and recurse) desc column b of table parent\n> -NOTICE: DDL test: type simple, tag ALTER SEQUENCE\n> +ERROR: cannot recursively add identity column to table that has child tables\n\nI think this change merits some discussion.  Surely we cannot simply\ndisallow SERIAL from being used with inheritance. 
Do we need to have\na way for identity columns to be used by children tables?\n\n(My first thought was \"let's keep SERIAL as the old code when used for\ninheritance\", but then I realized that the parent table starts as a\nnormal-looking table that only later acquires inheritors, so we wouldn't\nknow ahead of time that we need to treat that SERIAL column in a special\nway.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La vida es para el que se aventura\"\n\n\n", "msg_date": "Wed, 12 Oct 2022 10:05:05 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: future of serial and identity columns" } ]
[ { "msg_contents": "I wanted to propose the attached patch to get rid of the custom pgpid_t \ntypedef in pg_ctl. Since we liberally use pid_t elsewhere, this seemed \nplausible.\n\nHowever, this patch fails the CompilerWarnings job on Cirrus, because \napparently under mingw, pid_t is \"volatile long long int\", so all the \nprintf placeholders mismatch. However, we print pid_t as %d in a lot of \nother places, so I'm confused why this fails here.\n\nAlso, googling around a bit about this, it seems that mingw might have \nchanged the pid_t from long long int to int some time ago. Maybe that's \nhow the pgpid_t came about to begin with. The Cirrus job uses a \ncross-compilation environment. I wonder how up to date that is compared \nto say the native mingw installations used on the build farm.\n\nAny clues?", "msg_date": "Tue, 4 Oct 2022 10:15:13 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pid_t on mingw" }, { "msg_contents": "On 04.10.22 10:15, Peter Eisentraut wrote:\n> I wanted to propose the attached patch to get rid of the custom pgpid_t \n> typedef in pg_ctl.  Since we liberally use pid_t elsewhere, this seemed \n> plausible.\n> \n> However, this patch fails the CompilerWarnings job on Cirrus, because \n> apparently under mingw, pid_t is \"volatile long long int\", so all the \n> printf placeholders mismatch.  However, we print pid_t as %d in a lot of \n> other places, so I'm confused why this fails here.\n\nI figured out that in most places we actually store PIDs in int, and in \nthe cases where we use pid_t, casts before printing are indeed used and \nnecessary. So nevermind that.\n\nIn any case, I took this opportunity to standardize the printing of PIDs \nas %d. There were a few stragglers.\n\nAnd then the original patch to get rid of pgpid_t in pg_ctl, now updated \nwith the correct casts for printing. 
I confirmed that this now passes \nthe CompilerWarnings job.", "msg_date": "Mon, 10 Oct 2022 10:57:21 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "clean up pid_t printing and get rid of pgpid_t" } ]
[ { "msg_contents": "Hi,\n\nqueryjumble.c and queryjumble.h both define a macro JUMBLE_SIZE = 1024.\nSince queryjumble.c includes queryjumble.h, the JUMBLE_SIZE definition \nin queryjumble.c should be deleted.\nThoughts?\n\nTatsu", "msg_date": "Tue, 04 Oct 2022 17:41:12 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "JUMBLE_SIZE macro in two files" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 04, 2022 at 05:41:12PM +0900, bt22nakamorit wrote:\n>\n> queryjumble.c and queryjumble.h both define a macro JUMBLE_SIZE = 1024.\n> Since queryjumble.c includes queryjumble.h, the JUMBLE_SIZE definition in\n> queryjumble.c should be deleted.\n\n+1\n\n\n", "msg_date": "Tue, 4 Oct 2022 18:46:23 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JUMBLE_SIZE macro in two files" }, { "msg_contents": "bt22nakamorit <bt22nakamorit@oss.nttdata.com> writes:\n> queryjumble.c and queryjumble.h both define a macro JUMBLE_SIZE = 1024.\n> Since queryjumble.c includes queryjumble.h, the JUMBLE_SIZE definition \n> in queryjumble.c should be deleted.\n\nI would go more for taking it out of queryjumble.h. I see no\nreason why that constant needs to be world-visible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Oct 2022 09:16:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JUMBLE_SIZE macro in two files" }, { "msg_contents": "On Tue, Oct 04, 2022 at 09:16:44AM -0400, Tom Lane wrote:\n> I would go more for taking it out of queryjumble.h. I see no\n> reason why that constant needs to be world-visible.\n\nI was just looking at the patch before seeing your reply, and thought\nthe exact same thing. 
Perhaps you'd prefer to apply that yourself?\n--\nMichael", "msg_date": "Wed, 5 Oct 2022 12:12:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: JUMBLE_SIZE macro in two files" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Oct 04, 2022 at 09:16:44AM -0400, Tom Lane wrote:\n>> I would go more for taking it out of queryjumble.h. I see no\n>> reason why that constant needs to be world-visible.\n\n> I was just looking at the patch before seeing your reply, and thought\n> the exact same thing. Perhaps you'd prefer to apply that yourself?\n\nNah, feel free.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Oct 2022 23:17:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: JUMBLE_SIZE macro in two files" }, { "msg_contents": "On Tue, Oct 04, 2022 at 11:17:09PM -0400, Tom Lane wrote:\n> Nah, feel free.\n\nOkay, thanks. Applied, then.\n--\nMichael", "msg_date": "Wed, 5 Oct 2022 14:29:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: JUMBLE_SIZE macro in two files" } ]
[ { "msg_contents": "While working on the column encryption patch, I wanted to check that \nwhat is implemented also works in OpenSSL FIPS mode.  I tried running \nthe normal test suites after switching the OpenSSL installation to FIPS \nmode, but that failed all over the place.  So I embarked on fixing that. \n Attached is a first iteration of a patch.\n\nThe main issue is liberal use of the md5() function in tests to generate \nrandom strings.  For example, this is a common pattern:\n\n SELECT x, md5(x::text) FROM generate_series(-10,10) x;\n\nThis can be replaced by\n\n SELECT x, encode(sha256(x::text::bytea), 'hex')\n FROM generate_series(-10,10) x;\n\nIn most cases, this could be further simplified by not using text but \nbytea for the column types, thus skipping the encode step.\n\nSome tests are carefully calibrated to achieve a certain column size or \nsomething like that.  These will need to be checked in more detail.\n\nAnother set of issues is in the SSL tests, where apparently some \ncertificates are generated with obsolete hash methods, probably SHA1 \n(and possibly MD5 again).  Some of this can be addressed by just \nregenerating everything with a newer OpenSSL installation, in some other \ncases it appears to need additional command-line options or a local \nconfiguration file change.  This needs more research.  I think we should \naugment the setup used to generate these test files in a way that they \ndon't depend on the local configuration of whoever runs it.\n\nOf course, there are some tests where we do want to test MD5 \nfunctionality, such as in the authentication tests or in the tests of \nthe md5() function itself.  I think we can conditionalize these somehow. 
\n That looks like a smaller issue compared to the issues above.", "msg_date": "Tue, 4 Oct 2022 17:45:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 04.10.22 17:45, Peter Eisentraut wrote:\n> While working on the column encryption patch, I wanted to check that \n> what is implemented also works in OpenSSL FIPS mode.  I tried running \n> the normal test suites after switching the OpenSSL installation to FIPS \n> mode, but that failed all over the place.  So I embarked on fixing that. \n\n> Of course, there are some some tests where we do want to test MD5 \n> functionality, such as in the authentication tests or in the tests of \n> the md5() function itself.  I think we can conditionalize these somehow. \n\nLet's make a small start on this. The attached patch moves the tests of \nthe md5() function to a separate test file. That would ultimately make \nit easier to maintain a variant expected file for FIPS mode where that \nfunction will fail (similar to how we have done it for the pgcrypto tests).", "msg_date": "Tue, 11 Oct 2022 13:51:50 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Tue, Oct 11, 2022 at 01:51:50PM +0200, Peter Eisentraut wrote:\n> Let's make a small start on this. The attached patch moves the tests of the\n> md5() function to a separate test file. That would ultimately make it\n> easier to maintain a variant expected file for FIPS mode where that function\n> will fail (similar to how we have done it for the pgcrypto tests).\n\nMakes sense to me. This slice looks fine.\n\nI think that the other md5() computations done in the main regression\ntest suite could just be switched to use one of the sha*() functions\nas they just want to put their hands on text values. 
It looks like a\nfew of them have some expectations with the output size and\ngenerate_series(), though, but this could be tweaked by making the\nseries shorter, for example.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 10:18:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 12.10.22 03:18, Michael Paquier wrote:\n> On Tue, Oct 11, 2022 at 01:51:50PM +0200, Peter Eisentraut wrote:\n>> Let's make a small start on this. The attached patch moves the tests of the\n>> md5() function to a separate test file. That would ultimately make it\n>> easier to maintain a variant expected file for FIPS mode where that function\n>> will fail (similar to how we have done it for the pgcrypto tests).\n> \n> Makes sense to me. This slice looks fine.\n\nCommitted.\n\n> I think that the other md5() computations done in the main regression\n> test suite could just be switched to use one of the sha*() functions\n> as they just want to put their hands on text values. It looks like a\n> few of them have some expectations with the output size and\n> generate_series(), though, but this could be tweaked by making the\n> series shorter, for example.\n\nRight, that's the rest of my original patch. I'll come back with an \nupdated version of that.\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:26:32 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 2022-Oct-13, Peter Eisentraut wrote:\n\n> Right, that's the rest of my original patch. 
I'll come back with an updated\n> version of that.\n\nHowever, there are some changes in brin_multi.out that are quite\nsurprising and suggest that we might have bugs in brin:\n\n+WARNING: unexpected number of results 31 for (macaddr8col,>,macaddr8,b1:d1:0e:7b:af:a4:42:12,33)\n+WARNING: unexpected number of results 17 for (macaddr8col,>=,macaddr8,d9:35:91:bd:f7:86:0e:1e,15)\n+WARNING: unexpected number of results 11 for (macaddr8col,<=,macaddr8,23:e8:46:63:86:07:ad:cb,13)\n+WARNING: unexpected number of results 4 for (macaddr8col,<,macaddr8,13:16:8e:6a:2e:6c:84:b4,6)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La victoria es para quien se atreve a estar solo\"\n\n\n", "msg_date": "Thu, 13 Oct 2022 13:16:18 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 13.10.22 12:26, Peter Eisentraut wrote:\n>> I think that the other md5() computations done in the main regression\n>> test suite could just be switched to use one of the sha*() functions\n>> as they just want to put their hands on text values.  It looks like a\n>> few of them have some expections with the output size and\n>> generate_series(), though, but this could be tweaked by making the\n>> series shorter, for example.\n> \n> Right, that's the rest of my original patch.  I'll come back with an \n> updated version of that.\n\nHere is the next step. To contain the scope, I focused on just \"make \ncheck\" for now. This patch removes all incidental calls to md5(), \nreplacing them with sha256(), so that they'd pass with or without FIPS \nmode. (Two tests would need alternative expected files: md5 and \npassword. I have not included those here.)\n\nSome tests inspect the actual md5 result strings or build statistics \nbased on them. 
I have tried to carefully preserve the meaning of the \noriginal tests, to the extent that they could be inferred, in some cases \nadjusting example values by matching the md5 outputs to the equivalent \nsha256 outputs. Some cases are tricky or mysterious or both and could \nuse another look.", "msg_date": "Wed, 7 Dec 2022 15:14:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Wed, Dec 07, 2022 at 03:14:09PM +0100, Peter Eisentraut wrote:\n> Here is the next step. To contain the scope, I focused on just \"make check\"\n> for now. This patch removes all incidental calls to md5(), replacing them\n> with sha256(), so that they'd pass with or without FIPS mode. (Two tests\n> would need alternative expected files: md5 and password. I have not\n> included those here.)\n\nYeah, fine by me to do that step-by-step.\n\n> Some tests inspect the actual md5 result strings or build statistics based\n> on them. I have tried to carefully preserve the meaning of the original\n> tests, to the extent that they could be inferred, in some cases adjusting\n> example values by matching the md5 outputs to the equivalent sha256 outputs.\n> Some cases are tricky or mysterious or both and could use another look.\n\nincremental_sort mostly relies on the plan generated, so the change\nshould be rather straight-forward I guess, though there may be a side\neffect depending on costing. 
Hmm, it does not look like stats_ext\nwould be an issue as it checks the stats correlation of the attributes\nfor mcv_lists_arrays.\n\nlargeobject_1.out has been forgotten in the set requiring a refresh.\n--\nMichael", "msg_date": "Fri, 9 Dec 2022 13:16:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 09.12.22 05:16, Michael Paquier wrote:\n>> Some tests inspect the actual md5 result strings or build statistics based\n>> on them. I have tried to carefully preserve the meaning of the original\n>> tests, to the extent that they could be inferred, in some cases adjusting\n>> example values by matching the md5 outputs to the equivalent sha256 outputs.\n>> Some cases are tricky or mysterious or both and could use another look.\n> incremental_sort mostly relies on the plan generated, so the change\n> should be rather straight-forward I guess, though there may be a side\n> effect depending on costing. 
Hmm, it does not look like stats_ext\n> would be an issue as it checks the stats correlation of the attributes\n> for mcv_lists_arrays.\n> \n> largeobject_1.out has been forgotten in the set requiring a refresh.\n\nHere is a refreshed patch with the missing file added.", "msg_date": "Tue, 31 Jan 2023 10:55:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Thu, Oct 13, 2022 at 01:16:18PM +0200, Alvaro Herrera wrote:\n> However, there are some changes in brin_multi.out that are quite\n> surprising and suggest that we might have bugs in brin:\n> \n> +WARNING: unexpected number of results 31 for (macaddr8col,>,macaddr8,b1:d1:0e:7b:af:a4:42:12,33)\n> +WARNING: unexpected number of results 17 for (macaddr8col,>=,macaddr8,d9:35:91:bd:f7:86:0e:1e,15)\n> +WARNING: unexpected number of results 11 for (macaddr8col,<=,macaddr8,23:e8:46:63:86:07:ad:cb,13)\n> +WARNING: unexpected number of results 4 for (macaddr8col,<,macaddr8,13:16:8e:6a:2e:6c:84:b4,6)\n\nThis refers to brin_minmax_multi_distance_macaddr8(), no? This is\namazing. I have a hard time imagining how FIPS would interact with\nwhat we do in mac8.c to explain that, so it may be something entirely\ndifferent. 
Is that reproducible?\n--\nMichael", "msg_date": "Mon, 27 Feb 2023 16:16:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 27.02.23 08:16, Michael Paquier wrote:\n> On Thu, Oct 13, 2022 at 01:16:18PM +0200, Alvaro Herrera wrote:\n>> However, there are some changes in brin_multi.out that are quite\n>> surprising and suggest that we might have bugs in brin:\n>>\n>> +WARNING: unexpected number of results 31 for (macaddr8col,>,macaddr8,b1:d1:0e:7b:af:a4:42:12,33)\n>> +WARNING: unexpected number of results 17 for (macaddr8col,>=,macaddr8,d9:35:91:bd:f7:86:0e:1e,15)\n>> +WARNING: unexpected number of results 11 for (macaddr8col,<=,macaddr8,23:e8:46:63:86:07:ad:cb,13)\n>> +WARNING: unexpected number of results 4 for (macaddr8col,<,macaddr8,13:16:8e:6a:2e:6c:84:b4,6)\n> \n> This refers to brin_minmax_multi_distance_macaddr8(), no? This is\n> amazing. I have a hard time imagining how FIPS would interact with\n> what we do in mac8.c to explain that, so it may be something entirely\n> different. Is that reproducible?\n\nThis is no longer present in the v2 patch.\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 08:23:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Mon, Feb 27, 2023 at 08:23:34AM +0100, Peter Eisentraut wrote:\n> On 27.02.23 08:16, Michael Paquier wrote:\n>> This refers to brin_minmax_multi_distance_macaddr8(), no? This is\n>> amazing. I have a hard time imagining how FIPS would interact with\n>> what we do in mac8.c to explain that, so it may be something entirely\n>> different. Is that reproducible?\n> \n> This is no longer present in the v2 patch.\n\nSure, but why was it happening in the first place? The proposed patch\nset only reworks some regression tests. 
So it seems to me that this\nis a sign that we may have issues in some code area that got stressed\nin some new way, no?\n--\nMichael", "msg_date": "Tue, 28 Feb 2023 14:01:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 28.02.23 06:01, Michael Paquier wrote:\n> On Mon, Feb 27, 2023 at 08:23:34AM +0100, Peter Eisentraut wrote:\n>> On 27.02.23 08:16, Michael Paquier wrote:\n>>> This refers to brin_minmax_multi_distance_macaddr8(), no? This is\n>>> amazing. I have a hard time imagining how FIPS would interact with\n>>> what we do in mac8.c to explain that, so it may be something entirely\n>>> different. Is that reproducible?\n>>\n>> This is no longer present in the v2 patch.\n> \n> Sure, but why was it happening in the first place?\n\nBecause the earlier patch only changed the test input values (which were \ngenerated on the fly using md5()), but did not adjust the expected test \nresults in all the places.\n\n\n\n", "msg_date": "Tue, 28 Feb 2023 08:25:00 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> [ v2-0001-Remove-incidental-md5-function-uses-from-main-reg.patch ]\n\nI've gone through this and have a modest suggestion: let's invent some\nwrapper functions around encode(sha256()) to reduce the cosmetic diffs\nand consequent need for closer study of patch changes. In the attached\nI called them \"notmd5()\", but I'm surely not wedded to that name.\n\nThis also accounts for some relatively recent additions to stats_ext.sql\nthat introduced yet more uses of md5(). This passes for me on a\nFIPS-enabled Fedora system, with the exception of md5.sql and\npassword.sql. I agree that the right thing for md5.sql is just to add\na variant expected-file. 
password.sql could perhaps use some refactoring\nso that we don't have two large expected-files to manage.\n\nThe only other place that perhaps needs discussion is rowsecurity.sql,\nwhich has some surprisingly large changes: not only do the random\nstrings change, but there are rowcount differences in some results.\nI believe this is because there are RLS policy checks and view conditions\nthat actually examine the contents of the \"md5\" strings, eg\n\nCREATE POLICY p1 ON s1 USING (a in (select x from s2 where y like '%2f%'));\n\nMy recommendation is to just accept those changes as OK and move on.\nI doubt that anybody checked the existing results line-by-line either.\n\nSo, once we've done something about md5.sql and password.sql, I think\nthis is committable.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 04 Mar 2023 18:04:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 5 Mar 2023, at 00:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> [ v2-0001-Remove-incidental-md5-function-uses-from-main-reg.patch ]\n> \n> I've gone through this and have a modest suggestion: let's invent some\n> wrapper functions around encode(sha256()) to reduce the cosmetic diffs\n> and consequent need for closer study of patch changes. In the attached\n> I called them \"notmd5()\", but I'm surely not wedded to that name.\n\nFor readers without all context, wouldn't it be better to encode in the\nfunction name why we're not just calling a hash like md5? 
Something like\nfips_allowed_hash() or similar?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 6 Mar 2023 10:02:55 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 5 Mar 2023, at 00:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've gone through this and have a modest suggestion: let's invent some\n>> wrapper functions around encode(sha256()) to reduce the cosmetic diffs\n>> and consequent need for closer study of patch changes. In the attached\n>> I called them \"notmd5()\", but I'm surely not wedded to that name.\n\n> For readers without all context, wouldn't it be better to encode in the\n> function name why we're not just calling a hash like md5? Something like\n> fips_allowed_hash() or similar?\n\nI'd prefer shorter than that --- all these queries are laid out on the\nexpectation of a very short function name. Maybe \"fipshash()\"?\n\nWe could make the comment introducing the function declarations more\nelaborate, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 Mar 2023 09:55:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 6 Mar 2023, at 15:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> For readers without all context, wouldn't it be better to encode in the\n>> function name why we're not just calling a hash like md5? Something like\n>> fips_allowed_hash() or similar?\n> \n> I'd prefer shorter than that --- all these queries are laid out on the\n> expectation of a very short function name. 
Maybe \"fipshash()\"?\n> \n> We could make the comment introducing the function declarations more\n> elaborate, too.\n\nfipshash() with an explanatory comment sounds like a good idea.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 6 Mar 2023 17:06:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 05.03.23 00:04, Tom Lane wrote:\n> I've gone through this and have a modest suggestion: let's invent some\n> wrapper functions around encode(sha256()) to reduce the cosmetic diffs\n> and consequent need for closer study of patch changes. In the attached\n> I called them \"notmd5()\", but I'm surely not wedded to that name.\n\nDo you mean create this on the fly in the test suite, or make it a new \nbuilt-in function?\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 08:34:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 05.03.23 00:04, Tom Lane wrote:\n>> I've gone through this and have a modest suggestion: let's invent some\n>> wrapper functions around encode(sha256()) to reduce the cosmetic diffs\n>> and consequent need for closer study of patch changes. In the attached\n>> I called them \"notmd5()\", but I'm surely not wedded to that name.\n\n> Do you mean create this on the fly in the test suite, or make it a new \n> built-in function?\n\nThe former --- please read my version of the patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Mar 2023 02:40:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 09.12.22 05:16, Michael Paquier wrote:\n> On Wed, Dec 07, 2022 at 03:14:09PM +0100, Peter Eisentraut wrote:\n>> Here is the next step. 
To contain the scope, I focused on just \"make check\"\n>> for now. This patch removes all incidental calls to md5(), replacing them\n>> with sha256(), so that they'd pass with or without FIPS mode. (Two tests\n>> would need alternative expected files: md5 and password. I have not\n>> included those here.)\n> \n> Yeah, fine by me to do that step-by-step.\n\nIt occurred to me that it would be easier to maintain this in the long \nrun if we could enable a \"fake FIPS\" mode that would have the same \neffect but didn't require fiddling with the OpenSSL configuration or \ninstallation.\n\nThe attached patch shows how this could work. Thoughts?", "msg_date": "Wed, 8 Mar 2023 09:49:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 8 Mar 2023, at 09:49, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> It occurred to me that it would be easier to maintain this in the long run if we could enable a \"fake FIPS\" mode that would have the same effect but didn't require fiddling with the OpenSSL configuration or installation.\n> \n> The attached patch shows how this could work. Thoughts?\n\n- * Initialize a hash context. Note that this implementation is designed\n- * to never fail, so this always returns 0.\n+ * Initialize a hash context.\nRegardless of which, we want this hunk since the code clearly can return -1.\n\n+#ifdef FAKE_FIPS_MODE\nI'm not enthusiastic about this. 
If we use this rather than OpenSSL with FIPS\nenabled we might end up missing bugs or weird behavior due to changes in\nOpenSSL that we didn't test.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 10:21:26 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 08.03.23 08:40, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 05.03.23 00:04, Tom Lane wrote:\n>>> I've gone through this and have a modest suggestion: let's invent some\n>>> wrapper functions around encode(sha256()) to reduce the cosmetic diffs\n>>> and consequent need for closer study of patch changes. In the attached\n>>> I called them \"notmd5()\", but I'm surely not wedded to that name.\n> \n>> Do you mean create this on the fly in the test suite, or make it a new\n>> built-in function?\n> \n> The former --- please read my version of the patch.\n\nOk, that makes sense. 
We have some other uses of this pattern in other \ntest suites that my initial patch didn't cover yet, for example in \nsrc/test/subscription, but we don't have expected files there, so the \nargument of reducing the diffs doesn't apply.\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 10:26:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 06.03.23 17:06, Daniel Gustafsson wrote:\n> fipshash() with an explanatory comments sounds like a good idea.\n\nI think that name would be quite false advertising.\n\n\n", "msg_date": "Wed, 8 Mar 2023 10:28:00 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 08.03.23 10:21, Daniel Gustafsson wrote:\n>> On 8 Mar 2023, at 09:49, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> It occurred to me that it would be easier to maintain this in the long run if we could enable a \"fake FIPS\" mode that would have the same effect but didn't require fiddling with the OpenSSL configuration or installation.\n>>\n>> The attached patch shows how this could work. Thoughts?\n> \n> - * Initialize a hash context. Note that this implementation is designed\n> - * to never fail, so this always returns 0.\n> + * Initialize a hash context.\n> Regardless of which, we wan't this hunk since the code clearly can return -1.\n\nI was a bit puzzled by these comments in that file. While the existing \nimplementations (mostly) never fail, they are clearly not *designed* to \nnever fail, since the parallel OpenSSL implementations can fail (which \nis the point of this thread). So I would remove these comments \naltogether, really.\n\n> +#ifdef FAKE_FIPS_MODE\n> I'm not enthusiastic about this. 
If we use this rather than OpenSSL with FIPS\n> enabled we might end up missing bugs or weird behavior due to changes in\n> OpenSSL that we didn't test.\n\nValid point. In any case, the patch is available for ad hoc testing.\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 10:30:21 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 8 Mar 2023, at 10:30, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 08.03.23 10:21, Daniel Gustafsson wrote:\n>>> On 8 Mar 2023, at 09:49, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>> It occurred to me that it would be easier to maintain this in the long run if we could enable a \"fake FIPS\" mode that would have the same effect but didn't require fiddling with the OpenSSL configuration or installation.\n>>> \n>>> The attached patch shows how this could work. Thoughts?\n>> - * Initialize a hash context. Note that this implementation is designed\n>> - * to never fail, so this always returns 0.\n>> + * Initialize a hash context.\n>> Regardless of which, we wan't this hunk since the code clearly can return -1.\n> \n> I was a bit puzzled by these comments in that file. While the existing implementations (mostly) never fail, they are clearly not *designed* to never fail, since the parallel OpenSSL implementations can fail (which is the point of this thread). 
So I would remove these comments altogether, really.\n\nThe comment in question was missed in 55fe26a4b58, but I agree that it's a\nfalse claim given the OpenSSL implementation so removing or at least mimicking\nthe comments in cryptohash_openssl.c would be better.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 10:37:12 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 08.03.23 10:37, Daniel Gustafsson wrote:\n> The comment in question was missed in 55fe26a4b58, but I agree that it's a\n> false claim given the OpenSSL implementation so removing or at least mimicking\n> the comments in cryptohash_openssl.c would be better.\n\nI have fixed these comments to match cryptohash_openssl.c.\n\n\n", "msg_date": "Thu, 9 Mar 2023 10:01:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Thu, Mar 09, 2023 at 10:01:14AM +0100, Peter Eisentraut wrote:\n> I have fixed these comments to match cryptohash_openssl.c.\n\nMissed that, thanks for the fix.\n--\nMichael", "msg_date": "Thu, 9 Mar 2023 19:29:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 06.03.23 17:06, Daniel Gustafsson wrote:\n>> On 6 Mar 2023, at 15:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n>>> For readers without all context, wouldn't it be better to encode in the\n>>> function name why we're not just calling a hash like md5? Something like\n>>> fips_allowed_hash() or similar?\n>>\n>> I'd prefer shorter than that --- all these queries are laid out on the\n>> expectation of a very short function name. 
Maybe \"fipshash()\"?\n>>\n>> We could make the comment introducing the function declarations more\n>> elaborate, too.\n> \n> fipshash() with an explanatory comments sounds like a good idea.\n\ncommitted like that\n\n(I'm going to close the CF item and revisit the other test suites for \nthe next release.)\n\n\n", "msg_date": "Mon, 13 Mar 2023 11:06:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 13 Mar 2023, at 11:06, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> On 06.03.23 17:06, Daniel Gustafsson wrote:\n\n>> fipshash() with an explanatory comments sounds like a good idea.\n> \n> committed like that\n\n+1. Looks like there is just a slight diff in the compression.sql test suite.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 13 Mar 2023 11:10:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 04.10.22 17:45, Peter Eisentraut wrote:\n> While working on the column encryption patch, I wanted to check that \n> what is implemented also works in OpenSSL FIPS mode.  I tried running \n> the normal test suites after switching the OpenSSL installation to FIPS \n> mode, but that failed all over the place.  So I embarked on fixing that. \n>  Attached is a first iteration of a patch.\n\nContinuing this, we have fixed many issues since. 
Here is a patch set \nto fix all remaining issues.\n\nv4-0001-citext-Allow-tests-to-pass-in-OpenSSL-FIPS-mode.patch\nv4-0002-pgcrypto-Allow-tests-to-pass-in-OpenSSL-FIPS-mode.patch\n\nThese two are pretty straightforward.\n\nv4-0003-Allow-tests-to-pass-in-OpenSSL-FIPS-mode-TAP-test.patch\n\nThis one does some delicate surgery and could use some thorough review.\n\nv4-0004-Allow-tests-to-pass-in-OpenSSL-FIPS-mode-rest.patch\n\nThis just adds alternative expected files. The question is mainly just \nwhether there are better ways to organize this.\n\nv4-0005-WIP-Use-fipshash-in-brin_multi-test.patch\n\nHere, some previously fixed md5() uses have snuck back in. I will need \nto track down the origin of this and ask for a proper fix there. This \nis just included here for completeness.", "msg_date": "Thu, 5 Oct 2023 15:44:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 5 Oct 2023, at 15:44, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 04.10.22 17:45, Peter Eisentraut wrote:\n>> While working on the column encryption patch, I wanted to check that what is implemented also works in OpenSSL FIPS mode. I tried running the normal test suites after switching the OpenSSL installation to FIPS mode, but that failed all over the place. So I embarked on fixing that. Attached is a first iteration of a patch.\n> \n> Continuing this, we have fixed many issues since. Here is a patch set to fix all remaining issues.\n> \n> v4-0001-citext-Allow-tests-to-pass-in-OpenSSL-FIPS-mode.patch\n> v4-0002-pgcrypto-Allow-tests-to-pass-in-OpenSSL-FIPS-mode.patch\n\n+ERROR: crypt(3) returned NULL\n\nNot within scope here, but I wish we had a better error message here. 
That's for another patch though clearly.\n\n> v4-0003-Allow-tests-to-pass-in-OpenSSL-FIPS-mode-TAP-test.patch\n> \n> This one does some delicate surgery and could use some thorough review.\n\nI don't have a FIPS enabled build handy to test in, but reading the patch I\ndon't see anything that sticks out apart from very minor comments:\n\n+my $md5_works = ($node->psql('postgres', \"select md5('')\") == 0);\n\nI think this warrants an explanatory comment for readers not familiar with\nFIPS, without that it may seem quite an odd test.\n\n+), 0, 'created user with scram password');\n\nTiny nitpick, I think we use SCRAM when writing it in text.\n\n> v4-0004-Allow-tests-to-pass-in-OpenSSL-FIPS-mode-rest.patch\n> \n> This just adds alternative expected files. The question is mainly just whether there are better ways to organize this.\n\nWithout inventing a new structure for alternative outputs I don't see how.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 5 Oct 2023 16:17:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Continuing this, we have fixed many issues since. Here is a patch set \n> to fix all remaining issues.\n\nOn the way to testing this, I discovered that we have a usability\nregression with recent OpenSSL releases. 
The Fedora 35 installation\nI used to use for testing FIPS-mode behavior would produce errors like\n\n select md5('') = 'd41d8cd98f00b204e9800998ecf8427e' AS \"TRUE\";\n- TRUE \n-------\n- t\n-(1 row)\n-\n+ERROR: could not compute MD5 hash: disabled for FIPS\n\nIn the shiny new Fedora 38 installation I just set up for the\nsame purpose, I'm seeing\n\n select md5('') = 'd41d8cd98f00b204e9800998ecf8427e' AS \"TRUE\";\n- TRUE \n-------\n- t\n-(1 row)\n-\n+ERROR: could not compute MD5 hash: unsupported\n\n\nThis is less user-friendly; moreover it indicates that we're\ngoing to get different output depending on the vintage of\nOpenSSL we're testing against, which is going to be a pain for\nexpected-file maintenance.\n\nI think we need to make an effort to restore the old output\nif possible, although I grant that this may be mostly a whim\nof OpenSSL's that we can't do much about.\n\nThe F35 installation has openssl 1.1.1q, where F38 has\nopenssl 3.0.9.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Oct 2023 16:04:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "I found another bit of fun we'll need to deal with: on my F38\nplatform, pgcrypto/3des fails as attached. 
Some googling finds\nthis relevant info:\n\nhttps://github.com/pyca/cryptography/issues/6875\n\nThat is, FIPS deprecation of 3DES is happening even as we speak.\nSo apparently we'll have little choice but to deal with two\ndifferent behaviors for that.\n\nAs before, I'm not too pleased with the user-friendliness\nof the error:\n\n+ERROR: encrypt error: Cipher cannot be initialized\n\nThat's even less useful to a user than \"unsupported\".\n\nFWIW, everything else seems to pass with this patchset.\nI ran check-world as well as the various \"must run manually\"\ntest suites.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 05 Oct 2023 16:55:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 05.10.23 22:04, Tom Lane wrote:\n> On the way to testing this, I discovered that we have a usability\n> regression with recent OpenSSL releases. The Fedora 35 installation\n> I used to use for testing FIPS-mode behavior would produce errors like\n\n> +ERROR: could not compute MD5 hash: disabled for FIPS\n\n> In the shiny new Fedora 38 installation I just set up for the\n> same purpose, I'm seeing\n\n> +ERROR: could not compute MD5 hash: unsupported\n\nThis makes sense, because the older OpenSSL works basically like\n\n if (FIPS_mode()) {\n specific_error();\n }\n\nwhile the new one has all crypto methods in modules, and if you load the \nfips module, then some crypto methods just don't exist.\n\n\n\n", "msg_date": "Fri, 6 Oct 2023 15:44:40 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 05.10.23 22:55, Tom Lane wrote:\n> I found another bit of fun we'll need to deal with: on my F38\n> platform, pgcrypto/3des fails as attached. 
Some googling finds\n> this relevant info:\n> \n> https://github.com/pyca/cryptography/issues/6875\n> \n> That is, FIPS deprecation of 3DES is happening even as we speak.\n> So apparently we'll have little choice but to deal with two\n> different behaviors for that.\n\nHmm, interesting, so maybe there should be a new openssl 3.x release at \nthe end of the year that addresses this?\n\n\n", "msg_date": "Fri, 6 Oct 2023 15:46:24 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 05.10.23 22:55, Tom Lane wrote:\n> I found another bit of fun we'll need to deal with: on my F38\n> platform, pgcrypto/3des fails as attached. Some googling finds\n> this relevant info:\n> \n> https://github.com/pyca/cryptography/issues/6875\n> \n> That is, FIPS deprecation of 3DES is happening even as we speak.\n> So apparently we'll have little choice but to deal with two\n> different behaviors for that.\n> \n> As before, I'm not too pleased with the user-friendliness\n> of the error:\n> \n> +ERROR: encrypt error: Cipher cannot be initialized\n> \n> That's even less useful to a user than \"unsupported\".\n> \n> FWIW, everything else seems to pass with this patchset.\n> I ran check-world as well as the various \"must run manually\"\n> test suites.\n\nI've been trying to get some VM set up with the right Red Hat \nenvironment to be able to reproduce the issues you reported. But \nsomehow switching the OS into FIPS mode messes up the boot environment \nof the VM or something. So I haven't been able to make progress on this.\n\nI suggest that if there are no other concerns, we proceed with the patch \nset as is for now.\n\nThe 3DES deprecation can be addressed by adding another expected file, \nwhich can easily be supplied by someone having this environment running.\n\nThe error message difference in the older OpenSSL version would probably \nneed a small bit of coding. 
But we can leave that as a separate add-on \nproject.\n\n\n\n", "msg_date": "Tue, 14 Nov 2023 11:52:42 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 05.10.23 22:55, Tom Lane wrote:\n>> I found another bit of fun we'll need to deal with: on my F38\n>> platform, pgcrypto/3des fails as attached. Some googling finds\n>> this relevant info:\n>> https://github.com/pyca/cryptography/issues/6875\n>> That is, FIPS deprecation of 3DES is happening even as we speak.\n>> So apparently we'll have little choice but to deal with two\n>> different behaviors for that.\n\n> I've been trying to get some VM set up with the right Red Hat \n> environment to be able to reproduce the issues you reported. But \n> somehow switching the OS into FIPS mode messes up the boot environment \n> of the VM or something. So I haven't been able to make progress on this.\n\nHm. I was just using a native install on a microSD card for my\nraspberry pi ...\n\n> I suggest that if there are no other concerns, we proceed with the patch \n> set as is for now.\n\nAfter thinking about it for awhile, I guess I'm okay with only\nbothering to provide expected-files for FIPS failures under OpenSSL\n3.x (which is how your patch is set up, I believe). While there are\ncertainly still LTS platforms with 1.x, we don't have to consider FIPS\nmode on them to be a supported case.\n\nI'm more concerned about the 3DES situation. Fedora might be a bit\nahead of the curve here, but according to the link above, everybody is\nsupposed to be in compliance by the end of 2023. So I'd be inclined\nto guess that the 3DES-is-rejected case is going to be mainstream\nbefore v17 ships.\n\n> The error message difference in the older OpenSSL version would probably \n> need a small bit of coding. 
But we can leave that as a separate add-on \n> project.\n\nIt's the *newer* version's message that I'm unhappy about ;-).\nBut I agree that that's not a reason to hold up applying what's\nhere. (In reality, people running FIPS mode are probably pretty\naccustomed to seeing this error, so maybe it's not worth the\ntrouble to improve it.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Nov 2023 18:07:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 15 Nov 2023, at 00:07, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> (In reality, people running FIPS mode are probably pretty\n> accustomed to seeing this error, so maybe it's not worth the\n> trouble to improve it.)\n\nIn my experience this holds a lot of truth, this is a common error pattern and\nwhile all improvements to error messages are good, it's not a reason to hold\noff this patch.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 Nov 2023 11:06:12 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 15.11.23 00:07, Tom Lane wrote:\n> I'm more concerned about the 3DES situation. Fedora might be a bit\n> ahead of the curve here, but according to the link above, everybody is\n> supposed to be in compliance by the end of 2023. So I'd be inclined\n> to guess that the 3DES-is-rejected case is going to be mainstream\n> before v17 ships.\n\nRight. It is curious that I have not found any activity in the OpenSSL \nissue trackers about this. 
But if you send me your results file, then I \ncan include it in the patch as an alternative expected.\n\n\n\n", "msg_date": "Wed, 15 Nov 2023 12:44:36 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "> On 15 Nov 2023, at 12:44, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 15.11.23 00:07, Tom Lane wrote:\n>> I'm more concerned about the 3DES situation. Fedora might be a bit\n>> ahead of the curve here, but according to the link above, everybody is\n>> supposed to be in compliance by the end of 2023. So I'd be inclined\n>> to guess that the 3DES-is-rejected case is going to be mainstream\n>> before v17 ships.\n> \n> Right. It is curious that I have not found any activity in the OpenSSL issue trackers about this. But if you send me your results file, then I can include it in the patch as an alternative expected.\n\nAs NIST SP800-131A allows decryption with 3DES and DES I don't think OpenSSL\nwill do much other than move it to the legacy module where it can be used\nopt-in like DES. SKIPJACK is already disallowed since before but is still\ntested with decryption during FIPS validation.\n\nUsing an alternative resultsfile to handle platforms which explicitly remove\ndisallowed ciphers seems like the right choice.\n\nSince the 3DES/DES deprecations aren't limited to FIPS, do we want to do\nanything for pgcrypto where we have DES/3DES encryption? 
Maybe a doc patch\nwhich mentions the deprecation with a link to the SP could be in order?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 Nov 2023 15:25:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Since the 3DES/DES deprecations aren't limited to FIPS, do we want to do\n> anything for pgcrypto where we have DES/3DES encryption? Maybe a doc patch\n> which mentions the deprecation with a link to the SP could be in order?\n\nA docs patch that marks both MD5 and 3DES as deprecated is probably\nappropriate, but it seems like a matter for a separate thread and patch.\n\nIn the meantime, I've done a pass of review of Peter's v4 patches.\nv4-0001 is already committed, so that's not considered here.\n\nv4-0002: I think it is worth splitting up contrib/pgcrypto's\npgp-encrypt test, which has only one test case whose output changes,\nand a bunch of others that don't. v5-0002, attached, does it\nlike that. It's otherwise the same as v4.\n\n(It might be worth doing something similar for uuid_ossp's test,\nbut I have not bothered here. That test script is stable enough\nthat I'm not too worried about future maintenance.)\n\nThe attached 0003, 0004, 0005 patches are identical to Peter's.\nI think that it is possibly worth modifying the password test so that\nwe don't fail to create the roles, so as to reduce the delta between\npassword.out and password_1.out (and thereby ease future maintenance\nof those files). 
However you might disagree, so I split my proposal\nout as a separate patch v5-0007-password-test-delta.patch; you can\ndrop that from the set if you don't like it.\n\nv5-0006-allow-for-disabled-3DES.patch adds the necessary expected\nfile to make that pass on my Fedora 38 system.\n\nWith or without 0007, as you choose, I think it's committable.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 15 Nov 2023 15:29:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On 15.11.23 21:29, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Since the 3DES/DES deprecations aren't limited to FIPS, do we want to do\n>> anything for pgcrypto where we have DES/3DES encryption? Maybe a doc patch\n>> which mentions the deprecation with a link to the SP could be in order?\n> \n> A docs patch that marks both MD5 and 3DES as deprecated is probably\n> appropriate, but it seems like a matter for a separate thread and patch.\n> \n> In the meantime, I've done a pass of review of Peter's v4 patches.\n> v4-0001 is already committed, so that's not considered here.\n> \n> v4-0002: I think it is worth splitting up contrib/pgcrypto's\n> pgp-encrypt test, which has only one test case whose output changes,\n> and a bunch of others that don't. v5-0002, attached, does it\n> like that. It's otherwise the same as v4.\n> \n> (It might be worth doing something similar for uuid_ossp's test,\n> but I have not bothered here. That test script is stable enough\n> that I'm not too worried about future maintenance.)\n> \n> The attached 0003, 0004, 0005 patches are identical to Peter's.\n> I think that it is possibly worth modifying the password test so that\n> we don't fail to create the roles, so as to reduce the delta between\n> password.out and password_1.out (and thereby ease future maintenance\n> of those files). 
However you might disagree, so I split my proposal\n> out as a separate patch v5-0007-password-test-delta.patch; you can\n> drop that from the set if you don't like it.\n> \n> v5-0006-allow-for-disabled-3DES.patch adds the necessary expected\n> file to make that pass on my Fedora 38 system.\n> \n> With or without 0007, as you choose, I think it's committable.\n\nAll done, thanks.\n\n\n\n", "msg_date": "Fri, 17 Nov 2023 19:45:56 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Sat, Nov 18, 2023 at 7:46 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> All done, thanks.\n\nProbably not this thread's fault, but following the breadcrumbs to the\nlast thread to touch the relevant test lines in\nauthentication/001_password, is it expected that we have these\nwarnings?\n\npsql:<stdin>:1: WARNING: roles created by regression test cases\nshould have names starting with \"regress_\"\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:50:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Probably not this thread's fault, but following the breadcrumbs to the\n> last thread to touch the relevant test lines in\n> authentication/001_password, is it expected that we have these\n> warnings?\n\n> psql:<stdin>:1: WARNING: roles created by regression test cases\n> should have names starting with \"regress_\"\n\nI think the policy is that we enforce that for cases reachable\nvia \"make installcheck\" (to avoid possibly clobbering global\nobjects in a live installation), but not for cases only reachable\nvia \"make check\", such as TAP tests. 
So I'm not that concerned\nabout this, although if someone is feeling anal enough to rename\nthe test role I won't stand in the way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Apr 2024 00:00:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" }, { "msg_contents": "On Fri, Apr 19, 2024 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Probably not this thread's fault, but following the breadcrumbs to the\n> > last thread to touch the relevant test lines in\n> > authentication/001_password, is it expected that we have these\n> > warnings?\n>\n> > psql:<stdin>:1: WARNING: roles created by regression test cases\n> > should have names starting with \"regress_\"\n>\n> I think the policy is that we enforce that for cases reachable\n> via \"make installcheck\" (to avoid possibly clobbering global\n> objects in a live installation), but not for cases only reachable\n> via \"make check\", such as TAP tests. So I'm not that concerned\n> about this, although if someone is feeling anal enough to rename\n> the test role I won't stand in the way.\n\nGot it, thanks. Not me, just asking.\n\n\n", "msg_date": "Fri, 19 Apr 2024 16:12:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow tests to pass in OpenSSL FIPS mode" } ]
[ { "msg_contents": "Dear hackers,\n\nI am submitting a patch to expand the label requirements for ltree.\n\nThe current format is restricted to alphanumeric characters, plus _.\nUnfortunately, for non-English labels, this set is insufficient. Rather\nthan figure out how to expand this set to include characters beyond the\nASCII limit, I have instead opted to provide users with some mechanism for\nstoring encoded UTF-8 characters which is widely used: punycode (\nhttps://en.wikipedia.org/wiki/Punycode).\n\nThe punycode range of characters is the exact same set as the existing\nltree range, with the addition of a hyphen (-). Within this system, any\nhuman language can be encoded using just A-Za-z0-9-.\n\nOn top of this, I added support for two more characters: # and ;, which are\nused for HTML entities. Note that & and % have special significance in the\nexisting ltree logic; users would have to encode items as #20; (rather than\n%20). This seems a fair compromise.\n\nSince the encoding could make a regular slug even longer, I have also\ndoubled the character limit, from 256 to 512.\n\nPlease let me know if I can provide any more information or changes.\n\nVery sincerely,\nGaren", "msg_date": "Tue, 4 Oct 2022 12:54:46 -0400", "msg_from": "Garen Torikian <gjtorikian@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Expand character set for ltree labels" }, { "msg_contents": "On Tue, Oct 04, 2022 at 12:54:46PM -0400, Garen Torikian wrote:\n> The punycode range of characters is the exact same set as the existing\n> ltree range, with the addition of a hyphen (-). Within this system, any\n> human language can be encoded using just A-Za-z0-9-.\n\nIIUC ASCII characters like '!' 
and '<' are valid Punycode characters, but\neven with your proposal, those wouldn't be allowed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 15:32:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "No, not quite.\n\nValid Punycode characters are `[A-Za-z0-9-]`. This proposal includes `-`,\nas well as `#` and `;` for HTML entities.\n\nI double-checked the RFC to see the valid Punycode characters and the set\nabove is indeed correct:\nhttps://datatracker.ietf.org/doc/html/draft-ietf-idn-punycode-02#section-5\n\nWhile it would be nice for ltree labels to support *any* printable\ncharacter, it can't because symbols like `!` and `%` already have special\nmeaning in the querying. This proposal leaves those as is and does not\ndepend on any existing special character.\n\nOn Tue, Oct 4, 2022 at 6:32 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Tue, Oct 04, 2022 at 12:54:46PM -0400, Garen Torikian wrote:\n> > The punycode range of characters is the exact same set as the existing\n> > ltree range, with the addition of a hyphen (-). Within this system, any\n> > human language can be encoded using just A-Za-z0-9-.\n>\n> IIUC ASCII characters like '!' and '<' are valid Punycode characters, but\n> even with your proposal, those wouldn't be allowed.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n
", "msg_date": "Tue, 4 Oct 2022 19:16:30 -0400", "msg_from": "Garen Torikian <gjtorikian@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "Garen Torikian <gjtorikian@gmail.com> writes:\n> I am submitting a patch to expand the label requirements for ltree.\n\n> The current format is restricted to alphanumeric characters, plus _.\n> Unfortunately, for non-English labels, this set is insufficient.\n\nHm? Perhaps the docs are a bit unclear about that, but it's not\nrestricted to ASCII alphanumerics. AFAICS the code will accept\nwhatever iswalpha() and iswdigit() will accept in the database's\ndefault locale. There's certainly work that could/should be done\nto allow use of not-so-default locales, but that's not specific\nto ltree. 
I'm not sure that doing an application-side encoding\nis attractive compared to just using that ability directly.\n\nIf you do want to do application-side encoding, I'm unsure why\npunycode would be the choice anyway, as opposed to something\nthat can fit in the existing restrictions.\n\n> On top of this, I added support for two more characters: # and ;, which are\n> used for HTML entities.\n\nThat seems really pretty random.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 14:59:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "Hi Tom,\n\n> Perhaps the docs are a bit unclear about that, but it's not\n> restricted to ASCII alphanumerics. AFAICS the code will accept\n> whatever iswalpha() and iswdigit() will accept in the database's\n> default locale.\n\nSorry but I don't think that is correct. Here is the single\ndefinition check of what constitutes a valid character:\nhttps://github.com/postgres/postgres/blob/c3315a7da57be720222b119385ed0f7ad7c15268/contrib/ltree/ltree.h#L129\n\nAs you can see, there are no `is_*` calls at all. Where in this contrib\npackage do you see `iswalpha`? Perhaps I missed it.\n\n> That seems really pretty random.\n\nOk. I am trying to avoid a situation where other users may wish to use\nother delimiters other than `-`, due to its commonplace presence in words\n(eg., compound ones).\n\nOn Wed, Oct 5, 2022 at 2:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Garen Torikian <gjtorikian@gmail.com> writes:\n> > I am submitting a patch to expand the label requirements for ltree.\n>\n> > The current format is restricted to alphanumeric characters, plus _.\n> > Unfortunately, for non-English labels, this set is insufficient.\n>\n> Hm? Perhaps the docs are a bit unclear about that, but it's not\n> restricted to ASCII alphanumerics. 
AFAICS the code will accept\n> whatever iswalpha() and iswdigit() will accept in the database's\n> default locale. There's certainly work that could/should be done\n> to allow use of not-so-default locales, but that's not specific\n> to ltree. I'm not sure that doing an application-side encoding\n> is attractive compared to just using that ability directly.\n>\n> If you do want to do application-side encoding, I'm unsure why\n> punycode would be the choice anyway, as opposed to something\n> that can fit in the existing restrictions.\n>\n> > On top of this, I added support for two more characters: # and ;, which\n> are\n> > used for HTML entities.\n>\n> That seems really pretty random.\n>\n> regards, tom lane\n>\n
", "msg_date": "Wed, 5 Oct 2022 15:34:49 -0400", "msg_from": "Garen Torikian <gjtorikian@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "Garen Torikian <gjtorikian@gmail.com> writes:\n>> Perhaps the docs are a bit unclear about that, but it's not\n>> restricted to ASCII alphanumerics. AFAICS the code will accept\n>> whatever iswalpha() and iswdigit() will accept in the database's\n>> default locale.\n\n> Sorry but I don't think that is correct. Here is the single\n> definition check of what constitutes a valid character:\n> https://github.com/postgres/postgres/blob/c3315a7da57be720222b119385ed0f7ad7c15268/contrib/ltree/ltree.h#L129\n\n> As you can see, there are no `is_*` calls at all.\n\nDid you chase down what t_isalpha and t_isdigit do?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 15:56:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "After digging into it, you are completely correct. 
I had to do a bit more\nreading to understand the relationships between UTF-8 and wchar, but\nultimately the existing locale support works for my use case.\n\nTherefore I have updated the patch with three much smaller changes:\n\n* Support for `-` in addition to `_`\n* Expanding the limit to 512 chars (from the existing 256); again it's not\nuncommon for non-English strings to be much longer\n* Fixed the documentation to expand on what the ltree label's relationship\nto the DB locale is\n\nThank you,\nGaren\n\nOn Wed, Oct 5, 2022 at 3:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Garen Torikian <gjtorikian@gmail.com> writes:\n> >> Perhaps the docs are a bit unclear about that, but it's not\n> >> restricted to ASCII alphanumerics. AFAICS the code will accept\n> >> whatever iswalpha() and iswdigit() will accept in the database's\n> >> default locale.\n>\n> > Sorry but I don't think that is correct. Here is the single\n> > definition check of what constitutes a valid character:\n> >\n> https://github.com/postgres/postgres/blob/c3315a7da57be720222b119385ed0f7ad7c15268/contrib/ltree/ltree.h#L129\n>\n> > As you can see, there are no `is_*` calls at all.\n>\n> Did you chase down what t_isalpha and t_isdigit do?\n>\n> regards, tom lane\n>", "msg_date": "Wed, 5 Oct 2022 18:05:11 -0400", "msg_from": "Garen Torikian <gjtorikian@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "2022年10月6日(木) 7:05 Garen Torikian <gjtorikian@gmail.com>:\n>\n> After digging into it, you are completely correct. 
I had to do a bit more reading to understand the relationships between UTF-8 and wchar, but ultimately the existing locale support works for my use case.\n>\n> Therefore I have updated the patch with three much smaller changes:\n>\n> * Support for `-` in addition to `_`\n> * Expanding the limit to 512 chars (from the existing 256); again it's not uncommon for non-English strings to be much longer\n> * Fixed the documentation to expand on what the ltree label's relationship to the DB locale is\n>\n> Thank you,\n> Garen\n\nThis entry was marked as \"Needs review\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3929/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Fri, 4 Nov 2022 08:37:09 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "On Thu, 6 Oct 2022 at 03:35, Garen Torikian <gjtorikian@gmail.com> wrote:\n>\n> After digging into it, you are completely correct. 
I had to do a bit more reading to understand the relationships between UTF-8 and wchar, but ultimately the existing locale support works for my use case.\n>\n> Therefore I have updated the patch with three much smaller changes:\n>\n> * Support for `-` in addition to `_`\n> * Expanding the limit to 512 chars (from the existing 256); again it's not uncommon for non-English strings to be much longer\n> * Fixed the documentation to expand on what the ltree label's relationship to the DB locale is\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./0002-Expand-character-set-for-ltree-labels.patch\npatching file contrib/ltree/expected/ltree.out\npatching file contrib/ltree/ltree.h\nHunk #2 FAILED at 126.\n1 out of 2 hunks FAILED -- saving rejects to file contrib/ltree/ltree.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3929.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 17:57:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "Sure. Rebased onto HEAD.\n\nOn Tue, Jan 3, 2023 at 7:27 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Thu, 6 Oct 2022 at 03:35, Garen Torikian <gjtorikian@gmail.com> wrote:\n> >\n> > After digging into it, you are completely correct. 
I had to do a bit\n> more reading to understand the relationships between UTF-8 and wchar, but\n> ultimately the existing locale support works for my use case.\n> >\n> > Therefore I have updated the patch with three much smaller changes:\n> >\n> > * Support for `-` in addition to `_`\n> > * Expanding the limit to 512 chars (from the existing 256); again it's\n> not uncommon for non-English strings to be much longer\n> > * Fixed the documentation to expand on what the ltree label's\n> relationship to the DB locale is\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n> === Applying patches on top of PostgreSQL commit ID\n> e351f85418313e97c203c73181757a007dfda6d0 ===\n> === applying patch ./0002-Expand-character-set-for-ltree-labels.patch\n> patching file contrib/ltree/expected/ltree.out\n> patching file contrib/ltree/ltree.h\n> Hunk #2 FAILED at 126.\n> 1 out of 2 hunks FAILED -- saving rejects to file contrib/ltree/ltree.h.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_3929.log\n>\n> Regards,\n> Vignesh\n>", "msg_date": "Tue, 3 Jan 2023 13:56:49 -0500", "msg_from": "Garen Torikian <gjtorikian@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "On Wed, 4 Jan 2023 at 00:27, Garen Torikian <gjtorikian@gmail.com> wrote:\n>\n> Sure. 
Rebased onto HEAD.\n>\n\nThere is one more merge conflict, please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\neb5ad4ff05fd382ac98cab60b82f7fd6ce4cfeb8 ===\n=== applying patch ./0003-Expand-character-set-for-ltree-labels.patch\npatching file contrib/ltree/expected/ltree.out\nHunk #1 succeeded at 25 with fuzz 2.\nHunk #2 FAILED at 51.\nHunk #3 FAILED at 537.\nHunk #4 succeeded at 1201 with fuzz 2.\n2 out of 4 hunks FAILED -- saving rejects to file\ncontrib/ltree/expected/ltree.out.rej\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 6 Jan 2023 11:30:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "\nOn 2022-10-05 We 18:05, Garen Torikian wrote:\n> After digging into it, you are completely correct. I had to do a bit\n> more reading to understand the relationships between UTF-8 and wchar,\n> but ultimately the existing locale support works for my use case.\n>\n> Therefore I have updated the patch with three much smaller changes:\n>\n> * Support for `-` in addition to `_`\n> * Expanding the limit to 512 chars (from the existing 256); again it's\n> not uncommon for non-English strings to be much longer\n> * Fixed the documentation to expand on what the ltree label's\n> relationship to the DB locale is\n>\n\nRegardless of the punycode issue, allowing hyphens in ltree labels seems\nquite reasonable. 
I haven't reviewed the patch yet, but if it's OK I\nintend to commit it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 6 Jan 2023 10:59:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-10-05 We 18:05, Garen Torikian wrote:\n>> Therefore I have updated the patch with three much smaller changes:\n>> \n>> * Support for `-` in addition to `_`\n>> * Expanding the limit to 512 chars (from the existing 256); again it's\n>> not uncommon for non-English strings to be much longer\n>> * Fixed the documentation to expand on what the ltree label's\n>> relationship to the DB locale is\n\n> Regardless of the punycode issue, allowing hyphens in ltree labels seems\n> quite reasonable. I haven't reviewed the patch yet, but if it's OK I\n> intend to commit it.\n\nNo objection to allowing hyphens. If we're going to increase the length\nlimit, how about using 1000 characters? AFAICS we could even get away\nwith 10K, but it's probably best to hold a bit or two in reserve in case\nwe ever want flags or something associated with labels.\n\n(I've not read the patch.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Jan 2023 11:29:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" }, { "msg_contents": "\nOn 2023-01-06 Fr 11:29, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Regardless of the punycode issue, allowing hyphens in ltree labels seems\n>> quite reasonable. I haven't reviewed the patch yet, but if it's OK I\n>> intend to commit it.\n> No objection to allowing hyphens. If we're going to increase the length\n> limit, how about using 1000 characters? 
AFAICS we could even get away\n> with 10K, but it's probably best to hold a bit or two in reserve in case\n> we ever want flags or something associated with labels.\n>\n\nOK, done that way.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 6 Jan 2023 16:08:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Expand character set for ltree labels" } ]
[ { "msg_contents": "Hi,\nFor contain_placeholders():\n\n+ if (IsA(node, Query))\n+ return query_tree_walker((Query *) node, contain_placeholders,\ncontext, 0);\n+ else if (IsA(node, PlaceHolderVar))\n\nThe `else` is not needed.\n\nFor correlated_t struct, it would be better if the fields have comments.\n\n+ * (for grouping, as an example). So, revert its status\nto\n+ * a full valued entry.\n\nfull valued -> fully valued\n\nCheers\n", "msg_date": "Tue, 4 Oct 2022 14:45:11 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query" }, { "msg_contents": "On 5/10/2022 02:45, Zhihong Yu wrote:\n> Hi,\n> For contain_placeholders():\n> \n> +   if (IsA(node, Query))\n> +       return query_tree_walker((Query *) node, contain_placeholders, \n> context, 0);\n> +   else if (IsA(node, PlaceHolderVar))\nFixed\n> \n> The `else` is not needed.\n> \n> For correlated_t struct, it would be better if the fields have comments.\nOk, I've added some comments.\n> \n> +                    * (for grouping, as an example). 
So, revert its \n> status to\n> +                    * a full valued entry.\n> \n> full valued -> fully valued\nFixed\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Wed, 5 Oct 2022 16:38:10 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query" }, { "msg_contents": "On Wed, Oct 5, 2022 at 4:38 AM Andrey Lepikhov <a.lepikhov@postgrespro.ru>\nwrote:\n\n> On 5/10/2022 02:45, Zhihong Yu wrote:\n> > Hi,\n> > For contain_placeholders():\n> >\n> > + if (IsA(node, Query))\n> > + return query_tree_walker((Query *) node, contain_placeholders,\n> > context, 0);\n> > + else if (IsA(node, PlaceHolderVar))\n> Fixed\n> >\n> > The `else` is not needed.\n> >\n> > For correlated_t struct, it would be better if the fields have comments.\n> Ok, I've added some comments.\n> >\n> > + * (for grouping, as an example). So, revert its\n> > status to\n> > + * a full valued entry.\n> >\n> > full valued -> fully valued\n> Fixed\n>\n> --\n> regards,\n> Andrey Lepikhov\n> Postgres Professional\n>\nHi,\n\n+ List *pulling_quals; /* List of expressions contained pulled\nexpressions */\n\ncontained -> containing\n\n+ /* Does the var already exists in the target list? 
*/\n\n exists -> exist\n\n+ {\"optimize_correlated_subqueries\", PGC_USERSET, QUERY_TUNING_METHOD,\n\nIs it possible that in the future there would be other optimization for\ncorrelated subqueries ?\nIf so, either rename the guc or, make the guc a string which represents an\nenum.\n\nCheers\n", "msg_date": "Wed, 5 Oct 2022 07:11:54 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: [POC] Allow flattening of subquery with a link to upper query" } ]
[ { "msg_contents": "Thinking ahead a bit, we need to add meson.build to version_stamp.pl.\n\nMaybe someone can think of a better sed expression, but this one seems \ngood enough to me for now.", "msg_date": "Wed, 5 Oct 2022 10:34:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Add meson.build to version_stamp.pl" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> Thinking ahead a bit, we need to add meson.build to version_stamp.pl.\n>\n> Maybe someone can think of a better sed expression, but this one seems\n> good enough to me for now.\n[…]\n> +sed_file(\"meson.build\",\n> +\tqq{-e \"1,20s/ version: '[0-9a-z.]*',/ version: '$fullversion',/\"}\n> +);\n\nI think it would be better to not rely on the line numbers, but instead\nlimit it to the project() stanza, something like this:\n\nsed_file(\"meson.build\",\n\tqq{-e \"/^project(/,/^)/ s/ version: '[0-9a-z.]*',/ version: '$fullversion',/\"}\n);\n\n- ilmari\n\n\n", "msg_date": "Wed, 05 Oct 2022 11:36:36 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Add meson.build to version_stamp.pl" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 11:36:36 +0100, Dagfinn Ilmari Mannsåker wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> \n> > Thinking ahead a bit, we need to add meson.build to version_stamp.pl.\n\nGood idea. I was wondering for a moment whether we should put the version\ninto some file instead, and reference that from meson.build - but the\nconvenient way to do that is only in 0.56, which we currently don't want to\nrequire... 
We could work around it, but it doesn't seem necessary for now.\n\n\n> > Maybe someone can think of a better sed expression, but this one seems\n> > good enough to me for now.\n> […]\n> > +sed_file(\"meson.build\",\n> > +\tqq{-e \"1,20s/ version: '[0-9a-z.]*',/ version: '$fullversion',/\"}\n> > +);\n> \n> I think it would be better to not rely on the line numbers, but instead\n> limit it to the project() stanza, something like this:\n\n> sed_file(\"meson.build\",\n> \tqq{-e \"/^project(/,/^)/ s/ version: '[0-9a-z.]*',/ version: '$fullversion',/\"}\n> );\n\nYea, that looks nicer.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 11:20:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add meson.build to version_stamp.pl" }, { "msg_contents": "On Wed, Oct 05, 2022 at 11:20:50AM -0700, Andres Freund wrote:\n> On 2022-10-05 11:36:36 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> sed_file(\"meson.build\",\n>> \tqq{-e \"/^project(/,/^)/ s/ version: '[0-9a-z.]*',/ version: '$fullversion',/\"}\n>> );\n> \n> Yea, that looks nicer.\n\nOh. That's nice..\n--\nMichael", "msg_date": "Thu, 6 Oct 2022 09:31:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add meson.build to version_stamp.pl" } ]
[ { "msg_contents": "Hi,\nI think that 9d58bf\n<https://github.com/postgres/postgres/commit/a9d58bfe8a3ae2254e1553ab76974feeaafa0133>,\nleft a tiny oversight.\n\nguc_strdup uses strdup, so must be cleaned by free not pfree.\nIMO, can be used once free call too.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 5 Oct 2022 09:19:33 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Log details for client certificate failures" }, { "msg_contents": "Em qua., 5 de out. de 2022 às 09:19, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n> I think that 9d58bf\n> <https://github.com/postgres/postgres/commit/a9d58bfe8a3ae2254e1553ab76974feeaafa0133>,\n> left a tiny oversight.\n>\n> guc_strdup uses strdup, so must be cleaned by free not pfree.\n> IMO, can be used once free call too.\n>\nSorry, my fault.\nPlease ignore this.\n\nregards,\nRanier Vilela\n", "msg_date": "Wed, 5 Oct 2022 09:31:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Log details for client certificate failures" } ]
[ { "msg_contents": "Hi,\n\nThis is a patch split off from the initial meson thread [1] as it's\nfunctionally largely independent (as suggested in [2]).\n\nUsing precompiled headers substantially speeds up building for windows, due to\nthe vast amount of headers included via windows.h. A cross build from\nlinux targetting mingw goes from\n\n994.11user 136.43system 0:31.58elapsed 3579%CPU\nto\n422.41user 89.05system 0:14.35elapsed 3562%CPU\n\nThe wins on windows are similar-ish (but I don't have a system at hand just\nnow for actual numbers). Targetting other operating systems the wins are far\nsmaller (tested linux, macOS, FreeBSD).\n\nThis is particularly interesting for cfbot, which spends a lot of time\nbuilding on windows. It also makes developing on windows less painful as the\ngains are bigger when compiling incrementally, because the precompiled headers\ndon't typically have to be rebuilt.\n\n\nAs a prerequisite this requires changing the way FD_SETSIZE is defined when\ntargetting windows.\n\nWhen using precompiled headers we cannot override macros in system headers\nfrom within .c files, as headers are already processed before the #define in\nthe C file is reached.\n\nA few files #define FD_SETSIZE 1024 on windows, as the default is only 64. I\nam hesitant to change FD_SETSIZE globally on windows, due to\nsrc/backend/port/win32/socket.c using it to size on-stack arrays. 
Instead add\n-DFD_SETSIZE=1024 when building the specific targets needing it.\n\nWe likely should move away from using select() in those places, but that's a\nlarger change.\n\n\nMichael, CCing you wrt the second patch, as Thomas noticed [3] that you were\nlooking at where to define FD_SETSIZE.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20211012083721.hvixq4pnh2pixr3j%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/e0c44fb2-8b66-a4b9-b274-7ed3a1a0ab74%40enterprisedb.com\n[3] https://www.postgresql.org/message-id/CA+hUKG+50eOUbN++ocDc0Qnp9Pvmou23DSXu=ZA6fepOcftKqA@mail.gmail.com\n[4] https://www.postgresql.org/message-id/20190826054000.GE7005%40paquier.xyz", "msg_date": "Wed, 5 Oct 2022 12:08:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "meson: Add support for building with precompiled headers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> When using precompiled headers we cannot override macros in system headers\n> from within .c files, as headers are already processed before the #define in\n> the C file is reached.\n\n> A few files #define FD_SETSIZE 1024 on windows, as the default is only 64. I\n> am hesitant to change FD_SETSIZE globally on windows, due to\n> src/backend/port/win32/socket.c using it to size on-stack arrays. 
Instead add\n> -DFD_SETSIZE=1024 when building the specific targets needing it.\n\nColor me confused, but how does it work to #define that from the command\nline if it can't be overridden from within the program?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 16:09:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson: Add support for building with precompiled headers" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 16:09:14 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > When using precompiled headers we cannot override macros in system headers\n> > from within .c files, as headers are already processed before the #define in\n> > the C file is reached.\n> \n> > A few files #define FD_SETSIZE 1024 on windows, as the default is only 64. I\n> > am hesitant to change FD_SETSIZE globally on windows, due to\n> > src/backend/port/win32/socket.c using it to size on-stack arrays. Instead add\n> > -DFD_SETSIZE=1024 when building the specific targets needing it.\n> \n> Color me confused, but how does it work to #define that from the command\n> line if it can't be overridden from within the program?\n\nIf specified on the commandline it's also used when generating the precompiled\nheader - of course that's not possible when it's just #define'd in some .c\nfile.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 13:17:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson: Add support for building with precompiled headers" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-05 16:09:14 -0400, Tom Lane wrote:\n>> Color me confused, but how does it work to #define that from the command\n>> line if it can't be overridden from within the program?\n\n> If specified on the commandline it's also used when generating the precompiled\n> header - of course that's not possible when it's just #define'd in 
some .c\n> file.\n\nAh, so there's a separate cache of precompiled headers for each set of\ncompiler command-line arguments? Got it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 16:21:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson: Add support for building with precompiled headers" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 16:21:55 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-10-05 16:09:14 -0400, Tom Lane wrote:\n> >> Color me confused, but how does it work to #define that from the command\n> >> line if it can't be overridden from within the program?\n> \n> > If specified on the commandline it's also used when generating the precompiled\n> > header - of course that's not possible when it's just #define'd in some .c\n> > file.\n> \n> Ah, so there's a separate cache of precompiled headers for each set of\n> compiler command-line arguments? Got it.\n\nWorse, it builds the precompiled header for each \"target\" (static/shared lib,\nexecutable), right now. Hence I've only added them for targets that have\nmultiple .c files. I've been planning to submit an improvement to meson that\ndoes what you propose, it'd not be hard, but before it's actually usable, it\ndidn't seem worth investing time in that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 13:27:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson: Add support for building with precompiled headers" }, { "msg_contents": "On 05.10.22 21:08, Andres Freund wrote:\n> This is a patch split off from the initial meson thread [1] as it's\n> functionally largely independent (as suggested in [2]).\n> \n> Using precompiled headers substantially speeds up building for windows, due to\n> the vast amount of headers included via windows.h. A cross build from\n> linux targetting mingw goes from\n\nThese patches look ok to me. 
I can't really comment on the Windows \ndetails, but it sounds all reasonable.\n\nSmall issue:\n\n+override CFLAGS += -DFD_SETSIZE=1024\n\n(and similar)\n\nshould be CPPFLAGS.\n\n\n\n", "msg_date": "Thu, 6 Oct 2022 09:06:42 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson: Add support for building with precompiled headers" }, { "msg_contents": "Hi,\n\nOn 2022-10-06 09:06:42 +0200, Peter Eisentraut wrote:\n> On 05.10.22 21:08, Andres Freund wrote:\n> > This is a patch split off from the initial meson thread [1] as it's\n> > functionally largely independent (as suggested in [2]).\n> > \n> > Using precompiled headers substantially speeds up building for windows, due to\n> > the vast amount of headers included via windows.h. A cross build from\n> > linux targetting mingw goes from\n> \n> These patches look ok to me. I can't really comment on the Windows details,\n> but it sounds all reasonable.\n\nThanks for reviewing!\n\n\n> Small issue:\n> \n> +override CFLAGS += -DFD_SETSIZE=1024\n> \n> (and similar)\n> \n> should be CPPFLAGS.\n\nPushed with that adjusted.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Oct 2022 17:25:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson: Add support for building with precompiled headers" } ]
[ { "msg_contents": "I happened to wonder why various places are testing things like\n\n#define ISWORDCHR(c)\t(t_isalpha(c) || t_isdigit(c))\n\nrather than using an isalnum-equivalent test. The direct answer\nis that ts_locale.c/.h provides no such test function, which\napparently is because there's not a lot of potential callers in\nthe core code. However, both pg_trgm and ltree could benefit\nfrom adding one.\n\nThere's no semantic hazard here: the documentation I consulted\nis all pretty explicit that is[w]alnum is true exactly when\neither is[w]alpha or is[w]digit are. For example, POSIX saith\n\n The iswalpha() and iswalpha_l() functions shall test whether wc is a\n wide-character code representing a character of class alpha in the\n current locale, or in the locale represented by locale, respectively;\n see XBD Locale.\n\n The iswdigit() and iswdigit_l() functions shall test whether wc is a\n wide-character code representing a character of class digit in the\n current locale, or in the locale represented by locale, respectively;\n see XBD Locale.\n\n The iswalnum() and iswalnum_l() functions shall test whether wc is a\n wide-character code representing a character of class alpha or digit\n in the current locale, or in the locale represented by locale,\n respectively; see XBD Locale.\n\nWhile I didn't try to actually measure it, these functions don't\nlook remarkably cheap. Doing char2wchar() twice when we only need\nto do it once seems silly, and the libc functions themselves are\nprobably none too cheap for multibyte characters either.\n\nHence, I propose the attached. I got rid of some places that were\nunnecessarily checking pg_mblen before applying t_iseq(), too.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 05 Oct 2022 15:53:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ts_locale.c: why no t_isalnum() test?" 
}, { "msg_contents": "On Wed, Oct 5, 2022 at 3:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I happened to wonder why various places are testing things like\n>\n> #define ISWORDCHR(c) (t_isalpha(c) || t_isdigit(c))\n>\n> rather than using an isalnum-equivalent test. The direct answer\n> is that ts_locale.c/.h provides no such test function, which\n> apparently is because there's not a lot of potential callers in\n> the core code. However, both pg_trgm and ltree could benefit\n> from adding one.\n>\n> There's no semantic hazard here: the documentation I consulted\n> is all pretty explicit that is[w]alnum is true exactly when\n> either is[w]alpha or is[w]digit are. For example, POSIX saith\n>\n> The iswalpha() and iswalpha_l() functions shall test whether wc is a\n> wide-character code representing a character of class alpha in the\n> current locale, or in the locale represented by locale, respectively;\n> see XBD Locale.\n>\n> The iswdigit() and iswdigit_l() functions shall test whether wc is a\n> wide-character code representing a character of class digit in the\n> current locale, or in the locale represented by locale, respectively;\n> see XBD Locale.\n>\n> The iswalnum() and iswalnum_l() functions shall test whether wc is a\n> wide-character code representing a character of class alpha or digit\n> in the current locale, or in the locale represented by locale,\n> respectively; see XBD Locale.\n>\n> While I didn't try to actually measure it, these functions don't\n> look remarkably cheap. Doing char2wchar() twice when we only need\n> to do it once seems silly, and the libc functions themselves are\n> probably none too cheap for multibyte characters either.\n>\n> Hence, I propose the attached. I got rid of some places that were\n> unnecessarily checking pg_mblen before applying t_iseq(), too.\n>\n> regards, tom lane\n>\n>\nI see this is already committed, but I'm curious, why do t_isalpha and\nt_isdigit have the pair of /* TODO */ comments? 
This unfinished business\nisn't explained anywhere in the file.", "msg_date": "Wed, 19 Oct 2022 18:12:47 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ts_locale.c: why no t_isalnum() test?" }, { "msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> I see this is already committed, but I'm curious, why do t_isalpha and\n> t_isdigit have the pair of /* TODO */ comments? This unfinished business\n> isn't explained anywhere in the file.\n\nWe really ought to be consulting the locale/collation passed to\nthe SQL-level operator or function that's calling these things,\ninstead of hard-wiring the database default. Passing that down\nwould take a large and boring patch, but somebody ought to tackle\nit someday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Oct 2022 18:39:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ts_locale.c: why no t_isalnum() test?" } ]
[ { "msg_contents": "Hi,\n\nAs the meson support stands right now, one cannot easily build an extension\nagainst a postgres built with meson. As discussed at the 2022 unconference\n[1], one way to make that easier for extensions is for meson to generate a\ncomplete enough Makefile.global for pgxs.mk to work.\n\nThis is a series of patches towards that goal. The first four simplify some\naspects of Makefile.global, and then the fifth generates Makefile.global etc.\n\n\n0001: autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C\n\n Right now emit the cflags to build the CRC objects into architecture specific\n variables. That doesn't make a whole lot of sense to me - we're never going to\n target x86 and arm at the same time, so they don't need to be separate\n variables.\n\n It might be better to instead continue to have CFLAGS_SSE42 /\n CFLAGS_ARMV8_CRC32C be computed by PGAC_ARMV8_CRC32C_INTRINSICS /\n PGAC_SSE42_CRC32_INTRINSICS and then set CFLAGS_CRC based on those. But it\n seems unlikely that we'd need other sets of flags for those two architectures\n at the same time.\n\n The separate flags could be supported by the meson build instead, it'd just\n add unneccessary complexity.\n\n\n0002: autoconf: Rely on ar supporting index creation\n\n With meson we don't require ranlib. But as it is set in Makefile.global and\n used by several platforms, we'd need to detect it.\n\n FreeBSD, NetBSD, OpenBSD, the only platforms were we didn't use AROPT=crs,\n all have supported the 's' option for a long time.\n\n On macOS we ran ranlib after installing a static library. This was added a\n long time ago, in 58ad65ec2def. I cannot reproduce an issue in more recent\n macOS versions.\n\n I'm on the fence about removing the \"touch $@\" from the rule building static\n libs. That was added because of macos's ranlib not setting fine-grained\n timestamps. On a modern mac ar and ranlib are the same binary, and maybe\n that means that ar has the same issue? 
Both do set fine-grained\n timestamps:\n\n cc ../test.c -o test.o -c\n rm -f test.a; ar csr test.a test.o ; ranlib test.a; python3 -c \"import os;print(os.stat('test.a').st_mtime_ns)\"\n 1664999109090448534\n\n But I don't know how far back that goes. We could just reformulate the\n comment to mention ar instead of ranlib.\n\n\n Tom, CCing you due to 58ad65ec2def and 826eff57c4c.\n\n\n0003: aix: Build SUBSYS.o using $(CC) -r instead of $(LD) -r\n\n This is the only direct use of $(LD), and xlc -r and gcc -r end up with the\n same set of symbols and similar performance (noise is high, so hard to say if\n equivalent).\n\n Now that $(LD) isn't needed anymore, remove it from src/Makefile.global\n\n While at it, add a comment why -r is used.\n\n\n0004: solaris: Check for -Wl,-E directly instead of checking for gnu LD\n\n This allows us to get rid of the nontrivial detection of with_gnu_ld,\n simplifying meson PGXS compatibility. It's also nice to delete libtool.m4.\n\n I don't like the SOLARIS_EXPORT_DYNAMIC variable I invented. If somebody has\n a better idea...\n\n\n0005: meson: Add PGXS compatibility\n\n The actual meson PGXS compatibility. Plenty more replacements to do, but\n suffices to build common extensions on a few platforms.\n\n What level of completeness do we want to require here?\n\n\n A few replacements worth thinking about:\n\n - autodepend - I'm inclined to set it to true when using a gcc like\n compiler. I think extension authors won't be happy if suddenly their\n extensions don't rebuild reliably anymore. An --enable-depend like\n setting doesn't make sense for meson, so we don't have anything to source it\n from.\n - {LDAP,UUID,ICU}_{LIBS,CFLAGS} - might some extension need them?\n\n\n For some others I think it's ok to not have replacement. 
Would be good for\n somebody to check my thinking though:\n\n - LIBOBJS, PG_CRC32C_OBJS, TAS: Not needed because we don't build\n the server / PLs with the generated makefile\n - ZIC: only needed to build tzdata as part of server build\n - MSGFMT et al: translation doesn't appear to be supported by pgxs, correct?\n - XMLLINT et al: docs don't seem to be supported by pgxs\n - GENHTML et al: supporting coverage for pgxs-in-meson build doesn't seem worth it\n - WINDRES: I don't think extensions are bothering to generate rc files on windows\n\n\nMy colleague Bilal has set up testing and verified that a few extensions build\nwith the pgxs compatibility layer, on linux at last. Currently pg_qualstats,\npg_cron, hypopg, orafce, postgis, pg_partman work. He also tested pgbouncer,\nbut for him that failed both with autoconf and meson generated pgxs.\n\nI wonder if and where we could have something like this tested continually?\n\nGreetings,\n\nAndres Freund\n\n[1] https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference#Meson_new_build_system_proposal", "msg_date": "Wed, 5 Oct 2022 13:07:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "meson PGXS compatibility" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On macOS we ran ranlib after installing a static library. This was added a\n> long time ago, in 58ad65ec2def. I cannot reproduce an issue in more recent\n> macOS versions.\n\nI agree that shouldn't be necessary anymore (and if it is, we'll find\nout soon enough).\n\n> I'm on the fence about removing the \"touch $@\" from the rule building static\n> libs. That was added because of macos's ranlib not setting fine-grained\n> timestamps. On a modern mac ar and ranlib are the same binary, and maybe\n> that means that ar has the same issue? 
Both do set fine-grained\n> timestamps:\n\nPlease see the commit message for 826eff57c4c: the issue seems to arise\nonly with specific combinations of software, in particular with non-Apple\nversions of \"make\" (although maybe later Apple builds have fixed make's\nfailure to read sub-second timestamps?). That's a relatively recent hack,\nand I'm very hesitant to conclude that we don't need it anymore just\nbecause you failed to reproduce an issue locally. It very possibly isn't\na problem in a meson build, though, depending on how much meson depends on\nfile timestamps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 16:20:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 16:20:22 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On macOS we ran ranlib after installing a static library. This was added a\n> > long time ago, in 58ad65ec2def. I cannot reproduce an issue in more recent\n> > macOS versions.\n> \n> I agree that shouldn't be necessary anymore (and if it is, we'll find\n> out soon enough).\n\nCool.\n\n\n> > I'm on the fence about removing the \"touch $@\" from the rule building static\n> > libs. That was added because of macos's ranlib not setting fine-grained\n> > timestamps. On a modern mac ar and ranlib are the same binary, and maybe\n> > that means that ar has the same issue? Both do set fine-grained\n> > timestamps:\n> \n> Please see the commit message for 826eff57c4c: the issue seems to arise\n> only with specific combinations of software, in particular with non-Apple\n> versions of \"make\" (although maybe later Apple builds have fixed make's\n> failure to read sub-second timestamps?).\n\nMy understanding, from that commit message, was that the issue originates in\napple's ranlib setting the timestamp to its components but only queries / sets\nit using second granularity. 
I verified that apple's ranlib and ar these days\njust set the current time, at a high granularity, as the mtime. Whether or\nnot make then hides the problem seems not that relevant if the source of the\nproblem is gone, no?\n\n\n> That's a relatively recent hack, and I'm very hesitant to conclude that we\n> don't need it anymore just because you failed to reproduce an issue locally.\n\nYea, that's why I was hesitant as well. I'll reformulate the comment to\nreference ar instead of ranlib instead.\n\n\n> It very possibly isn't a problem in a meson build, though, depending on how\n> much meson depends on file timestamps.\n\nMost of the timestamp sensitive stuff is dealt with by ninja, rather than\nmeson. ninja does take timestamps into account when determining what to\nrebuild - although I suspect this specific problem wouldn't occur even with a\nproblematic ar/ranlib version, because the relevant timestamps will be on the\n.c (etc) files, rather than the .a. Ninja has the whole dependency graph, so\nit knows what dependencies it has to rebuild, without needing to check\ntimestamps of intermediary objects.\n\nNinja does support build rules where it checks the timestamps of build outputs\nto see if targets depending on those build outputs have to be rebuilt, or not,\nbecause the target didn't change. But the relevant option (\"restat\") isn't set\nfor compiler / linker invocations in the build.ninja meson generates.\n\nRestat is however set for the \"custom_command\"s we use to generate all kinds\nof sources. Sometimes that leads to the set of build steps shrinking\nrapidly. E.g. 
a touch src/include/catalog/pg_namespace.dat starts leads to\nninja considering 1135 targets out of date, but as genbki.pl doesn't end up\nchanging any files, it's done immediately after that...\n\n[1/1135 1 0%] Generating src/include/catalog/generated_catalog_headers with a custom command\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 13:49:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> My understanding, from that commit message, was that the issue originates in\n> apple's ranlib setting the timestamp to its components but only queries / sets\n> it using second granularity. I verified that apple's ranlib and ar these days\n> just set the current time, at a high granularity, as the mtime. Whether or\n> not make then hides the problem seems not that relevant if the source of the\n> problem is gone, no?\n\nWell, (a) it seemed to happen in only some circumstances even back then,\nso maybe your testing didn't catch it; and (b) even assuming that Apple\nhas fixed it in recent releases, there may still be people using older,\nun-fixed versions. Why's it such a problem to keep the \"touch\" step?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 05 Oct 2022 16:58:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 16:58:46 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > My understanding, from that commit message, was that the issue originates in\n> > apple's ranlib setting the timestamp to its components but only queries / sets\n> > it using second granularity. I verified that apple's ranlib and ar these days\n> > just set the current time, at a high granularity, as the mtime. 
Whether or\n> > not make then hides the problem seems not that relevant if the source of the\n> > problem is gone, no?\n> \n> Well, (a) it seemed to happen in only some circumstances even back then,\n> so maybe your testing didn't catch it; and (b) even assuming that Apple\n> has fixed it in recent releases, there may still be people using older,\n> un-fixed versions. Why's it such a problem to keep the \"touch\" step?\n\nIt isn't! That's why I said that I was on the fence about removing the touch\nin my first email and then that I'd leave the touch there and just\ns/ranlib/ar/ in my reply to you.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 14:10:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "On 05.10.22 22:07, Andres Freund wrote:\n> My colleague Bilal has set up testing and verified that a few extensions build\n> with the pgxs compatibility layer, on linux at last. Currently pg_qualstats,\n> pg_cron, hypopg, orafce, postgis, pg_partman work. He also tested pgbouncer,\n> but for him that failed both with autoconf and meson generated pgxs.\n\npgbouncer doesn't use pgxs.\n\n\n", "msg_date": "Thu, 6 Oct 2022 11:34:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-06 11:34:26 +0200, Peter Eisentraut wrote:\n> On 05.10.22 22:07, Andres Freund wrote:\n> > My colleague Bilal has set up testing and verified that a few extensions build\n> > with the pgxs compatibility layer, on linux at last. Currently pg_qualstats,\n> > pg_cron, hypopg, orafce, postgis, pg_partman work. He also tested pgbouncer,\n> > but for him that failed both with autoconf and meson generated pgxs.\n> \n> pgbouncer doesn't use pgxs.\n\nAh, right. 
It'd still be interesting to make sure it works, but looks like the\nonly integration point is pg_config --includedir and pg_config --libdir, so\nit should...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 6 Oct 2022 12:11:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 13:07:10 -0700, Andres Freund wrote:\n> 0003: aix: Build SUBSYS.o using $(CC) -r instead of $(LD) -r\n>\n> This is the only direct use of $(LD), and xlc -r and gcc -r end up with the\n> same set of symbols and similar performance (noise is high, so hard to say if\n> equivalent).\n>\n> Now that $(LD) isn't needed anymore, remove it from src/Makefile.global\n>\n> While at it, add a comment why -r is used.\n\nUnfortunately experimenting further with this it turns out I was wrong: While\nxlc -r results in the same set of symbols, that's not true with gcc -r, at\nleast with some versions of gcc. gcc ends up exposing some of the libgcc\nsymbols.\n\nThat can be rectified by adding -nostartfiles -nodefaultlibs, but that\nbasically makes the change as-is pointless.\n\nI think it'd still be good to get rid of setting LD via configure.ac,\nmirroring the detection logic in meson sounds like a bad plan.\n\nGiven this is aix specific, and only the aix linker works on aix (binutils'\ndoesn't), I think the best plan might be to just hardcode ld in the rule\ngenerating postgres.imp.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Oct 2022 12:16:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 13:07:10 -0700, Andres Freund wrote:\n> 0004: solaris: Check for -Wl,-E directly instead of checking for gnu LD\n> \n> This allows us to get rid of the nontrivial detection of with_gnu_ld,\n> simplifying meson PGXS compatibility. 
It's also nice to delete libtool.m4.\n> \n> I don't like the SOLARIS_EXPORT_DYNAMIC variable I invented. If somebody has\n> a better idea...\n\nA cleaner approach could be to add a LDFLAGS_BE and emit the -Wl,-E into it\nfrom both configure and meson, solely based on whether -Wl,-E is supported. I\nhaven't verified cygwin, but on our other platforms that seems to do the right\nthing.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Oct 2022 14:35:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "On 05.10.22 22:07, Andres Freund wrote:\n> 0001: autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C\n\nI wonder, there has been some work lately to use SIMD and such in other \nplaces. Would that affect what kinds of extra CPU-specific compiler \nflags and combinations we might need?\n\nSeems fine otherwise.\n> 0005: meson: Add PGXS compatibility\n> \n> The actual meson PGXS compatibility. Plenty more replacements to do, but\n> suffices to build common extensions on a few platforms.\n> \n> What level of completeness do we want to require here?\n\nI have tried this with a few extensions. Seems to work alright. I \ndon't think we need to overthink this. The way it's set up, if someone \nneeds additional variables set, they can easily be added.\n\n\n\n", "msg_date": "Wed, 12 Oct 2022 07:50:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "On Wed, Oct 12, 2022 at 12:50 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> On 05.10.22 22:07, Andres Freund wrote:\n> > 0001: autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C\n>\n> I wonder, there has been some work lately to use SIMD and such in other\n> places. Would that affect what kinds of extra CPU-specific compiler\n> flags and combinations we might need?\n\nIn short, no. 
The functionality added during this cycle only uses\ninstructions available by default on the platform. CRC on Arm depends on\narmv8-a+crc, and CRC on x86-64 depends on SSE4.2. We can't assume those\ncurrently.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Oct 12, 2022 at 12:50 PM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:>> On 05.10.22 22:07, Andres Freund wrote:> > 0001: autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C>> I wonder, there has been some work lately to use SIMD and such in other> places.  Would that affect what kinds of extra CPU-specific compiler> flags and combinations we might need?In short, no. The functionality added during this cycle only uses instructions available by default on the platform. CRC on Arm depends on armv8-a+crc, and CRC on x86-64 depends on SSE4.2. We can't assume those currently.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 12 Oct 2022 13:53:05 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-10 14:35:14 -0700, Andres Freund wrote:\n> On 2022-10-05 13:07:10 -0700, Andres Freund wrote:\n> > 0004: solaris: Check for -Wl,-E directly instead of checking for gnu LD\n> > \n> > This allows us to get rid of the nontrivial detection of with_gnu_ld,\n> > simplifying meson PGXS compatibility. It's also nice to delete libtool.m4.\n> > \n> > I don't like the SOLARIS_EXPORT_DYNAMIC variable I invented. If somebody has\n> > a better idea...\n> \n> A cleaner approach could be to add a LDFLAGS_BE and emit the -Wl,-E into it\n> from both configure and meson, solely based on whether -Wl,-E is supported. I\n> haven't verified cygwin, but on our other platforms that seems to do the right\n> thing.\n\nI think that does look better. 
See the attached 0003.\n\n\nThe attached v3 of this patch has an unchanged 0001 (CRC cflags).\n\nFor 0002, I still removed LD from Makefile.global, but instead just hardcoded\nld in the export file generation portion of the backend build - there's no\nworking alternative linkers, and we already hardcode a bunch of other paths in\nmkldexport.\n\n0003 is changed significantly - as proposed in the message quoted above, I\nintroduced LDFLAGS_EX_BE and moved the detection -Wl,-E (I used\n-Wl,--export-dynamic, which we previously only used on FreeBSD) into\nconfigure, getting rid of export_dynamic.\n\n0004, the patch introducing PGXS compat, saw a few changes too:\n- I implemented one of the FIXMEs, the correct determination of strip flags\n- I moved the bulk of the pgxs compat code to src/makefiles/meson.build - imo\n src/meson.build got bulked up too much with pgxs-emulation code\n- some minor cleanups\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 12 Oct 2022 22:16:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-10-12 07:50:07 +0200, Peter Eisentraut wrote:\n> On 05.10.22 22:07, Andres Freund wrote:\n> > 0001: autoconf: Unify CFLAGS_SSE42 and CFLAGS_ARMV8_CRC32C\n> \n> I wonder, there has been some work lately to use SIMD and such in other\n> places. Would that affect what kinds of extra CPU-specific compiler flags\n> and combinations we might need?\n\nThe current infrastructure is very CRC specific, so I don't think this change\nwould stand in the way of using sse specific flags in other places.\n\n\n> > 0005: meson: Add PGXS compatibility\n> > \n> > The actual meson PGXS compatibility. Plenty more replacements to do, but\n> > suffices to build common extensions on a few platforms.\n> > \n> > What level of completeness do we want to require here?\n> \n> I have tried this with a few extensions. Seems to work alright. 
I don't\n> think we need to overthink this. The way it's set up, if someone needs\n> additional variables set, they can easily be added.\n\nYea, I am happy enough with it now that the bulk is out of src/meson.build and\nstrip isn't set to an outright wrong value.\n\nThanks!\n\nAndres\n\n\n", "msg_date": "Wed, 12 Oct 2022 22:23:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "On 13.10.22 07:23, Andres Freund wrote:\n>>> 0005: meson: Add PGXS compatibility\n>>>\n>>> The actual meson PGXS compatibility. Plenty more replacements to do, but\n>>> suffices to build common extensions on a few platforms.\n>>>\n>>> What level of completeness do we want to require here?\n>>\n>> I have tried this with a few extensions. Seems to work alright. I don't\n>> think we need to overthink this. The way it's set up, if someone needs\n>> additional variables set, they can easily be added.\n> \n> Yea, I am happy enough with it now that the bulk is out of src/meson.build and\n> strip isn't set to an outright wrong value.\n\nHow are you planning to proceed with this? I thought it was ready, but \nit hasn't moved in a while.\n\n\n\n", "msg_date": "Thu, 1 Dec 2022 08:43:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-12-01 08:43:19 +0100, Peter Eisentraut wrote:\n> On 13.10.22 07:23, Andres Freund wrote:\n> > > > 0005: meson: Add PGXS compatibility\n> > > > \n> > > > The actual meson PGXS compatibility. Plenty more replacements to do, but\n> > > > suffices to build common extensions on a few platforms.\n> > > > \n> > > > What level of completeness do we want to require here?\n> > > \n> > > I have tried this with a few extensions. Seems to work alright. I don't\n> > > think we need to overthink this. 
The way it's set up, if someone needs\n> > > additional variables set, they can easily be added.\n> > \n> > Yea, I am happy enough with it now that the bulk is out of src/meson.build and\n> > strip isn't set to an outright wrong value.\n> \n> How are you planning to proceed with this? I thought it was ready, but it\n> hasn't moved in a while.\n\nI basically was hoping for a review of the prerequisite patches I posted at\n[1], particularly 0003 (different approach than before, comparatively large).\n\nHere's an updated version of the patches. There was a stupid copy-paste bug in\nthe prior version of the 0003 / export_dynamic patch.\n\nI'll push 0001, 0002 shortly. I don't think 0002 is the most elegant approach,\nbut it's not awful. I'd still like some review for 0003, but will try to push\nit in a few days if that's not forthcoming.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20221013051648.ufz7ud2b5tioctyt%40awork3.anarazel.de", "msg_date": "Thu, 1 Dec 2022 12:17:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" }, { "msg_contents": "Hi,\n\nOn 2022-12-01 12:17:51 -0800, Andres Freund wrote:\n> I'll push 0001, 0002 shortly. I don't think 0002 is the most elegant approach,\n> but it's not awful. I'd still like some review for 0003, but will try to push\n> it in a few days if that's not forthcoming.\n\nPushed 0003 (move export_dynamic determination to configure.ac) and 0004 (PGXS\ncompatibility). Hope there's no fallout from 0003.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Dec 2022 19:05:53 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson PGXS compatibility" } ]
[ { "msg_contents": "Hello:\n\nI’m working on developing a testing harness for the DDL Deparser being\nworked on in [1], please apply the patches in [1] before applying this\npatch. I think the testing harness needs to achieve the following\ngoals:\n\n1. The deparsed JSON output is as expected.\n2. The SQL commands re-formed from deparsed JSON should make the same\nschema change as the original SQL commands.\n3. Any DDL change without modifying the deparser should fail the\ntesting harness.\n\nBased on these 3 goals, we think the deparser testing harness should\nhave 2 parts: the first part is unit testing to cover the first two\ngoals and the second part is integrating deparser test with pg_regress\nto cover the third goal.\n\nI think the unit test part can be based on\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=src/test/modules/test_ddl_deparse;hb=HEAD.\nWe can improve upon this test by using it to validate the output JSON\nand the re-formed SQL from the deparsed JSON.\n\n[2] and [3] proposed using regression tests to test deparser and\nprovided an implementation using TAP framework. I made some changes\nwhich enable testing any SQL file under the folder provided in\n$inputdir variable. This implementation enables us to run test cases\nunder regression tests folder, or just any test cases using a SQL\nfile.\n\nI came up with some ideas during the investigation and want to collect\nsome feedback:\n1, Currently we want to utilize the test cases from regression tests.\nHowever you will find that many test cases end with DROP commands. In\ncurrent deparser testing approach proposed in [2] and [3], we compare\nthe pg_dump schema results between the original SQL scripts and\ndeparser generated commands. Because most test cases end with DROP\ncommand, the schema will not be shown in pg_dump, so the test coverage\nis vastly reduced. 
Any suggestion to this problem?\n\n2, We found that DROP command are not returned by\npg_event_trigger_ddl_commands() function in ddl_command_end trigger,\nbut it’s caught by ddl_command_end trigger. Currently, we catch DROP\ncommand in sql_drop trigger. It’s unclear why\npg_event_trigger_ddl_commands() function is designed to not return\nDROP command.\n\n3, For unsupported DDL commands by the deparser, the current\nimplementation just skips them silently. So we cannot detect\nunsupported DDL commands easily. Customers may also want the deparser\nrelated features like logical replication to be executed in a strict\nmode, so that the system can warn them when deparser can not deparse\nsome DDL command. So I propose to introduce a new GUC such as\n“StopOnDeparserUnsupportedCommand = true/false” to allow the deparser\nto execute in strict mode, in which an unsupported DDL command will\nraise an error.\n\n4, We found that the event trigger function\npg_event_trigger_ddl_commands() only returns subcommands, and deparser\nis deparsing subcommands returned by this function. The deparser works\non subcommand level by using this function, but the deparser is\ndesigned to deparse the complete command to JSON output. So there is a\nmismatch here, what do you think about this problem? Should the\ndeparser work at subcommand level? 
Or should we provide some event\ntrigger function which can return the complete command instead of\nsubcommands?\n\nYour feedback is appreciated.\n\nRegards,\nRunqi Tian\nAmazon RDS/Aurora for PostgreSQL\n\n[1] https://www.postgresql.org/message-id/CALDaNm0VnaCg__huSDW%3Dn%3D_rSGGES90cpOtqwZeWnA6muoz3oA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAD21AoBVCoPPRKvU_5-%3DwEXsa92GsNJFJOcYyXzvoSEJCx5dKw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org", "msg_date": "Wed, 5 Oct 2022 17:17:07 -0400", "msg_from": "Runqi Tian <runqidev@gmail.com>", "msg_from_op": true, "msg_subject": "Testing DDL Deparser" }, { "msg_contents": "Hello\n\nOverall, many thanks for working on this. I hope that the objectives\ncan be fulfilled, so that we can have dependable DDL replication soon.\n\nI haven't read the patch at all, so I can't comment on what you've done,\nbut I have comments to some of your questions:\n\nOn 2022-Oct-05, Runqi Tian wrote:\n\n> I came up with some ideas during the investigation and want to collect\n> some feedback:\n> 1, Currently we want to utilize the test cases from regression tests.\n> However you will find that many test cases end with DROP commands. In\n> current deparser testing approach proposed in [2] and [3], we compare\n> the pg_dump schema results between the original SQL scripts and\n> deparser generated commands. Because most test cases end with DROP\n> command, the schema will not be shown in pg_dump, so the test coverage\n> is vastly reduced. Any suggestion to this problem?\n\nThe whole reason some objects are *not* dropped is precisely so that\nthis type of testing has something to work on. If we find that there\nare object types that would be good to have in order to increase\ncoverage, what we can do is change the .sql files to not drop those\nobjects. This should be as minimal as possible (i.e. 
we don't need tons\nof tables that are all essentially identical, just a representative\nbunch of objects of different types).\n\n> 2, We found that DROP command are not returned by\n> pg_event_trigger_ddl_commands() fucntion in ddl_command_end trigger,\n> but it’s caught by ddl_command_end trigger. Currently, we catch DROP\n> command in sql_drop trigger. It’s unclear why\n> pg_event_trigger_ddl_commands() function is designed to not return\n> DROP command.\n\nYeah, the idea is that a DDL processor needs to handle both the DROP and\nthe other cases separately in these two event types. As I recall, we\nneeded to handle DROP separately because there was no way to collect the\nnecessary info otherwise.\n\n> 3, For unsupported DDL commands by the deparser, the current\n> implementation just skips them silently. So we cannot detect\n> unsupported DDL commands easily. Customers may also want the deparser\n> related features like logical replication to be executed in a strict\n> mode, so that the system can warn them when deparser can not deparse\n> some DDL command. So I propose to introduce a new GUC such as\n> “StopOnDeparserUnsupportedCommand = true/false” to allow the deparser\n> to execute in strict mode, in which an unsupported DDL command will\n> raise an error.\n\nNo opinion on this. I don't think the deparser should be controlled by\nindividual GUCs, since it will probably have multiple simultaneous uses.\n\n> 4, We found that the event trigger function\n> pg_event_trigger_ddl_commands() only returns subcommands, and deparser\n> is deparsing subcommands returned by this function. The deparser works\n> on subcommand level by using this function, but the deparser is\n> designed to deparse the complete command to JSON output. So there is a\n> mismatch here, what do you think about this problem? Should the\n> deparser work at subcommand level? 
Or should we provide some event\n> trigger function which can return the complete command instead of\n> subcommands?\n\nNot clear on what this means. Are you talking about ALTER TABLE\nsubcommands? If so, then what you have to do (ISTM) is construct a\nsingle ALTER TABLE command containing several subcommands, when that is\nwhat is being deparsed; the reason for the user to submit several\nsubcommands is to apply optimizations such as avoiding multiple table\nrewrites for several operations, when they can all share a single table\nrewrite. Therefore I think replaying such a command should try and do\nthe same, if at all possible.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n", "msg_date": "Thu, 6 Oct 2022 18:22:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Testing DDL Deparser" }, { "msg_contents": "Hello:\n\nMany thanks for providing feedbacks and sorry for late reply. I\nupdated the testing harness, please apply patches in [1] before apply\nthe attached patch.\n\n>Not clear on what this means. Are you talking about ALTER TABLE\n>subcommands? If so, then what you have to do (ISTM) is construct a\n>single ALTER TABLE command containing several subcommands, when that is\n>what is being deparsed; the reason for the user to submit several\n>subcommands is to apply optimizations such as avoiding multiple table\n>rewrites for several operations, when they can all share a single table\n>rewrite. Therefore I think replaying such a command should try and do\n>the same, if at all possible.\n\nMy question regarding subcommand is actually on commands other than\nALTER TABLE. 
Let me give an example (You can also find this example in\nthe patch), when command like\n\nCREATE SCHEMA element_test CREATE TABLE foo (id int)\n\nis caught by ddl_command_end trigger, function\npg_event_trigger_ddl_commands() will return 2 records which I called\nas subcommands in the previous email. One is deparsed as\n\nCREATE SCHEMA element_test,\nanother is deparsed as\nCREATE TABLE element_test.foo (id pg_catalog.int4 ).\n\nIs this behavior expected? I thought the deparser is designed to\ndeparse the entire command to a single command instead of dividing\nthem into 2 commands.\n\n>The whole reason some objects are *not* dropped is precisely so that\n>this type of testing has something to work on. If we find that there\n>are object types that would be good to have in order to increase\n>coverage, what we can do is change the .sql files to not drop those\n>objects. This should be as minimal as possible\n\nIt seems that keeping separate test cases in deparser tests folder is\nbetter than using the test cases in core regression tests folder\ndirectly. I will write some test cases in the new deparser test\nfolder.\n\n>Yeah, the idea is that a DDL processor needs to handle both the DROP and\n>the other cases separately in these two event types. As I recall, we\n>needed to handle DROP separately because there was no way to collect the\n>necessary info otherwise.\n\nI see, it seems that a function to deparse DROP command to JSON output\nis needed but not provided yet. I implemented a function\ndeparse_drop_ddl() in the testing harness, maybe we could consider\nexposing a function in engine to deparse DROP command as\ndeparse_ddl_command() does.\n\n>No opinion on this. I don't think the deparser should be controlled by\n>individual GUCs, since it will probably have multiple simultaneous uses.\n\nI see. 
If the ddl command is not supported, the sub node will not\nreceive the ddl command, so the sub node will dump different results.\nWe are still able to detect the error. But because the error message\nis not explicit, the developer needs to do some investigations in log\nto find out the cause.\n\nAbout the new implementation, I divided the 3 testing goals into 4 goals, from:\n\n>1. The deparsed JSON output is as expected.\n>2. The SQL commands re-formed from deparsed JSON should make the same schema change as the original SQL commands.\n>3. Any DDL change without modifying the deparser should fail the testing harness.\n\nupdated to:\n\n1, The deparsed JSON is the same as the expected string\n2, The reformed SQL command is the same as the expected string\n3, The original command and re-formed command can dump the same schema\n4, Any DDL change without modifying the deparser should fail the\ntesting harness.\n\nThis new implementation achieves the first 3 goals. For the first 2\ngoals, the implementation is the same as test_ddl_deparse, I just\nchanged the notice content from command tag to deparsed JSON and\nreformed SQL command. For the 3rd goal, it’s implemented using TAP\nframework. The pub node will run the same test cases again, the SQL\nfiles will be executed sequentially. After the execution of each SQL\nfile, the reformed commands will be applied to the sub node and the\ndumped results from pub node and sub node will be compared.\n\nFor the next step, I’m going to support goal 4. Because if any DDL\nchange is made, the core regression test cases should also be changed.\nWe need to execute the core regression test cases with DDL event\ntriggers and the DDL deparser enabled. 
There are 2 solutions in my\nmind:\n\n1, Build DDL event triggers and DDL deparser into pg_regress tests so\nthat DDL commands in these tests can be captured and deparsed.\n2, Let the goal 3 implementation, aka the TAP test to execute test\ncases from pg_regress, if sub and pub node don’t dump the same\nresults, some DDL commands must be changed.\n\nSolution 1 is more lighter weight as we only need to run pg_regress\nonce. Any other thoughts?\n\nNext I’m going to add more test cases for each command type into this framework.\n\nRegards,\nRunqi Tian\nAmazon RDS/Aurora for PostgreSQL\n\n[1] https://www.postgresql.org/message-id/CALDaNm08gZq9a7xnsbaJMmHmi29_kbEuyShHHfxAKLXPh6btWQ%40mail.gmail.com\n\n\nOn Thu, Oct 6, 2022 at 12:22 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello\n>\n> Overall, many thanks for working on this. I hope that the objectives\n> can be fulfilled, so that we can have dependable DDL replication soon.\n>\n> I haven't read the patch at all, so I can't comment on what you've done,\n> but I have comments to some of your questions:\n>\n> On 2022-Oct-05, Runqi Tian wrote:\n>\n> > I came up with some ideas during the investigation and want to collect\n> > some feedback:\n> > 1, Currently we want to utilize the test cases from regression tests.\n> > However you will find that many test cases end with DROP commands. In\n> > current deparser testing approach proposed in [2] and [3], we compare\n> > the pg_dump schema results between the original SQL scripts and\n> > deparser generated commands. Because most test cases end with DROP\n> > command, the schema will not be shown in pg_dump, so the test coverage\n> > is vastly reduced. Any suggestion to this problem?\n>\n> The whole reason some objects are *not* dropped is precisely so that\n> this type of testing has something to work on. 
If we find that there\n> are object types that would be good to have in order to increase\n> coverage, what we can do is change the .sql files to not drop those\n> objects. This should be as minimal as possible (i.e. we don't need tons\n> of tables that are all essentially identical, just a representative\n> bunch of objects of different types).\n>\n> > 2, We found that DROP command are not returned by\n> > pg_event_trigger_ddl_commands() fucntion in ddl_command_end trigger,\n> > but it’s caught by ddl_command_end trigger. Currently, we catch DROP\n> > command in sql_drop trigger. It’s unclear why\n> > pg_event_trigger_ddl_commands() function is designed to not return\n> > DROP command.\n>\n> Yeah, the idea is that a DDL processor needs to handle both the DROP and\n> the other cases separately in these two event types. As I recall, we\n> needed to handle DROP separately because there was no way to collect the\n> necessary info otherwise.\n>\n> > 3, For unsupported DDL commands by the deparser, the current\n> > implementation just skips them silently. So we cannot detect\n> > unsupported DDL commands easily. Customers may also want the deparser\n> > related features like logical replication to be executed in a strict\n> > mode, so that the system can warn them when deparser can not deparse\n> > some DDL command. So I propose to introduce a new GUC such as\n> > “StopOnDeparserUnsupportedCommand = true/false” to allow the deparser\n> > to execute in strict mode, in which an unsupported DDL command will\n> > raise an error.\n>\n> No opinion on this. I don't think the deparser should be controlled by\n> individual GUCs, since it will probably have multiple simultaneous uses.\n>\n> > 4, We found that the event trigger function\n> > pg_event_trigger_ddl_commands() only returns subcommands, and deparser\n> > is deparsing subcommands returned by this function. 
The deparser works\n> > on subcommand level by using this function, but the deparser is\n> > designed to deparse the complete command to JSON output. So there is a\n> > mismatch here, what do you think about this problem? Should the\n> > deparser work at subcommand level? Or should we provide some event\n> > trigger function which can return the complete command instead of\n> > subcommands?\n>\n> Not clear on what this means. Are you talking about ALTER TABLE\n> subcommands? If so, then what you have to do (ISTM) is construct a\n> single ALTER TABLE command containing several subcommands, when that is\n> what is being deparsed; the reason for the user to submit several\n> subcommands is to apply optimizations such as avoiding multiple table\n> rewrites for several operations, when they can all share a single table\n> rewrite. Therefore I think replaying such a command should try and do\n> the same, if at all possible.\n>\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n> \"Crear es tan difícil como ser libre\" (Elsa Triolet)", "msg_date": "Thu, 20 Oct 2022 16:53:51 -0400", "msg_from": "Runqi Tian <runqidev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Testing DDL Deparser" }, { "msg_contents": "On 2022-Oct-20, Runqi Tian wrote:\n\n> My question regarding subcommand is actually on commands other than\n> ALTER TABLE. 
Let me give an example (You can also find this example in\n> the patch), when command like\n> \n> CREATE SCHEMA element_test CREATE TABLE foo (id int)\n> \n> is caught by ddl_command_end trigger, function\n> pg_event_trigger_ddl_commands() will return 2 records which I called\n> as subcommands in the previous email.\n\nAh, right.\n\nI don't remember why we made these commands be separate; but for\ninstance if you try to add a SERIAL column you'll also see one command\nto create the sequence, then the table, then the sequence gets its OWNED\nBY the column.\n\nI think the point is that we want to have some regularity so that an\napplication can inspect the JSON blobs and perhaps modify them; if you\nmake a bunch of sub-objects, this becomes more difficult. For DDL\nreplication purposes perhaps this isn't very useful (since you just grab\nit and execute on the other side as-is), but other use cases might have\nother ideas.\n\n> Is this behavior expected? I thought the deparser is designed to\n> deparse the entire command to a single command instead of dividing\n> them into 2 commands.\n\nIt is expected.\n\n> It seems that keeping separate test cases in deparser tests folder is\n> better than using the test cases in core regression tests folder\n> directly. I will write some test cases in the new deparser test\n> folder.\n\nWell, the reason to use the regular regression tests rather than\nseparate, is that when a developer adds a new feature, they will add\ntest cases for it in regular regression tests, so deparsing of their\ncommand will be automatically picked up by the DDL-deparse testing\nframework. We discussed at the time that another option would be to\nhave patch reviewers ensure that the added DDL commands are also tested\nin the DDL-deparse framework, but nobody wants to add yet another thing\nthat we have to remember (or, more likely, forget).\n\n> I see, it seems that a function to deparse DROP command to JSON output\n> is needed but not provided yet. 
I implemented a function\n> deparse_drop_ddl() in the testing harness, maybe we could consider\n> exposing a function in engine to deparse DROP command as\n> deparse_ddl_command() does.\n\nNo objection against this idea.\n\n> updated to:\n> \n> 1, The deparsed JSON is the same as the expected string\n\nI would rather say \"has the same effects as\".\n\n> 1, Build DDL event triggers and DDL deparser into pg_regress tests so\n> that DDL commands in these tests can be captured and deparsed.\n> 2, Let the goal 3 implementation, aka the TAP test to execute test\n> cases from pg_regress, if sub and pub node don’t dump the same\n> results, some DDL commands must be changed.\n> \n> Solution 1 is more lighter weight as we only need to run pg_regress\n> once. Any other thoughts?\n\nWe have several runs of pg_regress already -- apart from the normal one,\nwe run it once in recovery/t/027_stream_regress.pl and once in the\npg_upgrade test. I'm not sure that we necessarily need to avoid another\none here, particularly if avoiding it would potentially pollute the\nresults for the regular tests. I am okay with solution 2 personally.\n\nIf we really wanted to optimize this, perhaps we should try to drive all\nthree uses (pg_upgrade, stream_regress, this new test) from a single\npg_regress run. 
But ISTM we don't *have* to.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n", "msg_date": "Mon, 24 Oct 2022 13:29:10 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Testing DDL Deparser" }, { "msg_contents": "Hi,\n\nThanks for working on this and for the feedback!\n\nI've added the updated deparser testing module to the DDL replication\nthread in [1].\n\nWe'll add more test cases to the testing module and continue the\ndiscussion there.\n\n[1] https://www.postgresql.org/message-id/CAAD30U%2BA%3D2rjZ%2BxejNz%2Be1A%3DudWPQMxHD8W48nbhxwJRfw_qrA%40mail.gmail.com\n\nRegards,\nZheng\n\nOn Mon, Oct 24, 2022 at 7:29 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-20, Runqi Tian wrote:\n>\n> > My question regarding subcommand is actually on commands other than\n> > ALTER TABLE. Let me give an example (You can also find this example in\n> > the patch), when command like\n> >\n> > CREATE SCHEMA element_test CREATE TABLE foo (id int)\n> >\n> > is caught by ddl_command_end trigger, function\n> > pg_event_trigger_ddl_commands() will return 2 records which I called\n> > as subcommands in the previous email.\n>\n> Ah, right.\n>\n> I don't remember why we made these commands be separate; but for\n> instance if you try to add a SERIAL column you'll also see one command\n> to create the sequence, then the table, then the sequence gets its OWNED\n> BY the column.\n>\n> I think the point is that we want to have some regularity so that an\n> application can inspect the JSON blobs and perhaps modify them; if you\n> make a bunch of sub-objects, this becomes more difficult. 
For DDL\n> replication purposes perhaps this isn't very useful (since you just grab\n> it and execute on the other side as-is), but other use cases might have\n> other ideas.\n>\n> > Is this behavior expected? I thought the deparser is designed to\n> > deparse the entire command to a single command instead of dividing\n> > them into 2 commands.\n>\n> It is expected.\n>\n> > It seems that keeping separate test cases in deparser tests folder is\n> > better than using the test cases in core regression tests folder\n> > directly. I will write some test cases in the new deparser test\n> > folder.\n>\n> Well, the reason to use the regular regression tests rather than\n> separate, is that when a developer adds a new feature, they will add\n> test cases for it in regular regression tests, so deparsing of their\n> command will be automatically picked up by the DDL-deparse testing\n> framework. We discussed at the time that another option would be to\n> have patch reviewers ensure that the added DDL commands are also tested\n> in the DDL-deparse framework, but nobody wants to add yet another thing\n> that we have to remember (or, more likely, forget).\n>\n> > I see, it seems that a function to deparse DROP command to JSON output\n> > is needed but not provided yet. 
I implemented a function\n> > deparse_drop_ddl() in the testing harness, maybe we could consider\n> > exposing a function in engine to deparse DROP command as\n> > deparse_ddl_command() does.\n>\n> No objection against this idea.\n>\n> > updated to:\n> >\n> > 1, The deparsed JSON is the same as the expected string\n>\n> I would rather say \"has the same effects as\".\n>\n> > 1, Build DDL event triggers and DDL deparser into pg_regress tests so\n> > that DDL commands in these tests can be captured and deparsed.\n> > 2, Let the goal 3 implementation, aka the TAP test to execute test\n> > cases from pg_regress, if sub and pub node don’t dump the same\n> > results, some DDL commands must be changed.\n> >\n> > Solution 1 is more lighter weight as we only need to run pg_regress\n> > once. Any other thoughts?\n>\n> We have several runs of pg_regress already -- apart from the normal one,\n> we run it once in recovery/t/027_stream_regress.pl and once in the\n> pg_upgrade test. I'm not sure that we necessarily need to avoid another\n> one here, particularly if avoiding it would potentially pollute the\n> results for the regular tests. I am okay with solution 2 personally.\n>\n> If we really wanted to optimize this, perhaps we should try to drive all\n> three uses (pg_upgrade, stream_regress, this new test) from a single\n> pg_regress run. But ISTM we don't *have* to.\n>\n> --\n> Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n> \"Hay dos momentos en la vida de un hombre en los que no debería\n> especular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n>\n>\n\n\n", "msg_date": "Mon, 12 Dec 2022 00:05:22 -0500", "msg_from": "Zheng Li <zhengli10@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Testing DDL Deparser" } ]
[ { "msg_contents": "Hi,\n\nTo the developer(s) who work(s) on pg_statsinfo, are there plans to\nsupport PG15 and when might pg_statsinfo v15 source be released?\nI can see that for v14 of pg_statsinfo there is an incompatibility\nwith the PG15 autovacuum log (i.e. in PG15 some existing autovacuum\nlog fields have been removed and some new fields have been added). I\nalso noticed that PG15 changes how shared libraries must request\nadditional shared memory during initialization and pg_statsinfo source\ncode would need updating for this.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Thu, 6 Oct 2022 09:34:02 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "pg_statsinfo - PG15 support?" }, { "msg_contents": "At Thu, 6 Oct 2022 09:34:02 +1100, Greg Nancarrow <gregn4422@gmail.com> wrote in \n> To the developer(s) who work(s) on pg_statsinfo, are there plans to\n> support PG15 and when might pg_statsinfo v15 source be released?\n> I can see that for v14 of pg_statsinfo there is an incompatibility\n> with the PG15 autovacuum log (i.e. in PG15 some existing autovacuum\n> log fields have been removed and some new fields have been added). I\n> also noticed that PG15 changes how shared libraries must request\n> additional shared memory during initialization and pg_statsinfo source\n> code would need updating for this.\n\nThank you for the info and apologize for the inconvenience. \"We\" are\naware of the points of incompatibility with PG15 and now working on\npg_statsinfo for PG15. For our historical and procedural reasons, it\nis not fully open-sourced and it is usually released around March of\nthe year following the release of the corresponding version of\nPostgreSQL and the source files are published at the same time. 
Thus\npg_statsinfo for PG15 will be released in March, 2023.\n\nThank you for your interest.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 06 Oct 2022 14:52:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_statsinfo - PG15 support?" } ]
[ { "msg_contents": "Hi Andres,\n\nSeems there are some typo in file src/backend/meson.build comment, pls\nhave a look.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Thu, 6 Oct 2022 11:06:06 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH v1] [meson] fix some typo to make it more readable" }, { "msg_contents": "Hi,\n\nOn 2022-10-06 11:06:06 +0800, Junwang Zhao wrote:\n> Seems there are some typo in file src/backend/meson.build comment, pls\n> have a look.\n\nThanks! I editorilized the first sentence a bit more and pushed this.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Thu, 6 Oct 2022 13:11:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH v1] [meson] fix some typo to make it more readable" } ]
[ { "msg_contents": "Due to the implementation of convert_ANY_sublink_to_join, we have\nlimitations below, which has been discussed at [1] [2].\n\n if (contain_vars_of_level((Node *) subselect, 1))\n return NULL;\n\nI'm thinking if we can do the ${subject}. If so, the query like\n\nSELECT * FROM t1 WHERE\na IN (SELECT * FROM t2 WHERE t2.b > t1.b);\n\ncan be converted to\n\nSELECT * FROM t1 WHERE\nEXISTS (SELECT * FROM t2 WHERE t2.b > t1.b AND t1.a = t2.a);\n\nThen the sublink can be removed with existing logic (the NOT-IN format\nwill not be touched since they have different meanings).\n\nAny ideas?\n\n[1] https://www.postgresql.org/message-id/3691.1342650974%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAN_9JTx7N+CxEQLnu_uHxx+EscSgxLLuNgaZT6Sjvdpt7toy3w@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Thu, 6 Oct 2022 15:24:25 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Unify \"In\" Sublink to EXIST Sublink for better optimize opportunity" },
{ "msg_contents": "Hi:\n\nOn Thu, Oct 6, 2022 at 3:24 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n> Due to the implementation of convert_ANY_sublink_to_join, we have\n> limitations below, which has been discussed at [1] [2].\n>\n> if (contain_vars_of_level((Node *) subselect, 1))\n> return NULL;\n>\n> I'm thinking if we can do the ${subject}. If so, the query like\n>\n> SELECT * FROM t1 WHERE\n> a IN (SELECT * FROM t2 WHERE t2.b > t1.b);\n>\n> can be converted to\n>\n> SELECT * FROM t1 WHERE\n> EXISTS (SELECT * FROM t2 WHERE t2.b > t1.b AND t1.a = t2.a);\n>\n\nI have coded this and tested my idea, here are some new findings: 1). Not\nall the\nTargetEntry->expr can be used as qual, for example: WindowFunc, AggFunc,\nSRFs.\n2). For simple correlated EXISTS query, the current master code also tries\nto transform it\nto IN format and implement it by hashing (make_subplan). So there is no\nneed to\nconvert an IN query to EXISTS query if the sublink can be pulled up\nalready,\nwhich means this patch should only take care of\n!contain_vars_of_level((Node *) subselect, 1).\n\nNote the changes of postgres_fdw.out are expected. 
The 'a' in foreign_tbl\nhas varlevelsup = 1;\nSELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);\n\nHere is some performance testing for this patch:\n\nselect * from tenk1 t1\nwhere hundred in (select hundred from tenk2 t2\n where t2.odd = t1.odd\n and even in (select even from tenk1 t3\n where t3.fivethous = t2.fivethous))\nand even > 0;\n\nmaster: 892.902 ms\npatched: 56.08 ms\n\n>\nPatch attached, any feedback is welcome.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 10 Oct 2022 08:40:34 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unify \"In\" Sublink to EXIST Sublink for better optimize\n opportunity" } ]
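The IN-to-EXISTS equivalence the thread above relies on (for non-NULL join keys; NOT IN differs, as noted) can be modeled over tiny in-memory "tables". This is an illustrative sketch with made-up names (`Row`, `matches_in`, `matches_exists`), not planner code:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int a; int b; } Row;

/* t1.a IN (SELECT t2.a FROM t2 WHERE t2.b > t1.b) */
static int matches_in(Row t1, const Row *t2, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (t2[i].b > t1.b)          /* row produced by the correlated subquery */
            if (t1.a == t2[i].a)     /* membership test performed by IN */
                return 1;
    return 0;
}

/* EXISTS (SELECT 1 FROM t2 WHERE t2.b > t1.b AND t1.a = t2.a) */
static int matches_exists(Row t1, const Row *t2, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (t2[i].b > t1.b && t1.a == t2[i].a)
            return 1;
    return 0;
}
```

The two loops accept exactly the same t1 rows; the rewrite merely folds the IN membership test into the correlated qual, which is what lets the sublink be pulled up as a join.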
[ { "msg_contents": "Hello\n\nI found a comment typo in xlogprefetcher.c.\nAny thoughts?\n\n--- a/src/backend/access/transam/xlogprefetcher.c\n+++ b/src/backend/access/transam/xlogprefetcher.c\n@@ -19,7 +19,7 @@\n * avoid a second buffer mapping table lookup.\n *\n * Currently, only the main fork is considered for prefetching. Currently,\n- * prefetching is only effective on systems where BufferPrefetch() does\n+ * prefetching is only effective on systems where PrefetchBuffer() does\n * something useful (mainly Linux).\n *\n *-------------------------------------------------------------------------\n\nregards sho kato", "msg_date": "Thu, 6 Oct 2022 08:12:37 +0000", "msg_from": "\"kato-sho@fujitsu.com\" <kato-sho@fujitsu.com>", "msg_from_op": true, "msg_subject": "fix comment typo in xlogprefetcher.c" }, { "msg_contents": "On Thu, Oct 06, 2022 at 08:12:37AM +0000, kato-sho@fujitsu.com wrote:\n> I found a comment typo in xlogprefetcher.c.\n> Any thoughts?\n\nFixed, thanks.\n--\nMichael", "msg_date": "Thu, 6 Oct 2022 20:26:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix comment typo in xlogprefetcher.c" } ]
[ { "msg_contents": "Hi hackers,\n\n\"SET local\" is currently recorded in VariableSetStmt (with the boolean \nis_local) but \"SET session\" is not.\n\nPlease find attached a patch proposal to also record \"SET session\" so \nthat VariableSetStmt records all the cases.\n\nRemark: Recording \"SET session\" will also help for the Jumbling work \nbeing done in [1].\n\n[1]: \nhttps://www.postgresql.org/message-id/66be1104-164f-dcb8-6c43-f03a68a139a7%40gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Oct 2022 12:57:17 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Record SET session in VariableSetStmt" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 06, 2022 at 12:57:17PM +0200, Drouvot, Bertrand wrote:\n> Hi hackers,\n>\n> \"SET local\" is currently recorded in VariableSetStmt (with the boolean\n> is_local) but \"SET session\" is not.\n>\n> Please find attached a patch proposal to also record \"SET session\" so that\n> VariableSetStmt records all the cases.\n>\n> Remark: Recording \"SET session\" will also help for the Jumbling work being\n> done in [1].\n\nI don't think it's necessary. SET and SET SESSION are semantically the same so\nnothing should rely on how exactly someone spelled it. This is also the case\nfor our core jumbling code, where we guarantee (or at least try to) that two\nsemantically identical statements will get the same queryid, and therefore\ndon't distinguish eg. 
LIKE vs ~~.\n\n\n", "msg_date": "Thu, 6 Oct 2022 19:18:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Record SET session in VariableSetStmt" }, { "msg_contents": "Hi,\n\nOn 10/6/22 1:18 PM, Julien Rouhaud wrote:\n> Hi,\n> \n> On Thu, Oct 06, 2022 at 12:57:17PM +0200, Drouvot, Bertrand wrote:\n>> Hi hackers,\n>>\n>> \"SET local\" is currently recorded in VariableSetStmt (with the boolean\n>> is_local) but \"SET session\" is not.\n>>\n>> Please find attached a patch proposal to also record \"SET session\" so that\n>> VariableSetStmt records all the cases.\n>>\n>> Remark: Recording \"SET session\" will also help for the Jumbling work being\n>> done in [1].\n> \n> I don't think it's necessary. SET and SET SESSION are semantically the same\n\nThanks for your feedback!\n\n> so\n> nothing should rely on how exactly someone spelled it. This is also the case\n> for our core jumbling code, where we guarantee (or at least try to) that two\n> semantically identical statements will get the same queryid, and therefore\n> don't distinguish eg. 
LIKE vs ~~.\n\nAgree, but on the other hand currently SET and SET SESSION are recorded \nwith distinct queryid:\n\npostgres=# select calls, query, queryid from pg_stat_statements;\n calls | query | queryid\n-------+---------------------------------------------+----------------------\n 2 | select calls, query from pg_stat_statements | -6345508659980235519\n 1 | set session enable_seqscan=1 | -3921418831612111986\n 1 | create extension pg_stat_statements | -1739183385080879393\n 1 | set enable_seqscan=1 | 7925920505912025406\n(4 rows)\n\nand this behavior would change with the Jumbling work in progress in [1] \n(mentioned up-thread) if we don't record \"SET SESSION\".\n\nI think that would make sense to keep the same behavior, what do you think?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Oct 2022 14:19:32 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Record SET session in VariableSetStmt" }, { "msg_contents": "On Thu, Oct 06, 2022 at 02:19:32PM +0200, Drouvot, Bertrand wrote:\n>\n> On 10/6/22 1:18 PM, Julien Rouhaud wrote:\n>\n> > so\n> > nothing should rely on how exactly someone spelled it. This is also the case\n> > for our core jumbling code, where we guarantee (or at least try to) that two\n> > semantically identical statements will get the same queryid, and therefore\n> > don't distinguish eg. 
LIKE vs ~~.\n>\n> Agree, but on the other hand currently SET and SET SESSION are recorded with\n> distinct queryid:\n>\n> postgres=# select calls, query, queryid from pg_stat_statements;\n> calls | query | queryid\n> -------+---------------------------------------------+----------------------\n> 2 | select calls, query from pg_stat_statements | -6345508659980235519\n> 1 | set session enable_seqscan=1 | -3921418831612111986\n> 1 | create extension pg_stat_statements | -1739183385080879393\n> 1 | set enable_seqscan=1 | 7925920505912025406\n> (4 rows)\n>\n> and this behavior would change with the Jumbling work in progress in [1]\n> (mentioned up-thread) if we don't record \"SET SESSION\".\n>\n> I think that would make sense to keep the same behavior, what do you think?\n\nIt's because until now jumbling of utility statements was just hashing the\nquery text, which is quite terrible. This was also implying getting different\nqueryids for things like this:\n\n=# select query, queryid from pg_stat_statements where query like '%work_mem%';;\n query | queryid\n-----------------------+----------------------\n SeT work_mem = 123465 | -1114638544275583196\n Set work_mem = 123465 | -1966597613643458788\n SET work_mem = 123465 | 4161441071081149574\n seT work_mem = 123465 | 8327271737593275474\n(4 rows)\n\nIf we move to a real jumbling of VariableSetStmt, we should keep the rules\nconsistent with the rest of the jumble code and ignore an explicit \"SESSION\" in\nthe original command.\n\n\n", "msg_date": "Thu, 6 Oct 2022 20:28:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Record SET session in VariableSetStmt" }, { "msg_contents": "\n\nOn 10/6/22 2:28 PM, Julien Rouhaud wrote:\n> On Thu, Oct 06, 2022 at 02:19:32PM +0200, Drouvot, Bertrand wrote:\n>>\n>> On 10/6/22 1:18 PM, Julien Rouhaud wrote:\n>>\n>>> so\n>>> nothing should rely on how exactly someone spelled it. 
This is also the case\n>>> for our core jumbling code, where we guarantee (or at least try to) that two\n>>> semantically identical statements will get the same queryid, and therefore\n>>> don't distinguish eg. LIKE vs ~~.\n>>\n>> Agree, but on the other hand currently SET and SET SESSION are recorded with\n>> distinct queryid:\n>>\n>> postgres=# select calls, query, queryid from pg_stat_statements;\n>> calls | query | queryid\n>> -------+---------------------------------------------+----------------------\n>> 2 | select calls, query from pg_stat_statements | -6345508659980235519\n>> 1 | set session enable_seqscan=1 | -3921418831612111986\n>> 1 | create extension pg_stat_statements | -1739183385080879393\n>> 1 | set enable_seqscan=1 | 7925920505912025406\n>> (4 rows)\n>>\n>> and this behavior would change with the Jumbling work in progress in [1]\n>> (mentioned up-thread) if we don't record \"SET SESSION\".\n>>\n>> I think that would make sense to keep the same behavior, what do you think?\n> \n> It's because until now jumbling of utility statements was just hashing the\n> query text, which is quite terrible. 
This was also implying getting different\n> queryids for things like this:\n> \n> =# select query, queryid from pg_stat_statements where query like '%work_mem%';;\n> query | queryid\n> -----------------------+----------------------\n> SeT work_mem = 123465 | -1114638544275583196\n> Set work_mem = 123465 | -1966597613643458788\n> SET work_mem = 123465 | 4161441071081149574\n> seT work_mem = 123465 | 8327271737593275474\n> (4 rows)\n> \n> If we move to a real jumbling of VariableSetStmt, we should keep the rules\n> consistent with the rest of the jumble code and ignore an explicit \"SESSION\" in\n> the original command.\n\nUnderstood, so I agree that it makes sense to keep the jumbling behavior \nconsistent and so keep the same queryid for statements that are \nsemantically identical.\n\nThanks for your feedback!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 6 Oct 2022 14:36:43 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Record SET session in VariableSetStmt" }, { "msg_contents": "On Thu, Oct 06, 2022 at 08:28:27PM +0800, Julien Rouhaud wrote:\n> If we move to a real jumbling of VariableSetStmt, we should keep the rules\n> consistent with the rest of the jumble code and ignore an explicit \"SESSION\" in\n> the original command.\n\nHm, interesting bit, I should study more this area. So the query ID\ncalculation actually only cares about the contents of the Nodes\nparsed, while the query string used is the one when the entry is\ncreated for the first time. It seems like the patch to add\nTransactionStmt nodes into the jumbling misses something here, as we'd\nstill compile different query IDs depending on the query string itself\nfor simple commands like BEGIN or COMMIT. 
I'll reply on the other\nthread about all that..\n--\nMichael", "msg_date": "Fri, 7 Oct 2022 10:30:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record SET session in VariableSetStmt" }, { "msg_contents": "On Fri, Oct 07, 2022 at 10:30:28AM +0900, Michael Paquier wrote:\n> On Thu, Oct 06, 2022 at 08:28:27PM +0800, Julien Rouhaud wrote:\n> > If we move to a real jumbling of VariableSetStmt, we should keep the rules\n> > consistent with the rest of the jumble code and ignore an explicit \"SESSION\" in\n> > the original command.\n> \n> Hm, interesting bit, I should study more this area. So the query ID\n> calculation actually only cares about the contents of the Nodes\n> parsed, while the query string used is the one when the entry is\n> created for the first time. It seems like the patch to add\n> TransactionStmt nodes into the jumbling misses something here, as we'd\n> still compile different query IDs depending on the query string itself\n> for simple commands like BEGIN or COMMIT. I'll reply on the other\n> thread about all that..\n\nAh, indeed we have different TransactionStmtKind for BEGIN and START!\n\n\n", "msg_date": "Fri, 7 Oct 2022 11:33:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Record SET session in VariableSetStmt" } ]
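The distinction discussed above, hashing raw query text versus jumbling the parsed representation, can be caricatured in a few lines. This is a toy: djb2 over bytes and simple case-folding stand in for pg_stat_statements' real algorithm, which hashes parse-node fields rather than lowercased text:

```c
#include <assert.h>
#include <ctype.h>

/* Hashing the raw query text: every spelling variant gets its own id. */
static unsigned long hash_query_text(const char *s)
{
    unsigned long h = 5381;               /* djb2, purely illustrative */
    for (; *s; s++)
        h = ((h << 5) + h) + (unsigned char) *s;
    return h;
}

/* Toy "jumble": fold case before hashing, mimicking the effect of hashing
 * the parsed node tree, where keyword spelling (SET vs SeT) is invisible. */
static unsigned long jumble_query(const char *s)
{
    unsigned long h = 5381;
    for (; *s; s++)
        h = ((h << 5) + h) + (unsigned long) tolower((unsigned char) *s);
    return h;
}
```

With text hashing, `SET work_mem = 123465` and `SeT work_mem = 123465` collide onto different ids (the "quite terrible" behavior shown in the thread); a node-based jumble gives them the same id while still separating genuinely different statements.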
[ { "msg_contents": "Hi,\n\nLike how commits series \"harmonize parameter names\":\n20e69da\n<https://github.com/postgres/postgres/commit/20e69daa1348f6899fffe3c260bf44293551ee87>\nand others.\n\nThis tries to harmonize more parameter names, mainly in Win32 and\nsome others files.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 6 Oct 2022 10:17:31 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Harmonize parameter names in Win32" } ]
[ { "msg_contents": "Hi,\n\nWhen walsender process is evoked for logical replication, walsender is \nconnected to a database of the subscriber.\nNaturally, ones would want the name of the connected database to show in \nthe entry of ps command for walsender.\nIn detail, running ps aux during the logical replication shows results \nlike the following:\npostgres=# \\! ps aux | grep postgres;\n...\nACTC-I\\+ 14575 0.0 0.0 298620 14228 ? Ss 18:22 0:00 \npostgres: walsender ACTC-I\\nakamorit [local] S\n\nHowever, since walsender is connected to a database of the subscriber in \nlogical replication, it should show the database name, as in the \nfollowing:\npostgres=# \\! ps aux | grep postgres\n...\nACTC-I\\+ 15627 0.0 0.0 298624 13936 ? Ss 15:45 0:00 \npostgres: walsender ACTC-I\\nakamorit postgres\n\nShowing the database name should not apply in streaming replication \nthough since walsender process is not connected to any specific \ndatabase.\n\nThe attached patch adds the name of the connected database to the ps \nentry of walsender in logical replication, and not in streaming \nreplication.\n\nThoughts?\n\nTatsuhiro Nakamori", "msg_date": "Thu, 06 Oct 2022 22:30:56 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "ps command does not show walsender's connected db" }, { "msg_contents": "\n\nOn 2022/10/06 22:30, bt22nakamorit wrote:\n> Hi,\n> \n> When walsender process is evoked for logical replication, walsender is connected to a database of the subscriber.\n> Naturally, ones would want the name of the connected database to show in the entry of ps command for walsender.\n> In detail, running ps aux during the logical replication shows results like the following:\n> postgres=# \\! ps aux | grep postgres;\n> ...\n> ACTC-I\\+ 14575  0.0  0.0 298620 14228 ?        
Ss   18:22   0:00 postgres: walsender ACTC-I\\nakamorit [local] S\n> \n> However, since walsender is connected to a database of the subscriber in logical replication,\n\ns/subscriber/publisher ?\n\n\n> it should show the database name, as in the following:\n> postgres=# \\! ps aux | grep postgres\n> ...\n> ACTC-I\\+ 15627  0.0  0.0 298624 13936 ?        Ss   15:45   0:00 postgres: walsender ACTC-I\\nakamorit postgres\n> \n> Showing the database name should not apply in streaming replication though since walsender process is not connected to any specific database.\n> \n> The attached patch adds the name of the connected database to the ps entry of walsender in logical replication, and not in streaming replication.\n> \n> Thoughts?\n\n+1\n\nThanks for the patch!\n\n-\n+\t\tprintf(\"fork child process\\n\");\n+\t\tprintf(\"\tam_walsender: %d\\n\", am_walsender);\n+\t\tprintf(\"\tam_db_walsender: %d\\n\", am_db_walsender);\n\nThe patch seems to include the debug code accidentally.\nExcept this, the patch looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 7 Oct 2022 16:59:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "2022-10-07 16:59 Fujii Masao wrote:\n\n> s/subscriber/publisher ?\n\nI did not understand what this means.\n\n> Thanks for the patch!\n> \n> -\n> +\t\tprintf(\"fork child process\\n\");\n> +\t\tprintf(\"\tam_walsender: %d\\n\", am_walsender);\n> +\t\tprintf(\"\tam_db_walsender: %d\\n\", am_db_walsender);\n> \n> The patch seems to include the debug code accidentally.\n> Except this, the patch looks good to me.\n\nI apologize for the silly mistake.\nI edited the patch and attached it to this mail.\n\nBest,\nTatsuhiro Nakamori", "msg_date": "Fri, 07 Oct 2022 18:43:51 +0900", "msg_from": "bt22nakamorit 
<bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "\n\nOn 2022/10/07 18:43, bt22nakamorit wrote:\n> 2022-10-07 16:59  Fujii Masao wrote:\n> \n>> s/subscriber/publisher ?\n> \n> I did not understand what this means.\n\nSorry for confusing wording... I was just trying to say that walsender\nis connected to a database of the publisher instead of subscriber.\n\n\n>> Thanks for the patch!\n>>\n>> -\n>> +        printf(\"fork child process\\n\");\n>> +        printf(\"    am_walsender: %d\\n\", am_walsender);\n>> +        printf(\"    am_db_walsender: %d\\n\", am_db_walsender);\n>>\n>> The patch seems to include the debug code accidentally.\n>> Except this, the patch looks good to me.\n> \n> I apologize for the silly mistake.\n> I edited the patch and attached it to this mail.\n\nThanks for updating the patch! LGTM.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 7 Oct 2022 22:55:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Fri, Oct 7, 2022 at 7:25 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Thanks for updating the patch! LGTM.\n\n- if (!am_walsender)\n+ if (!am_walsender || am_db_walsender)\n appendStringInfo(&ps_data, \"%s \", port->database_name);\n\nCan the appendStringInfo be just unconditional? That is more readable\nIMO. 
We want the database_name to be appended whenever it isn't null.\nThe only case we expect database_name to be null is for walsenders\nserving streaming replication standbys and even when database_name is\nnull, nothing gets appended, it's just the appendStringInfo() call\ngets wasted, but that's okay.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 9 Oct 2022 15:00:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "2022-10-09 18:30 Bharath Rupireddy wrote:\n> - if (!am_walsender)\n> + if (!am_walsender || am_db_walsender)\n> appendStringInfo(&ps_data, \"%s \", port->database_name);\n> \n> Can the appendStringInfo be just unconditional? That is more readable\n> IMO. We want the database_name to be appended whenever it isn't null.\nI agree that the patch makes the code less neat.\n\n> The only case we expect database_name to be null is for walsenders\n> serving streaming replication standbys and even when database_name is\n> null, nothing gets appended, it's just the appendStringInfo() call\n> gets wasted, but that's okay.\nappendStringInfo will append extra space after null (since \"%s \"), so \nthe ps entry will look less neat in that case.\nHow about we check whether port->database_name is null or not, instead \nof making it unconditional?\nIt will look like this.\n- if (!am_walsender)\n+ if (port->database_name != NULL)\n appendStringInfo(&ps_data, \"%s \", port->database_name);\n\nThank you for your response,\nTatsuhiro Nakamori\n\n\n", "msg_date": "Mon, 10 Oct 2022 11:30:03 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Mon, Oct 10, 2022 at 8:00 AM 
bt22nakamorit\n<bt22nakamorit@oss.nttdata.com> wrote:\n>\n> appendStringInfo will append extra space after null (since \"%s \"), so\n> the ps entry will look less neat in that case.\n> How about we check whether port->database_name is null or not, instead\n> of making it unconditional?\n> It will look like this.\n> - if (!am_walsender)\n> + if (port->database_name != NULL)\n> appendStringInfo(&ps_data, \"%s \", port->database_name);\n\nif (port->database_name != NULL && port->database_name[0] != '\\0')\n appendStringInfo(&ps_data, \"%s \", port->database_name);\n\nThe above works better.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 08:57:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "2022-10-10 12:27 Bharath Rupireddy wrote:\n> if (port->database_name != NULL && port->database_name[0] != '\\0')\n> appendStringInfo(&ps_data, \"%s \", port->database_name);\n> \n> The above works better.\n\nThank you for the suggestion.\nI have edited the patch to reflect your idea.\n\nBest,\nTatsuhiro Nakamori", "msg_date": "Mon, 10 Oct 2022 15:50:21 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Mon, Oct 10, 2022 at 12:20 PM bt22nakamorit\n<bt22nakamorit@oss.nttdata.com> wrote:\n>\n> 2022-10-10 12:27 Bharath Rupireddy wrote:\n> > if (port->database_name != NULL && port->database_name[0] != '\\0')\n> > appendStringInfo(&ps_data, \"%s \", port->database_name);\n> >\n> > The above works better.\n>\n> Thank you for the suggestion.\n> I have edited the patch to reflect your idea.\n\nThanks. 
LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 12:42:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "2022-10-10 16:12 Bharath Rupireddy wrote:\n> Thanks. LGTM.\nThank you for your review!\nI have this issue posted on Commitfest 2022-11 with title \"show \nwalsender's connected db for ps command entry\".\nMay I change the status to \"Ready for Committer\"?\n\nBest,\nTatsuhiro Nakamori\n\n\n", "msg_date": "Tue, 11 Oct 2022 17:41:51 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Tue, Oct 11, 2022 at 2:11 PM bt22nakamorit\n<bt22nakamorit@oss.nttdata.com> wrote:\n>\n> 2022-10-10 16:12 Bharath Rupireddy wrote:\n> > Thanks. LGTM.\n> Thank you for your review!\n> I have this issue posted on Commitfest 2022-11 with title \"show\n> walsender's connected db for ps command entry\".\n> May I change the status to \"Ready for Committer\"?\n\nDone - https://commitfest.postgresql.org/40/3937/.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 16:36:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "2022年10月11日(火) 20:06 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Tue, Oct 11, 2022 at 2:11 PM bt22nakamorit\n> <bt22nakamorit@oss.nttdata.com> wrote:\n> >\n> > 2022-10-10 16:12 Bharath Rupireddy wrote:\n> > > Thanks. 
LGTM.\n> > Thank you for your review!\n> > I have this issue posted on Commitfest 2022-11 with title \"show\n> > walsender's connected db for ps command entry\".\n> > May I change the status to \"Ready for Committer\"?\n>\n> Done - https://commitfest.postgresql.org/40/3937/.\n\nHi\n\nFujii-san is marked as committer on the commifest entry for this patch [1];\nare you able to go ahead and get it committed?\n\n[1] https://commitfest.postgresql.org/40/3937/\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Thu, 17 Nov 2022 13:32:11 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Thu, Nov 17, 2022 at 01:32:11PM +0900, Ian Lawrence Barwick wrote:\n> Fujii-san is marked as committer on the commifest entry for this patch [1];\n> are you able to go ahead and get it committed?\n\nThat's the state of the patch since the 11th of October. Seeing the\nlack of activity, I propose to take care of that myself if there are\nno objections, only on HEAD. That could be qualified as a bug, but\nthat does not seem worth bothering the back-branches, either. Except\nif somebody has a different opinion?\n\nThe patch looks to do the job correctly, still I would leave the extra\nprintf() calls in BackendStartup() out.\n--\nMichael", "msg_date": "Tue, 22 Nov 2022 09:44:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Tue, Nov 22, 2022 at 6:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 17, 2022 at 01:32:11PM +0900, Ian Lawrence Barwick wrote:\n> > Fujii-san is marked as committer on the commifest entry for this patch [1];\n> > are you able to go ahead and get it committed?\n>\n> That's the state of the patch since the 11th of October. 
Seeing the\n> lack of activity, I propose to take care of that myself if there are\n> no objections, only on HEAD. That could be qualified as a bug, but\n> that does not seem worth bothering the back-branches, either. Except\n> if somebody has a different opinion?\n\n-1 for back-patching as it's not something critical to any of the\nserver's operations.\n\n> The patch looks to do the job correctly, still I would leave the extra\n> printf() calls in BackendStartup() out.\n\nAre you looking at the latest v3 patch\nhttps://www.postgresql.org/message-id/4b5691462b994c18ff370aaa84cef0d0%40oss.nttdata.com?\nIt has no printf() calls.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 22 Nov 2022 08:46:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" }, { "msg_contents": "On Tue, Nov 22, 2022 at 08:46:22AM +0530, Bharath Rupireddy wrote:\n> Are you looking at the latest v3 patch\n> https://www.postgresql.org/message-id/4b5691462b994c18ff370aaa84cef0d0%40oss.nttdata.com?\n> It has no printf() calls.\n\nYes, I was looking at v1. v3 can be simpler. All this information in\nthe Port comes from the startup packet, and the database name would\ndefault to the user name so it can never be NULL. And applied.\n--\nMichael", "msg_date": "Thu, 24 Nov 2022 16:20:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ps command does not show walsender's connected db" } ]
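The condition the committed patch converged on can be exercised in isolation. Here `build_ps_display` is a hypothetical stand-in for the backend's ps-title logic, not the actual server code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Append the database name only when the walsender is connected to one
 * (logical replication); physical walsenders have a NULL or empty dbname. */
static void build_ps_display(char *out, size_t outlen,
                             const char *user, const char *dbname)
{
    snprintf(out, outlen, "walsender %s ", user);
    if (dbname != NULL && dbname[0] != '\0')
    {
        strncat(out, dbname, outlen - strlen(out) - 1);
        strncat(out, " ", outlen - strlen(out) - 1);
    }
}
```

The `dbname[0] != '\0'` half of the test avoids the stray trailing space that an unconditional `"%s "` append would leave for physical walsenders, which was the concern raised upthread.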
[ { "msg_contents": "Hello guys.\nIn the previous discussion [1] we find out that while we are in\ntransaction function definition is not invalidated if it was redefined\nin another session. Here is a patch to fix this. Also, I did a small\nperfomance impact measurement (test.sh in attachment) on my home PC\nwith Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz. The result is (each\ntransaction is a 10 million calls to functions):\n\n- without patch\nlatency average = 37087.639 ms\ntps = 0.026963\n\n- with patch\nlatency average = 38793.125 ms\ntps = 0.025778\n\nWhat do you think about it, guys?\n\n[1] \nhttps://www.postgresql.org/message-id/flat/1205251664297977%40mail.yandex.ru", "msg_date": "Thu, 06 Oct 2022 21:04:44 +0300", "msg_from": "Vasya <m7onov@yandex.ru>", "msg_from_op": true, "msg_subject": "[PATCH] Check system cache invalidations before each command in\n transaction" } ]
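For scale, the figures quoted above work out to roughly a 4.6% slowdown from the patch, at about 3.7 microseconds per function call unpatched. Simple arithmetic over the reported numbers (milliseconds per 10 million calls):

```c
#include <assert.h>

/* Latencies reported in the post, in ms for a 10-million-call transaction. */
static const double latency_unpatched_ms = 37087.639;
static const double latency_patched_ms   = 38793.125;

/* Relative slowdown of the patched build, in percent. */
static double overhead_percent(double base_ms, double patched_ms)
{
    return (patched_ms - base_ms) / base_ms * 100.0;
}

/* Cost of one call in microseconds, given total ms for ncalls calls. */
static double per_call_us(double total_ms, double ncalls)
{
    return total_ms * 1000.0 / ncalls;
}
```

This also matches the reported tps: one ~37.1 s transaction per client gives tps close to 1/37.1, i.e. about 0.027.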
[ { "msg_contents": "Hi,\n\nGIN Indexes:\n\nDefines a type char GinTernaryValue with 3 values:\n#define GIN_FALSE 0 /* item is not present / does not match */\n#define GIN_TRUE 1 /* item is present / matches */\n#define GIN_MAYBE 2 /* don't know if item is present / don't know\n* if matches */\n\nSo, any use of this GinTernaryValue are:\n\n1. if (key->entryRes[j]) be FALSE if GIN_FALSE\n2. if (key->entryRes[j]) be TRUE if GIN_TRUE\n3. if (key->entryRes[j]) be TRUE if GIN_MAYBE\n\nSo GIN matches can fail with GIN_MAYBE, or am I missing something?\n\nregards,\nRanier Vilela", "msg_date": "Thu, 6 Oct 2022 18:06:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid mix char with bool type in comparisons" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> So, any use of this GinTernaryValue are:\n\n> 1. if (key->entryRes[j]) be FALSE if GIN_FALSE\n> 2. if (key->entryRes[j]) be TRUE if GIN_TRUE\n> 3. if (key->entryRes[j]) be TRUE if GIN_MAYBE\n\nYeah, that's how it's designed. Unless you can point to a bug,\nI do not think we ought to change this code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Oct 2022 19:52:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Em qui., 6 de out. de 2022 às 20:52, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > So, any use of this GinTernaryValue are:\n>\n> > 1. if (key->entryRes[j]) be FALSE if GIN_FALSE\n> > 2. if (key->entryRes[j]) be TRUE if GIN_TRUE\n> > 3. if (key->entryRes[j]) be TRUE if GIN_MAYBE\n>\n> Yeah, that's how it's designed. 
Unless you can point to a bug,\n> I do not think we ought to change this code.\n>\nThanks for answering.\n\nMy main concerns is this point:\n /* If already matched on earlier page, do no extra work */\n- if (key->entryRes[j])\n+ if (key->entryRes[j] == GIN_TRUE)\n continue;\n\nIf GIN_MAYBE cases are erroneously ignored.\n\nregards,\nRanier Vilela\n\nEm qui., 6 de out. de 2022 às 20:52, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> So, any use of this GinTernaryValue are:\n\n> 1. if (key->entryRes[j]) be FALSE if GIN_FALSE\n> 2. if (key->entryRes[j]) be TRUE if GIN_TRUE\n> 3. if (key->entryRes[j]) be TRUE if GIN_MAYBE\n\nYeah, that's how it's designed.  Unless you can point to a bug,\nI do not think we ought to change this code.Thanks for answering. My main concerns is this point: \t\t\t\t/* If already matched on earlier page, do no extra work */-\t\t\t\tif (key->entryRes[j])+\t\t\t\tif (key->entryRes[j] == GIN_TRUE) \t\t\t\t\tcontinue;If GIN_MAYBE cases are erroneously ignored.regards,Ranier Vilela", "msg_date": "Thu, 6 Oct 2022 21:15:37 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> My main concerns is this point:\n> /* If already matched on earlier page, do no extra work */\n> - if (key->entryRes[j])\n> + if (key->entryRes[j] == GIN_TRUE)\n> continue;\n\n> If GIN_MAYBE cases are erroneously ignored.\n\nSo, if that's a bug, you should be able to produce a test case?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Oct 2022 20:21:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Em qui., 6 de out. 
de 2022 às 21:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > My main concerns is this point:\n> > /* If already matched on earlier page, do no extra work */\n> > - if (key->entryRes[j])\n> > + if (key->entryRes[j] == GIN_TRUE)\n> > continue;\n>\n> > If GIN_MAYBE cases are erroneously ignored.\n>\n> So, if that's a bug, you should be able to produce a test case?\n>\nNo Tom, unfortunately I don't have the knowledge to create a test with\nGIN_MAYBE values.\n\nWith the patch, all current tests pass.\nEither there are no bugs, or there are no tests that detect this specific\ncase.\nAnd I agree with you, without a test showing the bug,\nthere's not much chance of the patch progressing.\n\nUnless someone with more knowledge can say that the patch improves\nrobustness.\n\nregards,\nRanie Vilela\n\nEm qui., 6 de out. de 2022 às 21:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> My main concerns is this point:\n>   /* If already matched on earlier page, do no extra work */\n> - if (key->entryRes[j])\n> + if (key->entryRes[j] == GIN_TRUE)\n>   continue;\n\n> If GIN_MAYBE cases are erroneously ignored.\n\nSo, if that's a bug, you should be able to produce a test case?No Tom, unfortunately I don't have the knowledge to create a test with GIN_MAYBE values.With the patch, all current tests pass.Either there are no bugs, or there are no tests that detect this specific case.And I agree with you, without a test showing the bug,there's not much chance of the patch progressing.Unless someone with more knowledge can say that the patch improves robustness.regards,Ranie Vilela", "msg_date": "Thu, 6 Oct 2022 21:35:25 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "On Thu, Oct 6, 2022 at 8:35 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> No Tom, unfortunately I don't have the 
knowledge to create a test with GIN_MAYBE values.\n\nWell then don't post.\n\nBasically what you're saying is that you suspect there might be a\nproblem with this code but you don't really know that and you can't\ntest it. That's not a good enough reason to take up the time of\neveryone on this list.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 10:02:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Oct 6, 2022 at 8:35 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> No Tom, unfortunately I don't have the knowledge to create a test with GIN_MAYBE values.\n\n> Well then don't post.\n\n> Basically what you're saying is that you suspect there might be a\n> problem with this code but you don't really know that and you can't\n> test it. That's not a good enough reason to take up the time of\n> everyone on this list.\n\nFWIW, I did take a look at this code, and I don't see any bug.\nThe entryRes[] array entries are indeed GinTernaryValue, but it's\nobvious by inspection that matchPartialInPendingList only returns\ntrue or false, therefore collectMatchesForHeapRow also only deals\nin true or false, never maybe. 
I do not think changing\nmatchPartialInPendingList to return ternary would be an improvement,\nbecause then it'd be less obvious that it doesn't deal in maybe.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 11:40:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "On Fri, Oct 7, 2022 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I did take a look at this code, and I don't see any bug.\n> The entryRes[] array entries are indeed GinTernaryValue, but it's\n> obvious by inspection that matchPartialInPendingList only returns\n> true or false, therefore collectMatchesForHeapRow also only deals\n> in true or false, never maybe. I do not think changing\n> matchPartialInPendingList to return ternary would be an improvement,\n> because then it'd be less obvious that it doesn't deal in maybe.\n\nI mean if the code isn't buggy, I'm glad, but I think there should\nhave been more substantial grounds for getting you to spend time\nlooking at it. It's not asking too much for people to produce a\nnon-zero amount of evidence that the thing they are worried about is\nactually a problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 12:32:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Em sex., 7 de out. de 2022 às 12:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Oct 6, 2022 at 8:35 PM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >> No Tom, unfortunately I don't have the knowledge to create a test with\n> GIN_MAYBE values.\n>\n> > Well then don't post.\n>\n> > Basically what you're saying is that you suspect there might be a\n> > problem with this code but you don't really know that and you can't\n> > test it. 
That's not a good enough reason to take up the time of\n> > everyone on this list.\n>\n> FWIW, I did take a look at this code, and I don't see any bug.\n> The entryRes[] array entries are indeed GinTernaryValue, but it's\n> obvious by inspection that matchPartialInPendingList only returns\n> true or false, therefore collectMatchesForHeapRow also only deals\n> in true or false, never maybe.\n\n\nThanks for spending your time with this.\n\nAnyway, it's not *true* that collectMatchesForHeapRow deals\nonly \"true\" and \"false\", once that key->entryRes is initialized with\nGIN_FALSE not false.\n\n/*\n* Reset all entryRes and hasMatchKey flags\n*/\nfor (i = 0; i < so->nkeys; i++)\n{\nGinScanKey key = so->keys + i;\nmemset(key->entryRes, GIN_FALSE, key->nentries);\n}\n\nMaybe only typo, that doesn't matter to anyone but some static analysis\ntools that alarm about these stupid things.\n\n/*\n* Reset all entryRes and hasMatchKey flags\n*/\nfor (i = 0; i < so->nkeys; i++)\n{\nGinScanKey key = so->keys + i;\nmemset(key->entryRes, false, key->nentries);\n}\n\nI do not think changing\n> matchPartialInPendingList to return ternary would be an improvement,\n> because then it'd be less obvious that it doesn't deal in maybe.\n>\nOn this point I don't quite agree with you, since for anyone wanting to\nread the code in gin.h,\nthey will think in terms of GIN_FALSE, GIN_TRUE and GIN_MAYBE,\nand will have to spend some time thinking about why they are mixing char\nand bool types.\n\nBesides that, it remains a trap, just waiting for someone to fall in the\nfuture.\nif (key->entryRes[j]) be TRUE when GIN_MAYBE.\n\nregards,\nRanier Vilela\n\nEm sex., 7 de out. 
de 2022 às 12:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Oct 6, 2022 at 8:35 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> No Tom, unfortunately I don't have the knowledge to create a test with GIN_MAYBE values.\n\n> Well then don't post.\n\n> Basically what you're saying is that you suspect there might be a\n> problem with this code but you don't really know that and you can't\n> test it. That's not a good enough reason to take up the time of\n> everyone on this list.\n\nFWIW, I did take a look at this code, and I don't see any bug.\nThe entryRes[] array entries are indeed GinTernaryValue, but it's\nobvious by inspection that matchPartialInPendingList only returns\ntrue or false, therefore collectMatchesForHeapRow also only deals\nin true or false, never maybe.  \nThanks for spending your time with this.Anyway, it's not *true* that \ncollectMatchesForHeapRow dealsonly \"true\" and \"false\", once that key->entryRes is initialized with GIN_FALSE not false.\t/*\t * Reset all entryRes and hasMatchKey flags\t */\tfor (i = 0; i < so->nkeys; i++)\t{\t\tGinScanKey\tkey = so->keys + i;\t\tmemset(key->entryRes, GIN_FALSE, key->nentries);\t}Maybe only typo, that doesn't matter to anyone but some static analysis tools that alarm about these stupid things.\n\t/*\t * Reset all entryRes and hasMatchKey flags\t */\tfor (i = 0; i < so->nkeys; i++)\t{\t\tGinScanKey\tkey = so->keys + i;\t\tmemset(key->entryRes, false, key->nentries);\t}\nI do not think changing\nmatchPartialInPendingList to return ternary would be an improvement,\nbecause then it'd be less obvious that it doesn't deal in maybe.On this point I don't quite agree with you, since for anyone wanting to read the code in gin.h, they will think in terms of GIN_FALSE, GIN_TRUE and GIN_MAYBE, and will have to spend some time thinking about why they are mixing char and bool types.Besides that, it remains a trap, just waiting for someone to fall in the future.if 
(key->entryRes[j]) be TRUE when GIN_MAYBE.\n\nregards,Ranier Vilela", "msg_date": "Fri, 7 Oct 2022 14:00:35 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Anyway, it's not *true* that collectMatchesForHeapRow deals\n> only \"true\" and \"false\", once that key->entryRes is initialized with\n> GIN_FALSE not false.\n\nLook: GinTernaryValue is carefully designed so that it is\nrepresentation-compatible with bool, including that GIN_FALSE is\nidentical to false and GIN_TRUE is identical to true. I'm quite\nuninterested in debating whether that's a good idea or not.\nMoreover, there are a ton of other places besides this one where\nthe GIN code relies on that equivalence, so we'd have to do a\nlot more than what's in this patch if we wanted to decouple that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 13:43:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Em sex., 7 de out. de 2022 às 13:32, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Fri, Oct 7, 2022 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > FWIW, I did take a look at this code, and I don't see any bug.\n> > The entryRes[] array entries are indeed GinTernaryValue, but it's\n> > obvious by inspection that matchPartialInPendingList only returns\n> > true or false, therefore collectMatchesForHeapRow also only deals\n> > in true or false, never maybe. I do not think changing\n> > matchPartialInPendingList to return ternary would be an improvement,\n> > because then it'd be less obvious that it doesn't deal in maybe.\n>\n> I mean if the code isn't buggy, I'm glad, but I think there should\n> have been more substantial grounds for getting you to spend time\n> looking at it. 
It's not asking too much for people to produce a\n> non-zero amount of evidence that the thing they are worried about is\n> actually a problem.\n>\nSorry if you think this is all just a waste of time.\n\nI think that while the current code has no real bugs, that doesn't mean it\ndoesn't have readability and style issues.\nAnd that not being able to produce tests should not be an impediment to\nimproving the current code.\nI believe I have contributed much more than changing \"fo\" to \"of\" in\ncomments.\n\nRight now I have:\n02/09/2022 15:58 593 0001-fix-typo-isnan-test-geo_ops.patch\n02/09/2022 15:57 7.746 0001-fix-wrong-isnan-test-geo_ops.patch\n11/07/2022 09:39 2.271\n0001-Promove-unshadowing-of-two-variables-PGPROC-type.patch\n11/07/2022 09:39 2.939\n0001-Reduce-Wsign-compare-warnings-from-clang-12-compiler.patch\n11/07/2022 09:39 15.377\n0001-Refactoring-strlen-comparisons-with-zero.patch\n02/09/2022 09:06 6.711\n0002-avoid-small-issues-brin_minmax_multi.patch\n11/07/2022 09:39 6.402\n001-aset-reduces-memory-consumption.patch\n01/07/2022 12:53 2.111 001-avoid-unecessary-MemSet-calls.patch\n11/07/2022 09:39 59.662 001-improve-executor.patch\n11/07/2022 09:39 69.155 001-improve-getsnapshot.patch\n11/07/2022 09:39 15.465 001-improve-memory.patch\n11/07/2022 09:39 24.130 001-improve-scability-procarray.patch\n11/07/2022 09:39 74.693 001-improve-scaling.patch\n22/05/2022 13:23 6.579 001-improve-sort.patch\n11/07/2022 09:39 280.420 001-improve-table-open.patch\n11/07/2022 09:39 11.451 001-reduces-memory-consumption.patch\n11/07/2022 09:39 8.516\n002-generation-reduces-memory-consumption.patch\n11/07/2022 09:39 6.025\n003-aset-reduces-memory-consumption.patch\n11/07/2022 09:39 8.966\n004-generation-reduces-memory-consumption_BUG.patch\n04/09/2022 18:28 12.095 all.patch\n05/10/2022 09:41 2.048 all2.patch\n20/09/2022 10:59 1.513 all_20_09_2022.patch\n09/10/2020 11:42 673 avoid_dereferencing_null_pointer.patch\n29/09/2022 20:39 437 
avoid_useless_reassign_lgosegno.patch\n29/09/2022 20:43 418\navoid_useless_retesting_log_min_duration.patch\n29/09/2022 20:44 625 avoid_useless_var_record.patch\n11/07/2022 09:39 32.180 FAST-001-improve-scability.patch\n11/07/2022 09:39 51.453 FAST-001-improve-sort.patch\n11/07/2022 09:39 62.491 FAST2-001-improve-sort.patch\n04/10/2022 08:22 493\nfix_declaration_volatile_signal_pg_test_fsync.patch\n29/09/2022 20:45 484\nfix_declaration_volatile_signal_var.patch\n25/08/2020 12:19 1.087 fix_dereference_null_statscmds.patch\n26/06/2020 11:26 1.526 fix_null_deference_pquery.patch\n28/08/2020 15:53 537 fix_null_memcmp_call.patch\n25/08/2020 14:53 541 fix_possible_overflow_executils.patch\n25/08/2020 14:17 757 fix_possible_overflow_nodeagg.patch\n05/09/2020 10:45 14.049 fix_redudant_init.patch\n05/09/2020 10:35 933\nfix_redudant_initialization_arrayfuncs.patch\n05/09/2020 10:47 2.403\nfix_redudant_initialization_bklno_hash.patch\n05/09/2020 10:07 793\nfix_redudant_initialization_firstmissingnum_heaptuple.patch\n05/09/2020 10:36 362\nfix_redudant_initialization_formatting.patch\n05/09/2020 10:08 406\nfix_redudant_initialization_offsetnumber_gistutil.patch\n05/09/2020 10:25 851\nfix_redudant_initialization_parse_utilcmd.patch\n05/09/2020 10:29 742\nfix_redudant_initialization_procarray.patch\n05/09/2020 10:30 604 fix_redudant_initialization_spell.patch\n05/09/2020 10:16 1.157\nfix_redudant_initialization_status_nbtsearch.patch\n05/09/2020 10:21 537\nfix_redudant_initialization_storage.patch\n05/09/2020 10:28 531\nfix_redudant_initialization_syslogger.patch\n05/09/2020 10:31 878\nfix_redudant_initialization_to_tsany.patch\n05/09/2020 10:36 452 fix_redudant_initialization_tsrank.patch\n05/09/2020 10:38 1.324\nfix_redudant_initialization_tuplesort.patch\n05/09/2020 10:34 428\nfix_redudant_initialization_wparser_def.patch\n05/09/2020 10:18 797 fix_redudant_prefix_spgtextproc.patch\n05/09/2020 10:19 834 fix_redudant_waits_xlog.patch\n25/08/2020 15:48 2.319 
fix_unchecked_return_spi_connect.patch\n09/10/2020 09:15 420 fix_uninitialized_var_flag_spell.patch\n09/09/2022 11:25 68.543 fprintf_fixes.patch\n09/09/2020 09:17 13.805 getsnapshotdata.patch\n27/09/2022 16:05 4.083 head_27_09_2022.patch\n24/08/2020 19:31 21.023 hugepage.patch\n14/05/2022 20:32 6.545 improve_sort.patch\n15/09/2022 11:50 6.327 patchs_16_09_2022.patch\n05/10/2022 14:30 15.376 postgres_05_10_2022.patch\n11/07/2022 16:25 2.068 postgres_executor.patch\n29/06/2022 11:01 29.995 postgres_sort.patch\n07/09/2020 22:07 25.449 prefetch.patch\n14/09/2020 10:22 19.919 setvbuf.patch\n14/09/2020 14:36 19.919 setvfbuf.patch\n05/09/2022 13:40 7.857 string_fixes.patch\n11/07/2022 09:39 34.130 strlen.patch\n05/10/2022 09:42 2.048 style_use_compatible_var_type.patch\n28/08/2020 10:19 5.155 unloop_toast_tuple_init.patch\n14/09/2022 20:00 3.237\nuse-heapalloc-instead-deprecated-localalloc.patch\n11/09/2020 11:47 3.733\nv1-0001-simplified_read_binary_file.patch\n07/07/2022 15:22 106.755\nv1-0001-WIP-Replace-MemSet-calls-with-struct-initialization.patch\n11/07/2022 09:39 14.918 v1-001-improve-memory.patch\n28/05/2022 08:45 24.989 v1-001-improve-scability-procarray.patch\n11/07/2022 09:39 14.972 v1-001-improve-scaling.patch\n09/09/2022 11:26 68.543 v1-fprintf_fixes.patch\n05/09/2022 21:42 20.586 v1-string_fixes.patch\n11/07/2022 09:39 34.469 v10-001-improve-scability.patch\n11/07/2022 09:39 32.229 v11-001-improve-scability.patch\n11/07/2022 09:39 36.060 v12-001-improve-scability.patch\n11/07/2022 09:39 53.064 v13-001-improve-scability.patch\n11/07/2022 09:39 48.123 v14-001-improve-scability.patch\n11/07/2022 09:39 35.443 v15-001-improve-scability.patch\n09/08/2022 15:56 95.542\nv2-0001-Improve-performance-of-and-reduce-overheads-of-me.patch\n11/09/2020 16:58 4.228\nv2-0001-simplified_read_binary_file.patch\n11/07/2022 16:03 106.755\nv2-0001-WIP-Replace-MemSet-calls-with-struct-initialization.patch\n11/07/2022 09:39 17.894 v2-001-improve-memory.patch\n11/07/2022 09:39 
349.497 v2-001-improve-scability-procarray.patch\n11/07/2022 09:39 69.227 v2-001-improve-scaling.patch\n11/07/2022 09:39 18.709\nv2-002-generation-reduces-memory-consumption.patch\n11/07/2022 09:39 42.893 v2-002-improve-sort.patch\n05/09/2022 23:16 52.064 v2-string_fixes.patch\n11/09/2020 18:38 4.047\nv3-0001-simplified_read_binary_file.patch\n01/08/2022 13:52 26.670\nv3-0001-WIP-Replace-MemSet-calls-with-struct-initialization.patch\n11/07/2022 09:39 349.781 v3-001-improve-scability-procarray.patch\n11/07/2022 09:39 45.707 v3-002-improve-sort.patch\n05/09/2022 08:34 7.510\nv3_avoid_referencing_out_of_bounds_array_elements.patch\n15/09/2020 14:29 4.306\nv4-0001-simplified_read_binary_file.patch\n11/07/2022 09:39 352.181 v4-001-improve-scability.patch\n11/07/2022 09:39 51.453 v4-002-improve-sort.patch\n11/07/2022 09:39 354.611 v5-001-improve-scability.patch\n11/07/2022 09:39 51.453 v5-002-improve-sort.patch\n11/07/2022 09:39 355.739 v6-001-improve-scability.patch\n11/07/2022 09:39 61.904 v6-002-improve-sort.patch\n11/07/2022 09:39 13.547 v7-001-improve-scability.patch\n11/07/2022 09:39 62.491 v7-002-improve-sort.patch\n11/07/2022 09:39 27.800 v8-001-improve-scability.patch\n11/07/2022 09:39 33.358 v9-001-improve-scability.patch\n27/06/2020 11:17 7.754 windows_fixes_v1.patch\n\nAnd it could contribute much, much more.\n\nregards,\nRanier Vilela\n\nEm sex., 7 de out. de 2022 às 13:32, Robert Haas <robertmhaas@gmail.com> escreveu:On Fri, Oct 7, 2022 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I did take a look at this code, and I don't see any bug.\n> The entryRes[] array entries are indeed GinTernaryValue, but it's\n> obvious by inspection that matchPartialInPendingList only returns\n> true or false, therefore collectMatchesForHeapRow also only deals\n> in true or false, never maybe.  
I do not think changing\n> matchPartialInPendingList to return ternary would be an improvement,\n> because then it'd be less obvious that it doesn't deal in maybe.\n\nI mean if the code isn't buggy, I'm glad, but I think there should\nhave been more substantial grounds for getting you to spend time\nlooking at it. It's not asking too much for people to produce a\nnon-zero amount of evidence that the thing they are worried about is\nactually a problem.Sorry if you think this is all just a waste of time.\n\nI think that while the current code has no real bugs, that doesn't mean it doesn't have readability and style issues.And that not being able to produce tests should not be an impediment to improving the current code.I believe I have contributed much more than changing \"fo\" to \"of\" in comments.\n\nRight now I have:02/09/2022  15:58               593 0001-fix-typo-isnan-test-geo_ops.patch02/09/2022  15:57             7.746 0001-fix-wrong-isnan-test-geo_ops.patch11/07/2022  09:39             2.271 0001-Promove-unshadowing-of-two-variables-PGPROC-type.patch11/07/2022  09:39             2.939 0001-Reduce-Wsign-compare-warnings-from-clang-12-compiler.patch11/07/2022  09:39            15.377 0001-Refactoring-strlen-comparisons-with-zero.patch02/09/2022  09:06             6.711 0002-avoid-small-issues-brin_minmax_multi.patch11/07/2022  09:39             6.402 001-aset-reduces-memory-consumption.patch01/07/2022  12:53             2.111 001-avoid-unecessary-MemSet-calls.patch11/07/2022  09:39            59.662 001-improve-executor.patch11/07/2022  09:39            69.155 001-improve-getsnapshot.patch11/07/2022  09:39            15.465 001-improve-memory.patch11/07/2022  09:39            24.130 001-improve-scability-procarray.patch11/07/2022  09:39            74.693 001-improve-scaling.patch22/05/2022  13:23             6.579 001-improve-sort.patch11/07/2022  09:39           280.420 001-improve-table-open.patch11/07/2022  09:39            11.451 
And it could contribute much, much more.regards,Ranier Vilela", "msg_date": "Fri, 7 Oct 2022 14:44:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Anyway, it's not *true* that collectMatchesForHeapRow deals\n> only \"true\" and \"false\", once that key->entryRes is initialized with\n> GIN_FALSE not false.\n\nLook: GinTernaryValue is carefully designed so that it is\nrepresentation-compatible with bool, including that GIN_FALSE is\nidentical to false and GIN_TRUE is identical to true. 
I'm quite\n> uninterested in debating whether that's a good idea or not.\n> Moreover, there are a ton of other places besides this one where\n> the GIN code relies on that equivalence, so we'd have to do a\n> lot more than what's in this patch if we wanted to decouple that.\n>\nJust now I checked all the places where GinTernaryValue is used and they\nall trust using GIN_TRUE, GIN_FALSE and GIN_MAYBE.\nThe only problematic place I found was at ginget.c and jsonb_gin.c\n\njsonb_gin.c:\nThe function execute_jsp_gin_node returns GIN_TRUE if the node->type is\nJSP_GIN_ENTRY and the value is GIN_MAYBE.\nIMO, it should be GIN_FALSE?\n\ncase JSP_GIN_ENTRY:\n{\nint index = node->val.entryIndex;\n\nif (ternary)\nreturn ((GinTernaryValue *) check)[index];\nelse\nreturn ((bool *) check)[index] ? GIN_TRUE : GIN_FALSE;\n}\n\nThe array entryRes is used only in ginget.c and ginscan.c.\n\nThe GinTernaryValue is used only in:\nFile contrib\\pg_trgm\\trgm_gin.c:\nFile src\\backend\\access\\gin\\ginarrayproc.c:\nFile src\\backend\\access\\gin\\ginget.c:\nFile src\\backend\\access\\gin\\ginlogic.c:\nFile src\\backend\\access\\gin\\ginscan.c:\nFile src\\backend\\utils\\adt\\jsonb_gin.c:\nFile src\\backend\\utils\\adt\\tsginidx.c:\n\nregards,\nRanier Vilela", "msg_date": "Fri, 7 Oct 2022 15:45:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" }, { "msg_contents": "On Fri, Oct 7, 2022 at 3:45 PM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> On Fri, Oct 7, 2022 at 2:44 PM Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > Anyway, it's not *true* that collectMatchesForHeapRow deals\n>> > only \"true\" and \"false\", once that key->entryRes is initialized with\n>> > GIN_FALSE not false.\n>>\n>> Look: GinTernaryValue is carefully designed so that it is\n>> representation-compatible with bool, including that GIN_FALSE is\n>> identical to false and GIN_TRUE is identical to true. I'm quite\n>> uninterested in debating whether that's a good idea or not.\n>> Moreover, there are a ton of other places besides this one where\n>> the GIN code relies on that equivalence, so we'd have to do a\n>> lot more than what's in this patch if we wanted to decouple that.\n>>\n> Just now I checked all the places where GinTernaryValue is used and they\n> all trust using GIN_TRUE, GIN_FALSE and GIN_MAYBE.\n> The only problematic place I found was at ginget.c and jsonb_gin.c\n>\n> jsonb_gin.c:\n> The function execute_jsp_gin_node returns GIN_TRUE if the node->type is\n> JSP_GIN_ENTRY and the value is GIN_MAYBE.\n> IMO, it should be GIN_FALSE?\n>\nI spent more time checking this, and I think the answer is no.\n\nSo, in the patch attached it changes only ginget.c, to avoid mixing\nGinTernaryValue with bool.\nAnd added comments to warn that a GIN_MAYBE comparison is treated as equal to GIN_TRUE in\njsonb_gin.c\n\nI think that's all that needs to be changed at the moment.\nI'm also glad there are no bugs, and IMO I hope the code has become more\nreadable and secure.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 7 Oct 2022 22:50:13 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid mix char with bool type in comparisons" } ]
[ { "msg_contents": "Refactors to isolate strcoll, wcscoll, and ucol_strcoll into\npg_locale.c which seems like a better place. Most of the buffer\nmanipulation and equality optimizations are still left in varlena.c.\n\nPatch attached.\n\nI'm not able to easily test on windows, but it should be close to\ncorrect as I just moved some code around.\n\nIs there a good way to look for regressions (perf or correctness) when\nmaking changes in this area, especially on windows and/or with strange\ncollation rules? What I did doesn't look like a problem, but there's\nalways a chance of a regression.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Thu, 06 Oct 2022 16:15:19 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Refactor to introduce pg_strcoll()." }, { "msg_contents": "On 07.10.22 01:15, Jeff Davis wrote:\n> + * Call ucol_strcollUTF8(), ucol_strcoll(), strcoll(), strcoll_l(), wcscoll(),\n> + * or wcscoll_l() as appropriate for the given locale, platform, and database\n> + * encoding. Arguments must be NUL-terminated. If the locale is not specified,\n> + * use the database collation.\n> + *\n> + * If the collation is deterministic, break ties with memcmp(), and then with\n> + * the string length.\n> + */\n> +int\n> +pg_strcoll(const char *arg1, int len1, const char *arg2, int len2,\n> +\t\t pg_locale_t locale)\n\nIt's a bit confusing that arguments must be NUL-terminated, but the \nlength is still specified. Maybe another sentence to explain that would \nbe helpful.\n\nThe length arguments ought to be of type size_t, I think.\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:57:38 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Refactor to introduce pg_strcoll()." 
}, { "msg_contents": "On Thu, 2022-10-13 at 10:57 +0200, Peter Eisentraut wrote:\n> It's a bit confusing that arguments must be NUL-terminated, but the \n> length is still specified.  Maybe another sentence to explain that\n> would \n> be helpful.\n\nAdded a comment. It was a little frustrating to get a perfectly clean\nAPI, because the callers do some buffer manipulation and optimizations\nof their own. I think this is an improvement, but suggestions welcome.\n\nIf win32 is used with UTF-8 and wcscoll, it ends up allocating some\nextra stack space for the temporary buffers, whereas previously it used\nthe buffers on the stack of varstr_cmp(). I'm not sure if that's a\nproblem or not.\n\n> The length arguments ought to be of type size_t, I think.\n\nChanged.\n\nThank you.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Fri, 14 Oct 2022 16:00:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Refactor to introduce pg_strcoll()." }, { "msg_contents": "On 15.10.22 01:00, Jeff Davis wrote:\n> On Thu, 2022-10-13 at 10:57 +0200, Peter Eisentraut wrote:\n>> It's a bit confusing that arguments must be NUL-terminated, but the\n>> length is still specified.  Maybe another sentence to explain that\n>> would\n>> be helpful.\n> \n> Added a comment. It was a little frustrating to get a perfectly clean\n> API, because the callers do some buffer manipulation and optimizations\n> of their own. I think this is an improvement, but suggestions welcome.\n> \n> If win32 is used with UTF-8 and wcscoll, it ends up allocating some\n> extra stack space for the temporary buffers, whereas previously it used\n> the buffers on the stack of varstr_cmp(). 
I'm not sure if that's a\n> problem or not.\n\nRefactoring the common code from varstr_cmp() and varstrfastcmp_locale() \nlooks like a clear win.\n\nBut I think putting the Windows UTF-8 code (win32_utf8_wcscoll()) from \nvarstr_cmp() into pg_strcoll() might create future complications. \nNormally, it would be up to varstr_sortsupport() to set up a \nWindows/UTF-8 specific comparator function, but it says there it's not \nimplemented. But someone who wanted to implement that would have to \nlift it back out of pg_strcoll, or rearrange the code in some other way. \n It's not a clear win, I think. Perhaps you have some follow-up work \nplanned, in which case it might be better to consider this in more \ncontext. Otherwise, I'd be tempted to leave it where it was in \nvarstr_cmp(). (It could still be a separate function, to keep \nvarstr_cmp() itself a bit smaller.)\n\n\n", "msg_date": "Tue, 1 Nov 2022 13:36:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Refactor to introduce pg_strcoll()." }, { "msg_contents": "On Tue, 2022-11-01 at 13:36 +0100, Peter Eisentraut wrote:\n> But I think putting the Windows UTF-8 code (win32_utf8_wcscoll())\n> from \n> varstr_cmp() into pg_strcoll() might create future complications. \n> Normally, it would be up to varstr_sortsupport() to set up a \n> Windows/UTF-8 specific comparator function, but it says there it's\n> not \n> implemented.  But someone who wanted to implement that would have to \n> lift it back out of pg_strcoll, or rearrange the code in some other\n> way.\n\nIs there a reason that it wouldn't work to just use\nvarlenafastcmp_locale() after my patch? The comment says:\n\n /* \n * There is a further exception on Windows. When the database \n * encoding is UTF-8 and we are not using the C collation, complex \n * hacks are required... \n\nBut I think the complex hacks are just the transformation into UTF 16\nand calling of wcscoll(). 
If that's done from within pg_strcoll()\n(after my patch), then I think it just works, right?\n\nI can't easily test on windows, so perhaps I'm missing something. Does\nthe buildfarm have enough coverage here? Is it reasonable to try a\ncommit that removes the:\n\n #ifdef WIN32\n if (GetDatabaseEncoding() == PG_UTF8 &&\n !(locale && locale->provider == COLLPROVIDER_ICU))\n return;\n #endif\n\nalong with my patch and see what I get out of the buildfarm?\n\n> Perhaps you have some follow-up work \n> planned, in which case it might be better to consider this in more \n> context. \n\nI was also considering an optimization to use stack allocation for\nshort strings when doing the non-UTF8 icu comparison. I am seeing some\nbenefit there, but it seems to depend on the size of the stack buffer.\nThat suggests that maybe TEXTBUFSIZE is too large (1024) and perhaps we\nshould drop it down, but I need to investigate more.\n\nIn general, I'm trying to slowly get the locale stuff more isolated.\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Fri, 04 Nov 2022 15:06:41 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Refactor to introduce pg_strcoll()." }, { "msg_contents": "Attached some new patches. I think you were right that the API of\npg_strcoll() was strange before, so I changed it to:\n\n * pg_strcoll() takes nul-terminated arguments\n * pg_strncoll() takes arguments and lengths\n\nThe new pg_strcoll()/pg_strncoll() api in 0001 seems much reasonable to\nsupport in the long term, potentially with other callers.\n\nPatch 0004 exports:\n\n * pg_collate_icu() is for ICU and takes arguments and lengths\n * pg_collate_libc() is for libc and takes nul-terminated arguments\n\nfor a tiny speedup because varstrfastcmp_locale() has both nul-\nterminated arguments and their lengths.\n\nOn Fri, 2022-11-04 at 15:06 -0700, Jeff Davis wrote:\n> But I think the complex hacks are just the transformation into UTF 16\n> and calling of wcscoll(). 
If that's done from within pg_strcoll()\n> (after my patch), then I think it just works, right?\n\nI included a patch (0005) to enable varstrfastcmp_locale on windows\nwith a server encoding of UTF-8. I don't have a great way of testing\nthis, but it seems like it should work.\n\n> I was also considering an optimization to use stack allocation for\n> short strings when doing the non-UTF8 icu comparison. I am seeing\n> some\n> benefit there\n\nI also included this optimization in 0003: try to use the stack for\nreasonable values; allocate on the heap for larger strings. I think\nit's helping a bit.\n\nPatch 0002 helps a lot more: for the case of ICU with a non-UTF8 server\nencoding, the speedup is something like 30%. The reason is that\nicu_to_uchar() currently calls ucnv_toUChars() twice: once to determine\nthe precise output buffer size required, and then again once it has the\nbuffer ready. But it's easy to just size the destination buffer\nconservatively, because the maximum space required is enough to hold\ntwice the number of UChars as there are input characters[1], plus the\nterminating nul. That means we just call ucnv_toUChars() once, to do\nthe actual conversion.\n\nMy perf test was to use a quick hack to disable abbreviated keys (so\nthat it's doing more comparisons), and then just do a large ORDER BY\n... COLLATE on a table with generated text. The text is random lorem\nipsum data, with some characters replaced by multibyte characters to\nexercise some more interesting paths. If someone else has some more\nsophisticated collation tests I'd be interested to try them.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucnv_8h.html#ae1049fcb893783c860fe0f9d4da84939", "msg_date": "Wed, 09 Nov 2022 18:23:50 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Refactor to introduce pg_strcoll()." 
}, { "msg_contents": "+ /* Win32 does not have UTF-8, so we need to map to UTF-16 */\n\nI wonder if this is still true. I think in Windows 10+ you can enable\nUTF-8 support. Then could you use strcoll_l() directly? I struggled\nto understand that, but I am a simple Unix hobbit from the shire so I\ndunno. (Perhaps the *whole OS* has to be in that mode, so you might\nhave to do a runtime test? This was discussed in another thread that\nmostly left me confused[1].).\n\nAnd that leads to another thought. We have an old comment\n\"Unfortunately, there is no strncoll(), so ...\". Curiously, Windows\ndoes actually have strncoll_l() (as do some other libcs out there).\nSo after skipping the expansion to wchar_t, one might think you could\navoid the extra copy required to nul-terminate the string (and hope\nthat it doesn't make an extra copy internally, far from given).\nUnfortunately it seems to be defined in a strange way that doesn't\nlook like your pg_strncoll_XXX() convention: it has just one length\nparameter, not one for each string. That is, it's designed for\ncomparing prefixes of strings, not for working with\nnon-null-terminated strings. I'm not entirely sure if the interface\nmakes sense at all! Is it measuring in 'chars' or 'encoded\ncharacters'? I would guess the former, like strncpy() et al, but then\nwhat does it mean if it chops a UTF-8 sequence in half? And at a\nhigher level, if you wanted to use it for our purpose, you'd\npresumably need Min(s1_len, s2_len), but I wonder if there are string\npairs that would sort in a different order if the collation algorithm\ncould see more characters after that? For example, in Dutch \"ij\" is\nsometimes treated like a letter that sorts differently than \"i\" + \"j\"\nnormally would, so if you arbitrarily chop that \"j\" off while\ncomparing common-length prefix you might get into trouble; likewise\nfor \"aa\" in Danish. 
Perhaps these sorts of problems explain why it's\nnot in the standard (though I see it was at some point in some kind of\ndraft; I don't grok the C standards process enough to track down what\nhappened but WG20/WG14 draft N1027[2] clearly contains strncoll_l()\nalongside the stuff that we know and use today). Or maybe I'm\nunderthinking it.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ%3DXThErgAQRoqfCy1bKPxXVuF0%3D2zDbB%2BSxDs59pv7Fw%40mail.gmail.com\n[2] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1027.pdf\n\n\n", "msg_date": "Mon, 6 Mar 2023 11:20:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor to introduce pg_strcoll()." } ]
[ { "msg_contents": "WARNING: tables were not subscribed, you will have to run ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables\n\n~\n\nWhen I first encountered the above CREATE SUBSCRIPTION warning message\nI thought it was dubious-looking English...\n\nOn closer inspection I think the message has some other things that\ncould be improved:\na) it is quite long which IIUC is generally frowned upon\nb) IMO most of the text it is more like a \"hint\" about what to do\n\n~\n\nPSA a patch which modifies this warning as follows:\n\nBEFORE\n\ntest_sub=# create subscription sub1 connection 'host=localhost\nport=test_pub' publication pub1 with (connect = false);\nWARNING: tables were not subscribed, you will have to run ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION to subscribe the tables\nCREATE SUBSCRIPTION\n\nAFTER\n\ntest_sub=# create subscription sub1 connection 'host=localhost\nport=test_pub' publication pub1 with (connect = false);\nWARNING: tables were not subscribed\nHINT: You will have to run ALTER SUBSCRIPTION ... REFRESH PUBLICATION\nto subscribe the tables.\nCREATE SUBSCRIPTION\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 7 Oct 2022 13:15:22 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "create subscription - improved warning message" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> WARNING: tables were not subscribed, you will have to run ALTER\n> SUBSCRIPTION ... 
REFRESH PUBLICATION to subscribe the tables\n\n> When I first encountered the above CREATE SUBSCRIPTION warning message\n> I thought it was dubious-looking English...\n\n> On closer inspection I think the message has some other things that\n> could be improved:\n> a) it is quite long which IIUC is generally frowned upon\n> b) IMO most of the text it is more like a \"hint\" about what to do\n\nYou're quite right about both of those points, but I think there's\neven more to criticize: \"tables were not subscribed\" is a basically\nuseless message, and probably not even conceptually accurate.\nLooking at the code, I think the situation being complained of is that\nwe have created the subscription's main catalog entries locally, but\nsince we were told not to connect to the publisher, we don't know what\ntables the subscription is supposed to be reading. I'm not sure what\nthe consequences of that are: do we not read any data at all yet, or\nwhat?\n\nI think maybe a better message would be along the lines of\n\nWARNING: subscription was created, but is not up-to-date\nHINT: You should now run %s to initiate collection of data.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 11:23:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Sat, Oct 8, 2022 at 2:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > WARNING: tables were not subscribed, you will have to run ALTER\n> > SUBSCRIPTION ... 
REFRESH PUBLICATION to subscribe the tables\n>\n> > When I first encountered the above CREATE SUBSCRIPTION warning message\n> > I thought it was dubious-looking English...\n>\n> > On closer inspection I think the message has some other things that\n> > could be improved:\n> > a) it is quite long which IIUC is generally frowned upon\n> > b) IMO most of the text it is more like a \"hint\" about what to do\n>\n> You're quite right about both of those points, but I think there's\n> even more to criticize: \"tables were not subscribed\" is a basically\n> useless message, and probably not even conceptually accurate.\n> Looking at the code, I think the situation being complained of is that\n> we have created the subscription's main catalog entries locally, but\n> since we were told not to connect to the publisher, we don't know what\n> tables the subscription is supposed to be reading. I'm not sure what\n> the consequences of that are: do we not read any data at all yet, or\n> what?\n>\n> I think maybe a better message would be along the lines of\n>\n> WARNING: subscription was created, but is not up-to-date\n> HINT: You should now run %s to initiate collection of data.\n>\n> Thoughts?\n\nYes, IMO it's better to change the message more radically as you did.\n\nBut if it's OK to do that then:\n- maybe it should mention the connection since the connect=false was\nwhat caused this warning.\n- maybe saying 'replication' instead of 'collection of data' would be\nmore consistent with the pgdocs for CREATE SUBSCRIPTION\n\ne.g.\n\nWARNING: subscription was created, but is not connected\nHINT: You should run %s to initiate replication.\n\n(I can update the patch when the final text is decided)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 10 Oct 2022 10:10:18 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "Peter Smith 
<smithpb2250@gmail.com> writes:\n> On Sat, Oct 8, 2022 at 2:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think maybe a better message would be along the lines of\n>> WARNING: subscription was created, but is not up-to-date\n>> HINT: You should now run %s to initiate collection of data.\n\n> [ how about ]\n> WARNING: subscription was created, but is not connected\n> HINT: You should run %s to initiate replication.\n\nOK by me; anybody else want to weigh in?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Oct 2022 19:12:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 10, 2022 at 4:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> But if it's OK to do that then:\n> - maybe it should mention the connection since the connect=false was\n> what caused this warning.\n> - maybe saying 'replication' instead of 'collection of data' would be\n> more consistent with the pgdocs for CREATE SUBSCRIPTION\n>\n> e.g.\n>\n> WARNING: subscription was created, but is not connected\n> HINT: You should run %s to initiate replication.\n>\n\nYeah, this message looks better than the current one. However, when I\ntried to do what HINT says, it doesn't initiate replication. It gives\nme the below error:\n\npostgres=# Alter subscription sub1 refresh publication;\nERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions\n\nThen, I enabled the subscription and again tried as below:\npostgres=# Alter subscription sub1 enable;\nALTER SUBSCRIPTION\npostgres=# Alter subscription sub1 refresh publication;\nALTER SUBSCRIPTION\n\nEven after the above replication is not initiated. I see \"ERROR:\nreplication slot \"sub1\" does not exist\" in subscriber logs. Then, I\nmanually created this slot (by using\npg_create_logical_replication_slot) on the publisher. 
After that,\nreplication started to work.\n\nIf I am not missing something, don't you think we need a somewhat more\nelaborative HINT, or may be just give the WARNING?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 10 Oct 2022 09:57:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Yeah, this message looks better than the current one. However, when I\n> tried to do what HINT says, it doesn't initiate replication. It gives\n> me the below error:\n\n> postgres=# Alter subscription sub1 refresh publication;\n> ERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions\n\nGeez ... is there *anything* that's not broken about this message?\n\nI'm beginning to question the entire premise here. That is,\nrather than tweaking this message until it's within hailing\ndistance of sanity, why do we allow the no-connect case at all?\nIt's completely obvious that nobody uses this option, or we'd\nalready have heard complaints about the advice being a lie.\nWhat's the real-world use case?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 00:40:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 10, 2022 at 10:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Yeah, this message looks better than the current one. However, when I\n> > tried to do what HINT says, it doesn't initiate replication. It gives\n> > me the below error:\n>\n> > postgres=# Alter subscription sub1 refresh publication;\n> > ERROR: ALTER SUBSCRIPTION ... REFRESH is not allowed for disabled subscriptions\n>\n> Geez ... 
is there *anything* that's not broken about this message?\n>\n> I'm beginning to question the entire premise here. That is,\n> rather than tweaking this message until it's within hailing\n> distance of sanity, why do we allow the no-connect case at all?\n>\n\nThe docs say [1]: \"When creating a subscription, the remote host is\nnot reachable or in an unclear state. In that case, the subscription\ncan be created using the connect = false option. The remote host will\nthen not be contacted at all. This is what pg_dump uses. The remote\nreplication slot will then have to be created manually before the\nsubscription can be activated.\"\n\nI think the below gives accurate information:\nWARNING: subscription was created, but is not connected\nHINT: You should create the slot manually, enable the subscription,\nand run %s to initiate replication.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 10 Oct 2022 10:34:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 10, 2022 at 10:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n...\n> The docs say [1]: \"When creating a subscription, the remote host is\n> not reachable or in an unclear state. In that case, the subscription\n> can be created using the connect = false option. The remote host will\n> then not be contacted at all. This is what pg_dump uses. 
The remote\n> replication slot will then have to be created manually before the\n> subscription can be activated.\"\n>\n> I think the below gives accurate information:\n> WARNING: subscription was created, but is not connected\n> HINT: You should create the slot manually, enable the subscription,\n> and run %s to initiate replication.\n>\n\n+1\n\nPSA patch v2 worded like that.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 10 Oct 2022 20:27:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 10, 2022 at 12:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm beginning to question the entire premise here. That is,\n> rather than tweaking this message until it's within hailing\n> distance of sanity, why do we allow the no-connect case at all?\n\nThat sounds pretty nuts to me, because of the pg_dump use case if\nnothing else. I don't think it's reasonable to say \"oh, if you execute\nthis DDL on your system, it will instantaneously and automatically\nbegin to create outbound network connections, and there's no way to\nturn that off.\" It ought to be possible to set up a configuration\nfirst and then only later turn it on. 
And it definitely ought to be\npossible, if things aren't working out, to turn it back off, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 08:28:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> I think the below gives accurate information:\n>> WARNING: subscription was created, but is not connected\n>> HINT: You should create the slot manually, enable the subscription,\n>> and run %s to initiate replication.\n\n> +1\n\nIt feels a little strange to me that we describe two steps rather vaguely\nand then give exact SQL for the third step. How about, say,\n\nHINT: To initiate replication, you must manually create the replication\nslot, enable the subscription, and refresh the subscription.\n\nAnother idea is\n\nHINT: To initiate replication, create the replication slot on the\npublisher, then run ALTER SUBSCRIPTION ... ENABLE and ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 10:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 10, 2022 at 8:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> I think the below gives accurate information:\n> >> WARNING: subscription was created, but is not connected\n> >> HINT: You should create the slot manually, enable the subscription,\n> >> and run %s to initiate replication.\n>\n> > +1\n>\n> It feels a little strange to me that we describe two steps rather vaguely\n> and then give exact SQL for the third step. 
How about, say,\n>\n> HINT: To initiate replication, you must manually create the replication\n> slot, enable the subscription, and refresh the subscription.\n>\n\nLGTM. BTW, do we want to slightly adjust the documentation for the\nconnect option on CREATE SUBSCRIPTION page [1]? It says: \"Since no\nconnection is made when this option is false, no tables are\nsubscribed, and so after you enable the subscription nothing will be\nreplicated. You will need to then run ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION for tables to be subscribed.\"\n\nIt doesn't say anything about manually creating the slot and probably\nthe wording can be made similar.\n\n[1] - https://www.postgresql.org/docs/devel/sql-createsubscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Oct 2022 09:16:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Tue, Oct 11, 2022 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 10, 2022 at 8:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> > > On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >> I think the below gives accurate information:\n> > >> WARNING: subscription was created, but is not connected\n> > >> HINT: You should create the slot manually, enable the subscription,\n> > >> and run %s to initiate replication.\n> >\n> > > +1\n> >\n> > It feels a little strange to me that we describe two steps rather vaguely\n> > and then give exact SQL for the third step. 
How about, say,\n> >\n> > HINT: To initiate replication, you must manually create the replication\n> > slot, enable the subscription, and refresh the subscription.\n> >\n>\n> LGTM.\n\nPSA patch v3-0001 where the message/hint is worded as suggested above\n\n> BTW, do we want to slightly adjust the documentation for the\n> connect option on CREATE SUBSCRIPTION page [1]? It says: \"Since no\n> connection is made when this option is false, no tables are\n> subscribed, and so after you enable the subscription nothing will be\n> replicated. You will need to then run ALTER SUBSCRIPTION ... REFRESH\n> PUBLICATION for tables to be subscribed.\"\n>\n> It doesn't say anything about manually creating the slot and probably\n> the wording can be made similar.\n>\n\nPSA patch v3-0002 which changes CREATE SUBSCRIPTION pgdocs to use the\nsame wording as in the HINT message.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 11 Oct 2022 17:30:37 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On 2022-Oct-10, Peter Smith wrote:\n\n> On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > I think the below gives accurate information:\n> > WARNING: subscription was created, but is not connected\n> > HINT: You should create the slot manually, enable the subscription,\n> > and run %s to initiate replication.\n\nI guess this is reasonable, but how do I know what slot name do I have\nto create? 
Maybe it'd be better to be explicit about that:\n\nHINT: You should create slot \\\"%s\\\" manually, enable the subscription, and run %s to initiate replication.\n\nthough this still leaves a lot unexplained about that slot creation\n(which options do they have to use).\n\n\nIf this sounds like too much for a HINT, perhaps we need a documentation\nsubsection that explains exactly what to do, and have this HINT\nreference the documentation? I don't think we do that anywhere else,\nthough.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n", "msg_date": "Tue, 11 Oct 2022 12:57:06 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On 2022-Oct-11, Peter Smith wrote:\n\n> On Tue, Oct 11, 2022 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Oct 10, 2022 at 8:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > > It feels a little strange to me that we describe two steps rather\n> > > vaguely and then give exact SQL for the third step. How about,\n> > > say,\n> > >\n> > > HINT: To initiate replication, you must manually create the\n> > > replication slot, enable the subscription, and refresh the\n> > > subscription.\n> >\n> > LGTM.\n> \n> PSA patch v3-0001 where the message/hint is worded as suggested above\n\nLGTM.\n\n> > BTW, do we want to slightly adjust the documentation for the\n> > connect option on CREATE SUBSCRIPTION page [1]? It says: \"Since no\n> > connection is made when this option is false, no tables are\n> > subscribed, and so after you enable the subscription nothing will be\n> > replicated. You will need to then run ALTER SUBSCRIPTION ... 
REFRESH\n> > PUBLICATION for tables to be subscribed.\"\n> >\n> > It doesn't say anything about manually creating the slot and probably\n> > the wording can be made similar.\n> \n> PSA patch v3-0002 which changes CREATE SUBSCRIPTION pgdocs to use the\n> same wording as in the HINT message.\n\nI think we want the documentation to explain in much more detail what is\nmeant. Users are going to need some help in determining which commands\nto run for each of the step mentioned in the hint, so I don't think we\nwant the docs to say the same thing as the hint. How does the user know\nthe name of the slot, what options to use, what are the commands to run\nafterwards. So I think we should aim to *expand* that text, not reduce\nit.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Oct 2022 09:04:04 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Tue, Oct 11, 2022 at 4:27 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-10, Peter Smith wrote:\n>\n> > On Mon, Oct 10, 2022 at 4:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > I think the below gives accurate information:\n> > > WARNING: subscription was created, but is not connected\n> > > HINT: You should create the slot manually, enable the subscription,\n> > > and run %s to initiate replication.\n>\n> I guess this is reasonable, but how do I know what slot name do I have\n> to create? 
Maybe it'd be better to be explicit about that:\n>\n> HINT: You should create slot \\\"%s\\\" manually, enable the subscription, and run %s to initiate replication.\n>\n\nI am not so sure about including a slot name because users can create\na slot with a name of their choice and set it via Alter Subscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Oct 2022 13:25:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Wed, Oct 12, 2022 at 12:34 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-11, Peter Smith wrote:\n>\n> > > BTW, do we want to slightly adjust the documentation for the\n> > > connect option on CREATE SUBSCRIPTION page [1]? It says: \"Since no\n> > > connection is made when this option is false, no tables are\n> > > subscribed, and so after you enable the subscription nothing will be\n> > > replicated. You will need to then run ALTER SUBSCRIPTION ... REFRESH\n> > > PUBLICATION for tables to be subscribed.\"\n> > >\n> > > It doesn't say anything about manually creating the slot and probably\n> > > the wording can be made similar.\n> >\n> > PSA patch v3-0002 which changes CREATE SUBSCRIPTION pgdocs to use the\n> > same wording as in the HINT message.\n>\n> I think we want the documentation to explain in much more detail what is\n> meant. Users are going to need some help in determining which commands\n> to run for each of the step mentioned in the hint, so I don't think we\n> want the docs to say the same thing as the hint. How does the user know\n> the name of the slot, what options to use, what are the commands to run\n> afterwards.\n>\n\nI think it is a good idea to expand the docs for this but note that\nthere are multiple places that use a similar description. 
For example,\nsee the description slot_name option: \"When slot_name is set to NONE,\nthere will be no replication slot associated with the subscription.\nThis can be used if the replication slot will be created later\nmanually. Such subscriptions must also have both enabled and\ncreate_slot set to false.\". Then, we have a few places in the logical\nreplication docs [1] that talk about creating the slot manually but\ndidn't explain in detail the name or options to use. We might want to\nwrite a slightly bigger doc patch so that we can write the description\nin one place and give reference to the same at other places.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Oct 2022 13:47:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On 2022-Oct-12, Amit Kapila wrote:\n\n> I think it is a good idea to expand the docs for this but note that\n> there are multiple places that use a similar description. For example,\n> see the description slot_name option: \"When slot_name is set to NONE,\n> there will be no replication slot associated with the subscription.\n> This can be used if the replication slot will be created later\n> manually. Such subscriptions must also have both enabled and\n> create_slot set to false.\". Then, we have a few places in the logical\n> replication docs [1] that talk about creating the slot manually but\n> didn't explain in detail the name or options to use. 
We might want to\n> write a slightly bigger doc patch so that we can write the description\n> in one place and give reference to the same at other places.\n\n+1\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Para tener más hay que desear menos\"\n\n\n", "msg_date": "Wed, 12 Oct 2022 10:38:25 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Wed, Oct 12, 2022 at 2:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-12, Amit Kapila wrote:\n>\n> > I think it is a good idea to expand the docs for this but note that\n> > there are multiple places that use a similar description. For example,\n> > see the description slot_name option: \"When slot_name is set to NONE,\n> > there will be no replication slot associated with the subscription.\n> > This can be used if the replication slot will be created later\n> > manually. Such subscriptions must also have both enabled and\n> > create_slot set to false.\". Then, we have a few places in the logical\n> > replication docs [1] that talk about creating the slot manually but\n> > didn't explain in detail the name or options to use. We might want to\n> > write a slightly bigger doc patch so that we can write the description\n> > in one place and give reference to the same at other places.\n>\n> +1\n>\n\nOkay, then I think we can commit the last error message patch of Peter\n[1] as we have an agreement on the same, and then work on this as a\nseparate patch. 
What do you think?\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPtgkebavGYsGnROkY1%3DULhJ5%2Byn4_i3Y9E9%2ByDeksqpwQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Oct 2022 16:22:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On 2022-Oct-12, Amit Kapila wrote:\n\n> Okay, then I think we can commit the last error message patch of Peter\n> [1] as we have an agreement on the same, and then work on this as a\n> separate patch. What do you think?\n\nNo objection.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Oct 2022 14:31:29 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Oct-12, Amit Kapila wrote:\n>> Okay, then I think we can commit the last error message patch of Peter\n>> [1] as we have an agreement on the same, and then work on this as a\n>> separate patch. What do you think?\n\n> No objection.\n\nYeah, the v3-0001 patch is fine. I agree that the docs change needs\nmore work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Oct 2022 11:01:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Thu, Oct 13, 2022 at 2:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Oct-12, Amit Kapila wrote:\n> >> Okay, then I think we can commit the last error message patch of Peter\n> >> [1] as we have an agreement on the same, and then work on this as a\n> >> separate patch. What do you think?\n>\n> > No objection.\n>\n> Yeah, the v3-0001 patch is fine. 
I agree that the docs change needs\n> more work.\n\nThanks to everybody for the feedback/suggestions. I will work on\nimproving the documentation for this and post something in a day or\nso.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:07:33 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Wed, Oct 12, 2022 at 8:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Oct-12, Amit Kapila wrote:\n> >> Okay, then I think we can commit the last error message patch of Peter\n> >> [1] as we have an agreement on the same, and then work on this as a\n> >> separate patch. What do you think?\n>\n> > No objection.\n>\n> Yeah, the v3-0001 patch is fine.\n>\n\nPushed this one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:37:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Thu, Oct 13, 2022 at 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 2:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > On 2022-Oct-12, Amit Kapila wrote:\n> > >> Okay, then I think we can commit the last error message patch of Peter\n> > >> [1] as we have an agreement on the same, and then work on this as a\n> > >> separate patch. What do you think?\n> >\n> > > No objection.\n> >\n> > Yeah, the v3-0001 patch is fine. I agree that the docs change needs\n> > more work.\n>\n> Thanks to everybody for the feedback/suggestions. 
I will work on\n> improving the documentation for this and post something in a day or\n> so.\n\n\nPSA a patch for adding examples of how to activate a subscription that\nwas originally created in a disconnected mode.\n\nThe new docs are added as part of the \"Logical Replication -\nSubscription\" section 31.2.\n\nThe CREATE SUBSCRIPTION reference page was also updated to include\nlinks to the new docs.\n\nFeedback welcome.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Fri, 14 Oct 2022 13:52:11 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Fri, Oct 14, 2022 at 8:22 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Oct 13, 2022 at 2:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > > On 2022-Oct-12, Amit Kapila wrote:\n> > > >> Okay, then I think we can commit the last error message patch of Peter\n> > > >> [1] as we have an agreement on the same, and then work on this as a\n> > > >> separate patch. What do you think?\n> > >\n> > > > No objection.\n> > >\n> > > Yeah, the v3-0001 patch is fine. I agree that the docs change needs\n> > > more work.\n> >\n> > Thanks to everybody for the feedback/suggestions. I will work on\n> > improving the documentation for this and post something in a day or\n> > so.\n>\n>\n> PSA a patch for adding examples of how to activate a subscription that\n> was originally created in a disconnected mode.\n>\n> The new docs are added as part of the \"Logical Replication -\n> Subscription\" section 31.2.\n>\n> The CREATE SUBSCRIPTION reference page was also updated to include\n> links to the new docs.\n>\n\nYou have used 'pgoutput' as plugin name in the examples. 
Shall we\nmention in some way that this is a default plugin used for built-in\nlogical replication and it is required to use only this one to enable\nlogical replication?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 15 Oct 2022 18:44:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Sun, Oct 16, 2022 at 12:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 14, 2022 at 8:22 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Oct 13, 2022 at 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n...\n> > PSA a patch for adding examples of how to activate a subscription that\n> > was originally created in a disconnected mode.\n> >\n> > The new docs are added as part of the \"Logical Replication -\n> > Subscription\" section 31.2.\n> >\n> > The CREATE SUBSCRIPTION reference page was also updated to include\n> > links to the new docs.\n> >\n>\n> You have used 'pgoutput' as plugin name in the examples. 
Shall we\n> mention in some way that this is a default plugin used for built-in\n> logical replication and it is required to use only this one to enable\n> logical replication?\n>\n\nUpdated as suggested.\n\nPSA v5.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 17 Oct 2022 12:46:59 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 17, 2022 9:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Sun, Oct 16, 2022 at 12:14 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Oct 14, 2022 at 8:22 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Oct 13, 2022 at 9:07 AM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> > > >\r\n> ...\r\n> > > PSA a patch for adding examples of how to activate a subscription that\r\n> > > was originally created in a disconnected mode.\r\n> > >\r\n> > > The new docs are added as part of the \"Logical Replication -\r\n> > > Subscription\" section 31.2.\r\n> > >\r\n> > > The CREATE SUBSCRIPTION reference page was also updated to include\r\n> > > links to the new docs.\r\n> > >\r\n> >\r\n> > You have used 'pgoutput' as plugin name in the examples. Shall we\r\n> > mention in some way that this is a default plugin used for built-in\r\n> > logical replication and it is required to use only this one to enable\r\n> > logical replication?\r\n> >\r\n> \r\n> Updated as suggested.\r\n> \r\n> PSA v5.\r\n> \r\n\r\nThanks for your patch. Here are some comments.\r\n\r\nIn Example 2, the returned slot_name should be \"myslot\".\r\n\r\n+test_pub=# SELECT * FROM pg_create_logical_replication_slot('myslot', 'pgoutput');\r\n+ slot_name | lsn\r\n+-----------+-----------\r\n+ sub1 | 0/19059A0\r\n+(1 row)\r\n\r\nBesides, I am thinking is it possible to slightly simplify the example. 
For\r\nexample, merge example 1 and 2, keep the steps of example 2 and in the step of\r\ncreating slot, mention what should we do if slot_name is not specified when\r\ncreating subscription.\r\n\r\n\r\nRegards,\r\nShi yu\r\n\r\n\r\n", "msg_date": "Mon, 17 Oct 2022 08:11:46 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 17, 2022 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> Updated as sugggested.\n>\n\n+ <para>\n+ Sometimes, either by choice (e.g. <literal>create_slot = false</literal>),\n+ or by necessity (e.g. <literal>connect = false</literal>), the remote\n+ replication slot is not created automatically during\n+ <literal>CREATE SUBSCRIPTION</literal>. In these cases the user will have\n+ to create the slot manually before the subscription can be activated.\n+ </para>\n\nThis part looks a bit odd when in the previous section we have\nexplained the same thing in different words. I think it may be better\nif we start with something like: \"As mentioned in the previous\nsection, there are cases where we need to create the slot manually\nbefore the subscription can be activated.\". I think you can even\ncombine the next para in the patch with this one.\n\nAlso, it looks odd that the patch uses examples to demonstrate how to\nmanually create a slot, and then we have a separate section whose\ntitle is Examples. 
I am not sure what is the best way to arrange docs\nhere but maybe we can consider renaming the Examples section to\nsomething more specific.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 Oct 2022 16:43:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "Thanks for the feedback.\n\nOn Mon, Oct 17, 2022 at 10:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 17, 2022 at 7:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> >\n> > Updated as sugggested.\n> >\n>\n> + <para>\n> + Sometimes, either by choice (e.g. <literal>create_slot = false</literal>),\n> + or by necessity (e.g. <literal>connect = false</literal>), the remote\n> + replication slot is not created automatically during\n> + <literal>CREATE SUBSCRIPTION</literal>. In these cases the user will have\n> + to create the slot manually before the subscription can be activated.\n> + </para>\n>\n> This part looks a bit odd when in the previous section we have\n> explained the same thing in different words. I think it may be better\n> if we start with something like: \"As mentioned in the previous\n> section, there are cases where we need to create the slot manually\n> before the subscription can be activated.\". I think you can even\n> combine the next para in the patch with this one.\n\nModified the text and combined the paragraphs as suggested.\n\n>\n> Also, it looks odd that the patch uses examples to demonstrate how to\n> manually create a slot, and then we have a separate section whose\n> title is Examples. 
I am not sure what is the best way to arrange docs\n> here but maybe we can consider renaming the Examples section to\n> something more specific.\n>\n\nRenamed the examples sections to make their purpose clearer.\n\n~~~\n\nPSA patch v6 with the above changes + one correction from Shi-san [1]\n\n------\n[1] https://www.postgresql.org/message-id/OSZPR01MB631051BA9AAA728CAA8CBD88FD299%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 18 Oct 2022 20:40:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Mon, Oct 17, 2022 at 7:11 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n...\n>\n> Thanks for your patch. Here are some comments.\n>\n> In Example 2, the returned slot_name should be \"myslot\".\n>\n> +test_pub=# SELECT * FROM pg_create_logical_replication_slot('myslot', 'pgoutput');\n> + slot_name | lsn\n> +-----------+-----------\n> + sub1 | 0/19059A0\n> +(1 row)\n>\n\nOops. Sorry for my cut/paste error. Fixed in patch v6.\n\n\n> Besides, I am thinking is it possible to slightly simplify the example. For\n> example, merge example 1 and 2, keep the steps of example 2 and in the step of\n> creating slot, mention what should we do if slot_name is not specified when\n> creating subscription.\n>\n\nSure, it might be a bit shorter to combine the examples, but I thought\nit’s just simpler not to do it that way because the combined example\nwill then need additional bullets/notes to say – “if there is no\nslot_name do this…” and “if there is a slot_name do that…”. 
It’s\nreally only the activation part which is identical for both.\n\n-----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 Oct 2022 20:44:19 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Tue, Oct 18, 2022 5:44 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Mon, Oct 17, 2022 at 7:11 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> ...\r\n> >\r\n> > Thanks for your patch. Here are some comments.\r\n> >\r\n> > In Example 2, the returned slot_name should be \"myslot\".\r\n> >\r\n> > +test_pub=# SELECT * FROM pg_create_logical_replication_slot('myslot',\r\n> 'pgoutput');\r\n> > + slot_name | lsn\r\n> > +-----------+-----------\r\n> > + sub1 | 0/19059A0\r\n> > +(1 row)\r\n> >\r\n> \r\n> Oops. Sorry for my cut/paste error. Fixed in patch v6.\r\n> \r\n> \r\n> > Besides, I am thinking is it possible to slightly simplify the example. For\r\n> > example, merge example 1 and 2, keep the steps of example 2 and in the\r\n> step of\r\n> > creating slot, mention what should we do if slot_name is not specified when\r\n> > creating subscription.\r\n> >\r\n> \r\n> Sure, it might be a bit shorter to combine the examples, but I thought\r\n> it’s just simpler not to do it that way because the combined example\r\n> will then need additional bullets/notes to say – “if there is no\r\n> slot_name do this…” and “if there is a slot_name do that…”. 
It’s\r\n> really only the activation part which is identical for both.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\n+test_sub=# CREATE SUBSCRIPTION sub1\r\n+test_sub-# CONNECTION 'host=localhost dbname=test_pub'\r\n+test_sub-# PUBLICATION pub1\r\n+test_sub-# WITH (slot_name=NONE, enabled=false, create_slot=false);\r\n+WARNING: subscription was created, but is not connected\r\n+HINT: To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription.\r\n+CREATE SUBSCRIPTION\r\n\r\nIn example 3, there is actually no such warning message when creating\r\nsubscription because \"connect=false\" is not specified.\r\n\r\nI have tested the examples in the patch and didn't see any problem other than\r\nthe one above.\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Wed, 19 Oct 2022 03:44:51 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: create subscription - improved warning message" }, { "msg_contents": "On Wed, Oct 19, 2022 at 2:44 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n...\n\n>\n> +test_sub=# CREATE SUBSCRIPTION sub1\n> +test_sub-# CONNECTION 'host=localhost dbname=test_pub'\n> +test_sub-# PUBLICATION pub1\n> +test_sub-# WITH (slot_name=NONE, enabled=false, create_slot=false);\n> +WARNING: subscription was created, but is not connected\n> +HINT: To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription.\n> +CREATE SUBSCRIPTION\n>\n> In example 3, there is actually no such warning message when creating\n> subscription because \"connect=false\" is not specified.\n>\n\nOh, thanks for finding and reporting that. Sorry for my cut/paste\nerrors. Fixed in v7. 
PSA,\n\n> I have tested the examples in the patch and didn't see any problem other than\n> the one above.\n>\n\nThanks for your testing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 19 Oct 2022 15:40:31 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Wed, Oct 19, 2022 at 10:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for your testing.\n>\n\nLGTM so pushed with a minor change in one of the titles in the Examples section.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Nov 2022 15:32:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: create subscription - improved warning message" }, { "msg_contents": "On Wed, Nov 2, 2022 at 9:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 10:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Thanks for your testing.\n> >\n>\n> LGTM so pushed with a minor change in one of the titles in the Examples section.\n>\n\nThanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 3 Nov 2022 08:01:48 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: create subscription - improved warning message" } ]
[ { "msg_contents": "Hi,\n\nI found that tag files generated by src/tools/make_ctags\ndoesn't include some keywords, that were field names of node\nstructs, for example norm_selec in RestrictInfo. Such fields\nare defined with pg_node_attr macro that was introduced\nrecently, like:\n\n Selectivity norm_selec pg_node_attr(equal_ignore);\n\nIn this case, pg_node_attr is mistakenly interpreted to be\nthe name of the field. So, I propose to use -I option of ctags\nto ignore the macro name. Attached is a patch to do it.\n\nRegards,\nYugo Nagata \n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 7 Oct 2022 15:44:42 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hello\n\nOn 2022-Oct-07, Yugo NAGATA wrote:\n\n> I found that tag files generated by src/tools/make_ctags\n> doesn't include some keywords, that were field names of node\n> structs, for example norm_selec in RestrictInfo. Such fields\n> are defined with pg_node_attr macro that was introduced\n> recently, like:\n> \n> Selectivity norm_selec pg_node_attr(equal_ignore);\n> \n> In this case, pg_node_attr is mistakenly interpreted to be\n> the name of the field. So, I propose to use -I option of ctags\n> to ignore the macro name. Attached is a patch to do it.\n\nI've wondered if there's anybody that uses this script. I suppose if\nyou're reporting this problem then it has at least one user, and\ntherefore worth fixing.\n\nIf we do patch it, how about doing some more invasive surgery and\nbringing it to the XXI century? 
I think the `find` line is a bit lame:\n\n> # this is outputting the tags into the file 'tags', and appending\n> find `pwd`/ -type f -name '*.[chyl]' -print |\n> -\txargs ctags -a -f tags \"$FLAGS\"\n> +\txargs ctags -a -f tags \"$FLAGS\" -I \"$IGNORE_LIST\"\n\nespecially because it includes everything in tmp_install, which pollutes\nthe tag list.\n\nIn my own tags script I just call \"ctags -R\", and I feed cscope with\nthese find lines\n\n(find $SRCDIR \\( -name tmp_install -prune -o -name tmp_check -prune \\) -o \\( -name \"*.[chly]\" -o -iname \"*makefile*\" -o -name \"*.mk\" -o -name \"*.in\" -o -name \"*.sh\" -o -name \"*.sgml\" -o -name \"*.sql\" -o -name \"*.p[lm]\" \\) -type f -print ; \\\nfind $BUILDDIR \\( -name tmp_install -prune \\) -o \\( -name \\*.h -a -type f \\) -print )\n\nwhich seems to give decent results. (Nowadays I wonder if it'd be\nbetter to exclude the \"*_d.h\" files from the builddir.)\n(I wonder why don't I have a prune for .git ...)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n¡Ay, ay, ay! Con lo mucho que yo lo quería (bis)\nse fue de mi vera ... se fue para siempre, pa toíta ... pa toíta la vida\n¡Ay Camarón! ¡Ay Camarón! (Paco de Lucía)\n\n\n", "msg_date": "Mon, 10 Oct 2022 12:04:22 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hello\n\nOn Mon, 10 Oct 2022 12:04:22 +0200\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Hello\n> \n> On 2022-Oct-07, Yugo NAGATA wrote:\n> \n> > I found that tag files generated by src/tools/make_ctags\n> > doesn't include some keywords, that were field names of node\n> > structs, for example norm_select in RestrictInfo. 
Such fields\n> > are defined with pg_node_attr macro that was introduced\n> > recently, like:\n> > \n> > Selectivity norm_selec pg_node_attr(equal_ignore);\n> > \n> > In this case, pg_node_attr is mistakenly interpreted to be\n> > the name of the field. So, I propose to use -I option of ctags\n> > to ignore the marco name. Attached is a patch to do it.\n> \n> I've wondered if there's anybody that uses this script. I suppose if\n> you're reporting this problem then it has at least one user, and\n> therefore worth fixing.\n\nYeah, I am a make_ctags user, there may be few users though....\n\n> If we do patch it, how about doing some more invasive surgery and\n> bringing it to the XXI century? I think the `find` line is a bit lame:\n> \n> > # this is outputting the tags into the file 'tags', and appending\n> > find `pwd`/ -type f -name '*.[chyl]' -print |\n> > -\txargs ctags -a -f tags \"$FLAGS\"\n> > +\txargs ctags -a -f tags \"$FLAGS\" -I \"$IGNORE_LIST\"\n> \n> especially because it includes everything in tmp_install, which pollutes\n> the tag list.\n> \n> In my own tags script I just call \"ctags -R\", and I feed cscope with\n> these find lines\n> \n> (find $SRCDIR \\( -name tmp_install -prune -o -name tmp_check -prune \\) -o \\( -name \"*.[chly]\" -o -iname \"*makefile*\" -o -name \"*.mk\" -o -name \"*.in\" -o -name \"*.sh\" -o -name \"*.sgml\" -o -name \"*.sql\" -o -name \"*.p[lm]\" \\) -type f -print ; \\\n> find $BUILDDIR \\( -name tmp_install -prune \\) -o \\( -name \\*.h -a -type f \\) -print )\n\nThank you for your comments. \n\nI updated the patch to ignore the code under tmp_install and add\nsome file types like sql, p[lm], and so on. 
.sgml or .sh is not\nincluded because they don't seem to be beneficial for ctags.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 12 Oct 2022 18:27:04 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hi,\n\n>> > I found that tag files generated by src/tools/make_ctags\n>> > doesn't include some keywords, that were field names of node\n>> > structs, for example norm_select in RestrictInfo. Such fields\n>> > are defined with pg_node_attr macro that was introduced\n>> > recently, like:\n>> > \n>> > Selectivity norm_selec pg_node_attr(equal_ignore);\n>> > \n>> > In this case, pg_node_attr is mistakenly interpreted to be\n>> > the name of the field. So, I propose to use -I option of ctags\n>> > to ignore the marco name. Attached is a patch to do it.\n\nI found the same issue with make_etags too.\n\n> I updated the patch to ignore the code under tmp_install and add\n> some file types like sql, p[lm], and so on. .sgml or .sh is not\n> included because they don't seem to be beneficial for ctags.\n\nI tried to apply the v2 patch approach to make_etags but noticed that\nmake_ctags and make_etags have quite a few duplicate codes, that would\nbring maintenance headache. 
I think we could merge make_etags into\nmake_ctags, then add \"-e\" option (or whatever) to make_ctags so that\nit generates tags files for emacs if the option is specified.\n\nPatch attahced.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Wed, 12 Oct 2022 20:40:21 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On 2022-Oct-12, Tatsuo Ishii wrote:\n\n> I tried to apply the v2 patch approach to make_etags but noticed that\n> make_ctags and make_etags have quite a few duplicate codes, that would\n> bring maintenance headache. I think we could merge make_etags into\n> make_ctags, then add \"-e\" option (or whatever) to make_ctags so that\n> it generates tags files for emacs if the option is specified.\n\nIf we're going to do this, then I suggest that make_etags should become\na symlink to make_ctags, and behave as if -e is given when called under\nthat name.\n\n> +tags_file=tags\n\n> +rm -f ./$tags_file\n\nI think $tags_file should include the leading ./ bit, to reduce\nconfusion.\n\n\nHowever ... hmm ... \n\n> find . \\( -name 'CVS' -prune \\) -o \\( -name .git -prune \\) -o -type d -print |\n> while read DIR\n> -do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/tags \"$DIR\"/tags\n> +do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/$tags_file \"$DIR\"/$tags_file\n> done\n\n... does this create a tags symlink on each directory? This seems\nstrange to me, but I admit the hack I keep in my .vim/vimrc looks even\nmore strange:\n\n: let $CSCOPE_DB=substitute(getcwd(), \"^\\\\(.*/pgsql/source/ [^/]*\\\\)/.*\", \"\\\\1\", \"\")\n: let &tags=substitute(getcwd(), \"^\\\\(.*/pgsql/source/[^/]*\\\\)/.*\", \"\\\\1\", \"\") . \"/tags\"\n\nNot sure which is worse. 
Having dozens of identically named symlinks\ndoesn't strike me as a great arrangement though. I would definitely not\nuse make_ctags if this is unavoidable. I see both scripts do likewise\ncurrently.\n\n(I keep all my build trees under /pgsql/build [a symlink to\n~/Code/pgsql/source], and all source trees under /pgsql/source, so this\nis an easy conversion to make most of the time.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Wed, 12 Oct 2022 14:30:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> I tried to apply the v2 patch approach to make_etags but noticed that\n>> make_ctags and make_etags have quite a few duplicate codes, that would\n>> bring maintenance headache. I think we could merge make_etags into\n>> make_ctags, then add \"-e\" option (or whatever) to make_ctags so that\n>> it generates tags files for emacs if the option is specified.\n> \n> If we're going to do this, then I suggest that make_etags should become\n> a symlink to make_ctags, and behave as if -e is given when called under\n> that name.\n\nWhat I had in my mind was making make_etags a script just exec\nmake_ctags (with -e option). But I don't have strong\npreference. Symlink is ok for me too.\n\n>> +tags_file=tags\n> \n>> +rm -f ./$tags_file\n> \n> I think $tags_file should include the leading ./ bit, to reduce\n> confusion.\n\nOk.\n\n> However ... hmm ... \n> \n>> find . \\( -name 'CVS' -prune \\) -o \\( -name .git -prune \\) -o -type d -print |\n>> while read DIR\n>> -do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/tags \"$DIR\"/tags\n>> +do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/$tags_file \"$DIR\"/$tags_file\n>> done\n> \n> ... 
does this create a tags symlink on each directory? This seems\n> strange to me,\n\nI don't know the original author's intention for this but I think it\nmakes use of the tag file in emacs a little bit easier. Emacs\nconfirms for the first time the default location of tags file under\nthe same directory where the source file resides. I can just hit\nreturn key if there's a symlink of tags. If we do not create the\nsymlink, we have to specify the directory where the tags file was\noriginally created, which is a little bit annoying.\n\n> but I admit the hack I keep in my .vim/vimrc looks even\n> more strange:\n> \n> : let $CSCOPE_DB=substitute(getcwd(), \"^\\\\(.*/pgsql/source/ [^/]*\\\\)/.*\", \"\\\\1\", \"\")\n> : let &tags=substitute(getcwd(), \"^\\\\(.*/pgsql/source/[^/]*\\\\)/.*\", \"\\\\1\", \"\") . \"/tags\"\n> \n> Not sure which is worse. Having dozens of identically named symlinks\n> doesn't strike me as a great arrangement though. I would definitely not\n> use make_ctags if this is unavoidable. I see both scripts do likewise\n> currently.\n\nWell, I often visit different tags file for different development\nprojects (for example Pgpool-II) and I don't want to have fixed\nlocation of tags file in emacs setting. For this reason make_etags's\napproach is acceptable for me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 12 Oct 2022 22:26:03 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n>> If we're going to do this, then I suggest that make_etags should become\n>> a symlink to make_ctags, and behave as if -e is given when called under\n>> that name.\n\n> What I had in my mind was making make_etags a script just exec\n> make_ctags (with -e option). 
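That "script that just execs make_ctags" can be a one-liner; here is a sketch, wrapped in a function that prints rather than execs so the logic can be exercised without the real scripts being present:

```shell
# Sketch of make_etags as a thin wrapper: re-exec make_ctags with the
# Emacs flag, forwarding any remaining arguments.  Printing instead of
# exec'ing here purely so the logic can run without the real scripts.
wrap_make_etags () {
    script_path=$1; shift
    echo exec "$(dirname "$script_path")/make_ctags" -e "$@"
}

wrap_make_etags src/tools/make_etags -n
# prints: exec src/tools/make_ctags -e -n
```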
But I don't have strong\n> preference. Symlink is ok for me too.\n\nI don't think it's possible to store a symlink in git, so\nhaving one file exec the other sounds saner to me too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Oct 2022 10:09:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On 2022-Oct-12, Tom Lane wrote:\n\n> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> >> If we're going to do this, then I suggest that make_etags should become\n> >> a symlink to make_ctags, and behave as if -e is given when called under\n> >> that name.\n> \n> > What I had in my mind was making make_etags a script just exec\n> > make_ctags (with -e option). But I don't have strong\n> > preference. Symlink is ok for me too.\n> \n> I don't think it's possible to store a symlink in git, so\n> having one file exec the other sounds saner to me too.\n\nI tried before my reply and it seems to work, but perhaps it requires\ntoo new a git version. It may also be a problem under Windows. 
Having\none exec the other sounds perfect.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Oct 2022 16:22:22 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, Oct 12, 2022 at 10:09:06AM -0400, Tom Lane wrote:\n>\n> I don't think it's possible to store a symlink in git, so\n> having one file exec the other sounds saner to me too.\n\nGit handles symlink just fine, but those will obviously create extra burden for\nWindows users (git will just create a text file containing the target path I\nthink), so agreed (even if I doubt that any Windows user will run those\nscripts anyway).\n\n\n", "msg_date": "Wed, 12 Oct 2022 22:22:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On 2022-Oct-12, Tatsuo Ishii wrote:\n\n> >> find . \\( -name 'CVS' -prune \\) -o \\( -name .git -prune \\) -o -type d -print |\n> >> while read DIR\n> >> -do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/tags \"$DIR\"/tags\n> >> +do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/$tags_file \"$DIR\"/$tags_file\n> >> done\n> > \n> > ... does this create a tags symlink on each directory? This seems\n> > strange to me,\n> \n> I don't know the original author's intention for this but I think it\n> makes use of the tag file in emacs a little bit easier. Emacs\n> confirms for the first time the default location of tags file under\n> the same directory where the source file resides. I can just hit\n> return key if there's a symlink of tags. If we do not create the\n> symlink, we have to specify the directory where the tags file was\n> originally created, which is a little bit annoying.\n\nOK, that sounds good then. 
I would make a feature request to have a\nswitch that supresses creation of these links, then.\n\n> Well, I often visit different tags file for different development\n> projects (for example Pgpool-II) and I don't want to have fixed\n> location of tags file in emacs setting. For this reason make_etags's\n> approach is acceptable for me.\n\nMakes sense.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt study thy libraries and strive not to reinvent them without\ncause, that thy code may be short and readable and thy days pleasant\nand productive. (7th Commandment for C Programmers)\n\n\n", "msg_date": "Wed, 12 Oct 2022 16:24:40 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, Oct 12, 2022 at 10:26:03PM +0900, Tatsuo Ishii wrote:\n> > However ... hmm ... \n> > \n> >> find . \\( -name 'CVS' -prune \\) -o \\( -name .git -prune \\) -o -type d -print |\n> >> while read DIR\n> >> -do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/tags \"$DIR\"/tags\n> >> +do\t[ \"$DIR\" != \".\" ] && ln -f -s `echo \"$DIR\" | sed 's;/[^/]*;/..;g'`/$tags_file \"$DIR\"/$tags_file\n> >> done\n> > \n> > ... does this create a tags symlink on each directory? This seems\n> > strange to me,\n> \n> I don't know the original author's intention for this but I think it\n> makes use of the tag file in emacs a little bit easier. Emacs\n> confirms for the first time the default location of tags file under\n> the same directory where the source file resides. I can just hit\n> return key if there's a symlink of tags. 
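The sed expression in the quoted loop is what makes every per-directory symlink point back at the single top-level tags file: it rewrites each path component to `..`. A small demonstration:

```shell
# The trick behind the per-directory symlinks: rewrite every path
# component after "." to "..", yielding the relative path back to the
# top of the tree.
rel_to_top () {
    echo "$1" | sed 's;/[^/]*;/..;g'
}

rel_to_top ./src/backend/access
# prints: ./../../..
# so the loop effectively runs:
#   ln -f -s ./../../../tags ./src/backend/access/tags
```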
If we do not create the\n> symlink, we have to specify the directory where the tags file was\n> originally created, which is a little bit annoying.\n\nYes, that is exactly the intent of why it uses symlinks in every\ndirectory.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 12 Oct 2022 12:15:03 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On 10.10.22 12:04, Alvaro Herrera wrote:\n> In my own tags script I just call \"ctags -R\", and I feed cscope with\n> these find lines\n> \n> (find $SRCDIR \\( -name tmp_install -prune -o -name tmp_check -prune \\) -o \\( -name \"*.[chly]\" -o -iname \"*makefile*\" -o -name \"*.mk\" -o -name \"*.in\" -o -name \"*.sh\" -o -name \"*.sgml\" -o -name \"*.sql\" -o -name \"*.p[lm]\" \\) -type f -print ; \\\n> find $BUILDDIR \\( -name tmp_install -prune \\) -o \\( -name \\*.h -a -type f \\) -print )\n> \n> which seems to give decent results. (Nowadays I wonder if it'd be\n> better to exclude the \"*_d.h\" files from the builddir.)\n> (I wonder why don't I have a prune for .git ...)\n\nOr use git ls-files.\n\n\n", "msg_date": "Wed, 12 Oct 2022 19:27:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> OK, that sounds good then. 
I would make a feature request to have a\n> switch that supresses creation of these links, then.\n\nOk, I have added \"-n\" option to make_ctags so that it skips to create\nthe links.\n\nAlso I have changed make_etags so that it exec make_ctags, which seems\nto be the consensus.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Thu, 13 Oct 2022 15:35:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Thu, 13 Oct 2022 15:35:09 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > OK, that sounds good then. I would make a feature request to have a\n> > switch that supresses creation of these links, then.\n> \n> Ok, I have added \"-n\" option to make_ctags so that it skips to create\n> the links.\n> \n> Also I have changed make_etags so that it exec make_ctags, which seems\n> to be the consensus.\n\nThank you for following up my patch.\nI fixed the patch to allow use both -e and -n options together.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 14 Oct 2022 14:02:53 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> On Thu, 13 Oct 2022 15:35:09 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> > OK, that sounds good then. 
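Accepting -e and -n in any combination is straightforward with getopts; a hypothetical sketch (the committed script may parse its arguments differently):

```shell
# Hypothetical option parsing for make_ctags: -e produces Emacs-style
# tags, -n suppresses the per-directory symlinks, and the two combine
# freely.  The committed script may do this differently.
parse_opts () {
    EMACS_MODE=""; MAKE_LINKS=""; OPTIND=1
    while getopts en opt "$@"; do
        case $opt in
            e) EMACS_MODE=yes ;;
            n) MAKE_LINKS=no ;;
            *) echo "Usage: make_ctags [-e][-n]" >&2; return 1 ;;
        esac
    done
    echo "emacs_mode=${EMACS_MODE:-no} make_links=${MAKE_LINKS:-yes}"
}

parse_opts -e -n
# prints: emacs_mode=yes make_links=no
```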
I would make a feature request to have a\n>> > switch that supresses creation of these links, then.\n>> \n>> Ok, I have added \"-n\" option to make_ctags so that it skips to create\n>> the links.\n>> \n>> Also I have changed make_etags so that it exec make_ctags, which seems\n>> to be the consensus.\n> \n> Thank you for following up my patch.\n> I fixed the patch to allow use both -e and -n options together.\n\nThanks. I have made mostly cosmetic changes so that it is more\nconsistent with existing scripts.\n\nI would like to push v6 patch if there's no objection.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Sat, 15 Oct 2022 10:40:29 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hi,\n\nOn Sat, 15 Oct 2022 10:40:29 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > On Thu, 13 Oct 2022 15:35:09 +0900 (JST)\n> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > \n> >> > OK, that sounds good then. I would make a feature request to have a\n> >> > switch that supresses creation of these links, then.\n> >> \n> >> Ok, I have added \"-n\" option to make_ctags so that it skips to create\n> >> the links.\n> >> \n> >> Also I have changed make_etags so that it exec make_ctags, which seems\n> >> to be the consensus.\n> > \n> > Thank you for following up my patch.\n> > I fixed the patch to allow use both -e and -n options together.\n> \n> Thanks. 
I have made mostly cosmetic changes so that it is more\n> consistent with existing scripts.\n> \n> I would like to push v6 patch if there's no objection.\n\nI am fine with this patch.\n\nBy the way, in passing, how about adding \"tags\" and \"TAGS\" to\n.gitignore file?\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:39:52 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> Hi,\n> \n> On Sat, 15 Oct 2022 10:40:29 +0900 (JST)\n> Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> \n>> > On Thu, 13 Oct 2022 15:35:09 +0900 (JST)\n>> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> > \n>> >> > OK, that sounds good then. I would make a feature request to have a\n>> >> > switch that supresses creation of these links, then.\n>> >> \n>> >> Ok, I have added \"-n\" option to make_ctags so that it skips to create\n>> >> the links.\n>> >> \n>> >> Also I have changed make_etags so that it exec make_ctags, which seems\n>> >> to be the consensus.\n>> > \n>> > Thank you for following up my patch.\n>> > I fixed the patch to allow use both -e and -n options together.\n>> \n>> Thanks. I have made mostly cosmetic changes so that it is more\n>> consistent with existing scripts.\n>> \n>> I would like to push v6 patch if there's no objection.\n> \n> I am fine with this patch.\n\nThanks. 
the v6 patch pushed to master branch.\n\n> By the way, in passing, how about adding \"tags\" and \"TAGS\" to\n> .gitignore file?\n\nSounds like a good idea.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 19 Oct 2022 13:25:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, 19 Oct 2022 13:25:17 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Hi,\n> > \n> > On Sat, 15 Oct 2022 10:40:29 +0900 (JST)\n> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> > \n> >> > On Thu, 13 Oct 2022 15:35:09 +0900 (JST)\n> >> > Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> >> > \n> >> >> > OK, that sounds good then. I would make a feature request to have a\n> >> >> > switch that supresses creation of these links, then.\n> >> >> \n> >> >> Ok, I have added \"-n\" option to make_ctags so that it skips to create\n> >> >> the links.\n> >> >> \n> >> >> Also I have changed make_etags so that it exec make_ctags, which seems\n> >> >> to be the consensus.\n> >> > \n> >> > Thank you for following up my patch.\n> >> > I fixed the patch to allow use both -e and -n options together.\n> >> \n> >> Thanks. I have made mostly cosmetic changes so that it is more\n> >> consistent with existing scripts.\n> >> \n> >> I would like to push v6 patch if there's no objection.\n> > \n> > I am fine with this patch.\n> \n> Thanks. 
the v6 patch pushed to master branch.\n\nThanks!\n\n> > By the way, in passing, how about adding \"tags\" and \"TAGS\" to\n> > .gitignore file?\n> \n> Sounds like a good idea.\n\nOk, the patch is attached.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Wed, 19 Oct 2022 13:29:25 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> > By the way, in passing, how about adding \"tags\" and \"TAGS\" to\n>> > .gitignore file?\n>> \n>> Sounds like a good idea.\n> \n> Ok, the patch is attached.\n\nI have searched the mail archive and found this:\n\nhttps://www.postgresql.org/message-id/flat/CAFcNs%2BrG-DASXzHcecYKvAj%2Brmxi8CpMAgbpGpEK-mjC96F%3DLg%40mail.gmail.com\n\nIt seems the consensus was to avoid putting this sort of thing into\n.gitignore in the PostgreSQL source tree. Rather, put into personal\n.gitignore or whatever so that developers don't need to care about\nother's preference.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 19 Oct 2022 17:17:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, 19 Oct 2022 17:17:17 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> > By the way, in passing, how about adding \"tags\" and \"TAGS\" to\n> >> > .gitignore file?\n> >> \n> >> Sounds like a good idea.\n> > \n> > Ok, the patch is attached.\n> \n> I have searched the mail archive and found this:\n> \n> https://www.postgresql.org/message-id/flat/CAFcNs%2BrG-DASXzHcecYKvAj%2Brmxi8CpMAgbpGpEK-mjC96F%3DLg%40mail.gmail.com\n> \n> It seems the consensus was to avoid putting this sort of thing into\n> .gitignore in the PostgreSQL source tree. 
Rather, put into personal\n> .gitignore or whatever so that developers don't need to care about\n> other's preference.\n\nOk, I understand. Thanks!\n\nBy the way, after executing both make_etags and make_ctags, trying tag jump\nin my vim causes the following error even though there are correct tags files.\n\n E431: Format error in tags file \"backend/access/heap/TAGS\"\n\nRemoving all TAGS files as below can resolve this error.\n $ find . -name TAGS | xargs rm\n\nSo, should we have one more option of make_{ce}tags script to clean up\nexisting tags/TAGS files?\n \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 19 Oct 2022 17:42:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> By the way, after executing both make_etags and make_ctags, trying tag jump\n> in my vim causes the following error even though there are correct tags files.\n> \n> E431: Format error in tags file \"backend/access/heap/TAGS\"\n> \n> Removing all TAGS files as below can resolve this error.\n> $ find . -name TAGS | xargs rm\n> \n> So, should we have one more option of make_{ce}tags script to clean up\n> existing tags/TAGS files?\n\nNot sure. Before the commit make_ctags did not do such a thing but we\nnever heard any complain like yours. Also I believe vi/vim users never\ninvoke make_etags (same thing can be said to emacs users). 
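If such a cleanup option were wanted, the one-liner quoted above generalizes to both tag flavors and to the per-directory symlinks; a sketch:

```shell
# Cleanup sketch: remove vi-style "tags", Emacs-style "TAGS", and the
# per-directory symlinks make_ctags creates, leaving sources untouched.
clean_tags () {
    find "$1" \( -name tags -o -name TAGS \) \( -type f -o -type l \) \
         -exec rm -f {} +
}
```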
So why\nshould we bother?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 19 Oct 2022 18:11:13 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, 19 Oct 2022 18:11:13 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > By the way, after executing both make_etags and make_ctags, trying tag jump\n> > in my vim causes the following error even though there are correct tags files.\n> > \n> > E431: Format error in tags file \"backend/access/heap/TAGS\"\n> > \n> > Removing all TAGS files as below can resolve this error.\n> > $ find . -name TAGS | xargs rm\n> > \n> > So, should we have one more option of make_{ce}tags script to clean up\n> > existing tags/TAGS files?\n> \n> Not sure. Before the commit make_ctags did not do such a thing but we\n> never heard any complain like yours. Also I believe vi/vim users never\n> invoke make_etags (same thing can be said to emacs users). So why\n> should we bother?\n\nIndeed, it was my first use of make_etags (or make_ctags -e) and it was\njust for testing the patch. Similarly, someone who mistakenly runs this\ncommand might want this option. However, as you say, there've been no\ncomplaints about this, so I don't feel it is really necessary. Maybe users\nof this command can easily remove the tags files by themselves.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Wed, 19 Oct 2022 18:55:47 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "\n\nOn 2022/10/19 13:25, Tatsuo Ishii wrote:\n> Thanks. 
the v6 patch pushed to master branch.\n\nSince this commit, make_etags has started failing to generate\ntags files with the following error messages, on my MacOS.\n\n$ src/tools/make_etags\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ctags: illegal option -- e\nusage: ctags [-BFadtuwvx] [-f tagsfile] file ...\nsort: No such file or directory\n\n\nIn my MacOS, non-Exuberant ctags is installed and doesn't support\n-e option. But the commit changed make_etags so that it always\ncalls ctags with -e option via make_ctags. This seems the cause of\nthe above failure.\n\n IS_EXUBERANT=\"\"\n ctags --version 2>&1 | grep Exuberant && IS_EXUBERANT=\"Y\"\n\nmake_ctags has the above code and seems to support non-Exuberant ctags.\nIf so, we should revert the changes of make_etags by the commit and\nmake make_etags work with that ctags? Or, we should support\nonly Exuberant-type ctags (btw, I'm ok with this) and get rid of\nsomething like the above code?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 7 Feb 2023 01:17:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> On 2022/10/19 13:25, Tatsuo Ishii wrote:\n>> Thanks. the v6 patch pushed to master branch.\n> \n> Since this commit, make_etags has started failing to generate\n> tags files with the following error messages, on my MacOS.\n> \n> $ src/tools/make_etags\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ctags:\n> illegal option -- e\n> usage: ctags [-BFadtuwvx] [-f tagsfile] file ...\n> sort: No such file or directory\n> \n> \n> In my MacOS, non-Exuberant ctags is installed and doesn't support\n> -e option. 
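The two ctags flavors can be told apart by their version banner; a sketch of such a probe, fed canned strings here so it runs even where no ctags is installed:

```shell
# Probe sketch: only the Exuberant flavor of ctags understands -e; the
# BSD ctags shipped with macOS does not.  Parameterized on the banner
# text so the decision is testable without either ctags present.
is_exuberant () {
    echo "$1" | grep -q Exuberant
}

if is_exuberant "Exuberant Ctags 5.9~svn20110310"; then
    echo "safe to pass -e to ctags"
fi
# a real script would capture the banner with: ctags --version 2>&1
```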
But the commit changed make_etags so that it always\n> calls ctags with -e option via make_ctags. This seems the cause of\n> the above failure.\n> \n> IS_EXUBERANT=\"\"\n> ctags --version 2>&1 | grep Exuberant && IS_EXUBERANT=\"Y\"\n> \n> make_ctags has the above code and seems to support non-Exuberant\n> ctags.\n> If so, we should revert the changes of make_etags by the commit and\n> make make_etags work with that ctags? Or, we should support\n> only Exuberant-type ctags (btw, I'm ok with this) and get rid of\n> something like the above code?\n\nThanks for the report. I will look into this.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 07 Feb 2023 16:52:29 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> Since this commit, make_etags has started failing to generate\n>> tags files with the following error messages, on my MacOS.\n>> \n>> $ src/tools/make_etags\n>> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ctags:\n>> illegal option -- e\n>> usage: ctags [-BFadtuwvx] [-f tagsfile] file ...\n>> sort: No such file or directory\n>> \n>> \n>> In my MacOS, non-Exuberant ctags is installed and doesn't support\n>> -e option. But the commit changed make_etags so that it always\n>> calls ctags with -e option via make_ctags. This seems the cause of\n>> the above failure.\n>> \n>> IS_EXUBERANT=\"\"\n>> ctags --version 2>&1 | grep Exuberant && IS_EXUBERANT=\"Y\"\n>> \n>> make_ctags has the above code and seems to support non-Exuberant\n>> ctags.\n>> If so, we should revert the changes of make_etags by the commit and\n>> make make_etags work with that ctags? 
Or, we should support\n>> only Exuberant-type ctags (btw, I'm ok with this) and get rid of\n>> something like the above code?\n> \n> Thanks for the report. I will look into this.\n\nPrevious make_etags relied on etags command:\n\n#!/bin/sh\n\n# src/tools/make_etags\n\ncommand -v etags >/dev/null || \\\n\t{ echo \"'etags' program not found\" 1>&2; exit 1; }\n:\n:\n\nMy Mac (M1 Mac running macOS 12.6) does not have etags. Thus before\nthe commit make_etags on Mac failed anyway. Do we want make_etags to\nrestore the previous behavior? i.e. 'etags' program not found\n\n>> If so, we should revert the changes of make_etags by the commit and\n>> make make_etags work with that ctags?\n\nI think ctags on Mac cannot produce tags file for emacs.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 07 Feb 2023 17:19:37 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Tue, 07 Feb 2023 17:19:37 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> Since this commit, make_etags has started failing to generate\n> >> tags files with the following error messages, on my MacOS.\n> >> \n> >> $ src/tools/make_etags\n> >> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ctags:\n> >> illegal option -- e\n> >> usage: ctags [-BFadtuwvx] [-f tagsfile] file ...\n> >> sort: No such file or directory\n> >> \n> >> \n> >> In my MacOS, non-Exuberant ctags is installed and doesn't support\n> >> -e option. But the commit changed make_etags so that it always\n> >> calls ctags with -e option via make_ctags. 
This seems the cause of\n> >> the above failure.\n> >> \n> >> IS_EXUBERANT=\"\"\n> >> ctags --version 2>&1 | grep Exuberant && IS_EXUBERANT=\"Y\"\n> >> \n> >> make_ctags has the above code and seems to support non-Exuberant\n> >> ctags.\n> >> If so, we should revert the changes of make_etags by the commit and\n> >> make make_etags work with that ctags? Or, we should support\n> >> only Exuberant-type ctags (btw, I'm ok with this) and get rid of\n> >> something like the above code?\n> > \n> > Thanks for the report. I will look into this.\n> \n> Previous make_etags relied on etags command:\n> \n> #!/bin/sh\n> \n> # src/tools/make_etags\n> \n> command -v etags >/dev/null || \\\n> \t{ echo \"'etags' program not found\" 1>&2; exit 1; }\n> :\n> :\n> \n> My Mac (M1 Mac running macOS 12.6) does not have etags. Thus before\n> the commit make_etags on Mac failed anyway. Do we want make_etags to\n> restore the previous behavior? i.e. 'etags' program not found\n> \n> >> If so, we should revert the changes of make_etags by the commit and\n> >> make make_etags work with that ctags?\n> \n> I think ctags on Mac cannot produce tags file for emacs.\n\nDoes is make sense to change make_etags as the attached patch does?\nThis allows make_etags to use etags if Exuberant-type ctags is not\navailable. This allows users to use make_etags if hey has either\nExuberant-type ctags or etags.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 7 Feb 2023 18:56:57 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> Does is make sense to change make_etags as the attached patch does?\n> This allows make_etags to use etags if Exuberant-type ctags is not\n> available. 
This allows users to use make_etags if hey has either\n> Exuberant-type ctags or etags.\n\nThe patch drops support for \"-n\" option :-<\n\nAttached is the patch by fixing make_ctags (make_etags is not\ntouched).\n\nIf Exuberant-type ctags is available, use it (not changed).\n If Exuberant-type ctags is not available, try old ctags (not changed).\n If the old ctags does not support \"-e\" option, try etags (new).\n If etags is not available, give up (new).\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Tue, 07 Feb 2023 21:29:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Tue, 07 Feb 2023 21:29:04 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Does is make sense to change make_etags as the attached patch does?\n> > This allows make_etags to use etags if Exuberant-type ctags is not\n> > available. This allows users to use make_etags if hey has either\n> > Exuberant-type ctags or etags.\n> \n> The patch drops support for \"-n\" option :-<\n> \n> Attached is the patch by fixing make_ctags (make_etags is not\n> touched).\n> \n> If Exuberant-type ctags is available, use it (not changed).\n> If Exuberant-type ctags is not available, try old ctags (not changed).\n> If the old ctags does not support \"-e\" option, try etags (new).\n\nI am not sure if this is good way to check if ctags supports \"-e\" or not. \n\n+\tthen\tctags --version 2>&1 | grep -- -e >/dev/null\n\nPerhaps, \"--help\" might be intended rather than \"--version\" to check\nsupported options? Even so, ctags would have other option whose name contains\n\"-e\" than Emacs support, so this check could end in a wrong result. 
Therefore,\nit seems to me that it is better to check immediately if etags is available \nin case that we don't have Exuberant-type ctags.\n\nRegards,\nYugo Nagata\n\n> If etags is not available, give up (new).\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS LLC\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 7 Feb 2023 22:34:41 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> The patch drops support for \"-n\" option :-<\n>> \n>> Attached is the patch by fixing make_ctags (make_etags is not\n>> touched).\n>> \n>> If Exuberant-type ctags is available, use it (not changed).\n>> If Exuberant-type ctags is not available, try old ctags (not changed).\n>> If the old ctags does not support \"-e\" option, try etags (new).\n> \n> I am not sure if this is good way to check if ctags supports \"-e\" or not. \n> \n> +\tthen\tctags --version 2>&1 | grep -- -e >/dev/null\n> \n> Perhaps, \"--help\" might be intended rather than \"--version\" to check\n> supported options?\n\nYeah, that was my mistake.\n\n> Even so, ctags would have other option whose name contains\n> \"-e\" than Emacs support, so this check could end in a wrong result. Therefore,\n> it seems to me that it is better to check immediately if etags is available \n> in case that we don't have Exuberant-type ctags.\n\nThat makes sense.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 08 Feb 2023 09:20:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> I am not sure if this is good way to check if ctags supports \"-e\" or not. 
\n>> \n>> +\tthen\tctags --version 2>&1 | grep -- -e >/dev/null\n>> \n>> Perhaps, \"--help\" might be intended rather than \"--version\" to check\n>> supported options?\n> \n> Yeah, that was my mistake.\n> \n>> Even so, ctags would have other option whose name contains\n>> \"-e\" than Emacs support, so this check could end in a wrong result. Therefore,\n>> it seems to me that it is better to check immediately if etags is available \n>> in case that we don't have Exuberant-type ctags.\n> \n> That makes sense.\n\nAttached is the v2 patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Wed, 08 Feb 2023 09:49:41 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "\n\nOn 2023/02/08 9:49, Tatsuo Ishii wrote:\n>>> I am not sure if this is good way to check if ctags supports \"-e\" or not.\n>>>\n>>> +\tthen\tctags --version 2>&1 | grep -- -e >/dev/null\n>>>\n>>> Perhaps, \"--help\" might be intended rather than \"--version\" to check\n>>> supported options?\n>>\n>> Yeah, that was my mistake.\n>>\n>>> Even so, ctags would have other option whose name contains\n>>> \"-e\" than Emacs support, so this check could end in a wrong result. Therefore,\n>>> it seems to me that it is better to check immediately if etags is available\n>>> in case that we don't have Exuberant-type ctags.\n>>\n>> That makes sense.\n> \n> Attached is the v2 patch.\n\nThanks for the patch!\n\nWith the patch, I got the following error when executing make_etags..\n\n$ ./src/tools/make_etags\netags: invalid option -- 'e'\n\tTry 'etags --help' for a complete list of options.\nsort: No such file or directory\n\n\n+\t\tif [ $? 
!= 0 -a -z \"$ETAGS_EXISTS\" ]\n+\t\tthen\techo \"'ctags' does not support emacs mode and etags does not exist\" 1>&2; exit 1\n+\t\tfi\n\nThis code can be reached after \"rm -f ./$TAGS_FILE\" is executed in make_ctags.\nBut we should check whether the required program has been already installed\nand exit immediately if not, before modifying anything?\n\n\nThis is the comment for the commit d1e2a380cb. I found that make_etags with\nan invalid option reported the following usage message mentioning make_ctags\n(not make_etags). Isn't this confusing?\n\n$ ./src/tools/make_etags -a\nUsage: /.../make_ctags [-e][-n]\n\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 8 Feb 2023 19:29:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> Attached is the v2 patch.\n> \n> Thanks for the patch!\n> \n> With the patch, I got the following error when executing make_etags..\n> \n> $ ./src/tools/make_etags\n> etags: invalid option -- 'e'\n> \tTry 'etags --help' for a complete list of options.\n> sort: No such file or directory\n\nOops. Thank you for pointing it out. BTW, just out of curiosity, do\nyou have etags on you Mac? Mine doesn't have etags. That's why I\nmissed the error.\n\n> +\t\tif [ $? != 0 -a -z \"$ETAGS_EXISTS\" ]\n> + then echo \"'ctags' does not support emacs mode and etags does not\n> exist\" 1>&2; exit 1\n> +\t\tfi\n> \n> This code can be reached after \"rm -f ./$TAGS_FILE\" is executed in\n> make_ctags.\n> But we should check whether the required program has been already\n> installed\n> and exit immediately if not, before modifying anything?\n\nAgreed.\n\n> This is the comment for the commit d1e2a380cb. 
I found that make_etags\n> with\n> an invalid option reported the following usage message mentioning\n> make_ctags\n> (not make_etags). Isn't this confusing?\n> \n> $ ./src/tools/make_etags -a\n> Usage: /.../make_ctags [-e][-n]\n\nThat's hard to fix without some code duplication. We decided that\nmake_etags is not a symlink to make_ctags, rather execs make_ctags. Of\ncourse we could let make_etags perform the same option check, but I\ndoubt it's worth the trouble.\n\nAnyway, attached is the v3 patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Wed, 08 Feb 2023 20:17:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On 2023/02/08 20:17, Tatsuo Ishii wrote:\n>>> Attached is the v2 patch.\n>>\n>> Thanks for the patch!\n>>\n>> With the patch, I got the following error when executing make_etags..\n>>\n>> $ ./src/tools/make_etags\n>> etags: invalid option -- 'e'\n>> \tTry 'etags --help' for a complete list of options.\n>> sort: No such file or directory\n> \n> Oops. Thank you for pointing it out. BTW, just out of curiosity, do\n> you have etags on you Mac?\n\nYes.\n\n$ etags --version\netags (GNU Emacs 28.2)\nCopyright (C) 2022 Free Software Foundation, Inc.\nThis program is distributed under the terms in ETAGS.README\n\n\n>> This is the comment for the commit d1e2a380cb. I found that make_etags\n>> with\n>> an invalid option reported the following usage message mentioning\n>> make_ctags\n>> (not make_etags). Isn't this confusing?\n>>\n>> $ ./src/tools/make_etags -a\n>> Usage: /.../make_ctags [-e][-n]\n> \n> That's hard to fix without some code duplication. We decided that\n> make_etags is not a symlink to make_ctags, rather execs make_ctags. 
Of\n> course we could let make_etags perform the same option check, but I\n> doubt it's worth the trouble.\n\nHow about just applying the following into make_etags?\n\n+if [ $# -gt 1 ] || ( [ $# -eq 1 ] && [ $1 != \"-n\" ] )\n+then\techo \"Usage: $0 [-n]\"\n+\texit 1\n+fi\n\n\n> Anyway, attached is the v3 patch.\n\nThanks for updating the patch!\n\nWith the patch, make_etags caused the following error messages\non my MacOS.\n\n$ ./src/tools/make_etags\nNo such file or directory\nNo such file or directory\n\nTo fix this error, probaby we should get rid of double-quotes\nfrom \"$FLAGS\" \"$IGNORE_IDENTIFIES\" in the following command.\n\n-\txargs ctags $MODE -a -f $TAGS_FILE \"$FLAGS\" \"$IGNORE_IDENTIFIES\"\n+\txargs $PROG $MODE $TAGS_OPT $TAGS_FILE \"$FLAGS\" \"$IGNORE_IDENTIFIES\"\n\n\n\n+\telse\tctags --help 2>&1 | grep -- -e >/dev/null\n+\t\t# Note that \"ctags --help\" does not always work. Even certain ctags does not have the option.\n\nThis code seems to assume that there is non-Exuberant ctags\nsupporting -e option. But does such ctags really exist?\n\n\nI fixed the above issues and refactored the code.\nAttached is the updated version of the patch. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 9 Feb 2023 03:06:20 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> Oops. Thank you for pointing it out. BTW, just out of curiosity, do\n>> you have etags on you Mac?\n> \n> Yes.\n> \n> $ etags --version\n> etags (GNU Emacs 28.2)\n> Copyright (C) 2022 Free Software Foundation, Inc.\n> This program is distributed under the terms in ETAGS.README\n\nOk. Probably that was installed with emacs. For some reason I don't\nknow, I don't have etags even I already installed emacs. 
So I decided\nto test using my old Ubuntu 18 vm, which has old ctags and etags.\n\n> How about just applying the following into make_etags?\n> \n> +if [ $# -gt 1 ] || ( [ $# -eq 1 ] && [ $1 != \"-n\" ] )\n> +then\techo \"Usage: $0 [-n]\"\n> +\texit 1\n> +fi\n\nOk from me. Looks simple enough.\n\n> With the patch, make_etags caused the following error messages\n> on my MacOS.\n> \n> $ ./src/tools/make_etags\n> No such file or directory\n> No such file or directory\n> \n> To fix this error, probaby we should get rid of double-quotes\n> from \"$FLAGS\" \"$IGNORE_IDENTIFIES\" in the following command.\n> \n> -\txargs ctags $MODE -a -f $TAGS_FILE \"$FLAGS\" \"$IGNORE_IDENTIFIES\"\n> + xargs $PROG $MODE $TAGS_OPT $TAGS_FILE \"$FLAGS\" \"$IGNORE_IDENTIFIES\"\n\nOh, I see.\n\n> +\telse\tctags --help 2>&1 | grep -- -e >/dev/null\n> + # Note that \"ctags --help\" does not always work. Even certain ctags\n> does not have the option.\n> \n> This code seems to assume that there is non-Exuberant ctags\n> supporting -e option. But does such ctags really exist?\n\nGood question. I vaguely recalled so, but I haven't been able to find\nany evidence for it.\n> \n> I fixed the above issues and refactored the code.\n> Attached is the updated version of the patch. Thought?\n\nThank you! Looks good to me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Fri, 10 Feb 2023 09:07:01 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": ">> I fixed the above issues and refactored the code.\n>> Attached is the updated version of the patch. Thought?\n> \n> Thank you! Looks good to me.\n\nFix pushed. 
Thank you!\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 15 Feb 2023 10:14:54 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 14, 2023 at 8:15 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> I fixed the above issues and refactored the code.\n> >> Attached is the updated version of the patch. Thought?\n> >\n> > Thank you! Looks good to me.\n>\n> Fix pushed. Thank you!\n\nIn my Mac environment where non-Exuberant ctags and emacs 28.2 are\ninstalled, the generated etags file cannot be loaded by emacs due to\nfile format error. The generated TAGS file is:\n\n% head -10 TAGS\n\n ) /\nsizeof(BlockNumber)sizeof(BlockNumber)117,3750\n my\n@newa newa395,10443\n\nvariadic array[1,2]:array[1,2]56,1803\n\nvariadic array[]::inarray[]::i72,2331\n\nvariadic array[array64,2111\n\nvariadic array[array68,2222\n\nvariadic array[array76,2441\n (2 * (2 53,1353\n my $fn fn387,10147\n startblock 101,4876\n\nSince the etags files consist of multiple sections[1] we cannot sort\nthe generated etags file. 
With the attached patch, make_etags (with\nnon-Exuberant ctags) generates a correct format etags file and it\nworks:\n\n% head -10 TAGS\n\n/Users/masahiko/pgsql/source/postgresql/GNUmakefile,1187\nsubdir 7,56\ntop_builddir 8,65\ndocs:docs13,167\nworld-contrib-recurse:world-contrib-recurse19,273\nworld-bin-contrib-recurse:world-bin-contrib-recurse24,394\nhtml man:html man26,444\ninstall-docs:install-docs29,474\ninstall-world-contrib-recurse:install-world-contrib-recurse35,604\n\nBTW regarding the following comment, as far as I can read the\nWikipedia page for ctags[1], Exuberant ctags file doesn't have a\nheader section.\n\n# Exuberant tags has a header that we cannot sort in with the other entries\n# so we skip the sort step\n# Why are we sorting this? I guess some tag implementation need this,\n# particularly for append mode. bjm 2012-02-24\nif [ ! \"$IS_EXUBERANT\" ]\n\nInstead, the page says that sorting non-Exuberant tags file allows for\nfaster searching on of the tags file. I've fixed the comment\naccordingly too.\n\nRegards,\n\n[1] https://en.wikipedia.org/wiki/Ctags#Etags_2\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 29 May 2023 09:35:25 -0400", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "Hi Sawada-san,\n\n> In my Mac environment where non-Exuberant ctags and emacs 28.2 are\n> installed, the generated etags file cannot be loaded by emacs due to\n> file format error. 
The generated TAGS file is:\n> \n> % head -10 TAGS\n> \n> ) /\n> sizeof(BlockNumber)sizeof(BlockNumber)117,3750\n> my\n> @newa newa395,10443\n> \n> variadic array[1,2]:array[1,2]56,1803\n> \n> variadic array[]::inarray[]::i72,2331\n> \n> variadic array[array64,2111\n> \n> variadic array[array68,2222\n> \n> variadic array[array76,2441\n> (2 * (2 53,1353\n> my $fn fn387,10147\n> startblock 101,4876\n> \n> Since the etags files consist of multiple sections[1] we cannot sort\n> the generated etags file. With the attached patch, make_etags (with\n> non-Exuberant ctags) generates a correct format etags file and it\n> works:\n> \n> % head -10 TAGS\n> \n> /Users/masahiko/pgsql/source/postgresql/GNUmakefile,1187\n> subdir 7,56\n> top_builddir 8,65\n> docs:docs13,167\n> world-contrib-recurse:world-contrib-recurse19,273\n> world-bin-contrib-recurse:world-bin-contrib-recurse24,394\n> html man:html man26,444\n> install-docs:install-docs29,474\n> install-world-contrib-recurse:install-world-contrib-recurse35,604\n> \n> BTW regarding the following comment, as far as I can read the\n> Wikipedia page for ctags[1], Exuberant ctags file doesn't have a\n> header section.\n> \n> # Exuberant tags has a header that we cannot sort in with the other entries\n> # so we skip the sort step\n> # Why are we sorting this? I guess some tag implementation need this,\n> # particularly for append mode. bjm 2012-02-24\n> if [ ! \"$IS_EXUBERANT\" ]\n> \n> Instead, the page says that sorting non-Exuberant tags file allows for\n> faster searching on of the tags file. I've fixed the comment\n> accordingly too.\n> \n> Regards,\n> \n> [1] https://en.wikipedia.org/wiki/Ctags#Etags_2\n\nSorry for late reply and thanks for the patch!\n\nI have confirmed the error with make_etags on my Mac (emacs 28.1 +\nnon-Exuberant ctags), and the error is fixed by your patch. 
Also I\nhave confirmed the patch does not affect make_etags on my Linux (emacs\n26.3 + Exuberant ctags).\n\nI will push the fix to REL_15_STABLE and master branch if there's no\nobjection.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 12 Jun 2023 11:10:56 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "> Hi Sawada-san,\n> \n>> In my Mac environment where non-Exuberant ctags and emacs 28.2 are\n>> installed, the generated etags file cannot be loaded by emacs due to\n>> file format error. The generated TAGS file is:\n>> \n>> % head -10 TAGS\n>> \n>> ) /\n>> sizeof(BlockNumber)sizeof(BlockNumber)117,3750\n>> my\n>> @newa newa395,10443\n>> \n>> variadic array[1,2]:array[1,2]56,1803\n>> \n>> variadic array[]::inarray[]::i72,2331\n>> \n>> variadic array[array64,2111\n>> \n>> variadic array[array68,2222\n>> \n>> variadic array[array76,2441\n>> (2 * (2 53,1353\n>> my $fn fn387,10147\n>> startblock 101,4876\n>> \n>> Since the etags files consist of multiple sections[1] we cannot sort\n>> the generated etags file. 
With the attached patch, make_etags (with\n>> non-Exuberant ctags) generates a correct format etags file and it\n>> works:\n>> \n>> % head -10 TAGS\n>> \n>> /Users/masahiko/pgsql/source/postgresql/GNUmakefile,1187\n>> subdir 7,56\n>> top_builddir 8,65\n>> docs:docs13,167\n>> world-contrib-recurse:world-contrib-recurse19,273\n>> world-bin-contrib-recurse:world-bin-contrib-recurse24,394\n>> html man:html man26,444\n>> install-docs:install-docs29,474\n>> install-world-contrib-recurse:install-world-contrib-recurse35,604\n>> \n>> BTW regarding the following comment, as far as I can read the\n>> Wikipedia page for ctags[1], Exuberant ctags file doesn't have a\n>> header section.\n>> \n>> # Exuberant tags has a header that we cannot sort in with the other entries\n>> # so we skip the sort step\n>> # Why are we sorting this? I guess some tag implementation need this,\n>> # particularly for append mode. bjm 2012-02-24\n>> if [ ! \"$IS_EXUBERANT\" ]\n>> \n>> Instead, the page says that sorting non-Exuberant tags file allows for\n>> faster searching on of the tags file. I've fixed the comment\n>> accordingly too.\n>> \n>> Regards,\n>> \n>> [1] https://en.wikipedia.org/wiki/Ctags#Etags_2\n> \n> Sorry for late reply and thanks for the patch!\n> \n> I have confirmed the error with make_etags on my Mac (emacs 28.1 +\n> non-Exuberant ctags), and the error is fixed by your patch. Also I\n> have confirmed the patch does not affect make_etags on my Linux (emacs\n> 26.3 + Exuberant ctags).\n> \n> I will push the fix to REL_15_STABLE and master branch if there's no\n> objection.\n\nFix pushded. 
Thanks!\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Wed, 14 Jun 2023 11:16:19 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" }, { "msg_contents": "On Wed, Jun 14, 2023 at 11:16 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > Hi Sawada-san,\n> >\n> >> In my Mac environment where non-Exuberant ctags and emacs 28.2 are\n> >> installed, the generated etags file cannot be loaded by emacs due to\n> >> file format error. The generated TAGS file is:\n> >>\n> >> % head -10 TAGS\n> >>\n> >> ) /\n> >> sizeof(BlockNumber)sizeof(BlockNumber)117,3750\n> >> my\n> >> @newa newa395,10443\n> >>\n> >> variadic array[1,2]:array[1,2]56,1803\n> >>\n> >> variadic array[]::inarray[]::i72,2331\n> >>\n> >> variadic array[array64,2111\n> >>\n> >> variadic array[array68,2222\n> >>\n> >> variadic array[array76,2441\n> >> (2 * (2 53,1353\n> >> my $fn fn387,10147\n> >> startblock 101,4876\n> >>\n> >> Since the etags files consist of multiple sections[1] we cannot sort\n> >> the generated etags file. 
With the attached patch, make_etags (with\n> >> non-Exuberant ctags) generates a correct format etags file and it\n> >> works:\n> >>\n> >> % head -10 TAGS\n> >>\n> >> /Users/masahiko/pgsql/source/postgresql/GNUmakefile,1187\n> >> subdir 7,56\n> >> top_builddir 8,65\n> >> docs:docs13,167\n> >> world-contrib-recurse:world-contrib-recurse19,273\n> >> world-bin-contrib-recurse:world-bin-contrib-recurse24,394\n> >> html man:html man26,444\n> >> install-docs:install-docs29,474\n> >> install-world-contrib-recurse:install-world-contrib-recurse35,604\n> >>\n> >> BTW regarding the following comment, as far as I can read the\n> >> Wikipedia page for ctags[1], Exuberant ctags file doesn't have a\n> >> header section.\n> >>\n> >> # Exuberant tags has a header that we cannot sort in with the other entries\n> >> # so we skip the sort step\n> >> # Why are we sorting this? I guess some tag implementation need this,\n> >> # particularly for append mode. bjm 2012-02-24\n> >> if [ ! \"$IS_EXUBERANT\" ]\n> >>\n> >> Instead, the page says that sorting non-Exuberant tags file allows for\n> >> faster searching on of the tags file. I've fixed the comment\n> >> accordingly too.\n> >>\n> >> Regards,\n> >>\n> >> [1] https://en.wikipedia.org/wiki/Ctags#Etags_2\n> >\n> > Sorry for late reply and thanks for the patch!\n> >\n> > I have confirmed the error with make_etags on my Mac (emacs 28.1 +\n> > non-Exuberant ctags), and the error is fixed by your patch. Also I\n> > have confirmed the patch does not affect make_etags on my Linux (emacs\n> > 26.3 + Exuberant ctags).\n> >\n> > I will push the fix to REL_15_STABLE and master branch if there's no\n> > objection.\n>\n> Fix pushded. Thanks!\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 14 Jun 2023 14:08:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: make_ctags: use -I option to ignore pg_node_attr macro" } ]
[ { "msg_contents": "Hi hackers,\n\nI've found some odd lines in plpython-related code. These look to me like a\npotential source of problems, but maybe I'm not fully aware of some nuances.\n\nUsually it's not a good idea to exit PG_TRY() block via return statement.\nOtherwise it would leave PG_exception_stack global variable in a wrong\nstate and next ereport() will jump to some junk address. But here it is a\nstraight return in plpy_exec.c:\nPG_TRY();\n{\n args = PyList_New(proc->nargs);\n if (!args)\n return NULL;\n...\n(\nhttps://github.com/postgres/postgres/blob/0fe954c28584169938e5c0738cfaa9930ce77577/src/pl/plpython/plpy_exec.c#L421\n)\n\nTwo more cases could be found further in the same file:\nhttps://github.com/postgres/postgres/blob/0fe954c28584169938e5c0738cfaa9930ce77577/src/pl/plpython/plpy_exec.c#L697\nhttps://github.com/postgres/postgres/blob/0fe954c28584169938e5c0738cfaa9930ce77577/src/pl/plpython/plpy_exec.c#L842\n\nIsn't it a problem?\n\n\nAnother suspicious case is PG_CATCH block in jsonb_plpython.c:\nPG_CATCH();\n{\n ereport(ERROR,\n (errcode(ERRCODE_DATATYPE_MISMATCH),\n errmsg(\"could not convert value \\\"%s\\\" to jsonb\", str)));\n}\n(\nhttps://github.com/postgres/postgres/blob/0fe954c28584169938e5c0738cfaa9930ce77577/contrib/jsonb_plpython/jsonb_plpython.c#L372\n)\n\nThe problem is that leaving PG_CATCH() without PG_RE_THROW(),\nReThrowError() or FlushErrorState() will consume an errordata stack slot,\nwhile the stack size is only 5. Do this five times and we'll have a PANIC\non the next ereport().\nAs it's stated in elog.c (comment about the error stack depth at the\nbeginning of FlushErrorState()), \"The only case where it would be more than\none deep is if we serviced an error that interrupted construction of\nanother message.\"\nLooks to me that FlushErrorState() should be inserted before this ereport.\nShouldn't it?\n\n--\n best regards,\n Mikhail A. Gribkov", "msg_date": "Fri, 7 Oct 2022 10:27:26 +0300", "msg_from": "Mikhail Gribkov <youzhick@gmail.com>", "msg_from_op": true, "msg_subject": "Nicely exiting PG_TRY and PG_CATCH" }, { "msg_contents": "Mikhail Gribkov <youzhick@gmail.com> writes:\n> Usually it's not a good idea to exit PG_TRY() block via return statement.\n> Otherwise it would leave PG_exception_stack global variable in a wrong\n> state and next ereport() will jump to some junk address.\n\nYeah, you can't return or goto out of the PG_TRY part.\n\n> Another suspicious case is PG_CATCH block in jsonb_plpython.c:\n\nThis should be OK. The PG_CATCH and PG_FINALLY macros are set up so that\nwe've fully restored that state *before* we execute any of the\nerror-handling code. It would be basically impossible to have a guarantee\nthat CATCH blocks never throw errors; they'd be so restricted as to be\nnear useless, like signal handlers.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 07 Oct 2022 17:19:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nicely exiting PG_TRY and PG_CATCH" }, { "msg_contents": "> Yeah, you can't return or goto out of the PG_TRY part.\n\nSo this is a problem if the check would ever work.\n(Sorry for such a delayed answer.)\n\nThen we need to fix it. Attached is a minimal patch, which changes nothing\nexcept for correct PG_TRY exiting.\nIsn't it better this way?\n\nCCing to Peter Eisentraut, whose patch originally introduced these returns.\nPeter, will such a patch work somehow against your initial idea?\n\n--\n best regards,\n Mikhail A. 
Gribkov\n\ne-mail: youzhick@gmail.com\n*http://www.flickr.com/photos/youzhick/albums\n<http://www.flickr.com/photos/youzhick/albums>*\nhttp://www.strava.com/athletes/5085772\nphone: +7(916)604-71-12\nTelegram: @youzhick\n\n\n\nOn Sat, Oct 8, 2022 at 12:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Mikhail Gribkov <youzhick@gmail.com> writes:\n> > Usually it's not a good idea to exit PG_TRY() block via return statement.\n> > Otherwise it would leave PG_exception_stack global variable in a wrong\n> > state and next ereport() will jump to some junk address.\n>\n> Yeah, you can't return or goto out of the PG_TRY part.\n>\n> > Another suspicious case is PG_CATCH block in jsonb_plpython.c:\n>\n> This should be OK. The PG_CATCH and PG_FINALLY macros are set up so that\n> we've fully restored that state *before* we execute any of the\n> error-handling code. It would be basically impossible to have a guarantee\n> that CATCH blocks never throw errors; they'd be so restricted as to be\n> near useless, like signal handlers.\n>\n> regards, tom lane\n>", "msg_date": "Thu, 20 Oct 2022 19:39:08 +0300", "msg_from": "Mikhail Gribkov <youzhick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Nicely exiting PG_TRY and PG_CATCH" } ]
[ { "msg_contents": "We have in event_trigger.c two functions \nEventTriggerSupportsObjectType() and EventTriggerSupportsObjectClass() \nthat return whether a given object type/class supports event triggers. \nMaybe there was a real differentiation there once, but right now it \nseems to me that *all* object types/classes support event triggers, \nexcept: (1) shared objects and (2) event triggers themselves. I think \nwe can write that logic more explicitly and compactly and don't have to \ngive the false illusion that there is a real choice to be made by the \nimplementer here.\n\nThe only drawback in terms of code robustness is that someone adding a \nnew shared object type would have to remember to add it to \nEventTriggerSupportsObjectType(). Maybe we could add a \"object type is \nshared\" function somehow, similar to IsSharedRelation(), to make that \neasier. OTOH, someone doing that would probably go around and grep for, \nsay, OBJECT_TABLESPACE and find relevant places to update that way.\n\nThoughts?", "msg_date": "Fri, 7 Oct 2022 14:10:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Simplify event trigger support checking functions" }, { "msg_contents": "On Fri, Oct 7, 2022 at 8:10 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> The only drawback in terms of code robustness is that someone adding a\n> new shared object type would have to remember to add it to\n> EventTriggerSupportsObjectType().\n\nThis doesn't seem like a good idea to me. If the function names give\nthe idea that you can decide whether new object types should support\nevent triggers or not, we could change them, or add better comments.\nHowever, right now, you can't add a new object type and fail to update\nthese functions. With this change, that would become possible. And the\nwhole point of coding the functions in the way that they are was to\navoid that exact hazard. 
So I think we should leave the code the way\nit is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Oct 2022 12:43:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplify event trigger support checking functions" }, { "msg_contents": "On 07.10.22 18:43, Robert Haas wrote:\n> On Fri, Oct 7, 2022 at 8:10 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> The only drawback in terms of code robustness is that someone adding a\n>> new shared object type would have to remember to add it to\n>> EventTriggerSupportsObjectType().\n> \n> This doesn't seem like a good idea to me. If the function names give\n> the idea that you can decide whether new object types should support\n> event triggers or not, we could change them, or add better comments.\n> However, right now, you can't add a new object type and fail to update\n> these functions. With this change, that would become possible. And the\n> whole point of coding the functions in the way that they are was to\n> avoid that exact hazard.\n\nI don't think just adding an entry to these functions is enough to make \nevent trigger support happen for an object type. There are other places \nthat need changing, and really you need to write a test to check it. So \nI didn't think these functions provided any actual value. But I'm not \ntoo obsessed about it if others feel differently.\n\n\n\n", "msg_date": "Wed, 12 Oct 2022 08:55:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Simplify event trigger support checking functions" } ]
[ { "msg_contents": "Hello,\n\nI'm trying to make single-row mode and pipeline mode work together in \nPsycopg using libpq. I think there is something wrong with respect to \nthe single-row mode flag, not being correctly reset, in some situations.\n\nThe minimal case I'm considering is (in a pipeline):\n* send query 1,\n* get its results in single-row mode,\n* send query 2,\n* get its results *not* in single-row mode.\n\nIt seems that, as the command queue in the pipeline is empty after \ngetting the results of query 1, the single-row mode flag is not reset \nand is still active for query 2, thus leading to an unexpected \nPGRES_SINGLE_TUPLE status.\n\nThe attached patch demonstrates this in the test suite. It also suggests \nto move the statement resetting single-row mode up in \npqPipelineProcessQueue(), before exiting the function when the command \nqueue is empty in particular.\n\nThanks for considering,\nDenis", "msg_date": "Fri, 7 Oct 2022 15:08:05 +0200", "msg_from": "Denis Laxalde <denis.laxalde@dalibo.com>", "msg_from_op": true, "msg_subject": "[PATCH] Reset single-row processing mode at end of pipeline commands\n queue" }, { "msg_contents": "Hello Denis,\n\nOn 2022-Oct-07, Denis Laxalde wrote:\n\n> I'm trying to make single-row mode and pipeline mode work together in\n> Psycopg using libpq. I think there is something wrong with respect to the\n> single-row mode flag, not being correctly reset, in some situations.\n> \n> The minimal case I'm considering is (in a pipeline):\n> * send query 1,\n> * get its results in single-row mode,\n> * send query 2,\n> * get its results *not* in single-row mode.\n> \n> It seems that, as the command queue in the pipeline is empty after getting\n> the results of query 1, the single-row mode flag is not reset and is still\n> active for query 2, thus leading to an unexpected PGRES_SINGLE_TUPLE status.\n> \n> The attached patch demonstrates this in the test suite. 
It also suggests to\n> move the statement resetting single-row mode up in pqPipelineProcessQueue(),\n> before exiting the function when the command queue is empty in particular.\n\nYour suggestion to move the code up seems correct to me. Therefore, I\nhave pushed this including the added test code. Thanks for an excellent\nreport and patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 14 Oct 2022 19:34:58 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Reset single-row processing mode at end of pipeline\n commands queue" } ]
[ { "msg_contents": "Hi,\nI stumbled over:\n\nhttps://about.gitlab.com/blog/2021/09/29/why-we-spent-the-last-month-eliminating-postgresql-subtransactions/\n\nI wonder if SAVEPOINT / subtransaction performance has been boosted since\nthe blog was written.\n\nCheers\n\n", "msg_date": "Fri, 7 Oct 2022 15:23:27 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "subtransaction performance" }, { "msg_contents": "On Fri, Oct 7, 2022 at 03:23:27PM -0700, Zhihong Yu wrote:\n> Hi,\n> I stumbled over:\n> \n> https://about.gitlab.com/blog/2021/09/29/\n> why-we-spent-the-last-month-eliminating-postgresql-subtransactions/\n> \n> I wonder if SAVEPOINT / subtransaction performance has been boosted since the\n> blog was written.\n\nNo, I have not seen any changes in this area since then. Seems there\nare two problems --- the 64 cache per session and the 64k on the\nreplica. In both cases, it seems sizing is not optimal, but sizing is\nnever optimal. I guess we can look at allowing manual size adjustment,\nautomatic size adjustment, or a different approach that is more graceful\nfor larger savepoint workloads.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Mon, 10 Oct 2022 14:20:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: subtransaction performance" }, { "msg_contents": "On Mon, Oct 10, 2022 at 02:20:37PM -0400, Bruce Momjian wrote:\n> On Fri, Oct 7, 2022 at 03:23:27PM -0700, Zhihong Yu wrote:\n>> I wonder if SAVEPOINT / subtransaction performance has been boosted since the\n>> blog was written.\n> \n> No, I have not seen any changes in this area since then. Seems there\n> are two problems --- the 64 cache per session and the 64k on the\n> replica. In both cases, it seems sizing is not optimal, but sizing is\n> never optimal. I guess we can look at allowing manual size adjustment,\n> automatic size adjustment, or a different approach that is more graceful\n> for larger savepoint workloads.\n\nI believe the following commitfest entries might be relevant to this\ndiscussion:\n\n\thttps://commitfest.postgresql.org/39/2627/\n\thttps://commitfest.postgresql.org/39/3514/\n\thttps://commitfest.postgresql.org/39/3806/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 20:34:33 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subtransaction performance" }, { "msg_contents": "On Mon, Oct 10, 2022 at 08:34:33PM -0700, Nathan Bossart wrote:\n> On Mon, Oct 10, 2022 at 02:20:37PM -0400, Bruce Momjian wrote:\n> > On Fri, Oct 7, 2022 at 03:23:27PM -0700, Zhihong Yu wrote:\n> >> I wonder if SAVEPOINT / subtransaction performance has been boosted since the\n> >> blog was written.\n> > \n> > No, I have not seen any changes in this area since then. Seems there\n> > are two problems --- the 64 cache per session and the 64k on the\n> > replica. In both cases, it seems sizing is not optimal, but sizing is\n> > never optimal. 
I guess we can look at allowing manual size adjustment,\n> > automatic size adjustment, or a different approach that is more graceful\n> > for larger savepoint workloads.\n> \n> I believe the following commitfest entries might be relevant to this\n> discussion:\n> \n> \thttps://commitfest.postgresql.org/39/2627/\n> \thttps://commitfest.postgresql.org/39/3514/\n> \thttps://commitfest.postgresql.org/39/3806/\n\nWow, odd that I missed those. Yes, they are very relevant. :-)\nThe only other idea I had was to report such overflows, but these are\nbetter.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Tue, 11 Oct 2022 10:04:10 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: subtransaction performance" } ]
[ { "msg_contents": "As I mentioned in another thread, I came across a reproducible\nsituation in which a memory clobber in a child backend crashes\nthe postmaster too, at least on FreeBSD/arm64. Needless to say,\nthis is Not Cool. I've now traced down what is happening,\nand it's this:\n\n1. Careless coding in aset.c causes it to decide to wipe_mem\nthe universe. (I'll have more to say about that separately;\nthe point of this thread is keeping the postmaster alive\nafterwards.) Apparently, there's not any non-live memory\nspace between process-local memory and shared memory on this\nplatform, so the failing backend manages to trash shared memory\ntoo before it finally hits SIGSEGV.\n\n2. Most of the background processes die on something like\n\nTRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File: \"latch.c\", Line: 686, PID: 5916)\n\nor they encounter what seems to be a stuck spinlock. The postmaster,\nhowever, SIGSEGVs. It's not supposed to do that; it is supposed to\nbe sufficiently arms-length from shared memory that it can recover\ndespite a backend trashing shared memory contents.\n\n3. The cause of the SIGSEGV is that AssignPostmasterChildSlot\nnaively believes that it can trust PMSignalState->next_child_flag\nto be a valid array index, so after that's been clobbered with\nsomething like 0x7f7f7f7f we index off the end of memory.\nI see no good reason for that state variable to be in shared memory\nat all, so the attached patch just moves it to postmaster static\ndata. We also need a less-exposed copy of the array size variable.\n\n4. That's enough to stop the SIGSEGV crash, but the postmaster\nstill fails to recover, because then it hits\n\n\telog(FATAL, \"no free slots in PMChildFlags array\");\n\nsince all of the array entries have been clobbered as well.\nIn the attached patch, I fixed this by treating the case similarly\nto failure to fork a new child process. 
This seems to be enough\nto let the postmaster survive, and recover after it starts noticing\ncrashing children.\n\n5. It's possible that we should take some proactive steps to get out\nof the \"no free slots\" situation, rather than just wait for some\nchild to crash. I'm inclined not to, however. It'd be hard-to-test\ncorner-case code, and given the lack of field reports like this,\nthe situation must be awfully rare.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 07 Oct 2022 19:57:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Non-robustness in pmsignal.c" }, { "msg_contents": "Hi,\n\nOn 2022-10-07 19:57:35 -0400, Tom Lane wrote:\n> As I mentioned in another thread, I came across a reproducible\n> situation in which a memory clobber in a child backend crashes\n> the postmaster too, at least on FreeBSD/arm64. Needless to say,\n> this is Not Cool.\n\nUgh.\n\n\n> I've now traced down what is happening, and it's this:\n> \n> 1. Careless coding in aset.c causes it to decide to wipe_mem\n> the universe. (I'll have more to say about that separately;\n> the point of this thread is keeping the postmaster alive\n> afterwards.) Apparently, there's not any non-live memory\n> space between process-local memory and shared memory on this\n> platform, so the failing backend manages to trash shared memory\n> too before it finally hits SIGSEGV.\n\nPerhaps it'd be worth mark a page or two inaccessible, via\nmprotect(PROT_NONE), at the start and end of shared memory. I've wondered\nabout a debugging mode where we do that after separate shared memory\nallocations even. But start/end would be something we could conceivably always\nenable.\n\n\n> 2. Most of the background processes die on something like\n> \n> TRAP: FailedAssertion(\"latch->owner_pid == MyProcPid\", File: \"latch.c\", Line: 686, PID: 5916)\n> \n> or they encounter what seems to be a stuck spinlock. The postmaster,\n> however, SIGSEGVs. 
It's not supposed to do that; it is supposed to\n> be sufficiently arms-length from shared memory that it can recover\n> despite a backend trashing shared memory contents.\n\n> 3. The cause of the SIGSEGV is that AssignPostmasterChildSlot\n> naively believes that it can trust PMSignalState->next_child_flag\n> to be a valid array index, so after that's been clobbered with\n> something like 0x7f7f7f7f we index off the end of memory.\n> I see no good reason for that state variable to be in shared memory\n> at all, so the attached patch just moves it to postmaster static\n> data. We also need a less-exposed copy of the array size variable.\n\nThose make sense to me.\n\n\n> 4. That's enough to stop the SIGSEGV crash, but the postmaster\n> still fails to recover, because then it hits\n> \n> \telog(FATAL, \"no free slots in PMChildFlags array\");\n> \n> since all of the array entries have been clobbered as well.\n> In the attached patch, I fixed this by treating the case similarly\n> to failure to fork a new child process. This seems to be enough\n> to let the postmaster survive, and recover after it starts noticing\n> crashing children.\n\nWhy are we even tracking PM_CHILD_UNUSED / PM_CHILD_ASSIGNED in shared memory?\nISTM those should live in postmaster local memory (maybe copied to shared\nmemory). PM_CHILD_ACTIVE and PM_CHILD_WALSENDER do have to live in shared\nmemory, but ...\n\nYour fix seems ok. We really ought to deduplicate the way we start postmaster\nchildren, but that's obviously work for another day.\n\n\n> 5. It's possible that we should take some proactive steps to get out\n> of the \"no free slots\" situation, rather than just wait for some\n> child to crash. I'm inclined not to, however. 
It'd be hard-to-test\n> corner-case code, and given the lack of field reports like this,\n> the situation must be awfully rare.\n\nAgreed.\n\n\nAre you thinking these should be backpatched?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 7 Oct 2022 17:28:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Why are we even tracking PM_CHILD_UNUSED / PM_CHILD_ASSIGNED in shared memory?\n\nBecause those flags are set by the child processes too, cf\nMarkPostmasterChildActive and MarkPostmasterChildInactive.\n\n> Are you thinking these should be backpatched?\n\nI am, but I'm not inclined to push this immediately before a wrap.\nIf we intend to wrap 15.0 on Monday then I'll wait till after that.\nOTOH, if we slip that a week, I'd be okay with pushing in the\nnext day or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 20:35:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "Hi,\n\nOn 2022-10-07 20:35:58 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Why are we even tracking PM_CHILD_UNUSED / PM_CHILD_ASSIGNED in shared memory?\n> \n> Because those flags are set by the child processes too, cf\n> MarkPostmasterChildActive and MarkPostmasterChildInactive.\n\nOnly PM_CHILD_ACTIVE and PM_CHILD_WALSENDER though. 
We could afford another\nMaxLivePostmasterChildren() sized array...\n\n\n> > Are you thinking these should be backpatched?\n> \n> I am, but I'm not inclined to push this immediately before a wrap.\n\n+1\n\n\n> If we intend to wrap 15.0 on Monday then I'll wait till after that.\n> OTOH, if we slip that a week, I'd be okay with pushing in the\n> next day or two.\n\nMakes sense.\n\n- Andres\n\n\n", "msg_date": "Fri, 7 Oct 2022 17:43:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-07 20:35:58 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Why are we even tracking PM_CHILD_UNUSED / PM_CHILD_ASSIGNED in shared memory?\n\n>> Because those flags are set by the child processes too, cf\n>> MarkPostmasterChildActive and MarkPostmasterChildInactive.\n\n> Only PM_CHILD_ACTIVE and PM_CHILD_WALSENDER though. We could afford another\n> MaxLivePostmasterChildren() sized array...\n\nOh, I see what you mean --- one private and one public array.\nMaybe that makes more sense than what I did, not sure.\n\n>> I am, but I'm not inclined to push this immediately before a wrap.\n\n> +1\n\nOK, I'll take a little more time on this and maybe code it up as\nyou suggest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 20:49:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Only PM_CHILD_ACTIVE and PM_CHILD_WALSENDER though. We could afford another\n>> MaxLivePostmasterChildren() sized array...\n\n> Oh, I see what you mean --- one private and one public array.\n> Maybe that makes more sense than what I did, not sure.\n\nYeah, that's definitely a better way. 
I'll push this after the\nrelease freeze lifts.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 08 Oct 2022 13:15:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "Hi,\n\nOn 2022-10-08 13:15:07 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Only PM_CHILD_ACTIVE and PM_CHILD_WALSENDER though. We could afford another\n> >> MaxLivePostmasterChildren() sized array...\n> \n> > Oh, I see what you mean --- one private and one public array.\n> > Maybe that makes more sense than what I did, not sure.\n> \n> Yeah, that's definitely a better way. I'll push this after the\n> release freeze lifts.\n\nCool, thanks for exploring.\n\n\n> /*\n> * Signal handler to be notified if postmaster dies.\n> */\n> @@ -142,7 +152,25 @@ PMSignalShmemInit(void)\n> \t{\n> \t\t/* initialize all flags to zeroes */\n> \t\tMemSet(unvolatize(PMSignalData *, PMSignalState), 0, PMSignalShmemSize());\n> -\t\tPMSignalState->num_child_flags = MaxLivePostmasterChildren();\n> +\t\tnum_child_inuse = MaxLivePostmasterChildren();\n> +\t\tPMSignalState->num_child_flags = num_child_inuse;\n> +\n> +\t\t/*\n> +\t\t * Also allocate postmaster's private PMChildInUse[] array. We\n> +\t\t * might've already done that in a previous shared-memory creation\n> +\t\t * cycle, in which case free the old array to avoid a leak. (Do it\n> +\t\t * like this to support the possibility that MaxLivePostmasterChildren\n> +\t\t * changed.) 
In a standalone backend, we do not need this.\n> +\t\t */\n> +\t\tif (PostmasterContext != NULL)\n> +\t\t{\n> +\t\t\tif (PMChildInUse)\n> +\t\t\t\tpfree(PMChildInUse);\n> +\t\t\tPMChildInUse = (bool *)\n> +\t\t\t\tMemoryContextAllocZero(PostmasterContext,\n> +\t\t\t\t\t\t\t\t\t num_child_inuse * sizeof(bool));\n> +\t\t}\n> +\t\tnext_child_inuse = 0;\n> \t}\n> }\n\nWhen can PostmasterContext be NULL here, and why can we just continue without\n(re-)allocating PMChildInUse?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 8 Oct 2022 10:32:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Non-robustness in pmsignal.c" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> When can PostmasterContext be NULL here, and why can we just continue without\n> (re-)allocating PMChildInUse?\n\nWe'd only get into the !found stanza in a postmaster or a\nstandalone backend. A standalone backend isn't ever going to call\nAssignPostmasterChildSlot or ReleasePostmasterChildSlot, so it\ndoes not need the array; and it also doesn't have a PostmasterContext,\nso there's not a good place to allocate the array either.\n\nPerhaps there's a better way to distinguish am-I-a-postmaster,\nbut I thought checking PostmasterContext is fine since that ties\ndirectly to what the code needs to do.\n\nYes, the code would malfunction if the PostmasterContext != NULL\ncondition changed from one cycle to the next, but that shouldn't\nhappen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Oct 2022 13:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Non-robustness in pmsignal.c" } ]
[ { "msg_contents": "Hi,\n\nI am pretty new to the Postgres code base. I would like to know the\ndifference between HeapTupleData and TupleTableSlot structures.\n\nBasically I am trying to understand some of the table access methods like\nheap_insert, heap_getnext, heap_getnextslot etc where some accepts\nHeaptuple as input and some accepts TupleTableSlot.\n\nCould anyone please help me to understand this? Any example would also be\nhelpful.\n\nBest,\nAjay\n\n", "msg_date": "Fri, 7 Oct 2022 19:26:21 -0700", "msg_from": "Ajay P S <ajayps547@gmail.com>", "msg_from_op": true, "msg_subject": "Difference between HeapTupleData and TupleTableSlot structures" }, { "msg_contents": "Ajay P S <ajayps547@gmail.com> writes:\n> I am pretty new to the Postgres code base. I would like to know the\n> difference between HeapTupleData and TupleTableSlot structures.\n\nHeapTupleData is just a pointer to a concrete tuple. It exists\nmainly because it's often convenient to pass around the tuple's\nlogical location (table OID and t_self) along with the tuple.\nThe comments about it in htup.h tell you about all you need to\nknow.\n\nTupleTableSlot is a more abstract concept, being a container\nfor a tuple that can be present in several different forms.\nIt can contain a concrete tuple (HeapTupleData), or a \"virtual\"\ntuple that is just an array of Datum+isnull values. 
The executor\nusually uses tuple slots to return tuples out of plan nodes;\nthey're not very common elsewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 07 Oct 2022 22:58:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Difference between HeapTupleData and TupleTableSlot structures" }, { "msg_contents": "Hi hackers,\n\n> TupleTableSlot is a more abstract concept, being a container\n> for a tuple that can be present in several different forms.\n> It can contain a concrete tuple (HeapTupleData), or a \"virtual\"\n> tuple that is just an array of Datum+isnull values. The executor\n> usually uses tuple slots to return tuples out of plan nodes;\n> they're not very common elsewhere.\n\nI came across another little piece of information about\nTupleTableSlots [1] and recalled this thread:\n\n\"\"\"\nTo implement an access method, an implementer will typically need to\nimplement an AM-specific type of tuple table slot (see\nsrc/include/executor/tuptable.h), which allows code outside the access\nmethod to hold references to tuples of the AM, and to access the\ncolumns of the tuple.\n\"\"\"\n\nHopefully this is helpful.\n\n[1] https://www.postgresql.org/docs/current/tableam.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sun, 23 Oct 2022 13:04:43 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Difference between HeapTupleData and TupleTableSlot structures" } ]
[ { "msg_contents": "Hi hackers,\n\nI find there are some unnecessary commas for goto lables,\nattached a patch to remove them.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Sun, 09 Oct 2022 07:42:58 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Remove unnecessary commas for goto labels" }, { "msg_contents": "Hi,\n\nOn Sun, Oct 09, 2022 at 07:42:58AM +0800, Japin Li wrote:\n>\n> Hi hackers,\n>\n> I find there are some unnecessary commas for goto lables,\n> attached a patch to remove them.\n\nYou mean semi-colon? +1, and a quick regex later I don't see any other\noccurrence.\n\n\n", "msg_date": "Sun, 9 Oct 2022 11:07:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary commas for goto labels" }, { "msg_contents": "\nOn Sun, 09 Oct 2022 at 11:07, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Hi,\n>\n> On Sun, Oct 09, 2022 at 07:42:58AM +0800, Japin Li wrote:\n>>\n>> Hi hackers,\n>>\n>> I find there are some unnecessary commas for goto lables,\n>> attached a patch to remove them.\n>\n> You mean semi-colon? +1, and a quick regex later I don't see any other\n> occurrence.\n\nYeah semi-colon, sorry for the typo.\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Sun, 09 Oct 2022 11:26:27 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary commas for goto labels" }, { "msg_contents": "On Sun, Oct 9, 2022 at 10:08 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Sun, Oct 09, 2022 at 07:42:58AM +0800, Japin Li wrote:\n> >\n> > Hi hackers,\n> >\n> > I find there are some unnecessary commas for goto lables,\n> > attached a patch to remove them.\n>\n> You mean semi-colon? 
+1, and a quick regex later I don't see any other\n> occurrence.\n\nInterestingly, I did find that in C, some statement is needed after a\nlabel, even an empty one, otherwise it won't compile. That's not the case\nhere, though, so pushed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 10 Oct 2022 15:18:53 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Remove unnecessary commas for goto labels" }, { "msg_contents": "\nOn Mon, 10 Oct 2022 at 16:18, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Interestingly, I did find that in C, some statement is needed after a\n> label, even an empty one, otherwise it won't compile.\n\nYeah, this is required by ISO C [1].\n\n> That's not the case here, though, so pushed.\n\nThank you!\n\n\n[1] https://www.gnu.org/software/gnu-c-manual/gnu-c-manual.html#Labels\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 10 Oct 2022 16:50:17 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove unnecessary commas for goto labels" } ]
[ { "msg_contents": "I happened to notice that the Trap and TrapMacro macros defined in c.h\nhave a grand total of one usage apiece across our entire code base.\nIt seems a little pointless and confusing to have them at all, since\nthey're essentially Assert/AssertMacro but with the inverse condition\npolarity. I'm also annoyed that they are documented while the macros\nwe actually use are not.\n\nI'm also thinking that the \"errorType\" argument of ExceptionalCondition\nis not nearly pulling its weight given the actual usage. Removing it\nreduces the size of an assert-enabled build of HEAD from\n\n$ size src/backend/postgres \n text data bss dec hex filename\n9065335 86280 204496 9356111 8ec34f src/backend/postgres\n\nto\n\n$ size src/backend/postgres \n text data bss dec hex filename\n9001199 86280 204496 9291975 8dc8c7 src/backend/postgres\n\n(on RHEL8 x86_64), which admittedly is only about 1%, but it's 1%\nfor just about no detectable return.\n\nHence, I propose the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 09 Oct 2022 15:51:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Sun, Oct 09, 2022 at 03:51:57PM -0400, Tom Lane wrote:\n> I happened to notice that the Trap and TrapMacro macros defined in c.h\n> have a grand total of one usage apiece across our entire code base.\n> It seems a little pointless and confusing to have them at all, since\n> they're essentially Assert/AssertMacro but with the inverse condition\n> polarity. I'm also annoyed that they are documented while the macros\n> we actually use are not.\n\n+1, I noticed this recently, too.\n\n> Hence, I propose the attached.\n\nThe patch LGTM. 
It might be worth removing usages of AssertArg and\nAssertState, too, but that can always be done separately.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 9 Oct 2022 14:01:48 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sun, Oct 09, 2022 at 03:51:57PM -0400, Tom Lane wrote:\n>> Hence, I propose the attached.\n\n> The patch LGTM. It might be worth removing usages of AssertArg and\n> AssertState, too, but that can always be done separately.\n\nSomething I thought about but forgot to mention in the initial email:\nis it worth sprinkling these macros with \"unlikely()\"? I think that\ncompilers might assume the right thing automatically based on noticing\nthat ExceptionalCondition is noreturn ... but then again they might\nnot. Of course we're not that fussed about micro-optimizations in\nassert-enabled builds; but with so many Asserts in the system, it\nmight still add up to something noticeable if there is an effect.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 09 Oct 2022 17:08:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Sun, Oct 09, 2022 at 05:08:39PM -0400, Tom Lane wrote:\n> Something I thought about but forgot to mention in the initial email:\n> is it worth sprinkling these macros with \"unlikely()\"? I think that\n> compilers might assume the right thing automatically based on noticing\n> that ExceptionalCondition is noreturn ... but then again they might\n> not. 
Of course we're not that fussed about micro-optimizations in\n> assert-enabled builds; but with so many Asserts in the system, it\n> might still add up to something noticeable if there is an effect.\n\nI don't see why not.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 9 Oct 2022 14:29:37 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sun, Oct 09, 2022 at 05:08:39PM -0400, Tom Lane wrote:\n>> Something I thought about but forgot to mention in the initial email:\n>> is it worth sprinkling these macros with \"unlikely()\"?\n\n> I don't see why not.\n\nI experimented with that, and found something that surprised me:\nthere's a noticeable code-bloat effect. With the patch as given,\n\n$ size src/backend/postgres \n text data bss dec hex filename\n9001199 86280 204496 9291975 8dc8c7 src/backend/postgres\n\nbut with unlikely(),\n\n$ size src/backend/postgres \n text data bss dec hex filename\n9035423 86280 204496 9326199 8e4e77 src/backend/postgres\n\nI don't quite understand why that's happening, but it seems to\nshow that this requires some investigation of its own. So for\nnow I just pushed the patch as-is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 15:20:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Sun, Oct 09, 2022 at 02:01:48PM -0700, Nathan Bossart wrote:\n> The patch LGTM. 
It might be worth removing usages of AssertArg and\n> AssertState, too, but that can always be done separately.\n\nIf you are so inclined...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 12 Oct 2022 11:36:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On 12.10.22 20:36, Nathan Bossart wrote:\n> On Sun, Oct 09, 2022 at 02:01:48PM -0700, Nathan Bossart wrote:\n>> The patch LGTM. It might be worth removing usages of AssertArg and\n>> AssertState, too, but that can always be done separately.\n> \n> If you are so inclined...\n\nI'm in favor of this. These variants are a distraction.\n\n\n", "msg_date": "Wed, 12 Oct 2022 21:19:17 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Wed, Oct 12, 2022 at 09:19:17PM +0200, Peter Eisentraut wrote:\n> I'm in favor of this. 
These variants are a distraction.\n\nAgreed, even if extensions could use these, it looks like any\nout-of-core code using what's removed here would also gain in clarity.\nThis is logically fine (except for an indentation blip in\nmiscadmin.h?), so I have marked this entry as ready for committer.\n\nSide note, rather unrelated to what's proposed here: would it be worth\nextending AssertPointerAlignment() for the frontend code?\n--\nMichael", "msg_date": "Thu, 27 Oct 2022 16:23:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On 27.10.22 09:23, Michael Paquier wrote:\n> Agreed, even if extensions could use these, it looks like any\n> out-of-core code using what's removed here would also gain in clarity.\n> This is logically fine (except for an indentation blip in\n> miscadmin.h?), so I have marked this entry as ready for committer.\n\ncommitted\n\n> Side note, rather unrelated to what's proposed here: would it be worth\n> extending AssertPointerAlignment() for the frontend code?\n\nWould there be a use for that? It's currently only used in the atomics \ncode.\n\n\n\n", "msg_date": "Fri, 28 Oct 2022 09:36:23 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Fri, Oct 28, 2022 at 09:36:23AM +0200, Peter Eisentraut wrote:\n> Would there be a use for that? It's currently only used in the atomics\n> code.\n\nYep, but they would not trigger when using atomics in the frontend\ncode. We don't have any use for that in core on HEAD, still that\ncould be useful for some external frontend code? 
Please see the\nattached.\n--\nMichael", "msg_date": "Mon, 31 Oct 2022 09:04:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On 31.10.22 01:04, Michael Paquier wrote:\n> On Fri, Oct 28, 2022 at 09:36:23AM +0200, Peter Eisentraut wrote:\n>> Would there be a use for that? It's currently only used in the atomics\n>> code.\n> \n> Yep, but they would not trigger when using atomics in the frontend\n> code. We don't have any use for that in core on HEAD, still that\n> could be useful for some external frontend code? Please see the\n> attached.\n\nI don't think we need separate definitions for frontend and backend, \nsince the contained Assert() will take care of the difference. So the \nattached would be simpler.", "msg_date": "Mon, 31 Oct 2022 10:02:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I don't think we need separate definitions for frontend and backend, \n> since the contained Assert() will take care of the difference. So the \n> attached would be simpler.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 31 Oct 2022 09:14:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" }, { "msg_contents": "On Mon, Oct 31, 2022 at 09:14:10AM -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I don't think we need separate definitions for frontend and backend, \n>> since the contained Assert() will take care of the difference. 
So the \n>> attached would be simpler.\n> \n> WFM.\n\nThanks, fine by me.\n--\nMichael", "msg_date": "Tue, 1 Nov 2022 09:30:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Simplifying our Trap/Assert infrastructure" } ]
[ { "msg_contents": "Hi hackers,\n\nAs we know when we pull up a simple subquery, if the subquery is within\nthe nullable side of an outer join, lateral references to non-nullable\nitems may have to be turned into PlaceHolderVars. I happened to wonder\nwhat should we do about the PHVs if the outer join is reduced to inner\njoin afterwards. Should we unwrap the related PHVs? I'm asking because\nPHVs may imply lateral dependencies which may make us have to use\nnestloop join. As an example, consider\n\nexplain (costs off)\nselect * from a left join lateral (select a.i as ai, b.i as bi from b) ss\non true where ss.bi = ss.ai;\n QUERY PLAN\n---------------------------\n Nested Loop\n -> Seq Scan on a\n -> Seq Scan on b\n Filter: (i = a.i)\n(4 rows)\n\nAlthough the JOIN_LEFT has been reduced to JOIN_INNER, the lateral\nreference implied by the PHV makes us have no choice but the nestloop\nwith parameterized inner path. Considering there is no index on b, this\nplan is very inefficient.\n\nIs there anything we can do to improve this situation?\n\nThanks\nRichard", "msg_date": "Mon, 10 Oct 2022 10:35:04 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Unnecessary lateral dependencies implied by PHVs" },
{ "msg_contents": "On Mon, Oct 10, 2022 at 10:35 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> As we know when we pull up a simple subquery, if the subquery is within\n> the nullable side of an outer join, lateral references to non-nullable\n> items may have to be turned into PlaceHolderVars. I happened to wonder\n> what should we do about the PHVs if the outer join is reduced to inner\n> join afterwards. Should we unwrap the related PHVs? I'm asking because\n> PHVs may imply lateral dependencies which may make us have to use\n> nestloop join.\n>\n\nAt first I considered about unwrapping the related PHVs after we've\nsuccessfully reduced outer joins to inner joins. But that requires a lot\nof coding which seems not worth the trouble.\n\nI think maybe the problem here is about the order we pull up subqueries\nand we reduce outer joins. But simply flipping the order for them two is\ndefinitely incorrect. I'm not sure how to make it right.\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Mon, 17 Oct 2022 10:47:43 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unnecessary lateral dependencies implied by PHVs" },
{ "msg_contents": "Hi Richard:\n\nOn Mon, Oct 10, 2022 at 10:35 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> ... I'm asking because\n> PHVs may imply lateral dependencies which may make us have to use\n> nestloop join.\n>\n\nI thought lateral join imply nestloop join, am I missing something? Here\nis my simple\ntesting.\n\npostgres=# explain (costs off) select * from r1 join lateral (select r1.a\nfrom r2) on true;\n QUERY PLAN\n----------------------------\n Nested Loop\n -> Seq Scan on r1\n -> Materialize\n -> Seq Scan on r2\n(4 rows)\n\nTime: 0.349 ms\npostgres=# set enable_nestloop to off;\nSET\nTime: 0.123 ms\n\npostgres=# explain (costs off) select * from r1 join lateral (select r1.a\nfrom r2) on true;\n QUERY PLAN\n----------------------------\n Nested Loop\n -> Seq Scan on r1\n -> Materialize\n -> Seq Scan on r2\n(4 rows)\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 18 Oct 2022 09:14:49 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unnecessary lateral dependencies implied by PHVs" },
{ "msg_contents": "On Tue, Oct 18, 2022 at 9:15 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> On Mon, Oct 10, 2022 at 10:35 AM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>> ... I'm asking because\n>> PHVs may imply lateral dependencies which may make us have to use\n>> nestloop join.\n>>\n>\n> I thought lateral join imply nestloop join, am I missing something? Here\n> is my simple\n> testing.\n>\n\nI think it's true most of the time. And that's why we should try to\navoid unnecessary lateral dependencies, as discussed in this thread.\n\nISTM in your example nestloop is chosen because there is no available\nmergejoinable/hashjoinable clause. In your query the lateral subquery\nwould be pulled up into the parent query and there would be no lateral\ndependencies afterwards.\n\nThanks\nRichard", "msg_date": "Tue, 18 Oct 2022 17:13:53 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unnecessary lateral dependencies implied by PHVs" } ]
[ { "msg_contents": "Hi,\n\nIt looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\nXLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\nand check if it matches with the LSN that was stored in the WAL page\nheader (xlp_pageaddr). We find segno, offset and LSN again using\nXLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\nin LSN 'recptr'.\n\nHere's a tiny patch removing the unnecessary XLogSegNoOffsetToRecPtr()\nand using the passed in 'recptr'.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 10 Oct 2022 08:53:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove an unnecessary LSN calculation while validating WAL page\n header" }, { "msg_contents": "At Mon, 10 Oct 2022 08:53:55 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> It looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\n> XLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\n> and check if it matches with the LSN that was stored in the WAL page\n> header (xlp_pageaddr). We find segno, offset and LSN again using\n> XLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\n> in LSN 'recptr'.\n\nYeah, that's obviously useless. 
It looks like a thinko in pg93 when\nrecptr became to be directly passed from the caller instead of\ncalculating from static variables for file, segment and in-segment\noffset.\n\n> Here's a tiny patch removing the unnecessary XLogSegNoOffsetToRecPtr()\n> and using the passed in 'recptr'.\n\nLooks good to me.\n\n# Mysteriously, I didn't find a code to change readId in the pg92 tree..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 11 Oct 2022 14:43:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL\n page header" },
{ "msg_contents": "On Tue, Oct 11, 2022 at 1:44 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Mon, 10 Oct 2022 08:53:55 +0530, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote in\n> > It looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\n> > XLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\n> > and check if it matches with the LSN that was stored in the WAL page\n> > header (xlp_pageaddr). We find segno, offset and LSN again using\n> > XLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\n> > in LSN 'recptr'.\n>\n> Yeah, that's obviously useless. It looks like a thinko in pg93 when\n> recptr became to be directly passed from the caller instead of\n> calculating from static variables for file, segment and in-segment\n> offset.\n\n\n+1. This should be introduced in 7fcbf6a4 as a thinko. A grep search\nshows other callers of XLogSegNoOffsetToRecPtr have no such issue.\n\nThanks\nRichard", "msg_date": "Tue, 11 Oct 2022 17:49:36 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" },
{ "msg_contents": "On Tue, Oct 11, 2022 at 3:19 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Tue, Oct 11, 2022 at 1:44 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Mon, 10 Oct 2022 08:53:55 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n>> > It looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\n>> > XLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\n>> > and check if it matches with the LSN that was stored in the WAL page\n>> > header (xlp_pageaddr). We find segno, offset and LSN again using\n>> > XLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\n>> > in LSN 'recptr'.\n>>\n>> Yeah, that's obviously useless. It looks like a thinko in pg93 when\n>> recptr became to be directly passed from the caller instead of\n>> calculating from static variables for file, segment and in-segment\n>> offset.\n>\n>\n> +1. This should be introduced in 7fcbf6a4 as a thinko. A grep search\n> shows other callers of XLogSegNoOffsetToRecPtr have no such issue.\n\nThanks for reviewing. 
It's a pretty-old code that exists in 9.5 or\nearlier [1], definitely not introduced by 7fcbf6a4.\n\n[1] see XLogReaderValidatePageHeader() in\nhttps://github.com/BRupireddy/postgres/blob/REL9_5_STABLE/src/backend/access/transam/xlogreader.c\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 16:34:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" }, { "msg_contents": "On Tue, Oct 11, 2022 at 7:05 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Tue, Oct 11, 2022 at 3:19 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n> > On Tue, Oct 11, 2022 at 1:44 PM Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote:\n> >> At Mon, 10 Oct 2022 08:53:55 +0530, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote in\n> >> > It looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\n> >> > XLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\n> >> > and check if it matches with the LSN that was stored in the WAL page\n> >> > header (xlp_pageaddr). We find segno, offset and LSN again using\n> >> > XLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\n> >> > in LSN 'recptr'.\n> >>\n> >> Yeah, that's obviously useless. It looks like a thinko in pg93 when\n> >> recptr became to be directly passed from the caller instead of\n> >> calculating from static variables for file, segment and in-segment\n> >> offset.\n> >\n> >\n> > +1. This should be introduced in 7fcbf6a4 as a thinko. A grep search\n> > shows other callers of XLogSegNoOffsetToRecPtr have no such issue.\n>\n> Thanks for reviewing. 
It's a pretty-old code that exists in 9.5 or\n> earlier [1], definitely not introduced by 7fcbf6a4.\n>\n> [1] see XLogReaderValidatePageHeader() in\n>\n> https://github.com/BRupireddy/postgres/blob/REL9_5_STABLE/src/backend/access/transam/xlogreader.c\n\n\nAs I can see in 7fcbf6a4 ValidXLogPageHeader() is refactored as\n\n-static bool\n-ValidXLogPageHeader(XLogPageHeader hdr, int emode, bool segmentonly)\n-{\n- XLogRecPtr recaddr;\n-\n- XLogSegNoOffsetToRecPtr(readSegNo, readOff, recaddr);\n\n+static bool\n+ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,\n+ XLogPageHeader hdr)\n+{\n+ XLogRecPtr recaddr;\n+ XLogSegNo segno;\n+ int32 offset;\n+\n+ Assert((recptr % XLOG_BLCKSZ) == 0);\n+\n+ XLByteToSeg(recptr, segno);\n+ offset = recptr % XLogSegSize;\n+\n+ XLogSegNoOffsetToRecPtr(segno, offset, recaddr);\n\nI think this is where the problem was introduced.\n\nBTW, 7fcbf6a4 seems pretty old too as it can be found in 9.3 branch.\n\nThanks\nRichard\n\nOn Tue, Oct 11, 2022 at 7:05 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Tue, Oct 11, 2022 at 3:19 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Tue, Oct 11, 2022 at 1:44 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> At Mon, 10 Oct 2022 08:53:55 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n>> > It looks like we have an unnecessary XLogSegNoOffsetToRecPtr() in\n>> > XLogReaderValidatePageHeader(). We pass the start LSN of the WAL page\n>> > and check if it matches with the LSN that was stored in the WAL page\n>> > header (xlp_pageaddr). We find segno, offset and LSN again using\n>> > XLogSegNoOffsetToRecPtr(). This happens to be the same as the passed\n>> > in LSN 'recptr'.\n>>\n>> Yeah, that's obviously useless. It looks like a thinko in pg93 when\n>> recptr became to be directly passed from the caller instead of\n>> calculating from static variables for file, segment and in-segment\n>> offset.\n>\n>\n> +1. 
This should be introduced in 7fcbf6a4 as a thinko. A grep search\n> shows other callers of XLogSegNoOffsetToRecPtr have no such issue.\n\nThanks for reviewing. It's a pretty-old code that exists in 9.5 or\nearlier [1], definitely not introduced by 7fcbf6a4.\n\n[1] see XLogReaderValidatePageHeader() in\nhttps://github.com/BRupireddy/postgres/blob/REL9_5_STABLE/src/backend/access/transam/xlogreader.c As I can see in 7fcbf6a4 ValidXLogPageHeader() is refactored as-static bool-ValidXLogPageHeader(XLogPageHeader hdr, int emode, bool segmentonly)-{-   XLogRecPtr  recaddr;--   XLogSegNoOffsetToRecPtr(readSegNo, readOff, recaddr);+static bool+ValidXLogPageHeader(XLogReaderState *state, XLogRecPtr recptr,+                   XLogPageHeader hdr)+{+   XLogRecPtr  recaddr;+   XLogSegNo   segno;+   int32       offset;++   Assert((recptr % XLOG_BLCKSZ) == 0);++   XLByteToSeg(recptr, segno);+   offset = recptr % XLogSegSize;++   XLogSegNoOffsetToRecPtr(segno, offset, recaddr);I think this is where the problem was introduced.BTW, 7fcbf6a4 seems pretty old too as it can be found in 9.3 branch.ThanksRichard", "msg_date": "Tue, 11 Oct 2022 21:49:18 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" }, { "msg_contents": "On 2022-Oct-11, Richard Guo wrote:\n\n> As I can see in 7fcbf6a4 ValidXLogPageHeader() is refactored as\n\nTrue, but look at dfda6ebaec67 -- that changed the use of recaddr from\nbeing built from parts (which were globals back then) into something\nthat was computed locally.\n\nKnowing how difficult that code was, and how heroic was to change it to\na more maintainable form, I place no blame on failing to notice that\nsome small thing could have been written more easily.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nY una voz del caos me habló y me dijo\n\"Sonríe y sé feliz, podría ser peor\".\nY sonreí. 
Y fui feliz.\nY fue peor.\n\n\n", "msg_date": "Tue, 11 Oct 2022 18:24:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" }, { "msg_contents": "On Tue, Oct 11, 2022 at 06:24:26PM +0200, Alvaro Herrera wrote:\n> Knowing how difficult that code was, and how heroic was to change it to\n> a more maintainable form, I place no blame on failing to notice that\n> some small thing could have been written more easily.\n\n+1. And even after such changes the code is still complex for the\ncommon eye. Anyway, this makes the code a bit simpler with the exact\nsame maths, so applied.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 10:00:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" }, { "msg_contents": "On Wed, Oct 12, 2022 at 12:24 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> Knowing how difficult that code was, and how heroic was to change it to\n> a more maintainable form, I place no blame on failing to notice that\n> some small thing could have been written more easily.\n\n\nConcur with that. The changes in 7fcbf6a4 made the code to a more\nmaintainable and readable state. It's a difficult but awesome work.\n\nThanks\nRichard\n\nOn Wed, Oct 12, 2022 at 12:24 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\nKnowing how difficult that code was, and how heroic was to change it to\na more maintainable form, I place no blame on failing to notice that\nsome small thing could have been written more easily. Concur with that. The changes in 7fcbf6a4 made the code to a moremaintainable and readable state. 
It's a difficult but awesome work.ThanksRichard", "msg_date": "Wed, 12 Oct 2022 10:20:18 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove an unnecessary LSN calculation while validating WAL page\n header" } ]
[ { "msg_contents": "I noticed that the new Perl test modules are not installed, so if you\ntry to use PostgreSQL/Test/Cluster.pm in an external test from pgxs, it\nfails with the modules not being found.\n\nI see no reason for this other than having overseen it in b235d41d9646,\nso I propose the attached (for all branches, naturally.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"", "msg_date": "Mon, 10 Oct 2022 11:34:15 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "src/test/perl/PostgreSQL/Test/*.pm not installed" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I noticed that the new Perl test modules are not installed, so if you\n> try to use PostgreSQL/Test/Cluster.pm in an external test from pgxs, it\n> fails with the modules not being found.\n> I see no reason for this other than having overseen it in b235d41d9646,\n> so I propose the attached (for all branches, naturally.)\n\n+1, but I suppose you need some adjustment in the meson.build files\nnow too.\n\n(Also, please wait for the v15 release freeze to lift.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 10:34:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/test/perl/PostgreSQL/Test/*.pm not installed" }, { "msg_contents": "On 2022-Oct-10, Tom Lane wrote:\n\n> +1, but I suppose you need some adjustment in the meson.build files\n> now too.\n\nOh, right, I forgot ...\n\n> (Also, please wait for the v15 release freeze to lift.)\n\n... 
and now that I look, it turns out that 15 and master need no\nchanges: both the Makefile and the meson files are correct already.\nOnly 14 and back have this problem.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 10 Oct 2022 18:30:58 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: src/test/perl/PostgreSQL/Test/*.pm not installed" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Only 14 and back have this problem.\n\nAh, cool. There's no freeze on those branches ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 13:00:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/test/perl/PostgreSQL/Test/*.pm not installed" }, { "msg_contents": "On Mon, Oct 10, 2022 at 11:34:15AM +0200, Alvaro Herrera wrote:\n> I noticed that the new Perl test modules are not installed, so if you\n> try to use PostgreSQL/Test/Cluster.pm in an external test from pgxs, it\n> fails with the modules not being found.\n> \n> I see no reason for this other than having overseen it in b235d41d9646,\n> so I propose the attached (for all branches, naturally.)\n\n+1, good catch. The patch looks fine.\n--\nMichael", "msg_date": "Tue, 11 Oct 2022 14:49:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: src/test/perl/PostgreSQL/Test/*.pm not installed" } ]
[ { "msg_contents": "Hi\n\nSmall patch for $subject, as the other pg_get_XXXdef() functions are\ndocumented\nand I was looking for this one but couldn't remember what it was called.\n\nRegards\n\nIan Barwick", "msg_date": "Mon, 10 Oct 2022 22:38:02 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "doc: add entry for pg_get_partkeydef()" }, { "msg_contents": "Ian Lawrence Barwick <barwick@gmail.com> writes:\n> Small patch for $subject, as the other pg_get_XXXdef() functions are\n> documented\n> and I was looking for this one but couldn't remember what it was called.\n\nSeems reasonable. Pushed with minor wording adjustment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Oct 2022 14:29:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: doc: add entry for pg_get_partkeydef()" }, { "msg_contents": "2022年10月12日(水) 3:29 Tom Lane <tgl@sss.pgh.pa.us>:\n\n> Ian Lawrence Barwick <barwick@gmail.com> writes:\n> > Small patch for $subject, as the other pg_get_XXXdef() functions are\n> > documented\n> > and I was looking for this one but couldn't remember what it was called.\n>\n> Seems reasonable. Pushed with minor wording adjustment.\n>\n\nMany thanks!\n\nRegards\n\nIan Barwick", "msg_date": "Wed, 12 Oct 2022 22:02:40 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: doc: add entry for pg_get_partkeydef()" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe PostgreSQL 15 GA will be Oct 13, 2022.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Mon, 10 Oct 2022 10:08:30 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 GA - Oct 13, 2022" } ]
[ { "msg_contents": "Hi!\n\nThis patch is inspired by [0] and many others.\nI've notice recent activity to convert macros into inline functions. We\nshould make TransactionIdRetreat/Advance functions\nInstead of a macro, should we?\n\nI also think about NormalTransactionIdPrecedes and\nNormalTransactionIdFollows, but maybe, they should be addressed\nseparately: the comment says that \"this is a macro for speed\".\n\nAny thoughts?\n\n[0]:\nhttps://www.postgresql.org/message-id/flat/5b558da8-99fb-0a99-83dd-f72f05388517%40enterprisedb.com\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 10 Oct 2022 17:34:14 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Turn TransactionIdRetreat/Advance into inline functions" }, { "msg_contents": "Maxim Orlov <orlovmg@gmail.com> writes:\n> I've notice recent activity to convert macros into inline functions. We\n> should make TransactionIdRetreat/Advance functions\n> Instead of a macro, should we?\n\n-1. Having to touch all the call sites like this outweighs\nany claimed advantage: it makes them uglier and it will greatly\ncomplicate any back-patching we might have to do in those areas.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 10:58:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turn TransactionIdRetreat/Advance into inline functions" }, { "msg_contents": "> -1. Having to touch all the call sites like this outweighs\n> any claimed advantage: it makes them uglier and it will greatly\n> complicate any back-patching we might have to do in those areas.\n>\n> regards, tom lane\n>\n\nOk, got it. But what if we change the semantics of these calls to\nxid = TransactionIdAdvance(xid) ?\n\n-- \nBest regards,\nMaxim Orlov.\n\n\n-1.  
Having to touch all the call sites like this outweighs\nany claimed advantage: it makes them uglier and it will greatly\ncomplicate any back-patching we might have to do in those areas.\n\n                        regards, tom lane\nOk, got it. But what if we change the semantics of these calls to xid = TransactionIdAdvance(xid) ?-- Best regards,Maxim Orlov.", "msg_date": "Mon, 10 Oct 2022 18:08:20 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Turn TransactionIdRetreat/Advance into inline functions" }, { "msg_contents": "Maxim Orlov <orlovmg@gmail.com> writes:\n>> -1. Having to touch all the call sites like this outweighs\n>> any claimed advantage: it makes them uglier and it will greatly\n>> complicate any back-patching we might have to do in those areas.\n\n> Ok, got it. But what if we change the semantics of these calls to\n> xid = TransactionIdAdvance(xid) ?\n\nUh ... you'd still have to touch all the call sites.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Oct 2022 11:12:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turn TransactionIdRetreat/Advance into inline functions" } ]
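For context on the thread above, the following is a minimal, self-contained C sketch contrasting the two styles under discussion. The type and constant are re-declared locally for illustration (they are not copied verbatim from PostgreSQL's transam.h), and the value-returning function is the hypothetical form Maxim floats — which is exactly why every call site would have to be rewritten as `xid = TransactionIdAdvance(xid)`:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t TransactionId;
#define FirstNormalTransactionId ((TransactionId) 3)

/* Macro style: updates its argument in place, so call sites just say
 * TransactionIdAdvanceMacro(xid); */
#define TransactionIdAdvanceMacro(dest) \
	do { \
		(dest)++; \
		if ((dest) < FirstNormalTransactionId) \
			(dest) = FirstNormalTransactionId; \
	} while (0)

/* Hypothetical inline-function style with value-returning semantics.
 * Call sites would have to become: xid = TransactionIdAdvance(xid); */
static inline TransactionId
TransactionIdAdvance(TransactionId xid)
{
	xid++;
	/* skip the special XIDs below FirstNormalTransactionId on wraparound */
	if (xid < FirstNormalTransactionId)
		xid = FirstNormalTransactionId;
	return xid;
}
```

The behavioral difference is purely at the call site; both forms skip the reserved low XIDs when the 32-bit counter wraps.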
[ { "msg_contents": "> One idea would be to add a flag, say report_parallel_vacuum_progress,\r\n> to IndexVacuumInfo struct and expect index AM to check and update the\r\n> parallel index vacuum progress, say every 1GB blocks processed. The\r\n> flag is true only when the leader process is vacuuming an index.\r\n\r\nSorry for the long delay on this. I have taken the approach as suggested\r\nby Sawada-san and Robert and attached is v12.\r\n\r\n1. The patch introduces a new counter in the shared memory already\r\nused by the parallel leader and workers to keep track of the number\r\nof indexes completed. This way there is no reason to loop through\r\nthe index status everytime we want to get the status of indexes completed.\r\n\r\n2. A new function in vacuumparallel.c will be used to update\r\nthe progress of a indexes completed by reading from the\r\ncounter created in point #1.\r\n\r\n3. The function is called during the vacuum_delay_point as a\r\nmatter of convenience, since it's called in all major vacuum\r\nloops. The function will only do anything if the caller\r\nsets a boolean to report progress. Doing so will also ensure\r\nprogress is being reported in case the parallel workers completed\r\nbefore the leader.\r\n\r\n4. Rather than adding any complexity to WaitForParallelWorkersToFinish\r\nand introducing a new callback, vacuumparallel.c will wait until\r\nthe number of vacuum workers is 0 and then process to call\r\nWaitForParallelWorkersToFinish as it does.\r\n\r\n5. Went back to the idea of adding a new view called pg_stat_progress_vacuum_index\r\nwhich is accomplished by adding a new type called VACUUM_PARALLEL in progress.h\r\n\r\n\r\nThanks,\r\n\r\nSami Imseih\r\nAmazon Web Servies (AWS)", "msg_date": "Mon, 10 Oct 2022 16:40:33 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Add index scan progress to pg_stat_progress_vacuum" } ]
[ { "msg_contents": "The autovacuum_freeze_max_age reloption exists so that the DBA can\noptionally have antiwraparound autovacuums run against a table that\nrequires more frequent antiwraparound autovacuums. This has problems\nbecause there are actually two types of VACUUM right now (aggressive\nand non-aggressive), which, strictly speaking, is an independent\ncondition of antiwraparound-ness. There is a tacit assumption within\nautovacuum.c that all antiwraparound autovacuums are also aggressive,\nI think. But that just isn't true, which leads to clearly broken\nbehavior when the autovacuum_freeze_max_age reloption is in use.\n\nNote that the VacuumParams state that gets passed down to\nvacuum_set_xid_limits() does not include anything about the \"reloption\nversion\" of autovacuum_freeze_max_age. So quite naturally\nvacuum_set_xid_limits() can only work off of the\nautovacuum_freeze_max_age GUC, even when the reloption happens to have\nbeen used over in autovacuum.c. In practice this means that we can\neasily see autovacuum spin uselessly when the reloption is in use --\nit'll launch antiwraparound autovacuums that never advance\nrelfrozenxid and so never address the relfrozenxid age issue from the\npoint of view of autovacuum.c.\n\nThere is no reason to think that the user will also (say) set the\nautovacuum_freeze_table_age reloption separately (not to be confused\nwith the vacuum_freeze_table_age GUC!). We'll usually just work off\nthe GUC (I mean why wouldn't we?). I don't see why vacuumlazy.c\ndoesn't just force aggressive mode whenever it sees an antiwraparound\nautovacuum, no matter what. 
Recall the problem scenario that led to\nbugfix commit dd9ac7d5 -- that also could have been avoided by making\nsure that every antiwraparound autovacuum was aggressive (actually the\noriginal problem was that we'd suppress non-aggressive antiwraparound\nautovacuums as redundant).\n\nI only noticed this problem because I am in the process of writing a\npatch series that demotes vacuum_freeze_table_age to a mere\ncompatibility option (and even gets rid of the whole concept of\naggressive VACUUM). The whole way that vacuum_freeze_table_age and\nautovacuum_freeze_max_age are supposed to work together seems very\nconfusing to me. I'm not surprised that this was overlooked for so long.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 Oct 2022 16:46:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "autovacuum_freeze_max_age reloption seems broken" }, { "msg_contents": "On Mon, Oct 10, 2022 at 4:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is no reason to think that the user will also (say) set the\n> autovacuum_freeze_table_age reloption separately (not to be confused\n> with the vacuum_freeze_table_age GUC!). We'll usually just work off\n> the GUC (I mean why wouldn't we?). I don't see why vacuumlazy.c\n> doesn't just force aggressive mode whenever it sees an antiwraparound\n> autovacuum, no matter what.\n\nActually, even forcing every antiwraparound autovacuum to use\naggressive mode isn't enough to stop autovacuum.c from spinning. It\nmight be a good start, but it still leaves the freeze_min_age issue.\n\nThe only way that autovacuum.c is going to be satisfied and back off\nwith launching antiwraparound autovacuums is if relfrozenxid is\nadvanced, and advanced by a significant amount. But what if the\nautovacuum_freeze_max_age reloption happens to have been set to\nsomething that's significantly less than the value of the\nvacuum_freeze_min_age GUC (or the autovacuum_freeze_min_age reloption,\neven)? 
Most of the time we can rely on vacuum_set_xid_limits() making\nsure that the FreezeLimit cutoff (cutoff that determines which XID\nwe'll freeze) isn't unreasonably old relative to other cutoffs. But\nthat won't work if we're forcing an aggressive VACUUM in vacuumlazy.c.\n\nI suppose that this separate freeze_min_age issue could be fixed by\nteaching autovacuum.c's table_recheck_autovac() function to set\nfreeze_min_age to something less than the current value of reloptions\nlike autovacuum_freeze_min_age and autovacuum_freeze_table_age for the\nsame table (when either of the table-level reloptions happened to be\nset). In other words, autovacuum.c could be taught to make sure that\nthese reloption-based cutoffs have sane values relative to each other\nby applying roughly the same approach taken in vacuum_set_xid_limits()\nfor the GUCs.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 Oct 2022 17:37:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: autovacuum_freeze_max_age reloption seems broken" } ]
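The relative-cutoff clamping suggested in the last paragraph can be sketched as below. This is a hypothetical illustration, not PostgreSQL code: it applies the same kind of rule vacuum_set_xid_limits() uses for the GUCs (capping the effective freeze_min_age at half of the freeze_max_age in effect) to reloption-derived values, so that a forced aggressive VACUUM still freezes enough rows to advance relfrozenxid meaningfully:

```c
#include <assert.h>

/* Hypothetical sketch: keep freeze_min_age small enough, relative to
 * the (reloption) freeze_max_age, that an antiwraparound vacuum can
 * actually advance relfrozenxid by a significant amount. */
static int
clamp_freeze_min_age(int freeze_min_age, int freeze_max_age)
{
	if (freeze_min_age > freeze_max_age / 2)
		freeze_min_age = freeze_max_age / 2;
	return freeze_min_age;
}
```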
[ { "msg_contents": "Hello,\n\nWe are looking for an example on how to consume the changes of WAL produced\nby logical decoding (streaming or SQL interface) in another postgres server.\n\nBasically, we are trying to create a replica/standby postgre server to a\nprimary progre server. Between Logical replication and Logical Decoding we\ncame up with Logical decoding as the choice due to limitation of logical\nreplication (materialized views, external views/tables, sequences not\nreplicated). However we are not finding a good example with instructions on\nhow to set up a consumer postgre server.\n\nThanks\nAnkit\n\nHello,We are looking for an example on how to consume the changes of WAL produced by logical decoding (streaming or SQL interface) in another postgres server.Basically, we are trying to create a replica/standby postgre server to a primary progre server. Between Logical replication and Logical Decoding we came up with Logical decoding as the choice due to limitation of logical replication (materialized views, external views/tables, sequences not replicated). However we are not finding a good example with instructions on how to set up a consumer postgre server.ThanksAnkit", "msg_date": "Tue, 11 Oct 2022 09:32:38 +0530", "msg_from": "Ankit Oza <ankit.p.oza@gmail.com>", "msg_from_op": true, "msg_subject": "PostgreSQL Logical decoding" }, { "msg_contents": "Hi Ankit,\n\n\nOn Tue, Oct 11, 2022 at 9:32 AM Ankit Oza <ankit.p.oza@gmail.com> wrote:\n>\n> Hello,\n>\n> We are looking for an example on how to consume the changes of WAL produced by logical decoding (streaming or SQL interface) in another postgres server.\n\nbuilt-in logical replication is good example to start looking for.\nhttps://www.postgresql.org/docs/current/logical-replication.html\n\n>\n> Basically, we are trying to create a replica/standby postgre server to a primary progre server. 
Between Logical replication and Logical Decoding we came up with Logical decoding as the choice due to limitation of logical replication (materialized views, external views/tables, sequences not replicated). However we are not finding a good example with instructions on how to set up a consumer postgre server.\n>\n\nLogical decoding is the process to convert WAL to a logical change,\nlogical replication deals with transferring these changes to another\nserver and applying those there. So they work in tandem; just one\nwithout the other can not be used. So I am confused about your\nrequirements.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 11 Oct 2022 11:01:23 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Logical decoding" }, { "msg_contents": "On Tue, Oct 11, 2022 at 9:32 AM Ankit Oza <ankit.p.oza@gmail.com> wrote:\n>\n> Hello,\n>\n> We are looking for an example on how to consume the changes of WAL produced by logical decoding (streaming or SQL interface) in another postgres server.\n>\n> Basically, we are trying to create a replica/standby postgre server to a primary progre server. Between Logical replication and Logical Decoding we came up with Logical decoding as the choice due to limitation of logical replication (materialized views, external views/tables, sequences not replicated). However we are not finding a good example with instructions on how to set up a consumer postgre server.\n>\n\nI think from a code perspective, you can look at contrib/test_decoding\nand src\\backend\\replication\\pgoutput to see how to consume changes and\nsend them to the replica. 
You can refer to docs [1] for SQL functions\nto consume changes.\n\n[1] - https://www.postgresql.org/docs/devel/functions-admin.html#FUNCTIONS-REPLICATION\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Oct 2022 11:55:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Logical decoding" }, { "msg_contents": "Thanks Ashutosh,\n\nActually we use the Postgres service offered by Azure (Flexible server).\nSo, I was looking at the following documentation which talks about Logical\nReplication and Logical Decoding as two different methods of replication.\nHere Logical replication talks about creating both Publisher and Subscriber\nsettings using simple SQL statements. While for Logical decoding its\ntalking about publishing WAL but not on how to consume this WAL.\nLogical replication and logical decoding - Azure Database for PostgreSQL -\nFlexible Server | Microsoft Learn\n<https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logical>\n\nAlso Logical Replication has some limitations like materialized views,\nsequences being not replicated. While DDL changes propagation is a common\ndeficiency among both Logical decoding and Logical Replication. Am I\nreading this correctly?\nPostgreSQL: Documentation: 12: 30.4. Restrictions\n<https://www.postgresql.org/docs/12/logical-replication-restrictions.html>\n\nWith this reading I thought Logical decoding may be the way to go. 
However\nplease guide us on our understanding.\n\nThanks\nAnkit\n\nOn Tue, Oct 11, 2022 at 11:01 AM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com> wrote:\n\n> Hi Ankit,\n>\n>\n> On Tue, Oct 11, 2022 at 9:32 AM Ankit Oza <ankit.p.oza@gmail.com> wrote:\n> >\n> > Hello,\n> >\n> > We are looking for an example on how to consume the changes of WAL\n> produced by logical decoding (streaming or SQL interface) in another\n> postgres server.\n>\n> built-in logical replication is good example to start looking for.\n> https://www.postgresql.org/docs/current/logical-replication.html\n>\n> >\n> > Basically, we are trying to create a replica/standby postgre server to a\n> primary progre server. Between Logical replication and Logical Decoding we\n> came up with Logical decoding as the choice due to limitation of logical\n> replication (materialized views, external views/tables, sequences not\n> replicated). However we are not finding a good example with instructions on\n> how to set up a consumer postgre server.\n> >\n>\n> Logical decoding is the process to convert WAL to a logical change,\n> logical replication deals with transferring these changes to another\n> server and applying those there. So they work in tandem; just one\n> without the other can not be used. So I am confused about your\n> requirements.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n", "msg_date": "Wed, 12 Oct 2022 10:09:00 +0530", "msg_from": "Ankit Oza <ankit.p.oza@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Logical decoding" }, { "msg_contents": "On Wed, Oct 12, 2022 at 10:09 AM Ankit Oza <ankit.p.oza@gmail.com> wrote:\n>\n> Thanks Ashutosh,\n>\n> Actually we use the Postgres service offered by Azure (Flexible server). So, I was looking at the following documentation which talks about Logical Replication and Logical Decoding as two different methods of replication. Here Logical replication talks about creating both Publisher and Subscriber settings using simple SQL statements. While for Logical decoding its talking about publishing WAL but not on how to consume this WAL.\n> Logical replication and logical decoding - Azure Database for PostgreSQL - Flexible Server | Microsoft Learn\n>\n> Also Logical Replication has some limitations like materialized views, sequences being not replicated. While DDL changes propagation is a common deficiency among both Logical decoding and Logical Replication. Am I reading this correctly?\n> PostgreSQL: Documentation: 12: 30.4. Restrictions\n>\n> With this reading I thought Logical decoding may be the way to go. However please guide us on our understanding.\n>\n\nThose restrictions (sequences, materialized views, etc.) apply to\nlogical decoding as well. We don't support decoding operations on\nthose objects.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 Oct 2022 11:44:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Logical decoding" } ]
[ { "msg_contents": "Hi,\n\n\nAs we know postgres using high level lock when do alter table or other ddl commands,\nIt will block any dml operation, while it also will block by long term dml operation.\n\n\nLike what discuss as follow :\nhttps://dba.stackexchange.com/questions/293992/make-alter-table-wait-for-lock-without-blocking-anything-else.\n\n\nI know that postgres try to avoid rewrite table when alter table happen , and so far, it support serveral ddl using concurrently feature,\nLike create indexes. But like alter table add/drop colum, alter column type, it also will trigger rewrtie table . Long term block will make application offline in long times.\n\n\nSo is there any plan to support these ddl online and lock free?if not could you explain the technological difficulty ?\n\n\nThanks and wating your respond!\nHi,As we know postgres using high level lock when do alter table or other ddl commands,It will block any dml operation, while it also will block by long term dml operation.Like what discuss as follow :https://dba.stackexchange.com/questions/293992/make-alter-table-wait-for-lock-without-blocking-anything-else.I know that postgres try to avoid rewrite table when alter table happen , and so far, it support serveral ddl using concurrently feature,Like create indexes. But like alter table add/drop colum, alter column type, it also will trigger rewrtie table . Long term block will make application offline in long times.So is there any plan to support these ddl online and lock free?if not could you explain the  technological difficulty ?Thanks and wating your respond!", "msg_date": "Tue, 11 Oct 2022 17:43:03 +0800 (CST)", "msg_from": "jiye <jiye_sw@126.com>", "msg_from_op": true, "msg_subject": "Is there any plan to support online schem change in postgresql?" 
}, { "msg_contents": "On Tue, Oct 11, 2022 at 05:43:03PM +0800, jiye wrote:\n> As we know postgres using high level lock when do alter table or other ddl commands,\n> It will block any dml operation, while it also will block by long term dml operation.\n\nMost of the things can be already done in non-blocking (or almost\nnon-blocking way) if you just do it in a way that takes concurrency into\naccount.\n\nSpecifically - I have no problem adding/deleting columns.\n\nBest regards,\n\ndepesz\n\n\n\n", "msg_date": "Tue, 11 Oct 2022 12:05:01 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": false, "msg_subject": "Re: Is there any plan to support online schem change in postgresql?" }, { "msg_contents": "But, as follow, if txn1 not commit (just like long term readonly txn), it will block txn2's ddl job, why alt add/drop column can not concurrently with read only access?\n\n\ntxn1: long term txn not commit access t1.\ntxn2 waiting txn1 to commit or abort.\n\ntxn3 wait txn2...\n\n\n\n\n\n\n\n\nAt 2022-10-11 18:05:01, \"hubert depesz lubaczewski\" <depesz@depesz.com> wrote:\n>On Tue, Oct 11, 2022 at 05:43:03PM +0800, jiye wrote:\n>> As we know postgres using high level lock when do alter table or other ddl commands,\n>> It will block any dml operation, while it also will block by long term dml operation.\n>\n>Most of the things can be already done in non-blocking (or almost\n>non-blocking way) if you just do it in a way that takes concurrency into\n>account.\n>\n>Specifically - I have no problem adding/deleting columns.\n>\n>Best regards,\n>\n>depesz", "msg_date": "Tue, 11 Oct 2022 20:31:53 +0800 (CST)", "msg_from": "jiye <jiye_sw@126.com>", "msg_from_op": false, "msg_subject": "Re:Re: Is there any plan to support online schem change in\n postgresql?" 
}, { "msg_contents": "On Tue, Oct 11, 2022 at 08:31:53PM +0800, jiye wrote:\n> But, as follow, if txn1 not commit (just like long term readonly txn), it will block txn2's ddl job, why alt add/drop column can not concurrently with read only access?\n> txn1: long term txn not commit access t1.\n> txn2 waiting txn1 to commit or abort.\n> txn3 wait txn2...\n\n1. Please don't share code as screenshots.\n2. If I understand your text above correctly, then the solution is\n trivial:\n https://www.depesz.com/2019/09/26/how-to-run-short-alter-table-without-long-locking-concurrent-queries/\n\ndepesz\n\n\n", "msg_date": "Tue, 11 Oct 2022 14:45:24 +0200", "msg_from": "hubert depesz lubaczewski <depesz@depesz.com>", "msg_from_op": false, "msg_subject": "Re: Re: Is there any plan to support online schem change in\n postgresql?" } ]
[ { "msg_contents": "Today you get\n\ntest=> EXPLAIN SELECT * FROM tab WHERE col = $1;\nERROR: there is no parameter $1\n\nwhich makes sense. Nonetheless, it would be great to get a generic plan\nfor such a query. Sometimes you don't have the parameters (if you grab\nthe statement from \"pg_stat_statements\", or if it is from an error message\nin the log, and you didn't enable \"log_parameter_max_length_on_error\").\nSometimes it is just very painful to substitute the 25 parameters from\nthe detail message.\n\nWith the attached patch you can get the following:\n\ntest=> SET plan_cache_mode = force_generic_plan;\nSET\ntest=> EXPLAIN (COSTS OFF) SELECT * FROM pg_proc WHERE oid = $1;\n QUERY PLAN \n═══════════════════════════════════════════════\n Index Scan using pg_proc_oid_index on pg_proc\n Index Cond: (oid = $1)\n(2 rows)\n\nThat's not the same as a full-fledged EXPLAIN (ANALYZE, BUFFERS),\nbut it can definitely be helpful.\n\nI tied that behavior to the setting of \"plan_cache_mode\" where you\nare guaranteed to get a generic plan; I couldn't think of a better way.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 11 Oct 2022 14:37:25 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Today you get\n> test=> EXPLAIN SELECT * FROM tab WHERE col = $1;\n> ERROR: there is no parameter $1\n> which makes sense. 
Nonetheless, it would be great to get a generic plan\n> for such a query.\n\nI can see the point, but it also seems like it risks masking stupid\nmistakes.\n\n> I tied that behavior to the setting of \"plan_cache_mode\" where you\n> are guaranteed to get a generic plan; I couldn't think of a better way.\n\nI think it might be better to drive it off an explicit EXPLAIN option,\nperhaps\n\nEXPLAIN (GENERIC_PLAN) SELECT * FROM tab WHERE col = $1;\n\nThis option (bikeshedding on the name welcome) would have the effect\nboth of allowing unanchored Param symbols and of temporarily forcing\ngeneric-plan mode, so that you don't need additional commands to\nset and reset plan_cache_mode. We could also trivially add logic\nto disallow the combination of ANALYZE and GENERIC_PLAN, which\nwould otherwise be a bit messy to prevent.\n\nFor context, it does already work to do this when you want to\ninvestigate parameterized plans:\n\nregression=# prepare foo as select * from tenk1 where unique1 = $1;\nPREPARE\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nIf you're trying to investigate custom-plan behavior, then you\nneed to supply concrete parameter values somewhere, so I think\nthis approach is fine for that case. 
(Shoehorning parameter\nvalues into EXPLAIN options seems like it'd be a bit much.)\nHowever, investigating generic-plan behavior this way is tedious,\nsince you have to invent irrelevant parameter values, plus mess\nwith plan_cache_mode or else run the explain half a dozen times.\nSo I can get behind having a more convenient way for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Oct 2022 09:49:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Tue, Oct 11, 2022 at 09:49:14AM -0400, Tom Lane wrote:\n>\n> If you're trying to investigate custom-plan behavior, then you\n> need to supply concrete parameter values somewhere, so I think\n> this approach is fine for that case. (Shoehorning parameter\n> values into EXPLAIN options seems like it'd be a bit much.)\n> However, investigating generic-plan behavior this way is tedious,\n> since you have to invent irrelevant parameter values, plus mess\n> with plan_cache_mode or else run the explain half a dozen times.\n> So I can get behind having a more convenient way for that.\n\nOne common use case is tools identifying a slow query using pg_stat_statements,\nidentifying some missing indexes and then wanting to check whether the index\nshould be useful using some hypothetical index.\n\nFTR I'm working on such a project and for now we have to go to great lengths\ntrying to \"unjumble\" such queries, so having a way to easily get the answer for\na generic plan would be great.\n\n\n", "msg_date": "Wed, 12 Oct 2022 00:03:48 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Wed, 2022-10-12 at 00:03 +0800, Julien Rouhaud wrote:\n> On Tue, Oct 11, 2022 at 09:49:14AM -0400, Tom Lane wrote:\n> > I think it might be better to drive it off an explicit 
EXPLAIN option,\n> > perhaps\n> >\n> > EXPLAIN (GENERIC_PLAN) SELECT * FROM tab WHERE col = $1;\n> > \n> > If you're trying to investigate custom-plan behavior, then you\n> > need to supply concrete parameter values somewhere, so I think\n> > this approach is fine for that case.  (Shoehorning parameter\n> > values into EXPLAIN options seems like it'd be a bit much.)\n> > However, investigating generic-plan behavior this way is tedious,\n> > since you have to invent irrelevant parameter values, plus mess\n> > with plan_cache_mode or else run the explain half a dozen times.\n> > So I can get behind having a more convenient way for that.\n> \n> One common use case is tools identifying a slow query using pg_stat_statements,\n> identifying some missing indexes and then wanting to check whether the index\n> should be useful using some hypothetical index.\n> \n> FTR I'm working on such a project and for now we have to go to great lengths\n> trying to \"unjumble\" such queries, so having a way to easily get the answer for\n> a generic plan would be great.\n\nThanks for the suggestions and the encouragement. Here is a patch that\nimplements it with an EXPLAIN option named GENERIC_PLAN.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 25 Oct 2022 11:08:27 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 25, 2022 at 11:08:27AM +0200, Laurenz Albe wrote:\n>\n> Here is a patch that\n> implements it with an EXPLAIN option named GENERIC_PLAN.\n\nI only have a quick look at the patch for now. Any reason why you don't rely\non the existing explain_filter() function for emitting stable output (without\nhaving to remove the costs)? 
It would also take care of checking that it works\nin plpgsql.\n\n\n", "msg_date": "Tue, 25 Oct 2022 19:03:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Tue, 2022-10-25 at 19:03 +0800, Julien Rouhaud wrote:\n> On Tue, Oct 25, 2022 at 11:08:27AM +0200, Laurenz Albe wrote:\n> > Here is a patch that\n> > implements it with an EXPLAIN option named GENERIC_PLAN.\n> \n> I only have a quick look at the patch for now.  Any reason why you don't rely\n> on the existing explain_filter() function for emitting stable output (without\n> having to remove the costs)?  It would also take care of checking that it works\n> in plpgsql.\n\nNo, there is no principled reason I did it like that. Version 2 does it like\nyou suggest.\n\nYours,\nLaurenz Albe", "msg_date": "Sat, 29 Oct 2022 10:35:26 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Hi,\n\nOn 2022-10-29 10:35:26 +0200, Laurenz Albe wrote:\n> On Tue, 2022-10-25 at 19:03 +0800, Julien Rouhaud wrote:\n> > On Tue, Oct 25, 2022 at 11:08:27AM +0200, Laurenz Albe wrote:\n> > > Here is a patch that\n> > > implements it with an EXPLAIN option named GENERIC_PLAN.\n> > \n> > I only have a quick look at the patch for now. Any reason why you don't rely\n> > on the existing explain_filter() function for emitting stable output (without\n> > having to remove the costs)? It would also take care of checking that it works\n> > in plpgsql.\n> \n> No, there is no principled reason I did it like that. 
Version 2 does it like\n> you suggest.\n\nThis fails to build the docs:\n\nhttps://cirrus-ci.com/task/5609301511766016\n\n[17:47:01.064] ref/explain.sgml:179: parser error : Opening and ending tag mismatch: likeral line 179 and literal\n[17:47:01.064] <likeral>ANALYZE</literal>, since a statement with unknown parameters\n[17:47:01.064] ^\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Dec 2022 10:17:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Tue, 2022-12-06 at 10:17 -0800, Andres Freund wrote:\n> On 2022-10-29 10:35:26 +0200, Laurenz Albe wrote:\n> > > > Here is a patch that\n> > > > implements it with an EXPLAIN option named GENERIC_PLAN.\n> \n> This fails to build the docs:\n> \n> https://cirrus-ci.com/task/5609301511766016\n> \n> [17:47:01.064] ref/explain.sgml:179: parser error : Opening and ending tag mismatch: likeral line 179 and literal\n> [17:47:01.064]       <likeral>ANALYZE</literal>, since a statement with unknown parameters\n> [17:47:01.064]                                 ^\n\n*blush* Here is a fixed version.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 07 Dec 2022 12:23:02 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Wed, Dec 7, 2022 at 3:23 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Tue, 2022-12-06 at 10:17 -0800, Andres Freund wrote:\n> > On 2022-10-29 10:35:26 +0200, Laurenz Albe wrote:\n> > > > > Here is a patch that\n> > > > > implements it with an EXPLAIN option named GENERIC_PLAN.\n> >\n> > This fails to build the docs:\n> >\n> > https://cirrus-ci.com/task/5609301511766016\n> >\n> > [17:47:01.064] ref/explain.sgml:179: parser error : Opening and ending\n> tag mismatch: likeral line 179 and literal\n> > [17:47:01.064] 
<likeral>ANALYZE</literal>, since a statement with\n> unknown parameters\n> > [17:47:01.064] ^\n>\n> *blush* Here is a fixed version.\n>\n\nI built and tested this patch for review and it works well, although I got\nthe following warning when building:\n\nanalyze.c: In function 'transformStmt':\nanalyze.c:2919:35: warning: 'generic_plan' may be used uninitialized in\nthis function [-Wmaybe-uninitialized]\n 2919 | pstate->p_generic_explain = generic_plan;\n | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\nanalyze.c:2909:25: note: 'generic_plan' was declared here\n 2909 | bool generic_plan;\n | ^~~~~~~~~~~~", "msg_date": "Tue, 27 Dec 2022 14:37:11 -0800", "msg_from": "Michel Pelletier <pelletier.michel@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a
parameterized query" }, { "msg_contents": "On Tue, 2022-12-27 at 14:37 -0800, Michel Pelletier wrote:\n> I built and tested this patch for review and it works well, although I got the following warning when building:\n> \n> analyze.c: In function 'transformStmt':\n> analyze.c:2919:35: warning: 'generic_plan' may be used uninitialized in this function [-Wmaybe-uninitialized]\n>  2919 |         pstate->p_generic_explain = generic_plan;\n>       |         ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\n> analyze.c:2909:25: note: 'generic_plan' was declared here\n>  2909 |         bool            generic_plan;\n>       |                         ^~~~~~~~~~~~\n\nThanks for checking. The variable should indeed be initialized, although\nmy compiler didn't complain.\n\nAttached is a fixed version.\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 09 Jan 2023 17:40:01 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Hi Laurenz,\n\nI'm testing your patch and the GENERIC_PLAN parameter seems to work just \nOK ..\n\ndb=# CREATE TABLE t (col numeric);\nCREATE TABLE\ndb=# CREATE INDEX t_col_idx ON t (col);\nCREATE INDEX\ndb=# INSERT INTO t SELECT random() FROM generate_series(1,100000) ;\nINSERT 0 100000\ndb=# EXPLAIN (GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n                                 QUERY PLAN\n---------------------------------------------------------------------------\n  Bitmap Heap Scan on t  (cost=15.27..531.67 rows=368 width=32)\n    Recheck Cond: (col = $1)\n    ->  Bitmap Index Scan on t_col_idx  (cost=0.00..15.18 rows=368 width=0)\n          Index Cond: (col = $1)\n(4 rows)\n\n\n.. the error message when combining GENERIC_PLAN with ANALYSE also works \nas expected\n\ndb=# EXPLAIN (ANALYSE, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\nERROR:  EXPLAIN ANALYZE cannot be used with GENERIC_PLAN\n\n.. 
and the system also does not throw an error when it's used along \nother parameters, e.g. VERBOSE, WAL, SUMMARY, etc.\n\nHowever, when GENERIC_PLAN is used combined with BUFFERS, the 'Buffers' \nnode is shown the first time the query executed in a session:\n\npsql (16devel)\nType \"help\" for help.\n\npostgres=# \\c db\nYou are now connected to database \"db\" as user \"postgres\".\ndb=# EXPLAIN (BUFFERS, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n                                QUERY PLAN\n-------------------------------------------------------------------------\n  Index Only Scan using t_col_idx on t  (cost=0.42..4.44 rows=1 width=11)\n    Index Cond: (col = $1)\n  Planning:\n    Buffers: shared hit=62\n(4 rows)\n\ndb=# EXPLAIN (BUFFERS, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n                                QUERY PLAN\n-------------------------------------------------------------------------\n  Index Only Scan using t_col_idx on t  (cost=0.42..4.44 rows=1 width=11)\n    Index Cond: (col = $1)\n(2 rows)\n\ndb=# EXPLAIN (BUFFERS, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n                                QUERY PLAN\n-------------------------------------------------------------------------\n  Index Only Scan using t_col_idx on t  (cost=0.42..4.44 rows=1 width=11)\n    Index Cond: (col = $1)\n(2 rows)\n\nIs it the expected behaviour?\n\nAlso, this new parameter seems only to work between parenthesis \n`(GENERIC_PLAN)`:\n\ndb=# EXPLAIN GENERIC_PLAN SELECT * FROM t WHERE col = $1;\nERROR:  syntax error at or near \"GENERIC_PLAN\"\nLINE 1: EXPLAIN GENERIC_PLAN SELECT * FROM t WHERE col = $1;\n\n\nIf it's intended to be consistent with the other \"single parameters\", \nperhaps it should work also without parenthesis? 
e.g.\n\ndb=# EXPLAIN ANALYSE SELECT * FROM t WHERE col < 0.42;\n                                                           QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n  Index Only Scan using t_col_idx on t  (cost=0.42..1637.25 rows=41876 \nwidth=11) (actual time=0.103..6.293 rows=41932 loops=1)\n    Index Cond: (col < 0.42)\n    Heap Fetches: 0\n  Planning Time: 0.071 ms\n  Execution Time: 7.316 ms\n(5 rows)\n\n\ndb=# EXPLAIN VERBOSE SELECT * FROM t WHERE col < 0.42;\n                                       QUERY PLAN\n---------------------------------------------------------------------------------------\n  Index Only Scan using t_col_idx on public.t (cost=0.42..1637.25 \nrows=41876 width=11)\n    Output: col\n    Index Cond: (t.col < 0.42)\n(3 rows)\n\n\nOn a very personal note: wouldn't just GENERIC (without _PLAN) suffice? \nDon't bother with it if you disagree :-)\n\nCheers\nJim\n\nOn 09.01.23 17:40, Laurenz Albe wrote:\n> On Tue, 2022-12-27 at 14:37 -0800, Michel Pelletier wrote:\n>> I built and tested this patch for review and it works well, although I got the following warning when building:\n>>\n>> analyze.c: In function 'transformStmt':\n>> analyze.c:2919:35: warning: 'generic_plan' may be used uninitialized in this function [-Wmaybe-uninitialized]\n>>  2919 |         pstate->p_generic_explain = generic_plan;\n>>       |         ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~\n>> analyze.c:2909:25: note: 'generic_plan' was declared here\n>>  2909 |         bool            generic_plan;\n>>       |                         ^~~~~~~~~~~~\n> Thanks for checking. 
The variable should indeed be initialized, although\n> my compiler didn't complain.\n>\n> Attached is a fixed version.\n>\n> Yours,\n> Laurenz Albe\n\n\n", "msg_date": "Mon, 16 Jan 2023 14:39:08 +0100", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> However, when GENERIC_PLAN is used combined with BUFFERS, the 'Buffers' \n> node is shown the first time the query executed in a session:\n\n> psql (16devel)\n> Type \"help\" for help.\n\n> postgres=# \\c db\n> You are now connected to database \"db\" as user \"postgres\".\n> db=# EXPLAIN (BUFFERS, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n>                                QUERY PLAN\n> -------------------------------------------------------------------------\n>  Index Only Scan using t_col_idx on t  (cost=0.42..4.44 rows=1 width=11)\n>    Index Cond: (col = $1)\n>  Planning:\n>    Buffers: shared hit=62\n> (4 rows)\n\n> db=# EXPLAIN (BUFFERS, GENERIC_PLAN) SELECT * FROM t WHERE col = $1;\n>                                QUERY PLAN\n> -------------------------------------------------------------------------\n>  Index Only Scan using t_col_idx on t  (cost=0.42..4.44 rows=1 width=11)\n>    Index Cond: (col = $1)\n> (2 rows)\n\nThat's not new to this patch, the same thing happens without it.\nIt's reflecting catalog accesses involved in loading per-session\ncaches (which, therefore, needn't be repeated the second time).\n\n> Also, this new parameter seems only to work between parenthesis \n> `(GENERIC_PLAN)`:\n\n> db=# EXPLAIN GENERIC_PLAN SELECT * FROM t WHERE col = $1;\n> ERROR:  syntax error at or near \"GENERIC_PLAN\"\n> LINE 1: EXPLAIN GENERIC_PLAN SELECT * FROM t WHERE col = $1;\n\nThat's true of all but the oldest EXPLAIN options. 
We don't do that\nanymore because new options would have to become grammar keywords\nto support that.\n\n> On a very personal note: wouldn't just GENERIC (without _PLAN) suffice? \n> Don't bother with it if you disagree :-)\n\nFWIW, \"GENERIC\" would be too generic for my taste ;-). But I agree\nit's a judgement call.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Jan 2023 12:02:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> [ 0001-Add-EXPLAIN-option-GENERIC_PLAN.v4.patch ]\n\nI took a closer look at this patch, and didn't like the implementation\nmuch. You're not matching the behavior of PREPARE at all: for example,\nthis patch is content to let $1 be resolved with different types in\ndifferent places. We should be using the existing infrastructure that\nparse_analyze_varparams uses.\n\nAlso, I believe that in contexts such as plpgsql, it is possible that\nthere's an external source of $N definitions, which we should probably\ncontinue to honor even with GENERIC_PLAN.\n\nSo that leads me to think the code should be more like this. I'm not\nsure if it's worth spending documentation and testing effort on the\ncase where we don't override an existing p_paramref_hook.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 31 Jan 2023 13:49:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Tue, 2023-01-31 at 13:49 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > [ 0001-Add-EXPLAIN-option-GENERIC_PLAN.v4.patch ]\n> \n> I took a closer look at this patch, and didn't like the implementation\n> much.  
You're not matching the behavior of PREPARE at all: for example,\n> this patch is content to let $1 be resolved with different types in\n> different places.  We should be using the existing infrastructure that\n> parse_analyze_varparams uses.\n> \n> Also, I believe that in contexts such as plpgsql, it is possible that\n> there's an external source of $N definitions, which we should probably\n> continue to honor even with GENERIC_PLAN.\n> \n> So that leads me to think the code should be more like this.  I'm not\n> sure if it's worth spending documentation and testing effort on the\n> case where we don't override an existing p_paramref_hook.\n\nThanks, that looks way cleaner.\n\nI played around with it, and I ran into a problem with partitions that\nare foreign tables:\n\n CREATE TABLE loc1 (id integer NOT NULL, key integer NOT NULL CHECK (key = 1), value text);\n\n CREATE TABLE loc2 (id integer NOT NULL, key integer NOT NULL CHECK (key = 2), value text);\n\n CREATE TABLE looppart (id integer GENERATED ALWAYS AS IDENTITY, key integer NOT NULL, value text) PARTITION BY LIST (key);\n\n CREATE FOREIGN TABLE looppart1 PARTITION OF looppart FOR VALUES IN (1) SERVER loopback OPTIONS (table_name 'loc1');\n\n CREATE FOREIGN TABLE looppart2 PARTITION OF looppart FOR VALUES IN (2) SERVER loopback OPTIONS (table_name 'loc2');\n\n EXPLAIN (GENERIC_PLAN) SELECT * FROM looppart WHERE key = $1;\n ERROR: no value found for parameter 1\n\nThe solution could be to set up a dynamic parameter hook in the\nExprContext in ecxt_param_list_info->paramFetch so that\nExecEvalParamExtern doesn't complain about the missing parameter.\n\nDoes that make sense? 
How do I best hook into the executor to set that up?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 03 Feb 2023 14:30:59 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> I played around with it, and I ran into a problem with partitions that\n> are foreign tables:\n> ...\n> EXPLAIN (GENERIC_PLAN) SELECT * FROM looppart WHERE key = $1;\n> ERROR: no value found for parameter 1\n\nHmm, offhand I'd say that something is doing something it has no\nbusiness doing when EXEC_FLAG_EXPLAIN_ONLY is set (that is, premature\nevaluation of an expression). I wonder whether this failure is\nreachable without this patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 09:44:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Fri, 2023-02-03 at 09:44 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > I played around with it, and I ran into a problem with partitions that\n> > are foreign tables:\n> > ...\n> >   EXPLAIN (GENERIC_PLAN) SELECT * FROM looppart WHERE key = $1;\n> >   ERROR:  no value found for parameter 1\n> \n> Hmm, offhand I'd say that something is doing something it has no\n> business doing when EXEC_FLAG_EXPLAIN_ONLY is set (that is, premature\n> evaluation of an expression).  I wonder whether this failure is\n> reachable without this patch.\n\nThanks for the pointer. 
Perhaps something like the attached?\nThe changes in \"CreatePartitionPruneState\" make my test case work,\nbut they cause a difference in the regression tests, which would be\nintended if we have no partition pruning with plain EXPLAIN.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 03 Feb 2023 17:14:29 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Thanks for the pointer. Perhaps something like the attached?\n> The changes in \"CreatePartitionPruneState\" make my test case work,\n> but they cause a difference in the regression tests, which would be\n> intended if we have no partition pruning with plain EXPLAIN.\n\nHmm, so I see (and the cfbot entry for this patch is now all red,\nbecause you didn't include that diff in the patch).\n\nI'm not sure if we can get away with that behavioral change.\nPeople are probably expecting the current behavior for existing\ncases.\n\nI can think of a couple of possible ways forward:\n\n* Fix things so that the generic parameters appear to have NULL\nvalues when inspected during executor startup. I'm not sure\nhow useful that'd be though. In partition-pruning cases that'd\nlead to EXPLAIN (GENERIC_PLAN) showing the plan with all\npartitions pruned, other than the one for NULL values if there\nis one. Doesn't sound too helpful.\n\n* Invent another executor flag that's a \"stronger\" version of\nEXEC_FLAG_EXPLAIN_ONLY, and pass that when any generic parameters\nexist, and check it in CreatePartitionPruneState to decide whether\nto do startup pruning. 
This avoids changing behavior for existing\ncases, but I'm not sure how much it has to recommend it otherwise.\nSkipping the startup prune step seems like it'd produce misleading\nresults in another way, ie showing too many partitions not too few.\n\nThis whole business of partition pruning dependent on the\ngeneric parameters is making me uncomfortable. It seems like\nwe *can't* show a plan that is either representative of real\nexecution or comparable to what you get from more-traditional\nEXPLAIN usage. Maybe we need to step back and think more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 15:11:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Fri, 2023-02-03 at 15:11 -0500, Tom Lane wrote:\n> I can think of a couple of possible ways forward:\n> \n> * Fix things so that the generic parameters appear to have NULL\n> values when inspected during executor startup.  I'm not sure\n> how useful that'd be though.  In partition-pruning cases that'd\n> lead to EXPLAIN (GENERIC_PLAN) showing the plan with all\n> partitions pruned, other than the one for NULL values if there\n> is one.  Doesn't sound too helpful.\n> \n> * Invent another executor flag that's a \"stronger\" version of\n> EXEC_FLAG_EXPLAIN_ONLY, and pass that when any generic parameters\n> exist, and check it in CreatePartitionPruneState to decide whether\n> to do startup pruning.  This avoids changing behavior for existing\n> cases, but I'm not sure how much it has to recommend it otherwise.\n> Skipping the startup prune step seems like it'd produce misleading\n> results in another way, ie showing too many partitions not too few.\n> \n> This whole business of partition pruning dependent on the\n> generic parameters is making me uncomfortable.  
It seems like\n> we *can't* show a plan that is either representative of real\n> execution or comparable to what you get from more-traditional\n> EXPLAIN usage.  Maybe we need to step back and think more.\n\nI slept over it, and the second idea now looks like the right\napproach to me.  My idea of seeing a generic plan is that plan-time\npartition pruning happens, but not execution-time pruning, so that\nI get no \"subplans removed\".\nAre there any weird side effects of skipping the startup prune step?\n\nAnyway, attached is v7 that tries to do it that way.  It feels fairly\ngood to me.  I invented a new executor flag EXEC_FLAG_EXPLAIN_GENERIC.\nTo avoid having to change all the places that check EXEC_FLAG_EXPLAIN_ONLY\nto also check for the new flag, I decided that the new flag can only be\nused as \"add-on\" to EXEC_FLAG_EXPLAIN_ONLY.\n\nYours,\nLaurenz Albe", "msg_date": "Sun, 05 Feb 2023 18:24:03 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Hi,\n\nOn 2023-02-05 18:24:03 +0100, Laurenz Albe wrote:\n> Anyway, attached is v7 that tries to do it that way.\n\nThis consistently fails on CI:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3962\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5156723929907200/testrun/build/testrun/regress/regress/regression.diffs\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Feb 2023 16:33:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Mon, 2023-02-13 at 16:33 -0800, Andres Freund wrote:\n> On 2023-02-05 18:24:03 +0100, Laurenz Albe wrote:\n> > Anyway, attached is v7 that tries to do it that way.\n> \n> This consistently fails on CI:\n> 
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F3962\n> \n> https://api.cirrus-ci.com/v1/artifact/task/5156723929907200/testrun/build/testrun/regress/regress/regression.diffs\n\nThanks for pointing out.\n\nHere is an improved version.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 14 Feb 2023 13:44:44 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Hi,\n\nI reviewed the patch and find the idea of allowing $placeholders with\nEXPLAIN very useful, it will solve relevant real-world use-cases.\n(Queries from pg_stat_statements, found in the log, or in application\ncode.)\n\nI have some comments:\n\n> This allows EXPLAIN to generate generic plans for parameterized statements\n> (that have parameter placeholders like $1 in the statement text).\n\n> + <varlistentry>\n> + <term><literal>GENERIC_PLAN</literal></term>\n> + <listitem>\n> + <para>\n> + Generate a generic plan for the statement (see <xref linkend=\"sql-prepare\"/>\n> + for details about generic plans). The statement can contain parameter\n> + placeholders like <literal>$1</literal> (but then it has to be a statement\n> + that supports parameters). 
This option cannot be used together with\n> + <literal>ANALYZE</literal>, since a statement with unknown parameters\n> + cannot be executed.\n\nLike in the commit message quoted above, I would put more emphasis on\n\"parameterized query\" here:\n\n Allow the statement to contain parameter placeholders like\n <literal>$1</literal> and generate a generic plan for it.\n This option cannot be used together with <literal>ANALYZE</literal>.\n\n> +\t/* check that GENERIC_PLAN is not used with EXPLAIN ANALYZE */\n> +\tif (es->generic && es->analyze)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\t\t\t\t errmsg(\"EXPLAIN ANALYZE cannot be used with GENERIC_PLAN\")));\n\nTo put that in line with the other error messages in that context, I'd\ninject an extra \"option\":\n\n errmsg(\"EXPLAIN option ANALYZE cannot be used with GENERIC_PLAN\")));\n\n> --- a/src/test/regress/sql/explain.sql\n> +++ b/src/test/regress/sql/explain.sql\n> @@ -128,3 +128,33 @@ select explain_filter('explain (verbose) select * from t1 where pg_temp.mysin(f1\n> -- Test compute_query_id\n> set compute_query_id = on;\n> select explain_filter('explain (verbose) select * from int8_tbl i8');\n> +\n> +-- Test EXPLAIN (GENERIC_PLAN)\n> +select explain_filter('explain (generic_plan) select unique1 from tenk1 where thousand = $1');\n> +-- should fail\n> +select explain_filter('explain (analyze, generic_plan) select unique1 from tenk1 where thousand = $1');\n> +-- Test EXPLAIN (GENERIC_PLAN) with partition pruning\n> +-- should prune at plan time, but not at execution time\n> +create extension if not exists postgres_fdw;\n\n\"create extension postgres_fdw\" cannot be used from src/test/regress/\nsince contrib/ might not have been built.\n\n> +create server loop42 foreign data wrapper postgres_fdw;\n> +create user mapping for current_role server loop42 options (password_required 'false');\n> +create table gen_part (\n> + key1 integer not null,\n> + key2 integer not null\n> +) 
partition by list (key1);\n> +create table gen_part_1\n> + partition of gen_part for values in (1)\n> + partition by range (key2);\n> +create foreign table gen_part_1_1\n> + partition of gen_part_1 for values from (1) to (2)\n> + server loop42 options (table_name 'whatever_1_1');\n> +create foreign table gen_part_1_2\n> + partition of gen_part_1 for values from (2) to (3)\n> + server loop42 options (table_name 'whatever_1_2');\n> +create foreign table gen_part_2\n> + partition of gen_part for values in (2)\n> + server loop42 options (table_name 'whatever_2');\n> +select explain_filter('explain (generic_plan) select key1, key2 from gen_part where key1 = 1 and key2 = $1');\n\nI suggest leaving this test in place here, but with local tables (to\nshow that plan time pruning using the one provided parameter works),\nand add a comment here explaining that is being tested:\n\n-- create a partition hierarchy to show that plan time pruning removes\n-- the key1=2 table but generates a generic plan for key2=$1\n\nThe test involving postgres_fdw is still necessary to exercise the new\nEXEC_FLAG_EXPLAIN_GENERIC code path, but needs to be moved elsewhere,\nprobably src/test/modules/.\n\nIn the new location, likewise add a comment why this test needs to\nlook this way.\n\nChristoph\n\n\n", "msg_date": "Tue, 21 Mar 2023 16:32:48 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Thanks for the review!\n\nOn Tue, 2023-03-21 at 16:32 +0100, Christoph Berg wrote:\n> I have some comments:\n> \n> > This allows EXPLAIN to generate generic plans for parameterized statements\n> > (that have parameter placeholders like $1 in the statement text).\n> \n> > +   <varlistentry>\n> > +    <term><literal>GENERIC_PLAN</literal></term>\n> > +    <listitem>\n> > +     <para>\n> > +      Generate a generic plan for the statement (see <xref linkend=\"sql-prepare\"/>\n> > 
+      for details about generic plans).  The statement can contain parameter\n> > +      placeholders like <literal>$1</literal> (but then it has to be a statement\n> > +      that supports parameters).  This option cannot be used together with\n> > +      <literal>ANALYZE</literal>, since a statement with unknown parameters\n> > +      cannot be executed.\n> \n> Like in the commit message quoted above, I would put more emphasis on\n> \"parameterized query\" here:\n> \n>   Allow the statement to contain parameter placeholders like\n>   <literal>$1</literal> and generate a generic plan for it.\n>   This option cannot be used together with <literal>ANALYZE</literal>.\n\nI went with\n\n Allow the statement to contain parameter placeholders like\n <literal>$1</literal> and generate a generic plan for it.\n See <xref linkend=\"sql-prepare\"/> for details about generic plans\n and the statements that support parameters.\n This option cannot be used together with <literal>ANALYZE</literal>.\n\n> > +       /* check that GENERIC_PLAN is not used with EXPLAIN ANALYZE */\n> > +       if (es->generic && es->analyze)\n> > +               ereport(ERROR,\n> > +                               (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +                                errmsg(\"EXPLAIN ANALYZE cannot be used with GENERIC_PLAN\")));\n> \n> To put that in line with the other error messages in that context, I'd\n> inject an extra \"option\":\n> \n>   errmsg(\"EXPLAIN option ANALYZE cannot be used with GENERIC_PLAN\")));\n\nDone.\n\n> > --- a/src/test/regress/sql/explain.sql\n> > +++ b/src/test/regress/sql/explain.sql\n> > [...]\n> > +create extension if not exists postgres_fdw;\n> \n> \"create extension postgres_fdw\" cannot be used from src/test/regress/\n> since contrib/ might not have been built.\n\nOuch. 
Good catch.\n\n> I suggest leaving this test in place here, but with local tables (to\n> show that plan time pruning using the one provided parameter works),\n> and add a comment here explaining that is being tested:\n> \n> -- create a partition hierarchy to show that plan time pruning removes\n> -- the key1=2 table but generates a generic plan for key2=$1\n\nI did that, with a different comment.\n\n> The test involving postgres_fdw is still necessary to exercise the new\n> EXEC_FLAG_EXPLAIN_GENERIC code path, but needs to be moved elsewhere,\n> probably src/test/modules/.\n\nTests for postgres_fdw are in contrib/postgres_fdw/sql/postgres_fdw.sql,\nso I added the test there.\n\nVersion 9 of the patch is attached.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 22 Mar 2023 14:15:23 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "And here is v10, which includes tab completion for the new option.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 23 Mar 2023 19:31:26 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Re: Laurenz Albe\n> And here is v10, which includes tab completion for the new option.\n\nIMHO everything looks good now. 
Marking as ready for committer.\n\nThanks!\n\nChristoph\n\n\n", "msg_date": "Fri, 24 Mar 2023 11:58:09 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2023-03-21 at 16:32 +0100, Christoph Berg wrote:\n>> The test involving postgres_fdw is still necessary to exercise the new\n>> EXEC_FLAG_EXPLAIN_GENERIC code path, but needs to be moved elsewhere,\n>> probably src/test/modules/.\n\n> Tests for postgres_fdw are in contrib/postgres_fdw/sql/postgres_fdw.sql,\n> so I added the test there.\n\nI don't actually see why a postgres_fdw test case is needed at all?\nThe tests in explain.sql seem sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Mar 2023 13:20:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Re: Tom Lane\n> I don't actually see why a postgres_fdw test case is needed at all?\n> The tests in explain.sql seem sufficient.\n\nWhen I asked Laurenz about that he told me that local tables wouldn't\nexercise the code specific for EXEC_FLAG_EXPLAIN_GENERIC.\n\n(Admittedly my knowledge of the planner wasn't deep enough to verify\nthat. 
Laurenz is currently traveling, so I don't know if he could\nanswer this himself now.)\n\nChristoph\n\n\n", "msg_date": "Fri, 24 Mar 2023 21:26:32 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane\n>> I don't actually see why a postgres_fdw test case is needed at all?\n>> The tests in explain.sql seem sufficient.\n\n> When I asked Laurenz about that he told me that local tables wouldn't\n> exercise the code specific for EXEC_FLAG_EXPLAIN_GENERIC.\n\nBut there isn't any ... or at least, I fail to see what isn't sufficiently\nexercised by that new explain.sql test case that's identical to this one\nexcept for being a non-foreign table. Perhaps at some point this patch\nmodified postgres_fdw code? But it doesn't now.\n\nI don't mind having a postgres_fdw test if there's something for it\nto test, but it just looks duplicative to me. Other things being\nequal, I'd prefer to test this feature in explain.sql, since (a) it's\na core feature and (b) the core tests are better parallelized than the\ncontrib tests, so the same test should be cheaper to run.\n\n> (Admittedly my knowledge of the planner wasn't deep enough to verify\n> that. Laurenz is currently traveling, so I don't know if he could\n> answer this himself now.)\n\nOK, thanks for the status update. 
What I'll do to get this off my\nplate is to push the patch without the postgres_fdw test -- if\nLaurenz wants to advocate for that when he returns, we can discuss it\nmore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Mar 2023 16:41:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" }, { "msg_contents": "On Fri, 2023-03-24 at 16:41 -0400, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > Re: Tom Lane\n> > > I don't actually see why a postgres_fdw test case is needed at all?\n> > > The tests in explain.sql seem sufficient.\n> \n> > When I asked Laurenz about that he told me that local tables wouldn't\n> > exercise the code specific for EXEC_FLAG_EXPLAIN_GENERIC.\n> \n> But there isn't any ... or at least, I fail to see what isn't sufficiently\n> exercised by that new explain.sql test case that's identical to this one\n> except for being a non-foreign table.  Perhaps at some point this patch\n> modified postgres_fdw code?  But it doesn't now.\n> \n> I don't mind having a postgres_fdw test if there's something for it\n> to test, but it just looks duplicative to me.  Other things being\n> equal, I'd prefer to test this feature in explain.sql, since (a) it's\n> a core feature and (b) the core tests are better parallelized than the\n> contrib tests, so the same test should be cheaper to run.\n> \n> > (Admittedly my knowledge of the planner wasn't deep enough to verify\n> > that. Laurenz is currently traveling, so I don't know if he could\n> > answer this himself now.)\n> \n> OK, thanks for the status update.  
What I'll do to get this off my\n> plate is to push the patch without the postgres_fdw test -- if\n> Laurenz wants to advocate for that when he returns, we can discuss it\n> more.\n\nThanks for committing this.\n\nAs Christoph mentioned, I found that I could not reproduce the problem\nthat was addressed by the EXEC_FLAG_EXPLAIN_GENERIC hack using local\npartitioned tables. My optimizer knowledge is not deep enough to tell\nwhy, and it might well be a bug lurking in the FDW part of the\noptimizer code. It is not FDW specific, since I discovered it with\noracle_fdw and could reproduce it with postgres_fdw.\n\nI was aware that it is awkward to add a test to a contrib module, but\nI thought that I should add a test that exercises the new code path.\nBut I am fine without the postgres_fdw test.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 28 Mar 2023 08:35:57 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Make EXPLAIN generate a generic plan for a parameterized query" } ]
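The patch discussed in the thread above shipped in PostgreSQL 16 as the GENERIC_PLAN option to EXPLAIN. A minimal sketch of what the committed feature enables — the table and parameter below are hypothetical, not taken from the thread:

```sql
-- A plain EXPLAIN cannot cope with $n placeholders, since it has no
-- parameter values to plan with; GENERIC_PLAN requests the generic
-- (parameter-independent) plan instead, without executing the query.
CREATE TABLE tab (col integer, val text);

EXPLAIN (GENERIC_PLAN) SELECT val FROM tab WHERE col = $1;
```

Because the statement is never executed, GENERIC_PLAN cannot be combined with ANALYZE.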
[ { "msg_contents": "Various test suites use the \"openssl\" program as part of their setup. \nThere isn't a way to override which openssl program is to be used, other \nthan by fiddling with the path, perhaps. This has gotten increasingly \nproblematic with some of the work I have been doing, because different \nversions of openssl have different capabilities and do different things \nby default. This patch checks for an openssl binary in configure and \nmeson setup, with appropriate ways to override it. This is similar to \nhow \"lz4\" and \"zstd\" are handled, for example. The meson build system \nactually already did this, but the result was only used in some places. \nThis is now applied more uniformly.", "msg_date": "Tue, 11 Oct 2022 17:06:22 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Make finding openssl program a configure or meson option" }, { "msg_contents": "On Tue, Oct 11, 2022 at 05:06:22PM +0200, Peter Eisentraut wrote:\n> Various test suites use the \"openssl\" program as part of their setup. There\n> isn't a way to override which openssl program is to be used, other than by\n> fiddling with the path, perhaps. This has gotten increasingly problematic\n> with some of the work I have been doing, because different versions of\n> openssl have different capabilities and do different things by default.\n> This patch checks for an openssl binary in configure and meson setup, with\n> appropriate ways to override it. This is similar to how \"lz4\" and \"zstd\"\n> are handled, for example. The meson build system actually already did this,\n> but the result was only used in some places. This is now applied more\n> uniformly.\n\nopenssl-env allows the use of the environment variable of the same\nname. 
This reminds me a bit of the recent interferences with GZIP,\nfor example.\n\nThis patch is missing one addition of set_single_env() in\nvcregress.pl, and one update of install-windows.sgml where all the\nsupported environment variables for commands are listed.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 10:08:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make finding openssl program a configure or meson option" }, { "msg_contents": "On 12.10.22 03:08, Michael Paquier wrote:\n> On Tue, Oct 11, 2022 at 05:06:22PM +0200, Peter Eisentraut wrote:\n>> Various test suites use the \"openssl\" program as part of their setup. There\n>> isn't a way to override which openssl program is to be used, other than by\n>> fiddling with the path, perhaps. This has gotten increasingly problematic\n>> with some of the work I have been doing, because different versions of\n>> openssl have different capabilities and do different things by default.\n>> This patch checks for an openssl binary in configure and meson setup, with\n>> appropriate ways to override it. This is similar to how \"lz4\" and \"zstd\"\n>> are handled, for example. The meson build system actually already did this,\n>> but the result was only used in some places. This is now applied more\n>> uniformly.\n> \n> openssl-env allows the use of the environment variable of the same\n> name. This reminds me a bit of the recent interferences with GZIP,\n> for example.\n\nSorry, what is \"openssl-env\"? 
I can't find that anywhere.\n\n> This patch is missing one addition of set_single_env() in\n> vcregress.pl, and one update of install-windows.sgml where all the\n> supported environment variables for commands are listed.\n\nOk, I'll add that.\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:29:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Make finding openssl program a configure or meson option" }, { "msg_contents": "On 12.10.22 03:08, Michael Paquier wrote:\n> On Tue, Oct 11, 2022 at 05:06:22PM +0200, Peter Eisentraut wrote:\n>> Various test suites use the \"openssl\" program as part of their setup. There\n>> isn't a way to override which openssl program is to be used, other than by\n>> fiddling with the path, perhaps. This has gotten increasingly problematic\n>> with some of the work I have been doing, because different versions of\n>> openssl have different capabilities and do different things by default.\n>> This patch checks for an openssl binary in configure and meson setup, with\n>> appropriate ways to override it. This is similar to how \"lz4\" and \"zstd\"\n>> are handled, for example. The meson build system actually already did this,\n>> but the result was only used in some places. This is now applied more\n>> uniformly.\n> \n> openssl-env allows the use of the environment variable of the same\n> name. This reminds me a bit of the recent interferences with GZIP,\n> for example.\n\nOkay, I see what you meant here now. openssl-env is the man page \ndescribing environment variables used by OpenSSL. I don't see any \nconflicts with what is being proposed here.\n\n> This patch is missing one addition of set_single_env() in\n> vcregress.pl, and one update of install-windows.sgml where all the\n> supported environment variables for commands are listed.\n\nAdded. 
New patch attached.", "msg_date": "Tue, 18 Oct 2022 18:46:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Make finding openssl program a configure or meson option" }, { "msg_contents": "On Tue, Oct 18, 2022 at 06:46:53PM +0200, Peter Eisentraut wrote:\n> On 12.10.22 03:08, Michael Paquier wrote:\n>> openssl-env allows the use of the environment variable of the same\n>> name. This reminds me a bit of the recent interferences with GZIP,\n>> for example.\n> \n> Okay, I see what you meant here now. openssl-env is the man page describing\n> environment variables used by OpenSSL.\n\nYeah, sorry. That's what I was referring to.\n\n> I don't see any conflicts with what is being proposed here.\n\nIts meaning is the same in the context of the OpenSSL code. LibreSSL\nhas nothing of the kind.\n\n> Added. New patch attached.\n\nLooks fine as a whole, except for one nit.\n\nsrc/test/ssl/t/001_ssltests.pl: warn 'couldn\\'t run `openssl x509` to get client cert serialno';\nPerhaps this warning should mention $ENV{OPENSSL} instead?\n--\nMichael", "msg_date": "Wed, 19 Oct 2022 12:06:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Make finding openssl program a configure or meson option" }, { "msg_contents": "On 19.10.22 05:06, Michael Paquier wrote:\n> Looks fine as a whole, except for one nit.\n> \n> src/test/ssl/t/001_ssltests.pl: warn 'couldn\\'t run `openssl x509` to get client cert serialno';\n> Perhaps this warning should mention $ENV{OPENSSL} instead?\n\nCommitted with that change.\n\n\n\n", "msg_date": "Thu, 20 Oct 2022 21:20:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Make finding openssl program a configure or meson option" } ]
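The committed change follows the precedent this thread cites for the lz4 and zstd programs: the binary can be pinned at build time instead of relying on a PATH search, and the Windows test driver reads the same name from the environment. A usage sketch — the openssl path below is hypothetical, and the exact option spellings are an assumption based on the LZ4/ZSTD handling the thread mentions:

```shell
# Build-time override of which openssl binary the test suites use
# (hypothetical path, spelling assumed from the LZ4/ZSTD precedent):
#   ./configure --with-ssl=openssl OPENSSL=/opt/openssl3/bin/openssl
# On Windows, vcregress.pl picks the override up from the environment
# instead.  A script honouring the same convention falls back to a
# plain PATH lookup when no override is given:
OPENSSL="${OPENSSL:-openssl}"
echo "openssl program: $OPENSSL"
```

Per-test-suite overrides then come for free: pointing OPENSSL at a different build is enough to exercise version-specific behaviour without editing PATH.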
[ { "msg_contents": "Over on [1], Erwin mentions that row_number() over (ORDER BY ... ROWS\nUNBOUNDED PRECEDING) is substantially faster than the default RANGE\nUNBOUNDED PRECEDING WindowClause options. The difference between\nthese options are that nodeWindowAgg.c checks for peer rows for RANGE\nbut not for ROWS. That would make a difference for many of our\nbuilt-in window and aggregate functions, but row_number() does not\nreally care.\n\nTo quantify the performance difference, take the following example:\n\ncreate table ab (a int, b int) ;\ninsert into ab\nselect x,y from generate_series(1,1000)x,generate_series(1,1000)y;\ncreate index on ab(a,b);\n\n-- range unbounded (the default)\nexplain (analyze, costs off, timing off)\nselect a,b from (select a,b,row_number() over (partition by a order by\na range unbounded preceding) rn from ab) ab where rn <= 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Subquery Scan on ab (actual rows=1000 loops=1)\n -> WindowAgg (actual rows=1000 loops=1)\n Run Condition: (row_number() OVER (?) <= 1)\n -> Index Only Scan using ab_a_b_idx on ab ab_1 (actual\nrows=1000000 loops=1)\n Heap Fetches: 1000000\n Planning Time: 0.091 ms\n Execution Time: 336.172 ms\n(7 rows)\n\nIf that were switched to use ROWS UNBOUNDED PRECEDING then the\nexecutor would not have to check for peer rows in the window frame.\n\nexplain (analyze, costs off, timing off)\nselect a,b from (select a,b,row_number() over (partition by a order by\na rows unbounded preceding) rn from ab) ab where rn <= 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------\n Subquery Scan on ab (actual rows=1000 loops=1)\n -> WindowAgg (actual rows=1000 loops=1)\n Run Condition: (row_number() OVER (?) 
<= 1)\n -> Index Only Scan using ab_a_b_idx on ab ab_1 (actual\nrows=1000000 loops=1)\n Heap Fetches: 0\n Planning Time: 0.178 ms\n Execution Time: 75.101 ms\n(7 rows)\n\nTime: 75.673 ms (447% faster)\n\nYou can see that this executes quite a bit more quickly. As Erwin\npointed out to me (off-list), this along with the monotonic window\nfunction optimisation that was added in PG15 the performance of this\ngets close to DISTINCT ON.\n\nexplain (analyze, costs off, timing off)\nselect distinct on (a) a,b from ab order by a,b;\n QUERY PLAN\n----------------------------------------------------------------------------\n Unique (actual rows=1000 loops=1)\n -> Index Only Scan using ab_a_b_idx on ab (actual rows=1000000 loops=1)\n Heap Fetches: 0\n Planning Time: 0.071 ms\n Execution Time: 77.988 ms\n(5 rows)\n\nI've not really done any analysis into which other window functions\ncan use this optimisation. The attached only adds support to\nrow_number()'s support function and only converts exactly \"RANGE\nUNBOUNDED PRECEDING AND CURRENT ROW\" into \"ROW UNBOUNDED PRECEDING AND\nCURRENT ROW\". That might need to be relaxed a little, but I've done\nno analysis to find that out. Erwin mentioned to me that he's not\ncurrently in a position to produce a patch for this, so here's the\npatch. I'm hoping the existence of this might coax Erwin into doing\nsome analysis into what other window functions we can support and what\nother frame options can be optimised.\n\n[1] https://postgr.es/m/CAGHENJ7LBBszxS+SkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA@mail.gmail.com", "msg_date": "Wed, 12 Oct 2022 15:40:42 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On 10/12/22 04:40, David Rowley wrote:\n> I've not really done any analysis into which other window functions\n> can use this optimisation. 
The attached only adds support to\n> row_number()'s support function and only converts exactly \"RANGE\n> UNBOUNDED PRECEDING AND CURRENT ROW\" into \"ROW UNBOUNDED PRECEDING AND\n> CURRENT ROW\". That might need to be relaxed a little, but I've done\n> no analysis to find that out. \n\nPer spec, the ROW_NUMBER() window function is not even allowed to have a \nframe specified.\n\n b) The window framing clause of WDX shall not be present.\n\nAlso, the specification for ROW_NUMBER() is:\n\n f) ROW_NUMBER() OVER WNS is equivalent to the <window function>:\n\n COUNT (*) OVER (WNS1 ROWS UNBOUNDED PRECEDING)\n\n\nSo I don't think we need to test for anything at all and can \nindiscriminately add or replace the frame with ROWS UNBOUNDED PRECEDING.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Wed, 12 Oct 2022 05:33:33 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Wed, 12 Oct 2022 at 05:33, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 10/12/22 04:40, David Rowley wrote:\n> > I've not really done any analysis into which other window functions\n> > can use this optimisation. The attached only adds support to\n> > row_number()'s support function and only converts exactly \"RANGE\n> > UNBOUNDED PRECEDING AND CURRENT ROW\" into \"ROW UNBOUNDED PRECEDING AND\n> > CURRENT ROW\". 
That might need to be relaxed a little, but I've done\n> > no analysis to find that out.\n>\n> Per spec, the ROW_NUMBER() window function is not even allowed to have a\n> frame specified.\n>\n> b) The window framing clause of WDX shall not be present.\n>\n> Also, the specification for ROW_NUMBER() is:\n>\n> f) ROW_NUMBER() OVER WNS is equivalent to the <window function>:\n>\n> COUNT (*) OVER (WNS1 ROWS UNBOUNDED PRECEDING)\n>\n>\n> So I don't think we need to test for anything at all and can\n> indiscriminately add or replace the frame with ROWS UNBOUNDED PRECEDING.\n>\n>\nTo back this up:\nSQL Server returns an error right away if you try to add a window frame\nhttps://dbfiddle.uk/SplT-F3E\n\n> Msg 10752 Level 15 State 3 Line 1\n> The function 'row_number' may not have a window frame.\n\nAnd Oracle reports a syntax error:\nhttps://dbfiddle.uk/l0Yk8Lw5\n\nrow_number() is defined without a \"windowing clause\" (in Oracle's\nnomenclature)\nhttps://docs.oracle.com/cd/B28359_01/server.111/b28286/functions001.htm#i81407\nhttps://docs.oracle.com/cd/B28359_01/server.111/b28286/functions144.htm#i86310\n\nAllowing the same in Postgres (and defaulting to RANGE mode) seems like (a)\ngenuine bug(s) after all.\n\nRegards\nErwin\n\n", "msg_date": "Wed, 12 Oct 2022 06:03:35 +0200", "msg_from": "Erwin Brandstetter <brsaweda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Wed, 12 Oct 2022 at 16:33, Vik Fearing <vik@postgresfriends.org> wrote:\n> Per spec, the ROW_NUMBER() window function is not even allowed to have a\n> frame specified.\n>\n> b) The window framing clause of WDX shall not be present.\n>\n> Also, the specification for ROW_NUMBER() is:\n>\n> f) ROW_NUMBER() OVER WNS is equivalent to the <window function>:\n>\n> COUNT (*) OVER (WNS1 ROWS UNBOUNDED PRECEDING)\n>\n>\n> So I don't think we need to test for anything at all and can\n> indiscriminately add or replace the frame with ROWS UNBOUNDED PRECEDING.\n\nThanks for digging that out.\n\nJust above that I see:\n\nRANK() OVER WNS is 
equivalent to:\n( COUNT (*) OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n- COUNT (*) OVER (WNS1 RANGE CURRENT ROW) + 1 )\n\nand\n\nDENSE_RANK() OVER WNS is equivalent to the <window function>:\nCOUNT (DISTINCT ROW ( VE1, ..., VEN ) )\nOVER (WNS1 RANGE UNBOUNDED PRECEDING)\n\nSo it looks like the same can be done for rank() and dense_rank() too.\nI've added support for those in the attached.\n\nThis also got me thinking that maybe we should be a bit more generic\nwith the support function node tag name. After looking at the\nnodeWindowAgg.c code for a while, I wondered if we might want to add\nsome optimisations in the future that makes WindowAgg not bother\nstoring tuples for row_number(), rank() and dense_rank(). That might\nsave a bit of overhead from the tuple store. I imagined that we'd\nwant to allow the expansion of this support request so that the\nsupport function could let the planner know if any tuples will be\naccessed by the window function or not. The\nSupportRequestWFuncOptimizeFrameOpts name didn't seem very fitting for\nthat so I adjusted it to become SupportRequestOptimizeWindowClause\ninstead.\n\nThe updated patch is attached.\n\nDavid", "msg_date": "Thu, 13 Oct 2022 13:34:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Wed, Oct 12, 2022 at 5:35 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 12 Oct 2022 at 16:33, Vik Fearing <vik@postgresfriends.org> wrote:\n> > Per spec, the ROW_NUMBER() window function is not even allowed to have a\n> > frame specified.\n> >\n> > b) The window framing clause of WDX shall not be present.\n> >\n> > Also, the specification for ROW_NUMBER() is:\n> >\n> > f) ROW_NUMBER() OVER WNS is equivalent to the <window function>:\n> >\n> > COUNT (*) OVER (WNS1 ROWS UNBOUNDED PRECEDING)\n> >\n> >\n> > So I don't think we need to test for anything at all and can\n> 
> indiscriminately add or replace the frame with ROWS UNBOUNDED PRECEDING.\n>\n> Thanks for digging that out.\n>\n> Just above that I see:\n>\n> RANK() OVER WNS is equivalent to:\n> ( COUNT (*) OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n> - COUNT (*) OVER (WNS1 RANGE CURRENT ROW) + 1 )\n>\n> and\n>\n> DENSE_RANK() OVER WNS is equivalent to the <window function>:\n> COUNT (DISTINCT ROW ( VE1, ..., VEN ) )\n> OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n>\n> So it looks like the same can be done for rank() and dense_rank() too.\n> I've added support for those in the attached.\n>\n> This also got me thinking that maybe we should be a bit more generic\n> with the support function node tag name. After looking at the\n> nodeWindowAgg.c code for a while, I wondered if we might want to add\n> some optimisations in the future that makes WindowAgg not bother\n> storing tuples for row_number(), rank() and dense_rank(). That might\n> save a bit of overhead from the tuple store. I imagined that we'd\n> want to allow the expansion of this support request so that the\n> support function could let the planner know if any tuples will be\n> accessed by the window function or not. 
The\n> SupportRequestWFuncOptimizeFrameOpts name didn't seem very fitting for\n> that so I adjusted it to become SupportRequestOptimizeWindowClause\n> instead.\n>\n> The updated patch is attached.\n>\n> David\n>\nHi,\n\n+ req->frameOptions = (FRAMEOPTION_ROWS |\n+ FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n+ FRAMEOPTION_END_CURRENT_ROW);\n\nThe bit combination appears multiple times in the patch.\nMaybe define the combination as a constant in supportnodes.h and reference\nit in the code.\n\nCheers\n\n", "msg_date": "Thu, 13 Oct 2022 14:51:44 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Thu, 13 Oct 2022 at 02:34, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 12 Oct 2022 at 16:33, Vik Fearing <vik@postgresfriends.org> wrote:\n> > Per spec, the ROW_NUMBER() window function is not even allowed to have a\n> > frame specified.\n> >\n> > b) The window framing clause of WDX shall not be present.\n> >\n> > Also, the specification for ROW_NUMBER() is:\n> >\n> > f) ROW_NUMBER() OVER WNS is equivalent to the <window function>:\n> >\n> > COUNT (*) OVER (WNS1 ROWS UNBOUNDED PRECEDING)\n> >\n> >\n> > So I don't think we need to test for anything at all and can\n> > indiscriminately add or replace the frame with ROWS UNBOUNDED PRECEDING.\n>\n> Thanks for digging that out.\n>\n> Just above that I see:\n>\n> RANK() OVER WNS is equivalent to:\n> ( COUNT (*) OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n> - COUNT (*) OVER (WNS1 RANGE CURRENT ROW) + 1 )\n>\n> and\n>\n> DENSE_RANK() OVER WNS is equivalent to the <window function>:\n> COUNT (DISTINCT ROW ( VE1, 
..., VEN ) )\n> OVER (WNS1 RANGE UNBOUNDED PRECEDING)\n>\n> So it looks like the same can be done for rank() and dense_rank() too.\n> I've added support for those in the attached.\n>\n> This also got me thinking that maybe we should be a bit more generic\n> with the support function node tag name. After looking at the\n> nodeWindowAgg.c code for a while, I wondered if we might want to add\n> some optimisations in the future that makes WindowAgg not bother\n> storing tuples for row_number(), rank() and dense_rank(). That might\n> save a bit of overhead from the tuple store. I imagined that we'd\n> want to allow the expansion of this support request so that the\n> support function could let the planner know if any tuples will be\n> accessed by the window function or not. The\n> SupportRequestWFuncOptimizeFrameOpts name didn't seem very fitting for\n> that so I adjusted it to become SupportRequestOptimizeWindowClause\n> instead.\n>\n> The updated patch is attached.\n> David\n>\n\nI am thinking of building a test case to run\n- all existing window functions\n- with all basic variants of frame definitions\n- once with ROWS, once with RANGE\n- on basic table that has duplicate and NULL values in partition and\nordering columns\n- in all supported major versions\n\nTo verify for which of our window functions ROWS vs. 
RANGE never makes a\ndifference.\nThat should be obvious in most cases, just to be sure.\n\nDo you think this would be helpful?\n\nRegards\nErwin\n\n", "msg_date": "Tue, 18 Oct 2022 00:10:12 +0200", "msg_from": "Erwin Brandstetter <brsaweda@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "Erwin Brandstetter <brsaweda@gmail.com> writes:\n> I am thinking of building a test case to run\n> - all existing window functions\n> - with all basic variants of frame definitions\n> - once with ROWS, once with RANGE\n> - on basic table that has duplicate and NULL values in partition and\n> ordering columns\n> - in all supported major versions\n\n> To verify for which of our window functions ROWS vs. RANGE never makes a\n> difference.\n> That should be obvious in most cases, just to be sure.\n\n> Do you think this would be helpful?\n\nDoubt it. Per the old saying \"testing can prove the presence of bugs,\nbut not their absence\", this could prove that some functions *do*\nrespond to these options, but it cannot prove that a function\n*doesn't*. 
Maybe you just didn't try the right test case.\n\nIf you want to try something like that as a heuristic to see which\ncases are worth looking at closer, sure, but it's only a heuristic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Oct 2022 19:18:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Tue, 18 Oct 2022 at 12:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Erwin Brandstetter <brsaweda@gmail.com> writes:\n> > I am thinking of building a test case to run\n> > - all existing window functions\n> > - with all basic variants of frame definitions\n> > - once with ROWS, once with RANGE\n> > - on basic table that has duplicate and NULL values in partition and\n> > ordering columns\n> > - in all supported major versions\n>\n> > To verify for which of our window functions ROWS vs. RANGE never makes a\n> > difference.\n> > That should be obvious in most cases, just to be sure.\n>\n> > Do you think this would be helpful?\n>\n> Doubt it. Per the old saying \"testing can prove the presence of bugs,\n> but not their absence\", this could prove that some functions *do*\n> respond to these options, but it cannot prove that a function\n> *doesn't*. Maybe you just didn't try the right test case.\n\nI suppose this is kind of like fuzz testing. Going by \"git log\n--grep=sqlsmith\", fuzzing certainly has found bugs for us in the past.\nI personally wouldn't discourage Erwin from doing this.\n\nFor me, my first port of call will be to study the code of each window\nfunction to see if the frame options can affect the result. I *do*\nneed to spend more time on this still. It would be good to have some\nextra assurance on having read the code with some more exhaustive\ntesting results. 
If Erwin was to find result variations that I missed\nthen we might avoid writing some new bugs.\n\nAlso, I just did spend a little more time reading a few window\nfunctions and I see percent_rank() is another candidate for this\noptimisation. I've never needed to use that function before, but from\nthe following experiment, it seems to just be (rank() over (order by\n...) - 1) / (count(*) over () - 1). Since rank() is already on the\nlist and count(*) over() contains all rows in the frame, then it seems\npercent_rank() can join the club too.\n\ncreate table t0 as select x*random() as a from generate_series(1,1000000)x;\nselect * from (select a,percent_rank() over (order by a) pr,(rank()\nover (order by a) - 1) / (count(*) over () - 1)::float8 pr2 from t0)\nc where pr <> pr2;\n\nDavid\n\n\n", "msg_date": "Tue, 18 Oct 2022 12:58:47 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Fri, 14 Oct 2022 at 10:52, Zhihong Yu <zyu@yugabyte.com> wrote:\n> + req->frameOptions = (FRAMEOPTION_ROWS |\n> + FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n> + FRAMEOPTION_END_CURRENT_ROW);\n>\n> The bit combination appears multiple times in the patch.\n> Maybe define the combination as a constant in supportnodes.h and reference it in the code.\n\nI don't believe supportnodes.h has any business having any code that's\nrelated to actual implementations of the support request type. If we\nwere to have such a definition then I think it would belong in\nwindowfuncs.c. 
I'd rather see each implementation of the support\nrequest spell out exactly what they mean, which is what the patch does\nalready.\n\nDavid\n\n\n", "msg_date": "Tue, 18 Oct 2022 13:05:23 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Mon, Oct 17, 2022 at 5:05 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thanks for having a look at this.\n>\n> On Fri, 14 Oct 2022 at 10:52, Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + req->frameOptions = (FRAMEOPTION_ROWS |\n> > + FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n> > + FRAMEOPTION_END_CURRENT_ROW);\n> >\n> > The bit combination appears multiple times in the patch.\n> > Maybe define the combination as a constant in supportnodes.h and\n> reference it in the code.\n>\n> I don't believe supportnodes.h has any business having any code that's\n> related to actual implementations of the support request type. If we\n> were to have such a definition then I think it would belong in\n> windowfuncs.c. I'd rather see each implementation of the support\n> request spell out exactly what they mean, which is what the patch does\n> already.\n>\n> David\n>\nHi,\nI am fine with keeping the code where it is now.\n\nCheers\n\nOn Mon, Oct 17, 2022 at 5:05 PM David Rowley <dgrowleyml@gmail.com> wrote:Thanks for having a look at this.\n\nOn Fri, 14 Oct 2022 at 10:52, Zhihong Yu <zyu@yugabyte.com> wrote:\n> +       req->frameOptions = (FRAMEOPTION_ROWS |\n> +                            FRAMEOPTION_START_UNBOUNDED_PRECEDING |\n> +                            FRAMEOPTION_END_CURRENT_ROW);\n>\n> The bit combination appears multiple times in the patch.\n> Maybe define the combination as a constant in supportnodes.h and reference it in the code.\n\nI don't believe supportnodes.h has any business having any code that's\nrelated to actual implementations of the support request type.  
If we\nwere to have such a definition then I think it would belong in\nwindowfuncs.c.  I'd rather see each implementation of the support\nrequest spell out exactly what they mean, which is what the patch does\nalready.\n\nDavidHi,I am fine with keeping the code where it is now.Cheers", "msg_date": "Mon, 17 Oct 2022 19:40:07 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Thu, 13 Oct 2022 at 13:34, David Rowley <dgrowleyml@gmail.com> wrote:\n> So it looks like the same can be done for rank() and dense_rank() too.\n> I've added support for those in the attached.\n\nThe attached adds support for percent_rank(), cume_dist() and ntile().\n\nDavid", "msg_date": "Fri, 21 Oct 2022 09:02:47 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On 10/20/22 22:02, David Rowley wrote:\n> On Thu, 13 Oct 2022 at 13:34, David Rowley <dgrowleyml@gmail.com> wrote:\n>> So it looks like the same can be done for rank() and dense_rank() too.\n>> I've added support for those in the attached.\n> \n> The attached adds support for percent_rank(), cume_dist() and ntile().\n\n\nShouldn't it be able to detect that these two windows are the same and \nonly do one WindowAgg pass?\n\n\nexplain (verbose, costs off)\nselect row_number() over w1,\n lag(amname) over w2\nfrom pg_am\nwindow w1 as (order by amname),\n w2 as (w1 rows unbounded preceding)\n;\n\n\n QUERY PLAN\n-----------------------------------------------------------------\n WindowAgg\n Output: (row_number() OVER (?)), lag(amname) OVER (?), amname\n -> WindowAgg\n Output: amname, row_number() OVER (?)\n -> Sort\n Output: amname\n Sort Key: pg_am.amname\n -> Seq Scan on pg_catalog.pg_am\n Output: amname\n(9 rows)\n\n-- \nVik 
Fearing\n\n\n\n", "msg_date": "Sat, 22 Oct 2022 16:03:24 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Sun, 23 Oct 2022 at 03:03, Vik Fearing <vik@postgresfriends.org> wrote:\n> Shouldn't it be able to detect that these two windows are the same and\n> only do one WindowAgg pass?\n>\n>\n> explain (verbose, costs off)\n> select row_number() over w1,\n> lag(amname) over w2\n> from pg_am\n> window w1 as (order by amname),\n> w2 as (w1 rows unbounded preceding)\n> ;\n\nGood thinking. I think the patch should also optimise that case. It\nrequires re-doing a similar de-duplication phase the same as what's\ndone in transformWindowFuncCall(). I've added code to do that in the\nattached version.\n\nThis got me wondering if the support function, instead of returning\nsome more optimal versions of the frameOptions, I wondered if it\nshould just return which aspects of the WindowClause it does not care\nabout. For example,\n\nSELECT row_number() over (), lag(relname) over (order by relname)\nfrom pg_class;\n\ncould, in theory, have row_number() reuse the WindowAgg for lag. Here\nbecause the WindowClause for row_number() has an empty ORDER BY\nclause, I believe it could just reuse the lag's WindowClause. It\nwouldn't be able to do that if row_number() had an ORDER BY, or if\nrow_number() were some other WindowFunc that cared about peer rows.\nI'm currently thinking this might not be worth the trouble as it seems\na bit unlikely that someone would use row_number() and not care about\nthe ORDER BY. However, maybe the row_number() could reuse some other\nWindowClause with a more strict ordering. My current thoughts are that\nthis feels a bit too unlikely to apply in enough cases for it to be\nworthwhile. 
I just thought I'd mention it for the sake of the\narchives.\n\nDavid\n\n\nThanks for taking it for a spin.\n\nDavid", "msg_date": "Wed, 26 Oct 2022 14:38:22 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On Wed, 26 Oct 2022 at 14:38, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 23 Oct 2022 at 03:03, Vik Fearing <vik@postgresfriends.org> wrote:\n> > Shouldn't it be able to detect that these two windows are the same and\n> > only do one WindowAgg pass?\n> >\n> >\n> > explain (verbose, costs off)\n> > select row_number() over w1,\n> > lag(amname) over w2\n> > from pg_am\n> > window w1 as (order by amname),\n> > w2 as (w1 rows unbounded preceding)\n> > ;\n>\n> Good thinking. I think the patch should also optimise that case. It\n> requires re-doing a similar de-duplication phase the same as what's\n> done in transformWindowFuncCall(). I've added code to do that in the\n> attached version.\n\nI've spent a bit more time on this now and added a few extra\nregression tests. The previous version had nothing to test to ensure\nthat an aggregate function being used as a window function does not\nhave its frame options changed when it's sharing the same WindowClause\nas a WindowFunc which can have the frame options changed.\n\nI've now pushed the final result. Thank you to everyone who provided\ninput on this.\n\nDavid\n\n\n", "msg_date": "Fri, 23 Dec 2022 12:47:24 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" }, { "msg_contents": "On 12/23/22 00:47, David Rowley wrote:\n> On Wed, 26 Oct 2022 at 14:38, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> I've now pushed the final result. Thank you to everyone who provided\n> input on this.\n\nThis is a very good improvement. 
Thank you for working on it.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 23 Dec 2022 03:46:39 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Allow WindowFuncs prosupport function to use more optimal\n WindowClause options" } ]
[ { "msg_contents": "Robert pointed out in [1] that ExecCreatePartitionPruneState(), which\nwas renamed to CreatePartitionPruneState() in 297daa9d4353, is still\nreferenced in src/test/modules/delay_execution/specs/partition-addition.spec.\n\nPlease find attached a patch to fix that.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA+Tgmoa7WnSmnAn7n2826tKMaUZM79jtJdMTPmmAyjQH0hZYUw@mail.gmail.com", "msg_date": "Wed, 12 Oct 2022 16:04:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Fix obsolete reference to ExecCreatePartitionPruneState" }, { "msg_contents": "On 2022-Oct-12, Amit Langote wrote:\n\n> Robert pointed out in [1] that ExecCreatePartitionPruneState(), which\n> was renamed to CreatePartitionPruneState() in 297daa9d4353, is still\n> referenced in src/test/modules/delay_execution/specs/partition-addition.spec.\n> \n> Please find attached a patch to fix that.\n\nThanks, pushed.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Oct 2022 09:57:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix obsolete reference to ExecCreatePartitionPruneState" } ]
[ { "msg_contents": "Hi hackers,\n\nA minor bug was found in the \"CREATE_REPLICATION_SLOT\" syntax.\nIt is in protocol.sgml at line 1990.\n\nThe current syntax written in it is as follows:\n\nCREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL | LOGICAL } [ ( option [, ...] ) ]\n\nHowever, when I executed the following command, it produced a syntax error.\n\nCREATE_REPLICATION_SLOT tachi LOGICAL;\nERROR: syntax error\n\nTo use LOGICAL, output_plugin is required.\nThe correct syntax is as follows:\n\nCREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL | LOGICAL output_plugin } [ ( option [, ...] ) ]\n\nPSA patch to fix it.\n\nNote that version 15 must also be fixed.\n\nBest Regards,\nAyaki Tachikake\nFUJITSU LIMITED", "msg_date": "Wed, 12 Oct 2022 08:33:43 +0000", "msg_from": "\"tachikake.ayaki@fujitsu.com\" <tachikake.ayaki@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix wrong syntax about CREATE_REPLICATION_SLOT" }, { "msg_contents": "On Wed, Oct 12, 2022 at 08:33:43AM +0000, tachikake.ayaki@fujitsu.com wrote:\n> Hi hackers,\n> \n> A minor bug was found in the \"CREATE_REPLICATION_SLOT\" syntax.\n> It is in protocol.sgml at line 1990.\n\nIndeed, right. The command is described twice, in its old and new\nforms, and the new flavor is incorrect.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 18:00:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix wrong syntax about CREATE_REPLICATION_SLOT" } ]
[ { "msg_contents": "Over on [1], Klint highlights a query with a DISTINCT which is a\nlittle sub-optimal in PostgreSQL. ISTM that in cases where all\nDISTINCT pathkeys have been marked as redundant due to constants\nexisting in all of the EquivalenceClasses of the DISTINCT columns,\nthen it looks like it should be okay not to bother using a Unique node\nto remove duplicate results.\n\nWhen all the distinct pathkeys are redundant then there can only be,\nat most, 1 single distinct value. There may be many rows with that\nvalue, but we can remove those extra ones with a LIMIT 1 rather than\ntroubling over needlessly uniquifing them.\n\nThis might not be a hugely common case, but; 1) it is very cheap to\ndetect and 2) the speedups are likely to be *very* good.\n\nWith the attached we get:\n\nregression=# explain (analyze, costs off, timing off) SELECT DISTINCT\nfour,1,2,3 FROM tenk1 WHERE four = 0;\n QUERY PLAN\n-------------------------------------------------\n Limit (actual rows=1 loops=1)\n -> Seq Scan on tenk1 (actual rows=1 loops=1)\n Filter: (four = 0)\n Planning Time: 0.215 ms\n Execution Time: 0.071 ms\n\nnaturally, if we removed the WHERE four = 0, we can't optimise this\nplan using this method.\n\nI see no reason why this also can't work for DISTINCT ON too.\n\nregression=# explain (analyze, costs off, timing off) SELECT DISTINCT\nON (four,two) four,two FROM tenk1 WHERE four = 0 order by 1,2;\n QUERY PLAN\n----------------------------------------------------------\n Unique (actual rows=1 loops=1)\n -> Sort (actual rows=2500 loops=1)\n Sort Key: two\n Sort Method: quicksort Memory: 175kB\n -> Seq Scan on tenk1 (actual rows=2500 loops=1)\n Filter: (four = 0)\n Rows Removed by Filter: 7500\n Planning Time: 0.123 ms\n Execution Time: 4.251 ms\n(9 rows)\n\nthen, of course, if we introduce some column that the pathkey is not\nredundant for then we must do the distinct operation as normal.\n\nregression=# explain (analyze, costs off, timing off) SELECT 
DISTINCT\nfour,two FROM tenk1 WHERE four = 0 order by 1,2;\n QUERY PLAN\n----------------------------------------------------------\n Sort (actual rows=1 loops=1)\n Sort Key: two\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (actual rows=1 loops=1)\n Group Key: four, two\n Batches: 1 Memory Usage: 24kB\n -> Seq Scan on tenk1 (actual rows=2500 loops=1)\n Filter: (four = 0)\n Rows Removed by Filter: 7500\n Planning Time: 0.137 ms\n Execution Time: 4.274 ms\n(11 rows)\n\nDoes this seem like something we'd want to do?\n\nPatch attached.\n\nDavid\n\n[1] https://postgr.es/m/MEYPR01MB7101CD5DA0A07C9DE2B74850A4239@MEYPR01MB7101.ausprd01.prod.outlook.com", "msg_date": "Wed, 12 Oct 2022 22:19:20 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Use LIMIT instead of Unique for DISTINCT when all distinct pathkeys\n are redundant" }, { "msg_contents": "On Wed, Oct 12, 2022 at 5:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> When all the distinct pathkeys are redundant then there can only be,\n> at most, 1 single distinct value. There may be many rows with that\n> value, but we can remove those extra ones with a LIMIT 1 rather than\n> troubling over needlessly uniquifing them.\n\n\nI'm not sure if this case is common enough in practice, but since this\npatch is very straightforward and adds no more costs, I think it's worth\ndoing.\n\nI also have concerns about the 2 Limit nodes pointed by the comment\ninside the patch. Maybe we can check with limit_needed() and manually\nadd the limit node only if there is no LIMIT clause in the origin query?\n\nThanks\nRichard\n\nOn Wed, Oct 12, 2022 at 5:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\nWhen all the distinct pathkeys are redundant then there can only be,\nat most, 1 single distinct value. There may be many rows with that\nvalue, but we can remove those extra ones with a LIMIT 1 rather than\ntroubling over needlessly uniquifing them. 
I'm not sure if this case is common enough in practice, but since thispatch is very straightforward and adds no more costs, I think it's worthdoing.I also have concerns about the 2 Limit nodes pointed by the commentinside the patch. Maybe we can check with limit_needed() and manuallyadd the limit node only if there is no LIMIT clause in the origin query?ThanksRichard", "msg_date": "Wed, 12 Oct 2022 19:29:58 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Wed, Oct 12, 2022 at 5:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> When all the distinct pathkeys are redundant then there can only be,\n> at most, 1 single distinct value. There may be many rows with that\n> value, but we can remove those extra ones with a LIMIT 1 rather than\n> troubling over needlessly uniquifing them.\n>\n> This might not be a hugely common case, but; 1) it is very cheap to\n> detect and 2) the speedups are likely to be *very* good.\n>\n> With the attached we get:\n>\n> regression=# explain (analyze, costs off, timing off) SELECT DISTINCT\n> four,1,2,3 FROM tenk1 WHERE four = 0;\n> QUERY PLAN\n> -------------------------------------------------\n> Limit (actual rows=1 loops=1)\n> -> Seq Scan on tenk1 (actual rows=1 loops=1)\n> Filter: (four = 0)\n> Planning Time: 0.215 ms\n> Execution Time: 0.071 ms\n>\n> naturally, if we removed the WHERE four = 0, we can't optimise this\n> plan using this method.\n>\n> I see no reason why this also can't work for DISTINCT ON too.\n\n\nFor DISTINCT ON, if all the distinct pathkeys are redundant but there\nare available sort pathkeys, then for adequately-presorted paths I think\nwe can also apply this optimization, using a Limit 1 rather than Unique.\n\nregression=# explain (analyze, costs off, timing off) select distinct on\n(four) * from tenk1 where four = 0 order by four, hundred desc;\n 
QUERY PLAN\n--------------------------------------------------------------------------------\n Limit (actual rows=1 loops=1)\n -> Index Scan Backward using tenk1_hundred on tenk1 (actual rows=1\nloops=1)\n Filter: (four = 0)\n Rows Removed by Filter: 300\n Planning Time: 0.165 ms\n Execution Time: 0.458 ms\n(6 rows)\n\nThanks\nRichard\n\nOn Wed, Oct 12, 2022 at 5:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\nWhen all the distinct pathkeys are redundant then there can only be,\nat most, 1 single distinct value. There may be many rows with that\nvalue, but we can remove those extra ones with a LIMIT 1 rather than\ntroubling over needlessly uniquifing them.\n\nThis might not be a hugely common case, but; 1) it is very cheap to\ndetect and 2) the speedups are likely to be *very* good.\n\nWith the attached we get:\n\nregression=# explain (analyze, costs off, timing off) SELECT DISTINCT\nfour,1,2,3 FROM tenk1 WHERE four = 0;\n                   QUERY PLAN\n-------------------------------------------------\n Limit (actual rows=1 loops=1)\n   ->  Seq Scan on tenk1 (actual rows=1 loops=1)\n         Filter: (four = 0)\n Planning Time: 0.215 ms\n Execution Time: 0.071 ms\n\nnaturally, if we removed the WHERE four = 0, we can't optimise this\nplan using this method.\n\nI see no reason why this also can't work for DISTINCT ON too. 
For DISTINCT ON, if all the distinct pathkeys are redundant but thereare available sort pathkeys, then for adequately-presorted paths I thinkwe can also apply this optimization, using a Limit 1 rather than Unique.regression=# explain (analyze, costs off, timing off) select distinct on (four) * from tenk1 where four = 0 order by four, hundred desc;                                   QUERY PLAN-------------------------------------------------------------------------------- Limit (actual rows=1 loops=1)   ->  Index Scan Backward using tenk1_hundred on tenk1 (actual rows=1 loops=1)         Filter: (four = 0)         Rows Removed by Filter: 300 Planning Time: 0.165 ms Execution Time: 0.458 ms(6 rows)ThanksRichard", "msg_date": "Wed, 12 Oct 2022 20:13:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, 13 Oct 2022 at 00:30, Richard Guo <guofenglinux@gmail.com> wrote:\n> I also have concerns about the 2 Limit nodes pointed by the comment\n> inside the patch. Maybe we can check with limit_needed() and manually\n> add the limit node only if there is no LIMIT clause in the origin query?\n\nI wasn't hugely concerned about this. I think we're a little limited\nto what we can actually do about it too.\n\nIt seems easy enough to skip adding the LimitPath in\ncreate_final_distinct_paths() if the existing query already has\nexactly LIMIT 1. However, if they've written LIMIT 10 or LIMIT\nrandom()*1234 then we must add the LimitPath to ensure we only get 1\nrow rather than 10 or some random number.\n\nAs for getting rid of the LIMIT 10 / LIMIT random()*1234, we store the\nLIMIT clause information in the parse and currently that's what the\nplanner uses when creating the LimitPath for the LIMIT clause. 
I'd\nquite like to avoid making any adjustments to the parse fields here.\n(There's a general project desire to move away from the planner\nmodifying the parse. If we didn't do that we could do things like\nre-plan queries with stronger optimization levels when they come out\ntoo costly.)\n\nWe could do something like set some bool flag in PlannerInfo to tell\nthe planner not to bother adding the final LimitPath as we've already\nadded another which does the job, but is it really worth adding that\ncomplexity for this patch? You already mentioned that \"this patch is\nvery straightforward\". I don't think it would be if we added code to\navoid the LimitPath duplication.\n\nDavid\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:41:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, 13 Oct 2022 at 01:13, Richard Guo <guofenglinux@gmail.com> wrote:\n> For DISTINCT ON, if all the distinct pathkeys are redundant but there\n> are available sort pathkeys, then for adequately-presorted paths I think\n> we can also apply this optimization, using a Limit 1 rather than Unique.\n>\n> regression=# explain (analyze, costs off, timing off) select distinct on (four) * from tenk1 where four = 0 order by four, hundred desc;\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Limit (actual rows=1 loops=1)\n> -> Index Scan Backward using tenk1_hundred on tenk1 (actual rows=1 loops=1)\n> Filter: (four = 0)\n> Rows Removed by Filter: 300\n\nI don't think we can optimise this case, at least not the same way I'm\ndoing it in the patch I attached.\n\nThe problem is that I'm only added the LimitPath to the\ncheapest_total_path. 
I think to make your case work we'd need to add\nthe LimitPath only in cases where the distinct_pathkeys are empty but\nthe sort_pathkeys are not and hasDistinctOn is true and the path has\npathkeys_contained_in(root->sort_pathkeys, path->pathkeys). I think\nthat's doable, but it's become quite a bit more complex than the patch\nI proposed. Maybe it's worth a 2nd effort for that part?\n\nDavid\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:50:30 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, Oct 13, 2022 at 4:41 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> We could do something like set some bool flag in PlannerInfo to tell\n> the planner not to bother adding the final LimitPath as we've already\n> added another which does the job, but is it really worth adding that\n> complexity for this patch? You already mentioned that \"this patch is\n> very straightforward\". I don't think it would be if we added code to\n> avoid the LimitPath duplication.\n\n\nYeah, maybe this is the right way to do it. I agree that this would\ncomplicate the code. Not sure if it's worth doing.\n\nThanks\nRichard\n\nOn Thu, Oct 13, 2022 at 4:41 AM David Rowley <dgrowleyml@gmail.com> wrote:\nWe could do something like set some bool flag in PlannerInfo to tell\nthe planner not to bother adding the final LimitPath as we've already\nadded another which does the job, but is it really worth adding that\ncomplexity for this patch? You already mentioned that \"this patch is\nvery straightforward\". I don't think it would be if we added code to\navoid the LimitPath duplication. Yeah, maybe this is the right way to do it. I agree that this wouldcomplicate the code. 
Not sure if it's worth doing.ThanksRichard", "msg_date": "Thu, 13 Oct 2022 11:39:50 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, Oct 13, 2022 at 4:50 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> The problem is that I'm only added the LimitPath to the\n> cheapest_total_path. I think to make your case work we'd need to add\n> the LimitPath only in cases where the distinct_pathkeys are empty but\n> the sort_pathkeys are not and hasDistinctOn is true and the path has\n> pathkeys_contained_in(root->sort_pathkeys, path->pathkeys). I think\n> that's doable, but it's become quite a bit more complex than the patch\n> I proposed. Maybe it's worth a 2nd effort for that part?\n\n\nCurrently in the patch the optimization is done before we check for\npresorted paths or do the explicit sort of the cheapest path. How about\nwe move this optimization into the branch where we've found any\npresorted paths? 
Maybe something like:\n\n--- a/src/backend/optimizer/plan/planner.c\n+++ b/src/backend/optimizer/plan/planner.c\n@@ -4780,11 +4780,24 @@ create_final_distinct_paths(PlannerInfo *root,\nRelOptInfo *input_rel,\n\n if (pathkeys_contained_in(needed_pathkeys, path->pathkeys))\n {\n- add_path(distinct_rel, (Path *)\n- create_upper_unique_path(root, distinct_rel,\n- path,\n-\nlist_length(root->distinct_pathkeys),\n- numDistinctRows));\n+ if (root->distinct_pathkeys == NIL)\n+ {\n+ Node *limitCount = (Node *) makeConst(INT8OID, -1,\nInvalidOid,\n+ sizeof(int64),\n+ Int64GetDatum(1),\nfalse,\n+ FLOAT8PASSBYVAL);\n+\n+ add_path(distinct_rel, (Path *)\n+ create_limit_path(root, distinct_rel,\n+ path, NULL, limitCount,\n+ LIMIT_OPTION_COUNT, 0, 1));\n+ }\n+ else\n+ add_path(distinct_rel, (Path *)\n+ create_upper_unique_path(root, distinct_rel,\n+ path,\n+\nlist_length(root->distinct_pathkeys),\n+ numDistinctRows));\n\nThis again makes the code less 'straightforward', just to cover a more\nuncommon case. I'm also not sure if it's worth doing.\n\nThanks\nRichard\n\nOn Thu, Oct 13, 2022 at 4:50 AM David Rowley <dgrowleyml@gmail.com> wrote:\r\nThe problem is that I'm only added the LimitPath to the\r\ncheapest_total_path.  I think to make your case work we'd need to add\r\nthe LimitPath only in cases where the distinct_pathkeys are empty but\r\nthe sort_pathkeys are not and hasDistinctOn is true and the path has\r\npathkeys_contained_in(root->sort_pathkeys, path->pathkeys).  I think\r\nthat's doable, but it's become quite a bit more complex than the patch\r\nI proposed. Maybe it's worth a 2nd effort for that part? Currently in the patch the optimization is done before we check forpresorted paths or do the explicit sort of the cheapest path. How aboutwe move this optimization into the branch where we've found anypresorted paths?  
Maybe something like:--- a/src/backend/optimizer/plan/planner.c+++ b/src/backend/optimizer/plan/planner.c@@ -4780,11 +4780,24 @@ create_final_distinct_paths(PlannerInfo *root, RelOptInfo *input_rel,  if (pathkeys_contained_in(needed_pathkeys, path->pathkeys))  {-     add_path(distinct_rel, (Path *)-              create_upper_unique_path(root, distinct_rel,-                                       path,-                                       list_length(root->distinct_pathkeys),-                                       numDistinctRows));+     if (root->distinct_pathkeys == NIL)+     {+         Node       *limitCount = (Node *) makeConst(INT8OID, -1, InvalidOid,+                                                     sizeof(int64),+                                                     Int64GetDatum(1), false,+                                                     FLOAT8PASSBYVAL);++         add_path(distinct_rel, (Path *)+                  create_limit_path(root, distinct_rel,+                                    path, NULL, limitCount,+                                    LIMIT_OPTION_COUNT, 0, 1));+     }+     else+         add_path(distinct_rel, (Path *)+                  create_upper_unique_path(root, distinct_rel,+                                           path,+                                           list_length(root->distinct_pathkeys),+                                           numDistinctRows));This again makes the code less 'straightforward', just to cover a moreuncommon case. 
I'm also not sure if it's worth doing.ThanksRichard", "msg_date": "Thu, 13 Oct 2022 11:47:02 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, 13 Oct 2022 at 16:47, Richard Guo <guofenglinux@gmail.com> wrote:\n> Currently in the patch the optimization is done before we check for\n> presorted paths or do the explicit sort of the cheapest path. How about\n> we move this optimization into the branch where we've found any\n> presorted paths? Maybe something like:\n\nI've attached a patch to that effect, but it turns out a bit more\ncomplex than what you imagined. We still need to handle the case for\nwhen there's no path that has the required pathkeys and we must add a\nSortPath to the cheapest path. That requires adding some similar code\nto add the LimitPath after the foreach loop over the pathlist is over.\n\nI was also getting some weird plans like:\n\nregression=# explain select distinct on (four) four,hundred from tenk1\nwhere four=0 order by 1,2;\n QUERY PLAN\n----------------------------------------------------------------------\n Sort (cost=0.20..0.20 rows=1 width=8)\n Sort Key: hundred\n -> Limit (cost=0.00..0.19 rows=1 width=8)\n -> Seq Scan on tenk1 (cost=0.00..470.00 rows=2500 width=8)\n Filter: (four = 0)\n\nTo stop the planner from adding that final sort, I opted to hack the\nLimitPath's pathkeys to say that it's already sorted by the\nPlannerInfo's sort_pathkeys. 
That feels slightly icky, but it does\nseem a little wasteful to initialise a sort node on every execution of\nthe plan to sort a single tuple.\n\nDavid", "msg_date": "Thu, 13 Oct 2022 19:48:31 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, Oct 13, 2022 at 2:48 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 13 Oct 2022 at 16:47, Richard Guo <guofenglinux@gmail.com> wrote:\n> > Currently in the patch the optimization is done before we check for\n> > presorted paths or do the explicit sort of the cheapest path. How about\n> > we move this optimization into the branch where we've found any\n> > presorted paths? Maybe something like:\n>\n> I've attached a patch to that effect, but it turns out a bit more\n> complex than what you imagined. We still need to handle the case for\n> when there's no path that has the required pathkeys and we must add a\n> SortPath to the cheapest path. That requires adding some similar code\n> to add the LimitPath after the foreach loop over the pathlist is over.\n\n\nThanks for the new patch. Previously I considered we just apply this\noptimization for adequately-presorted paths so that we can just fetch\nthe first row from that path. 
But yes we can also do this optimization\nfor explicit-sort case so that we can get the result from a top-1\nheapsort, just like the new patch does.\n\n\nI was also getting some weird plans like:\n>\n> regression=# explain select distinct on (four) four,hundred from tenk1\n> where four=0 order by 1,2;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Sort (cost=0.20..0.20 rows=1 width=8)\n> Sort Key: hundred\n> -> Limit (cost=0.00..0.19 rows=1 width=8)\n> -> Seq Scan on tenk1 (cost=0.00..470.00 rows=2500 width=8)\n> Filter: (four = 0)\n>\n> To stop the planner from adding that final sort, I opted to hack the\n> LimitPath's pathkeys to say that it's already sorted by the\n> PlannerInfo's sort_pathkeys. That feels slightly icky, but it does\n> seem a little wasteful to initialise a sort node on every execution of\n> the plan to sort a single tuple.\n\n\nI don't get how this plan comes out. It seems not correct because Limit\nnode above an unsorted path would give us an unpredictable row. I tried\nthis query without the hack to LimitPath's pathkeys and I get plans\nbelow, with or without index scan:\n\nexplain (costs off) select distinct on (four) four,hundred from tenk1\nwhere four=0 order by 1,2;\n QUERY PLAN\n-----------------------------------------------------\n Result\n -> Limit\n -> Index Scan using tenk1_hundred on tenk1\n Filter: (four = 0)\n\nexplain (costs off) select distinct on (four) four,hundred from tenk1\nwhere four=0 order by 1,2;\n QUERY PLAN\n----------------------------------\n Limit\n -> Sort\n Sort Key: hundred\n -> Seq Scan on tenk1\n Filter: (four = 0)\n\nThanks\nRichard\n\n", "msg_date": "Thu, 13 Oct 2022 16:17:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, 13 Oct 2022 at 21:17, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 2:48 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> To stop the planner from adding that final sort, I opted to hack the\n>> LimitPath's pathkeys to say that it's already sorted by the\n>> PlannerInfo's sort_pathkeys. That feels slightly icky, but it does\n>> seem a little wasteful to initialise a sort node on every execution of\n>> the plan to sort a single tuple.\n>\n>\n> I don't get how this plan comes out. It seems not correct because Limit\n> node above an unsorted path would give us an unpredictable row.\n\nActually, you're right. That manual setting of the pathkeys is an\nunneeded remanent from a bug I fixed before sending out v2. 
It can\njust be removed.\n\nI've attached the v3 patch.\n\nDavid", "msg_date": "Thu, 13 Oct 2022 23:43:23 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, Oct 13, 2022 at 6:43 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 13 Oct 2022 at 21:17, Richard Guo <guofenglinux@gmail.com> wrote:\n> >\n> > On Thu, Oct 13, 2022 at 2:48 PM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >> To stop the planner from adding that final sort, I opted to hack the\n> >> LimitPath's pathkeys to say that it's already sorted by the\n> >> PlannerInfo's sort_pathkeys. That feels slightly icky, but it does\n> >> seem a little wasteful to initialise a sort node on every execution of\n> >> the plan to sort a single tuple.\n> >\n> >\n> > I don't get how this plan comes out. It seems not correct because Limit\n> > node above an unsorted path would give us an unpredictable row.\n>\n> Actually, you're right. That manual setting of the pathkeys is an\n> unneeded remanent from a bug I fixed before sending out v2. It can\n> just be removed.\n>\n> I've attached the v3 patch.\n\n\nThe v3 patch looks good to me.\n\nThanks\nRichard\n\n", "msg_date": "Fri, 14 Oct 2022 10:14:55 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Fri, 14 Oct 2022 at 15:15, Richard Guo <guofenglinux@gmail.com> wrote:\n> The v3 patch looks good to me.\n\nThank you for looking at that.\n\nOne other thought I had about the duplicate \"Limit\" node in the final\nplan was that we could make the limit clause an Expr like\nLEAST(<existing limit clause>, 1). That way we could ensure we get at\nmost 1 row, but perhaps less if the expression given in the LIMIT\nclause evaluated to 0. This will still work correctly when the\nexisting limit evaluates to NULL. I'm still just not that keen on this\nidea as it means still having to either edit the parse's limitCount or\nstore the limit details in a new field in PlannerInfo and use that\nwhen making the final LimitPath. 
However, I'm still not sure doing\nthis is worth the extra complexity.\n\nIf nobody else has any thoughts on this or the patch in general, then\nI plan to push it in the next few days.\n\nDavid\n\n\n", "msg_date": "Wed, 26 Oct 2022 21:25:16 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Wed, Oct 26, 2022 at 4:25 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> One other thought I had about the duplicate \"Limit\" node in the final\n> plan was that we could make the limit clause an Expr like\n> LEAST(<existing limit clause>, 1). That way we could ensure we get at\n> most 1 row, but perhaps less if the expression given in the LIMIT\n> clause evaluated to 0. This will still work correctly when the\n> existing limit evaluates to NULL. I'm still just not that keen on this\n> idea as it means still having to either edit the parse's limitCount or\n> store the limit details in a new field in PlannerInfo and use that\n> when making the final LimitPath. 
However, I'm still not sure doing\n> this is worth the extra complexity.\n\n\nI find the duplicate \"Limit\" node is not that concerning after I realize\nit may appear in other queries, such as\n\nexplain (analyze, timing off, costs off)\nselect * from (select * from (select * from generate_series(1,100)i limit\n10) limit 5) limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------\n Limit (actual rows=1 loops=1)\n -> Limit (actual rows=1 loops=1)\n -> Limit (actual rows=1 loops=1)\n -> Function Scan on generate_series i (actual rows=1\nloops=1)\n\nAlthough the situation is different in that the Limit node is actually\natop SubqueryScan which is removed afterwards, but the final plan\nappears as a Limit node atop another Limit node.\n\nSo I wonder maybe we can just live with it, or resolve it in a separate\npatch.\n\nThanks\nRichard\n\n", "msg_date": "Thu, 27 Oct 2022 10:50:08 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" }, { "msg_contents": "On Thu, 27 Oct 2022 at 15:50, Richard Guo <guofenglinux@gmail.com> wrote:\n> I find the duplicate \"Limit\" node is not that concerning after I realize\n> it may appear in other queries, such as\n>\n> explain (analyze, timing off, costs off)\n> select * from (select * from (select * from generate_series(1,100)i limit 10) limit 5) limit 1;\n\nYeah, the additional limits certainly are not incorrect. We could\nmaybe do something better, there just does not seem to be much point.\n\nPerhaps fixing things like this would be better done with the\nUniqueKey stuff that Andy Fan and I were working on a while back.\nWith LIMIT 1 everything would become unique and there'd be no need to\nadd another LimitPath.\n\nI've now pushed the patch after making a small adjustment to one of\nthe comments which mentions about rows being \"indistinguishable by an\nequality check\". 
I was made to think of the '-0.0'::float8 vs +0.0\ncase again and thought I'd better mention it for this patch.\n\nThanks for reviewing the patch.\n\nDavid\n\n\n", "msg_date": "Fri, 28 Oct 2022 23:17:10 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use LIMIT instead of Unique for DISTINCT when all distinct\n pathkeys are redundant" } ]
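The core observation of the thread above is that when every DISTINCT pathkey is made redundant by an equality qual (for example, DISTINCT ON (four) with WHERE four = 0), at most one distinct row can exist, so a LIMIT 1 suffices. The toy C check below illustrates that idea only; the function name and data are invented for the example and are not PostgreSQL planner code.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Count the number of distinct values of "key" among rows that satisfy
 * key == bound.  Because the qual pins the key to a single constant,
 * the answer can never exceed 1, which is why the planner can use a
 * LIMIT 1 instead of a Unique/Sort step in this situation.
 */
int count_distinct_with_equality_qual(const int *keys, size_t nrows, int bound)
{
    size_t i;
    int ndistinct = 0;

    for (i = 0; i < nrows; i++)
        if (keys[i] == bound)
        {
            ndistinct = 1;      /* every qualifying row has key == bound */
            break;
        }
    return ndistinct;
}
```

Whatever the table contents, the result is 0 or 1, never more, which matches the reasoning used to replace the Unique node with a limit.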
[ { "msg_contents": "Hi Hackers,\n\nI noticed that psql has no tab completion around identity columns in\nALTER TABLE, so here's some patches for that.\n\nIn passing, I also added completion for ALTER SEQUENCE … START, which was\nmissing for some reason.\n\n- ilmari", "msg_date": "Wed, 12 Oct 2022 15:18:46 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "[PATCH] Improve tab completion for ALTER TABLE on identity columns" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Hi Hackers,\n>\n> I noticed that psql has no tab completion around identity columns in\n> ALTER TABLE, so here's some patches for that.\n\nAdded to the next commit fest:\n\nhttps://commitfest.postgresql.org/40/3947/\n\n- ilmari", "msg_date": "Fri, 14 Oct 2022 15:31:26 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve tab completion for ALTER TABLE on identity\n columns" }, { "msg_contents": "> Hi Hackers,\n> \n> I noticed that psql has no tab completion around identity columns in\n> ALTER TABLE, so here's some patches for that.\n> \n> In passing, I also added completion for ALTER SEQUENCE … START, which was\n> missing for some reason.\n> \n> - ilmari\n\nHi ilmari\n\nI've tested all 4 of your patches, and all of them seem to work as expected.\n\nThis is my first time reviewing a patch, so let's see if more experienced \nhackers have anything more to say about these patches, but at first they \nseem correct to me.\n\n\n--\nMatheus Alcantara\n\n\n\n", "msg_date": "Tue, 25 Oct 2022 22:45:20 +0000", "msg_from": "Matheus Alcantara <mths.dev@pm.me>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve tab completion for ALTER TABLE on identity\n columns" }, { "msg_contents": "On 14.10.22 16:31, Dagfinn Ilmari Mannsåker wrote:\n>> I noticed that psql has no tab completion around identity columns in\n>> 
ALTER TABLE, so here's some patches for that.\n> \n> Added to the next commit fest:\n> \n> https://commitfest.postgresql.org/40/3947/\n\nCommitted.\n\n\n\n", "msg_date": "Tue, 1 Nov 2022 12:19:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve tab completion for ALTER TABLE on identity\n columns" }, { "msg_contents": "On 26.10.22 00:45, Matheus Alcantara wrote:\n>> I noticed that psql has no tab completion around identity columns in\n>> ALTER TABLE, so here's some patches for that.\n>>\n>> In passing, I also added completion for ALTER SEQUENCE … START, which was\n>> missing for some reason.\n\n> I've tested all 4 of your patches, and all of them seem to work as expected.\n> \n> This is my first time reviewing a patch, so let's see if more experienced\n> hackers have anything more to say about these patches, but at first they\n> seem correct to me.\n\nThis was sensible for a first review. Thanks for your help.\n\n\n\n", "msg_date": "Tue, 1 Nov 2022 12:20:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve tab completion for ALTER TABLE on identity\n columns" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 14.10.22 16:31, Dagfinn Ilmari Mannsåker wrote:\n>>> I noticed that psql has no tab completion around identity columns in\n>>> ALTER TABLE, so here's some patches for that.\n>> Added to the next commit fest:\n>> https://commitfest.postgresql.org/40/3947/\n>\n> Committed.\n\nThanks!\n\n- ilmari", "msg_date": "Tue, 01 Nov 2022 11:29:21 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve tab completion for ALTER TABLE on identity\n columns" } ]
[ { "msg_contents": "Hi,\n\nWriting this on behalf of endoflife.date, where we track postgres \nreleases for the endoflife.date/postgres page.\n\nWe have our automation linked to git tags published on the postgres repo \nmirror on GitHub[0].\n\nIt recently picked up the REL_15_0 tag[2], and compiled it here: \nhttps://github.com/endoflife-date/release-data/blob/main/releases/postgresql.json#L452\n\nSince v15 doesn't seem to be announced yet - this is confusing me. It \ndoesn't impact us anywhere (yet), since we haven't added the v15 release \ncycle to our page yet, but I'd like to check if this is an incorrect tag \nor? If not, what's the correct source for us to use for automation purposes?\n\nI went through the release process on the Wiki[1], and it mentions final \nversion tagging as the final irreversible step, so that added to my \nconfusion.\n\nThanks,\nNemo\n\n(Please keep me in cc for any replies)\n\n[0]: https://github.com/postgres/postgres\n\n[1]: https://wiki.postgresql.org/wiki/Release_process#Final_version_tagging\n\n[2]: https://github.com/postgres/postgres/releases/tag/REL_15_0\n\n\n", "msg_date": "Wed, 12 Oct 2022 21:16:44 +0530", "msg_from": "Nemo <me@captnemo.in>", "msg_from_op": true, "msg_subject": "Git tag for v15" }, { "msg_contents": "On Wed, 12 Oct 2022 at 20:08, Nemo <me@captnemo.in> wrote:\n> Since v15 doesn't seem to be announced yet - this is confusing me. It\n> doesn't impact us anywhere (yet), since we haven't added the v15 release\n> cycle to our page yet, but I'd like to check if this is an incorrect tag\n> or? If not, what's the correct source for us to use for automation purposes?\n\nTags are usually published a few days in advance of the official\nrelease, so that all packagers have the time to bundle and prepare the\nrelease in their repositories. 
The official release is planned for\ntomorrow the 13th, as mentioned in [0].\n\nThis same pattern can be seen for minor release versions; where the\nrelease is generally stamped well in advance of the release post going\nlive.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://postgr.es/m/2a88ff2e-ffcc-bb39-379c-37244b4114a5%40postgresql.org\n\n\n", "msg_date": "Wed, 12 Oct 2022 20:16:36 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Git tag for v15" }, { "msg_contents": "On 2022-Oct-12, Matthias van de Meent wrote:\n\n> On Wed, 12 Oct 2022 at 20:08, Nemo <me@captnemo.in> wrote:\n> > Since v15 doesn't seem to be announced yet - this is confusing me. It\n> > doesn't impact us anywhere (yet), since we haven't added the v15 release\n> > cycle to our page yet, but I'd like to check if this is an incorrect tag\n> > or? If not, what's the correct source for us to use for automation purposes?\n> \n> Tags are usually published a few days in advance of the official\n> release, so that all packagers have the time to bundle and prepare the\n> release in their repositories. The official release is planned for\n> tomorrow the 13th, as mentioned in [0].\n\nTo be more precise, our cadence goes pretty much every time like this\n(both yearly major releases as well as quarterly minor):\n\n- a Git commit is determined for a release on Monday (before noon\nEDT/EST); a tarball produced from that commit is posted to registered\npackagers\n\n- By Tuesday evening EDT/EST, if no packagers have reported problems,\nthe tag corresponding to the given commit is pushed to the repo\n\n- By Thursday noon EDT/EST the announcement is made and the packages are\nmade available.\n\nThus packagers have all of Monday and Tuesday to report problems. They\ntypically don't, since the buildfarm alerts us soon enough to any\nportability problems. 
Packaging issues are normally found (and dealt\nwith) during beta.\n\nIf for whatever reason a problem is found before the tag has been\nposted, then a new Git commit is chosen and a new tarball published.\nThe tag will then match the new commit, not the original obviously.\nI don't know what would happen if a problem were to be found *after* the\ntag has been pushed; normally that would just mean the fix would have to\nwait until the next minor version in that branch. It would have to be\nsomething really serious in order for this process to be affected.\nAs far as I know, this has never happened.\n\n\nAs far as Postgres is concerned, you could automate things so that a tag\ndetected on a Tuesday is marked as released on the immediately following\nThursday (noon EDT/EST). If you did this, you'd get it right 99% of the\ntime, and the only way to achieve 100% would be to have a bot that\nfollows the quarterly announcements in pgsql-announce.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 13 Oct 2022 13:19:13 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Git tag for v15" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> If for whatever reason a problem is found before the tag has been\n> posted, then a new Git commit is chosen and a new tarball published.\n> The tag will then match the new commit, not the original obviously.\n> I don't know what would happen if a problem were to be found *after* the\n> tag has been pushed; normally that would just mean the fix would have to\n> wait until the next minor version in that branch. It would have to be\n> something really serious in order for this process to be affected.\n> As far as I know, this has never happened.\n\nI don't think it's happened since we adopted Git, anyway. 
The plan\nif we did have to re-wrap at that point would be to increment the version\nnumber(s), since not doing so would probably create too much confusion\nabout which tarballs were the correct ones.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:06:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Git tag for v15" } ]
[ { "msg_contents": "Hi\n\nI had a talk with Julien about the correct handling of an exception raised\nby pfree function.\n\nCurrently, this exception (elog(ERROR, \"could not find block containing\nchunk %p\", chunk);) is not specially handled ever. Because the check of\npointer sanity is executed first (before any memory modification), then it\nis safe to repeatedly call pfree (but if I read code correctly, this\nbehavior is not asserted or tested).\n\nThe question is - What is the correct action on this error. In the end,\nthis exception means detection of memory corruption. One, and probably safe\nway is raising FATAL error. But it looks like too hard a solution and not\ntoo friendly. Moreover, this way is not used in the current code base.\n\nThe traditional solution is just raising the exception and doing nothing\nmore. I didn't find code, where the exception from pfree is exactly\nhandled. Similar issues with the possible exception from pfree can be in\nplan cache, plpgsql code cache, partially in implementation of update of\nplpgsql variable. Everywhere the implementation is not too strict - just\nthe exception is raised, but the session continues (although in this moment\nwe know so some memory is corrupted).\n\nIs it a common strategy in Postgres?\n\nRegards\n\nPavel\n\n", "msg_date": "Wed, 12 Oct 2022 21:35:31 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "how to correctly react on exception in pfree function?" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I had a talk with Julien about the correct handling of an exception raised\n> by pfree function.\n\n> Currently, this exception (elog(ERROR, \"could not find block containing\n> chunk %p\", chunk);) is not specially handled ever.\n\nThere are hundreds, if not thousands, of \"shouldn't ever happen\" elogs\nin Postgres. We don't make any attempt to trap any of them. Why do\nyou think this one should be different?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Oct 2022 19:24:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "On Wed, Oct 12, 2022 at 07:24:53PM -0400, Tom Lane wrote:\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I had a talk with Julien about the correct handling of an exception raised\n> > by pfree function.\n> \n> > Currently, this exception (elog(ERROR, \"could not find block containing\n> > chunk %p\", chunk);) is not specially handled ever.\n> \n> There are hundreds, if not thousands, of \"shouldn't ever happen\" elogs\n> in Postgres. 
We don't make any attempt to trap any of them. Why do\n> you think this one should be different?\n\nBecause session variables are allocated in a persistent memory context, so\nthere's a code doing something like this to implement LET variable:\n\n[...]\noldctxt = MemoryContextSwitchTo(SomePersistentContext);\nnewval = palloc(...);\nMemoryContextSwitchTo(oldctxt);\n/* No error should happen after that point or we leak memory */\npfree(var->val);\nvar->val = newval;\nreturn;\n\nAny error thrown in pfree would mean leaking memory forever in that backend.\n\nIs it ok to leak memory in such should-not-happen case or should there be some\nsafeguard?\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:59:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Oct 12, 2022 at 07:24:53PM -0400, Tom Lane wrote:\n>> There are hundreds, if not thousands, of \"shouldn't ever happen\" elogs\n>> in Postgres. We don't make any attempt to trap any of them. 
Why do\n>> you think this one should be different?\n\n> Because session variables are allocated in a persistent memory context, so\n> there's a code doing something like this to implement LET variable:\n\n> [...]\n> oldctxt = MemoryContextSwitchTo(SomePersistentContext);\n> newval = palloc(...);\n> MemoryContextSwitchTo(oldctxt);\n> /* No error should happen after that point or we leak memory */\n> pfree(var->val);\n> var->val = newval;\n> return;\n\n> Any error thrown in pfree would mean leaking memory forever in that backend.\n\nI've got little sympathy for that complaint, because an error\nin pfree likely means you have worse problems than a leak.\nMoreover, what it most likely means is that you messed up your\nmemory allocations sometime earlier; making your code more\ncomplicated makes that risk higher not lower.\n\nHaving said that, the above is really not very good code.\nBetter would look like\n\n\tnewval = MemoryContextAlloc(SomePersistentContext, ...);\n\t... fill newval ...\n\toldval = var->val;\n\tvar->val = newval;\n\tpfree(oldval);\n\nwhich leaves your data structure in a consistent state whether\npfree fails or not (and regardless of whether it fails before or\nafter recycling your chunk). That property is way more important\nthan the possibility of leaking some memory.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Oct 2022 22:34:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "On Wed, Oct 12, 2022 at 10:34:29PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Oct 12, 2022 at 07:24:53PM -0400, Tom Lane wrote:\n> >> There are hundreds, if not thousands, of \"shouldn't ever happen\" elogs\n> >> in Postgres. We don't make any attempt to trap any of them. 
Why do\n> >> you think this one should be different?\n> \n> > Because session variables are allocated in a persistent memory context, so\n> > there's a code doing something like this to implement LET variable:\n> \n> > [...]\n> > oldctxt = MemoryContextSwitchTo(SomePersistentContext);\n> > newval = palloc(...);\n> > MemoryContextSwitchTo(oldctxt);\n> > /* No error should happen after that point or we leak memory */\n> > pfree(var->val);\n> > var->val = newval;\n> > return;\n> \n> > Any error thrown in pfree would mean leaking memory forever in that backend.\n> \n> I've got little sympathy for that complaint, because an error\n> in pfree likely means you have worse problems than a leak.\n\nI agree, thus the question of what should be done in such case.\n\n> Moreover, what it most likely means is that you messed up your\n> memory allocations sometime earlier; making your code more\n> complicated makes that risk higher not lower.\n> \n> Having said that, the above is really not very good code.\n> Better would look like\n> \n> \tnewval = MemoryContextAlloc(SomePersistentContext, ...);\n> \t... fill newval ...\n> \toldval = var->val;\n> \tvar->val = newval;\n> \tpfree(oldval);\n> \n> which leaves your data structure in a consistent state whether\n> pfree fails or not (and regardless of whether it fails before or\n> after recycling your chunk). That property is way more important\n> than the possibility of leaking some memory.\n\nWell, it was an over simplification to outline the reason for the question.\n\nThe real code has a dedicated function to cleanup the variable (used for other\ncases, like ON TRANSACTION END RESET), and one of the callers will use that\nfunction to implement LET so that approach isn't possible as-is. 
Note that for\nthe other callers the code is written like that so the structure is consistent\nwhether pfree errors out or not.\n\nWe can change the API to accept an optional new value (and the few other needed\ninformation) when cleaning the old one, but that's adding some complication\njust to deal with a possible error in pfree. So it still unclear to me what to\ndo here.\n\n\n", "msg_date": "Thu, 13 Oct 2022 11:02:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> We can change the API to accept an optional new value (and the few other needed\n> information) when cleaning the old one, but that's adding some complication\n> just to deal with a possible error in pfree. So it still unclear to me what to\n> do here.\n\nI think it's worth investing some effort in ensuring consistency\nof persistent data structures in the face of errors. I doubt it's\nworth investing effort in avoiding leaks in the face of errors.\nIn any case, thinking of it in terms of \"trapping\" errors is the\nwrong approach. We don't have a cheap or complication-free way\nto do that, mainly because you can't trap just one error cause.\n\nIt may be worth looking at the GUC code, which has been dealing\nwith the same sorts of issues pretty successfully for many years.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Oct 2022 23:08:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" 
}, { "msg_contents": "On Wed, Oct 12, 2022 at 11:08:25PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > We can change the API to accept an optional new value (and the few other needed\n> > information) when cleaning the old one, but that's adding some complication\n> > just to deal with a possible error in pfree. So it still unclear to me what to\n> > do here.\n> \n> I think it's worth investing some effort in ensuring consistency\n> of persistent data structures in the face of errors. I doubt it's\n> worth investing effort in avoiding leaks in the face of errors.\n\nSo if e.g.\n\nLET myvar = somebigstring;\n\nerrors out because of hypothetical pfree() error, it would be ok to leak that\nmemory as long as everything is consistent, meaning here that myvar is in a\nnormal \"reset\" state?\n\n> In any case, thinking of it in terms of \"trapping\" errors is the\n> wrong approach. We don't have a cheap or complication-free way\n> to do that, mainly because you can't trap just one error cause.\n>\n> It may be worth looking at the GUC code, which has been dealing\n> with the same sorts of issues pretty successfully for many years.\n\nThe GUC code relies on malloc/free, so any hypothetical error during free\nshould abort and force an emergency shutdown. I don't think it's ok for\nsession variables to bypass memory contexts, and forcing a panic wouldn't be\npossible without some PG_TRY block.\n\n\n", "msg_date": "Thu, 13 Oct 2022 11:26:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Oct 12, 2022 at 11:08:25PM -0400, Tom Lane wrote:\n>> It may be worth looking at the GUC code, which has been dealing\n>> with the same sorts of issues pretty successfully for many years.\n\n> The GUC code relies on malloc/free,\n\nNot for much longer [1]. 
And no, I don't believe that that patch\nmakes any noticeable difference in the code's robustness.\n\nIn the end, a bug affecting these considerations is a bug to be fixed\nonce it's found. Building potentially-themselves-buggy defenses against\nhypothetical bugs is an exercise with rapidly diminishing returns.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/2982579.1662416866@sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 12 Oct 2022 23:34:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" }, { "msg_contents": "On Wed, Oct 12, 2022 at 11:34:32PM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Oct 12, 2022 at 11:08:25PM -0400, Tom Lane wrote:\n> >> It may be worth looking at the GUC code, which has been dealing\n> >> with the same sorts of issues pretty successfully for many years.\n> \n> > The GUC code relies on malloc/free,\n> \n> Not for much longer [1]. And no, I don't believe that that patch\n> makes any noticeable difference in the code's robustness.\n\nOk, so the new code still assumes that guc_free can't/shouldn't fail:\n\nstatic void\nset_string_field(struct config_string *conf, char **field, char *newval)\n{\n\tchar\t *oldval = *field;\n\n\t/* Do the assignment */\n\t*field = newval;\n\n\t/* Free old value if it's not NULL and isn't referenced anymore */\n\tif (oldval && !string_field_used(conf, oldval))\n\t\tguc_free(oldval);\n}\n\n\n[...]\n\n\t\t\t\t\t\tset_string_field(conf, &conf->reset_val, newval);\n\t\t\t\t\t\tset_extra_field(&conf->gen, &conf->reset_extra,\n\t\t\t\t\t\t\t\t\t\tnewextra);\n\t\t\t\t\t\tconf->gen.reset_source = source;\n\t\t\t\t\t\tconf->gen.reset_scontext = context;\n\t\t\t\t\t\tconf->gen.reset_srole = srole;\n\nAny error in guc_free will leave the struct in some inconsistent state and\npossibly leak some data. 
We can use the same approach for session variables.\n\n\n", "msg_date": "Thu, 13 Oct 2022 11:49:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: how to correctly react on exception in pfree function?" } ]
[ { "msg_contents": "Hi hackers,\n\nI found that the test catalog_change_snapshot was missed in test_decoding/meson.build file.\nPSA the patch to fix it.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Thu, 13 Oct 2022 04:25:50 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "Add mssing test to test_decoding/meson.build" }, { "msg_contents": "On Thu, Oct 13, 2022 at 04:25:50AM +0000, kuroda.hayato@fujitsu.com wrote:\n> I found that the test catalog_change_snapshot was missed in test_decoding/meson.build file.\n> PSA the patch to fix it.\n\nThanks, applied. This was an oversight of 7f13ac8, and the CI accepts\nthe test.\n--\nMichael", "msg_date": "Thu, 13 Oct 2022 16:06:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add mssing test to test_decoding/meson.build" }, { "msg_contents": "Dear Michael,\n\n> Thanks, applied. This was an oversight of 7f13ac8, and the CI accepts\n> the test.\n\nI confirmed your commit. Great thanks!\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 07:39:30 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Add mssing test to test_decoding/meson.build" } ]
[ { "msg_contents": "Hi,\n\n\nI noticed a possible typo in the doc for create publication.\nThis applies to PG15 as well.\nKindly have a look at the attached patch for it.\n\n\nBest Regards,\n\tTakamichi Osumi", "msg_date": "Thu, 13 Oct 2022 07:16:39 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "possible typo for CREATE PUBLICATION description" }, { "msg_contents": "On Thu, Oct 13, 2022 at 6:16 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi,\n>\n>\n> I noticed a possible typo in the doc for create publication.\n> This applies to PG15 as well.\n> Kindly have a look at the attached patch for it.\n>\n>\n\nYour typo correction LGTM.\n\nFWIW, maybe other parts of that paragraph can be tidied too. e.g. The\nwords \"actually\" and \"So\" didn't seem needed IMO.\n\n~\n\nBEFORE\nFor an INSERT ... ON CONFLICT command, the publication will publish\nthe operation that actually results from the command. So depending on\nthe outcome, it may be published as either INSERT or UPDATE, or it may\nnot be published at all.\n\nSUGGESTION\nFor an INSERT ... ON CONFLICT command, the publication will publish\nthe operation that results from the command. Depending on the outcome,\nit may be published as either INSERT or UPDATE, or it may not be\npublished at all.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 13 Oct 2022 18:26:16 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: possible typo for CREATE PUBLICATION description" }, { "msg_contents": "Hello\n\nOn 2022-Oct-13, osumi.takamichi@fujitsu.com wrote:\n\n> I noticed a possible typo in the doc for create publication.\n> This applies to PG15 as well.\n> Kindly have a look at the attached patch for it.\n\nYeah, looks good. It actually applies all the way back to 10, so I\npushed it to all branches. 
I included Peter's wording changes as well.\n\nThanks\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Ellos andaban todos desnudos como su madre los parió, y también las mujeres,\naunque no vi más que una, harto moza, y todos los que yo vi eran todos\nmancebos, que ninguno vi de edad de más de XXX años\" (Cristóbal Colón)\n\n\n", "msg_date": "Thu, 13 Oct 2022 13:37:58 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: possible typo for CREATE PUBLICATION description" }, { "msg_contents": "Hi, Alvaro-san\n\n\nOn Thursday, October 13, 2022 8:38 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Oct-13, osumi.takamichi@fujitsu.com wrote:\n> \n> > I noticed a possible typo in the doc for create publication.\n> > This applies to PG15 as well.\n> > Kindly have a look at the attached patch for it.\n> \n> Yeah, looks good. It actually applies all the way back to 10, so I pushed it to\n> all branches. I included Peter's wording changes as well.\n> \n> Thanks\nYou are right and thank you so much for taking care of this !\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:02:09 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: possible typo for CREATE PUBLICATION description" } ]
[ { "msg_contents": "We have the NegotiateProtocolVersion protocol message [0], but libpq \ndoesn't actually handle it.\n\nSay I increase the protocol number in libpq:\n\n- conn->pversion = PG_PROTOCOL(3, 0);\n+ conn->pversion = PG_PROTOCOL(3, 1);\n\nThen I get\n\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.65432\" \nfailed: expected authentication request from server, but received v\n\nAnd the same for a protocol option (_pq_.something).\n\nOver in the column encryption patch, I'm proposing to add such a \nprotocol option, and the above is currently the behavior when the server \ndoesn't support it.\n\nThe attached patch adds explicit handling of this protocol message to \nlibpq. So the output in the above case would then be:\n\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.65432\" \nfailed: protocol version not supported by server: client uses 3.1, \nserver supports 3.0\n\nOr to test a protocol option:\n\n@@ -2250,6 +2291,8 @@ build_startup_packet(const PGconn *conn, char *packet,\n if (conn->client_encoding_initial && conn->client_encoding_initial[0])\n ADD_STARTUP_OPTION(\"client_encoding\", \nconn->client_encoding_initial);\n\n+ ADD_STARTUP_OPTION(\"_pq_.foobar\", \"1\");\n+\n /* Add any environment-driven GUC settings needed */\n for (next_eo = options; next_eo->envName; next_eo++)\n {\n\nResult:\n\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.65432\" \nfailed: protocol extension not supported by server: _pq_.foobar\n\n\n[0]: \nhttps://www.postgresql.org/docs/devel/protocol-message-formats.html#PROTOCOL-MESSAGE-FORMATS-NEGOTIATEPROTOCOLVERSION", "msg_date": "Thu, 13 Oct 2022 10:33:01 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On Thu, Oct 13, 2022 at 10:33:01AM +0200, Peter Eisentraut wrote:\n> +\tif (their_version != conn->pversion)\n\nShouldn't this be 'their_version < 
conn->pversion'? If the server supports\na later protocol than what is requested but not all the requested protocol\nextensions, I think libpq would still report \"protocol version not\nsupported.\"\n\n> +\t\tappendPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"protocol version not supported by server: client uses %d.%d, server supports %d.%d\\n\"),\n> +\t\t\t\t\t\t PG_PROTOCOL_MAJOR(conn->pversion), PG_PROTOCOL_MINOR(conn->pversion),\n> +\t\t\t\t\t\t PG_PROTOCOL_MAJOR(their_version), PG_PROTOCOL_MINOR(their_version));\n\nShould this match the error in postmaster.c and provide the range of\nversions the server supports? The FATAL in postmaster.c is for the major\nversion, but I believe the same information is relevant when a\nNegotiateProtocolVersion message is sent.\n\n\t\tereport(FATAL,\n\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t\t\t\t errmsg(\"unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u\",\n\n> +\telse\n> +\t\tappendPQExpBuffer(&conn->errorMessage,\n> +\t\t\t\t\t\t libpq_gettext(\"protocol extension not supported by server: %s\\n\"), buf.data);\n\nnitpick: s/extension/extensions\n\nWhat if neither the protocol version nor the requested extensions are\nsupported? Right now, I think only the unsupported protocol version is\nsupported in that case, but presumably we could report both pretty easily.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 14:00:52 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 13.10.22 23:00, Nathan Bossart wrote:\n> On Thu, Oct 13, 2022 at 10:33:01AM +0200, Peter Eisentraut wrote:\n>> +\tif (their_version != conn->pversion)\n> \n> Shouldn't this be 'their_version < conn->pversion'? 
If the server supports\n> a later protocol than what is requested but not all the requested protocol\n> extensions, I think libpq would still report \"protocol version not\n> supported.\"\n\nOk, changed.\n\n>> +\t\tappendPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t libpq_gettext(\"protocol version not supported by server: client uses %d.%d, server supports %d.%d\\n\"),\n>> +\t\t\t\t\t\t PG_PROTOCOL_MAJOR(conn->pversion), PG_PROTOCOL_MINOR(conn->pversion),\n>> +\t\t\t\t\t\t PG_PROTOCOL_MAJOR(their_version), PG_PROTOCOL_MINOR(their_version));\n> \n> Should this match the error in postmaster.c and provide the range of\n> versions the server supports? The FATAL in postmaster.c is for the major\n> version, but I believe the same information is relevant when a\n> NegotiateProtocolVersion message is sent.\n> \n> \t\tereport(FATAL,\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t errmsg(\"unsupported frontend protocol %u.%u: server supports %u.0 to %u.%u\",\n\nIf you increase the libpq minor protocol version and connect to an older \nserver, you would get an error like \"server supports 3.0 to 3.0\", which \nis probably a bit confusing. I changed it to \"up to 3.0\" to convey that \nit could be a range.\n\n>> +\telse\n>> +\t\tappendPQExpBuffer(&conn->errorMessage,\n>> +\t\t\t\t\t\t libpq_gettext(\"protocol extension not supported by server: %s\\n\"), buf.data);\n> \n> nitpick: s/extension/extensions\n\nOk, added proper plural support.\n\n> What if neither the protocol version nor the requested extensions are\n> supported? 
Right now, I think only the unsupported protocol version is\n> supported in that case, but presumably we could report both pretty easily.\n\nOk, I just appended both error messages in that case.", "msg_date": "Thu, 20 Oct 2022 11:24:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "A few notes:\n\n> +\t\t\t\telse if (beresp == 'v')\n> +\t\t\t\t{\n> +\t\t\t\t\tif (pqGetNegotiateProtocolVersion3(conn))\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\t/* We'll come back when there is more data */\n> +\t\t\t\t\t\treturn PGRES_POLLING_READING;\n> +\t\t\t\t\t}\n> +\t\t\t\t\t/* OK, we read the message; mark data consumed */\n> +\t\t\t\t\tconn->inStart = conn->inCursor;\n> +\t\t\t\t\tgoto error_return;\n> +\t\t\t\t}\n\nThis new code path doesn't go through the message length checks that are\ndone for the 'R' and 'E' cases, and pqGetNegotiateProtocolVersion3()\ndoesn't take the message length to know where to stop anyway, so a\nmisbehaving server can chew up client resources.\n\nIt looks like the server is expecting to be able to continue the\nconversation with a newer client after sending a\nNegotiateProtocolVersion. 
Is an actual negotiation planned for the future?\n\nI think the documentation on NegotiateProtocolVersion (not introduced in\nthis patch) is misleading/wrong; it says that the version number sent\nback is the \"newest minor protocol version supported by the server for\nthe major protocol version requested by the client\" which doesn't seem\nto match the actual usage seen here.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Wed, 2 Nov 2022 12:02:09 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 02.11.22 20:02, Jacob Champion wrote:\n> A few notes:\n> \n>> +\t\t\t\telse if (beresp == 'v')\n>> +\t\t\t\t{\n>> +\t\t\t\t\tif (pqGetNegotiateProtocolVersion3(conn))\n>> +\t\t\t\t\t{\n>> +\t\t\t\t\t\t/* We'll come back when there is more data */\n>> +\t\t\t\t\t\treturn PGRES_POLLING_READING;\n>> +\t\t\t\t\t}\n>> +\t\t\t\t\t/* OK, we read the message; mark data consumed */\n>> +\t\t\t\t\tconn->inStart = conn->inCursor;\n>> +\t\t\t\t\tgoto error_return;\n>> +\t\t\t\t}\n> \n> This new code path doesn't go through the message length checks that are\n> done for the 'R' and 'E' cases, and pqGetNegotiateProtocolVersion3()\n> doesn't take the message length to know where to stop anyway, so a\n> misbehaving server can chew up client resources.\n\nFixed in new patch.\n\n> It looks like the server is expecting to be able to continue the\n> conversation with a newer client after sending a\n> NegotiateProtocolVersion. Is an actual negotiation planned for the future?\n\nThe protocol documentation says:\n\n| The client may then choose either\n| to continue with the connection using the specified protocol version\n| or to abort the connection.\n\nIn this case, we are choosing to abort the connection.\n\nWe could add negotiation in the future, but then we'd have to first have \na concrete case of something to negotiate about. 
For example, if we \nadded an optional performance feature into the protocol, then one could \nnegotiate by falling back to not using that. But for the kinds of \nfeatures I'm thinking about right now (column encryption), you can't \nproceed if the feature is not supported. So I think this would need to \nbe considered case by case.\n\n> I think the documentation on NegotiateProtocolVersion (not introduced in\n> this patch) is misleading/wrong; it says that the version number sent\n> back is the \"newest minor protocol version supported by the server for\n> the major protocol version requested by the client\" which doesn't seem\n> to match the actual usage seen here.\n\nI don't follow. If libpq sends a protocol version of 3.1, then the \nserver responds by saying it supports only 3.0. What are you seeing?", "msg_date": "Tue, 8 Nov 2022 09:40:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 11/8/22 00:40, Peter Eisentraut wrote:\n> On 02.11.22 20:02, Jacob Champion wrote:\n>> This new code path doesn't go through the message length checks that are\n>> done for the 'R' and 'E' cases, and pqGetNegotiateProtocolVersion3()\n>> doesn't take the message length to know where to stop anyway, so a\n>> misbehaving server can chew up client resources.\n> \n> Fixed in new patch.\n\npqGetNegotiateProtocolVersion3() is still ignoring the message length,\nthough; it won't necessarily stop at the message boundary.\n\n> We could add negotiation in the future, but then we'd have to first have \n> a concrete case of something to negotiate about. For example, if we \n> added an optional performance feature into the protocol, then one could \n> negotiate by falling back to not using that. But for the kinds of \n> features I'm thinking about right now (column encryption), you can't \n> proceed if the feature is not supported. 
So I think this would need to \n> be considered case by case.\n\nI guess I'm wondering about the definition of \"minor\" version if the\nclient treats an increment as incompatible by default. But that's a\ndiscussion for the future, and this patch is just improving the existing\nbehavior, so I'll pipe down and watch.\n\n>> I think the documentation on NegotiateProtocolVersion (not introduced in\n>> this patch) is misleading/wrong; it says that the version number sent\n>> back is the \"newest minor protocol version supported by the server for\n>> the major protocol version requested by the client\" which doesn't seem\n>> to match the actual usage seen here.\n> \n> I don't follow. If libpq sends a protocol version of 3.1, then the \n> server responds by saying it supports only 3.0. What are you seeing?\n\nI see what you've described on my end, too. The sentence I quoted seemed\nto imply that the server should respond with only the minor version (the\nleast significant 16 bits). I think it should probably just say \"newest\nprotocol version\" in the docs.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 8 Nov 2022 15:08:53 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 09.11.22 00:08, Jacob Champion wrote:\n> On 11/8/22 00:40, Peter Eisentraut wrote:\n>> On 02.11.22 20:02, Jacob Champion wrote:\n>>> This new code path doesn't go through the message length checks that are\n>>> done for the 'R' and 'E' cases, and pqGetNegotiateProtocolVersion3()\n>>> doesn't take the message length to know where to stop anyway, so a\n>>> misbehaving server can chew up client resources.\n>>\n>> Fixed in new patch.\n> \n> pqGetNegotiateProtocolVersion3() is still ignoring the message length,\n> though; it won't necessarily stop at the message boundary.\n\nI don't follow. The calls to pqGetInt(), pqGets(), etc. check the \nmessage length. 
Do you have something else in mind? Can you give an \nexample or existing code?\n\n>>> I think the documentation on NegotiateProtocolVersion (not introduced in\n>>> this patch) is misleading/wrong; it says that the version number sent\n>>> back is the \"newest minor protocol version supported by the server for\n>>> the major protocol version requested by the client\" which doesn't seem\n>>> to match the actual usage seen here.\n>>\n>> I don't follow. If libpq sends a protocol version of 3.1, then the\n>> server responds by saying it supports only 3.0. What are you seeing?\n> \n> I see what you've described on my end, too. The sentence I quoted seemed\n> to imply that the server should respond with only the minor version (the\n> least significant 16 bits). I think it should probably just say \"newest\n> protocol version\" in the docs.\n\nOk, I see the distinction.\n\n\n\n", "msg_date": "Fri, 11 Nov 2022 16:13:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 11/11/22 07:13, Peter Eisentraut wrote:\n> On 09.11.22 00:08, Jacob Champion wrote:\n>> pqGetNegotiateProtocolVersion3() is still ignoring the message length,\n>> though; it won't necessarily stop at the message boundary.\n> \n> I don't follow. The calls to pqGetInt(), pqGets(), etc. check the \n> message length.\n\nI may be missing something obvious, but I don't see any message length\nchecks in those functions, just bounds checks on the connection buffer.\n\n> Do you have something else in mind? Can you give an \n> example or existing code?\n\nSure. Consider the case where the server sends a\nNegotiateProtocolVersion with a reasonable length, but then runs over\nits own message (either by sending an unterminated string as one of the\nextension names, or by sending a huge extension number). 
When I test\nthat against a client on my machine, it churns CPU and memory waiting\nfor the end of a message that will never come, even though it had\nalready decided that the maximum length of the message should have been\nless than 2K.\n\nPut another way, why do we loop around and poll for more data when we\nhit the end of the connection buffer, if we've already checked at this\npoint that we should have the entire message buffered locally?\n\n>+ \tinitPQExpBuffer(&buf);\n>+ \tif (pqGetInt(&tmp, 4, conn) != 0)\n>+ \t\treturn EOF;\n\nTangentially related -- I think the temporary PQExpBuffer is being\nleaked in the EOF case.\n\n--Jacob\n\n\n", "msg_date": "Fri, 11 Nov 2022 14:28:41 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 11.11.22 23:28, Jacob Champion wrote:\n> Consider the case where the server sends a\n> NegotiateProtocolVersion with a reasonable length, but then runs over\n> its own message (either by sending an unterminated string as one of the\n> extension names, or by sending a huge extension number). When I test\n> that against a client on my machine, it churns CPU and memory waiting\n> for the end of a message that will never come, even though it had\n> already decided that the maximum length of the message should have been\n> less than 2K.\n> \n> Put another way, why do we loop around and poll for more data when we\n> hit the end of the connection buffer, if we've already checked at this\n> point that we should have the entire message buffered locally?\n\nIsn't that the same behavior for other message types? I don't see \nanything in the handling of the early 'E' and 'R' messages that would \nhandle this. 
If we want to address this, maybe this should be handled \nin the polling loop before we pass off the input buffer to the \nper-message-type handlers.\n\n\n\n", "msg_date": "Sun, 13 Nov 2022 10:21:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 11/13/22 01:21, Peter Eisentraut wrote:\n> On 11.11.22 23:28, Jacob Champion wrote:\n>> Put another way, why do we loop around and poll for more data when we\n>> hit the end of the connection buffer, if we've already checked at this\n>> point that we should have the entire message buffered locally?\n> \n> Isn't that the same behavior for other message types? I don't see \n> anything in the handling of the early 'E' and 'R' messages that would \n> handle this.\n\nI agree for the 'E' case. For 'R', I see the msgLength being passed down\nto pg_fe_sendauth().\n\n> If we want to address this, maybe this should be handled \n> in the polling loop before we pass off the input buffer to the \n> per-message-type handlers.\n\nI thought it was supposed to be handled by this code:\n\n>\t/*\n>\t * Can't process if message body isn't all here yet.\n>\t */\n>\tmsgLength -= 4;\n>\tavail = conn->inEnd - conn->inCursor;\n>\tif (avail < msgLength)\n>\t{\n>\t\t/*\n>\t\t * Before returning, try to enlarge the input buffer if\n>\t\t * needed to hold the whole message; see notes in\n>\t\t * pqParseInput3.\n>\t\t */\n>\t\tif (pqCheckInBufferSpace(conn->inCursor + (size_t) msgLength,\n>\t\t\t\t\t conn))\n>\t\t\tgoto error_return;\n>\t\t/* We'll come back when there is more data */\n>\t\treturn PGRES_POLLING_READING;\n>\t}\n\nBut after this block, we still treat EOF as if we need to get more data.\nIf we know that the message was supposed to be fully buffered, can we\njust avoid the return to the pooling loop altogether and error out\nwhenever we see EOF?\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Mon, 14 Nov 2022 
10:11:59 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 14.11.22 19:11, Jacob Champion wrote:\n>> If we want to address this, maybe this should be handled\n>> in the polling loop before we pass off the input buffer to the\n>> per-message-type handlers.\n> \n> I thought it was supposed to be handled by this code:\n> \n>> \t/*\n>> \t * Can't process if message body isn't all here yet.\n>> \t */\n>> \tmsgLength -= 4;\n>> \tavail = conn->inEnd - conn->inCursor;\n>> \tif (avail < msgLength)\n>> \t{\n>> \t\t/*\n>> \t\t * Before returning, try to enlarge the input buffer if\n>> \t\t * needed to hold the whole message; see notes in\n>> \t\t * pqParseInput3.\n>> \t\t */\n>> \t\tif (pqCheckInBufferSpace(conn->inCursor + (size_t) msgLength,\n>> \t\t\t\t\t conn))\n>> \t\t\tgoto error_return;\n>> \t\t/* We'll come back when there is more data */\n>> \t\treturn PGRES_POLLING_READING;\n>> \t}\n> \n> But after this block, we still treat EOF as if we need to get more data.\n> If we know that the message was supposed to be fully buffered, can we\n> just avoid the return to the pooling loop altogether and error out\n> whenever we see EOF?\n\nI agree this doesn't make sense together. Digging through the history, \nthis code is ancient and might have come about during the protocol 2/3 \ntransition. (Protocol 2 didn't have length fields in the message IIRC.)\n\nI think for the current code, the following would be an appropriate \nadjustment:\n\ndiff --git a/src/interfaces/libpq/fe-connect.c \nb/src/interfaces/libpq/fe-connect.c\nindex 746e9b4f1efc..d15fb96572d9 100644\n--- a/src/interfaces/libpq/fe-connect.c\n+++ b/src/interfaces/libpq/fe-connect.c\n@@ -3412,8 +3412,7 @@ PQconnectPoll(PGconn *conn)\n /* Get the type of request. 
*/\n if (pqGetInt((int *) &areq, 4, conn))\n {\n- /* We'll come back when there are more data */\n- return PGRES_POLLING_READING;\n+ goto error_return;\n }\n msgLength -= 4;\n\nAnd then the handling of the 'v' message in my patch would also be \nadjusted like that.\n\n\n\n", "msg_date": "Tue, 15 Nov 2022 11:18:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On Tue, Nov 15, 2022 at 2:19 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I think for the current code, the following would be an appropriate\n> adjustment:\n>\n> diff --git a/src/interfaces/libpq/fe-connect.c\n> b/src/interfaces/libpq/fe-connect.c\n> index 746e9b4f1efc..d15fb96572d9 100644\n> --- a/src/interfaces/libpq/fe-connect.c\n> +++ b/src/interfaces/libpq/fe-connect.c\n> @@ -3412,8 +3412,7 @@ PQconnectPoll(PGconn *conn)\n> /* Get the type of request. */\n> if (pqGetInt((int *) &areq, 4, conn))\n> {\n> - /* We'll come back when there are more data */\n> - return PGRES_POLLING_READING;\n> + goto error_return;\n> }\n> msgLength -= 4;\n>\n> And then the handling of the 'v' message in my patch would also be\n> adjusted like that.\n\nYes -- though that particular example may be dead code, since we\nshould have already checked that there are at least four more bytes in\nthe buffer.\n\n--Jacob\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:35:02 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" }, { "msg_contents": "On 16.11.22 19:35, Jacob Champion wrote:\n> On Tue, Nov 15, 2022 at 2:19 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> I think for the current code, the following would be an appropriate\n>> adjustment:\n>>\n>> diff --git a/src/interfaces/libpq/fe-connect.c\n>> b/src/interfaces/libpq/fe-connect.c\n>> index 
746e9b4f1efc..d15fb96572d9 100644\n>> --- a/src/interfaces/libpq/fe-connect.c\n>> +++ b/src/interfaces/libpq/fe-connect.c\n>> @@ -3412,8 +3412,7 @@ PQconnectPoll(PGconn *conn)\n>> /* Get the type of request. */\n>> if (pqGetInt((int *) &areq, 4, conn))\n>> {\n>> - /* We'll come back when there are more data */\n>> - return PGRES_POLLING_READING;\n>> + goto error_return;\n>> }\n>> msgLength -= 4;\n>>\n>> And then the handling of the 'v' message in my patch would also be\n>> adjusted like that.\n> \n> Yes -- though that particular example may be dead code, since we\n> should have already checked that there are at least four more bytes in\n> the buffer.\n\nI have committed this change and the adjusted original patch. Thanks.\n\n\n\n", "msg_date": "Thu, 17 Nov 2022 16:00:11 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq support for NegotiateProtocolVersion" } ]
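[Editorial note] The message-length concern raised in this thread — never reading past the declared length of a NegotiateProtocolVersion body — can be sketched in a simplified standalone form. This is not the actual libpq parser (libpq reads through `pqGetInt`/`pqGets` over its input buffer); the body layout assumed here follows the documented message format: a 4-byte big-endian newest-supported protocol version, a 4-byte count of unsupported protocol options, then that many NUL-terminated option names.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Simplified, bounds-checked parse of a NegotiateProtocolVersion body.
 * buf/msg_len cover only the body (the length word already consumed).
 * Returns 0 on success, -1 on a malformed (truncated) message.
 */
static int
parse_negotiate(const uint8_t *buf, size_t msg_len,
				uint32_t *their_version, uint32_t *num_options)
{
	size_t		pos;
	uint32_t	i;

	if (msg_len < 8)			/* need two 4-byte big-endian integers */
		return -1;

	*their_version = ((uint32_t) buf[0] << 24) | ((uint32_t) buf[1] << 16) |
		((uint32_t) buf[2] << 8) | (uint32_t) buf[3];
	*num_options = ((uint32_t) buf[4] << 24) | ((uint32_t) buf[5] << 16) |
		((uint32_t) buf[6] << 8) | (uint32_t) buf[7];

	pos = 8;
	for (i = 0; i < *num_options; i++)
	{
		/* Search for the terminator only within the declared length. */
		const uint8_t *nul = memchr(buf + pos, '\0', msg_len - pos);

		if (nul == NULL)		/* option name runs past the message end */
			return -1;
		pos = (size_t) (nul - buf) + 1;
	}
	return 0;
}
```

A server that sends an unterminated option name or an oversized option count then fails fast with -1, instead of making the client poll for bytes that will never arrive.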
[ { "msg_contents": "Hi:\n\nWhen I was working on another task, the following case caught my mind.\n\ncreate table t1(a int, b int, c int);\ncreate table t2(a int, b int, c int);\ncreate table t3(a int, b int, c int);\n\nexplain (costs off) select * from t1\nwhere exists (select 1 from t2\n where exists (select 1 from t3\n where t3.c = t1.c\n and t2.b = t3.b)\nand t2.a = t1.a);\n\n\nI got the plan like this:\n\n QUERY PLAN\n-----------------------------------\n Hash Semi Join\n Hash Cond: (t1.a = t2.a)\n Join Filter: (hashed SubPlan 2)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n SubPlan 2\n -> Seq Scan on t3\n(8 rows)\n\nNote we CAN'T pull up the inner sublink which produced the SubPlan 2.\n\n\nI traced the reason is after we pull up the outer sublink, we got:\n\nselect * from t1 semi join t2 on t2.a = t1.a AND\nexists (select 1 from t3\n where t3.c = t1.c\n and t2.b = t3.b);\n\nLater we tried to pull up the EXISTS sublink to t1 OR t2 *separately*, since\nthis subselect referenced to t1 *AND* t2, so we CAN'T pull up the sublink. I\nam thinking why we have to pull up it t1 OR t2 rather than JoinExpr(t1, t2),\nI think the latter one is better.\n\nSo I changed the code like this, I got the plan I wanted and 'make\ninstallcheck' didn't find any exception.\n\n\n QUERY PLAN\n------------------------------------------------\n Hash Semi Join\n Hash Cond: ((t2.b = t3.b) AND (t1.c = t3.c))\n -> Hash Semi Join\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t1\n -> Hash\n -> Seq Scan on t2\n -> Hash\n -> Seq Scan on t3\n(9 rows)\n\n@@ -553,10 +553,10 @@ pull_up_sublinks_qual_recurse(PlannerInfo *root, Node\n*node,\n */\n j->quals = pull_up_sublinks_qual_recurse(root,\n j->quals,\n- &j->larg,\n- available_rels1,\n- &j->rarg,\n- child_rels);\n+ jtlink1,\n+ bms_union(available_rels1, child_rels),\n+ NULL,\n+ NULL);\n /* Return NULL representing constant TRUE */\n return NULL;\n }\n\nAny feedback is welcome.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Thu, 13 Oct 2022 16:45:31 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Question about pull_up_sublinks_qual_recurse" }, { "msg_contents": ">\n> Later we tried to pull up the EXISTS sublink to t1 OR t2 *separately*,\n> since\n> this subselect referenced to t1 *AND* t2, so we CAN'T pull up the\n> sublink. 
Iam thinking why we have to pull up it t1 OR t2 rather than JoinExpr(t1, t2),I think the latter one is better.So I changed the code like this,  I got the plan I wanted and 'makeinstallcheck' didn't find any exception.                   QUERY PLAN------------------------------------------------ Hash Semi Join   Hash Cond: ((t2.b = t3.b) AND (t1.c = t3.c))   ->  Hash Semi Join         Hash Cond: (t1.a = t2.a)         ->  Seq Scan on t1         ->  Hash               ->  Seq Scan on t2   ->  Hash         ->  Seq Scan on t3(9 rows)@@ -553,10 +553,10 @@ pull_up_sublinks_qual_recurse(PlannerInfo *root, Node *node, \t\t\t\t */ \t\t\t\tj->quals = pull_up_sublinks_qual_recurse(root, \t\t\t\t\t\t\t\t\t\t\t\t\t\t j->quals,-\t\t\t\t\t\t\t\t\t\t\t\t\t\t &j->larg,-\t\t\t\t\t\t\t\t\t\t\t\t\t\t available_rels1,-\t\t\t\t\t\t\t\t\t\t\t\t\t\t &j->rarg,-\t\t\t\t\t\t\t\t\t\t\t\t\t\t child_rels);+\t\t\t\t\t\t\t\t\t\t\t\t\t\t jtlink1,+\t\t\t\t\t\t\t\t\t\t\t\t\t\t bms_union(available_rels1, child_rels),+\t\t\t\t\t\t\t\t\t\t\t\t\t\t NULL,+\t\t\t\t\t\t\t\t\t\t\t\t\t\t NULL); \t\t\t\t/* Return NULL representing constant TRUE */ \t\t\t\treturn NULL; \t\t\t}Any feedback is welcome. -- Best RegardsAndy Fan", "msg_date": "Thu, 13 Oct 2022 16:45:31 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Question about pull_up_sublinks_qual_recurse" }, { "msg_contents": ">\n> Later we tried to pull up the EXISTS sublink to t1 OR t2 *separately*,\n> since\n> this subselect referenced to t1 *AND* t2, so we CAN'T pull up the\n> sublink. 
I\n> am thinking why we have to pull up it t1 OR t2 rather than JoinExpr(t1,\n> t2),\n> I think the latter one is better.\n>\n\nAfter some more self review, I find my proposal has the following side\neffects.\n\n select * from t1\n where exists (select 1 from t2\n where exists (select 1 from t3\n where t3.c = t1.c)\n and t2.a = t1.a);\n\nIn the above example, the innermost sublink will be joined with\nSemiJoin (t1 t2) in the patched version, but joined with t1 in the current\nmaster. However, even if we set the JoinTree with\nSemiJoin(SemiJoin(t1 t2), t3), the join reorder functions can generate a\npath which joins t1 with t3 first and then t2 still. So any hint about this\npatch's self-review is welcome.\n\n-- \nBest Regards\nAndy Fan
", "msg_date": "Fri, 14 Oct 2022 16:07:21 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pull_up_sublinks_qual_recurse" }, { "msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> After some more self review, I find my proposal has the following side\n> effects.\n\nYeah, I do not think this works at all. The mechanism as it stands\nright now is that we can insert pulled-up semijoins above j->larg\nif they only need variables from relations in j->larg, and we can\ninsert them above j->rarg if they only need variables from relations\nin j->rarg. You can't just ignore that distinction and insert them\nsomewhere further up the tree. Per the comment in\npull_up_sublinks_jointree_recurse:\n\n * Now process qual, showing appropriate child relids as available,\n * and attach any pulled-up jointree items at the right place. In the\n * inner-join case we put new JoinExprs above the existing one (much\n * as for a FromExpr-style join). In outer-join cases the new\n * JoinExprs must go into the nullable side of the outer join. The\n * point of the available_rels machinations is to ensure that we only\n * pull up quals for which that's okay.\n\nIf the pulled-up join doesn't go into the nullable side of the upper\njoin then you've changed semantics. In this case, it'd amount to\nreassociating a semijoin that was within the righthand side of another\nsemijoin to above that other semijoin. The discussion of outer join\nreordering in optimizer/README says that that doesn't work, and while\nI'm too lazy to construct an example right now, I believe it's true.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Oct 2022 15:27:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question about pull_up_sublinks_qual_recurse" }, { "msg_contents": "Hi Tom:\n\nThanks for your reply! 
I have self reviewed the below message at 3 different\ntime periods to prevent from too inaccurate replies. It may be more detailed\nthan it really needed, but it probably can show where I am lost.\n\nOn Sat, Oct 15, 2022 at 3:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> If the pulled-up join doesn't go into the nullable side of the upper\n> join then you've changed semantics. In this case, it'd amount to\n\nreassociating a semijoin that was within the righthand side of another\n> semijoin to above that other semijoin.\n\n\nI understand your reply as:\n\nselect * from t1 *left join* t2 on exists (select 1 from t3 where t3.a =\nt2.a);\n= select * from t1 *left join* (t2 semi join t3 on t3.a = t2.a) on true;\n-- go to nullable side\n!= select * from (t1 *left join* t2 on true) semi join t3 on (t3.a =\nt2.a); -- go to above the JoinExpr\n\nI CAN follow the above. And for this case it is controlled by below code:\n\npull_up_sublinks_qual_recurse\n\nswitch (j->jointype)\n{\n case JOIN_INNER:\n ...\n case JOIN_LEFT:\n j->quals = pull_up_sublinks_qual_recurse(root, j->quals,\n\n &j->rarg,\n\n rightrelids,\n\n NULL, NULL);\n break;\n ...\n}\n\nand I didn't change this. My question is could we assume\n\nA *semijoin* B ON EXISTS (SELECT 1 FROM C on (Pbc))\n= (A *semijoin* (B *semijoin* C on (Pbc))) on TRUE. (current master did)\n= (A *semijoin* B ON true) *semijoin* C on (Pbc) (my current\nthinking)\n\nNote that there is no 'left outer join' at this place. Since there are too\nmany places called pull_up_sublinks_qual_recurse, to make things\nless confused, I prepared a patch for this one line change to show where\nexactly I changed (see patch 2); I think this is the first place I lost.\n\nThe discussion of outer join\n> reordering in optimizer/README says that that doesn't work,\n\n\nI think you are talking about the graph \"Valid OUTER JOIN Optimizations\".\nI can follow until below.\n\n\"\nSEMI joins work a little bit differently. 
A semijoin can be reassociated\ninto or out of the lefthand side of another semijoin, left join, or\nantijoin, but not into or out of the righthand side. ..\n\"\nI am unclear why\n (A semijoin B on (Pab)) semijoin C on (Pbc)\n!= A semijoin (B semijoin C on (Pbc)) on (Pab);\n\nSeems both return rows from A which match both semijoin (Pab) and\n(Pbc). or I misunderstand the above words in the first place?\n\nAt last, when I checked optimizer/README, it looks like we used\na 'nullable side' while it should be 'nonnullable side'? see patch 1\nfor details.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 17 Oct 2022 21:43:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pull_up_sublinks_qual_recurse" }, { "msg_contents": "On Sat, Oct 15, 2022 at 3:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > After some more self review, I find my proposal has the following side\n> > effects.\n>\n> Yeah, I do not think this works at all. ....\n\nThe discussion of outer join\n> reordering in optimizer/README says that that doesn't work, and while\n> I'm too lazy to construct an example right now, I believe it's true.\n>\n\nI came to this topic again recently and have finally figured out the\nreason. 
It looks to me that semi join is slightly different with outer\njoin in this case.\n\nThe following test case can show why we have to\npull_up_sublinks_qual_recurse to either LHS or RHS rather than the\nJoinExpr.\n\ncreate table t1(a int, b int, c int);\ncreate table t2(a int, b int, c int);\ncreate table t3(a int, b int, c int);\n\ninsert into t1 select 1, 1, 2;\ninsert into t2 select 1, 2, 1;\ninsert into t2 select 1, 1, 2;\ninsert into t3 select 1, 1, 2;\n\nselect * from t1\nwhere exists (select 1 from t2\n-- below references to t1 and t2 at the same time\nwhere exists (select 1 from t3\n where t1.c = t2.c and t2.b = t3.b)\nand t1.a = t2.a);\n\nwhich can be transformed to\n\nSELECT * FROM t1 SEMI JOIN t2\nON t1.a = t2.a\nAND exists (select 1 from t3\nwhere t1.c = t2.c\n and t2.b = t3.b)\n\nHere the semantics of the query is return the rows in T1 iff there is a\nrow in t2 matches the whole clause (t1.a = t2.a AND exists..);\n\nBut If we transform it to\n\nSELECT * FROM (t1 SEMI JOIN t2\nON t1.a = t2.a) SEMI JOIN t3\non t1.c = t2.c and t2.b = t3.b;\n\nThe scan on T2 would stop if ONLY (t1.a = t2.a) matches and the following\nrows will be ignored. However the matched rows may doesn't match the\nexists clause! So in the above example, the correct result set will be 1\nrow. If we pull up the sublink above the JoinExpr, no row would be found.\n\nThe attached is just a comment and a test case to help understand why we\nhave to do things like this.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 21 Mar 2023 08:56:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Question about pull_up_sublinks_qual_recurse" } ]
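The 1-row-vs-0-rows behavior described in the final message of this thread can be checked with a small simulation. The following is a hypothetical Python sketch, not PostgreSQL code; the table contents and predicates are taken from the message above. It models a semi join's first-match-wins behavior to show why evaluating the correlated EXISTS clause as part of the t1/t2 semi-join condition returns one row, while pulling it above that semi join returns none.

```python
# Hypothetical model of the thread's example (not PostgreSQL code);
# rows are (a, b, c) tuples matching the INSERTs above.
t1 = [(1, 1, 2)]
t2 = [(1, 2, 1), (1, 1, 2)]
t3 = [(1, 1, 2)]

def nested_exists():
    # t1 SEMI JOIN t2 ON t1.a = t2.a AND EXISTS(... t3 ...): the EXISTS
    # clause is part of the semi-join condition, so the scan of t2
    # continues until some row satisfies the *whole* clause.
    return [r1 for r1 in t1
            if any(r1[0] == r2[0]
                   and any(r1[2] == r2[2] and r2[1] == r3[1] for r3 in t3)
                   for r2 in t2)]

def pulled_above_join():
    # (t1 SEMI JOIN t2 ON t1.a = t2.a) SEMI JOIN t3 ...: the inner semi
    # join stops at the first t2 row matching t1.a = t2.a, so later t2
    # rows that would have satisfied the EXISTS clause are never seen.
    out = []
    for r1 in t1:
        first_match = next((r2 for r2 in t2 if r1[0] == r2[0]), None)
        if first_match is not None and any(
                r1[2] == first_match[2] and first_match[1] == r3[1]
                for r3 in t3):
            out.append(r1)
    return out

print(len(nested_exists()), len(pulled_above_join()))  # 1 0
```

With these rows, t2's first match on `t1.a = t2.a` is (1, 2, 1), which fails `t1.c = t2.c`, so the pulled-up form loses the row that the second t2 tuple would have qualified — exactly the failure mode the message describes.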
[ { "msg_contents": "Hi hackers!\n\nheap_xlog_visible is not bumping heap page LSN when setting all-visible \nflag in it.\nThere is long comment explaining it:\n\n         /*\n          * We don't bump the LSN of the heap page when setting the \nvisibility\n          * map bit (unless checksums or wal_hint_bits is enabled, in which\n          * case we must), because that would generate an unworkable \nvolume of\n          * full-page writes.  This exposes us to torn page hazards, but \nsince\n          * we're not inspecting the existing page contents in any way, we\n          * don't care.\n          *\n          * However, all operations that clear the visibility map bit \n*do* bump\n          * the LSN, and those operations will only be replayed if the \nXLOG LSN\n          * follows the page LSN.  Thus, if the page LSN has advanced \npast our\n          * XLOG record's LSN, we mustn't mark the page all-visible, because\n          * the subsequent update won't be replayed to clear the flag.\n          */\n\nBut it still not clear for me that not bumping LSN in this place is \ncorrect if wal_log_hints is set.\nIn this case we will have VM page with larger LSN than heap page, \nbecause visibilitymap_set\nbumps LSN of VM page. It means that in theory after recovery we may have \npage marked as all-visible in VM,\nbut not having PD_ALL_VISIBLE  in page header. And it violates VM \nconstraint:\n\n  * When we *set* a visibility map during VACUUM, we must write WAL. \nThis may\n  * seem counterintuitive, since the bit is basically a hint: if it is \nclear,\n  * it may still be the case that every tuple on the page is visible to all\n  * transactions; we just don't know that for certain.  The difficulty \nis that\n  * there are two bits which are typically set together: the \nPD_ALL_VISIBLE bit\n  * on the page itself, and the visibility map bit.  
If a crash occurs \nafter the\n  * visibility map page makes it to disk and before the updated heap \npage makes\n  * it to disk, redo must set the bit on the heap page.  Otherwise, the next\n  * insert, update, or delete on the heap page will fail to realize that the\n  * visibility map bit must be cleared, possibly causing index-only scans to\n  * return wrong answers.\n\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:50:37 +0300", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "Lack of PageSetLSN in heap_xlog_visible" }, { "msg_contents": "On Thu, 2022-10-13 at 12:50 +0300, Konstantin Knizhnik wrote:\n>          /*\n>           * We don't bump the LSN of the heap page when setting the \n> visibility\n>           * map bit (unless checksums or wal_hint_bits is enabled, in\n> which\n>           * case we must), because that would generate an unworkable \n> volume of\n>           * full-page writes.\n\nIt clearly says there that it must set the page LSN, but I don't see\nwhere that's happening. It seems to go all the way back to the original\nchecksums commit, 96ef3b8ff1.\n\nI can reproduce a case where a replica ends up with a different page\nheader than the primary (checksums enabled):\n\n Primary:\n create extension pageinspect;\n create table t(i int) with (autovacuum_enabled=off);\n insert into t values(0);\n\n Shut down and restart primary and replica.\n\n Primary:\n insert into t values(1);\n vacuum t;\n\n Crash replica and let it recover.\n\n Shut down and restart primary and replica.\n\n Primary:\n select * from page_header(get_raw_page('t', 0));\n\n Replica:\n select * from page_header(get_raw_page('t', 0));\n\nThe LSN on the replica is lower, but the flags are the same\n(PD_ALL_VISIBLE set). That's a problem, right? 
The checksums are valid\non both, though.\n\nIt may violate our torn page protections for checksums, as well, but I\ncouldn't construct a scenario for that because recovery can only create\nrestartpoints at certain times.\n\n> But it still not clear for me that not bumping LSN in this place is \n> correct if wal_log_hints is set.\n> In this case we will have VM page with larger LSN than heap page, \n> because visibilitymap_set\n> bumps LSN of VM page. It means that in theory after recovery we may\n> have \n> page marked as all-visible in VM,\n> but not having PD_ALL_VISIBLE  in page header. And it violates VM \n> constraint:\n\nI'm not quite following this scenario. If the heap page has a lower LSN\nthan the VM page, how could we recover to a point where the VM bit is\nset but the heap flag isn't? And what does it have to do with\nwal_log_hints/checksums?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:49:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Lack of PageSetLSN in heap_xlog_visible" }, { "msg_contents": "On Thu, 2022-10-13 at 12:49 -0700, Jeff Davis wrote:\n\n> It may violate our torn page protections for checksums, as well...\n\nI could not reproduce a problem here, but I believe one exists when\nchecksums are enabled, because it bypasses the protections of\nUpdateMinRecoveryPoint(). By not updating the page's LSN, it would\nallow the page to be flushed (and torn) without updating the minimum\nrecovery point. That would allow the administrator to subsequently do a\nPITR or standby promotion while there's a torn page on disk.\n\nI'm considering this a theoretical risk because for a page tear to be\nconsequential in this case, it would need to happen between the\nchecksum and the flags; and I doubt that's a real possibility. But\nuntil we formalize that declaration, then this problem should be fixed.\n\nPatch attached. 
I also fixed:\n\n * The comment in heap_xlog_visible() says that not updating the page\nLSN avoids full page writes; but the causation is backwards: avoiding\nfull page images requires that we don't update the page's LSN.\n * Also in heap_xlog_visible(), there are comments and a branch\nleftover from before commit f8f4227976 which don't seem to be necessary\nany more.\n * In visibilitymap_set(), I clarified that we *must* not set the page\nLSN of the heap page if no full page image was taken.\n\n\n> > But it still not clear for me that not bumping LSN in this place is\n> > correct if wal_log_hints is set.\n> > In this case we will have VM page with larger LSN than heap page, \n> > because visibilitymap_set\n> > bumps LSN of VM page. It means that in theory after recovery we may\n> > have \n> > page marked as all-visible in VM,\n> > but not having PD_ALL_VISIBLE  in page header. And it violates VM \n> > constraint:\n> \n> I'm not quite following this scenario. If the heap page has a lower\n> LSN\n> than the VM page, how could we recover to a point where the VM bit is\n> set but the heap flag isn't?\n\nI still don't understand this problem scenario.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Thu, 10 Nov 2022 17:20:35 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Lack of PageSetLSN in heap_xlog_visible" }, { "msg_contents": "On 11.11.2022 03:20, Jeff Davis wrote:\n> On Thu, 2022-10-13 at 12:49 -0700, Jeff Davis wrote:\n>\n>> It may violate our torn page protections for checksums, as well...\n> I could not reproduce a problem here, but I believe one exists when\n> checksums are enabled, because it bypasses the protections of\n> UpdateMinRecoveryPoint(). By not updating the page's LSN, it would\n> allow the page to be flushed (and torn) without updating the minimum\n> recovery point. 
That would allow the administrator to subsequently do a\n> PITR or standby promotion while there's a torn page on disk.\n>\n> I'm considering this a theoretical risk because for a page tear to be\n> consequential in this case, it would need to happen between the\n> checksum and the flags; and I doubt that's a real possibility. But\n> until we formalize that declaration, then this problem should be fixed.\n>\n> Patch attached. I also fixed:\n>\n> * The comment in heap_xlog_visible() says that not updating the page\n> LSN avoids full page writes; but the causation is backwards: avoiding\n> full page images requires that we don't update the page's LSN.\n> * Also in heap_xlog_visible(), there are comments and a branch\n> leftover from before commit f8f4227976 which don't seem to be necessary\n> any more.\n> * In visibilitymap_set(), I clarified that we *must* not set the page\n> LSN of the heap page if no full page image was taken.\n>\n>\n>>> But it still not clear for me that not bumping LSN in this place is\n>>> correct if wal_log_hints is set.\n>>> In this case we will have VM page with larger LSN than heap page,\n>>> because visibilitymap_set\n>>> bumps LSN of VM page. It means that in theory after recovery we may\n>>> have\n>>> page marked as all-visible in VM,\n>>> but not having PD_ALL_VISIBLE  in page header. And it violates VM\n>>> constraint:\n>> I'm not quite following this scenario. 
If the heap page has a lower\n>> LSN\n>> than the VM page, how could we recover to a point where the VM bit is\n>> set but the heap flag isn't?\n> I still don't understand this problem scenario.\n>\n>\n\nI am sorry that I have reported the issue and then didn't reply.\nYes, you are right: my original concerns that it may cause problems with \nrecovery at replica are not correct.\nNot updating page LSN may just force it's reconstruction.\nI also not sure that it can cause problems with checksums - page is \nmarked as dirty in any case.\nYes, content and checksum of the page will be different at master and \nreplica. It may be a problem for recovery pages from replica.\n\nWhen it really be critical - is incremental backup (like pg_probackup) \nwhch looks at page LSN to determine moment of page modification.\n\nAnd definitely it is critical for Neon, because LSN of page \nreconstructed by pageserver is different from one expected by compute node.\n\nMay be I missing something, but I do not see any penalty from setting \npage LSN here.\nSo even if there is not obvious scenario of failure, I still thing that \nit should be fixed. Breaking invariants is always bad thing\nand there are should be very strong arguments for doing it. And I do not \nsee them here.\n\nSo it will be great if your patch can be committed.\n\n\n", "msg_date": "Fri, 11 Nov 2022 12:43:13 +0200", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "Re: Lack of PageSetLSN in heap_xlog_visible" }, { "msg_contents": "On Fri, 2022-11-11 at 12:43 +0200, Konstantin Knizhnik wrote:\n> Yes, you are right: my original concerns that it may cause problems\n> with \n> recovery at replica are not correct.\n\nGreat, thank you for following up.\n\n> I also not sure that it can cause problems with checksums - page is \n> marked as dirty in any case.\n> Yes, content and checksum of the page will be different at master and\n> replica. 
It may be a problem for recovery pages from replica.\n\nIt could cause a theoretical problem: during recovery, the page could\nbe flushed out before minRecoveryPoint is updated, and while that's\nhappening, a crash could tear it. Then, a subsequent partial recovery\n(e.g. PITR) could recover without fixing the torn page.\n\nThat would not be a practical problem, because it would require a tear\nbetween two fields of the page header, which I don't think is possible.\n\n> When it really be critical - is incremental backup (like\n> pg_probackup) \n> whch looks at page LSN to determine moment of page modification.\n> \n> And definitely it is critical for Neon, because LSN of page \n> reconstructed by pageserver is different from one expected by compute\n> node.\n> \n> May be I missing something, but I do not see any penalty from setting\n> page LSN here.\n> So even if there is not obvious scenario of failure, I still thing\n> that \n> it should be fixed. Breaking invariants is always bad thing\n> and there are should be very strong arguments for doing it. And I do\n> not \n> see them here.\n\nAgreed, thank you for the report!\n\nCommitted d6a3dbe14f and backpatched through version 11.\n\nAlso committed an unrelated cleanup patch in the same area (3eb8eeccbe)\nand a README patch (97c61f70d1) to master. The README patch doesn't\nguarantee that things won't change in the future, but the behavior it\ndescribes has been true for quite a while now.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Sat, 12 Nov 2022 10:26:19 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Lack of PageSetLSN in heap_xlog_visible" } ]
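The replay rule quoted at the start of this thread — operations that clear the visibility-map bit bump the LSN, and records are replayed only if the WAL record's LSN follows the page LSN — and the page-header divergence Jeff Davis reproduced can be sketched with a toy model. This is hypothetical Python, not PostgreSQL code; the LSN values are arbitrary integers, the flag constant is illustrative, and it assumes checksums are enabled so the primary stamps the heap page LSN when setting the all-visible bit.

```python
# Toy model of LSN-gated redo (hypothetical, not PostgreSQL code).
PD_ALL_VISIBLE = 0x04

def redo_set_visible(page, record_lsn, bump_lsn):
    # Mirror of the replay rule: apply the record only if the page has
    # not already been modified past this WAL position.
    if record_lsn <= page["lsn"]:
        return
    page["flags"] |= PD_ALL_VISIBLE
    if bump_lsn:                  # what adding PageSetLSN changes
        page["lsn"] = record_lsn

primary = {"lsn": 100, "flags": 0}
replica_old = {"lsn": 100, "flags": 0}   # replay without PageSetLSN
replica_new = {"lsn": 100, "flags": 0}   # replay with the fix

# VACUUM sets all-visible and emits a record at LSN 200; with checksums
# enabled the primary stamps the heap page as well.
primary["flags"] |= PD_ALL_VISIBLE
primary["lsn"] = 200

redo_set_visible(replica_old, 200, bump_lsn=False)
redo_set_visible(replica_new, 200, bump_lsn=True)

# Flags agree everywhere, but only the fixed replica matches the
# primary's page header LSN -- the divergence seen via page_header().
print(replica_old["lsn"], replica_new["lsn"], primary["lsn"])  # 100 200 200
```

The stale replica LSN is also what lets the page be flushed without pushing the minimum recovery point forward, which is the theoretical torn-page hazard discussed above.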
[ { "msg_contents": "Hi community,\n\nI have a problem with pg_upgrade. Tested from 14.5 to 15.0 rc2 when the\ndatabase contains our extension with one new type. Using pg_dump & restore\nworks well.\n\nWe made a workaround extension for some usage in a javascript library that\ncontains a new type that represents bigint as text. So something like auto\nconversion from SELECT (2^32)::text -> bigint when data is stored and\n(2^32) -> text when data is retrieved.\n\nI'm not sure if this is a postgresql bug or we have something wrong with\nthe cast in our extension, so that's why I'm writing here.\n\nHere is the output of pg_upgrade:\n(our extension name is lbuid, our type public.lbuid)\n\ncommand: \"/usr/pgsql-15/bin/pg_dump\" --host /tmp/pg_upgrade_log --port\n50432 --username postgres --schema-only --quote-all-identifiers\n--binary-upgrade --format=custom\n--file=\"/home/pgsql/data_new/pg_upgrade_output.d/20221013T104924.054/\ndump/pg_upgrade_dump_16385.custom\" 'dbname=lbstat' >>\n\"/home/pgsql/data_new/pg_upgrade_output.d/20221013T104924.054/log/pg_upgrade_dump_16385.log\"\n 2>&1\n\n\ncommand: \"/usr/pgsql-15/bin/pg_restore\" --host /tmp/pg_upgrade_log --port\n50432 --username postgres --create --exit-on-error --verbose --dbname\ntemplate1\n\"/home/pgsql/data_new/pg_upgrade_output.d/20221013T104924.054/dump/pg_upgrade_dump_1\n6385.custom\" >>\n\"/home/pgsql/data_new/pg_upgrade_output.d/20221013T104924.054/log/pg_upgrade_dump_16385.log\"\n 2>&1\npg_restore: connecting to database for restore\npg_restore: creating DATABASE \"lbstat\"\npg_restore: connecting to new database \"lbstat\"\npg_restore: creating DATABASE PROPERTIES \"lbstat\"\npg_restore: connecting to new database \"lbstat\"\npg_restore: creating pg_largeobject \"pg_largeobject\"\npg_restore: creating SCHEMA \"public\"\npg_restore: creating COMMENT \"SCHEMA \"public\"\"\npg_restore: creating EXTENSION \"lbuid\"\npg_restore: creating COMMENT \"EXTENSION \"lbuid\"\"\npg_restore: creating SHELL TYPE \"public.lbuid\"\npg_restore: 
creating FUNCTION \"public.lbuid_in(\"cstring\")\"\npg_restore: creating FUNCTION \"public.lbuid_out(\"public\".\"lbuid\")\"\npg_restore: creating TYPE \"public.lbuid\"\npg_restore: creating CAST \"CAST (integer AS \"public\".\"lbuid\")\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 3751; 2605 16393 CAST CAST (integer AS\n\"public\".\"lbuid\") (no owner)\npg_restore: error: could not execute query: ERROR: return data type of\ncast function must match or be binary-coercible to target data type\nCommand was: CREATE CAST (integer AS \"public\".\"lbuid\") WITH FUNCTION\n\"pg_catalog\".\"int8\"(integer) AS IMPLICIT;\n\n-- For binary upgrade, handle extension membership the hard way\nALTER EXTENSION \"lbuid\" ADD CAST (integer AS \"public\".\"lbuid\");\n\n\nThe hint is good, but this cast is already a member of our extension:\n\nlbstat=# ALTER EXTENSION lbuid ADD CAST (integer AS public.lbuid);\nERROR: cast from integer to lbuid is already a member of extension \"lbuid\"\n\n\nThe database contains this:\n\nCREATE EXTENSION lbuid ;\nCREATE TABLE test(id lbuid);\nINSERT INTO test VALUES ('1344456644646645456');\n\nTested on our distribution based on CentOS 7.\n\nWhen I drop this cast from the extension manually and then manually restore\nit after pg_upgrade, the operation completes without failure.\n\nThe extension files are attached.\n\nThanks.\n\nBest regards. David T.\n(See attached file: lbuid.control)(See attached file: lbuid--0.1.0.sql)\n\n--\n-------------------------------------\nIng. David TUROŇ\nLinuxBox.cz, s.r.o.\n28. 
rijna 168, 709 01 Ostrava\n\ntel.: +420 591 166 224\nfax: +420 596 621 273\nmobil: +420 732 589 152\nwww.linuxbox.cz\n\nmobil servis: +420 737 238 656\nemail servis: servis@linuxbox.cz\n-------------------------------------", "msg_date": "Thu, 13 Oct 2022 14:40:34 +0200", "msg_from": "=?ISO-8859-2?Q?David_Turo=F2?= <david.turon@linuxbox.cz>", "msg_from_op": true, "msg_subject": "PG upgrade 14->15 fails - database contains our own extension" }, { "msg_contents": "On Thu, Oct 13, 2022 at 9:57 AM David Turoň <david.turon@linuxbox.cz> wrote:\n> pg_restore: creating TYPE \"public.lbuid\"\n> pg_restore: creating CAST \"CAST (integer AS \"public\".\"lbuid\")\"\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 3751; 2605 16393 CAST CAST (integer AS \"public\".\"lbuid\") (no owner)\n> pg_restore: error: could not execute query: ERROR: return data type of cast function must match or be binary-coercible to target data type\n> Command was: CREATE CAST (integer AS \"public\".\"lbuid\") WITH FUNCTION \"pg_catalog\".\"int8\"(integer) AS IMPLICIT;\n\nI think the error is complaining that the return type of\nint8(integer), which is bigint, needs to coercible WITHOUT FUNCTION to\nlbuid. Your extension contains such a cast, but at the point when the\nerror occurs, it hasn't been restored yet. That suggests that either\nthe cast didn't get included in the dump file, or it got included in\nthe wrong order. A quick test suggest the latter. 
If I execute your\nSQL file and the dump the database, I get:\n\nCREATE CAST (integer AS public.lbuid) WITH FUNCTION\npg_catalog.int8(integer) AS IMPLICIT;\nCREATE CAST (bigint AS public.lbuid) WITHOUT FUNCTION AS IMPLICIT;\nCREATE CAST (public.lbuid AS bigint) WITHOUT FUNCTION AS IMPLICIT;\n\nThat's not a valid dump ordering, and if I drop the casts and try to\nrecreate them that way, it fails in the same way you saw.\n\nMy guess is that this is a bug in Tom's commit\nb55f2b6926556115155930c4b2d006c173f45e65, \"Adjust pg_dump's priority\nordering for casts.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:27:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG upgrade 14->15 fails - database contains our own extension" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> CREATE CAST (integer AS public.lbuid) WITH FUNCTION\n> pg_catalog.int8(integer) AS IMPLICIT;\n> CREATE CAST (bigint AS public.lbuid) WITHOUT FUNCTION AS IMPLICIT;\n> CREATE CAST (public.lbuid AS bigint) WITHOUT FUNCTION AS IMPLICIT;\n\n> That's not a valid dump ordering, and if I drop the casts and try to\n> recreate them that way, it fails in the same way you saw.\n\n> My guess is that this is a bug in Tom's commit\n> b55f2b6926556115155930c4b2d006c173f45e65, \"Adjust pg_dump's priority\n> ordering for casts.\"\n\nHmm ... I think it's a very ancient bug that somehow David has avoided\ntripping over up to now. Namely, that we require the bigint->lbuid\nimplicit cast to exist in order to make that WITH FUNCTION cast, but\nwe fail to record it as a dependency during CastCreate. 
So pg_dump\nis flying blind as to the required restoration order, and if it ever\nworked, you were just lucky.\n\nWe might be able to put in some kluge in pg_dump to make it less\nlikely to fail with existing DBs, but I think the true fix lies\nin adding that dependency.\n\n(I'm pretty skeptical about it being a good idea to have a set of\ncasts like this, but I don't suppose pg_dump is chartered to\neditorialize on that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 11:23:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG upgrade 14->15 fails - database contains our own extension" }, { "msg_contents": "I wrote:\n> Hmm ... I think it's a very ancient bug that somehow David has avoided\n> tripping over up to now.\n\nLooking closer, I don't see how b55f2b692 could have changed pg_dump's\nopinion of the order to sort these three casts in; that sort ordering\nlogic is old enough to vote. So I'm guessing that in fact this *never*\nworked. Perhaps this extension has never been through pg_upgrade before,\nor at least not with these casts?\n\n> We might be able to put in some kluge in pg_dump to make it less\n> likely to fail with existing DBs, but I think the true fix lies\n> in adding that dependency.\n\nI don't see any painless way to fix this in pg_dump, and I'm inclined\nnot to bother trying if it's not a regression. Better to spend the\neffort on the backend-side fix.\n\nOn the backend side, really anyplace that we consult IsBinaryCoercible\nduring DDL is at hazard. While there aren't a huge number of such\nplaces, there's certainly more than just CreateCast. I'm trying to\ndecide how much trouble it's worth going to there. 
I could be wrong,\nbut I think that only the cast-vs-cast case is really likely to be\nproblematic for pg_dump, given that it dumps casts pretty early now.\nSo it might be sufficient to fix that one case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:06:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG upgrade 14->15 fails - database contains our own extension" }, { "msg_contents": "I wrote:\n> We might be able to put in some kluge in pg_dump to make it less\n> likely to fail with existing DBs, but I think the true fix lies\n> in adding that dependency.\n\nHere's a draft patch for that. I'm unsure whether it's worth\nback-patching; even if we did, we couldn't guarantee that existing\ndatabases would have the additional pg_depend entries.\n\nIf we do only put it in HEAD, maybe we should break compatibility\nto the extent of changing IsBinaryCoercible's API rather than\ninventing a separate call. I'm still not excited about recording\nadditional dependencies elsewhere, but that path would leave us\nwith cleaner code if we eventually do that.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 13 Oct 2022 14:51:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG upgrade 14->15 fails - database contains our own extension" }, { "msg_contents": "Hi,\n\nI really appreciate your help and very quick response. And WOW, write patch\nfor this in few hours ...that's amazing!\n\n> Looking closer, I don't see how b55f2b692 could have changed pg_dump's\n> opinion of the order to sort these three casts in; that sort ordering\n> logic is old enough to vote. So I'm guessing that in fact this *never*\n> worked. Perhaps this extension has never been through pg_upgrade before,\n> or at least not with these casts?\n\nYes its new and I tested right now with upgrade from 9.6 to 15.0 rc2 with\nsame result. 
So this behavior is probably long time there, but extension is\nnew and not upgraded yet. And probably nobody have this \"strange\" idea.\n\n\n>(I'm pretty skeptical about it being a good idea to have a set of\ncasts like this, but I don't suppose pg_dump is chartered to\neditorialize on that.)\nYes, im not proud of the creation this workaround extension and I did what\nfrontend develepers asked me if it's possible. I don't expect a medal of\nhonor:)\n\nThe problem was when bigint was taken from DB as json and stored as number\nJS library cast number automaticaly to integer that cause problem.\n\nlbstat=# SELECT json_agg(test) FROM test;\n json_agg\n-----------------------\n [{\"id\":\"4294967296\"}]\n(1 row)\n\n-- ID was represnted now as text and JS library can use it and sent back\nwithout error. But for DB is still bigint.\n\nThis was automatic way to solve this problem without casting on all places\nto text. I tested and most things works well until upgrade test didn't\npass.\n\nThank you all.\n\nDavid T.\n\n--\n-------------------------------------\nIng. David TUROŇ\nLinuxBox.cz, s.r.o.\n28. rijna 168, 709 01 Ostrava\n\ntel.: +420 591 166 224\nfax: +420 596 621 273\nmobil: +420 732 589 152\nwww.linuxbox.cz\n\nmobil servis: +420 737 238 656\nemail servis: servis@linuxbox.cz\n-------------------------------------\n\n\n\nOd:\t\"Tom Lane\" <tgl@sss.pgh.pa.us>\nKomu:\t\"David Turoň\" <david.turon@linuxbox.cz>\nKopie:\t\"Robert Haas\" <robertmhaas@gmail.com>,\n pgsql-hackers@postgresql.org, \"Marian Krucina\"\n <marian.krucina@linuxbox.cz>\nDatum:\t13.10.2022 18:06\nPředmět:\tRe: PG upgrade 14->15 fails - database contains our own\n extension\n\n\n\nI wrote:\n> Hmm ... I think it's a very ancient bug that somehow David has avoided\n> tripping over up to now.\n\nLooking closer, I don't see how b55f2b692 could have changed pg_dump's\nopinion of the order to sort these three casts in; that sort ordering\nlogic is old enough to vote. 
So I'm guessing that in fact this *never*\nworked. Perhaps this extension has never been through pg_upgrade before,\nor at least not with these casts?\n\n> We might be able to put in some kluge in pg_dump to make it less\n> likely to fail with existing DBs, but I think the true fix lies\n> in adding that dependency.\n\nI don't see any painless way to fix this in pg_dump, and I'm inclined\nnot to bother trying if it's not a regression. Better to spend the\neffort on the backend-side fix.\n\nOn the backend side, really anyplace that we consult IsBinaryCoercible\nduring DDL is at hazard. While there aren't a huge number of such\nplaces, there's certainly more than just CreateCast. I'm trying to\ndecide how much trouble it's worth going to there. I could be wrong,\nbut I think that only the cast-vs-cast case is really likely to be\nproblematic for pg_dump, given that it dumps casts pretty early now.\nSo it might be sufficient to fix that one case.\n\n\t\t \t\t \t\t regards, tom lane", "msg_date": "Fri, 14 Oct 2022 07:38:29 +0200", "msg_from": "=?ISO-8859-2?Q?David_Turo=F2?= <david.turon@linuxbox.cz>", "msg_from_op": true, "msg_subject": "Re: PG upgrade 14->15 fails - database contains our own extension" } ]
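The motivation David gives in this thread — a JavaScript frontend mangling bigint ids once they arrive as JSON numbers — is a plain IEEE-754 limitation: JavaScript's only Number type is a double, which is exact for integers only up to 2^53. A quick Python sketch of that failure mode (the id value is made up purely for illustration, not taken from the thread):

```python
# JavaScript stores every number as an IEEE-754 double, exact for
# integers only up to 2**53. A bigint id just past that limit cannot
# survive a round trip through a JSON number:
big_id = 2**53 + 1                      # 9007199254740993, hypothetical id

assert float(big_id) == 2**53           # nearest double rounds to the even value
assert int(float(big_id)) != big_id     # the id comes back corrupted

# Serializing the id as a quoted string -- what the extension's casts
# arrange via bigint -> text -> json -- round-trips exactly:
assert int(str(big_id)) == big_id
```

This is why the extension returns bigint values as JSON strings; the actual bug discussed above is only that the implicit casts it creates depend on one another in a way pg_dump could not see.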
[ { "msg_contents": "So, is anybody planning to release pglogical for v15 ?\n\nThere are still a few things that one can do in pglogical but not in\nnative / built0in replication ...\n\n\nBest Regards\nHannu", "msg_date": "Thu, 13 Oct 2022 18:18:11 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Is anybody planning to release pglogical for v15 ?" } ]
[ { "msg_contents": "In trying to answer an SO question I ran across this:\n\nPostgres version 14.5\n\nselect 10^(-1 * 18);\n ?column?\n----------\n 1e-18\n\nselect 10^(-1 * 18::numeric);\n ?column?\n--------------------\n 0.0000000000000000\n\n\nSame for power:\n\nselect power(10, -18);\n power\n-------\n 1e-18\n(1 row)\n\nselect power(10, -18::numeric);\n power\n--------------------\n 0.0000000000000000\n\n\nWhy is the cast throwing off the result?\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 09:20:51 -0700", "msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>", "msg_from_op": true, "msg_subject": "Exponentiation confusion" }, { "msg_contents": "> On 13/10/2022 18:20 CEST Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> \n> In trying to answer an SO question I ran across this:\n> \n> Postgres version 14.5\n> \n> select 10^(-1 * 18);\n> ?column?\n> ----------\n> 1e-18\n> \n> select 10^(-1 * 18::numeric);\n> ?column?\n> --------------------\n> 0.0000000000000000\n> \n> \n> Same for power:\n> \n> select power(10, -18);\n> power\n> -------\n> 1e-18\n> (1 row)\n> \n> select power(10, -18::numeric);\n> power\n> --------------------\n> 0.0000000000000000\n> \n> \n> Why is the cast throwing off the result?\n\npower has two overloads: https://www.postgresql.org/docs/14/functions-math.html#id-1.5.8.9.6.2.2.19.1.1.1\n\nCalling power(numeric, numeric) is what I expect in that case instead of\ndowncasting the exponent argument to double precision, thus losing precision.\n\nselect\n pg_typeof(power(10, -18)),\n pg_typeof(power(10, -18::numeric));\n\n pg_typeof | pg_typeof \n------------------+-----------\n double precision | numeric\n(1 row)\n\nDetermining the right function is described in https://www.postgresql.org/docs/14/typeconv-func.html\n\n--\nErik\n\n\n", "msg_date": "Thu, 13 Oct 2022 18:53:27 +0200 (CEST)", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" 
}, { "msg_contents": "On 2022-10-13 09:20:51 -0700, Adrian Klaver wrote:\n> In trying to answer an SO question I ran across this:\n> \n> Postgres version 14.5\n> \nSame for 11.17. So it's been like that for some time, maybe forever.\n\n\n> select power(10, -18);\n> power\n> -------\n> 1e-18\n> (1 row)\n> \n> select power(10, -18::numeric);\n> power\n> --------------------\n> 0.0000000000000000\n> \n> \n> Why is the cast throwing off the result?\n\nIt seems that the number of decimals depends only on the first argument:\n\nhjp=> select power(10::numeric, -2::numeric);\n╔════════════════════╗\n║ power ║\n╟────────────────────╢\n║ 0.0100000000000000 ║\n╚════════════════════╝\n(1 row)\nhjp=> select power(10::numeric, -16::numeric);\n╔════════════════════╗\n║ power ║\n╟────────────────────╢\n║ 0.0000000000000001 ║\n╚════════════════════╝\n(1 row)\nhjp=> select power(10::numeric, -18::numeric);\n╔════════════════════╗\n║ power ║\n╟────────────────────╢\n║ 0.0000000000000000 ║\n╚════════════════════╝\n(1 row)\n\nhjp=> select power(10::numeric, 18::numeric);\n╔══════════════════════════════════════╗\n║ power ║\n╟──────────────────────────────────────╢\n║ 1000000000000000000.0000000000000000 ║\n╚══════════════════════════════════════╝\n(1 row)\n\nhjp=> select power(10::numeric(32,30), 18::numeric);\n╔════════════════════════════════════════════════════╗\n║ power ║\n╟────────────────────────────────────────────────────╢\n║ 1000000000000000000.000000000000000000000000000000 ║\n╚════════════════════════════════════════════════════╝\n(1 row)\nhjp=> select power(10::numeric(32,30), -16::numeric);\n╔══════════════════════════════════╗\n║ power ║\n╟──────────────────────────────────╢\n║ 0.000000000000000100000000000000 ║\n╚══════════════════════════════════╝\n(1 row)\n\n\nSo the number of decimals by default isn't sufficient to represent\n10^-18. You have to explicitely increase it.\n\n hp\n\n-- \n _ | Peter J. 
Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | hjp@hjp.at | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Thu, 13 Oct 2022 19:05:27 +0200", "msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "Erik Wienhold <ewie@ewie.name> writes:\n> On 13/10/2022 18:20 CEST Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n>> select power(10, -18::numeric);\n>> power\n>> --------------------\n>> 0.0000000000000000\n>> \n>> Why is the cast throwing off the result?\n\n> Calling power(numeric, numeric) is what I expect in that case instead of\n> downcasting the exponent argument to double precision, thus losing precision.\n\nAn inexact result isn't surprising, but it shouldn't be *that* inexact.\nIt looks to me like numeric.c's power_var_int() code path is setting the\nresult rscale without considering the possibility that the result will\nhave negative weight (i.e. be less than one). The main code path in\npower_var() does adjust for that, so for example\n\nregression=# select power(10, -18.00000001::numeric);\n power \n-------------------------------------\n 0.000000000000000000999999976974149\n(1 row)\n\nbut with an exact-integer exponent, not so much --- you just get 16 digits\nwhich isn't enough.\n\nI'm inclined to think that we should push the responsibility for choosing\nits rscale into power_var_int(), because internally that already does\nestimate the result weight, so with a little code re-ordering we won't\nneed duplicative estimates. Don't have time to work on that right now\nthough ... 
Dean, are you interested in fixing this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 13:16:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "On Thu, 13 Oct 2022 at 18:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I'm inclined to think that we should push the responsibility for choosing\n> its rscale into power_var_int(), because internally that already does\n> estimate the result weight, so with a little code re-ordering we won't\n> need duplicative estimates. Don't have time to work on that right now\n> though ... Dean, are you interested in fixing this?\n>\n\nOK, I'll take a look.\n\nThe most obvious thing to do is to try to make power_var_int() choose\nthe same result rscale as power_var() so that the results are\nconsistent regardless of whether the exponent is an integer.\n\nIt's worth noting, however, that that will cause in a *reduction* in\nthe output rscale rather than an increase in some cases, since the\npower_var_int() code path currently always chooses an rscale of at\nleast 16, whereas the other code path in power_var() uses the rscales\nof the 2 inputs, and produces a minimum of 16 significant digits,\nrather than 16 digits after the decimal point. 
For example:\n\nselect power(5.678, 18.00000001::numeric);\n power\n-------------------------\n 37628507689498.14987457\n(1 row)\n\nselect power(5.678, 18::numeric);\n power\n---------------------------------\n 37628507036041.8454541428979479\n(1 row)\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Oct 2022 20:07:08 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> The most obvious thing to do is to try to make power_var_int() choose\n> the same result rscale as power_var() so that the results are\n> consistent regardless of whether the exponent is an integer.\n\nYeah, I think we should try to end up with that.\n\n> It's worth noting, however, that that will cause in a *reduction* in\n> the output rscale rather than an increase in some cases, since the\n> power_var_int() code path currently always chooses an rscale of at\n> least 16, whereas the other code path in power_var() uses the rscales\n> of the 2 inputs, and produces a minimum of 16 significant digits,\n> rather than 16 digits after the decimal point.\n\nRight. I think this is not bad though. 
In a lot of cases (such\nas the example here) the current behavior is just plastering on\nuseless zeroes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:12:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "> On 13/10/2022 19:16 CEST Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Erik Wienhold <ewie@ewie.name> writes:\n> > On 13/10/2022 18:20 CEST Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> >> select power(10, -18::numeric);\n> >> power\n> >> --------------------\n> >> 0.0000000000000000\n> >> \n> >> Why is the cast throwing off the result?\n> \n> > Calling power(numeric, numeric) is what I expect in that case instead of\n> > downcasting the exponent argument to double precision, thus losing precision.\n> \n> An inexact result isn't surprising, but it shouldn't be *that* inexact.\n\nAh, now I see the problem. I saw a bunch of zeros but not that it's *all*\nzeros. Nevermind.\n\n--\nErik\n\n\n", "msg_date": "Thu, 13 Oct 2022 22:07:33 +0200 (CEST)", "msg_from": "Erik Wienhold <ewie@ewie.name>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "[Moving this to -hackers]\n\n> On 13/10/2022 18:20 CEST Adrian Klaver <adrian.klaver@aklaver.com> wrote:\n> > select power(10, -18::numeric);\n> > power\n> > --------------------\n> > 0.0000000000000000\n> >\n\nOn Thu, 13 Oct 2022 at 18:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> An inexact result isn't surprising, but it shouldn't be *that* inexact.\n>\n> I'm inclined to think that we should push the responsibility for choosing\n> its rscale into power_var_int(), because internally that already does\n> estimate the result weight, so with a little code re-ordering we won't\n> need duplicative estimates. Don't have time to work on that right now\n> though ... 
Dean, are you interested in fixing this?\n>\n\nHere's a patch along those lines, bringing power_var_int() more in\nline with neighbouring functions by having it choose its own result\nscale.\n\nIt was necessary to also move the overflow/underflow tests up, in\norder to avoid a potential integer overflow when deciding the rscale.\n\nLooking more closely at the upper limit of the overflow test, it turns\nout it was far too large. I'm not sure where the \"3 * SHRT_MAX\" came\nfrom, but I suspect it was just a thinko on my part, back in\n7d9a4737c2. I've replaced that with SHRT_MAX + 1, which kicks in much\nsooner without changing the actual maximum result allowed, which is <\n10^131072 (the absolute upper limit of the numeric type).\n\nThe first half the the underflow test condition \"f + 1 < -rscale\" goes\naway, since this is now being done before rscale is computed, and the\nchoice of rscale makes that condition false. In fact, the new choice\nof rscale now ensures that when sig_digits is computed further down,\nit is guaranteed to be strictly greater than 0, rather than merely\nbeing >= 0 as before, which is good.\n\nAs expected, various regression test results change, since the number\nof significant digits computed is now different, but I think the new\nresults look a lot better, and more consistent. I regenerated the\nnumeric_big test results by re-running the bc script and rounding to\nthe new output precisions, and the results from power_var_int()\nexactly match in every case. This already included a number of cases\nthat used to round to zero, and now produce much more reasonable\nresults.\n\nThe test cases where the result actually does round to zero now output\n1000 zeros after the decimal point. 
That looks a little messy, but I\nthink it's the right thing to do in fixed-point arithmetic -- it's\nconsistent with the fractional power case, and with exp(numeric),\nreflecting the fact that the result is zero to 1000 decimal places,\nwhilst not being exactly zero.\n\nOverall, I'm quite happy with these results. The question is, should\nthis be back-patched?\n\nIn the past, I think I've only back-patched numeric bug-fixes where\nthe digits output by the old code were incorrect or an error was\nthrown, not changes that resulted in a different number of digits\nbeing output, changing the precision of already-correct results.\nHowever, having 10.0^(-18) produce zero seems pretty bad, so my\ninclination is to back-patch, unless anyone objects.\n\nRegards,\nDean", "msg_date": "Tue, 18 Oct 2022 11:18:25 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "On Tue, Oct 18, 2022 at 6:18 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Overall, I'm quite happy with these results. The question is, should\n> this be back-patched?\n>\n> In the past, I think I've only back-patched numeric bug-fixes where\n> the digits output by the old code were incorrect or an error was\n> thrown, not changes that resulted in a different number of digits\n> being output, changing the precision of already-correct results.\n> However, having 10.0^(-18) produce zero seems pretty bad, so my\n> inclination is to back-patch, unless anyone objects.\n\nI don't think that back-patching is a very good idea. The bar for\nchanging query results should be super-high. 
Applications can depend\non the existing behavior even if it's wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 18 Oct 2022 15:18:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" }, { "msg_contents": "On Tue, 18 Oct 2022 at 20:18, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Oct 18, 2022 at 6:18 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > Overall, I'm quite happy with these results. The question is, should\n> > this be back-patched?\n> >\n> > In the past, I think I've only back-patched numeric bug-fixes where\n> > the digits output by the old code were incorrect or an error was\n> > thrown, not changes that resulted in a different number of digits\n> > being output, changing the precision of already-correct results.\n> > However, having 10.0^(-18) produce zero seems pretty bad, so my\n> > inclination is to back-patch, unless anyone objects.\n>\n> I don't think that back-patching is a very good idea. The bar for\n> changing query results should be super-high. Applications can depend\n> on the existing behavior even if it's wrong.\n>\n\nOK, on reflection, I think that makes sense. Applied to HEAD only.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 20 Oct 2022 10:18:34 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exponentiation confusion" } ]
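The core of the bug in this thread is easy to reproduce outside the server: if a power result is rounded to a fixed scale of 16 digits after the decimal point — effectively what the old power_var_int() path did — then 10^-18 rounds to exactly zero. A Python sketch with the decimal module (this mimics only the rounding behaviour, not PostgreSQL's actual numeric code):

```python
from decimal import Decimal, ROUND_HALF_EVEN

exact = Decimal(10) ** Decimal(-18)     # exactly 1E-18 in decimal arithmetic
assert exact == Decimal("1E-18")

# Quantize to 16 digits after the decimal point, the minimum rscale the
# old integer-exponent code path chose regardless of the result's weight:
sixteen_dp = Decimal(1).scaleb(-16)     # i.e. 1E-16
rounded = exact.quantize(sixteen_dp, rounding=ROUND_HALF_EVEN)
assert rounded == 0                     # the entire result is lost

# With a scale chosen from the result's weight, the value survives:
assert exact.quantize(Decimal(1).scaleb(-20)) == Decimal("1E-18")
```

Dean's fix makes power_var_int() choose its result scale from the estimated result weight, as the fractional-exponent path already did, so the output keeps significant digits rather than a fixed number of decimal places.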
[ { "msg_contents": "Hi,\nI was looking at combo_init in contrib/pgcrypto/px.c .\n\nThere is a memset() call following palloc0() - the call is redundant.\n\nPlease see the patch for the proposed change.\n\nThanks", "msg_date": "Thu, 13 Oct 2022 10:55:08 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "remove redundant memset() call" }, { "msg_contents": "On Thu, Oct 13, 2022 at 10:55:08AM -0700, Zhihong Yu wrote:\n> Hi,\n> I was looking at combo_init in contrib/pgcrypto/px.c .\n> \n> There is a memset() call following palloc0() - the call is redundant.\n> \n> Please see the patch for the proposed change.\n> \n> Thanks\n\n> diff --git a/contrib/pgcrypto/px.c b/contrib/pgcrypto/px.c\n> index 3b098c6151..d35ccca777 100644\n> --- a/contrib/pgcrypto/px.c\n> +++ b/contrib/pgcrypto/px.c\n> @@ -203,7 +203,6 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned klen,\n> \tif (klen > ks)\n> \t\tklen = ks;\n> \tkeybuf = palloc0(ks);\n> -\tmemset(keybuf, 0, ks);\n> \tmemcpy(keybuf, key, klen);\n> \n> \terr = px_cipher_init(c, keybuf, klen, ivbuf);\n\nUh, the memset() is ks length but the memcpy() is klen, and the above\ntest allows ks to be larger than klen.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:10:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "On Thu, Oct 13, 2022 at 12:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Oct 13, 2022 at 10:55:08AM -0700, Zhihong Yu wrote:\n> > Hi,\n> > I was looking at combo_init in contrib/pgcrypto/px.c .\n> >\n> > There is a memset() call following palloc0() - the call is redundant.\n> >\n> > Please see the patch for the proposed change.\n> >\n> > Thanks\n>\n> > diff --git a/contrib/pgcrypto/px.c b/contrib/pgcrypto/px.c\n> > index 3b098c6151..d35ccca777 100644\n> > --- a/contrib/pgcrypto/px.c\n> > +++ b/contrib/pgcrypto/px.c\n> > @@ -203,7 +203,6 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned\n> klen,\n> > if (klen > ks)\n> > klen = ks;\n> > keybuf = palloc0(ks);\n> > - memset(keybuf, 0, ks);\n> > memcpy(keybuf, key, klen);\n> >\n> > err = px_cipher_init(c, keybuf, klen, ivbuf);\n>\n> Uh, the memset() is ks length but the memcpy() is klen, and the above\n> test allows ks to be larger than klen.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Indecision is a decision. Inaction is an action. Mark Batterson\n>\n> Hi,\nthe memory has been zero'ed out by palloc0().\n\nmemcpy is not relevant w.r.t. 
resetting memory.\n\nCheers", "msg_date": "Thu, 13 Oct 2022 12:12:35 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "On Thu, Oct 13, 2022 at 12:12:35PM -0700, Zhihong Yu wrote:\n> On Thu, Oct 13, 2022 at 12:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Oct 13, 2022 at 10:55:08AM -0700, Zhihong Yu wrote:\n> > Hi,\n> > I was looking at combo_init in contrib/pgcrypto/px.c .\n> >\n> > There is a memset() call following palloc0() - the call is redundant.\n> >\n> > Please see the patch for the proposed change.\n> >\n> > Thanks\n> \n> > diff --git a/contrib/pgcrypto/px.c b/contrib/pgcrypto/px.c\n> > index 3b098c6151..d35ccca777 100644\n> > --- a/contrib/pgcrypto/px.c\n> > +++ b/contrib/pgcrypto/px.c\n> > @@ -203,7 +203,6 @@ combo_init(PX_Combo *cx, const uint8 *key, unsigned\n> klen,\n> >       if (klen > ks)\n> >               klen = ks;\n> >       keybuf = palloc0(ks);\n> > -     memset(keybuf, 0, ks);\n> >       memcpy(keybuf, key, klen);\n> > \n> >       err = px_cipher_init(c, keybuf, klen, ivbuf);\n> \n> Uh, the memset() is ks length but the memcpy() is klen, and the above\n> test allows ks to be larger than klen.\n> \n> --\n>   Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n>   EDB                                      https://enterprisedb.com\n> \n>   Indecision is a decision.  Inaction is an action.  Mark Batterson\n> \n> \n> Hi,\n> the memory has been zero'ed out by palloc0().\n> \n> memcpy is not relevant w.r.t. resetting memory.\n\nAh, good point.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:15:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "On Thu, Oct 13, 2022 at 03:15:13PM -0400, Bruce Momjian wrote:\n> On Thu, Oct 13, 2022 at 12:12:35PM -0700, Zhihong Yu wrote:\n>> the memory has been zero'ed out by palloc0().\n>> \n>> memcpy is not relevant w.r.t. resetting memory.\n> \n> Ah, good point.\n\nYeah, it looks like this was missed in ca7f8e2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:18:41 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "> On 13 Oct 2022, at 21:18, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Thu, Oct 13, 2022 at 03:15:13PM -0400, Bruce Momjian wrote:\n>> On Thu, Oct 13, 2022 at 12:12:35PM -0700, Zhihong Yu wrote:\n>>> the memory has been zero'ed out by palloc0().\n>>> \n>>> memcpy is not relevant w.r.t. resetting memory.\n>> \n>> Ah, good point.\n> \n> Yeah, it looks like this was missed in ca7f8e2.\n\nAgreed, it looks like I missed that one, I can't see any reason to keep it. 
Do\nyou want me to take care of it Bruce, and clean up after myself, or are you\nalready on it?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 21:40:42 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "On Thu, Oct 13, 2022 at 09:40:42PM +0200, Daniel Gustafsson wrote:\n> > On 13 Oct 2022, at 21:18, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > \n> > On Thu, Oct 13, 2022 at 03:15:13PM -0400, Bruce Momjian wrote:\n> >> On Thu, Oct 13, 2022 at 12:12:35PM -0700, Zhihong Yu wrote:\n> >>> the memory has been zero'ed out by palloc0().\n> >>> \n> >>> memcpy is not relevant w.r.t. resetting memory.\n> >> \n> >> Ah, good point.\n> > \n> > Yeah, it looks like this was missed in ca7f8e2.\n> \n> Agreed, it looks like I missed that one, I can't see any reason to keep it. Do\n> you want me to take care of it Bruce, and clean up after myself, or are you\n> already on it?\n\nYou can do it, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:59:43 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" }, { "msg_contents": "> On 13 Oct 2022, at 21:59, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Thu, Oct 13, 2022 at 09:40:42PM +0200, Daniel Gustafsson wrote:\n>>> On 13 Oct 2022, at 21:18, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>> \n>>> On Thu, Oct 13, 2022 at 03:15:13PM -0400, Bruce Momjian wrote:\n>>>> On Thu, Oct 13, 2022 at 12:12:35PM -0700, Zhihong Yu wrote:\n>>>>> the memory has been zero'ed out by palloc0().\n>>>>> \n>>>>> memcpy is not relevant w.r.t. 
resetting memory.\n>>>> \n>>>> Ah, good point.\n>>> \n>>> Yeah, it looks like this was missed in ca7f8e2.\n>> \n>> Agreed, it looks like I missed that one, I can't see any reason to keep it. Do\n>> you want me to take care of it Bruce, and clean up after myself, or are you\n>> already on it?\n> \n> You can do it, thanks.\n\nDone now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 23:57:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: remove redundant memset() call" } ]
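For readers outside the pgcrypto code: palloc0() returns already-zeroed memory, so the memset() the patch removes never changed anything, while the zero padding of the key buffer still matters when the supplied key is shorter than the cipher's key size. A rough Python analogue of combo_init()'s key handling (a sketch only, not the real C code; the key and size values are illustrative):

```python
ks = 32                        # cipher key size (illustrative value)
key = b"secret"                # caller-supplied key, shorter than ks
klen = min(len(key), ks)       # like: if (klen > ks) klen = ks;

keybuf = bytearray(ks)         # like palloc0(ks): already all zero bytes
assert keybuf == bytes(ks)     # nothing left for a memset() to do

keybuf[:klen] = key[:klen]     # like memcpy(keybuf, key, klen)
assert bytes(keybuf[:klen]) == key
assert keybuf[klen:] == bytes(ks - klen)   # tail stays zero-padded
```

The zero tail is exactly what the memcpy of only klen bytes relies on — which is Zhihong's point above that memcpy is irrelevant to resetting memory.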
[ { "msg_contents": "Hi,\n\nI unfortunately just noticed this now, just after we released...\n\nIn\n\ncommit 9e98583898c347e007958c8a09911be2ea4acfb9\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: 2022-03-07 10:26:29 +0900\n\n Create routine able to set single-call SRFs for Materialize mode\n\n\na new helper was added:\n\n#define SRF_SINGLE_USE_EXPECTED\t0x01\t/* use expectedDesc as tupdesc */\n#define SRF_SINGLE_BLESS\t\t0x02\t/* validate tuple for SRF */\nextern void SetSingleFuncCall(FunctionCallInfo fcinfo, bits32 flags);\n\n\nI think the naming here is very poor. For one, \"Single\" here conflicts with\nExprSingleResult which indicates \"expression does not return a set\",\ni.e. precisely the opposite what SetSingleFuncCall() is used for. For another\nthe \"Set\" in SetSingleFuncCall makes it sound like it's function setting a\nproperty.\n\nEven leaving the confusion with ExprSingleResult aside, calling it \"Single\"\nstill seems very non-descriptive. I assume it's named to contrast with\ninit_MultiFuncCall() etc. While those are also not named greatly, they're not\ntypically used directly but wraped in SRF_FIRSTCALL_INIT etc.\n\nI also quite intensely dislike SRF_SINGLE_USE_EXPECTED. It sounds like it's\nsaying that a single use of the SRF is expected, but that's not at all what it\nmeans: \"use expectedDesc as tupdesc\".\n\nI'm also confused by SRF_SINGLE_BLESS - the comment says \"validate tuple for\nSRF\". 
BlessTupleDesc can't really be described as validation, or am I missing\nsomething?\n\nThis IMO needs to be cleaned up.\n\n\nMaybe something like InitMaterializedSRF() w/\nMAT_SRF_(USE_EXPECTED_DESC|BLESS)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:48:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Thu, Oct 13, 2022 at 12:48:20PM -0700, Andres Freund wrote:\n> I unfortunately just noticed this now, just after we released...\n\nThanks for the feedback.\n\n> Even leaving the confusion with ExprSingleResult aside, calling it \"Single\"\n> still seems very non-descriptive. I assume it's named to contrast with\n> init_MultiFuncCall() etc. While those are also not named greatly, they're not\n> typically used directly but wraped in SRF_FIRSTCALL_INIT etc.\n\nThis is mentioned on the original thread here, as of the point that we\ngo through the function one single time:\nhttps://www.postgresql.org/message-id/Yh8SBTuzKhq7Jwda@paquier.xyz\n\n> I also quite intensely dislike SRF_SINGLE_USE_EXPECTED. It sounds like it's\n> saying that a single use of the SRF is expected, but that's not at all what it\n> means: \"use expectedDesc as tupdesc\".\n\nOkay. Something like the USE_EXPECTED_DESC you are suggesting or\nUSE_EXPECTED_TUPLE_DESC would be fine by me.\n\n> I'm also confused by SRF_SINGLE_BLESS - the comment says \"validate tuple for\n> SRF\". BlessTupleDesc can't really be described as validation, or am I missing\n> something?\n\nI'd rather keep the flag name to match the history behind this API.\nHow about updating the comment as of \"complete tuple descriptor, for a\ntransient RECORD datatype\", or something like that?\n\n> Maybe something like InitMaterializedSRF() w/\n> MAT_SRF_(USE_EXPECTED_DESC|BLESS)\n\nOr just SetMaterializedFuncCall()? 
Do we always have to mention the\nSRF part of it once we tell about the materialization part? The\nlatter sort implies the former once a function returns multiple\ntuples.\n\nI don't mind doing some renaming of all that even post-release, though\ncomes the question of keeping some compabitility macros for\ncompilation in case one uses these routines? Any existing extension\ncode works out-of-the-box without these new routines, so the chance of\nsomebody using the new stuff outside core sounds rather limited but it\ndoes not seem worth taking a risk, either.. And this has been in the\ntree for a bit more than half a year now.\n--\nMichael", "msg_date": "Fri, 14 Oct 2022 10:28:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "Hi,\n\nOn 2022-10-14 10:28:34 +0900, Michael Paquier wrote:\n> On Thu, Oct 13, 2022 at 12:48:20PM -0700, Andres Freund wrote:\n> > Maybe something like InitMaterializedSRF() w/\n> > MAT_SRF_(USE_EXPECTED_DESC|BLESS)\n> \n> Or just SetMaterializedFuncCall()?\n\nI think starting any function that's not a setter with Set* is very likely to\nbe misunderstood (SetReturning* is clearer, but long). This just reads like\nyou're setting the materialized function call on something.\n\n\n> Do we always have to mention the SRF part of it once we tell about the\n> materialization part?\n\nYes. The SRF is the important part.\n\n\n> The latter sort implies the former once a function returns multiple tuples.\n\nThere's lot of other other things that can be materialized.\n\n\n> I don't mind doing some renaming of all that even post-release, though\n> comes the question of keeping some compabitility macros for\n> compilation in case one uses these routines?\n\nAgreed that we'd need compat. 
I think it'd need to be compatibility function,\nnot just renaming via macro, so we keep ABI compatibility.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 13 Oct 2022 18:34:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Thu, Oct 13, 2022 at 3:48 PM Andres Freund <andres@anarazel.de> wrote:\n> I unfortunately just noticed this now, just after we released...\n>\n> In\n>\n> commit 9e98583898c347e007958c8a09911be2ea4acfb9\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: 2022-03-07 10:26:29 +0900\n>\n> Create routine able to set single-call SRFs for Materialize mode\n>\n>\n> a new helper was added:\n>\n> #define SRF_SINGLE_USE_EXPECTED 0x01 /* use expectedDesc as tupdesc */\n> #define SRF_SINGLE_BLESS 0x02 /* validate tuple for SRF\n*/\n> extern void SetSingleFuncCall(FunctionCallInfo fcinfo, bits32 flags);\n>\n>\n> I think the naming here is very poor. For one, \"Single\" here conflicts\nwith\n> ExprSingleResult which indicates \"expression does not return a set\",\n> i.e. precisely the opposite what SetSingleFuncCall() is used for. For\nanother\n> the \"Set\" in SetSingleFuncCall makes it sound like it's function setting a\n> property.\n>\n> Even leaving the confusion with ExprSingleResult aside, calling it\n\"Single\"\n> still seems very non-descriptive. I assume it's named to contrast with\n> init_MultiFuncCall() etc. While those are also not named greatly, they're\nnot\n> typically used directly but wraped in SRF_FIRSTCALL_INIT etc.\n\nSo, while I agree that the \"Single\" in SetSingleFuncCall() could be\nconfusing given the name of ExprSingleResult, I feel like actually all\nof the names are somewhat wrong.\n\nExprSingleResult, ExprMultipleResult, and ExprEndResult are used as\nvalues of ReturnSetInfo->isDone, used in value-per-call mode to indicate\nwhether or not a given value is the last or not. 
The comment on\nExprSingleResult says it means \"expression does not return a set\",\nhowever, in Materialize mode (which is for functions returning a set),\nisDone is supposed to be set to ExprSingleResult.\n\nTake this code in ExecMakeFunctionResultSet()\n\n else if (rsinfo.returnMode == SFRM_Materialize)\n {\n /* check we're on the same page as the function author */\n if (rsinfo.isDone != ExprSingleResult)\n\nSo, isDone is used for a different purpose in value-per-call and\nmaterialize modes (and with pretty contradictory definitions) which is\npretty confusing.\n\nBesides that, it is not clear to me that ExprMultipleResult conveys that\nthe result is a member of or an element of a set. Perhaps it should be\nExprSetMemberResult and instead of using ExprSingleResult for\nMaterialize mode there should be another enum value that indicates \"not\nused\" or \"materialize mode\". It could even be ExprSetResult -- since the\nwhole result is a set. Though that may be confusing since isDone is not\nused for Materialize mode except to ensure \"we're on the same page as\nthe function author\".\n\nExpr[Single|Multiple]Result aside, I do see how SINGLE/Single when used\nfor a helper function that does set up for SFRM_Materialize mode\nfunctions is confusing.\n\nThe routines for SFRM_ValuePerCall all use multi, so I don't think it\nwas unreasonable to use single. 
However, I agree it would be better to\nuse something else (probably materialize).\n\nThe different dimensions requiring distinction are:\n- returns a set (Y/N)\n- called multiple times to produce a single result (Y/N)\n- builds a tuplestore for result set (Y/N)\n\nSFRM_Materialize comment says \"result set instantiated in Tuplestore\" --\nSo, I feel like the question is, does a function which returns its\nentire result set in a single invocation have to do so using a\ntuplestore and does one that returns part of its result set on each\ninvocation have to do so without a tuplestore (does each invocation have\nto return a scalar or tuple)?\n\nThe current implementation may not support it, but it doesn't seem like\nusing a tuplestore and returning all elements of the result set vs some\nof them in one invocation are alternatives.\n\nIt might be better if the SetFunctionReturnMode stuck to distinguishing\nbetween functions returning their entire result in one invocation or\npart of their result in one invocation.\n\nThat being said, the current SetSingleFuncCall() makes the tuplestore\nand ensures the TupleDesc required by Materialize mode is set or\ncreated. Since it seems only to apply to Materialize mode, I am in favor\nof using \"materialize\" instead of \"single\".\n\n> Maybe something like InitMaterializedSRF() w/\n> MAT_SRF_(USE_EXPECTED_DESC|BLESS)\n\nI also agree that starting the function name with Set isn't the best. I\nlike InitMaterializedSRF() and MAT_SRF_USE_EXPECTED_TUPLE_DESC. Are there\nother kinds of descs?\n\nAlso, \"single call\" and \"multi call\" are confusing because they kind of\nseem like they are describing a behavior of the function limiting the\nnumber of times it can be called. 
Perhaps the multi* function names\ncould eventually be renamed something to convey how much of a function's\nresult can be expected to be produced on an invocation.\n\nTo summarize, I am in support of renaming SetSingleFuncCall() ->\nInitMaterializedSRF() and SRF_SINGLE_USE_EXPECTED ->\nMAT_SRF_USE_EXPECTED_TUPLE_DESC (or just DESC) as suggested elsewhere in\nthis thread. And I think we should eventually consider renaming the\nmulti* function names and consider if ExprSingleResult is a good name.\n\n- Melanie", "msg_date": "Fri, 14 Oct 2022 17:09:46 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> So, while I agree that the \"Single\" in SetSingleFuncCall() could be\n> confusing given the name of ExprSingleResult, I feel like actually all\n> of the names are somewhat wrong.\n\nMaybe, but ExprSingleResult et al. have been there for decades and\nare certainly embedded in a ton of third-party code. It's a bit\nlate to rename them, whether you think they're confusing or not.\nMaybe we can get away with changing names introduced in v15, but\neven that I'm afraid will get some pushback.\n\nHaving said that, I'd probably have used names based on \"materialize\"\nnot \"single\" for what this code is doing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Oct 2022 17:32:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Fri, Oct 14, 2022 at 05:09:46PM -0400, Melanie Plageman wrote:\n> To summarize, I am in support of renaming SetSingleFuncCall() ->\n> InitMaterializedSRF() and SRF_SINGLE_USE_EXPECTED ->\n> MAT_SRF_USE_EXPECTED_TUPLE_DESC (or just DESC) as suggested elsewhere in\n> this thread.
And I think we should eventually consider renaming the\n> multi* function names and consider if ExprSingleResult is a good name.\n\nAs already mentioned, these have been around for years, so the impact\nwould be bigger. Attached is a patch for HEAD and REL_15_STABLE to\nswitch this stuff with new names, with what's needed for ABI\ncompatibility. My plan would be to keep the compatibility parts only\nin 15, and drop them from HEAD.\n--\nMichael", "msg_date": "Sat, 15 Oct 2022 11:41:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Fri, Oct 14, 2022 at 7:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Oct 14, 2022 at 05:09:46PM -0400, Melanie Plageman wrote:\n> > To summarize, I am in support of renaming SetSingleFuncCall() ->\n> > InitMaterializedSRF() and SRF_SINGLE_USE_EXPECTED ->\n> > MAT_SRF_USE_EXPECTED_TUPLE_DESC (or just DESC) as suggested elsewhere in\n> > this thread. And I think we should eventually consider renaming the\n> > multi* function names and consider if ExprSingleResult is a good name.\n>\n> As already mentioned, these have been around for years, so the impact\n> would be bigger.\n\n\nThat makes sense.\n\n\n> Attached is a patch for HEAD and REL_15_STABLE to\n> switch this stuff with new names, with what's needed for ABI\n> compatibility. 
My plan would be to keep the compatibility parts only\n> in 15, and drop them from HEAD.\n>\n\n- * SetSingleFuncCall\n+ * Compatibility function for v15.\n+ */\n+void\n+SetSingleFuncCall(FunctionCallInfo fcinfo, bits32 flags)\n+{\n+ InitMaterializedSRF(fcinfo, flags);\n+}\n+\n\n Any reason not to use a macro?\n\n- Melanie", "msg_date": "Sun, 16 Oct 2022 13:22:41 -0700", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "Hi,\n\nOn 2022-10-16 13:22:41 -0700, Melanie Plageman wrote:\n> On Fri, Oct 14, 2022 at 7:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n> - * SetSingleFuncCall\n> + * Compatibility function for v15.\n> + */\n> +void\n> +SetSingleFuncCall(FunctionCallInfo fcinfo, bits32 flags)\n> +{\n> +\tInitMaterializedSRF(fcinfo, flags);\n> +}\n> +\n> \n> Any reason not to use a macro?\n\nYes - it'd introduce an ABI break, i.e.
an already compiled extension\nreferencing SetSingleFuncCall() would fail to load into an upgraded server,\ndue to the reference to the SetSingleFuncCall, which wouldn't exist anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Oct 2022 15:04:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "Hi,\n\nOn 2022-10-15 11:41:08 +0900, Michael Paquier wrote:\n> Attached is a patch for HEAD and REL_15_STABLE to switch this stuff with new\n> names, with what's needed for ABI compatibility. My plan would be to keep\n> the compatibility parts only in 15, and drop them from HEAD. -- Michael\n\nLooks reasonable to me. Thanks for working on this.\n\n\n> -/* flag bits for SetSingleFuncCall() */\n> -#define SRF_SINGLE_USE_EXPECTED\t0x01\t/* use expectedDesc as tupdesc */\n> -#define SRF_SINGLE_BLESS\t\t0x02\t/* validate tuple for SRF */\n> +/* flag bits for InitMaterializedSRF() */\n> +#define MAT_SRF_USE_EXPECTED_DESC\t0x01\t/* use expectedDesc as tupdesc */\n> +#define MAT_SRF_BLESS\t\t\t\t0x02\t/* complete tuple descriptor, for\n> +\t\t\t\t\t\t\t\t\t\t\t * a transient RECORD datatype */\n\nI don't really know what \"complete tuple descriptor\" means.
BlessTupleDesc()\ndoes say \"make a completed tuple descriptor useful for SRFs\" - but I don't\nthink that means that Bless* makes them complete, but that they *have* to be\ncomplete to be blessed.\n\n\n> @@ -2164,8 +2164,8 @@ elements_worker_jsonb(FunctionCallInfo fcinfo, const char *funcname,\n> \n> \trsi = (ReturnSetInfo *) fcinfo->resultinfo;\n> \n> -\tSetSingleFuncCall(fcinfo,\n> -\t\t\t\t\t SRF_SINGLE_USE_EXPECTED | SRF_SINGLE_BLESS);\n> +\tInitMaterializedSRF(fcinfo,\n> +\t\t\t\t\t MAT_SRF_USE_EXPECTED_DESC | MAT_SRF_BLESS);\n\nAlready was the case, so maybe not worth mucking with: Why the newline here,\nbut not in other cases?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Oct 2022 15:09:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Sun, Oct 16, 2022 at 03:04:43PM -0700, Andres Freund wrote:\n> Yes - it'd introduce an ABI break, i.e. an already compiled extension\n> referencing SetSingleFuncCall() wouldn't fail to load into an upgraded sever,\n> due to the reference to the SetSingleFuncCall, which wouldn't exist anymore.\n\nNote that this layer should just be removed on HEAD. 
Once an\nextension catches up with the new name, they would not even need to\nplay with PG_VERSION_NUM even for a new version compiled with\nREL_15_STABLE.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 10:06:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Sun, Oct 16, 2022 at 03:09:14PM -0700, Andres Freund wrote:\n> On 2022-10-15 11:41:08 +0900, Michael Paquier wrote:\n>> -/* flag bits for SetSingleFuncCall() */\n>> -#define SRF_SINGLE_USE_EXPECTED\t0x01\t/* use expectedDesc as tupdesc */\n>> -#define SRF_SINGLE_BLESS\t\t0x02\t/* validate tuple for SRF */\n>> +/* flag bits for InitMaterializedSRF() */\n>> +#define MAT_SRF_USE_EXPECTED_DESC\t0x01\t/* use expectedDesc as tupdesc */\n>> +#define MAT_SRF_BLESS\t\t\t\t0x02\t/* complete tuple descriptor, for\n>> +\t\t\t\t\t\t\t\t\t\t\t * a transient RECORD datatype */\n> \n> I don't really know what \"complete tuple descriptor\" means. BlessTupleDesc()\n> does say \"make a completed tuple descriptor useful for SRFs\" - but I don't\n> think that means that Bless* makes them complete, but that they *have* to be\n> complete to be blessed.\n\nThat's just assign_record_type_typmod(), which would make sure to fill\nthe cache for a RECORD tupdesc. How about \"fill the cache with the\ninformation of the tuple descriptor type, for a transient RECORD\ndatatype\"? 
If you have a better, somewhat less confusing, idea, I am\nopen to suggestions.\n\n>> @@ -2164,8 +2164,8 @@ elements_worker_jsonb(FunctionCallInfo fcinfo, const char *funcname,\n>> \n>> \trsi = (ReturnSetInfo *) fcinfo->resultinfo;\n>> \n>> -\tSetSingleFuncCall(fcinfo,\n>> -\t\t\t\t\t SRF_SINGLE_USE_EXPECTED | SRF_SINGLE_BLESS);\n>> +\tInitMaterializedSRF(fcinfo,\n>> +\t\t\t\t\t MAT_SRF_USE_EXPECTED_DESC | MAT_SRF_BLESS);\n> \n> Already was the case, so maybe not worth mucking with: Why the newline here,\n> but not in other cases?\n\nYeah, that's fine as well.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 10:13:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "On Mon, Oct 17, 2022 at 10:13:33AM +0900, Michael Paquier wrote:\n> That's just assign_record_type_typmod(), which would make sure to fill\n> the cache for a RECORD tupdesc. How about \"fill the cache with the\n> information of the tuple descriptor type, for a transient RECORD\n> datatype\"? If you have a better, somewhat less confusing, idea, I am\n> open to suggestions.\n\nAt the end, I was unhappy with this formulation, so I have just\nmentioned what the top of BlessTupleDesc() tells, with the name of\nthe function used on the tuple descriptor when the flag is activated\nby the init call.\n\nThe compatibility functions have been removed from HEAD, by the way.\n--\nMichael", "msg_date": "Tue, 18 Oct 2022 10:59:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" }, { "msg_contents": "Hi,\n\nThanks for \"fixing\" this so quickly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:53:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: New \"single-call SRF\" APIs are very confusingly named" } ]
[ { "msg_contents": "In reviewing another patch, I noticed that the documentation had an xref to\na fairly large page of documentation (create_table.sgml), and I wondered if\nthat link was chosen because the original author genuinely felt the entire\npage was relevant, or merely because a more granular link did not exist at\nthe time, and this link had been carried forward since then while the\nreferenced page grew in complexity.\n\nIn the interest of narrowing the problem down to a manageable size, I wrote\na script (attached) to find all xrefs and rank them by criteria[1] that I\nbelieve hints at the possibility that the xrefs should be more granular\nthan they are.\n\nI intend to use the script output below as a guide for manually reviewing\nthe references and seeing if there are opportunities to guide the reader to\nthe relevant section of those pages.\n\nIn case anyone is curious, here is a top excerpt of the script output:\n\nfile_name link_name link_count\n line_count num_refentries\n--------------------------------- ---------------------------- ----------\n ---------- --------------\nref/psql-ref.sgml app-psql 20\n 5215 1\necpg.sgml ecpg-sql-allocate-descriptor 4\n 10101 17\nref/create_table.sgml sql-createtable 23\n 2437 1\nref/select.sgml sql-select 23\n 2207 1\nref/create_function.sgml sql-createfunction 30\n 935 1\nref/alter_table.sgml sql-altertable 12\n 1776 1\nref/pg_dump.sgml app-pgdump 11\n 1545 1\nref/pg_basebackup.sgml app-pgbasebackup 11\n 1008 1\nref/create_type.sgml sql-createtype 10\n 1029 1\nref/create_index.sgml sql-createindex 9\n 999 1\nref/postgres-ref.sgml app-postgres 10\n 845 1\nref/copy.sgml sql-copy 7\n 1081 1\nref/create_role.sgml sql-createrole 13\n 511 1\nref/grant.sgml sql-grant 13\n 507 1\nref/create_foreign_table.sgml sql-createforeigntable 14\n 455 1\nref/insert.sgml sql-insert 8\n 792 1\nref/pg_ctl-ref.sgml app-pg-ctl 8\n 713 1\nref/create_trigger.sgml sql-createtrigger 7\n 777 1\nref/set.sgml sql-set 15\n 332 
1\nref/create_aggregate.sgml sql-createaggregate 6\n 805 1\nref/initdb.sgml app-initdb 8\n 588 1\nref/create_policy.sgml sql-createpolicy 7\n 655 1\ndblink.sgml contrib-dblink-connect 1\n 2136 19\nref/create_subscription.sgml sql-createsubscription 9\n 472 1\n\nSome of these will clearly be false positives. For instance, dblink.sgml\nand ecpg.sgml have a lot of refentries, but they seem to lack a global\n\"top\" refentry which I assumed would be there.\n\nOn the other hand, I have to wonder if the references to psql might be to a\nspecific feature of the tool, and perhaps we can create refentries to those.\n\n[1] The criteria is: must be first refentry in file, file must be at least\n200 lines long, then rank by lines*references, 2x for referencing the top\nrefentry when others exist", "msg_date": "Thu, 13 Oct 2022 17:06:31 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "WIP: Analyze whether our docs need more granular refentries." } ]
[ { "msg_contents": "I was looking at the code in EndCommand() and noticed a comment\ntalking about some Asserts which didn't seem to exist in the code.\nThe comment also talks about LastOid which looks like the name of a\nvariable that's nowhere to be seen.\n\nIt looks like the Asserts did exists when the completion tag patch was\nbeing developed [1] but they disappeared somewhere along the line and\nthe comment didn't get an update before 2f9661311 went in.\n\nIn the attached, I rewrote the comment to remove mention of the\nAsserts. I also tried to form the comment in a way that's more\nunderstandable about why we always write a \"0\" in \"INSERT 0 <nrows>\".\n\nDavid\n\n[1] https://www.postgresql.org/message-id/1C642743-8E46-4246-B4A0-C9A638B3E88F@enterprisedb.com", "msg_date": "Fri, 14 Oct 2022 10:56:23 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Incorrect comment regarding command completion tags" }, { "msg_contents": "\n\n> On Oct 13, 2022, at 2:56 PM, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> I was looking at the code in EndCommand() and noticed a comment\n> talking about some Asserts which didn't seem to exist in the code.\n> The comment also talks about LastOid which looks like the name of a\n> variable that's nowhere to be seen.\n> \n> It looks like the Asserts did exists when the completion tag patch was\n> being developed [1] but they disappeared somewhere along the line and\n> the comment didn't get an update before 2f9661311 went in.\n> \n> In the attached, I rewrote the comment to remove mention of the\n> Asserts. I also tried to form the comment in a way that's more\n> understandable about why we always write a \"0\" in \"INSERT 0 <nrows>\".\n\nYour wording is better. 
+1\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:38:40 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Incorrect comment regarding command completion tags" }, { "msg_contents": "On Fri, 14 Oct 2022 at 11:38, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Oct 13, 2022, at 2:56 PM, David Rowley <dgrowleyml@gmail.com> wrote:\n> > In the attached, I rewrote the comment to remove mention of the\n> > Asserts. I also tried to form the comment in a way that's more\n> > understandable about why we always write a \"0\" in \"INSERT 0 <nrows>\".\n>\n> Your wording is better. +1\n\nThanks for having a look. I adjusted the wording slightly as I had\nwritten \"ancient\" in regards to PostgreSQL 11 and earlier. It's\nprobably a bit early to call a supported version of PostgreSQL ancient\nso I just decided to mention the version number instead.\n\nI pushed the resulting patch.\n\nThanks for having a look.\n\nDavid\n\n\n", "msg_date": "Fri, 14 Oct 2022 14:39:23 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Incorrect comment regarding command completion tags" } ]
[ { "msg_contents": "Hi Andres,\n\nCommit ec3c9cc add pg_attribute_aligned in MSVC[1],\nwhich was pushed one day before the meson commits,\nso meson build missed this feature.\n\n[1]: https://www.postgresql.org/message-id/CAAaqYe-HbtZvR3msoMtk+hYW2S0e0OapzMW8icSMYTMA+mN8Aw@mail.gmail.com\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 14 Oct 2022 10:59:28 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[meson] add missing pg_attribute_aligned for MSVC in meson build" }, { "msg_contents": "On Fri, Oct 14, 2022 at 10:59:28AM +0800, Junwang Zhao wrote:\n> Commit ec3c9cc add pg_attribute_aligned in MSVC[1],\n> which was pushed one day before the meson commits,\n> so meson build missed this feature.\n> \n> [1]: https://www.postgresql.org/message-id/CAAaqYe-HbtZvR3msoMtk+hYW2S0e0OapzMW8icSMYTMA+mN8Aw@mail.gmail.com\n\nRight, thanks! And it is possible to rely on _MSC_VER for that in\nthis code path.\n--\nMichael", "msg_date": "Fri, 14 Oct 2022 14:34:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [meson] add missing pg_attribute_aligned for MSVC in meson build" }, { "msg_contents": "Hi,\n\nOn 2022-10-14 10:59:28 +0800, Junwang Zhao wrote:\n> Commit ec3c9cc add pg_attribute_aligned in MSVC[1],\n> which was pushed one day before the meson commits,\n> so meson build missed this feature.\n\nGood catch. It shouldn't have practical consequences for the moment, given\nthat msvc doesn't support 128bit integers, but of course we should still be\ncorrect.\n\nLooked through other recent changes to configure and found a few additional\nomissions.\n\nSee the attached patch fixing those omissions. 
I'll push it to HEAD once it\nhas the CI stamp of approval.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 15 Oct 2022 12:02:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [meson] add missing pg_attribute_aligned for MSVC in meson build" }, { "msg_contents": "Hi,\n\nOn 2022-10-15 12:02:14 -0700, Andres Freund wrote:\n> See the attached patch fixing those omissions. I'll push it to HEAD once it\n> has the CI stamp of approval.\n\nDone.\n\nThanks for the report!\n\n\n", "msg_date": "Sat, 15 Oct 2022 13:05:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [meson] add missing pg_attribute_aligned for MSVC in meson build" } ]
[ { "msg_contents": "Hi,\n\nMERGE command does not accept foreign tables as targets.\nWhen a foreign table is specified as a target, it shows error messages \nlike this:\n\n-- ERROR: cannot execute MERGE on relation \"child1\"\n-- DETAIL: This operation is not supported for foreign tables.\n\nHowever, when a partitioned table includes foreign tables as partitions \nand MERGE is executed on the partitioned table, following error message \nshows.\n\n-- ERROR: unexpected operation: 5\n\nThe latter error message is unclear, and should be the same as the \nformer one.\nThe attached patch adds the code to display error the former error \nmessages in the latter case.\nAny thoughts?\n\nBest,\nTatsuhiro Nakamori", "msg_date": "Fri, 14 Oct 2022 11:59:34 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 10:59 AM bt22nakamorit <\nbt22nakamorit@oss.nttdata.com> wrote:\n\n> Hi,\n>\n> MERGE command does not accept foreign tables as targets.\n> When a foreign table is specified as a target, it shows error messages\n> like this:\n>\n> -- ERROR: cannot execute MERGE on relation \"child1\"\n> -- DETAIL: This operation is not supported for foreign tables.\n>\n> However, when a partitioned table includes foreign tables as partitions\n> and MERGE is executed on the partitioned table, following error message\n> shows.\n>\n> -- ERROR: unexpected operation: 5\n>\n> The latter error message is unclear, and should be the same as the\n> former one.\n> The attached patch adds the code to display error the former error\n> messages in the latter case.\n> Any thoughts?\n\n\n+1. 
The new message is an improvement to the default one.\n\nI wonder if we can provide more details in the error message, such as\nforeign table name.\n\nThanks\nRichard", "msg_date": "Fri, 14 Oct 2022 12:07:27 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 12:07 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Fri, Oct 14, 2022 at 10:59 AM bt22nakamorit <\n> bt22nakamorit@oss.nttdata.com> wrote:\n>\n>> Hi,\n>>\n>> MERGE command does not accept foreign tables as targets.\n>> When a foreign table is specified as a target, it shows error messages\n>> like this:\n>>\n>> -- ERROR: cannot execute MERGE on relation \"child1\"\n>> -- DETAIL: This operation is not supported for foreign tables.\n>>\n>> However, when a partitioned table includes foreign tables as partitions\n>> and MERGE is executed on the partitioned table, following error message\n>> shows.\n>>\n>> -- ERROR: unexpected operation: 5\n>>\n>> The latter error
message is unclear, and should be the same as the\n>> former one.\n>> The attached patch adds the code to display error the former error\n>> messages in the latter case.\n>> Any thoughts?\n>\n>\n> +1. The new message is an improvement to the default one.\n>\n> I wonder if we can provide more details in the error message, such as\n> foreign table name.\n>\n\nMaybe something like below, so that we keep it consistent with the case\nof a foreign table being specified as a target.\n\n--- a/contrib/postgres_fdw/postgres_fdw.c\n+++ b/contrib/postgres_fdw/postgres_fdw.c\n@@ -1872,6 +1872,13 @@ postgresPlanForeignModify(PlannerInfo *root,\n returningList,\n &retrieved_attrs);\n break;\n+ case CMD_MERGE:\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",\n+ RelationGetRelationName(rel)),\n+\n errdetail_relkind_not_supported(rel->rd_rel->relkind)));\n+ break;\n\nThanks\nRichard", "msg_date": "Fri, 14 Oct 2022 12:26:19 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 12:26:19PM +0800, Richard Guo wrote:\n> Maybe something like below, so that we keep it consistent with the case\n> of a foreign table being specified as a target.\n> \n> --- a/contrib/postgres_fdw/postgres_fdw.c\n> +++ b/contrib/postgres_fdw/postgres_fdw.c\n> @@ -1872,6 +1872,13 @@ postgresPlanForeignModify(PlannerInfo *root,\n> returningList,\n> &retrieved_attrs);\n> break;\n> + case CMD_MERGE:\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",\n> + RelationGetRelationName(rel)),\n> +\n> errdetail_relkind_not_supported(rel->rd_rel->relkind)));\n> + break;\n\nYeah, you should not use an elog(ERROR) for cases that would be faced\ndirectly by users.\n--\nMichael", "msg_date": "Fri, 14 Oct 2022 17:35:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On 2022-Oct-14, Michael Paquier wrote:\n\n> On Fri, Oct 14, 2022 at 12:26:19PM 
+0800, Richard Guo wrote:\n> > Maybe something like below, so that we keep it consistent with the case\n> > of a foreign table being specified as a target.\n> > \n> > --- a/contrib/postgres_fdw/postgres_fdw.c\n> > +++ b/contrib/postgres_fdw/postgres_fdw.c\n> > @@ -1872,6 +1872,13 @@ postgresPlanForeignModify(PlannerInfo *root,\n> > returningList,\n> > &retrieved_attrs);\n> > break;\n> > + case CMD_MERGE:\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",\n> > + RelationGetRelationName(rel)),\n> > +\n> > errdetail_relkind_not_supported(rel->rd_rel->relkind)));\n> > + break;\n> \n> Yeah, you should not use an elog(ERROR) for cases that would be faced\n> directly by users.\n\nYeah, I think this just flies undetected until it hits code that doesn't\nsupport the case. I'll add a test and push as Richard suggests, thanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n", "msg_date": "Fri, 14 Oct 2022 10:47:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "Actually, I hadn't realized that the originally submitted patch had the\ntest in postgres_fdw only, but we really want it to catch any FDW, so it\nneeds to be somewhere more general. The best place I found to put this\ntest is in make_modifytable ... I searched for some earlier place in the\nplanner to do it, but couldn't find anything.\n\nSo what do people think about this?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La grandeza es una experiencia transitoria. 
Nunca es consistente.\nDepende en gran parte de la imaginación humana creadora de mitos\"\n(Irulan)", "msg_date": "Fri, 14 Oct 2022 11:24:06 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 5:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> Actually, I hadn't realized that the originally submitted patch had the\n> test in postgres_fdw only, but we really want it to catch any FDW, so it\n> needs to be somewhere more general. The best place I found to put this\n> test is in make_modifytable ... I searched for some earlier place in the\n> planner to do it, but couldn't find anything.\n>\n> So what do people think about this?\n\n\nGood point. I agree that the test should be in a more general place.\n\nI wonder if we can make it earlier in grouping_planner() just before we\nadd ModifyTablePath.\n\n--- a/src/backend/optimizer/plan/planner.c\n+++ b/src/backend/optimizer/plan/planner.c\n@@ -1772,6 +1772,17 @@ grouping_planner(PlannerInfo *root, double\ntuple_fraction)\n /* Build per-target-rel lists needed by ModifyTable */\n resultRelations = lappend_int(resultRelations,\n resultRelation);\n+ if (parse->commandType == CMD_MERGE &&\n+ this_result_rel->fdwroutine != NULL)\n+ {\n+ RangeTblEntry *rte = root->simple_rte_array[resultRelation];\n+\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",\n+ get_rel_name(rte->relid)),\n+ errdetail_relkind_not_supported(rte->relkind));\n+ }\n\nThanks\nRichard\n\nOn Fri, Oct 14, 2022 at 5:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:Actually, I hadn't realized that the originally submitted patch had the\ntest in postgres_fdw only, but we really want it to catch any FDW, so it\nneeds to be somewhere more general.  The best place I found to put this\ntest is in make_modifytable ... 
I searched for some earlier place in the\nplanner to do it, but couldn't find anything.\n\nSo what do people think about this? Good point. I agree that the test should be in a more general place.I wonder if we can make it earlier in grouping_planner() just before weadd ModifyTablePath.--- a/src/backend/optimizer/plan/planner.c+++ b/src/backend/optimizer/plan/planner.c@@ -1772,6 +1772,17 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)         /* Build per-target-rel lists needed by ModifyTable */         resultRelations = lappend_int(resultRelations,                                       resultRelation);+        if (parse->commandType == CMD_MERGE &&+            this_result_rel->fdwroutine != NULL)+        {+            RangeTblEntry *rte = root->simple_rte_array[resultRelation];++            ereport(ERROR,+                    errcode(ERRCODE_FEATURE_NOT_SUPPORTED),+                    errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",+                           get_rel_name(rte->relid)),+                    errdetail_relkind_not_supported(rte->relkind));+        }ThanksRichard", "msg_date": "Fri, 14 Oct 2022 19:19:01 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 7:19 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Fri, Oct 14, 2022 at 5:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> Actually, I hadn't realized that the originally submitted patch had the\n>> test in postgres_fdw only, but we really want it to catch any FDW, so it\n>> needs to be somewhere more general. The best place I found to put this\n>> test is in make_modifytable ... I searched for some earlier place in the\n>> planner to do it, but couldn't find anything.\n>>\n>> So what do people think about this?\n>\n>\n> Good point. 
I agree that the test should be in a more general place.\n>\n> I wonder if we can make it earlier in grouping_planner() just before we\n> add ModifyTablePath.\n>\n\nOr maybe we can make it even earlier, when we expand an RTE for a\npartitioned table and add result tables to leaf_result_relids.\n\n--- a/src/backend/optimizer/util/inherit.c\n+++ b/src/backend/optimizer/util/inherit.c\n@@ -627,6 +627,16 @@ expand_single_inheritance_child(PlannerInfo *root,\nRangeTblEntry *parentrte,\n root->leaf_result_relids = bms_add_member(root->leaf_result_relids,\n childRTindex);\n\n+ if (parse->commandType == CMD_MERGE &&\n+ childrte->relkind == RELKIND_FOREIGN_TABLE)\n+ {\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",\n+ RelationGetRelationName(childrel)),\n+ errdetail_relkind_not_supported(childrte->relkind)));\n+ }\n\nThanks\nRichard\n\nOn Fri, Oct 14, 2022 at 7:19 PM Richard Guo <guofenglinux@gmail.com> wrote:On Fri, Oct 14, 2022 at 5:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:Actually, I hadn't realized that the originally submitted patch had the\ntest in postgres_fdw only, but we really want it to catch any FDW, so it\nneeds to be somewhere more general.  The best place I found to put this\ntest is in make_modifytable ... I searched for some earlier place in the\nplanner to do it, but couldn't find anything.\n\nSo what do people think about this? Good point. I agree that the test should be in a more general place.I wonder if we can make it earlier in grouping_planner() just before weadd ModifyTablePath. 
Or maybe we can make it even earlier, when we expand an RTE for apartitioned table and add result tables to leaf_result_relids.--- a/src/backend/optimizer/util/inherit.c+++ b/src/backend/optimizer/util/inherit.c@@ -627,6 +627,16 @@ expand_single_inheritance_child(PlannerInfo *root, RangeTblEntry *parentrte,      root->leaf_result_relids = bms_add_member(root->leaf_result_relids,                                                childRTindex);+     if (parse->commandType == CMD_MERGE &&+         childrte->relkind == RELKIND_FOREIGN_TABLE)+     {+         ereport(ERROR,+                 (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),+                  errmsg(\"cannot execute MERGE on relation \\\"%s\\\"\",+                         RelationGetRelationName(childrel)),+                  errdetail_relkind_not_supported(childrte->relkind)));+     }ThanksRichard", "msg_date": "Fri, 14 Oct 2022 21:07:22 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Or maybe we can make it even earlier, when we expand an RTE for a\n> partitioned table and add result tables to leaf_result_relids.\n\nI'm not really on board with injecting command-type-specific logic into\ncompletely unrelated places just so that we can throw an error a bit\nearlier. 
Alvaro's suggestion of make_modifytable seemed plausible,\nnot least because it avoids spending any effort when the command\ncouldn't be MERGE at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Oct 2022 10:43:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" }, { "msg_contents": "On Fri, Oct 14, 2022 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Or maybe we can make it even earlier, when we expand an RTE for a\n> > partitioned table and add result tables to leaf_result_relids.\n>\n> I'm not really on board with injecting command-type-specific logic into\n> completely unrelated places just so that we can throw an error a bit\n> earlier. Alvaro's suggestion of make_modifytable seemed plausible,\n> not least because it avoids spending any effort when the command\n> couldn't be MERGE at all.\n\n\nYeah, that makes sense. Putting this check in inherit.c does look some\nweird as there is no other commandType related code in that file.\n\nAgree that Alvaro's suggestion is more reasonable.\n\nThanks\nRichard\n\nOn Fri, Oct 14, 2022 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> Or maybe we can make it even earlier, when we expand an RTE for a\n> partitioned table and add result tables to leaf_result_relids.\n\nI'm not really on board with injecting command-type-specific logic into\ncompletely unrelated places just so that we can throw an error a bit\nearlier.  Alvaro's suggestion of make_modifytable seemed plausible,\nnot least because it avoids spending any effort when the command\ncouldn't be MERGE at all. Yeah, that makes sense. 
Putting this check in inherit.c does look someweird as there is no other commandType related code in that file.Agree that Alvaro's suggestion is more reasonable.ThanksRichard", "msg_date": "Mon, 17 Oct 2022 10:07:01 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix error message for MERGE foreign tables" } ]
[ { "msg_contents": "Hi hackers,\n\nI intended for the temporary file name generated by basic_archive.c to\ninclude the current timestamp so that the name was \"sufficiently unique.\"\nOf course, this could also be used to determine the creation time, but you\ncan just as easily use stat(1) for that. In any case, I forgot to divide\nthe microseconds field by 1000 to obtain the current timestamp in\nmilliseconds, so while the value is unique, it's also basically garbage.\nI've attached a small patch that fixes this so that the temporary file name\nincludes the timestamp in milliseconds for when it was created.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 13 Oct 2022 21:41:06 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "thinko in basic_archive.c" }, { "msg_contents": "On Fri, Oct 14, 2022 at 10:11 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> I intended for the temporary file name generated by basic_archive.c to\n\nI'm trying to understand this a bit:\n\n /*\n * Pick a sufficiently unique name for the temporary file so that a\n * collision is unlikely. This helps avoid problems in case a temporary\n * file was left around after a crash or another server happens to be\n * archiving to the same directory.\n */\n\nGiven that temp file name includes WAL file name, epoch to\nmilliseconds scale and MyProcPid, can there be name collisions after a\nserver crash or even when multiple servers with different pids are\narchiving/copying the same WAL file to the same directory?\n\nWhat happens to the left-over temp files after a server crash? Will\nthey be lying around in the archive directory? 
I understand that we\ncan't remove such files because we can't distinguish left-over files\nfrom a crash and the temp files that another server is in the process\nof copying.\n\nIf the goal is to copy files atomically, why can't we name the temp\nfile 'wal_file_name.pid.temp', assuming no PID wraparound and get rid\nof appending time? Since basic_archive is a test module illustrating\narchive_library implementation, do we really need to worry about name\ncollisions?\n\n> I've attached a small patch that fixes this so that the temporary file name\n> includes the timestamp in milliseconds for when it was created.\n\nThe patch LGTM.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Oct 2022 14:15:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Fri, Oct 14, 2022 at 02:15:19PM +0530, Bharath Rupireddy wrote:\n> Given that temp file name includes WAL file name, epoch to\n> milliseconds scale and MyProcPid, can there be name collisions after a\n> server crash or even when multiple servers with different pids are\n> archiving/copying the same WAL file to the same directory?\n\nWhile unlikely, I think it's theoretically possible. If there is a\ncollision, archiving should fail and retry later with a different temporary\nfile name.\n\n> What happens to the left-over temp files after a server crash? Will\n> they be lying around in the archive directory? I understand that we\n> can't remove such files because we can't distinguish left-over files\n> from a crash and the temp files that another server is in the process\n> of copying.\n\nThe temporary files are not automatically removed after a crash. 
The\ndocumentation for basic archive has a note about this [0].\n\n> If the goal is to copy files atomically, why can't we name the temp\n> file 'wal_file_name.pid.temp', assuming no PID wraparound and get rid\n> of appending time? Since basic_archive is a test module illustrating\n> archive_library implementation, do we really need to worry about name\n> collisions?\n\nYeah, it's debatable how much we care about this for basic_archive. We\npreviously decided that we at least care a little [1], so that's why we\nhave such elaborate temporary file names. If anything, I hope that the\npresence of this logic causes archive module authors to think about these\nproblems.\n\n> The patch LGTM.\n\nThanks!\n\n[0] https://www.postgresql.org/docs/devel/basic-archive.html#id-1.11.7.15.6\n[1] https://postgr.es/m/CA%2BTgmoaSkSmo22SwJaV%2BycNPoGpxe0JV%3DTcTbh4ip8Cwjr0ULQ%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 14 Oct 2022 11:33:10 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Sat, Oct 15, 2022 at 12:03 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Fri, Oct 14, 2022 at 02:15:19PM +0530, Bharath Rupireddy wrote:\n> > Given that temp file name includes WAL file name, epoch to\n> > milliseconds scale and MyProcPid, can there be name collisions after a\n> > server crash or even when multiple servers with different pids are\n> > archiving/copying the same WAL file to the same directory?\n>\n> While unlikely, I think it's theoretically possible.\n\nCan you please help me understand how name collisions can happen with\ntemp file names including WAL file name, timestamp to millisecond\nscale, and PID? Having the timestamp is enough to provide a non-unique\ntemp file name when PID wraparound occurs, right? 
Am I missing\nsomething here?\n\n> > What happens to the left-over temp files after a server crash? Will\n> > they be lying around in the archive directory? I understand that we\n> > can't remove such files because we can't distinguish left-over files\n> > from a crash and the temp files that another server is in the process\n> > of copying.\n>\n> The temporary files are not automatically removed after a crash. The\n> documentation for basic archive has a note about this [0].\n\nHm, we cannot remove the temp file for all sorts of crashes, but\nhaving on_shmem_exit() or before_shmem_exit() or atexit() or any such\ncallback removing it would help us cover some crash scenarios (that\nexit with proc_exit() or exit()) at least. I think the basic_archive\nmodule currently leaves temp files around even when the server is\nrestarted legitimately while copying to or renaming the temp file, no?\n\nI can quickly find these exit callbacks deleting the files:\natexit(cleanup_directories_atexit);\natexit(remove_temp);\nbefore_shmem_exit(ReplicationSlotShmemExit, 0);\nbefore_shmem_exit(logicalrep_worker_onexit, (Datum) 0);\nbefore_shmem_exit(BeforeShmemExit_Files, 0);\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 15 Oct 2022 10:19:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Sat, Oct 15, 2022 at 10:19:05AM +0530, Bharath Rupireddy wrote:\n> Can you please help me understand how name collisions can happen with\n> temp file names including WAL file name, timestamp to millisecond\n> scale, and PID? Having the timestamp is enough to provide a non-unique\n> temp file name when PID wraparound occurs, right? 
Am I missing\n> something here?\n\nOutside of contrived cases involving multiple servers, inaccurate clocks,\nPID reuse, etc., it seems unlikely.\n\n> Hm, we cannot remove the temp file for all sorts of crashes, but\n> having on_shmem_exit() or before_shmem_exit() or atexit() or any such\n> callback removing it would help us cover some crash scenarios (that\n> exit with proc_exit() or exit()) at least. I think the basic_archive\n> module currently leaves temp files around even when the server is\n> restarted legitimately while copying to or renaming the temp file, no?\n\nI think the right way to do this would be to add handling for leftover\nfiles in the sigsetjmp() block and a shutdown callback (which just sets up\na before_shmem_exit callback). While this should ensure those files are\ncleaned up after an ERROR or FATAL, crashes and unlink() failures could\nstill leave files behind. We'd probably also need to avoid cleaning up the\ntemp file if copy_file() fails because it already exists, as we won't know\nif it's actually ours. Overall, I suspect it'd be more trouble than it's\nworth.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 15 Oct 2022 14:10:26 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Sat, Oct 15, 2022 at 02:10:26PM -0700, Nathan Bossart wrote:\n> On Sat, Oct 15, 2022 at 10:19:05AM +0530, Bharath Rupireddy wrote:\n>> Can you please help me understand how name collisions can happen with\n>> temp file names including WAL file name, timestamp to millisecond\n>> scale, and PID? Having the timestamp is enough to provide a non-unique\n>> temp file name when PID wraparound occurs, right? 
Am I missing\n>> something here?\n> \n> Outside of contrived cases involving multiple servers, inaccurate clocks,\n> PID reuse, etc., it seems unlikely.\n\nWith a name based on a PID in a world where pid_max can be larger than\nthe default and a timestamp, I would say even more unlikely than what\nyou are implying with unlikely ;p\n\n> I think the right way to do this would be to add handling for leftover\n> files in the sigsetjmp() block and a shutdown callback (which just sets up\n> a before_shmem_exit callback). While this should ensure those files are\n> cleaned up after an ERROR or FATAL, crashes and unlink() failures could\n> still leave files behind. We'd probably also need to avoid cleaning up the\n> temp file if copy_file() fails because it already exists, as we won't know\n> if it's actually ours. Overall, I suspect it'd be more trouble than it's\n> worth.\n\nAgreed. My opinion is that we should keep basic_archive as\nminimalistic as we can: short still useful. It does not have to be\nperfect, just to fit with what we want it to show, as a reference.\n\nAnyway, the maths were wrong, so I have applied the patch of upthread,\nwith an extra pair of parenthesis, a comment where epoch is declared\nto tell that it is in milliseconds, and a comment in basic_archive's\nMakefile to mention the reason why we have NO_INSTALLCHECK.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 11:44:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Fri, Oct 14, 2022 at 4:45 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> What happens to the left-over temp files after a server crash? Will\n> they be lying around in the archive directory? 
I understand that we\n> can't remove such files because we can't distinguish left-over files\n> from a crash and the temp files that another server is in the process\n> of copying.\n\nYeah, leaving a potentially unbounded number of files around after\nsystem crashes seems pretty unfriendly. I'm not sure how to fix that,\nexactly. We could use a name based on the database system identifier\nif we thought that we might be archiving from multiple unrelated\nclusters to the same directory, but presumably the real hazard is a\nbunch of machines that are doing physical replication among\nthemselves, and will therefore share a system identifier. There might\nbe no better answer than to suggest that temporary files that are\n\"old\" should be removed by means external to the database, but that's\nnot an entirely satisfying answer.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Oct 2022 09:15:35 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Mon, Oct 17, 2022 at 6:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Oct 14, 2022 at 4:45 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > What happens to the left-over temp files after a server crash? Will\n> > they be lying around in the archive directory? I understand that we\n> > can't remove such files because we can't distinguish left-over files\n> > from a crash and the temp files that another server is in the process\n> > of copying.\n>\n> Yeah, leaving a potentially unbounded number of files around after\n> system crashes seems pretty unfriendly. I'm not sure how to fix that,\n> exactly.\n\nA simple server restart while the basic_archive module is copying\nto/from temp file would leave the file behind, see[1]. I think we can\nfix this by defining shutdown_cb for the basic_archive module, like\nthe attached patch. 
While this helps in most of the crashes, but not\nall. However, this is better than what we have right now.\n\n[1] ubuntu:~/postgres/contrib/basic_archive/archive_directory$ ls\n000000010000000000000001\narchtemp.000000010000000000000002.2493876.1666091933457\narchtemp.000000010000000000000002.2495316.1666091958680\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 18 Oct 2022 18:54:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": " +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Oct 17, 2022 at 6:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Oct 14, 2022 at 4:45 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > What happens to the left-over temp files after a server crash? Will\n> > > they be lying around in the archive directory? I understand that we\n> > > can't remove such files because we can't distinguish left-over files\n> > > from a crash and the temp files that another server is in the process\n> > > of copying.\n> >\n> > Yeah, leaving a potentially unbounded number of files around after\n> > system crashes seems pretty unfriendly. I'm not sure how to fix that,\n> > exactly.\n\nUnbounded number of sequential crash-restarts itself is a more serious\nproblem..\n\nAn archive module could clean up them at startup or at the first call\nbut that might be dangerous (since archive directory is I think\nthought as external resource).\n\nHonestly, I'm not sure about a reasonable scenario where simultaneous\narchivings of a same file is acceptable, though. I feel that we should\nnot allow simultaneous archiving of the same segment by some kind of\ninterlocking. 
In other words, we might need to serialize duplicate\narchiving of the same file.\n\nIn another direction, the current code allows duplicate simultaneous\ncopying to temporary files with different names, and then the latest\nrenaming wins. We reach almost the same result (on Linuxen (or\nPOSIX?)) by unlinking the existing tempfile first, then creating a new\none with the same name, then continuing. Even if the tempfile were left\nalone after a crash, that file would be unlinked at the next trial of\narchiving. But I'm not sure how this can be done on Windows. In the\nfirst place I'm not sure whether the latest-unconditionally-wins policy\nis appropriate, though.\n\n> A simple server restart while the basic_archive module is copying\n> to/from temp file would leave the file behind, see[1]. I think we can\n> fix this by defining shutdown_cb for the basic_archive module, like\n> the attached patch. While this helps in most of the crashes, but not\n> all. However, this is better than what we have right now.\n\nShutdownWalRecovery() does a similar thing, but as you say this one\ncovers narrower cases than that, since RestoreArchiveFile()\nfinally overwrites the left-alone files at the next call for that\nfile.\n\n# The patch seems to forget to clear the tempfilepath *after* a\n# successful renaming. 
And I don't see how the callback is called.\n\n> [1] ubuntu:~/postgres/contrib/basic_archive/archive_directory$ ls\n> 000000010000000000000001\n> archtemp.000000010000000000000002.2493876.1666091933457\n> archtemp.000000010000000000000002.2495316.1666091958680\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 19 Oct 2022 12:28:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Wed, Oct 19, 2022 at 8:58 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Unbounded number of sequential crash-restarts itself is a more serious\n> problem..\n\nAgree. The basic_archive module currently leaves temp files around\neven for normal restarts of the cluster, which is bad IMO.\n\n> An archive module could clean up them at startup or at the first call\n> but that might be dangerous (since archive directory is I think\n> thought as external resource).\n\nThe archive module must be responsible for cleaning up the temp file\nthat it creates. One way to do it is in the archive module's shutdown\ncallback, this covers most of the cases, but not all.\n\n> Honestly, I'm not sure about a reasonable scenario where simultaneous\n> archivings of a same file is acceptable, though. I feel that we should\n> not allow simultaneous archiving of the same segment by some kind of\n> interlocking. In other words, we might should serialize duplicate\n> archiving of asame file.\n\nIn a typical production environment, there's some kind of locking (for\ninstance lease files) that allow/disallow file\ncreation/deletion/writes/reads which guarantees that the same file\nisn't put into a directory (can be archive location) many times. And\nas you rightly said archive_directory is something external to\npostgres and we really can't deal with concurrent writers\nwriting/creating the same files. 
Even if we somehow try to do it, it\nmakes things complicated. This is true for any PGDATA directories.\nHowever, the archive module implementers can choose to define such a\nlocking strategy.\n\n> In another direction, the current code allows duplicate simultaneous\n> copying to temporary files with different names then the latest\n> renaming wins. We reach the almost same result (on Linuxen (or\n> POSIX?)) by unlinking the existing tempfile first then create a new\n> one with the same name then continue. Even if the tempfile were left\n> alone after a crash, that file would be unlinked at the next trial of\n> archiving. But I'm not sure how this can be done on Windows.. In the\n> first place I'm not sure that the latest-unconditionally-wins policy\n> is appropriate or not, though.\n\nWe really can't just unlink the temp file because it has pid and\ntimestamp in the filename and it's hard to determine the temp file\nthat we created earlier.\n\nAs far as the basic_archive module is concerned, we ought to keep it\nsimple. I still think the simplest we can do is to use the\nbasic_archive's shutdown_cb to delete (in most of the cases, but not\nall) the left-over temp file that the module is dealing with\nas-of-the-moment and add a note about the users dealing with\nconcurrent writers to the basic_archive.archive_directory like the\nattached v2 patch.\n\n> > A simple server restart while the basic_archive module is copying\n> > to/from temp file would leave the file behind, see[1]. I think we can\n> > fix this by defining shutdown_cb for the basic_archive module, like\n> > the attached patch. While this helps in most of the crashes, but not\n> > all. 
However, this is better than what we have right now.\n>\n> ShutdownWalRecovery() does the similar thing, but as you say this one\n> covers rather narrow cases than that since RestoreArchiveFile()\n> finally overwrites the left-alone files at the next call for that\n> file.\n\nWe're using unique temp file names in the basic_archive module so we\ncan't overwrite the same upon restart.\n\n> # The patch seems forgetting to clear the tmepfilepath *after* a\n> # successful renaming.\n\nIt does so at the beginning of basic_archive_file() which is sufficient.\n\n> And I don't see how the callback is called.\n\ncall_archive_module_shutdown_callback()->basic_archive_shutdown().\n\nPlease see the attached v2 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Oct 2022 10:21:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Tue, Oct 18, 2022 at 11:28 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > > Yeah, leaving a potentially unbounded number of files around after\n> > > system crashes seems pretty unfriendly. I'm not sure how to fix that,\n> > > exactly.\n>\n> Unbounded number of sequential crash-restarts itself is a more serious\n> problem..\n\nThey don't have to be sequential. Garbage could accumulate over weeks,\nmonths, or years.\n\nAnyway, I agree we should hope that the system doesn't crash often,\nbut we cannot prevent the system administrator from removing the power\nwhenever they like. We can however try to reduce the number of\ndatabase-related things that go wrong if this happens, and I think we\nshould. 
Bharath's patch seems like it's probably a good idea, and if\nwe can do better, we should.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Oct 2022 10:48:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "At Wed, 19 Oct 2022 10:48:03 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, Oct 18, 2022 at 11:28 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > > Yeah, leaving a potentially unbounded number of files around after\n> > > > system crashes seems pretty unfriendly. I'm not sure how to fix that,\n> > > > exactly.\n> >\n> > Unbounded number of sequential crash-restarts itself is a more serious\n> > problem..\n\n(Sorry, this was just a kidding.)\n\n> They don't have to be sequential. Garbage could accumulate over weeks,\n> months, or years.\n\nSure. Users' archive cleanup facilities don't work if they only\nhandles the files that with legit WAL file names.\n\n> Anyway, I agree we should hope that the system doesn't crash often,\n> but we cannot prevent the system administrator from removing the power\n> whenever they like. We can however try to reduce the number of\n> database-related things that go wrong if this happens, and I think we\n> should. Bharath's patch seems like it's probably a good idea, and if\n> we can do better, we should.\n\nYeah, I don't deny this, rather agree. So, we should name temporary\nfiles so that they are identifiable as garbage unconditionally at the\nnext startup. 
(They can actually be active otherwise.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Oct 2022 09:41:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "At Wed, 19 Oct 2022 10:21:12 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Wed, Oct 19, 2022 at 8:58 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > An archive module could clean up them at startup or at the first call\n> > but that might be dangerous (since archive directory is I think\n> > thought as external resource).\n> \n> The archive module must be responsible for cleaning up the temp file\n> that it creates. One way to do it is in the archive module's shutdown\n> callback, this covers most of the cases, but not all.\n\nTrue. But I agree with Robert that such temporary files should be\ncleanup-able without needing temporarily-valid knowledge (current file\nname, in this case). A common strategy for this is to name those files\nby names that can be identified as garbage.\n\n> > Honestly, I'm not sure about a reasonable scenario where simultaneous\n> > archivings of a same file is acceptable, though. I feel that we should\n> > not allow simultaneous archiving of the same segment by some kind of\n> > interlocking. In other words, we might should serialize duplicate\n> > archiving of a same file.\n> \n> In a typical production environment, there's some kind of locking (for\n> instance lease files) that allow/disallow file\n> creation/deletion/writes/reads which guarantees that the same file\n> isn't put into a directory (can be archive location) many times. And\n> as you rightly said archive_directory is something external to\n> postgres and we really can't deal with concurrent writers\n> writing/creating the same files. Even if we somehow try to do it, it\n> makes things complicated. 
This is true for any PGDATA directories.\n> However, the archive module implementers can choose to define such a\n> locking strategy.\n\nflock() on nfs..\n\n> > In another direction, the current code allows duplicate simultaneous\n> > copying to temporary files with different names then the latest\n> > renaming wins. We reach the almost same result (on Linuxen (or\n> > POSIX?)) by unlinking the existing tempfile first then create a new\n> > one with the same name then continue. Even if the tempfile were left\n> > alone after a crash, that file would be unlinked at the next trial of\n> > archiving. But I'm not sure how this can be done on Windows.. In the\n> > first place I'm not sure that the latest-unconditionally-wins policy\n> > is appropriate or not, though.\n> \n> We really can't just unlink the temp file because it has pid and\n> timestamp in the filename and it's hard to determine the temp file\n> that we created earlier.\n\nBut since power cut is a typical crash source, we need to identify\nruined temporary files and the current naming convention is incomplete\nin this regard.\n\nThe worst case I can come up with regardless of feasibility is a\nmulti-standby physical replication set where all hosts share one\narchive directory. Indeed live and dead temporary files can coexist\nthere. However, I think we can identify truly rotten temp files by\ninserting host name or cluster name (means cluster_name in\npostgresql.conf) even in that case. This presumes that DBAs name every\ncluster differently, but I think DBAs that are going to configure such\na system are required to be very cautious about that kind of aspect.\n\n> As far as the basic_archive module is concerned, we ought to keep it\n> simple. I still think the simplest we can do is to use the\n> basic_archive's shutdown_cb to delete (in most of the cases, but not\n\n(Sorry, my memory was confused at the time. 
That callback feature\nalready existed.)\n\n> all) the left-over temp file that the module is dealing with\n> as-of-the-moment and add a note about the users dealing with\n> concurrent writers to the basic_archive.archive_directory like the\n> attached v2 patch.\n> \n> > > A simple server restart while the basic_archive module is copying\n> > > to/from temp file would leave the file behind, see[1]. I think we can\n> > > fix this by defining shutdown_cb for the basic_archive module, like\n> > > the attached patch. While this helps in most of the crashes, but not\n> > > all. However, this is better than what we have right now.\n> >\n> > ShutdownWalRecovery() does a similar thing, but as you say this one\n> > covers narrower cases than that since RestoreArchiveFile()\n> > finally overwrites the left-alone files at the next call for that\n> > file.\n> \n> We're using unique temp file names in the basic_archive module so we\n> can't overwrite the same upon restart.\n\nOf course, that presumes that a cluster uses the same name for a\nsegment. If we insert cluster_name into the temporary name, a starting\ncluster can identify garbage files to clean up. For example if we\nname them as follows.\n\nARCHTEMP_cluster1_pid_time_<lsn>\n\nA starting cluster can clean up all files starting with\n\"archtemp_cluster1_*\". (We need to choose the delimiter carefully,\nthough..)\n\n> > # The patch seems to forget to clear the tempfilepath *after* a\n> > # successful renaming.\n> \n> It does so at the beginning of basic_archive_file() which is sufficient.\n\nNo. 
I didn't mean that. If the server stops after a successful\ndurable_rename but before the next call to\nbasic_archive_file_internal, that callback makes a false complaint since\nthat temporary file is actually gone.\n\n> > And I don't see how the callback is called.\n> \n> call_archive_module_shutdown_callback()->basic_archive_shutdown().\n\nYeah, sorry for the noise.\n\n> Please see the attached v2 patch.\n\n+static char\ttempfilepath[MAXPGPATH + 256];\n\nMAXPGPATH is the maximum length of a file name that PG assumes to be\nable to handle. Thus extending that length seems wrong.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 20 Oct 2022 10:26:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Thu, Oct 20, 2022 at 6:57 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > The archive module must be responsible for cleaning up the temp file\n> > that it creates. One way to do it is in the archive module's shutdown\n> > callback, this covers most of the cases, but not all.\n>\n> True. But I agree with Robert that such temporary files should be\n> cleanup-able without needing temporarily-valid knowledge (current file\n> name, in this case). A common strategy for this is to name those files\n> by names that can be identified as garbage.\n\nI'm not sure how we can distinguish temp files as garbage based on\nname. As Robert pointed out upthread, using system identifier may not\nhelp as the standbys share the same system identifier and it's\npossible that they might archive to the same directory. 
Is there any\nother way?\n\n> But since power cut is a typical crash source, we need to identify\n> ruined temporary files and the current naming convention is incomplete\n> in this regard.\n\nPlease note that basic_archive module creates one temp file at a time\nto make file copying/moving atomic and it can keep track of the temp\nfile name and delete it using shutdown callback which helps in most of\nthe scenarios. As said upthread, repeated crashes while basic_archive\nmodule is atomically copying files around is a big problem in itself\nand basic_archive module need not worry about it much.\n\n> flock() on nfs..\n>\n> The worst case I can come up with regardless of feasibility is a\n> multi-standby physical replication set where all hosts share one\n> archive directory. Indeed live and dead temprary files can coexist\n> there. However, I think we can identify truly rotten temp files by\n> inserting host name or cluster name (means cluster_name in\n> postgresql.conf) even in that case. This premise that DBA names every\n> cluster differently, but I think DBAs that is going to configure such\n> a system are required to be very cautious about that kind of aspect.\n\nWell, these ideas are great! However, we can leave defining such\nstrategies to archive module implementors. IMO, the basich_archive\nmodule ought to be as simple and elegant as possible yet showing up\nthe usability of archive modules feature.\n\n> Of course, it premised that a cluster uses the same name for a\n> segment. If we insert cluseter_name into the temprary name, a starting\n> cluster can indentify garbage files to clean up. For example if we\n> name them as follows.\n>\n> ARCHTEMP_cluster1_pid_time_<lsn>\n>\n> A starting cluster can clean up all files starts with\n> \"archtemp_cluster1_*\". (We need to choose the delimiter carefully,\n> though..)\n\nPostgres cleaning up basic_archive modules temp files at the start up\nisn't a great idea IMO. 
Because these files are not related to server\nfunctionality in any way unlike temp files removed in\nRemovePgTempFiles(). IMO, we ought to keep the basic_archive module\nsimple.\n\n> No. I didn't mean that. If the server stops after a successful\n> durable_rename but before the next call to\n> basic_archive_file_internal, that callback makes a false complaint since\n> that temporary file is actually gone.\n\nRight. Fixed it.\n\n> > Please see the attached v2 patch.\n>\n> +static char tempfilepath[MAXPGPATH + 256];\n>\n> MAXPGPATH is the maximum length of a file name that PG assumes to be\n> able to handle. Thus extending that length seems wrong.\n\nI think it was to accommodate the temp file name - \"archtemp\", file,\nMyProcPid, epoch, but I agree that it can just be MAXPGPATH. However,\nmost of the places the core defines the path name to be MAXPGPATH +\nsome bytes.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 20 Oct 2022 13:29:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "Hi.\n\nAnyway, on second thought, a larger picture than just adding the\npost-process-end callback would be out of the scope of this patch. So I\nwrite some comments on the patch first, then discuss the rest.\n\n\nThu, 20 Oct 2022 13:29:12 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > No. I didn't mean that. If the server stops after a successful\n> > durable_rename but before the next call to\n> > basic_archive_file_internal, that callback makes a false complaint since\n> > that temporary file is actually gone.\n> \n> Right. Fixed it.\n\nThanks, but we don't need to wipe out the all bytes. Just putting \\0\nat the beginning of the buffer is sufficient. 
And the Memset() at the\nbeginning of basic_archive_file_internal is not needed since that\nstatic variables are initially initialized to zeros.\n\nThis is not necessarily needed, but it might be better we empty\ntempfilepath after unlinking the file.\n\n\n> > +static char tempfilepath[MAXPGPATH + 256];\n> >\n> > MAXPGPATH is the maximum length of a file name that PG assumes to be\n> > able to handle. Thus extending that length seems wrong.\n> \n> I think it was to accommodate the temp file name - \"archtemp\", file,\n> MyProcPid, epoch, but I agree that it can just be MAXPGPATH. However,\n> most of the places the core defines the path name to be MAXPGPATH +\n> some bytes.\n\nMmm. I found that basic_archive already does the same thing. So lets\nfollow that in this patch.\n\n\n+ expectation that a value will soon be provided. Care must be taken when\n+ multiple servers are archiving to the same\n+ <varname>basic_archive.archive_library</varname> directory as they all\n+ might try to archive the same WAL file.\n\nI don't understand what kind of care should be taken by reading this..\n\nAnyway the PID + millisecond-resolution timestamps work in the almost\nall cases, but it's not perfect. So.. I don't come up with what to\nthink about this..\n\nTraditionally we told people that \"archiving should not overwrite a\nfile unconditionally. Generally it is safe only when the contents are\nidentical then should be errored-out otherwise.\".. Ah this is.\n\nhttps://www.postgresql.org/docs/devel/continuous-archiving.html\n\n> Archive commands and libraries should generally be designed to\n> refuse to overwrite any pre-existing archive file. This is an\n> important safety feature to preserve the integrity of your archive\n> in case of administrator error (such as sending the output of two\n> different servers to the same archive directory). 
It is advisable to\n> test your proposed archive library to ensure that it does not\n> overwrite an existing file.\n...\n> file again after restarting (provided archiving is still\n> enabled). When an archive command or library encounters a\n> pre-existing file, it should return a zero status or true,\n> respectively, if the WAL file has identical contents to the\n> pre-existing archive and the pre-existing archive is fully persisted\n> to storage. If a pre-existing file contains different contents than\n> the WAL file being archived, the archive command or library must\n> return a nonzero status or false, respectively.\n\nOn the other hand, basic_archive seems to overwrite existing files\nunconditionally. I think this is not great, in that we offer a tool\nthat betrays our own written suggestion...\n\n\n\nThe following are out-of-the-scope discussions.\n\n==================\nAt Thu, 20 Oct 2022 13:29:12 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Thu, Oct 20, 2022 at 6:57 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote: > > > The archive module must be\n> responsible for cleaning up the temp file > > that it creates. One\n> way to do it is in the archive module's shutdown > > callback, this\n> covers most of the cases, but not all. > > True. But I agree with\n> Robert that such temporary files should be > cleanup-able without\n> needing temporarily-valid knowledge (current file > name, in this\n> case). A common strategy for this is to name those files > by names\n> that can be identified as garbage.\n> \n> I'm not sure how we can distinguish temp files as garbage based on\n> name. As Robert pointed out upthread, using system identifier may not\n> help as the standbys share the same system identifier and it's\n> possible that they might archive to the same directory. 
Is there any\n> other way?\n\nA correct naming scheme would lead to resolution.\n\n> > But since power cut is a typical crash source, we need to identify\n> > ruined temporary files and the current naming convention is incomplete\n> > in this regard.\n> \n> Please note that basic_archive module creates one temp file at a time\n> to make file copying/moving atomic and it can keep track of the temp\n> file name and delete it using shutdown callback which helps in most of\n> the scenarios. As said upthread, repeated crashes while basic_archive\n> module is atomically copying files around is a big problem in itself\n> and basic_archive module need not worry about it much.\n\nI'm not sure. It's a \"basic_archiver\", not an \"example_archiver\". I\nread the name as \"it is not highly configurable but practically\nusable\". By this criterion, a cleanup feature is not too much.\n\n> > there. However, I think we can identify truly rotten temp files by\n> > inserting host name or cluster name (means cluster_name in\n> > postgresql.conf) even in that case. This presumes that DBAs name every\n> \n> Well, these ideas are great! However, we can leave defining such\n> strategies to archive module implementors. IMO, the basic_archive\n> module ought to be as simple and elegant as possible yet showing up\n> the usability of archive modules feature.\n\nI don't understand why garbage-cleanup is not elegant.\n\nOn the other hand, if we pursue minimalism about this tool, we don't\nneed the timestamp part since this tool cannot write two or more files\nsimultaneously by the same process. (I don't mean we should remove that\npart.)\n\n> > Of course, that presumes that a cluster uses the same name for a\n> > segment. If we insert cluster_name into the temporary name, a starting\n> > cluster can identify garbage files to clean up. 
For example if we\n> > name them as follows.\n> >\n> > ARCHTEMP_cluster1_pid_time_<lsn>\n> >\n> > A starting cluster can clean up all files starts with\n> > \"archtemp_cluster1_*\". (We need to choose the delimiter carefully,\n> > though..)\n> \n> Postgres cleaning up basic_archive modules temp files at the start up\n> isn't a great idea IMO. Because these files are not related to server\n> functionality in any way unlike temp files removed in\n> RemovePgTempFiles(). IMO, we ought to keep the basic_archive module\n> simple.\n\nI think \"init\" feature is mandatory. But it would be another project.\n\n> > > Please see the attached v2 patch.\n> >\n> > +static char tempfilepath[MAXPGPATH + 256];\n> >\n> > MAXPGPATH is the maximum length of a file name that PG assumes to be\n> > able to handle. Thus extending that length seems wrong.\n> \n> I think it was to accommodate the temp file name - \"archtemp\", file,\n> MyProcPid, epoch, but I agree that it can just be MAXPGPATH. However,\n> most of the places the core defines the path name to be MAXPGPATH +\n> some bytes.\n\nOooh. I don't say \"most\" but some instances are found. (Almost all\nusage of MAXPGPATH are not accompanied by additional length). But it\nwould be another issue.\n\nAnyway I don't think even if the use of over-sized path buffers is\nwidely spread in our tree, it cannot be a reason for this new code\nneed to do the same thing. If we follow that direction, the following\ncode in basic_archive should have MAXPGPATH*2 wide, since it stores a\n\"<directory>/<filename>\" construct. 
(That usage is found in, e.g.,\ndbsize.c, which of course I think stupid..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Oct 2022 14:13:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "Sorry, the previous mail are sent inadvertently..\n\nAt Fri, 21 Oct 2022 14:13:46 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> + expectation that a value will soon be provided. Care must be taken when\n> + multiple servers are archiving to the same\n> + <varname>basic_archive.archive_library</varname> directory as they all\n> + might try to archive the same WAL file.\n> \n> I don't understand what kind of care should be taken by reading this..\n> \n> Anyway the PID + millisecond-resolution timestamps work in the almost\n> all cases, but it's not perfect. So.. I don't come up with what to\n> think about this..\n> \n> Traditionally we told people that \"archiving should not overwrite a\n> file unconditionally. Generally it is safe only when the contents are\n> identical then should be errored-out otherwise.\".. Ah this is.\n\nbasic_archive follows the suggestion if the same file exists before it\nstarts to write a file. So please forget this.\n\n>\t * Sync the temporary file to disk and move it to its final destination.\n>\t * Note that this will overwrite any existing file, but this is only\n>\t * possible if someone else created the file since the stat() above.\n\nI'm not sure why we are allowed to allow this behavior.. But it also\nwould be another issue, if anyone cares. 
Thus I feel that we might not\ntouch this description in this patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Oct 2022 14:25:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Fri, Oct 21, 2022 at 10:43 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Thanks, but we don't need to wipe out the all bytes. Just putting \\0\n> at the beginning of the buffer is sufficient.\n\nNah, that's not a clean way IMO.\n\n> And the Memset() at the\n> beginning of basic_archive_file_internal is not needed since that\n> static variables are initially initialized to zeros.\n\nRemoved. MemSet() after durable_rename() would be sufficient.\n\n> This is not necessarily needed, but it might be better we empty\n> tempfilepath after unlinking the file.\n\nI think it's not necessary as the archiver itself is shutting down and\nI don't think the server calls the shutdown callback twice. However,\nif we want basic_archive_shutdown() to be more protective against\nmultiple calls (for any reason that we're missing), we can have a\nstatic local variable to quickly exit if the callback is already\ncalled. instead of MemSet(), but that's not needed I guess.\n\n> + expectation that a value will soon be provided. 
Care must be taken when\n> + multiple servers are archiving to the same\n> + <varname>basic_archive.archive_library</varname> directory as they all\n> + might try to archive the same WAL file.\n>\n> I don't understand what kind of care should be taken by reading this..\n\nIt's just a notice; however, I agree with you that it may be confusing.\nI've removed it.\n\nPlease review the attached v4 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Oct 2022 21:30:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Fri, Oct 21, 2022 at 09:30:16PM +0530, Bharath Rupireddy wrote:\n> On Fri, Oct 21, 2022 at 10:43 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> Thanks, but we don't need to wipe out the all bytes. Just putting \\0\n>> at the beginning of the buffer is sufficient.\n> \n> Nah, that's not a clean way IMO.\n\nWhy not? This is a commonly-used technique. I see over 80 existing uses\nin PostgreSQL. Plus, your shutdown callback only checks for the first\nbyte, anyway.\n\n+\tif (tempfilepath[0] == '\\0')\n+\t\treturn;\n\nAs noted upthread [0], I think we should be cautious to only remove the\ntemporary file if we know we created it. 
I still feel that trying to add\nthis cleanup logic to basic_archive is probably more trouble than it's\nworth, but if there is a safe and effective way to do so, I won't object.\n\n[0] https://postgr.es/m/20221015211026.GA1821022%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 5 Nov 2022 14:46:51 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Sun, Nov 6, 2022 at 3:17 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Oct 21, 2022 at 09:30:16PM +0530, Bharath Rupireddy wrote:\n> > On Fri, Oct 21, 2022 at 10:43 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> Thanks, but we don't need to wipe out the all bytes. Just putting \\0\n> >> at the beginning of the buffer is sufficient.\n> >\n> > Nah, that's not a clean way IMO.\n>\n> Why not? This is a commonly-used technique. I see over 80 existing uses\n> in PostgreSQL. Plus, your shutdown callback only checks for the first\n> byte, anyway.\n>\n> + if (tempfilepath[0] == '\\0')\n> + return;\n\nThe tempfile name can vary in size for the simple reason that a pid\ncan be of varying digits - for instance, tempfile name is 'foo1234'\n(pid being 1234) and it becomes '\\0oo1234' if we just reset the\nfirst char to '\\0' and say pid wraparound occurred, now it becomes\n'bar567' (pid being 567).\n\nBTW, I couldn't find the 80 existing instances, can you please let me\nknow your search keyword?\n\n> As noted upthread [0], I think we should be cautious to only remove the\n> temporary file if we know we created it. 
I still feel that trying to add\n> this cleanup logic to basic_archive is probably more trouble than it's\n> worth, but if there is a safe and effective way to do so, I won't object.\n>\n> [0] https://postgr.es/m/20221015211026.GA1821022%40nathanxps13\n\nSo, IIUC, your point here is what if the copy_file fails to create the\ntemp file when it already exists. With the name collision being a rare\nscenario, given the pid and timestamp variables, I'm not sure if\ncopy_file can ever fail because the temp file already exists (with\nerrno EEXIST). However, if we want to be extra-cautious, checking if\ntemp file exists with file_exists() before calling copy_file() might\nhelp avoid such cases. If we don't want to have extra system call (via\nfile_exists()) to check the temp file existence, we can think of\nsending a flag to copy_file(src, dst, &is_dst_file_created) and use\nis_dst_file_created in the shutdown callback. Thoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Nov 2022 16:53:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Mon, Nov 07, 2022 at 04:53:35PM +0530, Bharath Rupireddy wrote:\n> The tempfile name can vary in size for the simple reason that a pid\n> can be of varying digits - for instance, tempfile name is 'foo1234'\n> (pid being 1234) and it becomes '\\'\\0\\'oo1234' if we just reset the\n> first char to '\\0' and say pid wraparound occurred, now the it becomes\n> 'bar5674' (pid being 567).\n\nThe call to snprintf() should take care of adding a terminating null byte\nin the right place.\n\n> BTW, I couldn't find the 80 existing instances, can you please let me\n> know your search keyword?\n\ngrep \"\\[0\\] = '\\\\\\0'\" src -r\n\n> So, IIUC, your point here is what if the copy_file fails to create the\n> temp file 
when it already exists. With the name collision being a rare\n> scenario, given the pid and timestamp variables, I'm not sure if\n> copy_file can ever fail because the temp file already exists (with\n> errno EEXIST). However, if we want to be extra-cautious, checking if\n> temp file exists with file_exists() before calling copy_file() might\n> help avoid such cases. If we don't want to have extra system call (via\n> file_exists()) to check the temp file existence, we can think of\n> sending a flag to copy_file(src, dst, &is_dst_file_created) and use\n> is_dst_file_created in the shutdown callback. Thoughts?\n\nPresently, if copy_file() encounters a pre-existing file, it should ERROR,\nwhich will be caught in the sigsetjmp() block in basic_archive_file(). The\nshutdown callback shouldn't run in this scenario.\n\nI think this cleanup logic should run in both the shutdown callback and the\nsigsetjmp() block, but it should only take action (i.e., deleting the\nleftover temporary file) if the ERROR or shutdown occurs after creating the\nfile in copy_file() and before renaming the temporary file to its final\ndestination.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 7 Nov 2022 13:48:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: thinko in basic_archive.c" }, { "msg_contents": "On Tue, Nov 8, 2022 at 3:18 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> The call to snprintf() should take care of adding a terminating null byte\n> in the right place.\n\nAh, my bad. MemSet is avoided in v5 patch setting only the first byte.\n\n> > So, IIUC, your point here is what if the copy_file fails to create the\n> > temp file when it already exists. With the name collision being a rare\n> > scenario, given the pid and timestamp variables, I'm not sure if\n> > copy_file can ever fail because the temp file already exists (with\n> > errno EEXIST). 
However, if we want to be extra-cautious, checking if\n> > temp file exists with file_exists() before calling copy_file() might\n> > help avoid such cases. If we don't want to have extra system call (via\n> > file_exists()) to check the temp file existence, we can think of\n> > sending a flag to copy_file(src, dst, &is_dst_file_created) and use\n> > is_dst_file_created in the shutdown callback. Thoughts?\n>\n> Presently, if copy_file() encounters a pre-existing file, it should ERROR,\n> which will be caught in the sigsetjmp() block in basic_archive_file(). The\n> shutdown callback shouldn't run in this scenario.\n\nDetermining the \"file already exists\" error/EEXIST case from a bunch\nof other errors in copy_file() is tricky. However, I quickly hacked up\ncopy_file() by adding elevel parameter, please see the attached\nv5-0001.\n\n> I think this cleanup logic should run in both the shutdown callback and the\n> sigsetjmp() block, but it should only take action (i.e., deleting the\n> leftover temporary file) if the ERROR or shutdown occurs after creating the\n> file in copy_file() and before renaming the temporary file to its final\n> destination.\n\nPlease see attached v5 patch set.\n\nIf the direction seems okay, I'll add a CF entry.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 9 Nov 2022 14:47:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: thinko in basic_archive.c" } ]
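The scheme debated in this thread is: copy the WAL segment to a uniquely named temporary file (prefix plus pid plus timestamp), durably rename it into place, and have an ERROR/shutdown cleanup path remove the temporary file only if the failure happened after creation and before the rename. The sketch below is a simplified Python illustration of that pattern, not the actual basic_archive.c code; the `archtemp.` name format simply mirrors the convention described above.

```python
import os
import time

def archive_file(src: str, archive_dir: str) -> str:
    """Copy src into archive_dir via a uniquely named temporary file,
    then atomically rename it into place (a simplified sketch of the
    temp-file-then-durable-rename scheme discussed in the thread)."""
    final_path = os.path.join(archive_dir, os.path.basename(src))
    # Unique temp name built from a prefix, the pid, and a timestamp,
    # mirroring the naming convention described above.
    temp_path = os.path.join(
        archive_dir,
        "archtemp.%s.%d.%d" % (os.path.basename(src), os.getpid(),
                               int(time.time() * 1000)))
    try:
        # "xb" fails with FileExistsError on a (rare) name collision
        # instead of silently clobbering another worker's temp file.
        with open(src, "rb") as fin, open(temp_path, "xb") as fout:
            fout.write(fin.read())
            fout.flush()
            os.fsync(fout.fileno())
        os.replace(temp_path, final_path)  # atomic rename on POSIX
    except BaseException:
        # The cleanup only acts in the window after the temp file is
        # created and before it is renamed, which is exactly the
        # leftover that the proposed shutdown/ERROR callback targets.
        if os.path.exists(temp_path):
            os.unlink(temp_path)
        raise
    return final_path
```

On success the temporary file no longer exists (it was renamed), so the cleanup path fires only for genuine failures between creation and rename, the same window discussed for the sigsetjmp() block and the shutdown callback.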
[ { "msg_contents": "Hi hackers,\n\nWhen I was building pg_regress, it didn’t always copy a rebuilt version of refint.so to the folder.\n\nSteps to reproduce:\nOS: centos7\nPSQL version: 14.5\n\n1. configure and build postgres\n2. edit file src/include/utils/rel.h so that contrib/spi will rebuild\n3. cd ${builddir}/src/test/regress\n4. make\nWe’ll find refint.so is rebuilt in contrib/spi, but not copied over to regress folder.\nWhile autoinc.so is rebuilt and copied over.\n\nAttach the potential patch to fix the issue.\n\nRegards,\nDonghang Lin\n(ServiceNow)", "msg_date": "Fri, 14 Oct 2022 06:31:13 +0000", "msg_from": "Donghang Lin <donghang.lin@servicenow.com>", "msg_from_op": true, "msg_subject": "Bug: pg_regress makefile does not always copy refint.so" }, { "msg_contents": "On 2022-Oct-14, Donghang Lin wrote:\n\n> Hi hackers,\n> \n> When I was building pg_regress, it didn’t always copy a rebuilt version of refint.so to the folder.\n> \n> Steps to reproduce:\n> OS: centos7\n> PSQL version: 14.5\n> \n> 1. configure and build postgres\n> 2. edit file src/include/utils/rel.h so that contrib/spi will rebuild\n> 3. cd ${builddir}/src/test/regress\n> 4. make\n> We’ll find refint.so is rebuilt in contrib/spi, but not copied over to regress folder.\n> While autoinc.so is rebuilt and copied over.\n\nI have a somewhat-related-but-not-really complaint. I recently had need to\nhave refint.so, autoinc.so and regress.so in the install directory; but it\nturns out that there's no provision at all to get them installed.\n\nPackagers have long have had a need for this; for example the postgresql-test\nRPM file is built using this icky recipe:\n\n%if %test\n # tests. There are many files included here that are unnecessary,\n # but include them anyway for completeness. 
We replace the original\n # Makefiles, however.\n %{__mkdir} -p %{buildroot}%{pgbaseinstdir}/lib/test\n %{__cp} -a src/test/regress %{buildroot}%{pgbaseinstdir}/lib/test\n %{__install} -m 0755 contrib/spi/refint.so %{buildroot}%{pgbaseinstdir}/lib/test/regress\n %{__install} -m 0755 contrib/spi/autoinc.so %{buildroot}%{pgbaseinstdir}/lib/test/regress\n\nI assume that the DEB does something similar, but I didn't look.\n\nI think it would be better to provide a Make rule to allow these files to be\ninstalled. I'll see about a proposed patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n", "msg_date": "Fri, 14 Oct 2022 20:14:03 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Bug: pg_regress makefile does not always copy refint.so" }, { "msg_contents": "Hi Alvaro,\n\n> I have a somewhat-related-but-not-really complaint. 
I recently had need to\n> have refint.so, autoinc.so and regress.so in the install directory; but it\n> turns out that there's no provision at all to get them installed.\n\nTrue, we also noticed this build bug described above by copying *.so and pg_regress binary around.\n\n> I think it would be better to provide a Make rule to allow these files to be\n> installed.\nI looked at what postgresql-test package<https://ftp.postgresql.org/pub/repos/yum/15/redhat/rhel-7.9-x86_64/postgresql15-test-15.0-1PGDG.rhel7.x86_64.rpm> provides today :\n\n$ ls /usr/pgsql-15/lib/test/regress\nautoinc.so data expected Makefile parallel_schedule pg_regress pg_regress.c pg_regress.h pg_regress_main.c README refint.so regress.c regressplans.sh regress.so resultmap sql\n(I’m not sure what this package is supposed to do, it contains both source files and the executables.\n\nThe current pgsql install directory of regress only contains pg_regress binary,\nDo you suggest we add these files (excluding the scratched files) to the regress install directory?\nautoinc.so data expected Makefile parallel_schedule pg_regress pg_regress.c pg_regress.h pg_regress_main.c README refint.so regress.c regressplans.sh regress.so resultmap sql\n\n>> 1. configure and build postgres\n>> 2. edit file src/include/utils/rel.h so that contrib/spi will rebuild\n>> 3. cd ${builddir}/src/test/regress\n>> 4. 
make\n>> We’ll find refint.so is rebuilt in contrib/spi, but not copied over to regress folder.\n>> While autoinc.so is rebuilt and copied over.\n\nI think this build bug is orthogonal to the inconvenient installation/packaging.\nIt produces inconsistent build result, e.g you have to run `make` twice to ensure newly built refint.so is copied to the build dir.\n\nRegards,\nDonghang Lin\n(ServiceNow)\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Friday, October 14, 2022 at 8:15 PM\nTo: Donghang Lin <donghang.lin@servicenow.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Bug: pg_regress makefile does not always copy refint.so\n[External Email]\n\n\nOn 2022-Oct-14, Donghang Lin wrote:\n\n> Hi hackers,\n>\n> When I was building pg_regress, it didn’t always copy a rebuilt version of refint.so to the folder.\n>\n> Steps to reproduce:\n> OS: centos7\n> PSQL version: 14.5\n>\n> 1. configure and build postgres\n> 2. edit file src/include/utils/rel.h so that contrib/spi will rebuild\n> 3. cd ${builddir}/src/test/regress\n> 4. make\n> We’ll find refint.so is rebuilt in contrib/spi, but not copied over to regress folder.\n> While autoinc.so is rebuilt and copied over.\n\nI have a somewhat-related-but-not-really complaint. I recently had need to\nhave refint.so, autoinc.so and regress.so in the install directory; but it\nturns out that there's no provision at all to get them installed.\n\nPackagers have long have had a need for this; for example the postgresql-test\nRPM file is built using this icky recipe:\n\n%if %test\n # tests. There are many files included here that are unnecessary,\n # but include them anyway for completeness. 
We replace the original\n # Makefiles, however.\n %{__mkdir} -p %{buildroot}%{pgbaseinstdir}/lib/test\n %{__cp} -a src/test/regress %{buildroot}%{pgbaseinstdir}/lib/test\n %{__install} -m 0755 contrib/spi/refint.so %{buildroot}%{pgbaseinstdir}/lib/test/regress\n %{__install} -m 0755 contrib/spi/autoinc.so %{buildroot}%{pgbaseinstdir}/lib/test/regress\n\nI assume that the DEB does something similar, but I didn't look.\n\nI think it would be better to provide a Make rule to allow these files to be\ninstalled. I'll see about a proposed patch.\n\n--\nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)", "msg_date": "Tue, 18 Oct 2022 19:48:31 +0000", "msg_from": "Donghang Lin <donghang.lin@servicenow.com>", "msg_from_op": true, "msg_subject": "Re: Bug: pg_regress makefile does not always copy refint.so" }, { "msg_contents": "Hi,\n\nOn 2022-Oct-18, Donghang Lin wrote:\n\n> > I have a somewhat-related-but-not-really complaint. I recently had need to\n> > have refint.so, autoinc.so and regress.so in the install directory; but it\n> > turns out that there's no provision at all to get them installed.\n\n> The current pgsql install directory of regress only contains pg_regress binary,\n> Do you suggest we add these files (excluding the scratched files) to the regress install directory?\n> autoinc.so data expected Makefile parallel_schedule pg_regress pg_regress.c pg_regress.h pg_regress_main.c README refint.so regress.c regressplans.sh regress.so resultmap sql\n\nNo, I think the .c/.h files are likely included only because the RPM\nrule is written somewhat carelessly. If we add support in our\nmakefiles, it would have to be something better-considered.\n\n> I think this build bug is orthogonal to the inconvenient installation/packaging.\n> It produces inconsistent build result, e.g you have to run `make` twice to ensure newly built refint.so is copied to the build dir.\n\nYes, I agree that it is orthogonal. I'm not sure that what you propose\n(changing these order-only dependencies into regular dependencies) is\nthe best possible fix, but I agree we need *some* fix.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 19 Oct 2022 08:27:51 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Bug: pg_regress makefile does not always copy refint.so" } ]
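The behaviour reported at the top of this thread matches the defined semantics of GNU make's order-only prerequisites (those listed after a `|`): they are built if missing, but their timestamps never make the target stale, so a freshly rebuilt refint.so is not copied again once the copy exists. The Python sketch below models that staleness rule for illustration only; it is not pg_regress's actual Makefile logic, and the file names are just examples.

```python
def is_stale(target_mtime, regular_prereqs, order_only_prereqs):
    """Simplified model of make's out-of-date check.

    target_mtime: mtime of the target, or None if it does not exist.
    The prereq arguments map prerequisite names to their mtimes.
    Returns True if the target must be remade.
    """
    if target_mtime is None:
        return True  # a missing target is always (re)made
    # A regular prerequisite makes the target stale when it is newer.
    if any(m > target_mtime for m in regular_prereqs.values()):
        return True
    # Order-only prerequisites only guarantee existence; their
    # timestamps are deliberately ignored, so a rebuilt .so never
    # triggers a re-copy once the target is present.
    return False

# Copy made at t=100, contrib/spi/refint.so rebuilt later at t=200:
print(is_stale(100, {}, {"contrib/spi/refint.so": 200}))  # False: no re-copy
print(is_stale(100, {"contrib/spi/refint.so": 200}, {}))  # True: re-copied
```

Turning an order-only prerequisite into a regular one, as the proposed patch does, moves its mtime into the comparison and restores the expected re-copy on the first `make` run.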
[ { "msg_contents": "These patches take the dozens of mostly-duplicate pg_foo_ownercheck() \nand pg_foo_aclcheck() functions and replace (most of) them by common \nfunctions that are driven by the ObjectProperty table. All the required \ninformation is already in that table.\n\nThis is similar to the consolidation of the drop-by-OID functions that \nwe did a while ago (b1d32d3e3230f00b5baba08f75b4f665c7d6dac6).", "msg_date": "Fri, 14 Oct 2022 09:39:26 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "refactor ownercheck and aclcheck functions" }, { "msg_contents": "On Fri, Oct 14, 2022 at 3:39 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> These patches take the dozens of mostly-duplicate pg_foo_ownercheck()\n> and pg_foo_aclcheck() functions and replace (most of) them by common\n> functions that are driven by the ObjectProperty table. All the required\n> information is already in that table.\n>\n> This is similar to the consolidation of the drop-by-OID functions that\n> we did a while ago (b1d32d3e3230f00b5baba08f75b4f665c7d6dac6).\n\n\nNice reduction in footprint!\n\nI'd be inclined to remove the highly used ones as well. That way the\ncodebase would have more examples of object_ownercheck() for readers to\nsee. Seeing the existence of pg_FOO_ownercheck implies that a\npg_BAR_ownercheck might exist, and if BAR is missing they might be inclined\nto re-add it.\n\nIf we do keep them, would it make sense to go the extra step and turn the\nremaining six \"regular\" into static inline functions or even #define-s?\n\nOn Fri, Oct 14, 2022 at 3:39 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:These patches take the dozens of mostly-duplicate pg_foo_ownercheck() \nand pg_foo_aclcheck() functions and replace (most of) them by common \nfunctions that are driven by the ObjectProperty table.  
All the required \ninformation is already in that table.\n\nThis is similar to the consolidation of the drop-by-OID functions that \nwe did a while ago (b1d32d3e3230f00b5baba08f75b4f665c7d6dac6).Nice reduction in footprint!I'd be inclined to remove the highly used ones as well. That way the codebase would have more examples of object_ownercheck() for readers to see. Seeing the existence of pg_FOO_ownercheck implies that a pg_BAR_ownercheck might exist, and if BAR is missing they might be inclined to re-add it.If we do keep them, would it make sense to go the extra step and turn the remaining six \"regular\" into static inline functions or even #define-s?", "msg_date": "Wed, 19 Oct 2022 19:24:25 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactor ownercheck and aclcheck functions" }, { "msg_contents": "On 20.10.22 01:24, Corey Huinker wrote:\n> I'd be inclined to remove the highly used ones as well. That way the \n> codebase would have more examples of object_ownercheck() for readers to \n> see. Seeing the existence of pg_FOO_ownercheck implies that a \n> pg_BAR_ownercheck might exist, and if BAR is missing they might be \n> inclined to re-add it.\n\nWe do have several ownercheck and aclcheck functions that can't be \nrefactored into this framework right now, so we do have to keep some \nspecial-purpose functions around anyway. 
I'm afraid converting all the \ncallers would blow up this patch quite a bit, but it could be done as a \nfollow-up patch.\n\n> If we do keep them, would it make sense to go the extra step and turn \n> the remaining six \"regular\" into static inline functions or even #define-s?\n\nThat could make sense.\n\n\n\n", "msg_date": "Fri, 21 Oct 2022 21:17:47 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: refactor ownercheck and aclcheck functions" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> These patches take the dozens of mostly-duplicate pg_foo_ownercheck() and\n> pg_foo_aclcheck() functions and replace (most of) them by common functions\n> that are driven by the ObjectProperty table. All the required information is\n> already in that table.\n> \n> This is similar to the consolidation of the drop-by-OID functions that we did\n> a while ago (b1d32d3e3230f00b5baba08f75b4f665c7d6dac6).\n\nI've reviewed this patch, as it's related to my patch [1] (In particular, it\nreduces the size of my patch a little bit). I like the idea to reduce the\namount of (almost) copy & pasted code. I haven't found any problem in your\npatch that would be worth mentioning, except that the 0001 part does not apply\nto the current master branch.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 07 Nov 2022 14:19:42 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: refactor ownercheck and aclcheck functions" }, { "msg_contents": "On 21.10.22 21:17, Peter Eisentraut wrote:\n> On 20.10.22 01:24, Corey Huinker wrote:\n>> I'd be inclined to remove the highly used ones as well. That way the \n>> codebase would have more examples of object_ownercheck() for readers \n>> to see. 
Seeing the existence of pg_FOO_ownercheck implies that a \n>> pg_BAR_ownercheck might exist, and if BAR is missing they might be \n>> inclined to re-add it.\n> \n> We do have several ownercheck and aclcheck functions that can't be \n> refactored into this framework right now, so we do have to keep some \n> special-purpose functions around anyway.  I'm afraid converting all the \n> callers would blow up this patch quite a bit, but it could be done as a \n> follow-up patch.\n> \n>> If we do keep them, would it make sense to go the extra step and turn \n>> the remaining six \"regular\" into static inline functions or even \n>> #define-s?\n> \n> That could make sense.\n\nAfter considering this again, I decided to brute-force this and get rid \nof all the trivial wrapper functions and also several of the special \ncases. That way, there is less confusion at the call sites about why \nthis or that style is used in a particular case. Also, it now makes \nsure you can't accidentally use the generic functions when a particular \none should be used.", "msg_date": "Tue, 8 Nov 2022 12:16:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: refactor ownercheck and aclcheck functions" }, { "msg_contents": ">\n> After considering this again, I decided to brute-force this and get rid\n> of all the trivial wrapper functions and also several of the special\n> cases. That way, there is less confusion at the call sites about why\n> this or that style is used in a particular case. Also, it now makes\n> sure you can't accidentally use the generic functions when a particular\n> one should be used.\n>\n\n+1\n\nHowever, the aclcheck patch isn't applying for me now. That patch modifies\n37 files, so it's hard to say just which commit conflicts.\n\nAfter considering this again, I decided to brute-force this and get rid \nof all the trivial wrapper functions and also several of the special \ncases.  
That way, there is less confusion at the call sites about why \nthis or that style is used in a particular case.  Also, it now makes \nsure you can't accidentally use the generic functions when a particular \none should be used.+1However, the aclcheck patch isn't applying for me now. That patch modifies 37 files, so it's hard to say just which commit conflicts.", "msg_date": "Wed, 9 Nov 2022 13:12:36 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: refactor ownercheck and aclcheck functions" }, { "msg_contents": "On 09.11.22 19:12, Corey Huinker wrote:\n> After considering this again, I decided to brute-force this and get rid\n> of all the trivial wrapper functions and also several of the special\n> cases.  That way, there is less confusion at the call sites about why\n> this or that style is used in a particular case.  Also, it now makes\n> sure you can't accidentally use the generic functions when a particular\n> one should be used.\n> \n> \n> +1\n\ncommitted\n\n\n\n", "msg_date": "Sun, 13 Nov 2022 10:26:43 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: refactor ownercheck and aclcheck functions" } ]
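The refactoring agreed in this thread works because each pg_FOO_ownercheck() differs only in which catalog it consults for the owner column, and that is information the ObjectProperty table already records. The Python sketch below illustrates the table-driven shape in miniature; the catalogs and OIDs are invented for the example, and real backend concerns (syscache lookups, the superuser bypass, error reporting) are omitted.

```python
# Toy "catalogs": each maps an object OID to its owning role OID.
# These structures are invented for illustration; they are not the
# backend's real catalogs or ObjectProperty entries.
PG_CLASS = {1001: 10, 1002: 20}  # tables
PG_PROC = {2001: 10}             # functions

# Before: one near-identical wrapper per object type.
def pg_class_ownercheck(relid, roleid):
    return PG_CLASS[relid] == roleid

def pg_proc_ownercheck(procid, roleid):
    return PG_PROC[procid] == roleid

# After: the per-class differences live in one property table, and a
# single generic function consults it, analogous to
# object_ownercheck(classid, objid, roleid) driven by ObjectProperty.
OBJECT_PROPERTY = {
    "pg_class": {"catalog": PG_CLASS},
    "pg_proc": {"catalog": PG_PROC},
}

def object_ownercheck(classid, objid, roleid):
    prop = OBJECT_PROPERTY[classid]
    return prop["catalog"][objid] == roleid
```

With the property table carrying the per-class differences, the dozens of wrappers reduce to one generic function, which is the footprint reduction noted in the thread.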
[ { "msg_contents": "Hi hackers.\n\nThis post is about parameter default values. Specifically. it's about\nthe CREATE PUBLICATION and CREATE SUBSCRIPTION syntax, although the\nsame issue might apply to other commands I am unaware of...\n\n~~~\n\nBackground:\n\nCREATE PUBLICATION syntax has a WITH clause:\n[ WITH ( publication_parameter [= value] [, ... ] ) ]\n\nCREATE SUBSCRIPTION syntax has a similar clause:\n [ WITH ( subscription_parameter [= value] [, ... ] ) ]\n\n~~~\n\nThe docs describe all the parameters that can be specified. Parameters\nare optional, so the docs describe the defaults if the parameter name\nis not specified. However, notice that the parameter *value* part is\nalso optional.\n\nSo, what is the defined behaviour if a parameter name is specified but\nno *value* is given?\n\nIn practice, it seems to just be a shorthand for assigning a boolean\nvalue to true... BUT -\n\na) I can't find anywhere in the docs where it actually says this\n\nb) Without documentation some might consider it to be strange that now\nthere are 2 kinds of defaults - a default when there is no name, and\nanother default when there is no value - and those are not always the\nsame. e.g. if publish_via_partition root is not specified at all, it\nis equivalent of WITH (publish_via_partition_root=false), but OTOH,\nWITH (publish_via_partition_root) is equivalent of WITH\n(publish_via_partition_root=true).\n\nc) What about non-boolean parameters? 
In practice, it seems they all\ngive errors:\n\ntest_pub=# CREATE PUBLICATION pub99 FOR ALL TABLES WITH (publish);\nERROR: publish requires a parameter\n\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1 WITH (slot_name);\nERROR: slot_name requires a parameter\n\ntest_sub=# CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=test_pub' PUBLICATION pub1 WITH (synchronous_commit);\nERROR: synchronous_commit requires a parameter\n\n~~~\n\nIt almost feels like this is an undocumented feature, except it isn't\nquite undocumented because it is right there in black-and-white in the\nsyntax \"[= value]\". Or perhaps this implied boolean-true behaviour is\nalready described elsewhere? But if it is, I have not found it yet.\n\nIMO a simple patch (PSA) is needed to clarify the behaviour.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 14 Oct 2022 19:54:37 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Fri, Oct 14, 2022 at 07:54:37PM +1100, Peter Smith wrote:\n> Hi hackers.\n> \n> This post is about parameter default values. Specifically. 
it's about\n> the CREATE PUBLICATION and CREATE SUBSCRIPTION syntax, although the\n> same issue might apply to other commands I am unaware of...\n\nThe same thing seems to be true in various other pages:\ngit grep 'WITH.*value' doc\n\nIn addition to WITH, it's also true of SET:\n\ngit grep -F '[= <replaceable class=\"parameter\">value' doc/src/sgml/ref/alter_index.sgml doc/src/sgml/ref/alter_table.sgml doc/src/sgml/ref/create_materialized_view.sgml doc/src/sgml/ref/create_publication.sgml doc/src/sgml/ref/create_subscription.sgml\n\nNote that some utility statements (analyze,cluster,vacuum,reindex) which\nhave parenthesized syntax with booleans say this:\n| The boolean value can also be omitted, in which case TRUE is assumed.\n\nBTW, in your patch:\n+ <para>\n+ A <type>boolean</type> parameter can omit the value. This is equivalent\n+ to assigning the parameter to <literal>true</literal>.\n+ </para>\n+ <para>\n\nshould say: \"The value can be omitted, which is equivalent to specifying\nTRUE\".\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 17 Oct 2022 15:09:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Tue, Oct 18, 2022 at 7:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Oct 14, 2022 at 07:54:37PM +1100, Peter Smith wrote:\n> > Hi hackers.\n> >\n> > This post is about parameter default values. Specifically. 
it's about\n> > the CREATE PUBLICATION and CREATE SUBSCRIPTION syntax, although the\n> > same issue might apply to other commands I am unaware of...\n>\n> The same thing seems to be true in various other pages:\n> git grep 'WITH.*value' doc\n>\n> In addition to WITH, it's also true of SET:\n>\n> git grep -F '[= <replaceable class=\"parameter\">value' doc/src/sgml/ref/alter_index.sgml doc/src/sgml/ref/alter_table.sgml doc/src/sgml/ref/create_materialized_view.sgml doc/src/sgml/ref/create_publication.sgml doc/src/sgml/ref/create_subscription.sgml\n>\n> Note that some utility statements (analyze,cluster,vacuum,reindex) which\n> have parenthesized syntax with booleans say this:\n> | The boolean value can also be omitted, in which case TRUE is assumed.\n\nThank you for the feedback and for reporting about other places\nsimilar to this. For now, I only intended to fix docs related to\nlogical replication. Scope creep to other areas maybe can be addressed\nby subsequent patches if this one gets accepted.\n\n>\n> BTW, in your patch:\n> + <para>\n> + A <type>boolean</type> parameter can omit the value. This is equivalent\n> + to assigning the parameter to <literal>true</literal>.\n> + </para>\n> + <para>\n>\n> should say: \"The value can be omitted, which is equivalent to specifying\n> TRUE\".\n>\n\nI've changed the text as you suggested, except in a couple of places\nwhere I qualified by saying \"For boolean parameters...\"; that's\nbecause the value part is not *always* optional. I've also made\nsimilar updates to the ALTER PUBLICATION/SUBSCRIPTION pages, which\nwere accidentally missed before.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 19 Oct 2022 13:10:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Hi,\r\n\r\nThis documentation change looks good to me. 
I verified in testing and in code that the value for boolean parameters in PUB/SUB commands can be omitted. which is equivalent to specifying TRUE. For example,\r\n\r\nCREATE PUBLICATIOIN mypub for ALL TABLES with (publish_via_partition_root);\r\nis equivalent to\r\nCREATE PUBLICATIOIN mypub for ALL TABLES with (publish_via_partition_root = TRUE);\r\n\r\nThe behavior is due to the following code\r\nhttps://github.com/postgres/postgres/blob/master/src/backend/commands/define.c#L113\r\n\r\nMarking this as ready for committer.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 09 Jan 2023 17:37:05 +0000", "msg_from": "Zheng Li <zhengli10@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Zheng Li <zhengli10@gmail.com> writes:\n> The behavior is due to the following code\n> https://github.com/postgres/postgres/blob/master/src/backend/commands/define.c#L113\n\nYeah, so you can grep for places that have this behavior by looking\nfor defGetBoolean calls ... and there are quite a few. That leads\nme to the conclusion that we'd better invent a fairly stylized\ndocumentation solution that we can plug into a lot of places,\nrather than thinking of slightly different ways to say it and\nplaces to say it. I'm not necessarily opposed to Peter's desire\nto fix replication-related commands first, but we have more to do\nlater.\n\nI'm also not that thrilled with putting the addition up at the top\nof the relevant text. This behavior is at least two decades old,\nso if we've escaped documenting it at all up to now, it can't be\nthat important to most people.\n\nI also notice that ALTER SUBSCRIPTION has fully three different\nsub-sections with about equal claims on this note, if we're going\nto stick it directly into the affected option lists.\n\nThat all leads me to propose that we add the new text at the end of\nthe Parameters <refsect1> in the affected man pages. 
So about\nlike the attached. (I left out alter_publication.sgml, as I'm not\nsure it needs its own copy of this text --- it doesn't describe\nindividual parameters at all, just refer to CREATE PUBLICATION.)\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Jan 2023 16:36:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Mon, Jan 30, 2023 at 8:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Zheng Li <zhengli10@gmail.com> writes:\n> > The behavior is due to the following code\n> > https://github.com/postgres/postgres/blob/master/src/backend/commands/define.c#L113\n>\n> Yeah, so you can grep for places that have this behavior by looking\n> for defGetBoolean calls ... and there are quite a few. That leads\n> me to the conclusion that we'd better invent a fairly stylized\n> documentation solution that we can plug into a lot of places,\n> rather than thinking of slightly different ways to say it and\n> places to say it. I'm not necessarily opposed to Peter's desire\n> to fix replication-related commands first, but we have more to do\n> later.\n>\n> I'm also not that thrilled with putting the addition up at the top\n> of the relevant text. This behavior is at least two decades old,\n> so if we've escaped documenting it at all up to now, it can't be\n> that important to most people.\n>\n> I also notice that ALTER SUBSCRIPTION has fully three different\n> sub-sections with about equal claims on this note, if we're going\n> to stick it directly into the affected option lists.\n>\n> That all leads me to propose that we add the new text at the end of\n> the Parameters <refsect1> in the affected man pages. So about\n> like the attached. 
(I left out alter_publication.sgml, as I'm not\n> sure it needs its own copy of this text --- it doesn't describe\n> individual parameters at all, just refer to CREATE PUBLICATION.)\n>\n\nThe v3 patch LGTM (just for the logical replication commands).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 30 Jan 2023 18:15:58 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> The v3 patch LGTM (just for the logical replication commands).\n\nPushed then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 12:00:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Tue, Jan 31, 2023 at 4:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > The v3 patch LGTM (just for the logical replication commands).\n>\n> Pushed then.\n>\n\nThanks for pushing the v3 patch.\n\nI'd forgotten about the 'streaming' option -- AFAIK this was\npreviously a boolean parameter and so its [= value] part can also be\nomitted. However, in PG16 streaming became an enum type\n(on/off/parallel), and the value can still be omitted but that is not\nreally being covered by the new generic text note about booleans added\nby yesterday's patch.\n\ne.g. 
The enum 'streaming' value part can still be omitted.\ntest_sub=# create subscription sub1 connection 'host=localhost\ndbname=test_pub' publication pub1 with (streaming);\n\nPerhaps a small top-up patch to CREATE SUBSCRIPTION is needed to\ndescribe this special case?\n\nPSA.\n\n(I thought mentioning this special streaming case again for ALTER\nSUBSCRIPTION might be overkill)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 31 Jan 2023 09:36:53 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> I'd forgotten about the 'streaming' option -- AFAIK this was\n> previously a boolean parameter and so its [= value] part can also be\n> omitted. However, in PG16 streaming became an enum type\n> (on/off/parallel), and the value can still be omitted but that is not\n> really being covered by the new generic text note about booleans added\n> by yesterday's patch.\n\nHmph. I generally think that options defined like this (it's a boolean,\nexcept it isn't) are a bad idea, and would prefer to see that API\nrethought while we still can.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Jan 2023 17:55:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Tue, Jan 31, 2023 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > I'd forgotten about the 'streaming' option -- AFAIK this was\n> > previously a boolean parameter and so its [= value] part can also be\n> > omitted. However, in PG16 streaming became an enum type\n> > (on/off/parallel), and the value can still be omitted but that is not\n> > really being covered by the new generic text note about booleans added\n> > by yesterday's patch.\n>\n> Hmph. 
I generally think that options defined like this (it's a boolean,\n> except it isn't) are a bad idea, and would prefer to see that API\n> rethought while we still can.\n>\n\nWe have discussed this during development and considered using a\nseparate option like parallel = on (or say parallel_workers = n) but\nthere were challenges with the same. See discussion in email [1]. We\nalso checked that we have various other places using something similar\nfor options. For example COPY commands option: HEADER [ boolean |\nMATCH ]. Then GUCs like\nsynchronous_commit/constraint_exclusion/huge_pages/backslash_quote\nhave similar values. Then some of the reloptions like buffering,\nvacuum_index_cleanup also have off/on/auto values. I think having an\nenum where off/on are present is already used. In this case, the main\nreason is that after discussion we felt it is better to have streaming\nas an enum with values off/on/parallel instead of introducing a new\noption.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Kt67RdW0WTR-LTxasj3pyukPCYhfA0arDUNnsz2wh03A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Jan 2023 11:12:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Jan 31, 2023 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmph. I generally think that options defined like this (it's a boolean,\n>> except it isn't) are a bad idea, and would prefer to see that API\n>> rethought while we still can.\n\n> We have discussed this during development and considered using a\n> separate option like parallel = on (or say parallel_workers = n) but\n> there were challenges with the same. See discussion in email [1]. We\n> also checked that we have various other places using something similar\n> for options. 
For example COPY commands option: HEADER [ boolean |\n> MATCH ].\n\nYeah, and it's bad experiences with the existing cases that make me\nnot want to add more. Every one of those was somebody taking the\neasy way out. It generally leads to parsing oddities, such as\nnot accepting all the same spellings of \"boolean\" as before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Jan 2023 09:49:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Tuesday, January 31, 2023 10:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\nHi,\n\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Tue, Jan 31, 2023 at 4:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmph. I generally think that options defined like this (it's a\n> >> boolean, except it isn't) are a bad idea, and would prefer to see\n> >> that API rethought while we still can.\n> \n> > We have discussed this during development and considered using a\n> > separate option like parallel = on (or say parallel_workers = n) but\n> > there were challenges with the same. See discussion in email [1]. We\n> > also checked that we have various other places using something similar\n> > for options. For example COPY commands option: HEADER [ boolean |\n> > MATCH ].\n> \n> Yeah, and it's bad experiences with the existing cases that make me not want to\n> add more. Every one of those was somebody taking the easy way out. It\n> generally leads to parsing oddities, such as not accepting all the same spellings\n> of \"boolean\" as before.\n\nI understand the worry of parsing oddities. I think we have tried to make the\nstreaming option keep accepting all the same spellings of boolean(e.g. the option still\naccept(1/0/true/false...)). And this is similar to some other option like COPY\nHEADER option which accepts all the boolean value and the 'match' value. 
Some\nother GUCs like wal_compression also behave similarly:\n0/1/true/false/on/off/lz4/pglz are all valid values.\n\nBest Regards,\nHou zj\n\n\n\n", "msg_date": "Wed, 1 Feb 2023 06:49:24 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Mon, Jan 30, 2023 at 8:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Zheng Li <zhengli10@gmail.com> writes:\n> > The behavior is due to the following code\n> > https://github.com/postgres/postgres/blob/master/src/backend/commands/define.c#L113\n>\n> Yeah, so you can grep for places that have this behavior by looking\n> for defGetBoolean calls ... and there are quite a few. That leads\n> me to the conclusion that we'd better invent a fairly stylized\n> documentation solution that we can plug into a lot of places,\n> rather than thinking of slightly different ways to say it and\n> places to say it. I'm not necessarily opposed to Peter's desire\n> to fix replication-related commands first, but we have more to do\n> later.\n>\n> I'm also not that thrilled with putting the addition up at the top\n> of the relevant text. This behavior is at least two decades old,\n> so if we've escaped documenting it at all up to now, it can't be\n> that important to most people.\n>\n> I also notice that ALTER SUBSCRIPTION has fully three different\n> sub-sections with about equal claims on this note, if we're going\n> to stick it directly into the affected option lists.\n>\n> That all leads me to propose that we add the new text at the end of\n> the Parameters <refsect1> in the affected man pages. So about\n> like the attached. 
(I left out alter_publication.sgml, as I'm not\n> sure it needs its own copy of this text --- it doesn't describe\n> individual parameters at all, just refer to CREATE PUBLICATION.)\n>\n> regards, tom lane\n>\n\nHi,\n\nHere is a similar update for another page: \"55.4 Streaming Replication\nProtocol\" [0]. This patch was prompted by a review comment reply at\n[1] (#2).\n\nI've used text almost the same as the boilerplate text added by the\nprevious commit [2]\n\n~\n\nPSA patch v4.\n\n======\n[0] https://www.postgresql.org/docs/devel/protocol-replication.html\n[1] https://www.postgresql.org/message-id/OS0PR01MB571663BCE8B28597D462FADE946A2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n[2] https://github.com/postgres/postgres/commit/ec7e053a98f39a9e3c7e6d35f0d2e83933882399\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 12 Jan 2024 16:07:32 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "On Fri, Jan 12, 2024 at 4:07 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 30, 2023 at 8:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Zheng Li <zhengli10@gmail.com> writes:\n> > > The behavior is due to the following code\n> > > https://github.com/postgres/postgres/blob/master/src/backend/commands/define.c#L113\n> >\n> > Yeah, so you can grep for places that have this behavior by looking\n> > for defGetBoolean calls ... and there are quite a few. That leads\n> > me to the conclusion that we'd better invent a fairly stylized\n> > documentation solution that we can plug into a lot of places,\n> > rather than thinking of slightly different ways to say it and\n> > places to say it. I'm not necessarily opposed to Peter's desire\n> > to fix replication-related commands first, but we have more to do\n> > later.\n> >\n> > I'm also not that thrilled with putting the addition up at the top\n> > of the relevant text. 
This behavior is at least two decades old,\n> > so if we've escaped documenting it at all up to now, it can't be\n> > that important to most people.\n> >\n> > I also notice that ALTER SUBSCRIPTION has fully three different\n> > sub-sections with about equal claims on this note, if we're going\n> > to stick it directly into the affected option lists.\n> >\n> > That all leads me to propose that we add the new text at the end of\n> > the Parameters <refsect1> in the affected man pages. So about\n> > like the attached. (I left out alter_publication.sgml, as I'm not\n> > sure it needs its own copy of this text --- it doesn't describe\n> > individual parameters at all, just refer to CREATE PUBLICATION.)\n> >\n> > regards, tom lane\n> >\n>\n> Hi,\n>\n> Here is a similar update for another page: \"55.4 Streaming Replication\n> Protocol\" [0]. This patch was prompted by a review comment reply at\n> [1] (#2).\n>\n> I've used text almost the same as the boilerplate text added by the\n> previous commit [2]\n>\n> ~\n>\n> PSA patch v4.\n>\n> ======\n> [0] https://www.postgresql.org/docs/devel/protocol-replication.html\n> [1] https://www.postgresql.org/message-id/OS0PR01MB571663BCE8B28597D462FADE946A2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n> [2] https://github.com/postgres/postgres/commit/ec7e053a98f39a9e3c7e6d35f0d2e83933882399\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n\nBump.\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sun, 3 Mar 2024 10:59:40 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pub/sub - specifying optional parameters without values." }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Here is a similar update for another page: \"55.4 Streaming Replication\n> Protocol\" [0]. 
This patch was prompted by a review comment reply at\n> [1] (#2).\n\n> I've used text almost the same as the boilerplate text added by the\n> previous commit [2]\n\nPushed, except I put it at the bottom of the section not the top.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 17:17:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pub/sub - specifying optional parameters without values." } ]
[ { "msg_contents": "Hi hackers,\n\nwhile working on [1], I thought it could also be useful to add regular \nexpression testing for user name mapping in the peer authentication TAP \ntest.\n\nThis kind of test already exists in kerberos/t/001_auth.pl but the \nproposed one in the peer authentication testing would probably be more \nwidely tested.\n\nPlease find attached a patch proposal to do so.\n\n[1]: \nhttps://www.postgresql.org/message-id/4f55303e-62c1-1072-61db-fbfb30bd66c8%40gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 14 Oct 2022 18:31:15 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Add regular expression testing for user name mapping in the peer\n authentication TAP test" }, { "msg_contents": "On Fri, Oct 14, 2022 at 06:31:15PM +0200, Drouvot, Bertrand wrote:\n> while working on [1], I thought it could also be useful to add regular\n> expression testing for user name mapping in the peer authentication TAP\n> test.\n\nGood idea now that we have a bit more coverage in the authentication\ntests.\n\n> +# Test with regular expression in user name map.\n> +my $last_system_user_char = substr($system_user, -1);\n\nThis would attach to the regex the last character of the system user.\nI would perhaps have used more characters than that (-3?), as substr()\nwith a negative number larger than the string given in input would\ngive the entire string. 
That's a nit, though.\n\n> +# The regular expression does not match.\n> +reset_pg_ident($node, 'mypeermap', '/^$', 'testmapuser');\n\nThis matches only an empty string, my brain gets that right?\n--\nMichael", "msg_date": "Sat, 15 Oct 2022 12:11:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add regular expression testing for user name mapping in the peer\n authentication TAP test" }, { "msg_contents": "Hi,\n\nOn 10/15/22 5:11 AM, Michael Paquier wrote:\n> On Fri, Oct 14, 2022 at 06:31:15PM +0200, Drouvot, Bertrand wrote:\n>> while working on [1], I thought it could also be useful to add regular\n>> expression testing for user name mapping in the peer authentication TAP\n>> test.\n> \n> Good idea now that we have a bit more coverage in the authentication\n> tests.\n\nThanks for looking at it!\n\n>> +# Test with regular expression in user name map.\n>> +my $last_system_user_char = substr($system_user, -1);\n> \n> This would attach to the regex the last character of the system user.\n\nRight.\n\n> I would perhaps have used more characters than that (-3?), as substr()\n> with a negative number larger than the string given in input would\n> give the entire string. That's a nit, though.\n\nI don't have a strong opinion on this, so let's extract the last 3 \ncharacters. This is what v2 attached does.\n\n> \n>> +# The regular expression does not match.\n>> +reset_pg_ident($node, 'mypeermap', '/^$', 'testmapuser');\n> \n> This matches only an empty string, my brain gets that right?\n\nRight. Giving a second thought to the non matching case, I think I'd \nprefer to concatenate the system_user to the system_user instead. 
This \nis what v2 does.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 15 Oct 2022 07:54:30 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add regular expression testing for user name mapping in the peer\n authentication TAP test" }, { "msg_contents": "On Sat, Oct 15, 2022 at 07:54:30AM +0200, Drouvot, Bertrand wrote:\n> Right. Giving a second thought to the non matching case, I think I'd prefer\n> to concatenate the system_user to the system_user instead. This is what v2\n> does.\n\nFine by me, so applied v2. Thanks!\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 11:07:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add regular expression testing for user name mapping in the peer\n authentication TAP test" }, { "msg_contents": "Hi,\n\nOn 10/17/22 4:07 AM, Michael Paquier wrote:\n> On Sat, Oct 15, 2022 at 07:54:30AM +0200, Drouvot, Bertrand wrote:\n>> Right. Giving a second thought to the non matching case, I think I'd prefer\n>> to concatenate the system_user to the system_user instead. This is what v2\n>> does.\n> \n> Fine by me, so applied v2. Thanks!\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 17 Oct 2022 08:27:35 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add regular expression testing for user name mapping in the peer\n authentication TAP test" } ]
[ { "msg_contents": "Enclosed is a trivial fix for a typo and misnamed field I noted when doing\nsome code review.\n\nBest,\n\nDavid", "msg_date": "Fri, 14 Oct 2022 16:36:36 -0500", "msg_from": "David Christensen <david.christensen@crunchydata.com>", "msg_from_op": true, "msg_subject": "[PATCH] comment fixes for delayChkptFlags" }, { "msg_contents": "On Fri, Oct 14, 2022 at 04:36:36PM -0500, David Christensen wrote:\n> Enclosed is a trivial fix for a typo and misnamed field I noted when doing\n> some code review.\n\n(Few days after the fact)\n\nThanks. There was a second location where delayChkpt was still\nmentioned, so fixed also that while on it.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 11:41:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] comment fixes for delayChkptFlags" } ]
[ { "msg_contents": "Hi,\n\nThere have been a couple discussions about using BRIN indexes for\nsorting - in fact this was mentioned even in the \"Improving Indexing\nPerformance\" unconference session this year (don't remember by whom).\nBut I haven't seen any patches, so here's one.\n\nThe idea is that we can use information about ranges to split the table\ninto smaller parts that can be sorted in smaller chunks. For example if\nyou have a tiny 2MB table with two ranges, with values in [0,100] and\n[101,200] intervals, then it's clear we can sort the first range, output\ntuples, and then sort/output the second range.\n\nThe attached patch builds \"BRIN Sort\" paths/plans, closely resembling\nindex scans, only for BRIN indexes. And this special type of index scan\ndoes what was mentioned above - incrementally sorts the data. It's a bit\nmore complicated because of overlapping ranges, ASC/DESC, NULL etc.\n\nThis is disabled by default, using a GUC enable_brinsort (you may need\nto tweak other GUCs to disable parallel plans etc.).\n\nA trivial example, demonstrating the benefits:\n\n create table t (a int) with (fillfactor = 10);\n insert into t select i from generate_series(1,10000000) s(i);\n\n\nFirst, a simple LIMIT query:\n\nexplain (analyze, costs off) select * from t order by a limit 10;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Limit (actual time=1879.768..1879.770 rows=10 loops=1)\n -> Sort (actual time=1879.767..1879.768 rows=10 loops=1)\n Sort Key: a\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on t\n (actual time=0.007..1353.110 rows=10000000 loops=1)\n Planning Time: 0.083 ms\n Execution Time: 1879.786 ms\n(7 rows)\n\n QUERY PLAN\n------------------------------------------------------------------------\n Limit (actual time=1.217..1.219 rows=10 loops=1)\n -> BRIN Sort using t_a_idx on t\n (actual time=1.216..1.217 rows=10 loops=1)\n Sort Key: a\n Planning Time: 0.084 ms\n Execution Time: 1.234 
ms\n(5 rows)\n\nThat's a pretty nice improvement - of course, this is thanks to having a\nperfectly sequential table, and the difference can be almost arbitrary by\nmaking the table smaller/larger. Similarly, if the table gets less\nsequential (making ranges to overlap), the BRIN plan will be more\nexpensive. Feel free to experiment with other data sets.\n\nHowever, not only the LIMIT queries can improve - consider a sort of the\nwhole table:\n\ntest=# explain (analyze, costs off) select * from t order by a;\n\n QUERY PLAN\n-------------------------------------------------------------------------\n Sort (actual time=2806.468..3487.213 rows=10000000 loops=1)\n Sort Key: a\n Sort Method: external merge Disk: 117528kB\n -> Seq Scan on t (actual time=0.018..1498.754 rows=10000000 loops=1)\n Planning Time: 0.110 ms\n Execution Time: 3766.825 ms\n(6 rows)\n\ntest=# explain (analyze, costs off) select * from t order by a;\n QUERY PLAN\n\n----------------------------------------------------------------------------------\n BRIN Sort using t_a_idx on t (actual time=1.210..2670.875 rows=10000000\nloops=1)\n Sort Key: a\n Planning Time: 0.073 ms\n Execution Time: 2939.324 ms\n(4 rows)\n\nRight - not a huge difference, but still a nice 25% speedup, mostly due\nto not having to spill data to disk and sorting smaller amounts of data.\n\nThere's a bunch of issues with this initial version of the patch,\nusually described in XXX comments in the relevant places.\n\n1) The paths are created in build_index_paths() because that's what\ncreates index scans (which the new path resembles). But that is expected\nto produce IndexPath, not BrinSortPath, so it's not quite correct.\nShould be somewhere \"higher\" I guess.\n\n2) BRIN indexes don't have internal ordering, i.e. ASC/DESC and NULLS\nFIRST/LAST does not really matter for them. The patch just generates\npaths for all 4 combinations (or tries to). 
Maybe there's a better way.\n\n3) I'm not quite sure the separation of responsibilities between\nopfamily and opclass is optimal. I added a new amproc, but maybe this\nshould be split differently. At the moment only minmax indexes have\nthis, but adding this to minmax-multi should be trivial.\n\n4) The state changes in nodeBrinSort is a bit confusing. Works, but may\nneed cleanup and refactoring. Ideas welcome.\n\n5) The costing is essentially just plain cost_index. I have some ideas\nabout BRIN costing in general, which I'll post in a separate thread (as\nit's not specific to this patch).\n\n6) At the moment this only picks one of the index keys, specified in the\nORDER BY clause. I think we can generalize this to multiple keys, but\nthinking about multi-key ranges was a bit too much for me. The good\nthing is this nicely combines with IncrementalSort.\n\n7) Only plain index keys for the ORDER BY keys, no expressions. Should\nnot be hard to fix, though.\n\n8) Parallel version is not supported, but I think it shouldn't be\npossible. Just make the leader build the range info, and then let the\nworkers to acquire/sort ranges and merge them by Gather Merge.\n\n9) I was also thinking about leveraging other indexes to quickly\neliminate ranges that need to be sorted. The node does evaluate filter,\nof course, but only after reading the tuple from the range. But imagine\nwe allow BrinSort to utilize BRIN indexes to evaluate the filter - in\nthat case we might skip many ranges entirely. Essentially like a bitmap\nindex scan does, except that building the bitmap incrementally with BRIN\nis trivial - you can quickly check if a particular range matches or not.\nWith other indexes (e.g. btree) you essentially need to evaluate the\nfilter completely, and only then you can look at the bitmap. 
Which seems\nrather against the idea of this patch, which is about low startup cost.\nOf course, the condition might be very selective, but then you probably\ncan just fetch the matching tuples and do a Sort.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 15 Oct 2022 14:33:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sat, Oct 15, 2022 at 5:34 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> There have been a couple discussions about using BRIN indexes for\n> sorting - in fact this was mentioned even in the \"Improving Indexing\n> Performance\" unconference session this year (don't remember by whom).\n> But I haven't seen any patches, so here's one.\n>\n> The idea is that we can use information about ranges to split the table\n> into smaller parts that can be sorted in smaller chunks. For example if\n> you have a tiny 2MB table with two ranges, with values in [0,100] and\n> [101,200] intervals, then it's clear we can sort the first range, output\n> tuples, and then sort/output the second range.\n>\n> The attached patch builds \"BRIN Sort\" paths/plans, closely resembling\n> index scans, only for BRIN indexes. And this special type of index scan\n> does what was mentioned above - incrementally sorts the data. 
It's a bit\n> more complicated because of overlapping ranges, ASC/DESC, NULL etc.\n>\n> This is disabled by default, using a GUC enable_brinsort (you may need\n> to tweak other GUCs to disable parallel plans etc.).\n>\n> A trivial example, demonstrating the benefits:\n>\n> create table t (a int) with (fillfactor = 10);\n> insert into t select i from generate_series(1,10000000) s(i);\n>\n>\n> First, a simple LIMIT query:\n>\n> explain (analyze, costs off) select * from t order by a limit 10;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Limit (actual time=1879.768..1879.770 rows=10 loops=1)\n> -> Sort (actual time=1879.767..1879.768 rows=10 loops=1)\n> Sort Key: a\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Seq Scan on t\n> (actual time=0.007..1353.110 rows=10000000 loops=1)\n> Planning Time: 0.083 ms\n> Execution Time: 1879.786 ms\n> (7 rows)\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Limit (actual time=1.217..1.219 rows=10 loops=1)\n> -> BRIN Sort using t_a_idx on t\n> (actual time=1.216..1.217 rows=10 loops=1)\n> Sort Key: a\n> Planning Time: 0.084 ms\n> Execution Time: 1.234 ms\n> (5 rows)\n>\n> That's a pretty nice improvement - of course, this is thanks to having a\n> perfectly sequential, and the difference can be almost arbitrary by\n> making the table smaller/larger. Similarly, if the table gets less\n> sequential (making ranges to overlap), the BRIN plan will be more\n> expensive. 
Feel free to experiment with other data sets.\n>\n> However, not only the LIMIT queries can improve - consider a sort of the\n> whole table:\n>\n> test=# explain (analyze, costs off) select * from t order by a;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Sort (actual time=2806.468..3487.213 rows=10000000 loops=1)\n> Sort Key: a\n> Sort Method: external merge Disk: 117528kB\n> -> Seq Scan on t (actual time=0.018..1498.754 rows=10000000 loops=1)\n> Planning Time: 0.110 ms\n> Execution Time: 3766.825 ms\n> (6 rows)\n>\n> test=# explain (analyze, costs off) select * from t order by a;\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------\n> BRIN Sort using t_a_idx on t (actual time=1.210..2670.875 rows=10000000\n> loops=1)\n> Sort Key: a\n> Planning Time: 0.073 ms\n> Execution Time: 2939.324 ms\n> (4 rows)\n>\n> Right - not a huge difference, but still a nice 25% speedup, mostly due\n> to not having to spill data to disk and sorting smaller amounts of data.\n>\n> There's a bunch of issues with this initial version of the patch,\n> usually described in XXX comments in the relevant places.6)\n>\n> 1) The paths are created in build_index_paths() because that's what\n> creates index scans (which the new path resembles). But that is expected\n> to produce IndexPath, not BrinSortPath, so it's not quite correct.\n> Should be somewhere \"higher\" I guess.\n>\n> 2) BRIN indexes don't have internal ordering, i.e. ASC/DESC and NULLS\n> FIRST/LAST does not really matter for them. The patch just generates\n> paths for all 4 combinations (or tries to). Maybe there's a better way.\n>\n> 3) I'm not quite sure the separation of responsibilities between\n> opfamily and opclass is optimal. I added a new amproc, but maybe this\n> should be split differently. 
At the moment only minmax indexes have\n> this, but adding this to minmax-multi should be trivial.\n>\n> 4) The state changes in nodeBrinSort is a bit confusing. Works, but may\n> need cleanup and refactoring. Ideas welcome.\n>\n> 5) The costing is essentially just plain cost_index. I have some ideas\n> about BRIN costing in general, which I'll post in a separate thread (as\n> it's not specific to this patch).\n>\n> 6) At the moment this only picks one of the index keys, specified in the\n> ORDER BY clause. I think we can generalize this to multiple keys, but\n> thinking about multi-key ranges was a bit too much for me. The good\n> thing is this nicely combines with IncrementalSort.\n>\n> 7) Only plain index keys for the ORDER BY keys, no expressions. Should\n> not be hard to fix, though.\n>\n> 8) Parallel version is not supported, but I think it shouldn't be\n> possible. Just make the leader build the range info, and then let the\n> workers to acquire/sort ranges and merge them by Gather Merge.\n>\n> 9) I was also thinking about leveraging other indexes to quickly\n> eliminate ranges that need to be sorted. The node does evaluate filter,\n> of course, but only after reading the tuple from the range. But imagine\n> we allow BrinSort to utilize BRIN indexes to evaluate the filter - in\n> that case we might skip many ranges entirely. Essentially like a bitmap\n> index scan does, except that building the bitmap incrementally with BRIN\n> is trivial - you can quickly check if a particular range matches or not.\n> With other indexes (e.g. btree) you essentially need to evaluate the\n> filter completely, and only then you can look at the bitmap. 
Which seems\n> rather against the idea of this patch, which is about low startup cost.\n> Of course, the condition might be very selective, but then you probably\n> can just fetch the matching tuples and do a Sort.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nHi,\nI am still going over the patch.\n\nMinor: for #8, I guess you meant `it should be possible` .\n\nCheers", "msg_date": "Sat, 15 Oct 2022 06:46:47 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 10/15/22 15:46, Zhihong Yu wrote:\n>...\n>     8) Parallel version is not supported, but I think it shouldn't be\n>     possible. Just make the leader build the range info, and then let the\n>     workers to acquire/sort ranges and merge them by Gather Merge.\n> ...\n> Hi,\n> I am still going over the patch.\n>\n> Minor: for #8, I guess you meant `it should be possible` .\n>\n\nYes, I meant to say it should be possible. Sorry for the confusion.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 15 Oct 2022 17:23:54 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sat, Oct 15, 2022 at 8:23 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 10/15/22 15:46, Zhihong Yu wrote:\n> >...\n> >     8) Parallel version is not supported, but I think it shouldn't be\n> >     possible. Just make the leader build the range info, and then let the\n> >     workers to acquire/sort ranges and merge them by Gather Merge.\n> > ...\n> > Hi,\n> > I am still going over the patch.\n> >\n> > Minor: for #8, I guess you meant `it should be possible` .\n> >\n\nYes, I meant to say it should be possible. 
Sorry for the confusion.\n>\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\nHi,\n\nFor brin_minmax_ranges, looking at the assignment to gottuple and\nreading gottuple, it seems variable gottuple can be omitted - we can\ncheck tup directly.\n\n+ /* Maybe mark the range as processed. */\n+ range->processed |= mark_processed;\n\n`Maybe` can be dropped.\n\nFor brinsort_load_tuples(), do we need to check for interrupts inside the\nloop ?\nSimilar question for subsequent methods involving loops, such\nas brinsort_load_unsummarized_ranges.\n\nCheers", "msg_date": "Sat, 15 Oct 2022 18:36:51 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 10/15/22 14:33, Tomas Vondra wrote:\n> Hi,\n> \n> ...\n> \n> There's a bunch of issues with this initial version of the patch,\n> usually described in XXX comments in the relevant places.6)\n> \n> ...\n\nI forgot to mention one important issue in my list yesterday, and that's\nmemory consumption. The way the patch is coded now, the new BRIN support\nfunction (brin_minmax_ranges) produces information about *all* ranges in\none go, which may be an issue. The worst case is 32TB table, with 1-page\nBRIN ranges, which means ~4 billion ranges. The info is an array of ~32B\nstructs, so this would require ~128GB of RAM. With the default 128-page\nranges, it's still be ~1GB, which is quite a lot.\n\nWe could have a discussion about what's the reasonable size of BRIN\nranges on such large tables (e.g. building a bitmap on 4 billion ranges\nis going to be \"not cheap\" so this is likely pretty rare). But we should\nnot introduce new nodes that ignore work_mem, so we need a way to deal\nwith such cases somehow.\n\nThe easiest solution likely is to check this while planning - we can\ncheck the table size, calculate the number of BRIN ranges, and check\nthat the range info fits into work_mem, and just not create the path\nwhen it gets too large. That's what we did for HashAgg, although that\ndecision was unreliable because estimating GROUP BY cardinality is hard.\n\nThe wrinkle here is that counting just the range info (BrinRange struct)\ndoes not include the values for by-reference types. 
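[The worst-case arithmetic above can be double-checked with a few lines; the 8kB page size and the ~32-byte per-range struct are the stated assumptions.]

```python
# 32TB table, ~32 bytes of per-range bookkeeping (by-reference min/max
# values would come on top of this).

BLCKSZ = 8192                        # default PostgreSQL page size
TABLE_BYTES = 32 * 1024**4           # 32TB
STRUCT_BYTES = 32                    # rough per-range info

def range_info_bytes(pages_per_range):
    npages = TABLE_BYTES // BLCKSZ
    nranges = -(-npages // pages_per_range)     # ceiling division
    return nranges * STRUCT_BYTES

# 1-page ranges   -> ~4 billion ranges, ~128GB of range info
# 128-page ranges (the default) -> ~32 million ranges, ~1GB
```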
We could use average\nwidth - that's just an estimate, though.\n\nA more comprehensive solution seems to be to allow requesting chunks of\nthe BRIN ranges. So that we'd get \"slices\" of ranges and we'd process\nthose. So for example if you have 1000 ranges, and you can only handle\n100 at a time, we'd do 10 loops, each requesting 100 ranges.\n\nThis has another problem - we do care about \"overlaps\", and we can't\nreally know if the overlapping ranges will be in the same \"slice\"\neasily. The chunks would be sorted (for example) by maxval. But there\ncan be a range with much higher maxval (thus in some future slice), but\nvery low minval (thus intersecting with ranges in the current slice).\n\nImagine ranges with these minval/maxval values, sorted by maxval:\n\n[101,200]\n[201,300]\n[301,400]\n[150,500]\n\nand let's say we can only process 2-range slices. So we'll get the first\ntwo, but both of them intersect with the very last range.\n\nWe could always include all the intersecting ranges into the slice, but\nwhat if there are too many very \"wide\" ranges?\n\nSo I think this will need to switch to an iterative communication with\nthe BRIN index - instead of asking \"give me info about all the ranges\",\nwe'll need a way to\n\n - request the next range (sorted by maxval)\n - request the intersecting ranges one by one (sorted by minval)\n\nOf course, the BRIN side will have some of the same challenges with\ntracking the info without breaking the work_mem limit, but I suppose it\ncan store the info into a tuplestore/tuplesort, and use that instead of\nplain in-memory array. Alternatively, it could just return those, and\nBrinSort would use that. OTOH it seems cleaner to have some sort of API,\nespecially if we want to support e.g. 
minmax-multi opclasses, that have\na more complicated concept of \"intersection\".\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 16 Oct 2022 15:50:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sun, Oct 16, 2022 at 6:51 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 10/15/22 14:33, Tomas Vondra wrote:\n> > Hi,\n> >\n> > ...\n> >\n> > There's a bunch of issues with this initial version of the patch,\n> > usually described in XXX comments in the relevant places.6)\n> >\n> > ...\n>\n> I forgot to mention one important issue in my list yesterday, and that's\n> memory consumption. The way the patch is coded now, the new BRIN support\n> function (brin_minmax_ranges) produces information about *all* ranges in\n> one go, which may be an issue. The worst case is 32TB table, with 1-page\n> BRIN ranges, which means ~4 billion ranges. The info is an array of ~32B\n> structs, so this would require ~128GB of RAM. With the default 128-page\n> ranges, it's still be ~1GB, which is quite a lot.\n>\n> We could have a discussion about what's the reasonable size of BRIN\n> ranges on such large tables (e.g. building a bitmap on 4 billion ranges\n> is going to be \"not cheap\" so this is likely pretty rare). But we should\n> not introduce new nodes that ignore work_mem, so we need a way to deal\n> with such cases somehow.\n>\n> The easiest solution likely is to check this while planning - we can\n> check the table size, calculate the number of BRIN ranges, and check\n> that the range info fits into work_mem, and just not create the path\n> when it gets too large. 
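[The planning-time guard quoted above ("just not create the path when it gets too large") could take roughly this shape. The function name, the 32-byte constant, and the width allowance for by-reference values are assumptions for the sketch, not code from the patch.]

```python
def brinsort_path_allowed(table_pages, pages_per_range, work_mem_kb,
                          per_range_bytes=32, avg_value_width=8):
    """Estimate whether the BRIN range info fits into work_mem."""
    nranges = -(-table_pages // pages_per_range)    # ceiling division
    # Fixed struct plus a rough min/max value allowance per range --
    # the average width is just an estimate, as noted above.
    needed = nranges * (per_range_bytes + 2 * avg_value_width)
    return needed <= work_mem_kb * 1024

# ~8GB table, default 128-page ranges, work_mem = 4MB: path allowed.
# 1-page ranges on a 32TB table: rejected, matching the worst case above.
```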
That's what we did for HashAgg, although that\n> decision was unreliable because estimating GROUP BY cardinality is hard.\n>\n> The wrinkle here is that counting just the range info (BrinRange struct)\n> does not include the values for by-reference types. We could use average\n> width - that's just an estimate, though.\n>\n> A more comprehensive solution seems to be to allow requesting chunks of\n> the BRIN ranges. So that we'd get \"slices\" of ranges and we'd process\n> those. So for example if you have 1000 ranges, and you can only handle\n> 100 at a time, we'd do 10 loops, each requesting 100 ranges.\n>\n> This has another problem - we do care about \"overlaps\", and we can't\n> really know if the overlapping ranges will be in the same \"slice\"\n> easily. The chunks would be sorted (for example) by maxval. But there\n> can be a range with much higher maxval (thus in some future slice), but\n> very low minval (thus intersecting with ranges in the current slice).\n>\n> Imagine ranges with these minval/maxval values, sorted by maxval:\n>\n> [101,200]\n> [201,300]\n> [301,400]\n> [150,500]\n>\n> and let's say we can only process 2-range slices. So we'll get the first\n> two, but both of them intersect with the very last range.\n>\n> We could always include all the intersecting ranges into the slice, but\n> what if there are too many very \"wide\" ranges?\n>\n> So I think this will need to switch to an iterative communication with\n> the BRIN index - instead of asking \"give me info about all the ranges\",\n> we'll need a way to\n>\n> - request the next range (sorted by maxval)\n> - request the intersecting ranges one by one (sorted by minval)\n>\n> Of course, the BRIN side will have some of the same challenges with\n> tracking the info without breaking the work_mem limit, but I suppose it\n> can store the info into a tuplestore/tuplesort, and use that instead of\n> plain in-memory array. Alternatively, it could just return those, and\n> BrinSort would use that. 
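[One possible shape for the two-call iterative API quoted above — "request the next range (sorted by maxval)" and "request the intersecting ranges one by one (sorted by minval)". Method names and the in-memory list are hypothetical stand-ins for the real BRIN machinery, which would track this without breaking work_mem.]

```python
class RangeSource:
    """Hypothetical iterative interface to the BRIN side (sketch only)."""

    def __init__(self, ranges):               # ranges: (minval, maxval)
        self.by_maxval = sorted(ranges, key=lambda r: r[1])
        self.processed = set()

    def next_range(self):
        """Request the next unprocessed range, sorted by maxval."""
        for i, r in enumerate(self.by_maxval):
            if i not in self.processed:
                self.processed.add(i)
                return r
        return None

    def intersecting(self, maxval):
        """Yield unprocessed ranges with minval < current maxval,
        sorted by minval, marking them processed as they are handed out."""
        hits = sorted((i for i in range(len(self.by_maxval))
                       if i not in self.processed
                       and self.by_maxval[i][0] < maxval),
                      key=lambda i: self.by_maxval[i][0])
        for i in hits:
            self.processed.add(i)
            yield self.by_maxval[i]
```

With the four ranges from the earlier example, the consumer first gets (101,200) and then, asking for ranges intersecting maxval=200, the wide (150,500) range.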
OTOH it seems cleaner to have some sort of API,\n> especially if we want to support e.g. minmax-multi opclasses, that have\n> a more complicated concept of \"intersection\".\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> The Enterprise PostgreSQL Company\n>\n> Hi,\nIn your example involving [150,500], can this range be broken down into 4\nranges, ending in 200, 300, 400 and 500, respectively ?\nThat way, there is no intersection among the ranges.\n\nbq. can store the info into a tuplestore/tuplesort\n\nWouldn't this involve disk accesses which may reduce the effectiveness of\nBRIN sort ?\n\nCheers", "msg_date": "Sun, 16 Oct 2022 07:01:33 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 10/16/22 03:36, Zhihong Yu wrote:\n> \n> \n> On Sat, Oct 15, 2022 at 8:23 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n>     On 10/15/22 15:46, Zhihong Yu wrote:\n>     >...\n>     >     8) Parallel version is not supported, but I think it shouldn't be\n>     >     possible. Just make the leader build the range info, and then\n>     let the\n>     >     workers to acquire/sort ranges and merge them by Gather Merge.\n>     > ...\n>     > Hi,\n>     > I am still going over the patch.\n>     >\n>     > Minor: for #8, I guess you meant `it should be possible` .\n>     >\n> \n>     Yes, I meant to say it should be possible. Sorry for the confusion.\n> \n> \n> \n>     regards\n> \n>     -- \n>     Tomas Vondra\n>     EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n>     The Enterprise PostgreSQL Company\n> \n> Hi,\n> \n> For brin_minmax_ranges, looking at the assignment to gottuple and\n> reading gottuple, it seems variable gottuple can be omitted - we can\n> check tup directly.\n> \n> +   /* Maybe mark the range as processed. */\n> +   range->processed |= mark_processed;\n> \n> `Maybe` can be dropped.\n> \n\nNo, because the \"mark_processed\" may be false. 
So we may not mark it as\nprocessed in some cases.\n\n> For brinsort_load_tuples(), do we need to check for interrupts inside\n> the loop ?\n> Similar question for subsequent methods involving loops, such\n> as brinsort_load_unsummarized_ranges.\n> \n\nWe could/should, although most of the loops should be very short.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 16 Oct 2022 16:14:32 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 10/16/22 16:01, Zhihong Yu wrote:\n> \n> \n> On Sun, Oct 16, 2022 at 6:51 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n>     On 10/15/22 14:33, Tomas Vondra wrote:\n>     > Hi,\n>     >\n>     > ...\n>     >\n>     > There's a bunch of issues with this initial version of the patch,\n>     > usually described in XXX comments in the relevant places.6)\n>     >\n>     > ...\n> \n>     I forgot to mention one important issue in my list yesterday, and that's\n>     memory consumption. The way the patch is coded now, the new BRIN support\n>     function (brin_minmax_ranges) produces information about *all* ranges in\n>     one go, which may be an issue. The worst case is 32TB table, with 1-page\n>     BRIN ranges, which means ~4 billion ranges. The info is an array of ~32B\n>     structs, so this would require ~128GB of RAM. With the default 128-page\n>     ranges, it's still be ~1GB, which is quite a lot.\n> \n>     We could have a discussion about what's the reasonable size of BRIN\n>     ranges on such large tables (e.g. building a bitmap on 4 billion ranges\n>     is going to be \"not cheap\" so this is likely pretty rare). 
But we should\n> not introduce new nodes that ignore work_mem, so we need a way to deal\n> with such cases somehow.\n> \n> The easiest solution likely is to check this while planning - we can\n> check the table size, calculate the number of BRIN ranges, and check\n> that the range info fits into work_mem, and just not create the path\n> when it gets too large. That's what we did for HashAgg, although that\n> decision was unreliable because estimating GROUP BY cardinality is hard.\n> \n> The wrinkle here is that counting just the range info (BrinRange struct)\n> does not include the values for by-reference types. We could use average\n> width - that's just an estimate, though.\n> \n> A more comprehensive solution seems to be to allow requesting chunks of\n> the BRIN ranges. So that we'd get \"slices\" of ranges and we'd process\n> those. So for example if you have 1000 ranges, and you can only handle\n> 100 at a time, we'd do 10 loops, each requesting 100 ranges.\n> \n> This has another problem - we do care about \"overlaps\", and we can't\n> really know if the overlapping ranges will be in the same \"slice\"\n> easily. The chunks would be sorted (for example) by maxval. But there\n> can be a range with much higher maxval (thus in some future slice), but\n> very low minval (thus intersecting with ranges in the current slice).\n> \n> Imagine ranges with these minval/maxval values, sorted by maxval:\n> \n> [101,200]\n> [201,300]\n> [301,400]\n> [150,500]\n> \n> and let's say we can only process 2-range slices. 
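[The quoted four-range example, run mechanically — sort by maxval, cut into 2-range slices, and check which ranges of the first slice intersect the wide range. The `intersects` helper is defined here for illustration, not taken from the patch.]

```python
ranges = [(101, 200), (201, 300), (301, 400), (150, 500)]
by_maxval = sorted(ranges, key=lambda r: r[1])
slices = [by_maxval[i:i + 2] for i in range(0, len(by_maxval), 2)]

def intersects(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# The wide range (150, 500) lands in the *second* slice, yet both
# ranges of the first slice intersect it.
wide = (150, 500)
first_slice_overlaps = [r for r in slices[0] if intersects(r, wide)]
```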
So we'll get the first\n> two, but both of them intersect with the very last range.\n> \n> We could always include all the intersecting ranges into the slice, but\n> what if there are too many very \"wide\" ranges?\n> \n> So I think this will need to switch to an iterative communication with\n> the BRIN index - instead of asking \"give me info about all the ranges\",\n> we'll need a way to\n> \n>   - request the next range (sorted by maxval)\n>   - request the intersecting ranges one by one (sorted by minval)\n> \n> Of course, the BRIN side will have some of the same challenges with\n> tracking the info without breaking the work_mem limit, but I suppose it\n> can store the info into a tuplestore/tuplesort, and use that instead of\n> plain in-memory array. Alternatively, it could just return those, and\n> BrinSort would use that. OTOH it seems cleaner to have some sort of API,\n> especially if we want to support e.g. minmax-multi opclasses, that have\n> a more complicated concept of \"intersection\".\n> \n> \n> regards\n> \n> -- \n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> The Enterprise PostgreSQL Company\n> \n> Hi,\n> In your example involving [150,500], can this range be broken down into\n> 4 ranges, ending in 200, 300, 400 and 500, respectively ?\n> That way, there is no intersection among the ranges.\n> \n\nNot really, I think. These \"value ranges\" map to \"page ranges\" and how\nwould you split those? I mean, you know values [150,500] map to blocks\n[0,127]. You split the values into [150,200], [201,300], [301,400]. How\ndo you split the page range [0,127]?\n\nAlso, splitting a range into more ranges is likely making the issue\nworse, because it increases the number of ranges, right? And I mean,\nmuch worse, because imagine a \"wide\" range that overlaps with every\nother range - the number of ranges would explode.\n\nIt's not clear to me at which point you'd make the split. 
At the\nbeginning, right after loading the ranges from BRIN index? A lot of that\nmay be unnecessary, in case the range is loaded as a \"non-intersecting\"\nrange.\n\nTry to formulate the whole algorithm. Maybe I'm missing something.\n\nThe current algorithm is something like this:\n\n1. request info about ranges from the BRIN opclass\n2. sort them by maxval and minval\n3. NULLS FIRST: read all ranges that might have NULLs => output\n4. read the next range (by maxval) into tuplesort\n (if no more ranges, go to (9))\n5. load all tuples from \"splill\" tuplestore, compare to maxval\n6. load all tuples from no-summarized ranges (first range only)\n (into tuplesort/tuplestore, depending on maxval comparison)\n7. load all intersecting ranges (with minval < current maxval)\n (into tuplesort/tuplestore, depending on maxval comparison)\n8. sort the tuplesort, output all tuples, then back to (4)\n9. NULLS LAST: read all ranges that might have NULLs => output\n10. done\n\nFor \"DESC\" ordering the process is almost the same, except that we swap\nminval/maxval in most places.\n\n> bq. can store the info into a tuplestore/tuplesort\n> \n> Wouldn't this involve disk accesses which may reduce the effectiveness\n> of BRIN sort ?\n\nYes, it might. But the question is whether the result is still faster\nthan alternative plans (e.g. seqscan+sort), and those are likely to do\neven more I/O.\n\nMoreover, for \"regular\" cases this shouldn't be a significant issue,\nbecause the stuff will fit into work_mem and so there'll be no I/O. 
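[The numbered steps above can be sketched as a tiny simulation. It skips NULL handling (steps 3 and 9) and unsummarized ranges (step 6), and plain Python lists stand in for the tuplesort and the "spill" tuplestore — a toy model, not the node's C code.]

```python
def brin_sort(ranges):
    """ranges: list of (minval, maxval, values). Yields values in order."""
    pending = sorted(ranges, key=lambda r: (r[1], r[0]))   # step 2
    spill = []                               # rows sorting after maxval
    while pending:
        _, maxval, values = pending.pop(0)   # step 4: next range by maxval
        sortbuf = list(values)
        sortbuf += [v for v in spill if v <= maxval]        # step 5
        spill = [v for v in spill if v > maxval]
        remaining = []                       # step 7: intersecting ranges
        for r in pending:
            if r[0] < maxval:
                sortbuf += [v for v in r[2] if v <= maxval]
                spill += [v for v in r[2] if v > maxval]
            else:
                remaining.append(r)
        pending = remaining
        yield from sorted(sortbuf)           # step 8: sort, output chunk
    yield from sorted(spill)

out = list(brin_sort([(101, 200, [150, 101, 200]),
                      (201, 300, [250, 201]),
                      (301, 400, [399]),
                      (150, 500, [150, 480])]))
```

Each pass sorts only one chunk, which is where the low startup cost and the smaller sorts come from.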
But\nit'll handle those extreme cases gracefully.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 16 Oct 2022 16:33:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I forgot to mention one important issue in my list yesterday, and that's\n> memory consumption.\n\nTBH, this is all looking like vastly more complexity than benefit.\nIt's going to be impossible to produce a reliable cost estimate\ngiven all the uncertainty, and I fear that will end in picking\nBRIN-based sorting when it's not actually a good choice.\n\nThe examples you showed initially are cherry-picked to demonstrate\nthe best possible case, which I doubt has much to do with typical\nreal-world tables. It would be good to see what happens with\nnot-perfectly-sequential data before even deciding this is worth\nspending more effort on. 
It also seems kind of unfair to decide\nthat the relevant comparison point is a seqscan rather than a\nbtree indexscan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Oct 2022 10:41:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sun, Oct 16, 2022 at 7:33 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 10/16/22 16:01, Zhihong Yu wrote:\n> >\n> >\n> > On Sun, Oct 16, 2022 at 6:51 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> > wrote:\n> >\n> > On 10/15/22 14:33, Tomas Vondra wrote:\n> > > Hi,\n> > >\n> > > ...\n> > >\n> > > There's a bunch of issues with this initial version of the patch,\n> > > usually described in XXX comments in the relevant places.6)\n> > >\n> > > ...\n> >\n> > I forgot to mention one important issue in my list yesterday, and\n> that's\n> > memory consumption. The way the patch is coded now, the new BRIN\n> support\n> > function (brin_minmax_ranges) produces information about *all*\n> ranges in\n> > one go, which may be an issue. The worst case is 32TB table, with\n> 1-page\n> > BRIN ranges, which means ~4 billion ranges. The info is an array of\n> ~32B\n> > structs, so this would require ~128GB of RAM. With the default\n> 128-page\n> > ranges, it's still be ~1GB, which is quite a lot.\n> >\n> > We could have a discussion about what's the reasonable size of BRIN\n> > ranges on such large tables (e.g. building a bitmap on 4 billion\n> ranges\n> > is going to be \"not cheap\" so this is likely pretty rare). 
But we\n> should\n> > not introduce new nodes that ignore work_mem, so we need a way to\n> deal\n> > with such cases somehow.\n> >\n> > The easiest solution likely is to check this while planning - we can\n> > check the table size, calculate the number of BRIN ranges, and check\n> > that the range info fits into work_mem, and just not create the path\n> > when it gets too large. That's what we did for HashAgg, although that\n> > decision was unreliable because estimating GROUP BY cardinality is\n> hard.\n> >\n> > The wrinkle here is that counting just the range info (BrinRange\n> struct)\n> > does not include the values for by-reference types. We could use\n> average\n> > width - that's just an estimate, though.\n> >\n> > A more comprehensive solution seems to be to allow requesting chunks\n> of\n> > the BRIN ranges. So that we'd get \"slices\" of ranges and we'd process\n> > those. So for example if you have 1000 ranges, and you can only\n> handle\n> > 100 at a time, we'd do 10 loops, each requesting 100 ranges.\n> >\n> > This has another problem - we do care about \"overlaps\", and we can't\n> > really know if the overlapping ranges will be in the same \"slice\"\n> > easily. The chunks would be sorted (for example) by maxval. But there\n> > can be a range with much higher maxval (thus in some future slice),\n> but\n> > very low minval (thus intersecting with ranges in the current slice).\n> >\n> > Imagine ranges with these minval/maxval values, sorted by maxval:\n> >\n> > [101,200]\n> > [201,300]\n> > [301,400]\n> > [150,500]\n> >\n> > and let's say we can only process 2-range slices. 
So we'll get the\n> first\n> > two, but both of them intersect with the very last range.\n> >\n> > We could always include all the intersecting ranges into the slice,\n> but\n> > what if there are too many very \"wide\" ranges?\n> >\n> > So I think this will need to switch to an iterative communication\n> with\n> > the BRIN index - instead of asking \"give me info about all the\n> ranges\",\n> > we'll need a way to\n> >\n> > - request the next range (sorted by maxval)\n> > - request the intersecting ranges one by one (sorted by minval)\n> >\n> > Of course, the BRIN side will have some of the same challenges with\n> > tracking the info without breaking the work_mem limit, but I suppose\n> it\n> > can store the info into a tuplestore/tuplesort, and use that instead\n> of\n> > plain in-memory array. Alternatively, it could just return those, and\n> > BrinSort would use that. OTOH it seems cleaner to have some sort of\n> API,\n> > especially if we want to support e.g. minmax-multi opclasses, that\n> have\n> > a more complicated concept of \"intersection\".\n> >\n> >\n> > regards\n> >\n> > --\n> > Tomas Vondra\n> > EnterpriseDB: http://www.enterprisedb.com <\n> http://www.enterprisedb.com>\n> > The Enterprise PostgreSQL Company\n> >\n> > Hi,\n> > In your example involving [150,500], can this range be broken down into\n> > 4 ranges, ending in 200, 300, 400 and 500, respectively ?\n> > That way, there is no intersection among the ranges.\n> >\n>\n> Not really, I think. These \"value ranges\" map to \"page ranges\" and how\n> would you split those? I mean, you know values [150,500] map to blocks\n> [0,127]. You split the values into [150,200], [201,300], [301,400]. How\n> do you split the page range [0,127]?\n>\n> Also, splitting a range into more ranges is likely making the issue\n> worse, because it increases the number of ranges, right? 
And I mean,\n> much worse, because imagine a \"wide\" range that overlaps with every\n> other range - the number of ranges would explode.\n>\n> It's not clear to me at which point you'd make the split. At the\n> beginning, right after loading the ranges from BRIN index? A lot of that\n> may be unnecessary, in case the range is loaded as a \"non-intersecting\"\n> range.\n>\n> Try to formulate the whole algorithm. Maybe I'm missing something.\n>\n> The current algorithm is something like this:\n>\n> 1. request info about ranges from the BRIN opclass\n> 2. sort them by maxval and minval\n> 3. NULLS FIRST: read all ranges that might have NULLs => output\n> 4. read the next range (by maxval) into tuplesort\n> (if no more ranges, go to (9))\n> 5. load all tuples from \"splill\" tuplestore, compare to maxval\n> 6. load all tuples from no-summarized ranges (first range only)\n> (into tuplesort/tuplestore, depending on maxval comparison)\n> 7. load all intersecting ranges (with minval < current maxval)\n> (into tuplesort/tuplestore, depending on maxval comparison)\n> 8. sort the tuplesort, output all tuples, then back to (4)\n> 9. NULLS LAST: read all ranges that might have NULLs => output\n> 10. done\n>\n> For \"DESC\" ordering the process is almost the same, except that we swap\n> minval/maxval in most places.\n>\n> Hi,\nThanks for the quick reply.\n\nI don't have good answer w.r.t. splitting the page range [0,127] now. 
Let\nme think more about it.\n\nThe 10 step flow (subject to changes down the road) should be either given\nin the description of the patch or, written as comment inside the code.\nThis would help people grasp the concept much faster.\n\nBTW splill seems to be a typo - I assume you meant spill.\n\nCheers", "msg_date": "Sun, 16 Oct 2022 07:42:39 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 10/16/22 16:41, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> I forgot to mention one important issue in my list yesterday, and that's\n>> memory consumption.\n> \n> TBH, this is all looking like vastly more complexity than benefit.\n> It's going to be impossible to produce a reliable cost estimate\n> given all the uncertainty, and I fear that will end in picking\n> BRIN-based sorting when it's not actually a good choice.\n> \n\nMaybe. If it turns out the estimates we have are insufficient to make\ngood planning decisions, that's life.\n\nAs I wrote in my message, I know the BRIN costing is a bit shaky in\ngeneral (not just for this new operation), and I intend to propose some\nimprovement in a separate patch.\n\nI think the main issue with BRIN costing is that we have no stats about\nthe ranges, and we can't estimate how many ranges we'll really end up\naccessing. If you have 100 rows, will that be 1 range or 100 ranges? Or\nfor the BRIN Sort, how many overlapping ranges will there be?\n\nI intend to allow index AMs to collect custom statistics, and the BRIN\nminmax opfamily would collect e.g. this:\n\n1) number of non-summarized ranges\n2) number of all-nulls ranges\n3) number of has-nulls ranges\n4) average number of overlaps (given a random range, how many other\n ranges intersect with it)\n5) how likely is it for a row to hit multiple ranges (cross-check\n sample rows vs. 
ranges)\n\nI believe this will allow much better / more reliable BRIN costing (the\nnumber of overlaps is particularly useful for this patch).\n\n> The examples you showed initially are cherry-picked to demonstrate\n> the best possible case, which I doubt has much to do with typical\n> real-world tables. It would be good to see what happens with\n> not-perfectly-sequential data before even deciding this is worth\n> spending more effort on.\n\nYes, the example was a trivial \"happy case\" example. Obviously, the\nperformance degrades as the data become more random (with ranges wider),\nforcing the BRIN Sort to read / sort more tuples.\n\nBut let's see an example with less correlated data, say, like this:\n\ncreate table t (a int) with (fillfactor = 10);\n\ninsert into t select i + 10000 * random()\n from generate_series(1,10000000) s(i);\n\nWith the fillfactor=10, there are ~2500 values per 1MB range, so this\nmeans each range overlaps with ~4 more. The results then look like this:\n\n1) select * from t order by a;\n\n seqscan+sort: 4437 ms\n brinsort: 4233 ms\n\n2) select * from t order by a limit 10;\n\n seqscan+sort: 1859 ms\n brinsort: 4 ms\n\nIf you increase the random factor from 10000 to 100000 (so, 40 ranges),\nthe seqscan timings remain about the same, while brinsort gets to 5200\nand 20 ms. And with 1M, it's ~6000 and 300 ms.\n\nOnly at 5000000, where we pretty much read 1/2 the table because the\nranges intersect, we get the same timing as the seqscan (for the LIMIT\nquery). The \"full sort\" query is more like 5000 vs. 6600 ms, so slower\nbut not by a huge amount.\n\nYes, this is a very simple example. I can do more tests with other\ndatasets (larger/smaller, different distribution, ...).\n\n> It also seems kind of unfair to decide\n> that the relevant comparison point is a seqscan rather than a\n> btree indexscan.\n> \n\nI don't think it's all that unfair. How likely is it to have both a BRIN\nand btree index on the same column? 
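As a toy illustration of statistic (4) from the list above (a sketch only, not ANALYZE code; a real implementation would presumably avoid the O(N^2) cross-check by sorting the summaries first), the average overlap count could be derived from the per-range (minval, maxval) summaries like so:

```python
# Hypothetical sketch: average number of other ranges each range
# intersects with, computed from a list of (minval, maxval) summaries.
def avg_overlaps(ranges):
    n = len(ranges)
    hits = 0
    for i in range(n):
        for j in range(i + 1, n):
            # closed intervals [a_min, a_max] and [b_min, b_max] intersect
            if ranges[i][0] <= ranges[j][1] and ranges[j][0] <= ranges[i][1]:
                hits += 2              # the overlap counts for both ranges
    return hits / n if n else 0.0
```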
And even if you do have such indexes\n(say, on different sets of keys), we kinda already have this costing\nissue with index and bitmap index scans.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 16 Oct 2022 17:48:31 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 10/16/22 16:42, Zhihong Yu wrote:\n> ...\n> \n> I don't have good answer w.r.t. splitting the page range [0,127] now.\n> Let me think more about it.\n> \n\nSure, no problem.\n\n> The 10 step flow (subject to changes down the road) should be either\n> given in the description of the patch or, written as comment inside the\n> code.\n> This would help people grasp the concept much faster.\n\nTrue. I'll add it to the next version of the pach.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 16 Oct 2022 17:49:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sun, 16 Oct 2022 at 16:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> It also seems kind of unfair to decide\n> that the relevant comparison point is a seqscan rather than a\n> btree indexscan.\n\nI think the comparison against full table scan seems appropriate, as\nthe benefit of BRIN is less space usage when compared to other\nindexes, and better IO selectivity than full table scans.\n\nA btree easily requires 10x the space of a normal BRIN index, and may\nrequire a lot of random IO whilst scanning. 
This BRIN-sorted scan\nwould have a much lower random IO cost during its scan, and would help\nbridge the performance gap between having index that supports ordered\nretrieval, and no index at all, which is especially steep in large\ntables.\n\nI think that BRIN would be an alternative to btree as a provider of\nsorted data, even when the table is not 100% clustered. This\nBRIN-assisted table sort can help reduce the amount of data that is\naccessed in top-N sorts significantly, both at the index and at the\nrelation level, without having the space overhead of \"all sortable\ncolumns get a btree index\".\n\nIf BRIN gets its HOT optimization back, the benefits would be even\nlarger, as we would then have an index that can speed up top-N sorts\nwithout bloating other indexes, and at very low disk footprint.\nColumns that are only occasionally accessed in a sorted manner could\nthen get BRIN minmax indexes to support this sort, at minimal overhead\nto the rest of the application.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Sun, 16 Oct 2022 20:22:01 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "First of all, it's really great to see that this is being worked on.\n\nOn Sun, 16 Oct 2022 at 16:34, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Try to formulate the whole algorithm. Maybe I'm missing something.\n>\n> The current algorithm is something like this:\n>\n> 1. request info about ranges from the BRIN opclass\n> 2. sort them by maxval and minval\n\nWhy sort on maxval and minval? That seems wasteful for effectively all\nsorts, where range sort on minval should suffice: If you find a range\nthat starts at 100 in a list of ranges sorted at minval, you've\nprocessed all values <100. You can't make a similar comparison when\nthat range is sorted on maxvals.\n\n> 3. 
NULLS FIRST: read all ranges that might have NULLs => output\n> 4. read the next range (by maxval) into tuplesort\n> (if no more ranges, go to (9))\n> 5. load all tuples from \"splill\" tuplestore, compare to maxval\n\nInstead of this, shouldn't an update to tuplesort that allows for\nrestarting the sort be better than this? Moving tuples that we've\naccepted into BRINsort state but not yet returned around seems like a\nwaste of cycles, and I can't think of a reason why it can't work.\n\n> 6. load all tuples from no-summarized ranges (first range only)\n> (into tuplesort/tuplestore, depending on maxval comparison)\n> 7. load all intersecting ranges (with minval < current maxval)\n> (into tuplesort/tuplestore, depending on maxval comparison)\n> 8. sort the tuplesort, output all tuples, then back to (4)\n> 9. NULLS LAST: read all ranges that might have NULLs => output\n> 10. done\n>\n> For \"DESC\" ordering the process is almost the same, except that we swap\n> minval/maxval in most places.\n\nWhen I was thinking about this feature at the PgCon unconference, I\nwas thinking about it more along the lines of the following system\n(for ORDER BY col ASC NULLS FIRST):\n\n1. prepare tuplesort Rs (for Rangesort) for BRIN tuples, ordered by\n[has_nulls, min ASC]\n2. scan info about ranges from BRIN, store them in Rs.\n3. Finalize the sorting of Rs.\n4. prepare tuplesort Ts (for Tuplesort) for sorting on the specified\ncolumn ordering.\n5. load all tuples from no-summarized ranges into Ts'\n6. while Rs has a block range Rs' with has_nulls:\n - Remove Rs' from Rs\n - store the tuples of Rs' range in Ts.\nWe now have all tuples with NULL in our sorted set; max_sorted = (NULL)\n7. Finalize the Ts sorted set.\n8. While the next tuple Ts' in the Ts tuplesort <= max_sorted\n - Remove Ts' from Ts\n - Yield Ts'\nNow, all tuples up to and including max_sorted are yielded.\n9. If there are no more ranges in Rs:\n - Yield all remaining tuples from Ts, then return.\n10. 
\"un-finalize\" Ts, so that we can start adding tuples to that tuplesort.\n This is different from Tomas' implementation, as he loads the\ntuples into a new tuplestore.\n11. get the next item from Rs: Rs'\n - remove Rs' from Rs\n - assign Rs' min value to max_sorted\n - store the tuples of Rs' range in Ts\n12. while the next item Rs' from Rs has a min value of max_sorted:\n - remove Rs' from Rs\n - store the tuples of Rs' range in Ts\n13. The 'new' value from the next item from Rs is stored in\nmax_sorted. If no such item exists, max_sorted is assigned a sentinel\nvalue (+INF)\n14. Go to Step 7\n\nThis set of operations requires a restarting tuplesort for Ts, but I\ndon't think that would result in many API changes for tuplesort. It\nreduces the overhead of large overlapping ranges, as it doesn't need\nto copy all tuples that have been read from disk but have not yet been\nreturned.\n\nThe maximum cost of this tuplesort would be the cost of sorting a\nseqscanned table, plus sorting the relevant BRIN ranges, plus the 1\nextra compare per tuple and range that are needed to determine whether\nthe range or tuple should be extracted from the tuplesort. The minimum\ncost would be the cost of sorting all BRIN ranges, plus sorting all\ntuples in one of the index's ranges.\n\nKind regards,\n\nMatthias van de Meent\n\nPS. Are you still planning on giving the HOT optimization for BRIN a\nsecond try? I'm fairly confident that my patch at [0] would fix the\nissue that lead to the revert of that feature, but it introduced ABI\nchanges after the feature freeze and thus it didn't get in. 
The patch\nmight need some polishing, but I think it shouldn't take too much\nextra effort to get into PG16.\n\n[0] https://www.postgresql.org/message-id/CAEze2Wi9%3DBay_%3DrTf8Z6WPgZ5V0tDOayszQJJO%3DR_9aaHvr%2BTg%40mail.gmail.com\n\n\n", "msg_date": "Sun, 16 Oct 2022 22:17:37 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 10/16/22 22:17, Matthias van de Meent wrote:\n> First of all, it's really great to see that this is being worked on.\n> \n> On Sun, 16 Oct 2022 at 16:34, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Try to formulate the whole algorithm. Maybe I'm missing something.\n>>\n>> The current algorithm is something like this:\n>>\n>> 1. request info about ranges from the BRIN opclass\n>> 2. sort them by maxval and minval\n> \n> Why sort on maxval and minval? That seems wasteful for effectively all\n> sorts, where range sort on minval should suffice: If you find a range\n> that starts at 100 in a list of ranges sorted at minval, you've\n> processed all values <100. You can't make a similar comparison when\n> that range is sorted on maxvals.\n> \n\nBecause that allows to identify overlapping ranges quickly.\n\nImagine you have the ranges sorted by maxval, which allows you to add\ntuples in small increments. But how do you know there's not a range\n(possibly with arbitrarily high maxval), that however overlaps with the\nrange we're currently processing?\n\nConsider these ranges sorted by maxval\n\n range #1 [0,100]\n range #2 [101,200]\n range #3 [150,250]\n ...\n range #1000000 [190,1000000000]\n\nprocessing the range #1 is simple, because there are no overlapping\nranges. When processing range #2, that's not the case - the following\nrange #3 is overlapping too, so we need to load the tuples too. 
But\nthere may be other ranges (in arbitrary distance) also overlapping.\n\nSo we either have to cross-check everything with everything - that's\nO(N^2) so not great, or we can invent a way to eliminate ranges that\ncan't overlap.\n\nThe patch does that by having two arrays - one sorted by maxval, one\nsorted by minval. After proceeding to the next range by maxval (using\nthe first array), the minval-sorted array is used to detect overlaps.\nThis can be done quickly, because we only care for new matches since the\nprevious range, so we can remember the index to the array and start from\nit. And we can stop once the minval exceeds the maxval for the range in\nthe first step. Because we'll only sort tuples up to that point.\n\n>> 3. NULLS FIRST: read all ranges that might have NULLs => output\n>> 4. read the next range (by maxval) into tuplesort\n>> (if no more ranges, go to (9))\n>> 5. load all tuples from \"splill\" tuplestore, compare to maxval\n> \n> Instead of this, shouldn't an update to tuplesort that allows for\n> restarting the sort be better than this? Moving tuples that we've\n> accepted into BRINsort state but not yet returned around seems like a\n> waste of cycles, and I can't think of a reason why it can't work.\n> \n\nI don't understand what you mean by \"update to tuplesort\". Can you\nelaborate?\n\nThe point of spilling them into a tuplestore is to make the sort cheaper\nby not sorting tuples that can't possibly be produced, because the value\nexceeds the current maxval. Consider ranges sorted by maxval\n\n [0,1000]\n [500,1500]\n [1001,2000]\n ...\n\nWe load tuples from [0,1000] and use 1000 as \"threshold\" up to which we\ncan sort. But we have to load tuples from the overlapping range(s) too,\ne.g. from [500,1500] except that all tuples with values > 1000 can't be\nproduced (because there might be yet more ranges intersecting with that\npart).\n\nSo why sort these tuples at all? 
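To make that routing concrete for the ranges above (the tuple values here are made up for illustration):

```python
# With range [0,1000] driving the sort, maxval 1000 is the threshold:
# tuples from the overlapping range [500,1500] at or below it can be
# sorted now; the rest are deferred to the tuplestore.
threshold = 1000
overlap_tuples = [500, 800, 1200, 1500]   # hypothetical contents of [500,1500]
to_tuplesort = [v for v in overlap_tuples if v <= threshold]
to_tuplestore = [v for v in overlap_tuples if v > threshold]
```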
Imagine imperfectly correlated table\nwhere each range overlaps with ~10 other ranges. If we feed all of that\ninto the tuplestore, we're now sorting 11x the amount of data.\n\nOr maybe I just don't understand what you mean.\n\n\n>> 6. load all tuples from no-summarized ranges (first range only)\n>> (into tuplesort/tuplestore, depending on maxval comparison)\n>> 7. load all intersecting ranges (with minval < current maxval)\n>> (into tuplesort/tuplestore, depending on maxval comparison)\n>> 8. sort the tuplesort, output all tuples, then back to (4)\n>> 9. NULLS LAST: read all ranges that might have NULLs => output\n>> 10. done\n>>\n>> For \"DESC\" ordering the process is almost the same, except that we swap\n>> minval/maxval in most places.\n> \n> When I was thinking about this feature at the PgCon unconference, I\n> was thinking about it more along the lines of the following system\n> (for ORDER BY col ASC NULLS FIRST):\n> \n> 1. prepare tuplesort Rs (for Rangesort) for BRIN tuples, ordered by\n> [has_nulls, min ASC]\n> 2. scan info about ranges from BRIN, store them in Rs.\n> 3. Finalize the sorting of Rs.\n> 4. prepare tuplesort Ts (for Tuplesort) for sorting on the specified\n> column ordering.\n> 5. load all tuples from no-summarized ranges into Ts'\n> 6. while Rs has a block range Rs' with has_nulls:\n> - Remove Rs' from Rs\n> - store the tuples of Rs' range in Ts.\n> We now have all tuples with NULL in our sorted set; max_sorted = (NULL)\n> 7. Finalize the Ts sorted set.\n> 8. While the next tuple Ts' in the Ts tuplesort <= max_sorted\n> - Remove Ts' from Ts\n> - Yield Ts'\n> Now, all tuples up to and including max_sorted are yielded.\n> 9. If there are no more ranges in Rs:\n> - Yield all remaining tuples from Ts, then return.\n> 10. \"un-finalize\" Ts, so that we can start adding tuples to that tuplesort.\n> This is different from Tomas' implementation, as he loads the\n> tuples into a new tuplestore.\n> 11. 
get the next item from Rs: Rs'\n> - remove Rs' from Rs\n> - assign Rs' min value to max_sorted\n> - store the tuples of Rs' range in Ts\n\nI don't think this works, because we may get a range (Rs') with very\nhigh maxval (thus read very late from Rs), but with very low minval.\nAFAICS max_sorted must never go back, and this breaks it.\n\n> 12. while the next item Rs' from Rs has a min value of max_sorted:\n> - remove Rs' from Rs\n> - store the tuples of Rs' range in Ts\n> 13. The 'new' value from the next item from Rs is stored in\n> max_sorted. If no such item exists, max_sorted is assigned a sentinel\n> value (+INF)\n> 14. Go to Step 7\n> \n> This set of operations requires a restarting tuplesort for Ts, but I\n> don't think that would result in many API changes for tuplesort. It\n> reduces the overhead of large overlapping ranges, as it doesn't need\n> to copy all tuples that have been read from disk but have not yet been\n> returned.\n> \n> The maximum cost of this tuplesort would be the cost of sorting a\n> seqscanned table, plus sorting the relevant BRIN ranges, plus the 1\n> extra compare per tuple and range that are needed to determine whether\n> the range or tuple should be extracted from the tuplesort. The minimum\n> cost would be the cost of sorting all BRIN ranges, plus sorting all\n> tuples in one of the index's ranges.\n> \n\nI'm not a tuplesort expert, but my assumption it's better to sort\nsmaller amounts of rows - which is why the patch sorts only the rows it\nknows it can actually output.\n\n> Kind regards,\n> \n> Matthias van de Meent\n> \n> PS. Are you still planning on giving the HOT optimization for BRIN a\n> second try? I'm fairly confident that my patch at [0] would fix the\n> issue that lead to the revert of that feature, but it introduced ABI\n> changes after the feature freeze and thus it didn't get in. 
The patch\n> might need some polishing, but I think it shouldn't take too much\n> extra effort to get into PG16.\n> \n\nThanks for reminding me, I'll take a look before the next CF.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 17 Oct 2022 05:43:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Mon, 17 Oct 2022 at 05:43, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 10/16/22 22:17, Matthias van de Meent wrote:\n> > On Sun, 16 Oct 2022 at 16:34, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> Try to formulate the whole algorithm. Maybe I'm missing something.\n> >>\n> >> The current algorithm is something like this:\n> >>\n> >> 1. request info about ranges from the BRIN opclass\n> >> 2. sort them by maxval and minval\n> >\n> > Why sort on maxval and minval? That seems wasteful for effectively all\n> > sorts, where range sort on minval should suffice: If you find a range\n> > that starts at 100 in a list of ranges sorted at minval, you've\n> > processed all values <100. You can't make a similar comparison when\n> > that range is sorted on maxvals.\n>\n> Because that allows to identify overlapping ranges quickly.\n>\n> Imagine you have the ranges sorted by maxval, which allows you to add\n> tuples in small increments. But how do you know there's not a range\n> (possibly with arbitrarily high maxval), that however overlaps with the\n> range we're currently processing?\n\nWhy do we need to identify overlapping ranges specifically? If you\nsort by minval, it becomes obvious that any subsequent range cannot\ncontain values < the minval of the next range in the list, allowing\nyou to emit any values less than the next, unprocessed, minmax range's\nminval.\n\n> >> 3. 
NULLS FIRST: read all ranges that might have NULLs => output\n> >> 4. read the next range (by maxval) into tuplesort\n> >> (if no more ranges, go to (9))\n> >> 5. load all tuples from \"splill\" tuplestore, compare to maxval\n> >\n> > Instead of this, shouldn't an update to tuplesort that allows for\n> > restarting the sort be better than this? Moving tuples that we've\n> > accepted into BRINsort state but not yet returned around seems like a\n> > waste of cycles, and I can't think of a reason why it can't work.\n> >\n>\n> I don't understand what you mean by \"update to tuplesort\". Can you\n> elaborate?\n\nTuplesort currently only allows the following workflow: you load\ntuples, then call finalize, then extract tuples. There is currently no\nway to add tuples once you've started extracting them.\n\nFor my design to work efficiently or without hacking into the\ninternals of tuplesort, we'd need a way to restart or 'un-finalize'\nthe tuplesort so that it returns to the 'load tuples' phase. Because\nall data of the previous iteration is already sorted, adding more data\nshouldn't be too expensive.\n\n> The point of spilling them into a tuplestore is to make the sort cheaper\n> by not sorting tuples that can't possibly be produced, because the value\n> exceeds the current maxval. Consider ranges sorted by maxval\n> [...]\n>\n> Or maybe I just don't understand what you mean.\n\nIf we sort the ranges by minval like this:\n\n1. [0,1000]\n2. [0,999]\n3. [50,998]\n4. [100,997]\n5. [100,996]\n6. [150,995]\n\nThen we can load and sort the values for range 1 and 2, and emit all\nvalues up to (not including) 50 - the minval of the next,\nnot-yet-loaded range in the ordered list of ranges. Then add the\nvalues from range 3 to the set of tuples we have yet to output; sort;\nand then emit values up to 100 (range 4's minval), etc. 
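A minimal sketch of this minval-driven scheme (Python, with a heap standing in for the restartable tuplesort, and made-up tuple values; not actual tuplesort code):

```python
import heapq

# Ranges sorted by minval; any value smaller than the next unloaded
# range's minval can never appear later, so it is safe to emit.
def minval_sort(ranges):               # each range: (minval, maxval, tuples)
    by_minval = sorted(ranges, key=lambda r: r[0])
    pending, out = [], []
    for i, rng in enumerate(by_minval):
        for v in rng[2]:
            heapq.heappush(pending, v) # load this range's tuples
        nxt = by_minval[i + 1][0] if i + 1 < len(by_minval) else None
        while pending and (nxt is None or pending[0] < nxt):
            out.append(heapq.heappop(pending))
    return out
```

The working set ("pending") only ever holds tuples from ranges whose minval has been reached but whose contents cannot all be emitted yet, which is the point made in the following sentence.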
This reduces\nthe amount of tuples in the tuplesort to the minimum amount needed to\noutput any specific value.\n\nIf the ranges are sorted and loaded by maxval, like your algorithm expects:\n\n1. [150,995]\n2. [100,996]\n3. [100,997]\n4. [50,998]\n5. [0,999]\n6. [0,1000]\n\nWe need to load all ranges into the sort before it could start\nemitting any tuples, as all ranges overlap with the first range.\n\n> > [algo]\n>\n> I don't think this works, because we may get a range (Rs') with very\n> high maxval (thus read very late from Rs), but with very low minval.\n> AFAICS max_sorted must never go back, and this breaks it.\n\nmax_sorted cannot go back, because it is the min value of the next\nrange in the list of ranges sorted by min value; see also above.\n\nThere is a small issue in my algorithm where I use <= for yielding\nvalues where it should be <, where initialization of max_value to NULL\nis then be incorrect, but apart from that I don't think there are any\nissues with the base algorithm.\n\n> > The maximum cost of this tuplesort would be the cost of sorting a\n> > seqscanned table, plus sorting the relevant BRIN ranges, plus the 1\n> > extra compare per tuple and range that are needed to determine whether\n> > the range or tuple should be extracted from the tuplesort. The minimum\n> > cost would be the cost of sorting all BRIN ranges, plus sorting all\n> > tuples in one of the index's ranges.\n> >\n>\n> I'm not a tuplesort expert, but my assumption it's better to sort\n> smaller amounts of rows - which is why the patch sorts only the rows it\n> knows it can actually output.\n\nI see that the two main differences between our designs are in\nanswering these questions:\n\n- How do we select table ranges for processing?\n- How do we handle tuples that we know we can't output yet?\n\nFor the first, I think the differences are explained above. 
The main\ndrawback of your selection algorithm seems to be that your algorithm's\nworst-case is \"all ranges overlap\", whereas my algorithm's worst case\nis \"all ranges start at the same value\", which is only a subset of\nyour worst case.\n\nFor the second, the difference is whether we choose to sort the tuples\nthat are out-of-bounds, but are already in the working set due to\nbeing returned from a range overlapping with the current bound.\nMy algorithm tries to reduce the overhead of increasing the sort\nboundaries by also sorting the out-of-bound data, allowing for\nO(n-less-than-newbound) overhead of extending the bounds (total\ncomplexity for whole sort O(n-out-of-bound)), and O(n log n)\nprocessing of all tuples during insertion.\nYour algorithm - if I understand it correctly - seems to optimize for\nfaster results within the current bound by not sorting the\nout-of-bounds data with O(1) processing when out-of-bounds, at the\ncost of needing O(n-out-of-bound-tuples) operations when the maxval /\nmax_sorted boundary is increased, with a complexity of O(n*m) for an\naverage of n out-of-bound tuples and m bound updates.\n\nLastly, there is the small difference in how the ranges are extracted\nfrom BRIN: I prefer and mention an iterative approach where the tuples\nare extracted from the index and loaded into a tuplesort in some\niterative fashion (which spills to disk and does not need all tuples\nto reside in memory), whereas your current approach was mentioned as\n(paraphrasing) 'allocate all this data in one chunk and hope that\nthere is enough memory available'. 
I think this is not so much a\ndisagreement in best approach, but mostly a case of what could be made\nto work; so in later updates I hope we'll see improvements here.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 17 Oct 2022 16:00:07 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 10/17/22 16:00, Matthias van de Meent wrote:\n> On Mon, 17 Oct 2022 at 05:43, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 10/16/22 22:17, Matthias van de Meent wrote:\n>>> On Sun, 16 Oct 2022 at 16:34, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> Try to formulate the whole algorithm. Maybe I'm missing something.\n>>>>\n>>>> The current algorithm is something like this:\n>>>>\n>>>> 1. request info about ranges from the BRIN opclass\n>>>> 2. sort them by maxval and minval\n>>>\n>>> Why sort on maxval and minval? That seems wasteful for effectively all\n>>> sorts, where range sort on minval should suffice: If you find a range\n>>> that starts at 100 in a list of ranges sorted at minval, you've\n>>> processed all values <100. You can't make a similar comparison when\n>>> that range is sorted on maxvals.\n>>\n>> Because that allows to identify overlapping ranges quickly.\n>>\n>> Imagine you have the ranges sorted by maxval, which allows you to add\n>> tuples in small increments. But how do you know there's not a range\n>> (possibly with arbitrarily high maxval), that however overlaps with the\n>> range we're currently processing?\n> \n> Why do we need to identify overlapping ranges specifically? If you\n> sort by minval, it becomes obvious that any subsequent range cannot\n> contain values < the minval of the next range in the list, allowing\n> you to emit any values less than the next, unprocessed, minmax range's\n> minval.\n> \n\nD'oh! 
I think you're right, it should be possible to do this with only a\nsort by minval. And it might actually be a better way to do that.\n\nI think I chose the \"maxval\" ordering because it seemed reasonable.\nLooking at the current range and using the maxval as the threshold\nseemed natural. But it leads to a bunch of complexity with the\nintersecting ranges, and I never reconsidered this choice. Silly me.\n\n>>>> 3. NULLS FIRST: read all ranges that might have NULLs => output\n>>>> 4. read the next range (by maxval) into tuplesort\n>>>> (if no more ranges, go to (9))\n>>>> 5. load all tuples from \"splill\" tuplestore, compare to maxval\n>>>\n>>> Instead of this, shouldn't an update to tuplesort that allows for\n>>> restarting the sort be better than this? Moving tuples that we've\n>>> accepted into BRINsort state but not yet returned around seems like a\n>>> waste of cycles, and I can't think of a reason why it can't work.\n>>>\n>>\n>> I don't understand what you mean by \"update to tuplesort\". Can you\n>> elaborate?\n> \n> Tuplesort currently only allows the following workflow: you to load\n> tuples, then call finalize, then extract tuples. There is currently no\n> way to add tuples once you've started extracting them.\n> \n> For my design to work efficiently or without hacking into the\n> internals of tuplesort, we'd need a way to restart or 'un-finalize'\n> the tuplesort so that it returns to the 'load tuples' phase. Because\n> all data of the previous iteration is already sorted, adding more data\n> shouldn't be too expensive.\n> \n\nNot sure. I still think it's better to limit the amount of data we have\nin the tuplesort. Even if the tuplesort can efficiently skip the already\nsorted part, it'll still occupy disk space, possibly even force the data\nto disk etc. 
(We'll still have to write that into a tuplestore, but that\nshould be relatively small and short-lived/recycled).\n\nFWIW I wonder if the assumption that tuplesort can quickly skip already\nsorted data holds e.g. for tuplesorts much larger than work_mem, but I\nhaven't checked that.\n\nI'd also like to include some more info in the explain, like how many\ntimes we did a sort, and what was the largest amount of data we sorted.\nAlthough, maybe that could be tracked by tracking the tuplesort size of\nthe last sort.\n\nConsidering the tuplesort does not currently support this, I'll probably\nstick to the existing approach with separate tuplestore. There's enough\ncomplexity in the patch already, I think. The only thing we'll need with\nthe minval ordering is the ability to \"peek ahead\" to the next minval\n(which is going to be the threshold used to route values either to\ntuplesort or tuplestore).\n\n>> The point of spilling them into a tuplestore is to make the sort cheaper\n>> by not sorting tuples that can't possibly be produced, because the value\n>> exceeds the current maxval. Consider ranges sorted by maxval\n>> [...]\n>>\n>> Or maybe I just don't understand what you mean.\n> \n> If we sort the ranges by minval like this:\n> \n> 1. [0,1000]\n> 2. [0,999]\n> 3. [50,998]\n> 4. [100,997]\n> 5. [100,996]\n> 6. [150,995]\n> \n> Then we can load and sort the values for range 1 and 2, and emit all\n> values up to (not including) 50 - the minval of the next,\n> not-yet-loaded range in the ordered list of ranges. Then add the\n> values from range 3 to the set of tuples we have yet to output; sort;\n> and then emit valus up to 100 (range 4's minval), etc. This reduces\n> the amount of tuples in the tuplesort to the minimum amount needed to\n> output any specific value.\n> \n> If the ranges are sorted and loaded by maxval, like your algorithm expects:\n> \n> 1. [150,995]\n> 2. [100,996]\n> 3. [100,997]\n> 4. [50,998]\n> 5. [0,999]\n> 6. 
[0,1000]\n> \n> We need to load all ranges into the sort before it could start\n> emitting any tuples, as all ranges overlap with the first range.\n> \n\nRight, thanks - I get this now.\n\n>>> [algo]\n>>\n>> I don't think this works, because we may get a range (Rs') with very\n>> high maxval (thus read very late from Rs), but with very low minval.\n>> AFAICS max_sorted must never go back, and this breaks it.\n> \n> max_sorted cannot go back, because it is the min value of the next\n> range in the list of ranges sorted by min value; see also above.\n> \n> There is a small issue in my algorithm where I use <= for yielding\n> values where it should be <, where initialization of max_value to NULL\n> is then be incorrect, but apart from that I don't think there are any\n> issues with the base algorithm.\n> \n>>> The maximum cost of this tuplesort would be the cost of sorting a\n>>> seqscanned table, plus sorting the relevant BRIN ranges, plus the 1\n>>> extra compare per tuple and range that are needed to determine whether\n>>> the range or tuple should be extracted from the tuplesort. The minimum\n>>> cost would be the cost of sorting all BRIN ranges, plus sorting all\n>>> tuples in one of the index's ranges.\n>>>\n>>\n>> I'm not a tuplesort expert, but my assumption it's better to sort\n>> smaller amounts of rows - which is why the patch sorts only the rows it\n>> knows it can actually output.\n> \n> I see that the two main differences between our designs are in\n> answering these questions:\n> \n> - How do we select table ranges for processing?\n> - How do we handle tuples that we know we can't output yet?\n> \n> For the first, I think the differences are explained above. 
The main\n> drawback of your selection algorithm seems to be that your algorithm's\n> worst-case is \"all ranges overlap\", whereas my algorithm's worst case\n> is \"all ranges start at the same value\", which is only a subset of\n> your worst case.\n> \n\nRight, those are very good points.\n\n> For the second, the difference is whether we choose to sort the tuples\n> that are out-of-bounds, but are already in the working set due to\n> being returned from a range overlapping with the current bound.\n> My algorithm tries to reduce the overhead of increasing the sort\n> boundaries by also sorting the out-of-bound data, allowing for\n> O(n-less-than-newbound) overhead of extending the bounds (total\n> complexity for whole sort O(n-out-of-bound)), and O(n log n)\n> processing of all tuples during insertion.\n> Your algorithm - if I understand it correctly - seems to optimize for\n> faster results within the current bound by not sorting the\n> out-of-bounds data with O(1) processing when out-of-bounds, at the\n> cost of needing O(n-out-of-bound-tuples) operations when the maxval /\n> max_sorted boundary is increased, with a complexity of O(n*m) for an\n> average of n out-of-bound tuples and m bound updates.\n> \n\nRight. I wonder if these are actually complementary approaches, and\nwe could/should pick between them depending on how many rows we expect\nto consume.\n\nMy focus was LIMIT queries, so I favored the approach with the lowest\nstartup cost. I haven't quite planned for this to work so well even in\nfull-sort cases. 
That kinda surprised me (I wonder if the very large\ntuplesorts - compared to work_mem - would hurt this, though).\n\n> Lastly, there is the small difference in how the ranges are extracted\n> from BRIN: I prefer and mention an iterative approach where the tuples\n> are extracted from the index and loaded into a tuplesort in some\n> iterative fashion (which spills to disk and does not need all tuples\n> to reside in memory), whereas your current approach was mentioned as\n> (paraphrasing) 'allocate all this data in one chunk and hope that\n> there is enough memory available'. I think this is not so much a\n> disagreement in best approach, but mostly a case of what could be made\n> to work; so in later updates I hope we'll see improvements here.\n> \n\nRight. I think I mentioned this in my post [1], where I also envisioned\nsome sort of iterative approach. And I think you're right the approach\nwith ordering by minval is naturally more suitable because it just\nconsumes the single sequence of ranges.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/1a7c2ff5-a855-64e9-0272-1f9947f8a558%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 18 Oct 2022 14:03:14 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nhere's an updated/reworked version of the patch, on top of the \"BRIN\nstatistics\" patch as 0001 (because some of the stuff is useful, but we\ncan ignore this part in this thread).\n\nWarning: I realized the new node is somewhat broken when it comes to\nprojection and matching the indexed column, most likely because the\ntargetlists are wired/processed incorrectly or something like that. So\nwhen experimenting with this, just index the first column of the table\nand don't do anything requiring a projection. 
I'll get this fixed, but\nI've been focusing on the other stuff. I'm not particularly familiar\nwith this tlist/project stuff, so any help is welcome.\n\n\nThe main change in this version is the adoption of multiple ideas\nsuggested by Matthias in his earlier responses.\n\nFirstly, this changes how the index opclass passes information to the\nexecutor node. Instead of using a plain array, we now use a tuplesort.\nThis addresses the memory consumption issues with large number of\nranges, and it also simplifies the sorting etc. which is now handled by\nthe tuplesort. The support procedure simply fills a tuplesort and then\nhands it over to the caller (more or less).\n\n\nSecondly, instead of ordering the ranges by maxval, this orders them by\nminval (as suggested by Matthias), which greatly simplifies the code\nbecause we don't need to detect overlapping ranges etc.\n\nMore precisely, the ranges are sorted to get this ordering\n\n- not yet summarized ranges\n- ranges sorted by (minval, blkno)\n- all-nulls ranges\n\nThis particular ordering is beneficial for the algorithm, which does two\npasses over the ranges. For the NULLS LAST case (i.e. the default), we\ndo this:\n\n- produce tuples with non-NULL values, ordered by the value\n- produce tuples with NULL values (arbitrary order)\n\nAnd each of these phases does a separate pass over the ranges (I'll get\nto that in a minute). And the ordering is tailored to this.\n\nNote: For DESC we'd sort by maxval, and for NULLS FIRST the phases would\nhappen in the opposite order, but those are details. Let's assume ASC\nordering with NULLS LAST, unless stated otherwise.\n\nThe idea here is that all not-summarized ranges need to be processed\nalways, both when processing NULLs and non-NULL values, which happens as\ntwo separate passes over ranges.\n\nThe all-null ranges don't need to be processed during the non-NULL pass,\nand we can terminate this pass early once we hit the first null-only\nrange. 
So placing them last helps with this.\n\nThe regular ranges are ordered by minval, as dictated by the algorithm\n(which is now described in the nodeBrinSort.c comment), but we also sort\nthem by blkno to make this a bit more sequential (but this only matters\nfor ranges with the same minval, and that's probably rare, but the extra\nsort key is also cheap so why not).\n\nI mentioned we do two separate passes - one for non-NULL values, one for\nNULL values. That may be somewhat inefficient, because in extreme cases\nwe might end up scanning the whole table twice (imagine BRIN ranges\nwhere each range has both regular values and NULLs). It might be\npossible to do all of this in a single pass, at least in some cases -\nfor example while scanning ranges, we might stash NULL rows into a\ntuplestore, so that the second pass is not needed. That assumes there\nare not too many such rows (otherwise we might need to write and then\nread many rows, outweighing the cost of just doing two passes). This\nshould be possible to estimate/cost fairly well, I think, and the\ncomment in nodeBrinSort actually presents some ideas about this.\n\nAnd we can't do that for the NULLS FIRST case, because if we stash the\nnon-NULL rows somewhere, we won't be able to do the \"incremental\" sort,\ni.e. we might just do regular Sort right away. So I've kept this simple\napproach with two passes for now.\n\n\nThis still uses the approach with spilling tuples to a tuplestore, and\nonly sorting rows that we know are safe to output. I still think this is\na good approach, for the reasons I explained before, but changing this\nis not hard so we can experiment.\n\nThere's however a related question - how quickly should we increment the\nminval value, serving as a watermark? One option is to go to the next\ndistinct minval value - but that may result in an excessive number of\ntiny sorts, because the number of ranges and rows between the old and\nnew minval values tends to be small. 
Another negative consequence is that this may\ncause a lot of spilling (and re-spilling), because we only consume a\ntiny number of rows from the tuplestore after incrementing the watermark.\n\nOr we can do larger steps, skipping some of the minval values, so that\nmore rows qualify for the sort. Of course, too large a step means we'll\nexceed work_mem and switch to an on-disk sort, which we probably don't\nwant. Also, this may be the wrong thing to do for LIMIT queries, which\nonly need a couple rows, and a tiny sort is fine (because we won't do\ntoo many of them).\n\nPatch 0004 introduces a new GUC called brinsort_watermark_step, that can\nbe used to experiment with this. By default it's set to '1' which means\nwe simply progress to the next minval value.\n\nThen 0005 tries to customize this based on statistics - we estimate the\nnumber of rows we expect to get for each minval increment to \"add\" and\nthen pick a step value that won't overflow work_mem. This happens in\ncreate_brinsort_plan, and the comment explains the main weakness - the\nway the number of rows is estimated is somewhat naive, as it just\ndivides reltuples by the number of ranges. But I have a couple ideas\nabout what statistics we might collect, explained in 0001 in the comment\nat brin_minmax_stats.\n\nBut there's another option - we can tune the step based on past sorts.\nIf we see the sorts spilling to disk, maybe try doing smaller\nsteps. Patch 0006 implements a very simple variant of this. There's a\ncouple ideas about how it might be improved, mentioned in the comment at\nbrinsort_adjust_watermark_step.\n\nThere's also patch 0003, which extends the EXPLAIN output with\ncounters tracking the number of sorts, counts of on-disk/in-memory\nsorts, space used, number of rows sorted/spilled, and so on. This is\nuseful when analyzing e.g. 
the effect of higher/lower watermark steps,\ndiscussed in the preceding paragraphs.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 23 Oct 2022 01:17:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sat, Oct 15, 2022 at 02:33:50PM +0200, Tomas Vondra wrote:\n> Of course, if there are e.g. BTREE indexes this is going to be slower,\n> but people are unlikely to have both index types on the same column.\n\nOn Sun, Oct 16, 2022 at 05:48:31PM +0200, Tomas Vondra wrote:\n> I don't think it's all that unfair. How likely is it to have both a BRIN\n> and btree index on the same column? And even if you do have such indexes\n\nNote that we (at my work) use unique, btree indexes on multiple columns\nfor INSERT ON CONFLICT into the most-recent tables: UNIQUE(a,b,c,...),\nplus a separate set of indexes on all tables, used for searching:\nBRIN(a) and BTREE(b). I'd hope that the costing is accurate enough to\nprefer the btree index for searching the most-recent table, if that's\nwhat's faster (for example, if columns b and c are specified).\n\n> +\t/* There must not be any TID scan in progress yet. */\n> +\tAssert(node->ss.ss_currentScanDesc == NULL);\n> +\n> +\t/* Initialize the TID range scan, for the provided block range. 
*/\n> +\tif (node->ss.ss_currentScanDesc == NULL)\n> +\t{\n\nWhy is this conditional on the condition that was just Assert()ed ?\n\n> \n> +void\n> +cost_brinsort(BrinSortPath *path, PlannerInfo *root, double loop_count,\n> +\t\t bool partial_path)\n\nIt'd be nice to refactor existing code to avoid this part being so\nduplicative.\n\n> +\t * In some situations (particularly with OR'd index conditions) we may\n> +\t * have scan_clauses that are not equal to, but are logically implied by,\n> +\t * the index quals; so we also try a predicate_implied_by() check to see\n\nIsn't that somewhat expensive ?\n\nIf that's known, then it'd be good to say that in the documentation.\n\n> +\t{\n> +\t\t{\"enable_brinsort\", PGC_USERSET, QUERY_TUNING_METHOD,\n> +\t\t\tgettext_noop(\"Enables the planner's use of BRIN sort plans.\"),\n> +\t\t\tNULL,\n> +\t\t\tGUC_EXPLAIN\n> +\t\t},\n> +\t\t&enable_brinsort,\n> +\t\tfalse,\n\nI think new GUCs should be enabled during patch development.\nMaybe in a separate 0002 patch \"for CI only not for commit\".\nThat way \"make check\" at least has a chance to hit those new code paths.\n\nAlso, note that indxpath.c had the var initialized to true.\n\n> +\t\t\tattno = (i + 1);\n> + nranges = (nblocks / pagesPerRange);\n> + node->bs_phase = (nullsFirst) ? BRINSORT_LOAD_NULLS : BRINSORT_LOAD_RANGE;\n\nI'm curious why you have parentheses in these places ?\n\n> +#ifndef NODEBrinSort_H\n> +#define NODEBrinSort_H\n\nNODEBRIN_SORT would be more consistent with NODEINCREMENTALSORT.\nBut I'd prefer NODE_* - otherwise it looks like NO DEBRIN.\n\nThis needed a bunch of work to pass any of the regression tests -\neven with the feature set to off.\n\n . meson.build needs the same change as the corresponding ./Makefile.\n . guc missing from postgresql.conf.sample\n . brin_validate.c is missing support for the opr function.\n I gather you're planning on changing this part (?) but this allows the\n tests to pass for now.\n . 
mingw is warning about OidIsValid(pointer) in nodeBrinSort.c.\n https://cirrus-ci.com/task/5771227447951360?logs=mingw_cross_warning#L969\n . Uninitialized catalog attribute.\n . Some typos in your other patches: \"heuristics heuristics\". ste.\n lest (least).\n\n-- \nJustin", "msg_date": "Sun, 23 Oct 2022 23:32:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 10/24/22 06:32, Justin Pryzby wrote:\n> On Sat, Oct 15, 2022 at 02:33:50PM +0200, Tomas Vondra wrote:\n>> Of course, if there are e.g. BTREE indexes this is going to be slower,\n>> but people are unlikely to have both index types on the same column.\n> \n> On Sun, Oct 16, 2022 at 05:48:31PM +0200, Tomas Vondra wrote:\n>> I don't think it's all that unfair. How likely is it to have both a BRIN\n>> and btree index on the same column? And even if you do have such indexes\n> \n> Note that we (at my work) use unique, btree indexes on multiple columns\n> for INSERT ON CONFLICT into the most-recent tables: UNIQUE(a,b,c,...),\n> plus a separate set of indexes on all tables, used for searching:\n> BRIN(a) and BTREE(b). I'd hope that the costing is accurate enough to\n> prefer the btree index for searching the most-recent table, if that's\n> what's faster (for example, if columns b and c are specified).\n> \n\nWell, the costing is very crude at the moment - it's pretty much just a\ncopy of the existing BRIN costing. And the cost is likely going to\nincrease, because brinsort needs to do a regular BRIN bitmap scan (more\nor less) and then also a sort (which is an extra cost, of course). 
So if it works now, I don't see why brinsort would break it.\nMoreover, if you don't have ORDER BY in the query, I don't see why we\nwould create a brinsort at all.\n\nBut if you could test this once the costing gets improved, that'd be\nvery valuable.\n\n>> +\t/* There must not be any TID scan in progress yet. */\n>> +\tAssert(node->ss.ss_currentScanDesc == NULL);\n>> +\n>> +\t/* Initialize the TID range scan, for the provided block range. */\n>> +\tif (node->ss.ss_currentScanDesc == NULL)\n>> +\t{\n> \n> Why is this conditional on the condition that was just Assert()ed ?\n> \n\nYeah, that's a mistake, due to how the code evolved.\n\n>> \n>> +void\n>> +cost_brinsort(BrinSortPath *path, PlannerInfo *root, double loop_count,\n>> +\t\t bool partial_path)\n> \n> It's be nice to refactor existing code to avoid this part being so\n> duplicitive.\n> \n>> +\t * In some situations (particularly with OR'd index conditions) we may\n>> +\t * have scan_clauses that are not equal to, but are logically implied by,\n>> +\t * the index quals; so we also try a predicate_implied_by() check to see\n> \n> Isn't that somewhat expensive ?\n> \n> If that's known, then it'd be good to say that in the documentation.\n> \n\nSome of this is probably a residue from create_indexscan_path and may\nnot be needed for this new node.\n\n>> +\t{\n>> +\t\t{\"enable_brinsort\", PGC_USERSET, QUERY_TUNING_METHOD,\n>> +\t\t\tgettext_noop(\"Enables the planner's use of BRIN sort plans.\"),\n>> +\t\t\tNULL,\n>> +\t\t\tGUC_EXPLAIN\n>> +\t\t},\n>> +\t\t&enable_brinsort,\n>> +\t\tfalse,\n> \n> I think new GUCs should be enabled during patch development.\n> Maybe in a separate 0002 patch \"for CI only not for commit\".\n> That way \"make check\" at least has a chance to hit that new code paths.\n> \n> Also, note that indxpath.c had the var initialized to true.\n> \n\nGood point.\n\n>> +\t\t\tattno = (i + 1);\n>> + nranges = (nblocks / pagesPerRange);\n>> + node->bs_phase = (nullsFirst) ? 
BRINSORT_LOAD_NULLS : BRINSORT_LOAD_RANGE;\n> \n> I'm curious why you have parenthesis these places ?\n> \n\nNot sure, it seemed more readable when writing the code I guess.\n\n>> +#ifndef NODEBrinSort_H\n>> +#define NODEBrinSort_H\n> \n> NODEBRIN_SORT would be more consistent with NODEINCREMENTALSORT.\n> But I'd prefer NODE_* - otherwise it looks like NO DEBRIN.\n> \n\nYeah, stupid search/replace on the indexscan code, which was used as a\nstarting point.\n\n> This needed a bunch of work needed to pass any of the regression tests -\n> even with the feature set to off.\n> \n> . meson.build needs the same change as the corresponding ./Makefile.\n> . guc missing from postgresql.conf.sample\n> . brin_validate.c is missing support for the opr function.\n> I gather you're planning on changing this part (?) but this allows to\n> pass tests for now.\n> . mingw is warning about OidIsValid(pointer) in nodeBrinSort.c.\n> https://cirrus-ci.com/task/5771227447951360?logs=mingw_cross_warning#L969\n> . Uninitialized catalog attribute.\n> . Some typos in your other patches: \"heuristics heuristics\". ste.\n> lest (least).\n> \n\nThanks, I'll get this fixed. I've posted the patch as a PoC to showcase\nit and gather some feedback; I should have mentioned it's incomplete in\nthese ways.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 24 Oct 2022 14:06:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Fwiw tuplesort does do something like what you want for the top-k\ncase. At least it used to, last I looked -- not sure if it went out\nwith the tapesort ...\n\nFor top-k it inserts new tuples into the heap data structure and then\npops the top element out of the heap. That keeps a fixed number of\nelements in the heap. It's always inserting and removing at the same\ntime. 
I don't think it would be very hard to add a tuplesort interface\nto access that behaviour.\n\nFor something like BRIN you would sort the ranges by minvalue then\ninsert all the tuples for each range. Before inserting tuples for a\nnew range you would first pop out all the tuples that are < the\nminvalue for the new range.\n\nI'm not sure how you handle degenerate BRIN indexes that behave\nterribly. Like, if many BRIN ranges covered the entire key range.\nPerhaps there would be a clever way to spill the overflow and switch\nto quicksort for the spilled tuples without wasting lots of work\nalready done and without being too inefficient.\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:52:16 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 11/16/22 22:52, Greg Stark wrote:\n> Fwiw tuplesort does do something like what you want for the top-k\n> case. At least it used to last I looked -- not sure if it went out\n> with the tapesort ...\n> > For top-k it inserts new tuples into the heap data structure and then\n> pops the top element out of the hash. That keeps a fixed number of\n> elements in the heap. It's always inserting and removing at the same\n> time. I don't think it would be very hard to add a tuplesort interface\n> to access that behaviour.\n> \n\n\nBounded sorts are still there, implemented using a heap (which is what\nyou're talking about, I think). I actually looked at it some time ago,\nand it didn't look like a particularly good match for the general case\n(without explicit LIMIT). Bounded sorts require specifying number of\ntuples, and then discard the remaining tuples. But you don't know how\nmany tuples you'll actually find until the next minval - you have to\nkeep them all.\n\nMaybe we could feed the tuples into a (sorted) heap incrementally, and\nconsume tuples until the next minval value. 
I'm not against exploring\nthat idea, but it certainly requires more work than just slapping some\ninterface to existing code.\n\n> For something like BRIN you would sort the ranges by minvalue then\n> insert all the tuples for each range. Before inserting tuples for a\n> new range you would first pop out all the tuples that are < the\n> minvalue for the new range.\n> \n\nWell, yeah. That's pretty much exactly what the last version of this\npatch (from October 23) does.\n\n> I'm not sure how you handle degenerate BRIN indexes that behave\n> terribly. Like, if many BRIN ranges covered the entire key range.\n> Perhaps there would be a clever way to spill the overflow and switch\n> to quicksort for the spilled tuples without wasting lots of work\n> already done and without being too inefficient.\n\nIn two ways:\n\n1) Don't have such BRIN index - if it has many degraded ranges, it's\nbound to perform poorly even for WHERE conditions. We've lived with this\nuntil now, I don't think this makes the issue any worse.\n\n2) Improving statistics for BRIN indexes - until now the BRIN costing is\nvery crude, we have almost no information about how wide the ranges are,\nhow much they overlap, etc. The 0001 part (discussed in a thread [1])\naims to provide much better statistics. Yes, the costing still doesn't\nuse that information very much.\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/40/3952/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 17 Nov 2022 00:52:35 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nOn 2022-11-17 00:52:35 +0100, Tomas Vondra wrote:\n> Well, yeah. 
That's pretty much exactly what the last version of this\n> patch (from October 23) does.\n\nThat version unfortunately doesn't build successfully:\nhttps://cirrus-ci.com/task/5108789846736896\n\n[03:02:48.641] Duplicate OIDs detected:\n[03:02:48.641] 9979\n[03:02:48.641] 9980\n[03:02:48.641] found 2 duplicate OID(s) in catalog data\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 6 Dec 2022 10:51:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nHere's an updated version of this patch series. There's still plenty of\nstuff to improve, but it fixes a number of issues I mentioned earlier.\n\nThe two most important changes are:\n\n\n1) handling of projections\n\nUntil now queries with projection might have failed, due to not using\nthe right slot, bogus var references and so on. The code was somewhat\nconfused because the new node is somewhere in between a scan node and a\nsort (or more precisely, it combines both).\n\nI believe this version handles all of this correctly - the code that\ninitializes the slots/projection info etc. needs serious cleanup, but\nshould be correct.\n\n\n2) handling of expressions\n\nThe other improvement is handling of expressions - if you have a BRIN\nindex on an expression, this should now work too. This also includes\ncorrect handling of collations (which the previous patches ignored).\n\nSimilarly to the projections, I believe the code is correct but needs\ncleanup. In particular, I haven't paid close attention to memory\nmanagement, so there might be memory leaks when evaluating expressions.\n\n\nThe last two parts of the patch series (0009 and 0010) are about\ntesting. 0009 adds a regular regression test with various combinations\n(projections, expressions, single- vs. multi-column indexes, ...).\n\n0010 introduces a python script that randomly generates data sets,\nindexes and queries. 
I use it to both test random combinations and to\nevaluate performance. I don't expect it to be committed etc. - it's\nincluded only to keep it versioned with the rest of the patch.\n\n\nI did some basic benchmarking using the 0010 part, to evaluate the how\nthis works for various cases. The script varies a number of parameters:\n\n- number of rows\n- table fill factor\n- randomness (how much ranges overlapp)\n- pages per range\n- limit / offset for queries\n- ...\n\nThe script forces both a \"seqscan\" and \"brinsort\" plan, and collects\ntiming info.\n\nThe results are encouraging, I think. Attached are two charts, plotting\nspeedup vs. fraction of tuples the query has to sort.\n\n speedup = (seqscan timing / brinsort timing)\n\n fraction = (limit + offset) / (table rows)\n\nA query with \"limit 1 offset 0\" has fraction ~0.0, query that scans\neverything (perhaps because it has no LIMIT/OFFSET) has ~1.0.\n\nFor speedup, 1.0 means \"no change\" while values above 1.0 means the\nquery gets faster. Both plots have log-scale y-axis.\n\nbrinsort-all-data.gif shows results for all queries. There's significant\nspeedup for small values of fraction (i.e. queries with limit, requiring\nfew rows). This is expected, as this is pretty much the primary use case\nfor the patch.\n\nThe other thing is that the benefits quickly diminish - for fractions\nclose to 0.0 the potential benefits are huge, but once you cross ~10% of\nthe table, it's within 10x, for ~25% less than 5x etc.\n\nOTOH there are also a fair number of queries that got slower - those are\nthe data points below 1.0. I've looked into many of them, and there are\na couple reasons why that can happen:\n\n1) random data set - When the ranges are very wide, BRIN Sort has to\nread most of the data, and it ends up sorting almost as many rows as the\nsequential scan. 
But it's more expensive, especially when combined with\nthe following points.\n\n Note: I don't think this is an issue in practice, because BRIN indexes\n would suck quite badly on such data, so no one is going to create\n such indexes in the first place.\n\n2) tiny ranges - By default ranges are 1MB, but it's possible to make\nthem much smaller. But BRIN Sort has to read/sort all ranges, and that\ngets more expensive with the number of ranges.\n\n Note: I'm not sure there's a way around this, although Matthias\n had some interesting ideas about how to keep the ranges sorted.\n But ultimately, I think this is fine, as long as it's costed\n correctly. For fractions close to 0.0 this is still going to be\n a huge win.\n\n3) non-adaptive (and low) watermark_step - The number of sorts makes a\nhuge difference - in an extreme case we could add the ranges one by one,\nwith a sort after each. For small limit/offset that works, but for more\nrows it's quite pointless.\n\n Note: The adaptive step (adjusted during execution) works great, and\n the script sets explicit values mostly to trigger more corner cases.\n Also, I wonder if we should force higher values as we progress\n through the table - we still don't want to exceed work_mem, but the\n larger fraction we scan the more we should prefer larger \"batches\".\n\n\nThe second \"filter\" chart (brinsort-filtered-data.gif) shows results\nfiltered to only show runs with:\n\n - pages_per_range >= 32\n - randomness <= 5% (i.e. each range covers about 5% of domain)\n - adaptive step (= -1)\n\nAnd IMO this looks much better - there are almost no slower queries,\nexcept for a bunch of queries that scan all the data.\n\n\nSo, what are the next steps for this patch:\n\n1) cleanup of the existing code (mentioned above)\n\n2) improvement of the costing - This is probably the critical part,\nbecause we need a costing that allows us to identify the queries that\nare likely to be faster/slower. 
I believe this is doable - either now or\nusing the new opclass-specific stats proposed in a separate patch (and\nkept in part 0001 for completeness).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 7 Feb 2023 21:44:02 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nRebased version of the patches, fixing only minor conflicts.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 16 Feb 2023 15:07:59 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Thu, Feb 16, 2023 at 03:07:59PM +0100, Tomas Vondra wrote:\n> Rebased version of the patches, fixing only minor conflicts.\n\nPer cfbot, the patch fails on 32 bit builds.\n+ERROR: count mismatch: 0 != 1000\n\nAnd causes warnings in mingw cross-compile.\n\nOn Sun, Oct 23, 2022 at 11:32:37PM -0500, Justin Pryzby wrote:\n> I think new GUCs should be enabled during patch development.\n> Maybe in a separate 0002 patch \"for CI only not for commit\".\n> That way \"make check\" at least has a chance to hit that new code\n> paths.\n> \n> Also, note that indxpath.c had the var initialized to true.\n\nIn your patch, the amstats guc is still being set to false during\nstartup by the guc machinery. And the tests crash everywhere if it's\nset to on:\n\nTRAP: failed Assert(\"(nmatches_unique >= 1) && (nmatches_unique <= unique[nvalues-1])\"), File: \"../src/backend/access/brin/brin_minmax.c\", Line: 644, PID: 25519\n\n> . Some typos in your other patches: \"heuristics heuristics\". 
ste.\n> lest (least).\n\nThese are still present.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 16 Feb 2023 10:10:16 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2/16/23 17:10, Justin Pryzby wrote:\n> On Thu, Feb 16, 2023 at 03:07:59PM +0100, Tomas Vondra wrote:\n>> Rebased version of the patches, fixing only minor conflicts.\n> \n> Per cfbot, the patch fails on 32 bit builds.\n> +ERROR: count mismatch: 0 != 1000\n> \n> And causes warnings in mingw cross-compile.\n> \n\nThere was a silly mistake in trying to store block numbers as bigint\nwhen sorting the ranges, instead of uint32. That happens to work on\n64-bit systems, but on 32-bit systems it produces bogus block.\n\nThe attached should fix that - it passes on 32-bit arm, even with\nvalgrind and all that.\n\n> On Sun, Oct 23, 2022 at 11:32:37PM -0500, Justin Pryzby wrote:\n>> I think new GUCs should be enabled during patch development.\n>> Maybe in a separate 0002 patch \"for CI only not for commit\".\n>> That way \"make check\" at least has a chance to hit that new code\n>> paths.\n>>\n>> Also, note that indxpath.c had the var initialized to true.\n> \n> In your patch, the amstats guc is still being set to false during\n> startup by the guc machinery. And the tests crash everywhere if it's\n> set to on:\n> \n> TRAP: failed Assert(\"(nmatches_unique >= 1) && (nmatches_unique <= unique[nvalues-1])\"), File: \"../src/backend/access/brin/brin_minmax.c\", Line: 644, PID: 25519\n> \n\nRight, that was a silly thinko in building the stats, and I found a\ncouple more issues nearby. Should be fixed in the attached version.\n\n>> . Some typos in your other patches: \"heuristics heuristics\". 
ste.\n>> lest (least).\n> \n> These are still present.\n\nThanks for reminding me, those should be fixed too now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 18 Feb 2023 13:19:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "cfbot complained there's one more place triggering a compiler warning on\n32-bit systems, so here's a version fixing that.\n\nI've also added a copy of the regression tests but using the indexam\nstats added in 0001. This is just a copy of the already existing\nregression tests, just with enable_indexam_stats=true - this should\ncatch some of the issues that went mostly undetected in the earlier\npatch versions.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 18 Feb 2023 16:54:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Are (any of) these patches targetting v16 ?\n\ntypos:\nar we - we are?\nmorestly - mostly\ninterstect - intersect\n\n> + * XXX We don't sort the bins, so just do binary sort. For large number of values\n> + * this might be an issue, for small number of values a linear search is fine.\n\n\"binary sort\" is wrong?\n\n> + * only half of there ranges, thus 1/2. This can be extended to randomly\n\nhalf of *these* ranges ?\n\n> From 7b3307c27b35ece119feab4891f03749250e454b Mon Sep 17 00:00:00 2001\n> From: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Mon, 17 Oct 2022 18:39:28 +0200\n> Subject: [PATCH 01/11] Allow index AMs to build and use custom statistics\n\nI think the idea can also apply to btree - currently, correlation is\nconsidered to be a property of a column, but not an index. 
But that\nfails to distinguish between a freshly built index, and an index with\nout of order heap references, which can cause an index scan to be a lot\nmore expensive.\n\nI implemented per-index correlation stats way back when:\nhttps://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com\n\nSee also:\nhttps://www.postgresql.org/message-id/14438.1512499811@sss.pgh.pa.us\n\nWith my old test case:\n\nIndex scan is 3x slower than bitmap scan, but index scan is costed as\nbeing cheaper:\n\npostgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n Index Scan using t_i_idx on t (cost=0.43..21153.74 rows=130912 width=8) (actual time=0.107..222.737 rows=128914 loops=1)\n\npostgres=# SET enable_indexscan =no;\npostgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n Bitmap Heap Scan on t (cost=2834.28..26895.96 rows=130912 width=8) (actual time=16.830..69.860 rows=128914 loops=1)\n\nIf it's clustered, then the index scan is almost twice as fast, and the\ncosts are more consistent with the associated time. The planner assumes\nthat the indexes are freshly built...\n\npostgres=# CLUSTER t USING t_i_idx ;\npostgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n Index Scan using t_i_idx on t (cost=0.43..20121.74 rows=130912 width=8) (actual time=0.084..117.549 rows=128914 loops=1)\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 18 Feb 2023 12:51:09 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2/18/23 19:51, Justin Pryzby wrote:\n> Are (any of) these patches targetting v16 ?\n> \n\nProbably not. Maybe if there's more feedback / scrutiny, but I'm not\nsure one commitfest is enough to polish the patch (especially\nconsidering I haven't done much on the costing yet).\n\n> typos:\n> ar we - we are?\n> morestly - mostly\n> interstect - intersect\n> \n>> + * XXX We don't sort the bins, so just do binary sort. 
For large number of values\n>> + * this might be an issue, for small number of values a linear search is fine.\n> \n> \"binary sort\" is wrong?\n> \n>> + * only half of there ranges, thus 1/2. This can be extended to randomly\n> \n> half of *these* ranges ?\n> \n\nThanks, I'll fix those.\n\n>> From 7b3307c27b35ece119feab4891f03749250e454b Mon Sep 17 00:00:00 2001\n>> From: Tomas Vondra <tomas.vondra@postgresql.org>\n>> Date: Mon, 17 Oct 2022 18:39:28 +0200\n>> Subject: [PATCH 01/11] Allow index AMs to build and use custom statistics\n> \n> I think the idea can also apply to btree - currently, correlation is\n> considered to be a property of a column, but not an index. But that\n> fails to distinguish between a freshly built index, and an index with\n> out of order heap references, which can cause an index scan to be a lot\n> more expensive.\n> \n> I implemented per-index correlation stats way back when:\n> https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com\n> \n> See also:\n> https://www.postgresql.org/message-id/14438.1512499811@sss.pgh.pa.us\n> \n> With my old test case:\n> \n> Index scan is 3x slower than bitmap scan, but index scan is costed as\n> being cheaper:\n> \n> postgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n> Index Scan using t_i_idx on t (cost=0.43..21153.74 rows=130912 width=8) (actual time=0.107..222.737 rows=128914 loops=1)\n> \n> postgres=# SET enable_indexscan =no;\n> postgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n> Bitmap Heap Scan on t (cost=2834.28..26895.96 rows=130912 width=8) (actual time=16.830..69.860 rows=128914 loops=1)\n> \n> If it's clustered, then the index scan is almost twice as fast, and the\n> costs are more consistent with the associated time. 
The planner assumes\n> that the indexes are freshly built...\n> \n> postgres=# CLUSTER t USING t_i_idx ;\n> postgres=# explain analyze SELECT * FROM t WHERE i>11 AND i<55;\n> Index Scan using t_i_idx on t (cost=0.43..20121.74 rows=130912 width=8) (actual time=0.084..117.549 rows=128914 loops=1)\n> \n\nYeah, the concept of indexam statistics certainly applies to other index\ntypes, and for btree we might collect information about correlation etc.\nI haven't looked at the 2017 patch, but it seems reasonable.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Feb 2023 21:05:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sat, 18 Feb 2023 at 16:55, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> cfbot complained there's one more place triggering a compiler warning on\n> 32-bit systems, so here's a version fixing that.\n>\n> I've also added a copy of the regression tests but using the indexam\n> stats added in 0001. This is just a copy of the already existing\n> regression tests, just with enable_indexam_stats=true - this should\n> catch some of the issues that went mostly undetected in the earlier\n> patch versions.\n\n\nComments on 0001, mostly comments and patch design:\n\n> +range_minval_cmp(const void *a, const void *b, void *arg)\n> [...]\n> +range_maxval_cmp(const void *a, const void *b, void *arg)\n> [...]\n> +range_values_cmp(const void *a, const void *b, void *arg)\n\nCan the arguments of these functions be modified into the types they\nare expected to receive? If not, could you add a comment on why that's\nnot possible?\nI don't think it's good practise to \"just\" use void* for arguments to\ncast them to more concrete types in the next lines without an\nimmediate explanation.\n\n> + * Statistics calculated by index AM (e.g. 
BRIN for ranges, etc.).\n\nCould you please expand on this? We do have GIST support for ranges, too.\n\n> + * brin_minmax_stats\n> + * Calculate custom statistics for a BRIN minmax index.\n> + *\n> + * At the moment this calculates:\n> + *\n> + * - number of summarized/not-summarized and all/has nulls ranges\n\nI think statistics gathering of an index should be done at the AM\nlevel, not attribute level. The docs currently suggest that the user\nbuilds one BRIN index with 16 columns instead of 16 BRIN indexes with\none column each, which would make the statistics gathering use 16x\nmore IO if the scanned data cannot be reused.\n\nIt is possible to build BRIN indexes on more than one column with more\nthan one opclass family like `USING brin (id int8_minmax_ops, id\nint8_bloom_ops)`. This would mean various duplicate statistics fields,\nno?\nIt seems to me that it's more useful to do the null- and n_summarized\non the index level instead of duplicating that inside the opclass.\n\n> + for (heapBlk = 0; heapBlk < nblocks; heapBlk += pagesPerRange)\n\nI am not familiar with the frequency of max-sized relations, but this\nwould go into an infinite loop for pagesPerRange values >1 for\nmax-sized relations due to BlockNumber wraparound. I think there\nshould be some additional overflow checks here.\n\n> +/*\n> + * get_attstaindexam\n> + *\n> + * Given the table and attribute number of a column, get the index AM\n> + * statistics. 
Return NULL if no data available.\n> + *\n\nShouldn't this be \"given the index and attribute number\" instead of\n\"the table and attribute number\"?\nI think we need to be compatible with indexes on expression here, so\nthat we don't fail to create (or use) statistics for an index `USING\nbrin ( (int8range(my_min_column, my_max_column, '[]'))\nrange_inclusion_ops)` when we implement stats for range_inclusion_ops.\n\n> + * Alternative to brin_minmax_match_tuples_to_ranges2, leveraging ordering\n\nDoes this function still exist?\n\n.\n\nI'm planning on reviewing the other patches, and noticed that a lot of\nthe patches are marked WIP. Could you share a status on those, because\ncurrently that status is unknown: Are these patches you don't plan on\nincluding, or are these patches only (or mostly) included for\ndebugging?\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 23 Feb 2023 15:19:16 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2/23/23 15:19, Matthias van de Meent wrote:\n> On Sat, 18 Feb 2023 at 16:55, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> cfbot complained there's one more place triggering a compiler warning on\n>> 32-bit systems, so here's a version fixing that.\n>>\n>> I've also added a copy of the regression tests but using the indexam\n>> stats added in 0001. 
This is just a copy of the already existing\n>> regression tests, just with enable_indexam_stats=true - this should\n>> catch some of the issues that went mostly undetected in the earlier\n>> patch versions.\n> \n> \n> Comments on 0001, mostly comments and patch design:\n> \n>> +range_minval_cmp(const void *a, const void *b, void *arg)\n>> [...]\n>> +range_maxval_cmp(const void *a, const void *b, void *arg)\n>> [...]\n>> +range_values_cmp(const void *a, const void *b, void *arg)\n> \n> Can the arguments of these functions be modified into the types they\n> are expected to receive? If not, could you add a comment on why that's\n> not possible?\n> I don't think it's good practise to \"just\" use void* for arguments to\n> cast them to more concrete types in the next lines without an\n> immediate explanation.\n> \n\nThe reason is that that's what qsort() expects. If you change that to\nactual data types, you'll get compile-time warnings. I agree this may\nneed better comments, though.\n\n>> + * Statistics calculated by index AM (e.g. BRIN for ranges, etc.).\n> \n> Could you please expand on this? We do have GIST support for ranges, too.\n> \n\nExpand in what way? This is meant to be AM-specific, so if GiST wants to\ncollect some additional stats, it's free to do so - perhaps some of the\nideas from the stats collected for BRIN would be applicable, but it's\nalso bound to the index structure.\n\n>> + * brin_minmax_stats\n>> + * Calculate custom statistics for a BRIN minmax index.\n>> + *\n>> + * At the moment this calculates:\n>> + *\n>> + * - number of summarized/not-summarized and all/has nulls ranges\n> \n> I think statistics gathering of an index should be done at the AM\n> level, not attribute level. The docs currently suggest that the user\n> builds one BRIN index with 16 columns instead of 16 BRIN indexes with\n> one column each, which would make the statistics gathering use 16x\n> more IO if the scanned data cannot be reused.\n> \n\nWhy? 
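[Editorial illustration] The qsort() comparator convention raised above can be shown with a minimal standalone sketch. The struct and the simplified comparator below are invented for this example; the patch's actual comparators additionally take a third void *arg because they are written for qsort_arg(), PostgreSQL's qsort variant that passes a caller-supplied context through.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical range summary, for illustration only. */
typedef struct RangeInfo
{
    int min_value;
    int max_value;
} RangeInfo;

/*
 * qsort() requires a comparator of type
 *     int (*)(const void *, const void *)
 * so the parameters must be declared const void * and cast to the
 * concrete type inside the body; declaring them as const RangeInfo *
 * would make the function pointer type incompatible with qsort()'s
 * prototype and draw a compile-time warning.
 */
static int
range_minval_cmp(const void *a, const void *b)
{
    const RangeInfo *ra = (const RangeInfo *) a;
    const RangeInfo *rb = (const RangeInfo *) b;

    if (ra->min_value < rb->min_value)
        return -1;
    if (ra->min_value > rb->min_value)
        return 1;
    return 0;
}
```

Sorting an array of RangeInfo by min_value is then just qsort(ranges, n, sizeof(RangeInfo), range_minval_cmp).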
The row sample is collected only once and used for building all the\nindex AM stats - it doesn't really matter if we analyze 16 single-column\nindexes or 1 index with 16 columns. Yes, we'll need to scan more\nindexes, but the with 16 columns the summaries will be larger so the\ntotal amount of I/O will be almost the same I think.\n\nOr maybe I don't understand what I/O you're talking about?\n\n> It is possible to build BRIN indexes on more than one column with more\n> than one opclass family like `USING brin (id int8_minmax_ops, id\n> int8_bloom_ops)`. This would mean various duplicate statistics fields,\n> no?\n> It seems to me that it's more useful to do the null- and n_summarized\n> on the index level instead of duplicating that inside the opclass.\n\nI don't think it's worth it. The amount of data this would save is tiny,\nand it'd only apply to cases where the index includes the same attribute\nmultiple times, and that's pretty rare I think. I don't think it's worth\nthe extra complexity.\n\n> \n>> + for (heapBlk = 0; heapBlk < nblocks; heapBlk += pagesPerRange)\n> \n> I am not familiar with the frequency of max-sized relations, but this\n> would go into an infinite loop for pagesPerRange values >1 for\n> max-sized relations due to BlockNumber wraparound. I think there\n> should be some additional overflow checks here.\n> \n\nGood point, but that's a pre-existing issue. We do this same loop in a\nnumber of places.\n\n>> +/*\n>> + * get_attstaindexam\n>> + *\n>> + * Given the table and attribute number of a column, get the index AM\n>> + * statistics. 
Return NULL if no data available.\n>> + *\n> \n> Shouldn't this be \"given the index and attribute number\" instead of\n> \"the table and attribute number\"?\n> I think we need to be compatible with indexes on expression here, so\n> that we don't fail to create (or use) statistics for an index `USING\n> brin ( (int8range(my_min_column, my_max_column, '[]'))\n> range_inclusion_ops)` when we implement stats for range_inclusion_ops.\n> \n>> + * Alternative to brin_minmax_match_tuples_to_ranges2, leveraging ordering\n> \n> Does this function still exist?\n> \n\nYes, but only in 0003 - it's a \"brute-force\" algorithm used as a\ncross-check the result of the optimized algorithm in 0001. You're right\nit should not be referenced in the comment.\n\n> \n> I'm planning on reviewing the other patches, and noticed that a lot of\n> the patches are marked WIP. Could you share a status on those, because\n> currently that status is unknown: Are these patches you don't plan on\n> including, or are these patches only (or mostly) included for\n> debugging?\n> \n\nI think the WIP label is a bit misleading, I used it mostly to mark\npatches that are not meant to be committed on their own. A quick overview:\n\n0002-wip-introduce-debug_brin_stats-20230218-2.patch\n0003-wip-introduce-debug_brin_cross_check-20230218-2.patch\n\n Not meant for commit, used mostly during development/debugging, by\n adding debug logging and/or cross-check to validate the results.\n\n I think it's fine to ignore those during review.\n\n\n0005-wip-brinsort-explain-stats-20230218-2.patch\n\n This needs more work. 
It does what it's meant to do (show info about\n the brinsort node), but I'm not very happy with the formatting etc.\n\n Review and suggestions welcome.\n\n\n0006-wip-multiple-watermark-steps-20230218-2.patch\n0007-wip-adjust-watermark-step-20230218-2.patch\n0008-wip-adaptive-watermark-step-20230218-2.patch\n\n Ultimately this should be merged into 0004 (which does the actual\n brinsort), I think 0006 is the way to go, but I kept all the patches\n to show the incremental evolution (and allow comparisons).\n\n 0006 adds a GUC to specify how many ranges to add in each round,\n 0005 adjusts that based on statistics during planning and 0006 does\n that adaptively during execution.\n\n0009-wip-add-brinsort-regression-tests-20230218-2.patch\n0010-wip-add-brinsort-amstats-regression-tests-20230218-2.patch\n\n These need to move to the earlier parts. The tests are rather\n expensive so we'll need to reduce it somehow.\n\n0011-wip-test-generator-script-20230218-2.patch\n\n Not intended for commit, I only included it to keep it as part of the\n patch series.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Feb 2023 16:22:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Thu, 23 Feb 2023 at 16:22, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/23/23 15:19, Matthias van de Meent wrote:\n>> Comments on 0001, mostly comments and patch design:\n\nOne more comment:\n\n>>> + range->min_value = bval->bv_values[0];\n>>> + range->max_value = bval->bv_values[1];\n\nThis produces dangling pointers for minmax indexes on by-ref types\nsuch as text and bytea, due to the memory context of the decoding\ntuple being reset every iteration. 
:-(\n\n> >> +range_values_cmp(const void *a, const void *b, void *arg)\n> >\n> > Can the arguments of these functions be modified into the types they\n> > are expected to receive? If not, could you add a comment on why that's\n> > not possible?\n>\n> The reason is that that's what qsort() expects. If you change that to\n> actual data types, you'll get compile-time warnings. I agree this may\n> need better comments, though.\n\nThanks in advance.\n\n> >> + * Statistics calculated by index AM (e.g. BRIN for ranges, etc.).\n> >\n> > Could you please expand on this? We do have GIST support for ranges, too.\n> >\n>\n> Expand in what way? This is meant to be AM-specific, so if GiST wants to\n> collect some additional stats, it's free to do so - perhaps some of the\n> ideas from the stats collected for BRIN would be applicable, but it's\n> also bound to the index structure.\n\nI don't quite understand the flow of the comment, as I don't clearly\nsee what the \"BRIN for ranges\" tries to refer to. In my view, that\nmakes it a bad example which needs further explanation or rewriting,\naka \"expanding on\".\n\n> >> + * brin_minmax_stats\n> >> + * Calculate custom statistics for a BRIN minmax index.\n> >> + *\n> >> + * At the moment this calculates:\n> >> + *\n> >> + * - number of summarized/not-summarized and all/has nulls ranges\n> >\n> > I think statistics gathering of an index should be done at the AM\n> > level, not attribute level. The docs currently suggest that the user\n> > builds one BRIN index with 16 columns instead of 16 BRIN indexes with\n> > one column each, which would make the statistics gathering use 16x\n> > more IO if the scanned data cannot be reused.\n> >\n>\n> Why? The row sample is collected only once and used for building all the\n> index AM stats - it doesn't really matter if we analyze 16 single-column\n> indexes or 1 index with 16 columns. 
Yes, we'll need to scan more\n> indexes, but the with 16 columns the summaries will be larger so the\n> total amount of I/O will be almost the same I think.\n>\n> Or maybe I don't understand what I/O you're talking about?\n\nWith the proposed patch, we do O(ncols_statsenabled) scans over the\nBRIN index. Each scan reads all ncol columns of all block ranges from\ndisk, so in effect the data scan does on the order of\nO(ncols_statsenabled * ncols * nranges) IOs, or O(n^2) on cols when\nall columns have statistics enabled.\n\n> > It is possible to build BRIN indexes on more than one column with more\n> > than one opclass family like `USING brin (id int8_minmax_ops, id\n> > int8_bloom_ops)`. This would mean various duplicate statistics fields,\n> > no?\n> > It seems to me that it's more useful to do the null- and n_summarized\n> > on the index level instead of duplicating that inside the opclass.\n>\n> I don't think it's worth it. The amount of data this would save is tiny,\n> and it'd only apply to cases where the index includes the same attribute\n> multiple times, and that's pretty rare I think. I don't think it's worth\n> the extra complexity.\n\nNot necessarily, it was just an example of where we'd save IO.\nNote that the current gathering method already retrieves all tuple\nattribute data, so from a basic processing perspective we'd save some\ntime decoding as well.\n\n> >\n> > I'm planning on reviewing the other patches, and noticed that a lot of\n> > the patches are marked WIP. Could you share a status on those, because\n> > currently that status is unknown: Are these patches you don't plan on\n> > including, or are these patches only (or mostly) included for\n> > debugging?\n> >\n>\n> I think the WIP label is a bit misleading, I used it mostly to mark\n> patches that are not meant to be committed on their own. A quick overview:\n>\n> [...]\n\nThanks for the explanation, that's quite helpful. 
I'll see to further\nreviewing 0004 and 0005 when I have additional time.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Thu, 23 Feb 2023 17:44:31 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2/23/23 17:44, Matthias van de Meent wrote:\n> On Thu, 23 Feb 2023 at 16:22, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 2/23/23 15:19, Matthias van de Meent wrote:\n>>> Comments on 0001, mostly comments and patch design:\n> \n> One more comment:\n> \n>>>> + range->min_value = bval->bv_values[0];\n>>>> + range->max_value = bval->bv_values[1];\n> \n> This produces dangling pointers for minmax indexes on by-ref types\n> such as text and bytea, due to the memory context of the decoding\n> tuple being reset every iteration. :-(\n> \n\nYeah, that sounds like a bug. Also a sign the tests should have some\nby-ref data types (presumably there are none, as that would certainly\ntrip some asserts etc.).\n\n>>>> +range_values_cmp(const void *a, const void *b, void *arg)\n>>>\n>>> Can the arguments of these functions be modified into the types they\n>>> are expected to receive? If not, could you add a comment on why that's\n>>> not possible?\n>>\n>> The reason is that that's what qsort() expects. If you change that to\n>> actual data types, you'll get compile-time warnings. I agree this may\n>> need better comments, though.\n> \n> Thanks in advance.\n> \n>>>> + * Statistics calculated by index AM (e.g. BRIN for ranges, etc.).\n>>>\n>>> Could you please expand on this? We do have GIST support for ranges, too.\n>>>\n>>\n>> Expand in what way? 
This is meant to be AM-specific, so if GiST wants to\n>> collect some additional stats, it's free to do so - perhaps some of the\n>> ideas from the stats collected for BRIN would be applicable, but it's\n>> also bound to the index structure.\n> \n> I don't quite understand the flow of the comment, as I don't clearly\n> see what the \"BRIN for ranges\" tries to refer to. In my view, that\n> makes it a bad example which needs further explanation or rewriting,\n> aka \"expanding on\".\n> \n\nAh, right. Yeah, the \"BRIN for ranges\" wording is a bit misleading. It\nshould really say only BRIN, but I was focused on the minmax use case,\nso I mentioned the ranges.\n\n>>>> + * brin_minmax_stats\n>>>> + * Calculate custom statistics for a BRIN minmax index.\n>>>> + *\n>>>> + * At the moment this calculates:\n>>>> + *\n>>>> + * - number of summarized/not-summarized and all/has nulls ranges\n>>>\n>>> I think statistics gathering of an index should be done at the AM\n>>> level, not attribute level. The docs currently suggest that the user\n>>> builds one BRIN index with 16 columns instead of 16 BRIN indexes with\n>>> one column each, which would make the statistics gathering use 16x\n>>> more IO if the scanned data cannot be reused.\n>>>\n>>\n>> Why? The row sample is collected only once and used for building all the\n>> index AM stats - it doesn't really matter if we analyze 16 single-column\n>> indexes or 1 index with 16 columns. Yes, we'll need to scan more\n>> indexes, but the with 16 columns the summaries will be larger so the\n>> total amount of I/O will be almost the same I think.\n>>\n>> Or maybe I don't understand what I/O you're talking about?\n> \n> With the proposed patch, we do O(ncols_statsenabled) scans over the\n> BRIN index. 
Each scan reads all ncol columns of all block ranges from\n> disk, so in effect the data scan does on the order of\n> O(ncols_statsenabled * ncols * nranges) IOs, or O(n^2) on cols when\n> all columns have statistics enabled.\n> \n\nI don't think that's the number of I/O operations we'll do, because we\nalways read the whole BRIN tuple at once. So I believe it should rather\nbe something like\n\n O(ncols_statsenabled * nranges)\n\nassuming nranges is the number of page ranges. But even that's likely a\nsignificant overestimate because most of the tuples will be served from\nshared buffers.\n\nConsidering how tiny BRIN indexes are, this is likely orders of\nmagnitude less I/O than we expend on sampling rows from the table. I\nmean, with the default statistics target we read ~30000 pages (~240MB)\nor more just to sample the rows. Randomly, while the BRIN index is\nlikely scanned mostly sequentially.\n\nMaybe there are cases where this would be an issue, but I haven't seen\none when working on this patch (and I did a lot of experiments). I'd\nlike to see one before we start optimizing it ...\n\nThis also reminds me that the issues I actually saw (e.g. memory\nconsumption) would be made worse by processing all columns at once,\nbecause then you need to keep more columns in memory.\n\n\n>>> It is possible to build BRIN indexes on more than one column with more\n>>> than one opclass family like `USING brin (id int8_minmax_ops, id\n>>> int8_bloom_ops)`. This would mean various duplicate statistics fields,\n>>> no?\n>>> It seems to me that it's more useful to do the null- and n_summarized\n>>> on the index level instead of duplicating that inside the opclass.\n>>\n>> I don't think it's worth it. The amount of data this would save is tiny,\n>> and it'd only apply to cases where the index includes the same attribute\n>> multiple times, and that's pretty rare I think. 
I don't think it's worth\n>> the extra complexity.\n> \n> Not necessarily, it was just an example of where we'd save IO.\n> Note that the current gathering method already retrieves all tuple\n> attribute data, so from a basic processing perspective we'd save some\n> time decoding as well.\n> \n\n[shrug] I still think it's a negligible fraction of the time.\n\n>>>\n>>> I'm planning on reviewing the other patches, and noticed that a lot of\n>>> the patches are marked WIP. Could you share a status on those, because\n>>> currently that status is unknown: Are these patches you don't plan on\n>>> including, or are these patches only (or mostly) included for\n>>> debugging?\n>>>\n>>\n>> I think the WIP label is a bit misleading, I used it mostly to mark\n>> patches that are not meant to be committed on their own. A quick overview:\n>>\n>> [...]\n> \n> Thanks for the explanation, that's quite helpful. I'll see to further\n> reviewing 0004 and 0005 when I have additional time.\n> \n\nCool, thank you!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Feb 2023 19:48:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2023-Feb-23, Matthias van de Meent wrote:\n\n> > + for (heapBlk = 0; heapBlk < nblocks; heapBlk += pagesPerRange)\n> \n> I am not familiar with the frequency of max-sized relations, but this\n> would go into an infinite loop for pagesPerRange values >1 for\n> max-sized relations due to BlockNumber wraparound. I think there\n> should be some additional overflow checks here.\n\nThey are definitely not very common -- BlockNumber wraps around at 32 TB\nIIUC. 
But yeah, I guess it is a possibility, and perhaps we should find\na way to write these loops in a more robust manner.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Fri, 24 Feb 2023 09:39:16 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 2/24/23 09:39, Alvaro Herrera wrote:\n> On 2023-Feb-23, Matthias van de Meent wrote:\n> \n>>> + for (heapBlk = 0; heapBlk < nblocks; heapBlk += pagesPerRange)\n>>\n>> I am not familiar with the frequency of max-sized relations, but this\n>> would go into an infinite loop for pagesPerRange values >1 for\n>> max-sized relations due to BlockNumber wraparound. I think there\n>> should be some additional overflow checks here.\n> \n> They are definitely not very common -- BlockNumber wraps around at 32 TB\n> IIUC. But yeah, I guess it is a possibility, and perhaps we should find\n> a way to write these loops in a more robust manner.\n> \n\nI guess the easiest fix would be to do the arithmetic in 64 bits. That'd\neliminate the overflow.\n\nAlternatively, we could do something like\n\n prevHeapBlk = 0;\n for (heapBlk = 0; (heapBlk < nblocks) && (prevHeapBlk <= heapBlk);\n heapBlk += pagesPerRange)\n {\n ...\n prevHeapBlk = heapBlk;\n }\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Feb 2023 11:25:55 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2023-Feb-24, Tomas Vondra wrote:\n\n> I guess the easiest fix would be to do the arithmetic in 64 bits. That'd\n> eliminate the overflow.\n\nYeah, that might be easy to set up. 
We then don't have to worry about\nit until BlockNumber is enlarged to 64 bits ... but by that time surely\nwe can just grow it again to a 128 bit loop variable.\n\n> Alternatively, we could do something like\n> \n> prevHeapBlk = 0;\n> for (heapBlk = 0; (heapBlk < nblocks) && (prevHeapBlk <= heapBlk);\n> heapBlk += pagesPerRange)\n> {\n> ...\n> prevHeapBlk = heapBlk;\n> }\n\nI think a formulation of this kind has the benefit that it works after\nBlockNumber is enlarged to 64 bits, and doesn't have to be changed ever\nagain (assuming it is correct).\n\n... if pagesPerRange is not a whole divisor of MaxBlockNumber, I think\nthis will neglect the last range in the table.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 24 Feb 2023 16:14:16 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 2/24/23 16:14, Alvaro Herrera wrote:\n> On 2023-Feb-24, Tomas Vondra wrote:\n> \n>> I guess the easiest fix would be to do the arithmetic in 64 bits. That'd\n>> eliminate the overflow.\n> \n> Yeah, that might be easy to set up. We then don't have to worry about\n> it until BlockNumber is enlarged to 64 bits ... but by that time surely\n> we can just grow it again to a 128 bit loop variable.\n> \n>> Alternatively, we could do something like\n>>\n>> prevHeapBlk = 0;\n>> for (heapBlk = 0; (heapBlk < nblocks) && (prevHeapBlk <= heapBlk);\n>> heapBlk += pagesPerRange)\n>> {\n>> ...\n>> prevHeapBlk = heapBlk;\n>> }\n> \n> I think a formulation of this kind has the benefit that it works after\n> BlockNumber is enlarged to 64 bits, and doesn't have to be changed ever\n> again (assuming it is correct).\n> \n\nDid anyone even propose doing that? I suspect this is unlikely to be the\nonly place that might be broken by that.\n\n> ... 
if pagesPerRange is not a whole divisor of MaxBlockNumber, I think\n> this will neglect the last range in the table.\n> \n\nWhy would it? Let's say BlockNumber is uint8, i.e. 255 max. And there\nare 10 pages per range. That's 25 \"full\" ranges, and the last range\nbeing just 5 pages. So we get into\n\n prevHeapBlk = 240\n heapBlk = 250\n\nand we read the last 5 pages. And then we update\n\n prevHeapBlk = 250\n heapBlk = (250 + 10) % 255 = 5\n\nand we don't do that loop. Or did I get this wrong, somehow?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Feb 2023 17:04:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Fri, 24 Feb 2023 at 17:04, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/24/23 16:14, Alvaro Herrera wrote:\n> > ... if pagesPerRange is not a whole divisor of MaxBlockNumber, I think\n> > this will neglect the last range in the table.\n> >\n>\n> Why would it? Let's say BlockNumber is uint8, i.e. 255 max. And there\n> are 10 pages per range. That's 25 \"full\" ranges, and the last range\n> being just 5 pages. So we get into\n>\n> prevHeapBlk = 240\n> heapBlk = 250\n>\n> and we read the last 5 pages. And then we update\n>\n> prevHeapBlk = 250\n> heapBlk = (250 + 10) % 255 = 5\n>\n> and we don't do that loop. 
Or did I get this wrong, somehow?\n\nThe result is off-by-one due to (u)int8 overflows being mod-256, but\napart from that your result is accurate.\n\nThe condition only stops the loop when we wrap around or when we go\npast the last block, but no earlier than that.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 24 Feb 2023 17:11:21 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Thu, 23 Feb 2023 at 19:48, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/23/23 17:44, Matthias van de Meent wrote:\n> > On Thu, 23 Feb 2023 at 16:22, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 2/23/23 15:19, Matthias van de Meent wrote:\n> >>> Comments on 0001, mostly comments and patch design:\n> >\n> > One more comment:\n> >\n> >>>> + range->min_value = bval->bv_values[0];\n> >>>> + range->max_value = bval->bv_values[1];\n> >\n> > This produces dangling pointers for minmax indexes on by-ref types\n> > such as text and bytea, due to the memory context of the decoding\n> > tuple being reset every iteration. :-(\n> >\n>\n> Yeah, that sounds like a bug. Also a sign the tests should have some\n> by-ref data types (presumably there are none, as that would certainly\n> trip some asserts etc.).\n\nI'm not sure we currently trip asserts, as the data we store in the\nmemory context is very limited, making it unlikely we actually release\nthe memory region back to the OS.\nI did get assertion failures by adding the attached patch, but I don't\nthink that's something we can do in the final release.\n\n> > With the proposed patch, we do O(ncols_statsenabled) scans over the\n> > BRIN index. 
Each scan reads all ncol columns of all block ranges from\n> > disk, so in effect the data scan does on the order of\n> > O(ncols_statsenabled * ncols * nranges) IOs, or O(n^2) on cols when\n> > all columns have statistics enabled.\n> >\n>\n> I don't think that's the number of I/O operations we'll do, because we\n> always read the whole BRIN tuple at once. So I believe it should rather\n> be something like\n>\n> O(ncols_statsenabled * nranges)\n>\n> assuming nranges is the number of page ranges. But even that's likely a\n> significant overestimate because most of the tuples will be served from\n> shared buffers.\n\nWe store some data per index attribute, which makes the IO required\nfor a single indexed range proportional to the number of index\nattributes.\nIf we then scan the index a number of times proportional to the number\nof attributes, the cumulative IO load of statistics gathering for that\nindex is quadratic on the number of attributes, not linear, right?\n\n> Considering how tiny BRIN indexes are, this is likely orders of\n> magnitude less I/O than we expend on sampling rows from the table. I\n> mean, with the default statistics target we read ~30000 pages (~240MB)\n> or more just to sample the rows. Randomly, while the BRIN index is\n> likely scanned mostly sequentially.\n\nMostly agreed; except I think it's not too difficult to imagine a BRIN\nindex that is on that scale; with as an example the bloom filters that\neasily take up several hundreds of bytes.\n\nWith the default configuration of 128 pages_per_range,\nn_distinct_per_range of -0.1, and false_positive_rate of 0.01, the\nbloom filter size is 4.36 KiB - each indexed item on its own page. It\nis still only 1% of the original table's size, but there are enough\ntables that are larger than 24GB that this could be a significant\nissue.\n\nNote that most of my concerns are related to our current\ndocumentation's statement that there are no demerits to multi-column\nBRIN indexes:\n\n\"\"\"\n11.3. 
Multicolumn Indexes\n\n[...] The only reason to have multiple BRIN indexes instead of one\nmulticolumn BRIN index on a single table is to have a different\npages_per_range storage parameter.\n\"\"\"\n\nWide BRIN indexes add IO overhead for single-attribute scans when\ncompared to single-attribute indexes, so having N single-attribute\nindex scans to build the statistics on an N-attribute index\nis not great.\n\n> Maybe there are cases where this would be an issue, but I haven't seen\n> one when working on this patch (and I did a lot of experiments). I'd\n> like to see one before we start optimizing it ...\n\nI'm not only worried about optimizing it, I'm also worried that we're\nputting this abstraction at the wrong level in a way that is difficult\nto modify.\n\n> This also reminds me that the issues I actually saw (e.g. memory\n> consumption) would be made worse by processing all columns at once,\n> because then you need to keep more columns in memory.\n\nYes, I think that can be a valid concern, but don't we already do similar\nthings in the current table statistics gathering?\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 24 Feb 2023 19:03:45 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 2/24/23 19:03, Matthias van de Meent wrote:\n> On Thu, 23 Feb 2023 at 19:48, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 2/23/23 17:44, Matthias van de Meent wrote:\n>>> On Thu, 23 Feb 2023 at 16:22, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 2/23/23 15:19, Matthias van de Meent wrote:\n>>>>> Comments on 0001, mostly comments and patch design:\n>>>\n>>> One more comment:\n>>>\n>>>>>> + range->min_value = bval->bv_values[0];\n>>>>>> + range->max_value = bval->bv_values[1];\n>>>\n>>> This produces dangling pointers for minmax indexes on by-ref types\n>>> such as 
text and bytea, due to the memory context of the decoding\n>>> tuple being reset every iteration. :-(\n>>>\n>>\n>> Yeah, that sounds like a bug. Also a sign the tests should have some\n>> by-ref data types (presumably there are none, as that would certainly\n>> trip some asserts etc.).\n> \n> I'm not sure we currently trip asserts, as the data we store in the\n> memory context is very limited, making it unlikely we actually release\n> the memory region back to the OS.\n> I did get assertion failures by adding the attached patch, but I don't\n> think that's something we can do in the final release.\n> \n\nBut we should randomize the memory if we ever do pfree(), and it's\nstrange valgrind didn't complain when I ran tests with it. But yeah,\nI'll take a look and see if we can add some tests covering this.\n\n>>> With the proposed patch, we do O(ncols_statsenabled) scans over the\n>>> BRIN index. Each scan reads all ncol columns of all block ranges from\n>>> disk, so in effect the data scan does on the order of\n>>> O(ncols_statsenabled * ncols * nranges) IOs, or O(n^2) on cols when\n>>> all columns have statistics enabled.\n>>>\n>>\n>> I don't think that's the number of I/O operations we'll do, because we\n>> always read the whole BRIN tuple at once. So I believe it should rather\n>> be something like\n>>\n>> O(ncols_statsenabled * nranges)\n>>\n>> assuming nranges is the number of page ranges. But even that's likely a\n>> significant overestimate because most of the tuples will be served from\n>> shared buffers.\n> \n> We store some data per index attribute, which makes the IO required\n> for a single indexed range proportional to the number of index\n> attributes.\n> If we then scan the index a number of times proportional to the number\n> of attributes, the cumulative IO load of statistics gathering for that\n> index is quadratic on the number of attributes, not linear, right?\n> \n\nAh, OK. 
I was focusing on number of I/O operations while you're talking\nabout amount of I/O performed. You're right the amount of I/O is\nquadratic, but I think what's more interesting is the comparison of the\nalternative ANALYZE approaches. The current simple approach does a\nmultiple of the I/O amount, linear to the number of attributes.\n\nWhich is not great, ofc.\n\n\n>> Considering how tiny BRIN indexes are, this is likely orders of\n>> magnitude less I/O than we expend on sampling rows from the table. I\n>> mean, with the default statistics target we read ~30000 pages (~240MB)\n>> or more just to sample the rows. Randomly, while the BRIN index is\n>> likely scanned mostly sequentially.\n> \n> Mostly agreed; except I think it's not too difficult to imagine a BRIN\n> index that is on that scale; with as an example the bloom filters that\n> easily take up several hundreds of bytes.\n> \n> With the default configuration of 128 pages_per_range,\n> n_distinct_per_range of -0.1, and false_positive_rate of 0.01, the\n> bloom filter size is 4.36 KiB - each indexed item on its own page. It\n> is still only 1% of the original table's size, but there are enough\n> tables that are larger than 24GB that this could be a significant\n> issue.\n> \n\nRight, it's certainly true BRIN indexes may be made fairly large (either\nby using something like bloom or by making the ranges much smaller). But\nthose are exactly the indexes where building statistics for all columns\nat once is going to cause issues with memory usage etc.\n\nNote: Obviously, that depends on how much data per range we need to keep\nin memory. For bloom I doubt we'd actually want to keep all the filters,\nwe'd probably calculate just some statistics (e.g. number of bits set),\nso maybe the memory consumption is not that bad.\n\nIn fact, for such indexes the memory consumption may already be an issue\neven when analyzing the index column by column. My feeling is we'll need\nto do something about that, e.g. 
by reading only a sample of the ranges,\nor something like that. That might help with the I/O too, I guess.\n\n> Note that most of my concerns are related to our current\n> documentation's statement that there are no demerits to multi-column\n> BRIN indexes:\n> \n> \"\"\"\n> 11.3. Multicolumn Indexes\n> \n> [...] The only reason to have multiple BRIN indexes instead of one\n> multicolumn BRIN index on a single table is to have a different\n> pages_per_range storage parameter.\n> \"\"\"\n> \n\nTrue, we may need to clarify this in the documentation.\n\n> Wide BRIN indexes add IO overhead for single-attribute scans when\n> compared to single-attribute indexes, so having N single-attribute\n> index scans to build statistics the statistics on an N-attribute index\n> is not great.\n> \n\nPerhaps, but it's not like the alternative is perfect either\n(complexity, memory consumption, ...). IMHO it's a reasonable tradeoff.\n\n>> Maybe there are cases where this would be an issue, but I haven't seen\n>> one when working on this patch (and I did a lot of experiments). I'd\n>> like to see one before we start optimizing it ...\n> \n> I'm not only worried about optimizing it, I'm also worried that we're\n> putting this abstraction at the wrong level in a way that is difficult\n> to modify.\n> \n\nYeah, that's certainly a valid concern. I admit I only did the minimum\namount of work on this part, as I was focused on the sorting part.\n\n>> This also reminds me that the issues I actually saw (e.g. memory\n>> consumption) would be made worse by processing all columns at once,\n>> because then you need to keep more columns in memory.\n> \n> Yes, I that can be a valid concern, but don't we already do similar\n> things in the current table statistics gathering?\n> \n\nNot really, I think. 
We sample a bunch of rows once, but then we build\nstatistics on this sample for each attribute / expression independently.\nWe could of course read the whole index into memory and then run the\nanalysis, but I think tuples tend to be much smaller (thanks to TOAST\netc.) and we only really scan a limited amount of them (30k).\n\nBut if we're concerned about the I/O, the BRIN is likely fairly large,\nso maybe reading it into memory at once is not a great idea.\n\nIt'd be more similar if we sampled the BRIN ranges - I think\nwe'll have to do something like that anyway.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 Feb 2023 20:14:12 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2/24/23 20:14, Tomas Vondra wrote:\n> \n> \n> On 2/24/23 19:03, Matthias van de Meent wrote:\n>> On Thu, 23 Feb 2023 at 19:48, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> On 2/23/23 17:44, Matthias van de Meent wrote:\n>>>> On Thu, 23 Feb 2023 at 16:22, Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> On 2/23/23 15:19, Matthias van de Meent wrote:\n>>>>>> Comments on 0001, mostly comments and patch design:\n>>>>\n>>>> One more comment:\n>>>>\n>>>>>>> + range->min_value = bval->bv_values[0];\n>>>>>>> + range->max_value = bval->bv_values[1];\n>>>>\n>>>> This produces dangling pointers for minmax indexes on by-ref types\n>>>> such as text and bytea, due to the memory context of the decoding\n>>>> tuple being reset every iteration. :-(\n>>>>\n>>>\n>>> Yeah, that sounds like a bug. 
Also a sign the tests should have some\n>>> by-ref data types (presumably there are none, as that would certainly\n>>> trip some asserts etc.).\n>>\n>> I'm not sure we currently trip asserts, as the data we store in the\n>> memory context is very limited, making it unlikely we actually release\n>> the memory region back to the OS.\n>> I did get assertion failures by adding the attached patch, but I don't\n>> think that's something we can do in the final release.\n>>\n> \n> But we should randomize the memory if we ever do pfree(), and it's\n> strange valgrind didn't complain when I ran tests with it. But yeah,\n> I'll take a look and see if we can add some tests covering this.\n> \n\nThere was no patch to trigger the assertions, but the attached patches\nshould fix that, I think. It pretty much just does datumCopy() after\nreading the BRIN tuple from disk.\n\nIt's interesting I've been unable to hit the usual asserts checking\nfreed memory etc. even with text columns etc. I guess the BRIN tuple\nmemory happens to be aligned in a way that just happens to work.\n\nIt however triggered an assert failure in minval_end, because it didn't\nuse the proper comparator.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 25 Feb 2023 12:34:20 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Fri, 24 Feb 2023, 20:14 Tomas Vondra, <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/24/23 19:03, Matthias van de Meent wrote:\n> > On Thu, 23 Feb 2023 at 19:48, Tomas Vondra\n> >> Yeah, that sounds like a bug. 
Also a sign the tests should have some\n> >> by-ref data types (presumably there are none, as that would certainly\n> >> trip some asserts etc.).\n> >\n> > I'm not sure we currently trip asserts, as the data we store in the\n> > memory context is very limited, making it unlikely we actually release\n> > the memory region back to the OS.\n> > I did get assertion failures by adding the attached patch, but I don't\n> > think that's something we can do in the final release.\n> >\n>\n> But we should randomize the memory if we ever do pfree(), and it's\n> strange valgrind didn't complain when I ran tests with it.\n\nWell, we don't clean up the decoding tuple immediately after our last\niteration, so the memory context (and the last tuple's attributes) are\nstill valid memory addresses. And, assuming that min/max values for\nall brin ranges all have the same max-aligned length, the attribute\npointers are likely to point to the same offset within the decoding\ntuple's memory context's memory segment, which would mean the dangling\npointers still point to valid memory - just not memory with the\ncontents they expected to be pointing to.\n\n> >> Considering how tiny BRIN indexes are, this is likely orders of\n> >> magnitude less I/O than we expend on sampling rows from the table. I\n> >> mean, with the default statistics target we read ~30000 pages (~240MB)\n> >> or more just to sample the rows. Randomly, while the BRIN index is\n> >> likely scanned mostly sequentially.\n> >\n> > Mostly agreed; except I think it's not too difficult to imagine a BRIN\n> > index that is on that scale; with as an example the bloom filters that\n> > easily take up several hundreds of bytes.\n> >\n> > With the default configuration of 128 pages_per_range,\n> > n_distinct_per_range of -0.1, and false_positive_rate of 0.01, the\n> > bloom filter size is 4.36 KiB - each indexed item on its own page. 
It\n> > is still only 1% of the original table's size, but there are enough\n> > tables that are larger than 24GB that this could be a significant\n> > issue.\n> >\n>\n> Right, it's certainly true BRIN indexes may be made fairly large (either\n> by using something like bloom or by making the ranges much smaller). But\n> those are exactly the indexes where building statistics for all columns\n> at once is going to cause issues with memory usage etc.\n>\n> Note: Obviously, that depends on how much data per range we need to keep\n> in memory. For bloom I doubt we'd actually want to keep all the filters,\n> we'd probably calculate just some statistics (e.g. number of bits set),\n> so maybe the memory consumption is not that bad.\n\nYes, I was thinking something along the same lines for bloom as well.\nSomething like 'average number of bits set' (or: histogram number of\nbits set), and/or for each bit a count (or %) how many times it is\nset, etc.\n\n> >> Maybe there are cases where this would be an issue, but I haven't seen\n> >> one when working on this patch (and I did a lot of experiments). I'd\n> >> like to see one before we start optimizing it ...\n> >\n> > I'm not only worried about optimizing it, I'm also worried that we're\n> > putting this abstraction at the wrong level in a way that is difficult\n> > to modify.\n> >\n>\n> Yeah, that's certainly a valid concern. I admit I only did the minimum\n> amount of work on this part, as I was focused on the sorting part.\n>\n> >> This also reminds me that the issues I actually saw (e.g. memory\n> >> consumption) would be made worse by processing all columns at once,\n> >> because then you need to keep more columns in memory.\n> >\n> > Yes, I that can be a valid concern, but don't we already do similar\n> > things in the current table statistics gathering?\n> >\n>\n> Not really, I think. 
We sample a bunch of rows once, but then we build\n> statistics on this sample for each attribute / expression independently.\n> We could of course read the whole index into memory and then run the\n> analysis, but I think tuples tend to be much smaller (thanks to TOAST\n> etc.) and we only really scan a limited amount of them (30k).\n\nJust to note, with default settings, sampling 30k index entries from\nBRIN would cover ~29 GiB of a table. This isn't a lot, but it also\nisn't exactly a small table. I think that it would be difficult to get\naccurate avg_overlap statistics for some shapes of BRIN data...\n\n> But if we're concerned about the I/O, the BRIN is likely fairly large,\n> so maybe reading it into memory at once is not a great idea.\n\nAgreed, we can't always expect that the interesting parts of the BRIN\nindex always fit in the available memory.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 27 Feb 2023 14:58:51 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nOn Thu, 23 Feb 2023 at 17:44, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> I'll see to further reviewing 0004 and 0005 when I have additional time.\n\nSome initial comments on 0004:\n\n> +/*\n> + * brin_minmax_ranges\n> + * Load the BRIN ranges and sort them.\n> + */\n> +Datum\n> +brin_minmax_ranges(PG_FUNCTION_ARGS)\n> +{\n\nLike in 0001, this seems to focus on only single columns. Can't we put\nthe scan and sort infrastructure in brin, and put the weight- and\ncompare-operators in the opclasses? I.e.\nbrin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min and\nbrin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max, and a\nbrin_minmax_compare(order, order) -> int? I'm thinking of something\nsimilar to GIST's distance operators, which would make implementing\nordering by e.g. 
`pointcolumn <-> (1, 2)::point` implementable in the\nbrin infrastructure.\n\nNote: One big reason I don't really like the current\nbrin_minmax_ranges (and the analyze code in 0001) is because it breaks\nthe operatorclass-vs-index abstraction layer. Operator classes don't\n(shouldn't) know or care about which attribute number they have, nor\nwhat the index does with the data.\nScanning the index is not something that I expect the operator class\nto do, I expect that the index code organizes the scan, and forwards\nthe data to the relevant operator classes.\nScanning the index N times for N attributes can be excused if there\nare good reasons, but I'd rather have that decision made in the\nindex's core code rather than at the design level.\n\n> +/*\n> + * XXX Does it make sense (is it possible) to have a sort by more than one\n> + * column, using a BRIN index?\n> + */\n\nYes, even if only one prefix column is included in the BRIN index\n(e.g. `company` in `ORDER BY company, date`, the tuplesort with table\ntuples can add additional sorting without first reading the whole\ntable, potentially (likely) reducing the total resource usage of the\nquery. That utilizes the same idea as incremental sorts, but with the\ncaveat that the input sort order is approximately likely but not at\nall guaranteed. So, even if the range sort is on a single index\ncolumn, we can still do the table's tuplesort on all ORDER BY\nattributes, as long as a prefix of ORDER BY columns are included in\nthe BRIN index.\n\n> + /*\n> + * XXX We can be a bit smarter for LIMIT queries - once we\n> + * know we have more rows in the tuplesort than we need to\n> + * output, we can stop spilling - those rows are not going\n> + * to be needed. 
We can discard the tuplesort (no need to\n> + * respill) and stop spilling.\n> + */\n\nShouldn't that be \"discard the tuplestore\"?\n\n> +#define BRIN_PROCNUM_RANGES 12 /* optional */\n\nIt would be useful to add documentation for this in this patch.\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Mon, 27 Feb 2023 16:40:22 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 2023-Feb-24, Tomas Vondra wrote:\n\n> On 2/24/23 16:14, Alvaro Herrera wrote:\n\n> > I think a formulation of this kind has the benefit that it works after\n> > BlockNumber is enlarged to 64 bits, and doesn't have to be changed ever\n> > again (assuming it is correct).\n> \n> Did anyone even propose doing that? I suspect this is unlikely to be the\n> only place that'd might be broken by that.\n\nTrue about other places also needing fixes, and no I haven't seen anyone;\nbut while 32 TB does seem very far away to us now, it might be not\n*that* far away. So I think doing it the other way is better.\n\n> > ... if pagesPerRange is not a whole divisor of MaxBlockNumber, I think\n> > this will neglect the last range in the table.\n> \n> Why would it? Let's say BlockNumber is uint8, i.e. 255 max. And there\n> are 10 pages per range. That's 25 \"full\" ranges, and the last range\n> being just 5 pages. So we get into\n> \n> prevHeapBlk = 240\n> heapBlk = 250\n> \n> and we read the last 5 pages. And then we update\n> \n> prevHeapBlk = 250\n> heapBlk = (250 + 10) % 255 = 5\n> \n> and we don't do that loop. 
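For reference, the modular-arithmetic termination argument in the example above can be checked with a short sketch (an illustrative Python snippet, not code from the patch; the 255 cap, 10 pages per range, and the prevHeapBlk/heapBlk names are taken from the example above):

```python
# Sketch of the page-range loop from the uint8 example above:
# BlockNumber capped at 255, 10 pages per range. The loop stops as soon
# as heapBlk wraps around and becomes smaller than prevHeapBlk.

MAX_BLOCK = 255       # stand-in for MaxBlockNumber in the uint8 example
PAGES_PER_RANGE = 10

def range_start_blocks():
    """Return the first block of every page range visited by the loop."""
    starts = []
    prevHeapBlk = 0
    heapBlk = 0
    while prevHeapBlk <= heapBlk:
        starts.append(heapBlk)
        prevHeapBlk = heapBlk
        heapBlk = (heapBlk + PAGES_PER_RANGE) % MAX_BLOCK
    return starts

# range_start_blocks() visits 0, 10, ..., 250; after the last (partial)
# range, heapBlk wraps to (250 + 10) % 255 = 5 < 250, ending the loop.
```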
Or did I get this wrong, somehow?\n\nI stand corrected.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 1 Mar 2023 19:33:01 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hi,\n\nI finally had time to look at this patch again. There's a bit of bitrot,\nso here's a rebased version (no other changes).\n\n[more comments inline]\n\nOn 2/27/23 16:40, Matthias van de Meent wrote:\n> Hi,\n> \n> On Thu, 23 Feb 2023 at 17:44, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>>\n>> I'll see to further reviewing 0004 and 0005 when I have additional time.\n> \n> Some initial comments on 0004:\n> \n>> +/*\n>> + * brin_minmax_ranges\n>> + * Load the BRIN ranges and sort them.\n>> + */\n>> +Datum\n>> +brin_minmax_ranges(PG_FUNCTION_ARGS)\n>> +{\n> \n> Like in 0001, this seems to focus on only single columns. Can't we put\n> the scan and sort infrastructure in brin, and put the weight- and\n> compare-operators in the opclasses? I.e.\n> brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min and\n> brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max, and a\n> brin_minmax_compare(order, order) -> int? I'm thinking of something\n> similar to GIST's distance operators, which would make implementing\n> ordering by e.g. `pointcolumn <-> (1, 2)::point` implementable in the\n> brin infrastructure.\n> \n> Note: One big reason I don't really like the current\n> brin_minmax_ranges (and the analyze code in 0001) is because it breaks\n> the operatorclass-vs-index abstraction layer. 
Operator classes don't\n> (shouldn't) know or care about which attribute number they have, nor\n> what the index does with the data.\n> Scanning the index is not something that I expect the operator class\n> to do, I expect that the index code organizes the scan, and forwards\n> the data to the relevant operator classes.\n> Scanning the index N times for N attributes can be excused if there\n> are good reasons, but I'd rather have that decision made in the\n> index's core code rather than at the design level.\n> \n\nI think you're right. It'd be more appropriate to have most of the core\nscanning logic in brin.c, and then delegate only some minor decisions to\nthe opclass. Like, comparisons, extraction of min/max from ranges etc.\n\nHowever, it's not quite clear to me what you mean by the weight- and\ncompare-operators? That is, what are\n\n - brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min\n - brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max\n - brin_minmax_compare(order, order) -> int\n\nsupposed to do? Or what does \"PG_FUNCTION_ARGS=brintuple\" mean?\n\nIn principle we just need a procedure that tells us min/max for a given\npage range - I guess that's what the minorder/maxorder functions do? But\nwhy would we need the compare one? We're comparing by the known data\ntype, so we can just delegate the comparison to that, no?\n\nAlso, the existence of these opclass procedures should be enough to\nidentify the opclasses which can support this.\n\n>> +/*\n>> + * XXX Does it make sense (is it possible) to have a sort by more than one\n>> + * column, using a BRIN index?\n>> + */\n> \n> Yes, even if only one prefix column is included in the BRIN index\n> (e.g. `company` in `ORDER BY company, date`, the tuplesort with table\n> tuples can add additional sorting without first reading the whole\n> table, potentially (likely) reducing the total resource usage of the\n> query. 
That utilizes the same idea as incremental sorts, but with the\n> caveat that the input sort order is approximately likely but not at\n> all guaranteed. So, even if the range sort is on a single index\n> column, we can still do the table's tuplesort on all ORDER BY\n> attributes, as long as a prefix of ORDER BY columns are included in\n> the BRIN index.\n> \n\nThat's not quite what I meant, though. When I mentioned sorting by more\nthan one column, I meant using a multi-column BRIN index on those\ncolumns. Something like this:\n\n  CREATE TABLE t (a int, b int);\n  INSERT INTO t ...\n  CREATE INDEX ON t USING brin (a,b);\n\n  SELECT * FROM t ORDER BY a, b;\n\nNow, what I think you described is using BRIN to sort by \"a\", and then\ndo incremental sort for \"b\". What I had in mind is whether it's possible\nto use BRIN to sort by \"b\" too.\n\nI was suspecting it might be made to work, but now that I think about it\nagain it probably can't - BRIN pretty much sorts the columns separately,\nit's not like having an ordering by (a,b) - first by \"a\", then \"b\".\n\n>> + /*\n>> + * XXX We can be a bit smarter for LIMIT queries - once we\n>> + * know we have more rows in the tuplesort than we need to\n>> + * output, we can stop spilling - those rows are not going\n>> + * to be needed. 
We can discard the tuplesort (no need to\n>> + * respill) and stop spilling.\n>> + */\n> \n> Shouldn't that be \"discard the tuplestore\"?\n> \n\nYeah, definitely.\n\n>> +#define BRIN_PROCNUM_RANGES 12 /* optional */\n> \n> It would be useful to add documentation for this in this patch.\n> \n\nRight, this should be documented in doc/src/sgml/brin.sgml.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 7 Jul 2023 20:26:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Fri, 7 Jul 2023 at 20:26, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I finally had time to look at this patch again. There's a bit of bitrot,\n> so here's a rebased version (no other changes).\n\nThanks!\n\n> On 2/27/23 16:40, Matthias van de Meent wrote:\n> > Some initial comments on 0004:\n> >\n> >> +/*\n> >> + * brin_minmax_ranges\n> >> + * Load the BRIN ranges and sort them.\n> >> + */\n> >> +Datum\n> >> +brin_minmax_ranges(PG_FUNCTION_ARGS)\n> >> +{\n> >\n> > Like in 0001, this seems to focus on only single columns. Can't we put\n> > the scan and sort infrastructure in brin, and put the weight- and\n> > compare-operators in the opclasses? I.e.\n> > brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min and\n> > brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max, and a\n> > brin_minmax_compare(order, order) -> int? I'm thinking of something\n> > similar to GIST's distance operators, which would make implementing\n> > ordering by e.g. `pointcolumn <-> (1, 2)::point` implementable in the\n> > brin infrastructure.\n> >\n>\n> However, it's not quite clear to me is what you mean by the weight- and\n> compare-operators? 
That is, what are\n>\n> - brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min\n> - brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max\n> - brin_minmax_compare(order, order) -> int\n>\n> supposed to do? Or what does \"PG_FUNCTION_ARGS=brintuple\" mean?\n\n_minorder/_maxorder is for extracting the minimum/maximum relative\norder of each range, used for ASC/DESC sorting of operator results\n(e.g. to support ORDER BY <->(box_column, '(1,2)'::point) DESC).\nPG_FUNCTION_ARGS is mentioned because of PG calling conventions;\nthough I did forget to describe the second operator argument for the\ndistance function. We might also want to use only one such \"order\nextraction function\" with DESC/ASC indicated by an argument.\n\n> In principle we just need a procedure that tells us min/max for a given\n> page range - I guess that's what the minorder/maxorder functions do? But\n> why would we need the compare one? We're comparing by the known data\n> type, so we can just delegate the comparison to that, no?\n\nIs there a comparison function for any custom orderable type that we\ncan just use? GIST distance ordering uses floats, and I don't quite\nlike that from a user perspective, as it makes ordering operations\nimprecise. I'd rather allow (but discourage) any type with its own\ncompare function.\n\n> Also, the existence of these opclass procedures should be enough to\n> identify the opclasses which can support this.\n\nAgreed.\n\n> >> +/*\n> >> + * XXX Does it make sense (is it possible) to have a sort by more than one\n> >> + * column, using a BRIN index?\n> >> + */\n> >\n> > Yes, even if only one prefix column is included in the BRIN index\n> > (e.g. `company` in `ORDER BY company, date`, the tuplesort with table\n> > tuples can add additional sorting without first reading the whole\n> > table, potentially (likely) reducing the total resource usage of the\n> > query. 
That utilizes the same idea as incremental sorts, but with the\n> > caveat that the input sort order is approximately likely but not at\n> > all guaranteed. So, even if the range sort is on a single index\n> > column, we can still do the table's tuplesort on all ORDER BY\n> > attributes, as long as a prefix of ORDER BY columns are included in\n> > the BRIN index.\n> >\n>\n> That's now quite what I meant, though. When I mentioned sorting by more\n> than one column, I meant using a multi-column BRIN index on those\n> columns. Something like this:\n>\n> CREATE TABLE t (a int, b int);\n> INSERT INTO t ...\n> CREATE INDEX ON t USING brin (a,b);\n>\n> SELECT * FROM t ORDER BY a, b;\n>\n> Now, what I think you described is using BRIN to sort by \"a\", and then\n> do incremental sort for \"b\". What I had in mind is whether it's possible\n> to use BRIN to sort by \"b\" too.\n>\n> I was suspecting it might be made to work, but now that I think about it\n> again it probably can't - BRIN pretty much sorts the columns separately,\n> it's not like having an ordering by (a,b) - first by \"a\", then \"b\".\n\nI imagine it would indeed be limited to an extremely small subset of\ncases, and probably not worth the effort to implement in an initial\nversion.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 10 Jul 2023 12:22:38 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 7/10/23 12:22, Matthias van de Meent wrote:\n> On Fri, 7 Jul 2023 at 20:26, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> I finally had time to look at this patch again. 
There's a bit of bitrot,\n>> so here's a rebased version (no other changes).\n> \n> Thanks!\n> \n>> On 2/27/23 16:40, Matthias van de Meent wrote:\n>>> Some initial comments on 0004:\n>>>\n>>>> +/*\n>>>> + * brin_minmax_ranges\n>>>> + * Load the BRIN ranges and sort them.\n>>>> + */\n>>>> +Datum\n>>>> +brin_minmax_ranges(PG_FUNCTION_ARGS)\n>>>> +{\n>>>\n>>> Like in 0001, this seems to focus on only single columns. Can't we put\n>>> the scan and sort infrastructure in brin, and put the weight- and\n>>> compare-operators in the opclasses? I.e.\n>>> brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min and\n>>> brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max, and a\n>>> brin_minmax_compare(order, order) -> int? I'm thinking of something\n>>> similar to GIST's distance operators, which would make implementing\n>>> ordering by e.g. `pointcolumn <-> (1, 2)::point` implementable in the\n>>> brin infrastructure.\n>>>\n>>\n>> However, it's not quite clear to me is what you mean by the weight- and\n>> compare-operators? That is, what are\n>>\n>> - brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min\n>> - brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max\n>> - brin_minmax_compare(order, order) -> int\n>>\n>> supposed to do? Or what does \"PG_FUNCTION_ARGS=brintuple\" mean?\n> \n> _minorder/_maxorder is for extracting the minimum/maximum relative\n> order of each range, used for ASC/DESC sorting of operator results\n> (e.g. to support ORDER BY <->(box_column, '(1,2)'::point) DESC).\n> PG_FUNCTION_ARGS is mentioned because of PG calling conventions;\n> though I did forget to describe the second operator argument for the\n> distance function. We might also want to use only one such \"order\n> extraction function\" with DESC/ASC indicated by an argument.\n> \n\nI'm still not sure I understand what \"minimum/maximum relative order\"\nis. Isn't it the same as returning min/max value that can appear in the\nrange? 
Although, that wouldn't work for points/boxes.\n\n>> In principle we just need a procedure that tells us min/max for a given\n>> page range - I guess that's what the minorder/maxorder functions do? But\n>> why would we need the compare one? We're comparing by the known data\n>> type, so we can just delegate the comparison to that, no?\n> \n> Is there a comparison function for any custom orderable type that we\n> can just use? GIST distance ordering uses floats, and I don't quite\n> like that from a user perspective, as it makes ordering operations\n> imprecise. I'd rather allow (but discourage) any type with its own\n> compare function.\n> \n\nI haven't really thought about geometric types, just about minmax and\nminmax-multi. It's not clear to me what the benefit for these types would be.\nI mean, we can probably sort points lexicographically, but is anyone\ndoing that in queries? It seems useless for order by distance.\n\n>> Also, the existence of these opclass procedures should be enough to\n>> identify the opclasses which can support this.\n> \n> Agreed.\n> \n>>>> +/*\n>>>> + * XXX Does it make sense (is it possible) to have a sort by more than one\n>>>> + * column, using a BRIN index?\n>>>> + */\n>>>\n>>> Yes, even if only one prefix column is included in the BRIN index\n>>> (e.g. `company` in `ORDER BY company, date`, the tuplesort with table\n>>> tuples can add additional sorting without first reading the whole\n>>> table, potentially (likely) reducing the total resource usage of the\n>>> query. That utilizes the same idea as incremental sorts, but with the\n>>> caveat that the input sort order is approximately likely but not at\n>>> all guaranteed. So, even if the range sort is on a single index\n>>> column, we can still do the table's tuplesort on all ORDER BY\n>>> attributes, as long as a prefix of ORDER BY columns are included in\n>>> the BRIN index.\n>>>\n>>\n>> That's now quite what I meant, though. 
When I mentioned sorting by more\n>> than one column, I meant using a multi-column BRIN index on those\n>> columns. Something like this:\n>>\n>> CREATE TABLE t (a int, b int);\n>> INSERT INTO t ...\n>> CREATE INDEX ON t USING brin (a,b);\n>>\n>> SELECT * FROM t ORDER BY a, b;\n>>\n>> Now, what I think you described is using BRIN to sort by \"a\", and then\n>> do incremental sort for \"b\". What I had in mind is whether it's possible\n>> to use BRIN to sort by \"b\" too.\n>>\n>> I was suspecting it might be made to work, but now that I think about it\n>> again it probably can't - BRIN pretty much sorts the columns separately,\n>> it's not like having an ordering by (a,b) - first by \"a\", then \"b\".\n> \n> I imagine it would indeed be limited to an extremely small subset of\n> cases, and probably not worth the effort to implement in an initial\n> version.\n> \n\nOK, let's stick to order by a single column.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Jul 2023 13:43:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Mon, 10 Jul 2023 at 13:43, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 7/10/23 12:22, Matthias van de Meent wrote:\n>> On Fri, 7 Jul 2023 at 20:26, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>> However, it's not quite clear to me is what you mean by the weight- and\n>>> compare-operators? That is, what are\n>>>\n>>> - brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min\n>>> - brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max\n>>> - brin_minmax_compare(order, order) -> int\n>>>\n>>> supposed to do? 
Or what does \"PG_FUNCTION_ARGS=brintuple\" mean?\n>>\n>> _minorder/_maxorder is for extracting the minimum/maximum relative\n>> order of each range, used for ASC/DESC sorting of operator results\n>> (e.g. to support ORDER BY <->(box_column, '(1,2)'::point) DESC).\n>> PG_FUNCTION_ARGS is mentioned because of PG calling conventions;\n>> though I did forget to describe the second operator argument for the\n>> distance function. We might also want to use only one such \"order\n>> extraction function\" with DESC/ASC indicated by an argument.\n>>\n>\n> I'm still not sure I understand what \"minimum/maximum relative order\"\n> is. Isn't it the same as returning min/max value that can appear in the\n> range? Although, that wouldn't work for points/boxes.\n\nKind of. For single-dimensional opclasses (minmax, minmax_multi) we\nonly need to extract the normal min/max values for ASC/DESC sorts,\nwhich are readily available in the summary. But for multi-dimensional\nand distance searches (nearest neighbour) we need to calculate the\ndistance between the indexed value(s) and the origin value to compare\nthe summary against, and the order would thus be asc/desc on distance\n- a distance which may not be precisely represented by float types -\nthus 'relative order' with its own order operation.\n\n>>> In principle we just need a procedure that tells us min/max for a given\n>>> page range - I guess that's what the minorder/maxorder functions do? But\n>>> why would we need the compare one? We're comparing by the known data\n>>> type, so we can just delegate the comparison to that, no?\n>>\n>> Is there a comparison function for any custom orderable type that we\n>> can just use? GIST distance ordering uses floats, and I don't quite\n>> like that from a user perspective, as it makes ordering operations\n>> imprecise. 
I'd rather allow (but discourage) any type with its own\n>> compare function.\n>>\n>\n> I haven't really thought about geometric types, just about minmax and\n> minmax-multi. It's not clear to me what the benefit for these types be.\n> I mean, we can probably sort points lexicographically, but is anyone\n> doing that in queries? It seems useless for order by distance.\n\nYes, that's why you would sort them by distance, where the distance is\ngenerated by the opclass as min/max distance between the summary and\nthe distance's origin, and then inserted into the tuplesort.\n\n(previously)\n>>> I finally had time to look at this patch again. There's a bit of bitrot,\n>>> so here's a rebased version (no other changes).\n\nIt seems like you forgot to attach the rebased patch, so unless you're\nactively working on updating the patchset right now, could you send\nthe rebase to make CFBot happy?\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Mon, 10 Jul 2023 14:38:11 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On 7/10/23 14:38, Matthias van de Meent wrote:\n> On Mon, 10 Jul 2023 at 13:43, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 7/10/23 12:22, Matthias van de Meent wrote:\n>>> On Fri, 7 Jul 2023 at 20:26, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>>> However, it's not quite clear to me is what you mean by the weight- and\n>>>> compare-operators? That is, what are\n>>>>\n>>>> - brin_minmax_minorder(PG_FUNCTION_ARGS=brintuple) -> range.min\n>>>> - brin_minmax_maxorder(PG_FUNCTION_ARGS=brintuple) -> range.max\n>>>> - brin_minmax_compare(order, order) -> int\n>>>>\n>>>> supposed to do? 
Or what does \"PG_FUNCTION_ARGS=brintuple\" mean?\n>>>\n>>> _minorder/_maxorder is for extracting the minimum/maximum relative\n>>> order of each range, used for ASC/DESC sorting of operator results\n>>> (e.g. to support ORDER BY <->(box_column, '(1,2)'::point) DESC).\n>>> PG_FUNCTION_ARGS is mentioned because of PG calling conventions;\n>>> though I did forget to describe the second operator argument for the\n>>> distance function. We might also want to use only one such \"order\n>>> extraction function\" with DESC/ASC indicated by an argument.\n>>>\n>>\n>> I'm still not sure I understand what \"minimum/maximum relative order\"\n>> is. Isn't it the same as returning min/max value that can appear in the\n>> range? Although, that wouldn't work for points/boxes.\n> \n> Kind of. For single-dimensional opclasses (minmax, minmax_multi) we\n> only need to extract the normal min/max values for ASC/DESC sorts,\n> which are readily available in the summary. But for multi-dimensional\n> and distance searches (nearest neighbour) we need to calculate the\n> distance between the indexed value(s) and the origin value to compare\n> the summary against, and the order would thus be asc/desc on distance\n> - a distance which may not be precisely represented by float types -\n> thus 'relative order' with its own order operation.\n> \n\nCan you give some examples of such data / queries, and how would it\nleverage the BRIN sort stuff?\n\nFor distance searches, I imagine this as data indexed by BRIN inclusion\nopclass, which creates a bounding box. We could return closest/furthest\npoint on the bounding box (from the point used in the query). Which\nseems a bit like a R-tree ...\n\nBut I have no idea what would this do for multi-dimensional searches, or\nwhat would those searches do? How would you sort such data other than\nlexicographically? 
Which I think is covered by the current BRIN Sort,\nbecause the data is either stored as multiple columns, in which case we\nuse the BRIN on the first column. Or it's indexed using BRIN minmax as a\ntuple of values, but then it's sorted lexicographically.\n\n\n>>>> In principle we just need a procedure that tells us min/max for a given\n>>>> page range - I guess that's what the minorder/maxorder functions do? But\n>>>> why would we need the compare one? We're comparing by the known data\n>>>> type, so we can just delegate the comparison to that, no?\n>>>\n>>> Is there a comparison function for any custom orderable type that we\n>>> can just use? GIST distance ordering uses floats, and I don't quite\n>>> like that from a user perspective, as it makes ordering operations\n>>> imprecise. I'd rather allow (but discourage) any type with its own\n>>> compare function.\n>>>\n>>\n>> I haven't really thought about geometric types, just about minmax and\n>> minmax-multi. It's not clear to me what the benefit for these types be.\n>> I mean, we can probably sort points lexicographically, but is anyone\n>> doing that in queries? It seems useless for order by distance.\n> \n> Yes, that's why you would sort them by distance, where the distance is\n> generated by the opclass as min/max distance between the summary and\n> the distance's origin, and then inserted into the tuplesort.\n> \n\nOK, so the query says \"order by distance from point X\" and we calculate\nthe min/max distance of values in a given page range.\n\n> (previously)\n>>>> I finally had time to look at this patch again. There's a bit of bitrot,\n>>>> so here's a rebased version (no other changes).\n> \n> It seems like you forgot to attach the rebased patch, so unless you're\n> actively working on updating the patchset right now, could you send\n> the rebase to make CFBot happy?\n> \n\nYeah, I forgot about the attachment. 
So here it is.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 10 Jul 2023 17:09:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Mon, 10 Jul 2023 at 17:09, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 7/10/23 14:38, Matthias van de Meent wrote:\n>> Kind of. For single-dimensional opclasses (minmax, minmax_multi) we\n>> only need to extract the normal min/max values for ASC/DESC sorts,\n>> which are readily available in the summary. But for multi-dimensional\n>> and distance searches (nearest neighbour) we need to calculate the\n>> distance between the indexed value(s) and the origin value to compare\n>> the summary against, and the order would thus be asc/desc on distance\n>> - a distance which may not be precisely represented by float types -\n>> thus 'relative order' with its own order operation.\n>>\n>\n> Can you give some examples of such data / queries, and how would it\n> leverage the BRIN sort stuff?\n\nOrder by distance would be `ORDER BY box <-> '(1, 2)'::point ASC`, and\nthe opclass would then decide that `<->(box, point) ASC` means it has\nto return the closest distance from the point to the summary, for some\nmeasure of 'distance' (this case L2, <#> other types, etc.). For DESC,\nthat would return the distance from `'(1,2)'::point` to the furthest\nedge of the summary away from that point. Etc.\n\n> For distance searches, I imagine this as data indexed by BRIN inclusion\n> opclass, which creates a bounding box. We could return closest/furthest\n> point on the bounding box (from the point used in the query). Which\n> seems a bit like a R-tree ...\n\nKind of; it would allow us to utilize such orderings without the\nexpensive 1 tuple = 1 index entry and without scanning the full table\nbefore getting results. 
No tree involved, just a sequential scan on\nthe index to allow some sketch-based pre-sort on the data. Again, this\nwould work similar to how GiST's internal pages work: each downlink in\nGiST contains a summary of the entries on the downlinked page, and\ndistance searches use a priority queue where the priority is the\ndistance of the opclass-provided distance operator - lower distance\nmeans higher priority. For BRIN, we'd have to build a priority queue\nfor the whole table at once, but presorting table sections is part of\nthe design of BRIN sort, right?\n\n> But I have no idea what would this do for multi-dimensional searches, or\n> what would those searches do? How would you sort such data other than\n> lexicographically? Which I think is covered by the current BRIN Sort,\n> because the data is either stored as multiple columns, in which case we\n> use the BRIN on the first column. Or it's indexed using BRIN minmax as a\n> tuple of values, but then it's sorted lexicographically.\n\nYes, just any BRIN summary that allows distance operators and the like\nshould be enough. MINMAX is easy to understand, and box inclusion is\nIMO also fairly easy to understand.\n\n>>> I haven't really thought about geometric types, just about minmax and\n>>> minmax-multi. It's not clear to me what the benefit for these types be.\n>>> I mean, we can probably sort points lexicographically, but is anyone\n>>> doing that in queries? 
It seems useless for order by distance.\n>>\n>> Yes, that's why you would sort them by distance, where the distance is\n>> generated by the opclass as min/max distance between the summary and\n>> the distance's origin, and then inserted into the tuplesort.\n>>\n>\n> OK, so the query says \"order by distance from point X\" and we calculate\n> the min/max distance of values in a given page range.\n\nYes, and because it's BRIN that's an approximation, which should\ngenerally be fine.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Mon, 10 Jul 2023 18:18:12 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 7/10/23 18:18, Matthias van de Meent wrote:\n> On Mon, 10 Jul 2023 at 17:09, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 7/10/23 14:38, Matthias van de Meent wrote:\n>>> Kind of. For single-dimensional opclasses (minmax, minmax_multi) we\n>>> only need to extract the normal min/max values for ASC/DESC sorts,\n>>> which are readily available in the summary. But for multi-dimensional\n>>> and distance searches (nearest neighbour) we need to calculate the\n>>> distance between the indexed value(s) and the origin value to compare\n>>> the summary against, and the order would thus be asc/desc on distance\n>>> - a distance which may not be precisely represented by float types -\n>>> thus 'relative order' with its own order operation.\n>>>\n>>\n>> Can you give some examples of such data / queries, and how would it\n>> leverage the BRIN sort stuff?\n> \n> Order by distance would be `ORDER BY box <-> '(1, 2)'::point ASC`, and\n> the opclass would then decide that `<->(box, point) ASC` means it has\n> to return the closest distance from the point to the summary, for some\n> measure of 'distance' (this case L2, <#> other types, etc.). 
For DESC,\n> that would return the distance from `'(1,2)'::point` to the furthest\n> edge of the summary away from that point. Etc.\n> \n\nThanks.\n\n>> For distance searches, I imagine this as data indexed by BRIN inclusion\n>> opclass, which creates a bounding box. We could return closest/furthest\n>> point on the bounding box (from the point used in the query). Which\n>> seems a bit like a R-tree ...\n> \n> Kind of; it would allow us to utilize such orderings without the\n> expensive 1 tuple = 1 index entry and without scanning the full table\n> before getting results. No tree involved, just a sequential scan on\n> the index to allow some sketch-based pre-sort on the data. Again, this\n> would work similar to how GiST's internal pages work: each downlink in\n> GiST contains a summary of the entries on the downlinked page, and\n> distance searches use a priority queue where the priority is the\n> distance of the opclass-provided distance operator - lower distance\n> means higher priority.\n\nYes, that's roughly how I understood this too - a tradeoff that won't\ngive the same performance as GiST, but much smaller and cheaper to maintain.\n\n> For BRIN, we'd have to build a priority queue\n> for the whole table at once, but presorting table sections is part of\n> the design of BRIN sort, right?\n\nYes, that's kinda the whole point of BRIN sort.\n\n> \n>> But I have no idea what would this do for multi-dimensional searches, or\n>> what would those searches do? How would you sort such data other than\n>> lexicographically? Which I think is covered by the current BRIN Sort,\n>> because the data is either stored as multiple columns, in which case we\n>> use the BRIN on the first column. 
Or it's indexed using BRIN minmax as a\n>> tuple of values, but then it's sorted lexicographically.\n> \n> Yes, just any BRIN summary that allows distance operators and the like\n> should be enough MINMAX is easy to understand, and box inclusion are\n> IMO also fairly easy to understand.\n> \n\nTrue. If minmax is interpreted as inclusion with a simple 1D points, it\nkinda does the same thing. (Of course, minmax work with data types that\ndon't have distances, but there's similarity.)\n\n>>>> I haven't really thought about geometric types, just about minmax and\n>>>> minmax-multi. It's not clear to me what the benefit for these types be.\n>>>> I mean, we can probably sort points lexicographically, but is anyone\n>>>> doing that in queries? It seems useless for order by distance.\n>>>\n>>> Yes, that's why you would sort them by distance, where the distance is\n>>> generated by the opclass as min/max distance between the summary and\n>>> the distance's origin, and then inserted into the tuplesort.\n>>>\n>>\n>> OK, so the query says \"order by distance from point X\" and we calculate\n>> the min/max distance of values in a given page range.\n> \n> Yes, and because it's BRIN that's an approximation, which should\n> generally be fine.\n> \n\nApproximation in what sense? My understanding was we'd get a range of\ndistances that we know covers all rows in that range. 
So the results\nshould be accurate, no?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 10 Jul 2023 22:04:40 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Mon, 10 Jul 2023 at 22:04, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 7/10/23 18:18, Matthias van de Meent wrote:\n>> On Mon, 10 Jul 2023 at 17:09, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>> On 7/10/23 14:38, Matthias van de Meent wrote:\n>>>>> I haven't really thought about geometric types, just about minmax and\n>>>>> minmax-multi. It's not clear to me what the benefit for these types be.\n>>>>> I mean, we can probably sort points lexicographically, but is anyone\n>>>>> doing that in queries? It seems useless for order by distance.\n>>>>\n>>>> Yes, that's why you would sort them by distance, where the distance is\n>>>> generated by the opclass as min/max distance between the summary and\n>>>> the distance's origin, and then inserted into the tuplesort.\n>>>>\n>>>\n>>> OK, so the query says \"order by distance from point X\" and we calculate\n>>> the min/max distance of values in a given page range.\n>>\n>> Yes, and because it's BRIN that's an approximation, which should\n>> generally be fine.\n>>\n>\n> Approximation in what sense? My understanding was we'd get a range of\n> distances that we know covers all rows in that range. So the results\n> should be accurate, no?\n\nThe distance is going to be accurate only to the degree that the\nsummary can produce accurate distances for the datapoints it\nrepresents. 
That can be quite imprecise due to the nature of the\ncontained datapoints: a summary of the points (-1, -1) and (1, 1) will\nhave a minimum distance of 0 to the origin, where the summary (-1, 0)\nand (-1, 0.5) would have a much more accurate distance of 1. The point\nI was making is that the summary can only approximate the distance,\nand that approximation is fine w.r.t. the BRIN sort algorithm.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech/)\n\n\n", "msg_date": "Tue, 11 Jul 2023 13:20:11 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\n\nOn 7/11/23 13:20, Matthias van de Meent wrote:\n> On Mon, 10 Jul 2023 at 22:04, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 7/10/23 18:18, Matthias van de Meent wrote:\n>>> On Mon, 10 Jul 2023 at 17:09, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> On 7/10/23 14:38, Matthias van de Meent wrote:\n>>>>>> I haven't really thought about geometric types, just about minmax and\n>>>>>> minmax-multi. It's not clear to me what the benefit for these types be.\n>>>>>> I mean, we can probably sort points lexicographically, but is anyone\n>>>>>> doing that in queries? It seems useless for order by distance.\n>>>>>\n>>>>> Yes, that's why you would sort them by distance, where the distance is\n>>>>> generated by the opclass as min/max distance between the summary and\n>>>>> the distance's origin, and then inserted into the tuplesort.\n>>>>>\n>>>>\n>>>> OK, so the query says \"order by distance from point X\" and we calculate\n>>>> the min/max distance of values in a given page range.\n>>>\n>>> Yes, and because it's BRIN that's an approximation, which should\n>>> generally be fine.\n>>>\n>>\n>> Approximation in what sense? My understanding was we'd get a range of\n>> distances that we know covers all rows in that range. 
So the results\n>> should be accurate, no?\n> \n> The distance is going to be accurate only to the degree that the\n> summary can produce accurate distances for the datapoints it\n> represents. That can be quite imprecise due to the nature of the\n> contained datapoints: a summary of the points (-1, -1) and (1, 1) will\n> have a minimum distance of 0 to the origin, where the summary (-1, 0)\n> and (-1, 0.5) would have a much more accurate distance of 1.\n\nUmmm, I'm probably missing something, or maybe my mental model of this\nis just wrong, but why would the distance for the second summary be more\naccurate? Or what does \"more accurate\" mean?\n\nIs that about the range of distances for the summary? For the first\nrange the summary is a bounding box [(-1,1), (1,1)] so all we know the\npoints may have distance in range [0, sqrt(2)]. While for the second\nsummary it's [1, sqrt(1.25)].\n\n> The point I was making is that the summary can only approximate the\n> distance, and that approximation is fine w.r.t. the BRIN sort\n> algorithm.\n> \n\nI think as long as the approximation (whatever it means) does not cause\ndifferences in results (compared to not using an index), it's OK.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Jul 2023 16:21:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Fri, 14 Jul 2023 at 16:21, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n>\n> On 7/11/23 13:20, Matthias van de Meent wrote:\n>> On Mon, 10 Jul 2023 at 22:04, Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>> Approximation in what sense? My understanding was we'd get a range of\n>>> distances that we know covers all rows in that range. 
So the results\n>>> should be accurate, no?\n>>\n>> The distance is going to be accurate only to the degree that the\n>> summary can produce accurate distances for the datapoints it\n>> represents. That can be quite imprecise due to the nature of the\n>> contained datapoints: a summary of the points (-1, -1) and (1, 1) will\n>> have a minimum distance of 0 to the origin, where the summary (-1, 0)\n>> and (-1, 0.5) would have a much more accurate distance of 1.\n>\n> Ummm, I'm probably missing something, or maybe my mental model of this\n> is just wrong, but why would the distance for the second summary be more\n> accurate? Or what does \"more accurate\" mean?\n>\n> Is that about the range of distances for the summary? For the first\n> range the summary is a bounding box [(-1,1), (1,1)] so all we know the\n> points may have distance in range [0, sqrt(2)]. While for the second\n> summary it's [1, sqrt(1.25)].\n\nYes; I was trying to refer to the difference between what results you\nget from the summary vs what results you get from the actual\ndatapoints: In this case, for finding points which are closest to the\norigin, the first bounding box has a less accurate estimate than the\nsecond.\n\n> > The point I was making is that the summary can only approximate the\n> > distance, and that approximation is fine w.r.t. 
the BRIN sort\n> > algorithm.\n> >\n>\n> I think as long as the approximation (whatever it means) does not cause\n> differences in results (compared to not using an index), it's OK.\n\nAgreed.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 14 Jul 2023 16:42:43 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 7/14/23 16:42, Matthias van de Meent wrote:\n> On Fri, 14 Jul 2023 at 16:21, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>>\n>>\n>> On 7/11/23 13:20, Matthias van de Meent wrote:\n>>> On Mon, 10 Jul 2023 at 22:04, Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> Approximation in what sense? My understanding was we'd get a range of\n>>>> distances that we know covers all rows in that range. So the results\n>>>> should be accurate, no?\n>>>\n>>> The distance is going to be accurate only to the degree that the\n>>> summary can produce accurate distances for the datapoints it\n>>> represents. That can be quite imprecise due to the nature of the\n>>> contained datapoints: a summary of the points (-1, -1) and (1, 1) will\n>>> have a minimum distance of 0 to the origin, where the summary (-1, 0)\n>>> and (-1, 0.5) would have a much more accurate distance of 1.\n>>\n>> Ummm, I'm probably missing something, or maybe my mental model of this\n>> is just wrong, but why would the distance for the second summary be more\n>> accurate? Or what does \"more accurate\" mean?\n>>\n>> Is that about the range of distances for the summary? For the first\n>> range the summary is a bounding box [(-1,1), (1,1)] so all we know the\n>> points may have distance in range [0, sqrt(2)]. 
While for the second\n>> summary it's [1, sqrt(1.25)].\n> \n> Yes; I was trying to refer to the difference between what results you\n> get from the summary vs what results you get from the actual\n> datapoints: In this case, for finding points which are closest to the\n> origin, the first bounding box has a less accurate estimate than the\n> second.\n> \n\nOK. I think regular minmax indexes have a similar issue with\nnon-distance ordering, because we don't know if the min/max values are\nstill in the page range (or deleted/updated).\n\n>>> The point I was making is that the summary can only approximate the\n>>> distance, and that approximation is fine w.r.t. the BRIN sort\n>>> algorithm.\n>>>\n>>\n>> I think as long as the approximation (whatever it means) does not cause\n>> differences in results (compared to not using an index), it's OK.\n> \n\nI haven't written any code yet, but I think if we don't try to find the\nexact min/max distances for the summary (e.g. by calculating the closest\npoint exactly) but rather \"estimates\" that are guaranteed to bound the\nactual min/max, that's good enough for the sorting.\n\nFor the max, this probably is not an issue, as we can just calculate\ndistance for the corners and use a maximum of that. At least with\nreasonable euclidean distance ... 
in 2D I'm imagining the bounding box\nsummary as a rectangle, with the \"max distance\" being a minimum radius\nof a circle containing it (the rectangle).\n\nFor min we're looking for the largest radius not intersecting with the\nbox, which seems harder to calculate I think.\n\nHowever, now that I'm thinking about it - don't (SP-)GiST indexes\nalready do pretty much exactly this?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Jul 2023 17:51:45 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "Hello,\n\n> Parallel version is not supported, but I think it should be possible.\n\n@Tomas are you working on this ? If not, I would like to give it a try.\n\n> static void\n> AssertCheckRanges(BrinSortState *node)\n> {\n> #ifdef USE_ASSERT_CHECKING\n>\n> #endif\n> }\n\nI guess it should not be empty at the ongoing development stage.\n\nAttached a small modification of the patch with a draft of the docs.\n\nRegards,\nSergey Dudoladov", "msg_date": "Wed, 2 Aug 2023 17:25:20 +0200", "msg_from": "Sergey Dudoladov <sergey.dudoladov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "\n\nOn 8/2/23 17:25, Sergey Dudoladov wrote:\n> Hello,\n> \n>> Parallel version is not supported, but I think it should be possible.\n> \n> @Tomas are you working on this ? If not, I would like to give it a try.\n> \n\nFeel free to try. Just keep it in a separate part/patch, to make it\neasier to combine the work later.\n\n>> static void\n>> AssertCheckRanges(BrinSortState *node)\n>> {\n>> #ifdef USE_ASSERT_CHECKING\n>>\n>> #endif\n>> }\n> \n> I guess it should not be empty at the ongoing development stage.\n> \n> Attached a small modification of the patch with a draft of the docs.\n> \n\nThanks. 
FWIW it's generally better to always post the whole patch\nseries, otherwise the cfbot gets confused as it's unable to combine\nstuff from different messages.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 2 Aug 2023 18:04:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Wed, 2 Aug 2023 at 21:34, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 8/2/23 17:25, Sergey Dudoladov wrote:\n> > Hello,\n> >\n> >> Parallel version is not supported, but I think it should be possible.\n> >\n> > @Tomas are you working on this ? If not, I would like to give it a try.\n> >\n>\n> Feel free to try. Just keep it in a separate part/patch, to make it\n> easier to combine the work later.\n>\n> >> static void\n> >> AssertCheckRanges(BrinSortState *node)\n> >> {\n> >> #ifdef USE_ASSERT_CHECKING\n> >>\n> >> #endif\n> >> }\n> >\n> > I guess it should not be empty at the ongoing development stage.\n> >\n> > Attached a small modification of the patch with a draft of the docs.\n> >\n>\n> Thanks. FWIW it's generally better to always post the whole patch\n> series, otherwise the cfbot gets confused as it's unable to combine\n> stuff from different messages.\n\nAre we planning to take this patch forward? It has been nearly 5\nmonths since the last discussion on this. 
If the interest has gone\ndown and if there are no plans to handle this I'm thinking of\nreturning this commitfest entry in this commitfest and can be opened\nwhen there is more interest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 21 Jan 2024 07:32:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" }, { "msg_contents": "On Sun, 21 Jan 2024 at 07:32, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 2 Aug 2023 at 21:34, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On 8/2/23 17:25, Sergey Dudoladov wrote:\n> > > Hello,\n> > >\n> > >> Parallel version is not supported, but I think it should be possible.\n> > >\n> > > @Tomas are you working on this ? If not, I would like to give it a try.\n> > >\n> >\n> > Feel free to try. Just keep it in a separate part/patch, to make it\n> > easier to combine the work later.\n> >\n> > >> static void\n> > >> AssertCheckRanges(BrinSortState *node)\n> > >> {\n> > >> #ifdef USE_ASSERT_CHECKING\n> > >>\n> > >> #endif\n> > >> }\n> > >\n> > > I guess it should not be empty at the ongoing development stage.\n> > >\n> > > Attached a small modification of the patch with a draft of the docs.\n> > >\n> >\n> > Thanks. FWIW it's generally better to always post the whole patch\n> > series, otherwise the cfbot gets confused as it's unable to combine\n> > stuff from different messages.\n>\n> Are we planning to take this patch forward? It has been nearly 5\n> months since the last discussion on this. If the interest has gone\n> down and if there are no plans to handle this I'm thinking of\n> returning this commitfest entry in this commitfest and can be opened\n> when there is more interest.\n\nSince the author or no one else showed interest in taking it forward\nand the patch had no activity for more than 5 months, I have changed\nthe status to RWF. 
Feel free to add a new CF entry when someone is\nplanning to resume work more actively by starting off with a rebased\nversion.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 2 Feb 2024 00:17:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: Using BRIN indexes for sorted output" } ]
[ { "msg_contents": "Hi\n\nI had recently updated the M1 mini that I use to test macOS stuff on. Just\ntried to test a change on it and was greeted with a lot of\nwarnings. Apparently the update brought in a newer SDK (MacOSX13.0.sdk), even\nthough the OS is still Monterey.\n\nOne class of warnings is specific to meson (see further down), but the other\nis common between autoconf and meson:\n\n[24/2258] Compiling C object src/port/libpgport_srv.a.p/snprintf.c.o\n../../../src/postgres/src/port/snprintf.c:1002:11: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]\n vallen = sprintf(convert, \"%p\", value);\n ^\n/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/stdio.h:188:1: note: 'sprintf' has been explicitly marked deprecated here\n__deprecated_msg(\"This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead.\")\n^\n/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/sys/cdefs.h:215:48: note: expanded from macro '__deprecated_msg'\n #define __deprecated_msg(_msg) __attribute__((__deprecated__(_msg)))\n ^\n\nthe same warning is repeated for a bunch of different lines in the same file,\nand then over the three versions of libpgport that we build.\n\nThis is pretty noisy.\n\n\nThe meson specific warning is\n[972/1027] Linking target src/backend/replication/libpqwalreceiver/libpqwalreceiver.dylib\nld: warning: -undefined dynamic_lookup may not work with chained fixups\n\nWhich is caused by meson defaulting to -Wl,-undefined,dynamic_lookup for\nmodules. But we don't need that because we use -bundle_loader. 
Adding\n-Wl,-undefined,error as in the attached fixes it.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 15 Oct 2022 14:19:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "macos ventura SDK spews warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One class of warnings is specific to meson (see further down), but the other\n> is common between autoconf and meson:\n\n> [24/2258] Compiling C object src/port/libpgport_srv.a.p/snprintf.c.o\n> ../../../src/postgres/src/port/snprintf.c:1002:11: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]\n> vallen = sprintf(convert, \"%p\", value);\n> ^\n\nOriginally we used the platform's sprintf there because we couldn't\nrely on platforms having functional snprintf. That's no longer the case,\nI imagine, so we could just switch these calls over to snprintf. I'm\nkind of surprised that we haven't already been getting the likes of\nthis warning from, eg, OpenBSD.\n\nNote that the hundreds of other sprintf calls in our code are actually\ncalling pg_sprintf, which hasn't got a deprecation label. 
But the ones\nin snprintf.c are really trying to call the platform's version.\n\n(I wonder how much we ought to worry about bugs in the pg_sprintf usages?\nBut that's a matter for another day and another thread.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Oct 2022 17:56:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> [24/2258] Compiling C object src/port/libpgport_srv.a.p/snprintf.c.o\n>> ../../../src/postgres/src/port/snprintf.c:1002:11: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]\n\n> Originally we used the platform's sprintf there because we couldn't\n> rely on platforms having functional snprintf. That's no longer the case,\n> I imagine, so we could just switch these calls over to snprintf. I'm\n> kind of surprised that we haven't already been getting the likes of\n> this warning from, eg, OpenBSD.\n\nThe attached seems enough to silence it for me.\n\nShould we back-patch this? I suppose, but how far? It seems to fall\nunder the rules we established for back-patching into out-of-support\nbranches, ie it silences compiler warnings but shouldn't change any\nbehavior. 
But it feels like a bigger change than most of the other\nthings we've done that with.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 15 Oct 2022 18:47:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "Hi,\n\nOn 2022-10-15 18:47:16 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> [24/2258] Compiling C object src/port/libpgport_srv.a.p/snprintf.c.o\n> >> ../../../src/postgres/src/port/snprintf.c:1002:11: warning: 'sprintf' is deprecated: This function is provided for compatibility reasons only. Due to security concerns inherent in the design of sprintf(3), it is highly recommended that you use snprintf(3) instead. [-Wdeprecated-declarations]\n> \n> > Originally we used the platform's sprintf there because we couldn't\n> > rely on platforms having functional snprintf. That's no longer the case,\n> > I imagine, so we could just switch these calls over to snprintf. I'm\n> > kind of surprised that we haven't already been getting the likes of\n> > this warning from, eg, OpenBSD.\n\nIs there a platform still supported in older branches that we need to worry\nabout?\n\n\n> The attached seems enough to silence it for me.\n>\n> Should we back-patch this?\n\nProbably, but not sure either. We could just let it stew in HEAD for a while.\n\n\n> I suppose, but how far? It seems to fall under the rules we established for\n> back-patching into out-of-support branches, ie it silences compiler warnings\n> but shouldn't change any behavior. 
But it feels like a bigger change than\n> most of the other things we've done that with.\n\nI wonder if we ought to add -Wno-deprecated to out-of-support branches to deal\nwith this kind of thing...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 15 Oct 2022 16:50:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-15 18:47:16 -0400, Tom Lane wrote:\n>>> Originally we used the platform's sprintf there because we couldn't\n>>> rely on platforms having functional snprintf. That's no longer the case,\n>>> I imagine, so we could just switch these calls over to snprintf.\n\n> Is there a platform still supported in older branches that we need to worry\n> about?\n\nsnprintf is required by POSIX going back to SUSv2, so it's pretty darn\nhard to imagine any currently-used platform that hasn't got it. Even\nmy now-extinct dinosaur gaur had it (per digging in backup files).\nI think we could certainly assume its presence in the branches that\nrequire C99. Even before that, is anybody really still building on\nnineties-vintage platforms?\n\n> I wonder if we ought to add -Wno-deprecated to out-of-support branches to deal\n> with this kind of thing...\n\nYeah, that might be a better answer than playing whack-a-mole with\nthese sorts of warnings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Oct 2022 20:08:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "On 2022-10-15 14:19:55 -0700, Andres Freund wrote:\n> The meson specific warning is\n> [972/1027] Linking target src/backend/replication/libpqwalreceiver/libpqwalreceiver.dylib\n> ld: warning: -undefined dynamic_lookup may not work with chained fixups\n> \n> Which is caused by meson defaulting to -Wl,-undefined,dynamic_lookup for\n> modules. 
But we don't need that because we use -bundle_loader. Adding\n> -Wl,-undefined,error as in the attached fixes it.\n\nPushed the patch for that.\n", "msg_date": "Sat, 15 Oct 2022 17:14:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "I wrote:\n> snprintf is required by POSIX going back to SUSv2, so it's pretty darn\n> hard to imagine any currently-used platform that hasn't got it. 
Even\n> > my now-extinct dinosaur gaur had it (per digging in backup files).\n> > I think we could certainly assume its presence in the branches that\n> > require C99.\n> \n> After further thought, I think the best compromise is just that:\n> \n> (1) apply s/sprintf/snprintf/ patch in branches back to v12, where\n> we began to require C99.\n> \n> (2) in v11 and back to 9.2, enable -Wno-deprecated if available.\n> \n> One thing motivating this choice is that we're just a couple\n> weeks away from the final release of v10. So I'm hesitant to do\n> anything that might turn out to be moving the portability goalposts\n> in v10. But we're already assuming we can detect -Wno-foo options\n> correctly in v10 and older (e.g. 4c5a29c0e), so point (2) seems\n> pretty low-risk.\n\nMakes sense to me.\n\n- Andres\n\n\n", "msg_date": "Sat, 15 Oct 2022 22:34:43 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-15 21:00:00 -0400, Tom Lane wrote:\n>> After further thought, I think the best compromise is just that:\n>> \n>> (1) apply s/sprintf/snprintf/ patch in branches back to v12, where\n>> we began to require C99.\n>> \n>> (2) in v11 and back to 9.2, enable -Wno-deprecated if available.\n\n> Makes sense to me.\n\nI remembered another reason why v12 should be a cutoff: it's where\nwe started to use snprintf.c everywhere. 
In prior branches, there'd\nbe a lot of complaints about sprintf elsewhere in the tree.\n\nSo I pushed (1), but on the way to testing (2), I discovered a totally\nindependent problem with the 13.0 SDK in older branches:\n\nIn file included from ../../../src/include/postgres.h:46:\nIn file included from ../../../src/include/c.h:1387:\nIn file included from ../../../src/include/port.h:17:\nIn file included from /Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/netdb.h:91:\nIn file included from /Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/netinet/in.h:81:\n/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/sys/socket.h:471:1: error: expected ';' after top level declarator\n__CCT_DECLARE_CONSTRAINED_PTR_TYPES(struct sockaddr_storage, sockaddr_storage);\n^\n\nThis is apparently some sort of inclusion-order problem, which is probably\na bug in the SDK --- netdb.h and netinet/in.h are the same as they were in\nSDK 12.3, but sys/socket.h has a few additions including this\n__CCT_DECLARE_CONSTRAINED_PTR_TYPES macro, and evidently that's missing\nsomething it needs. I haven't traced down the cause of the problem yet.\nIt fails to manifest in v15 and HEAD, which I bisected to\n\n98e93a1fc93e9b54eb477d870ec744e9e1669f34 is the first new commit\ncommit 98e93a1fc93e9b54eb477d870ec744e9e1669f34\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Jan 11 13:46:12 2022 -0500\n\n Clean up messy API for src/port/thread.c.\n \n The point of this patch is to reduce inclusion spam by not needing\n to #include <netdb.h> or <pwd.h> in port.h (which is read by every\n compile in our tree). To do that, we must remove port.h's\n declarations of pqGetpwuid and pqGethostbyname.\n\nI doubt we want to back-patch that, so what we'll probably end up with\nis adding some #includes to port.h in the back branches. 
Bleah.\nOr we could file a bug with Apple and hope they fix it quickly.\n(They might, actually, because this SDK is supposedly beta.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Oct 2022 12:40:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "I wrote:\n> So I pushed (1), but on the way to testing (2), I discovered a totally\n> independent problem with the 13.0 SDK in older branches:\n\n> In file included from ../../../src/include/postgres.h:46:\n> In file included from ../../../src/include/c.h:1387:\n> In file included from ../../../src/include/port.h:17:\n> In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/netdb.h:91:\n> In file included from /Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/netinet/in.h:81:\n> /Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/sys/socket.h:471:1: error: expected ';' after top level declarator\n> __CCT_DECLARE_CONSTRAINED_PTR_TYPES(struct sockaddr_storage, sockaddr_storage);\n> ^\n\nAh, I see it. This is not failing everywhere, only in gram.y and\nassociated files, and it happens because those have a #define for REF,\nwhich is breaking this constrained_ctypes stuff:\n\n/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/sys/socket.h:471:1: error: expected ';' after top level declarator\n__CCT_DECLARE_CONSTRAINED_PTR_TYPES(struct sockaddr_storage, sockaddr_storage);\n^\n/Library/Developer/CommandLineTools/SDKs/MacOSX13.0.sdk/usr/include/sys/constrained_ctypes.h:588:101: note: expanded from macro '__CCT_DECLARE_CONSTRAINED_PTR_TYPES'\n__CCT_DECLARE_CONSTRAINED_PTR_TYPE(basetype, basetag, REF); \\\n ^\n\nNow on the one hand Apple is pretty clearly violating user namespace\nby using a name like \"REF\", and I'll go file a bug about that.\nOn the other hand, #defining something like \"REF\" isn't very bright\non our part either. 
We usually write something like REF_P when\nthere is a danger of parser tokens colliding with other names.\n\nI think the correct, future-proof fix is to s/REF/REF_P/ in the\ngrammar. We'll have to back-patch that, too, unless we want to\nchange what port.h includes. I found that an alternative possible\nband-aid is to do this in port.h:\n\ndiff --git a/src/include/port.h b/src/include/port.h\nindex b405d0e740..416428a0d2 100644\n--- a/src/include/port.h\n+++ b/src/include/port.h\n@@ -14,7 +14,6 @@\n #define PG_PORT_H\n \n #include <ctype.h>\n-#include <netdb.h>\n #include <pwd.h>\n \n /*\n@@ -491,6 +490,8 @@ extern int pqGetpwuid(uid_t uid, struct passwd *resultbuf, char *buffer,\n size_t buflen, struct passwd **result);\n #endif\n \n+struct hostent; /* avoid including <netdb.h> here */\n+\n extern int pqGethostbyname(const char *name,\n struct hostent *resultbuf,\n char *buffer, size_t buflen,\n\nbut it seems like there's a nonzero risk that some third-party\ncode somewhere is depending on our having included <netdb.h> here.\nSo ceasing to do that in the back branches doesn't seem great.\n\nChanging a parser token name in the back branches isn't ideal\neither, but it seems less risky to me than removing a globally\nvisible #include.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Oct 2022 13:35:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "I wrote:\n> I think the correct, future-proof fix is to s/REF/REF_P/ in the\n> grammar.\n\nDone like that, after which I found that the pre-v12 branches are\ncompiling perfectly warning-free with the 13.0 SDK, despite nothing\nhaving been done about sprintf. This confused me mightily, but\nafter digging in Apple's headers I understand it. 
What actually\ngets provided by <stdio.h> is a declaration of sprintf(), now\nwith deprecation attribute attached, followed awhile later by\n\n#if __has_builtin(__builtin___sprintf_chk) || defined(__GNUC__)\nextern int __sprintf_chk (char * __restrict, int, size_t,\n\t\t\t const char * __restrict, ...);\n\n#undef sprintf\n#define sprintf(str, ...) \\\n __builtin___sprintf_chk (str, 0, __darwin_obsz(str), __VA_ARGS__)\n#endif\n\nSo in the ordinary course of events, calling sprintf() results in\ncalling this non-deprecated builtin. Only if you \"#undef sprintf\"\nwill you see the deprecation message. snprintf.c does that, so\nwe see the message when that's built. But if we don't use snprintf.c,\nas the older branches do not on macOS, we don't ever #undef sprintf.\n\nSo for now, there seems no need for -Wno-deprecated, and I'm not\ngoing to install it.\n\nWhat I *am* seeing, in the 9.5 and 9.6 branches, is a ton of\n\nld: warning: -undefined dynamic_lookup may not work with chained fixups\n\napparently because we are specifying -Wl,-undefined,dynamic_lookup\nwhich the other branches don't do. That's kind of annoying,\nbut it looks like preventing that would be way too invasive :-(.\nWe'd added it to un-break some cases in the contrib transform\nmodules, and we didn't have a better solution until v10 [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2652.1475512158%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 16 Oct 2022 16:45:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "Hi,\n\nOn 2022-10-16 16:45:24 -0400, Tom Lane wrote:\n> I wrote:\n> > I think the correct, future-proof fix is to s/REF/REF_P/ in the\n> > grammar.\n>\n> Done like that, after which I found that the pre-v12 branches are\n> compiling perfectly warning-free with the 13.0 SDK, despite nothing\n> having been done about sprintf. 
This confused me mightily, but\n> after digging in Apple's headers I understand it. What actually\n> gets provided by <stdio.h> is a declaration of sprintf(), now\n> with deprecation attribute attached, followed awhile later by\n>\n> #if __has_builtin(__builtin___sprintf_chk) || defined(__GNUC__)\n> extern int __sprintf_chk (char * __restrict, int, size_t,\n> \t\t\t const char * __restrict, ...);\n>\n> #undef sprintf\n> #define sprintf(str, ...) \\\n> __builtin___sprintf_chk (str, 0, __darwin_obsz(str), __VA_ARGS__)\n> #endif\n>\n> So in the ordinary course of events, calling sprintf() results in\n> calling this non-deprecated builtin. Only if you \"#undef sprintf\"\n> will you see the deprecation message.\n\nOh, huh. That's an odd setup... It's not like the object size stuff\nprovides reliable protection.\n\n\n> What I *am* seeing, in the 9.5 and 9.6 branches, is a ton of\n>\n> ld: warning: -undefined dynamic_lookup may not work with chained fixups\n>\n> apparently because we are specifying -Wl,-undefined,dynamic_lookup\n> which the other branches don't do. That's kind of annoying,\n> but it looks like preventing that would be way too invasive :-(.\n> We'd added it to un-break some cases in the contrib transform\n> modules, and we didn't have a better solution until v10 [1].\n\nHm - I think it might actually mean that transforms won't work with the new\nmacos relocation format, which is what I understand \"chained fixups\" to be.\n\nUnfortunately it looks like the chained fixup stuff is enabled even when\ntargeting Monterey, even though it was only introduced with the 13 sdk (I\nthink). But it does look like using the macos 11 SDK sysroot does force the\nuse of the older relocation format. 
So we have at least some way out :/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 16 Oct 2022 14:59:48 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: macos ventura SDK spews warnings" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-10-16 16:45:24 -0400, Tom Lane wrote:\n>> What I *am* seeing, in the 9.5 and 9.6 branches, is a ton of\n>> \n>> ld: warning: -undefined dynamic_lookup may not work with chained fixups\n>> \n>> apparently because we are specifying -Wl,-undefined,dynamic_lookup\n>> which the other branches don't do. That's kind of annoying,\n>> but it looks like preventing that would be way too invasive :-(.\n>> We'd added it to un-break some cases in the contrib transform\n>> modules, and we didn't have a better solution until v10 [1].\n\n> Hm - I think it might actually mean that transforms won't work with the new\n> macos relocation format, which is what I understand \"chained fixups\" to be.\n\nHm ... hstore_plpython and ltree_plpython still pass regression check in\n9.6, so it works for at least moderate-size values of \"work\", at least on\nMonterey. But in any case, if there's a problem there I can't see us\ndoing anything about that in dead branches. The \"keep it building\" rule\ndoesn't extend to perl or python dependencies IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Oct 2022 18:29:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: macos ventura SDK spews warnings" } ]
[ { "msg_contents": "Hi hackers,\n\nPresently, when an archive module sets up a shutdown callback, it will be\ncalled upon ERROR/FATAL (via PG_ENSURE_ERROR_CLEANUP), when the archive\nlibrary changes (via HandlePgArchInterrupts()), and upon normal shutdown.\nThere are a couple of problems with this:\n\n* HandlePgArchInterrupts() calls the shutdown callback directly before\nproc_exit(). However, the PG_ENSURE_ERROR_CLEANUP surrounding the call to\npgarch_MainLoop() sets up a before_shmem_exit callback that also calls the\nshutdown callback. This means that the shutdown callback will be called\ntwice whenever archive_library is changed via SIGHUP.\n\n* PG_ENSURE_ERROR_CLEANUP is intended for both ERROR and FATAL. However,\nthe archiver operates at the bottom of the exception stack, so ERRORs are\ntreated as FATALs. This means that PG_ENSURE_ERROR_CLEANUP is excessive.\nWe only need to set up the before_shmem_exit callback.\n\nTo fix, the attached patch removes the use of PG_ENSURE_ERROR_CLEANUP and\nthe call to the shutdown callback in HandlePgArchInterrupts() in favor of\njust setting up a before_shmem_exit callback in LoadArchiveLibrary().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 15 Oct 2022 15:13:28 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "fix archive module shutdown callback" }, { "msg_contents": "On Sun, Oct 16, 2022 at 3:43 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> Presently, when an archive module sets up a shutdown callback, it will be\n> called upon ERROR/FATAL (via PG_ENSURE_ERROR_CLEANUP), when the archive\n> library changes (via HandlePgArchInterrupts()), and upon normal shutdown.\n> There are a couple of problems with this:\n\nYeah.\n\n> * HandlePgArchInterrupts() calls the shutdown callback directly before\n> proc_exit(). 
However, the PG_ENSURE_ERROR_CLEANUP surrounding the call to\n> pgarch_MainLoop() sets up a before_shmem_exit callback that also calls the\n> shutdown callback. This means that the shutdown callback will be called\n> twice whenever archive_library is changed via SIGHUP.\n>\n> * PG_ENSURE_ERROR_CLEANUP is intended for both ERROR and FATAL. However,\n> the archiver operates at the bottom of the exception stack, so ERRORs are\n> treated as FATALs. This means that PG_ENSURE_ERROR_CLEANUP is excessive.\n> We only need to set up the before_shmem_exit callback.\n>\n> To fix, the attached patch removes the use of PG_ENSURE_ERROR_CLEANUP and\n> the call to the shutdown callback in HandlePgArchInterrupts() in favor of\n> just setting up a before_shmem_exit callback in LoadArchiveLibrary().\n\nWe could have used a flag in call_archive_module_shutdown_callback()\nto avoid it being called multiple times, but having it as\nbefore_shmem_exit () callback without direct calls to it is the right\napproach IMO.\n\n+1 to remove PG_ENSURE_ERROR_CLEANUP and PG_END_ENSURE_ERROR_CLEANUP.\n\nIs the shutdown callback meant to be called only after the archive\nlibrary is loaded? The documentation [1] says that it just gets called\nbefore the archiver process exits. If this is true, can we just place\nbefore_shmem_exit(call_archive_module_shutdown_callback, 0); in\nPgArchiverMain() after on_shmem_exit(pgarch_die, 0);?\n\nAlso, I've noticed other 3 threads and CF entries all related to\n'archive modules' feature. 
IMO, it could be better to have all of them\nunder a single thread and single CF entry to reduce\nreviewers/committers' efforts and seek more thoughts about all of the\nfixes.\n\nhttps://commitfest.postgresql.org/40/3933/\nhttps://commitfest.postgresql.org/40/3950/\nhttps://commitfest.postgresql.org/40/3948/\n\n[1]\n <sect2 id=\"archive-module-shutdown\">\n <title>Shutdown Callback</title>\n <para>\n The <function>shutdown_cb</function> callback is called when the archiver\n process exits (e.g., after an error) or the value of\n <xref linkend=\"guc-archive-library\"/> changes. If no\n <function>shutdown_cb</function> is defined, no special action is taken in\n these situations.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 16 Oct 2022 13:39:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "On Sun, Oct 16, 2022 at 01:39:14PM +0530, Bharath Rupireddy wrote:\n> Is the shutdown callback meant to be called only after the archive\n> library is loaded? The documentation [1] says that it just gets called\n> before the archiver process exits. If this is true, can we just place\n> before_shmem_exit(call_archive_module_shutdown_callback, 0); in\n> PgArchiverMain() after on_shmem_exit(pgarch_die, 0);?\n\nI am not sure to understand what you mean here. The shutdown callback\nis available once the archiver process has loaded the library defined\nin archive_library (assuming it is itself in shared_preload_libraries)\nand you cannot call something that does not exist yet. 
So, yes, you\ncould define the call to before_shmem_exit() a bit earlier because\nthat would be a no-op until the library is loaded, but at the end that\nwould be just registering a callback that would do nothing useful in a\nlarger window, aka until the library is loaded.\n\nFWIW, I think that the documentation should clarify that the shutdown\ncallback is called before shmem exit. That's important.\n\nAnother thing is that the shutdown callback is used by neither\nshell_archive.c nor basic_archive. We could do something about that,\nactually, say by plugging an elog(DEBUG1) in shutdown_cb for\nshell_archive.c to inform that the archiver is going down?\nbasic_archive could do that, but we already use shell_archive.c in a \nbunch of tests, and this would need just log_min_messages=debug1 or\nlower, so..\n\n> Also, I've noticed other 3 threads and CF entries all related to\n> 'archive modules' feature. IMO, it could be better to have all of them\n> under a single thread and single CF entry to reduce\n> reviewers/committers' efforts and seek more thoughts about all of the\n> fixes.\n\nI don't mind discussing each point separately as the first thread\ndealing with archive modules is already very long, so the current way\nof doing things makes sure to attract the correct type of attention\nfor each problem, IMO.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 13:51:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "At Mon, 17 Oct 2022 13:51:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sun, Oct 16, 2022 at 01:39:14PM +0530, Bharath Rupireddy wrote:\n> > Is the shutdown callback meant to be called only after the archive\n> > library is loaded? The documentation [1] says that it just gets called\n> > before the archiver process exits. 
If this is true, can we just place\n> > before_shmem_exit(call_archive_module_shutdown_callback, 0); in\n> > PgArchiverMain() after on_shmem_exit(pgarch_die, 0);?\n> \n> I am not sure to understand what you mean here. The shutdown callback\n> is available once the archiver process has loaded the library defined\n> in archive_library (assuming it is itself in shared_preload_libraries)\n> and you cannot call something that does not exist yet. So, yes, you\n\nI guess that the \"callback\" there means the callback-caller function\n(call_archive_module_shutdown_callback), which in turn is set as a\ncallback...\n\n> could define the call to before_shmem_exit() a bit earlier because\n> that would be a no-op until the library is loaded, but at the end that\n> would be just registering a callback that would do nothing useful in a\n> larger window, aka until the library is loaded.\n\nI thought that Bharath's point is to use before_shmem_exit() instead\nof PG_ENSURE_ERROR_CLEANUP(). The place doesn't seem significant but\nif we use before_shmem_exit(), it would be cleaner to place it\nadjacent to the on_shmem_exit() call.\n\n> FWIW, I think that the documentation should clarify that the shutdown\n> callback is called before shmem exit. That's important.\n\nSure. What the shutdown callback can do differs by shared memory\naccess.\n\n> Another thing is that the shutdown callback is used by neither\n> shell_archive.c nor basic_archive. We could do something about that,\n> actually, say by plugging an elog(DEBUG1) in shutdown_cb for\n> shell_archive.c to inform that the archiver is going down?\n> basic_archive could do that, but we already use shell_archive.c in a \n> bunch of tests, and this would need just log_min_messages=debug1 or\n> lower, so..\n\n+1 for inserting DEBUG1.\n\n> > Also, I've noticed other 3 threads and CF entries all related to\n> > 'archive modules' feature. 
IMO, it could be better to have all of them\n> > under a single thread and single CF entry to reduce\n> > reviewers/committers' efforts and seek more thoughts about all of the\n> > fixes.\n> \n> I don't mind discussing each point separately as the first thread\n> dealing with archive modules is already very long, so the current way\n> of doing things makes sure to attract the correct type of attention\n> for each problem, IMO.\n\nI tend to agree to this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 17 Oct 2022 14:30:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "At Sun, 16 Oct 2022 13:39:14 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Sun, Oct 16, 2022 at 3:43 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > Presently, when an archive module sets up a shutdown callback, it will be\n> > called upon ERROR/FATAL (via PG_ENSURE_ERROR_CLEANUP), when the archive\n> > library changes (via HandlePgArchInterrupts()), and upon normal shutdown.\n> > There are a couple of problems with this:\n> \n> Yeah.\n> \n> > * HandlePgArchInterrupts() calls the shutdown callback directly before\n> > proc_exit(). However, the PG_ENSURE_ERROR_CLEANUP surrounding the call to\n> > pgarch_MainLoop() sets up a before_shmem_exit callback that also calls the\n> > shutdown callback. This means that the shutdown callback will be called\n> > twice whenever archive_library is changed via SIGHUP.\n> >\n> > * PG_ENSURE_ERROR_CLEANUP is intended for both ERROR and FATAL. However,\n> > the archiver operates at the bottom of the exception stack, so ERRORs are\n> > treated as FATALs. 
This means that PG_ENSURE_ERROR_CLEANUP is excessive.\n> > We only need to set up the before_shmem_exit callback.\n> >\n> > To fix, the attached patch removes the use of PG_ENSURE_ERROR_CLEANUP and\n> > the call to the shutdown callback in HandlePgArchInterrupts() in favor of\n> > just setting up a before_shmem_exit callback in LoadArchiveLibrary().\n> \n> We could have used a flag in call_archive_module_shutdown_callback()\n> to avoid it being called multiple times, but having it as\n> before_shmem_exit () callback without direct calls to it is the right\n> approach IMO.\n>\n> +1 to remove PG_ENSURE_ERROR_CLEANUP and PG_END_ENSURE_ERROR_CLEANUP.\n\nThat prevents the archiver process from cleanly shutting down when something\nwrong happens outside the interrupt handler. In the first place, why\ndo we need to call the cleanup callback directly in the handler? We\ncan let the handler return something instead to tell the\npgarch_MainLoop to exit immediately on the normal path.\n\n> Is the shutdown callback meant to be called only after the archive\n> library is loaded? The documentation [1] says that it just gets called\n> before the archiver process exits. If this is true, can we just place\n> before_shmem_exit(call_archive_module_shutdown_callback, 0); in\n> PgArchiverMain() after on_shmem_exit(pgarch_die, 0);?\n\n+1 for using before_shmem_exit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 17 Oct 2022 14:30:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "On Mon, Oct 17, 2022 at 02:30:52PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 17 Oct 2022 13:51:52 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> I am not sure to understand what you mean here. 
The shutdown callback\n>> is available once the archiver process has loaded the library defined\n>> in archive_library (assuming it is itself in shared_preload_libraries)\n>> and you cannot call something that does not exist yet. So, yes, you\n> \n> I guess that the \"callback\" there means the callback-caller function\n> (call_archive_module_shutdown_callback), which in turn is set as a\n> callback...\n\nA callback in a callback in a callback.\n\n>> could define the call to before_shmem_exit() a bit earlier because\n>> that would be a no-op until the library is loaded, but at the end that\n>> would be just registering a callback that would do nothing useful in a\n>> larger window, aka until the library is loaded.\n> \n> I thought that Bharath's point is to use before_shmem_exit() instead\n> of PG_ENSURE_ERROR_CLEANUP(). The place doesn't seem significant but\n> if we use before_shmem_exit(), it would be cleaner to place it\n> adjecent to on_sheme_exit() call.\n\nRemoving PG_ENSURE_ERROR_CLEANUP() and relying on before_shmem_exit()\nis fine by me, that's what I imply upthread.\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 14:47:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "On Mon, Oct 17, 2022 at 11:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 17, 2022 at 02:30:52PM +0900, Kyotaro Horiguchi wrote:\n>\n> Removing PG_ENSURE_ERROR_CLEANUP() and relying on before_shmem_exit()\n> is fine by me, that's what I imply upthread.\n\nHm. Here's a v2 patch that addresses review comments. 
In addition to\nmaking it a before_shmem_exit() callback, this patch also does the\nfollowing things:\n1) Renames call_archive_module_shutdown_callback() to be more\nmeaningful and generic as before_shmem_exit() callback.\n2) Clarifies when the archive module shutdown callback gets called in\ndocumentation.\n3) Defines a shutdown callback that just emits a log message in\nshell_archive.c and tests it.\n\nPlease review it further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 18 Oct 2022 14:38:07 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" }, { "msg_contents": "On Tue, Oct 18, 2022 at 02:38:07PM +0530, Bharath Rupireddy wrote:\n> 2) Clarifies when the archive module shutdown callback gets called in\n> documentation.\n\nI have looked at that, and it was actually confusing as the callback\nwould also be called on reload if archive_library changes, but the\nupdate somewhat outlines that this would happen only on postmaster\nshutdown.\n\n> 3) Defines a shutdown callback that just emits a log message in\n> shell_archive.c and tests it.\n\nThe test had a few issues:\n- No need to wait for postmaster.pid in the test, as pg_ctl does this\njob.\n- The reload can be time-sensitive on slow machines, so I have added a\nquery run to make sure that the reload happens before stopping the\nserver.\n- slurp_file() was feeding on the full log file of standby2, but we\nshould load it from the log location before stopping the server, even\nif log_min_messages was updated only at the end of the test.\n\nAnd done, after tweaking a few more things.\n--\nMichael", "msg_date": "Wed, 19 Oct 2022 14:22:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: fix archive module shutdown callback" } ]
[ { "msg_contents": "Hi,\n\nWhile working on some logical replication related features.\nI found the HINT message could be improved when I tried to add a publication to\na subscription which was disabled.\n\nalter subscription sub add publication pub2;\n--\nERROR: ALTER SUBSCRIPTION with refresh is not allowed for disabled subscriptions\nHINT: Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).\n--\n\nBecause I was executing the ADD PUBLICATION command, I feel the hint should\nalso mention it instead of SET PUBLICATION.\n\nBest regards,\nHou zj", "msg_date": "Mon, 17 Oct 2022 03:09:29 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On Mon, Oct 17, 2022 at 8:39 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> While working on some logical replication related features.\n> I found the HINT message could be improved when I tried to add a publication to\n> a subscription which was disabled.\n>\n> alter subscription sub add publication pub2;\n> --\n> ERROR: ALTER SUBSCRIPTION with refresh is not allowed for disabled subscriptions\n> HINT: Use ALTER SUBSCRIPTION ... SET PUBLICATION ... WITH (refresh = false).\n> --\n>\n> Because I was executing the ADD PUBLICATION command, I feel the hint should\n> also mention it instead of SET PUBLICATION.\n>\n\n+1. 
I haven't tested it yet but the changes look sane.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 Oct 2022 09:43:14 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "Hello\n\nOn 2022-Oct-17, houzj.fnst@fujitsu.com wrote:\n\n> alter subscription sub add publication pub2;\n\n> Because I was executing the ADD PUBLICATION command, I feel the hint should\n> also mention it instead of SET PUBLICATION.\n\nHmm, ok. But:\n\n\n> @@ -1236,8 +1237,9 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,\n> \t\t\t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> \t\t\t\t\t\t\t\t errmsg(\"ALTER SUBSCRIPTION with refresh and copy_data is not allowed when two_phase is enabled\"),\n> -\t\t\t\t\t\t\t\t errhint(\"Use ALTER SUBSCRIPTION ... SET PUBLICATION with refresh = false, or with copy_data = false\"\n> -\t\t\t\t\t\t\t\t\t\t \", or use DROP/CREATE SUBSCRIPTION.\")));\n> +\t\t\t\t\t\t\t\t errhint(\"Use ALTER SUBSCRIPTION ... %s PUBLICATION with refresh = false, or with copy_data = false\"\n> +\t\t\t\t\t\t\t\t\t\t \", or use DROP/CREATE SUBSCRIPTION.\",\n> +\t\t\t\t\t\t\t\t\t\t isadd ? \"ADD\" : \"DROP\")));\n\nThis looks confusing for translators. I propose to move the whole\ncommand out of the message, not just one piece of it:\n\n+ /*- translator: %s is an ALTER DDL command */\n+ errhint(\"Use %s with refresh = false, or with copy_data = false, or use DROP/CREATE SUBSCRIPTION.\",\n isadd ? \"ALTER SUBSCRIPTION ... ADD PUBLICATION\" : ALTER SUBSCRIPTION ... 
DROP PUBLICATION\")\n\nI'm not sure that ERRCODE_SYNTAX_ERROR is the right thing here; sounds\nlike ERRCODE_FEATURE_NOT_SUPPORTED might be more appropriate.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You don't solve a bad join with SELECT DISTINCT\" #CupsOfFail\nhttps://twitter.com/connor_mc_d/status/1431240081726115845\n\n\n", "msg_date": "Mon, 17 Oct 2022 09:43:42 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On Mon, Oct 17, 2022 at 6:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello\n>\n> On 2022-Oct-17, houzj.fnst@fujitsu.com wrote:\n>\n> > alter subscription sub add publication pub2;\n>\n> > Because I was executing the ADD PUBLICATION command, I feel the hint should\n> > also mention it instead of SET PUBLICATION.\n>\n> Hmm, ok. But:\n>\n>\n> > @@ -1236,8 +1237,9 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,\n> > ereport(ERROR,\n> > (errcode(ERRCODE_SYNTAX_ERROR),\n> > errmsg(\"ALTER SUBSCRIPTION with refresh and copy_data is not allowed when two_phase is enabled\"),\n> > - errhint(\"Use ALTER SUBSCRIPTION ... SET PUBLICATION with refresh = false, or with copy_data = false\"\n> > - \", or use DROP/CREATE SUBSCRIPTION.\")));\n> > + errhint(\"Use ALTER SUBSCRIPTION ... %s PUBLICATION with refresh = false, or with copy_data = false\"\n> > + \", or use DROP/CREATE SUBSCRIPTION.\",\n> > + isadd ? \"ADD\" : \"DROP\")));\n>\n> This looks confusing for translators. I propose to move the whole\n> command out of the message, not just one piece of it:\n>\n> + /*- translator: %s is an ALTER DDL command */\n> + errhint(\"Use %s with refresh = false, or with copy_data = false, or use DROP/CREATE SUBSCRIPTION.\",\n> isadd ? \"ALTER SUBSCRIPTION ... ADD PUBLICATION\" : ALTER SUBSCRIPTION ... 
DROP PUBLICATION\")\n>\n> I'm not sure that ERRCODE_SYNTAX_ERROR is the right thing here; sounds\n> like ERRCODE_FEATURE_NOT_SUPPORTED might be more appropriate.\n>\n\nI thought maybe ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE, which would\nmake it the same as similar messages in the same function when\nincompatible parameters are specified.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 17 Oct 2022 19:45:07 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On 2022-Oct-17, Peter Smith wrote:\n\n> On Mon, Oct 17, 2022 at 6:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I'm not sure that ERRCODE_SYNTAX_ERROR is the right thing here; sounds\n> > like ERRCODE_FEATURE_NOT_SUPPORTED might be more appropriate.\n> \n> I thought maybe ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE, which would\n> make it the same as similar messages in the same function when\n> incompatible parameters are specified.\n\nHmm, yeah, I guess that's also a possibility.\n\nMaybe we need a specific errcode, \"incompatible logical replication\nconfiguration\", within that class (\"object not in prerequisite state\" is\na generic SQLSTATE class 55), given that the class itself is a mishmash\nof completely unrelated things. I think I already mentioned this in\nsome other thread ... 
ah yes:\n\nhttps://postgr.es/m/20220928084641.xecjrgym476fihtn@alvherre.pgsql\n\"incompatible publication definition\" 55PR1 is what I suggested then.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n", "msg_date": "Mon, 17 Oct 2022 11:10:52 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On Mon, Oct 17, 2022 at 2:41 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-17, Peter Smith wrote:\n>\n> > On Mon, Oct 17, 2022 at 6:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > I'm not sure that ERRCODE_SYNTAX_ERROR is the right thing here; sounds\n> > > like ERRCODE_FEATURE_NOT_SUPPORTED might be more appropriate.\n> >\n> > I thought maybe ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE, which would\n> > make it the same as similar messages in the same function when\n> > incompatible parameters are specified.\n>\n> Hmm, yeah, I guess that's also a possibility.\n>\n\nRight, ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE seems to suit better here.\n\n> Maybe we need a specific errcode, \"incompatible logical replication\n> configuration\", within that class (\"object not in prerequisite state\" is\n> a generic SQLSTATE class 55), given that the class itself is a mishmash\n> of completely unrelated things. I think I already mentioned this in\n> some other thread ... ah yes:\n>\n> https://postgr.es/m/20220928084641.xecjrgym476fihtn@alvherre.pgsql\n> \"incompatible publication definition\" 55PR1 is what I suggested then.\n>\n\nYeah, this is another way to deal with it. But, won't it be better to\nsurvey all call sites of ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE and\nthen try to subdivide it instead of doing it for\nsubscription/publication cases? 
I know that is a much bigger ask and\nwe don't need to do it for this patch but that seems like a more\nfuture-proof way if we can build a consensus for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 17 Oct 2022 15:44:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On Monday, October 17, 2022 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Oct 17, 2022 at 2:41 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\r\n> wrote:\r\n> >\r\n> > On 2022-Oct-17, Peter Smith wrote:\r\n> >\r\n> > > On Mon, Oct 17, 2022 at 6:43 PM Alvaro Herrera\r\n> <alvherre@alvh.no-ip.org> wrote:\r\n> >\r\n> > > > I'm not sure that ERRCODE_SYNTAX_ERROR is the right thing here;\r\n> > > > sounds like ERRCODE_FEATURE_NOT_SUPPORTED might be more\r\n> appropriate.\r\n> > >\r\n> > > I thought maybe ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE, which\r\n> > > would make it the same as similar messages in the same function when\r\n> > > incompatible parameters are specified.\r\n> >\r\n> > Hmm, yeah, I guess that's also a possibility.\r\n> >\r\n> \r\n> Right, ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE seems to suite better\r\n> here.\r\n\r\nAgreed. Here is new version patch which changed the error code and\r\nmoved the whole command out of the message according to Álvaro's comment.\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Tue, 18 Oct 2022 04:00:20 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "\nOn Tue, 18 Oct 2022 at 12:00, houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n\n> Agreed. Here is new version patch which changed the error code and\n> moved the whole command out of the message according to Álvaro's comment.\n>\n\nMy bad! 
The patch looks good to me.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 18 Oct 2022 13:40:02 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On 2022-Oct-18, Japin Li wrote:\n\n> \n> On Tue, 18 Oct 2022 at 12:00, houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> \n> > Agreed. Here is new version patch which changed the error code and\n> > moved the whole command out of the message according to Álvaro's comment.\n> \n> My bad! The patch looks good to me.\n\nThank you, I pushed it to both branches, because I realized we were\nsaying \"SET PUBLICATION\" when we meant \"ADD/DROP\"; that hint could be\nquite damaging if anybody decides to actually follow it ISTM.\n\nI noted that no test needed to be changed because of this, which is\nperhaps somewhat concerning.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 18 Oct 2022 11:50:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" }, { "msg_contents": "On Tuesday, October 18, 2022 5:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n> \r\n> On 2022-Oct-18, Japin Li wrote:\r\n> \r\n> >\r\n> > On Tue, 18 Oct 2022 at 12:00, houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > > Agreed. Here is new version patch which changed the error code and\r\n> > > moved the whole command out of the message according to Álvaro's\r\n> comment.\r\n> >\r\n> > My bad! 
The patch looks good to me.\r\n> \r\n> Thank you, I pushed it to both branches, because I realized we were saying \"SET\r\n> PUBLICATION\" when we meant \"ADD/DROP\"; that hint could be quite\r\n> damaging if anybody decides to actually follow it ISTM.\r\n\r\nThanks for pushing!\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Wed, 19 Oct 2022 01:51:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Improve errhint for ALTER SUBSCRIPTION ADD/DROP PUBLICATION" } ]
[ { "msg_contents": "Hello\n\nWhile messing about with Cluster.pm I noticed that we don't need the\nhack to work around lack of parent.pm in very old Perl versions, because\nwe no longer support those versions (per commit 4c1532763a00). Trivial\npatch attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")", "msg_date": "Mon, 17 Oct 2022 10:16:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "remove no longer necessary Perl compatibility hack" }, { "msg_contents": "On Mon, Oct 17, 2022 at 4:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> While messing about with Cluster.pm I noticed that we don't need the\n> hack to work around lack of parent.pm in very old Perl versions, because\n> we no longer support those versions (per commit 4c1532763a00). Trivial\n> patch attached.\n\n\n+1. Since we've got rid of perl of versions before 5.10.1, I see no\nreason we do not do this.\n\nThanks\nRichard\n\n", "msg_date": "Mon, 17 Oct 2022 18:44:24 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: remove no longer necessary Perl compatibility hack" },
{ "msg_contents": "On 2022-Oct-17, Richard Guo wrote:\n\n> On Mon, Oct 17, 2022 at 4:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> \n> > While messing about with Cluster.pm I noticed that we don't need the\n> > hack to work around lack of parent.pm in very old Perl versions, because\n> > we no longer support those versions (per commit 4c1532763a00). Trivial\n> > patch attached.\n> \n> +1. Since we've got rid of perl of versions before 5.10.1, I see no\n> reason we do not do this.\n\nThanks for looking! Pushed now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Tue, 18 Oct 2022 11:53:17 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: remove no longer necessary Perl compatibility hack" } ]
[ { "msg_contents": "Hi hackers!\n\nReference TOAST mechanics assumes that a relation has a single TOAST\nrelation for all its\nTOASTable columns. While working on Pluggable TOAST [1] we've found that\na single TOAST\nrelation for a relation is a bit of a problem, because different Toasters\ncould have different\nTOAST table structure. Even for the regular TOAST mechanism it is a very strict\nlimitation that limits\nthe size of storable data for the table by the total size of TOASTed data, so even\nwithout Pluggable\nTOAST this will be a great feature.\n\nWe've dealt with this in a straightforward way - introduced the\nOid *rd_toasterids; /* OIDs of attribute toasters, if any */\nOid *rd_toastrelids; /* OIDs of toast relations corresponding to\ntoasters, if any */\nin the RelationData structure, but it seems to be not the best solution.\n\nAnother way is to store the Toaster OID and TOAST relation ID in attoptions,\nwhich is less invasive,\nand keep the existing reltoastrelid field for the default (reference) TOAST\nmechanism. But here we\nencounter a new problem - when we assign another Toaster for the column we\nhave to keep\nsomewhere all the old TOAST table IDs, which looks like a problem for the\nattoptions.\nOr maybe there is a better way to store TOAST relation IDs for a column?\n\nWe'd like to get some feedback on how to implement this part.\n\n[1] https://commitfest.postgresql.org/40/3490/\n\nThanks in advance!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\n", "msg_date": "Mon, 17 Oct 2022 11:37:03 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": true, "msg_subject": "RFC: multi TOAST-tables support" } ]
[ { "msg_contents": "Hi,\n\nFor a couple of patches that I am working on ([1], [2]), I have needed\nto put Bitmapsets into a List that is in turn part of a Plan tree or a\nNode tree that may be written (using outNode) and read (using\nnodeRead). Bitmapsets not being a Node themselves causes the\nwrite/read of such Plan/Node trees (containing Bitmapsets in a List)\nto not work properly.\n\nSo, I included a patch in both of those threads to add minimal support\nfor Bitmapsets to be added into a Plan/Node tree without facing the\naforementioned problem, though Peter E suggested [3] that it would be\na good idea to discuss it more generally in a separate thread, so this\nemail. Attached a patch to make the infrastructure changes necessary\nto allow adding Bitmapsets as Nodes, though I have not changed any of\nthe existing Bitmapset that are added either to Query or to\nPlannedStmt to use that infrastructure. That is, by setting their\nNodeTag and changing gen_node_support.pl to emit\nWRITE/READ_NODE_FIELD() instead of WRITE/READ_BITMAPSET_FIELD() for\nany Bitmapsets encountered in a Node tree. One thing I am not quite\nsure about is who would be setting the NodeTag, the existing routines\nin bitmapset.c, or if we should add wrappers that do.\n\nActually, Tom had posted about exactly the same thing last year [4],\nthough trying to make Bitmapset Nodes became unnecessary after he\nresolved the problem that required making Bitmapsets Nodes by other\nmeans -- by getting rid of the field that was a List of Bitmapset\naltogether. Maybe I should try to do the same in the case of both [1]\nand [2]? 
In fact, I have tried getting rid of the need for List of\nBitmapset for [1], and I like the alternative better in that case, but\nfor [2], it still seems that a List of Bitmapset may be better than\nList of some-new-Node-containing-the-Bitmapset.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/40/3478/\n[2] https://commitfest.postgresql.org/40/3224/\n[3] https://www.postgresql.org/message-id/94353655-c177-1f55-7afb-b2090de33341%40enterprisedb.com\n[4] https://www.postgresql.org/message-id/flat/2847014.1611971629%40sss.pgh.pa.us", "msg_date": "Mon, 17 Oct 2022 18:30:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Bitmapsets as Nodes" }, { "msg_contents": "On Mon, Oct 17, 2022 at 6:30 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> For a couple of patches that I am working on ([1], [2]), I have needed\n> to put Bitmapsets into a List that is in turn part of a Plan tree or a\n> Node tree that may be written (using outNode) and read (using\n> nodeRead). Bitmapsets not being a Node themselves causes the\n> write/read of such Plan/Node trees (containing Bitmapsets in a List)\n> to not work properly.\n>\n> So, I included a patch in both of those threads to add minimal support\n> for Bitmapsets to be added into a Plan/Node tree without facing the\n> aforementioned problem, though Peter E suggested [3] that it would be\n> a good idea to discuss it more generally in a separate thread, so this\n> email. Attached a patch to make the infrastructure changes necessary\n> to allow adding Bitmapsets as Nodes, though I have not changed any of\n> the existing Bitmapset that are added either to Query or to\n> PlannedStmt to use that infrastructure. That is, by setting their\n> NodeTag and changing gen_node_support.pl to emit\n> WRITE/READ_NODE_FIELD() instead of WRITE/READ_BITMAPSET_FIELD() for\n> any Bitmapsets encountered in a Node tree. 
One thing I am not quite\n> sure about is who would be setting the NodeTag, the existing routines\n> in bitmapset.c, or if we should add wrappers that do.\n>\n> Actually, Tom had posted about exactly the same thing last year [4],\n> though trying to make Bitmapset Nodes became unnecessary after he\n> resolved the problem that required making Bitmapsets Nodes by other\n> means -- by getting rid of the field that was a List of Bitmapset\n> altogether. Maybe I should try to do the same in the case of both [1]\n> and [2]? In fact, I have tried getting rid of the need for List of\n> Bitmapset for [1], and I like the alternative better in that case, but\n> for [2], it still seems that a List of Bitmapset may be better than\n> List of some-new-Node-containing-the-Bitmapset.\n\nFTR, this has been taken care of in 5e1f3b9ebf6e5.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 10:43:55 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bitmapsets as Nodes" } ]